Sunday, June 11, 2023

GeoPackage 1.4

We are working on a minor revision to the GeoPackage Encoding Standard. We propose three substantive changes to the standard.

  1. Making DATETIME more flexible. As the rules are currently written, a GeoPackage DATETIME must have the format YYYY-MM-DDTHH:MM:SS.SSSZ. In practice, this has proven to be unnecessarily strict. Going forward, we will accept anything that is compatible with ISO-8601 (or its non-ISO counterpart, RFC-3339) with a Zulu / UTC time zone.
  2. Relaxing Requirement 4. When GeoPackage 1.0 was published, Requirement 4 established a distinction between "GeoPackage" (a GeoPackage with no extensions) and "Extended GeoPackage" (a GeoPackage with at least one registered extension). While there was sound reasoning for this at the time, now that the standard has been out for 9 years, extensions have proven to be essential for non-trivial operations. Meanwhile, adjudication of this requirement created test skips in the Executable Test Suite. These skips were difficult to interpret. By relaxing this requirement, we remove the confusion.
  3. Changing some R-Tree spatial index triggers. We discovered that one of the triggers used to maintain the R-tree spatial index was incompatible with UPSERT statements. We corrected this problem by replacing ..._update1 with two new triggers, ..._update6 and ..._update7. By deprecating the old trigger, we make it easier for clients to determine whether the change has been applied. While we were at it, we made a similar change for the trigger that was patched as part of GeoPackage 1.2.1, deprecating ..._update3 in favor of ..._update5.
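To make item 1 concrete, here is a minimal sketch (in Python, not part of the standard) of the kind of relaxed validation a 1.4-era client might perform; the two accepted formats are illustrative, not an exhaustive list of what the revised standard will allow:

```python
from datetime import datetime, timezone

# Candidate formats: RFC-3339 date-times with a trailing "Z" (UTC),
# with or without fractional seconds. The exact set of accepted
# variants is up to the implementation; these two are illustrative.
_FORMATS = ("%Y-%m-%dT%H:%M:%SZ", "%Y-%m-%dT%H:%M:%S.%fZ")

def parse_gpkg_datetime(value: str) -> datetime:
    """Parse a GeoPackage DATETIME string into an aware UTC datetime."""
    for fmt in _FORMATS:
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"not a recognized UTC date-time: {value!r}")

# The strict pre-1.4 form still parses...
print(parse_gpkg_datetime("2023-06-11T12:00:00.000Z"))
# ...and so does the whole-second form that 1.4 would also accept.
print(parse_gpkg_datetime("2023-06-11T12:00:00Z"))
```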
We just initiated a public comment period for these changes. If all goes well, we plan to publish GeoPackage 1.4 as an adopted standard by the end of this calendar year.

Wednesday, June 12, 2019

Introducing Application Profiles

Bottom Line Up Front: we want GeoPackage to be interoperable, but we are not there yet. It is not easy to achieve interoperability in an ecosystem that is extensible by design. Organizations that want to use GeoPackages from disparate sources are not having great success. We have learned important lessons from the interoperability experiments that have been conducted:
  • GeoPackage has many degrees of freedom, including but not limited to extensions (e.g., SRSs, geometry types, tile matrix sets, tile formats, etc.)
  • conformance to the standard is no guarantee of interoperability
  • there is currently no clear way to determine whether a client will be able to fully use the information available in a particular GeoPackage
I propose to solve this problem by introducing application profiles to GeoPackage. An application profile would itemize all of the optional elements in use in the database. In terms of roadmap or capability evolution, I propose a set of three incremental capability levels.
  1. Verbal/written agreement
  2. Machine-readable manifests that declare what options are in use in the GeoPackage so that a GeoPackage Client can determine whether it can be fully used
  3. Allowing consumers to provide a "bill of materials" with a GeoPackage production request so that the ensuing GeoPackage only uses white-listed options
While Capability Level 1 would be better than nothing, I do want to propose a specific approach for the machine-readable document. Last week I introduced the concept of metadata profiles. Now I propose a new metadata profile (metadata scope: "manifest", reference scope: "geopackage") for a JSON document that captures this information. The working version of the JSON Schema for that document is on GitLab along with a sample manifest document.
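To make Capability Level 2 concrete, here is a hypothetical manifest sketched in Python. The field names (extensions, srs_ids, geometry_types, tile_formats) are my own illustration; the authoritative JSON Schema is the one on GitLab:

```python
import json

# A hypothetical manifest declaring the options a GeoPackage uses.
# Field names here are illustrative, not taken from the working schema.
manifest = {
    "extensions": ["gpkg_rtree_index", "gpkg_metadata"],
    "srs_ids": [4326],
    "geometry_types": ["POINT", "POLYGON"],
    "tile_formats": ["image/png"],
}

document = json.dumps(manifest, indent=2)
print(document)

# A client could compare the declared options against its own
# capabilities before deciding whether it can fully use the file.
client_supported = {"gpkg_rtree_index", "gpkg_metadata", "gpkg_schema"}
usable = set(manifest["extensions"]) <= client_supported
```

The point of the machine-readable form is exactly this last step: a client can decide up front whether it can fully use the GeoPackage, instead of discovering gaps at runtime.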

Tuesday, June 4, 2019

It's Time We Had A Little Talk...About Metadata

I know this is going to make people uncomfortable, but it is time to talk about this. People want to put metadata in GeoPackage. It is just a fact of life. We need to be ready and that is why we are having this discussion. 

While GeoPackage has had support for metadata since its inception, I acknowledge that there is something missing. There is currently no agreement on how metadata should be used in GeoPackage to serve any particular purpose. Someone opening a GeoPackage would have no way of recognizing that the file has any particular type of metadata in it, short of inspecting every single row in the gpkg_metadata table. No way.

I propose that we address this gap by introducing "metadata profiles" to GeoPackage. A metadata profile is an agreement on what a metadata document will look like and how it will be used in the GeoPackage. I propose to leverage the extension mechanism to express this information. This approach has two parts:

  1. Introduce a new extension that defines a new extension "scope" (i.e., the gpkg_extensions.scope column) of "metadata"
  2. Create an extension for each metadata profile:
    • using this new "metadata" extension scope
    • defining the metadata scope, standard/specification, and MIME type that uniquely identify it
    • defining the reference scope ("geopackage", "table", "row", or "row/col") that it will be used for 
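To show how the two parts fit together, here is a sketch using Python's sqlite3 with simplified stand-ins for the gpkg_extensions and gpkg_metadata tables (the real DDL in the standard has more columns and constraints, and the extension name and URI below are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Simplified stand-ins for the real GeoPackage tables.
con.execute("""CREATE TABLE gpkg_extensions (
    table_name TEXT, column_name TEXT, extension_name TEXT,
    definition TEXT, scope TEXT)""")
con.execute("""CREATE TABLE gpkg_metadata (
    id INTEGER PRIMARY KEY, md_scope TEXT,
    md_standard_uri TEXT, mime_type TEXT, metadata TEXT)""")

# Part 1: register the profile itself with the new "metadata" scope.
con.execute(
    "INSERT INTO gpkg_extensions VALUES (?, ?, ?, ?, ?)",
    ("gpkg_metadata", None, "sample_manifest_profile",
     "http://example.com/manifest-profile", "metadata"),
)
# Part 2: store a metadata document conforming to that profile.
con.execute(
    "INSERT INTO gpkg_metadata (md_scope, md_standard_uri, mime_type, metadata) "
    "VALUES (?, ?, ?, ?)",
    ("manifest", "http://example.com/manifest-profile",
     "application/json", '{"extensions": []}'),
)

# A client can now discover the profile from gpkg_extensions instead
# of inspecting every row of gpkg_metadata.
rows = con.execute(
    "SELECT md_scope FROM gpkg_metadata WHERE mime_type = 'application/json'"
).fetchall()
```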
As part of OGC's Testbed-15, I intend to explore this concept further, along with a number of candidate profiles that we are considering. I will cover these in subsequent blog posts.

Monday, May 6, 2019

Views and GeoPackage

From the beginning, the intent of GeoPackage was to allow views to be used (instead of tables) wherever appropriate. In general, "appropriate" means user-defined tables (features, tiles, or attributes). There is some confusion here because, for example, the sample feature table or view definition table (Table 7) and sample feature table definition SQL (Annex C.4) specifically call out primary keys, and the notion of a primary key for a view does not exist in SQLite.

In response, the GeoPackage SWG has approved some changes designed to clarify how views should be implemented. While implementers should use primary keys wherever possible, Requirements 150 (for features) and 151 (for attributes) specify that if the database element lacks a defined primary key (i.e., it is a view), then the first column shall be primary-key-like (i.e., be a unique integer). There is a similar note for tiles but no new requirement was needed there because the schema for tiles tables already requires the first column to be the ID.
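A quick sqlite3 sketch of what a "primary-key-like" first column looks like in practice: the view itself has no primary key (SQLite views never do), but its first column is a unique integer drawn from the base table's key, which is the shape Requirements 150/151 call for. The table and column names are my own illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE observations (
    id INTEGER PRIMARY KEY,      -- real primary key on the base table
    station TEXT, value REAL)""")
con.executemany("INSERT INTO observations VALUES (?, ?, ?)",
                [(1, "A", 1.5), (2, "B", 2.5)])

# The view exposes the base table's integer key as its first column,
# making that column "primary-key-like" even though no PRIMARY KEY
# constraint can exist on a view.
con.execute("""CREATE VIEW high_obs AS
    SELECT id, station, value FROM observations WHERE value > 2.0""")

rows = con.execute("SELECT * FROM high_obs").fetchall()
```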

Currently, these changes are available on the working version of the standard. The working version may get published as another GeoPackage revision (1.2.2 or 1.3.0) but there is currently no timetable for doing so. One thing that we would need to do is update the Executable Test Suite (ETS) to include tests for the new requirements. (These tests would only apply to GeoPackages with a version after 1.2.1.) As always, if you have a GeoPackage that fails the ETS and you believe the ETS is in error, please let us know.

P.S. Some of you in the community have asked about updatable views. It is theoretically possible to do this with a combination of tables and triggers but I don't recommend it. 
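For the curious, the table-plus-trigger approach mentioned above looks roughly like this in SQLite: an INSTEAD OF trigger intercepts writes on the view and applies them to the base table. This is a minimal sketch with hypothetical table names, and, again, I don't recommend it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE VIEW features_v AS SELECT id, name FROM features")

# SQLite views are read-only unless an INSTEAD OF trigger redirects
# the write to the underlying table. UPDATE and DELETE would each
# need their own trigger as well.
con.execute("""CREATE TRIGGER features_v_insert
    INSTEAD OF INSERT ON features_v
    BEGIN
        INSERT INTO features (id, name) VALUES (NEW.id, NEW.name);
    END""")

con.execute("INSERT INTO features_v (id, name) VALUES (1, 'pier')")
rows = con.execute("SELECT * FROM features").fetchall()
```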

Wednesday, February 20, 2019

The Vector Tiles Pilot

Last year, I posted regarding vector tiles. I am pleased to announce that I have much more to report on this front. For the last six months, OGC has been sponsoring the Vector Tiles Pilot (VTP). The purpose of the VTP was to investigate how vector tiles (or tiled feature data, if you will) in the Mapbox Vector Tiles (MVT) and GeoJSON formats can be supported through the OGC standards baseline, particularly Web Feature Service (WFS), Web Map Tile Service (WMTS), and GeoPackage (GPKG).

The initial effort concluded late last year and culminated in some videos and a set of Engineering Reports (ERs).
Following the completion of the VTP, the sponsors funded an extension to the pilot (the Vector Tiles Pilot Extension or VTPExt) to take a closer look at styling of the ensuing feature data using the Mapbox Styles and Styled Layer Descriptor (SLD) encodings. This effort concluded last week and the resulting videos have been posted to YouTube. The ER is under review now (OGC members can access the current draft) and should be published in the next month or two.

From the GeoPackage perspective, the Vector Tiles Pilot allowed us to try out a series of GeoPackage Extensions:
  • Tiled Feature Data: allowing tiled features to be stored using the tiles mechanism
  • Mapbox Vector Tiles: allowing tiles to contain MVT
  • GeoJSON Vector Tiles: allowing tiles to contain GeoJSON
  • Styles: allowing styles documents (e.g., Mapbox Styles, SLD, or others) to be stored in a way that is loosely coupled with feature data (tiled or otherwise)
  • OWS Context: allowing for OWS Context information to be stored (while we have an agreement in principle here, we did not have time to test it out)
  • Vector Tiles Attributes: allowing for attributes to be stored in attributes tables instead of in the vector tiles to support better querying (this topic is very complex and we did not actually reach a consensus here)
While these extensions were introduced in the ER linked above, they were further refined during the VTPExt. It is on my TODO list to add them to the GeoPackage Extensions Page. The next step is to present this information to the GeoPackage Standards Working Group (SWG) and to propose the extensions as candidate standards. I believe we have a clear use case and a sound technical approach. If the SWG agrees and there is a commitment to implementation, we may have an adopted standard later this year.

Tuesday, February 19, 2019

GeoPackage Executable Test Suite

I have an update to my previous post regarding conformance testing. OGC has recently approved and released the Executable Test Suite (ETS) for GeoPackage 1.2. This tool allows for the testing of individual GeoPackages to verify conformance and identify any non-compliant elements. Organizations can get OGC certified if they pass the test. The suite works on all GeoPackage versions from 1.0 to 1.2.1 - it will detect the GeoPackage version and alter the test requirements where appropriate. The test supports the following components:

  • Core
  • Features
  • Tiles
  • Attributes
  • Extension Mechanism
  • Non-Linear Geometry Types
  • RTree Spatial Indexes
  • Tiles Encoding WebP
  • Metadata
  • Schema
  • WKT for Coordinate Reference Systems
  • Tiled Gridded Coverage Data
There are two ways to run the ETS:
  • If you have a GeoPackage that you want to test, go to the OGC Validation Website. From there, you can sign in (creating an account first if needed) and create a new session. Then you can select GeoPackage 1.2 from the standard list, provide your GeoPackage (by URL or as a file upload), and run the test. The tool will provide a report indicating tests that passed, failed, or were skipped.
  • If you want to run the test suite locally or contribute to its development, see the GitHub Repository. There are instructions for compiling and running the tool locally.
I recommend the ETS for anyone who plans to use GeoPackage in an operational setting, particularly when interoperability with other GeoPackage implementations is desired. Produce a GeoPackage that is representative of your operational use, run it through the ETS, and verify that it is compliant. That's all there is to it!

Monday, January 28, 2019

Versioning of Extensions?

The GeoPackage Extension Mechanism gives stakeholders a way to add GeoPackage capabilities in a way that minimizes interoperability risks to systems that do not understand the extension. From my perspective, if different implementers are going to implement similar capabilities, then I want them to implement them in the same way. That is the whole point of interoperability. The theory is that a client that does not understand an extension will ignore the data. I believe that the extension mechanism is a GeoPackage strength.

The GeoPackage Standards Working Group has two questions for the community:
  1. Is the extension mechanism working the way we want it to? Are we getting new capabilities? Are we getting the interoperability we are looking for?
  2. What should we do if an OGC-adopted extension has to change? 
    • We are leery of adding a "version" column to gpkg_extensions because GeoPackage clients that only understand version 1.2 or prior wouldn't even know about it. It is borderline whether this is a breaking change.
    • An alternative that has some backing is to allow extensions to evolve as long as the changes are non-breaking but to force an extension to take a new name if the changes are breaking. 
We would like to get positive answers to these questions because there are a number of initiatives going on that have the potential to add a number of new adopted extensions. Do we need to pivot? I believe the answer is "no" but it is possible I am too close to the situation to make a fair assessment.