W3C

Use Cases and Lessons for the Data Cube Vocabulary

W3C Working Group Note 01 August 2013

This version:
http://www.w3.org/TR/2013/NOTE-vocab-data-cube-use-cases-20130801/
Latest published version:
http://www.w3.org/TR/vocab-data-cube-use-cases/
Previous version:
None
Editors:
Benedikt Kämpgen, FZI Karlsruhe
Richard Cyganiak, DERI, NUI Galway

Abstract

Many national, regional and local governments, as well as other organizations in- and outside of the public sector, collect numeric data and aggregate this data into statistics. There is a need to publish these statistics in a standardized, machine-readable way on the Web, so that they can be freely integrated and reused in consuming applications.

In this document, the W3C Government Linked Data Working Group presents use cases and lessons supporting a recommendation of the RDF Data Cube Vocabulary [QB-2013]. We describe case studies of existing deployments of an earlier version of the Data Cube Vocabulary [QB-2010] as well as other possible use cases that would benefit from using the vocabulary. In particular, we identify benefits and challenges in using a vocabulary for representing statistics. Also, we derive lessons that can be used for future work on the vocabulary as well as for useful tools complementing the vocabulary.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document was published by the Government Linked Data Working Group as a Working Group Note. If you wish to make comments regarding this document, please send them to [email protected] (subscribe, archives). All comments are welcome.

Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

1. Introduction

The aim of this document is to present concrete use cases and lessons for a vocabulary to publish statistics as Linked Data. An earlier version of the Data Cube Vocabulary [QB-2010] has existed for some time and has proven applicable in several deployments. The W3C Government Linked Data Working Group intends to transform the data cube vocabulary into a W3C Recommendation of the RDF Data Cube Vocabulary [QB-2013]. In this document, we describe use cases that would benefit from using the vocabulary. In particular, we identify possible benefits and challenges in using such a vocabulary for representing statistics. Also, we derive lessons that can motivate future work on the vocabulary as well as associated tools or services complementing the vocabulary.

The rest of this document is structured as follows. We will first give a short introduction to modeling statistics. Then, we will describe use cases that have been derived from existing deployments or from feedback to the earlier version of the Data Cube Vocabulary. In particular, we describe possible benefits and challenges of use cases. Afterwards, we will describe lessons derived from the use cases.

We use the term "Data Cube Vocabulary" throughout the document when referring to the vocabulary.

In the following, we describe the challenge of authoring an RDF vocabulary for publishing statistics as Linked Data. Describing statistics — collected and aggregated numeric data — is challenging for the following reasons:

The Statistical Data and Metadata eXchange [SDMX] — the ISO standard for exchanging and sharing statistical data and metadata among organizations — uses a "multidimensional model" to meet the above challenges in modeling statistics. It can describe statistics as observations. Observations exhibit values (Measures) that depend on dimensions (Members of Dimensions). Since the SDMX standard has proven applicable in many contexts, the Data Cube Vocabulary adopts the multidimensional model that underlies SDMX and will be compatible with SDMX.
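The multidimensional model can be sketched in a few lines of code. The dimension and measure names below are illustrative, not taken from SDMX:

```python
# Sketch of the multidimensional model underlying SDMX: each observation ties
# a measure value to one member per dimension. Names here are invented.

observations = [
    {"refArea": "UK", "refPeriod": "2011", "sex": "T", "population": 63.3},
    {"refArea": "UK", "refPeriod": "2012", "sex": "T", "population": 63.7},
]

dimensions = ("refArea", "refPeriod", "sex")
measure = "population"

def key(obs):
    """The dimension members together uniquely identify an observation."""
    return tuple(obs[d] for d in dimensions)

# No two observations in a cube may share the same dimension members.
assert len({key(o) for o in observations}) == len(observations)
print(key(observations[0]))  # ('UK', '2011', 'T')
```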

2. Terminology

Statistics is the study of the collection, organization, analysis, and interpretation of data. Statistics comprise statistical data.

The basic structure of statistical data is a multidimensional table (also called a data cube) [SDMX], i.e., a set of observed values organized along a group of dimensions, together with associated metadata. We refer to aggregated statistical data as "macro-data" and unaggregated statistical data as "micro-data".

Statistical data can be collected in a dataset, typically published and maintained by an organization [SDMX]. The dataset contains metadata, e.g., about the time of collection and publication or about the maintaining and publishing organization.

Source data is data from data stores such as relational databases or spreadsheets that acts as a source for the Linked Data publishing process.

Metadata about statistics defines the data structure and gives contextual information about the statistics.

A format is machine-readable if it is amenable to automated processing by a machine, as opposed to presentation to a human user.

A publisher is a person or organization that exposes source data as Linked Data on the Web.

A consumer is a person or agent that uses Linked Data from the Web.

A registry allows a publisher to announce that data or metadata exists and to add information about how to obtain that data [SDMX 2.1].

3. Use cases

This section presents scenarios that are enabled by the existence of a standard vocabulary for the representation of statistics as Linked Data.

3.1 SDMX Web Dissemination Use Case

(Use case taken from SDMX Web Dissemination Use Case [SDMX 2.1])

Since the Data Cube Vocabulary adopts the multidimensional model that underlies SDMX, we also adopt SDMX's prime use case, the "Web Dissemination Use Case". Web dissemination is an increasingly popular use of SDMX and enables organizations to build a self-updating dissemination system.

The Web Dissemination Use Case involves three actors: a structural metadata Web service (registry) that collects metadata about statistical data in a registration fashion; a data Web service (publisher) that publishes statistical data and the metadata registered in the structural metadata Web service; and a data consumption application (consumer) that first discovers data via the registry, then queries the data from the corresponding publisher, and finally visualizes it.

Benefits

Challenges

3.2 Publisher Case Study: UK government financial data from Combined Online Information System (COINS)

(This use case has been summarized from Ian Dickinson et al. [COINS])

More and more organizations want to publish statistics on the Web, for reasons such as increasing transparency and trust. Although, in the ideal case, published data can be understood by both humans and machines, data is often simply published as CSV, PDF, XLS etc., lacking elaborate metadata, which makes free use and analysis difficult.

Therefore, the goal in this scenario is to use a machine-readable and application-independent description of common statistics, expressed using open standards, to foster use of and innovation on the published data. In the "COINS as Linked Data" project [COINS], the Combined Online Information System (COINS) is to be published using a standard Linked Data vocabulary. Via COINS, HM Treasury, the principal custodian of financial data for the UK government, releases previously restricted financial information about government spending.

The COINS data has a hypercube structure. It describes financial transactions using seven independent dimensions (time, data-type, department etc.) and one dependent measure (value). It also allows thirty-three attributes that may further describe each transaction. COINS is an example of one of the more complex statistical datasets being published via data.gov.uk.

Part of the complexity of COINS arises from the nature of the data being released:

The published COINS datasets cover expenditure related to five different years (2005–06 to 2009–10). The actual COINS database at HM Treasury is updated daily. In principle at least, multiple snapshots of the COINS data could be released throughout the year.

The actual data and its hypercube structure are to be represented separately so that an application first can examine the structure before deciding to download the actual data, i.e., the transactions. The hypercube structure also defines, for each dimension and attribute, a range of permitted values that are to be represented.
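The structure-first workflow described above can be sketched as follows. All dimension names and code lists are invented for illustration:

```python
# Sketch: the hypercube structure is represented separately from the data, so
# a consumer can inspect the dimensions and their permitted values before
# deciding to download the (much larger) transactions. Names are illustrative.

structure = {
    "dimensions": {
        "time": {"codes": ["2005-06", "2006-07", "2007-08", "2008-09", "2009-10"]},
        "data_type": {"codes": ["outturn", "plans"]},
        "department": {"codes": ["HMT", "DfT", "DH"]},
    },
    "measure": "value",
}

def conforms(observation, structure):
    """Check that every dimension value is in the permitted code list."""
    return all(
        observation.get(dim) in spec["codes"]
        for dim, spec in structure["dimensions"].items()
    )

obs = {"time": "2009-10", "data_type": "outturn", "department": "HMT", "value": 1200.0}
print(conforms(obs, structure))  # True
```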

An access or query interface to the COINS data, e.g., via a SPARQL endpoint or the linked data API, is planned. Queries that are expected to be interesting are: "spending for one department", "total spending by department", "retrieving all data for a given observation" etc.

Benefits

According to the COINS as Linked Data project, the reasons for publishing COINS as Linked Data are threefold:

Challenges

The COINS use case leads to the following challenges:

3.3 Publisher Use Case: Publishing Excel Spreadsheets about Dutch historical census data as Linked Data

(This use case has been contributed by Rinke Hoekstra. See CEDA_R and Data2Semantics for more information.)

Beyond government, there is a need to publish considerable amounts of statistical data for consumption in various (including unanticipated) application scenarios. Typically, such data is made available for download as Microsoft Excel sheets.

For instance, the CEDA_R and Data2Semantics projects aim to publish and harmonize Dutch historical census data (from 1795 onwards). These censuses are currently only available as Excel spreadsheets (obtained by data entry) that closely mimic the layout in which the data was originally published, and they are to be published as Linked Data.

Those Excel files contain single spreadsheets with several multidimensional data tables, each having a name and notes, as well as column values, row values, and cell values.

Another concrete example is the Stats2RDF project, which intends to publish Excel sheets with biomedical statistical data. Here, Excel files are first translated into CSV and then into RDF using the semantic wiki OntoWiki.

Benefits

Challenges

3.4 Publisher Use Case: Publishing hierarchically structured data from StatsWales and Open Data Communities

(Use case has been taken from [QB4OLAP] and from discussions at publishing-statistical-data mailing list)

A common situation in statistical data is that some kind of "overall" figure is then broken down into parts.

Example (in Turtle; the ex: and sdmx: prefix bindings are illustrative):

@prefix ex:   <http://example.org/ns#> .
@prefix sdmx: <http://purl.org/linked-data/sdmx/2009/dimension#> .

ex:obs1
  sdmx:refArea <uk> ;
  sdmx:refPeriod "2011" ;
  ex:population "60" .
ex:obs2
  sdmx:refArea <england> ;
  sdmx:refPeriod "2011" ;
  ex:population "50" .
ex:obs3
  sdmx:refArea <scotland> ;
  sdmx:refPeriod "2011" ;
  ex:population "5" .
ex:obs4
  sdmx:refArea <wales> ;
  sdmx:refPeriod "2011" ;
  ex:population "3" .
ex:obs5
  sdmx:refArea <northernireland> ;
  sdmx:refPeriod "2011" ;
  ex:population "2" .

We are looking for the best way (in the context of the RDF/Data Cube/SDMX approach) to express that the values for England, Scotland, Wales, and Northern Ireland ought to add up to the value for the UK and constitute a more detailed breakdown of the overall UK figure. Since we might also have population figures for France, Germany, EU28 etc., it is not as simple as just taking a qb:Slice where you fix the time period and the measure.
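The intended integrity constraint can be sketched as a simple check. The hierarchy mapping below is an assumption for illustration:

```python
# Sketch of the constraint discussed above: the figures for the parts of the
# UK should add up to the overall UK figure. The breakdown mapping is invented.

population = {"uk": 60, "england": 50, "scotland": 5, "wales": 3, "northernireland": 2}
breakdown = {"uk": ["england", "scotland", "wales", "northernireland"]}

def consistent(parent, population, breakdown):
    """True if the children's values sum to the parent's value."""
    return population[parent] == sum(population[c] for c in breakdown[parent])

print(consistent("uk", population, breakdown))  # True
```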

Similarly, Etcheverry and Vaisman [QB4OLAP] present the use case of publishing household data from StatsWales and Open Data Communities.

For each fact, this multidimensional data contains a time dimension with one level, Year, and a location dimension with the levels Unitary Authority, Government Office Region, Country, and ALL. The unit of measurement is 1000 households.

In this use case, one wants to publish not only a dataset at the bottommost level, i.e., the number of households in each Unitary Authority in each year, but also datasets at more aggregated levels. For instance, in order to publish a dataset with the number of households in each Government Office Region per year, one needs to aggregate the measure of all facts having the same Government Office Region using the SUM function.
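The roll-up described above can be sketched as follows. The authority-to-region mapping and the figures are invented:

```python
# Sketch of aggregating household counts (in units of 1000 households) from
# Unitary Authorities to their Government Office Region with SUM.
from collections import defaultdict

facts = [
    {"year": 2011, "authority": "Cardiff", "households": 142},
    {"year": 2011, "authority": "Swansea", "households": 102},
    {"year": 2011, "authority": "Leeds", "households": 320},
]
region_of = {"Cardiff": "Wales", "Swansea": "Wales", "Leeds": "Yorkshire"}

rollup = defaultdict(int)
for f in facts:
    rollup[(f["year"], region_of[f["authority"]])] += f["households"]

print(rollup[(2011, "Wales")])  # 244
```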

Similarly, for many uses the population broken down by some category (e.g., ethnicity) is expressed as a percentage. Separate datasets give the actual counts per category and the aggregate counts. In such cases it is common to speak of the denominator (often DENOM), the aggregate count against which the percentages are interpreted.
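The DENOM pattern can be sketched as follows, with invented figures:

```python
# Sketch of the DENOM pattern: per-category counts are interpreted as
# percentages against the aggregate count (the denominator), which in practice
# would be published as a separate dataset.

counts = {"category_a": 30, "category_b": 50, "category_c": 20}
denom = sum(counts.values())  # stands in for the separately published DENOM

percentages = {k: 100 * v / denom for k, v in counts.items()}
print(percentages["category_b"])  # 50.0
```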

Benefits

Challenges

3.5 Publisher Case Study: Publishing Observational Data Sets about UK Bathing Water Quality

(Use case has been provided by Epimorphics Ltd, in their UK Bathing Water Quality deployment)

As part of their work with data.gov.uk and the UK Location Programme, Epimorphics Ltd have been working to pilot the publication of both current and historic bathing water quality information from the UK Environment Agency as Linked Data.

The UK has a number of areas, typically beaches, that are designated as bathing waters where people routinely enter the water. The Environment Agency monitors and reports on the quality of the water at these bathing waters.

The Environment Agency's data can be thought of as structured in 3 groups:

The most important dimensions of the data are bathing water, sampling point, and compliance classification.

Benefits

Challenges

3.6 Publisher Case Study: Site specific weather forecasts from Met Office, the UK's National Weather Service

(This section contributed by Dave Reynolds)

The Met Office, the UK's National Weather Service, provides a range of weather forecast products, including openly available site-specific forecasts for the UK. The site-specific forecasts cover over 5000 forecast points; each forecast predicts 10 parameters and spans a 5-day window at 3-hourly intervals, and the whole forecast is updated each hour. A proof-of-concept project investigated the challenge of publishing this information as Linked Data using the Data Cube Vocabulary.

Benefits

Challenges

This weather forecasts case study leads to the following challenges:

ISO19156 compatibility

The World Meteorological Organization (WMO) develops and recommends data interchange standards, and within that community compatibility with ISO 19156 "Geographic information — Observations and measurements" (O&M) is regarded as important. This supports the lesson Modelers using ISO19156 - Observations & Measurements may need clarification regarding the relationship to the Data Cube Vocabulary.

Solution in this case study:

O&M provides a data model for an Observation with an associated Phenomenon, measurement ProcessUsed, Domain (feature of interest), and Result. Prototype vocabularies developed at CSIRO and extended within this project allow this data model to be represented in RDF. For the site-specific forecasts, a 5-day forecast for all 5000+ sites is regarded as a single O&M Observation.

To represent the forecast data itself (the Result in the O&M model), the relevant standard is ISO 19123 "Geographic information — Schema for coverage geometry and functions". This provides a data model for a Coverage, which can represent a set of values across some space. It defines different types of Coverage, including a DiscretePointCoverage suited to representing site-specific forecast results.

It turns out that it is straightforward to treat an RDF Data Cube as a particular concrete representation of the DiscretePointCoverage logical model. The cube has dimensions corresponding to the forecast time and location and the measure is a record representing the forecast values of the 10 phenomena. Slices by time and location provide subsets of the data that directly match the data packages supported by an existing on-line service.

Note that in this situation an observation in the sense of qb:Observation and an observation in the sense of ISO19156 Observations and Measurements are different things. The O&M Observation is the whole forecast whereas each qb:Observation corresponds to a single GeometryValuePair within the forecast results Coverage.

Data volume

Each hourly update comprises over 2 million data points, and forecast data is requested by a large number of data consumers. Bandwidth costs are thus a key consideration, and the apparent verbosity of RDF in general, and Data Cube specifically, was a concern. This supports the lesson Publishers and consumers may need more guidance in efficiently processing data using the Data Cube Vocabulary.

Solution in this case study:

Regarding bandwidth costs, the key factor is not raw data volume but compressibility, since such data is transmitted in compressed form. A Turtle representation of a non-abbreviated data cube compressed to within 15-20% of the size of the compressed, handcrafted XML and JSON representations, obviating the need for abbreviations or a custom serialization.
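A small sketch of the compressibility argument, using toy stand-ins rather than real Met Office data. The property names are invented; the point is that a verbose serialization with long repeated names and a compact CSV carrying the same varying values end up much closer in size once both are gzipped:

```python
# Compare a verbose, repetitive serialization with a compact CSV of the same
# varying values, before and after gzip compression. All names are invented.
import gzip

verbose = "\n".join(
    f"ex:observation{i} ex:referenceArea ex:area{i % 50} ; "
    f"ex:observedTemperature {i % 30} ." for i in range(2000)
)
compact = "\n".join(f"{i},{i % 50},{i % 30}" for i in range(2000))

raw_ratio = len(compact) / len(verbose)
gz_ratio = len(gzip.compress(compact.encode())) / len(gzip.compress(verbose.encode()))

# The repeated boilerplate in the verbose form compresses almost entirely
# away, so the size gap narrows considerably after compression.
assert gz_ratio > raw_ratio
print(f"raw ratio {raw_ratio:.2f}, gzipped ratio {gz_ratio:.2f}")
```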

3.7 Publisher Case Study: Eurostat SDMX as Linked Data

(This use case has been taken from Eurostat Linked Data Wrapper and Linked Statistics Eurostat Data, both deployments for publishing Eurostat SDMX as Linked Data using the draft version of the Data Cube Vocabulary)

As mentioned already, the ISO standard for exchanging and sharing statistical data and metadata among organizations is Statistical Data and Metadata eXchange [SDMX]. Since this standard has proven applicable in many contexts, we adopt the multidimensional model that underlies SDMX and intend the standard vocabulary to be compatible to SDMX. Therefore, in this use case we explain the benefit and challenges of publishing SDMX data as Linked Data.

As one of the main adopters of SDMX, Eurostat publishes large amounts of European statistics coming from a data warehouse as SDMX and other formats on the Web. Eurostat also provides an interface to browse and explore the datasets. However, linking such multidimensional data to related data sets and concepts would require downloading of interesting datasets and manual integration. The goal here is to improve integration with other datasets; Eurostat data should be published on the Web in a machine-readable format, possibly to be linked with other datasets, and possibly to be freely consumed by applications. Both Eurostat Linked Data Wrapper and Linked Statistics Eurostat Data intend to publish Eurostat SDMX data as 5 Star Linked Open Data. Eurostat data is partly published as SDMX, partly as tabular data (TSV, similar to CSV). Eurostat provides a TOC of published datasets as well as a feed of modified and new datasets. Eurostat provides a list of used code lists, i.e., range of permitted dimension values. Any Eurostat dataset contains a varying set of dimensions (e.g., date, geo, obs_status, sex, unit) as well as measures (generic value, content is specified by dataset, e.g., GDP per capita in PPS, Total population, Employment rate by sex).

Benefits

Challenges

3.8 Publisher Case Study: Improving trust in published sustainability information at the Digital Enterprise Research Institute (DERI)

(This use case has mainly been taken from [COGS])

In several applications, relationships between statistical data need to be represented.

The goal of this use case is to describe provenance, transformations, and versioning around statistical data, so that the history of statistics published on the Web becomes clear. This also relates to the issue of publishing relationships between datasets.

A concrete example is given by Freitas et al. [COGS], where transformations of financial datasets (e.g., the addition of derived measures, conversion of units, aggregations, OLAP operations, and enrichment) are executed on statistical data before it is shown in a Web-based report.

See SWPM 2012 Provenance Example for screenshots about this use case.

Benefits

Making transparent the transformations a dataset has undergone increases trust in the data.

Challenges

3.9 Consumer Case Study: Simple chart visualizations of (integrated) published climate sensor data

(Use case taken from SMART natural sciences research project)

Data that is published on the Web is typically visualized by manually transforming it into CSV or Excel and then creating a visualization on top of these formats using tools such as Excel, Tableau, RapidMiner, Rattle, or Weka.

This use case shall demonstrate how statistical data published on the Web can be visualized inside a webpage with little effort and without using commercial or highly complex tools.

An example scenario is environmental research done within the SMART research project. Here, statistics about environmental aspects (e.g., measurements about the climate in the Lower Jordan Valley) shall be visualized for scientists and decision makers. It should also be possible to integrate statistics and display them together. The data is available as XML files on the Web, which are re-published as Linked Data using the Data Cube Vocabulary. On a separate website, specific parts of the data shall be queried and visualized in simple charts, e.g., line diagrams.

Figure 1: HTML embedded line chart of an environmental measure over time for three regions in the lower Jordan valley


Figure 2: Showing the same data in a pivot table aggregating to single months. Here, the aggregate COUNT of measures per cell is given.


Benefits

Easy, flexible and powerful visualizations of published statistical data.

Challenges

3.10 Consumer Use Case: Visualizing published statistical data in Google Public Data Explorer

(Use case taken from Google Public Data Explorer (GPDE))

Google Public Data Explorer (GPDE) provides an easy way to visualize and explore statistical data. Data needs to be in the Dataset Publishing Language (DSPL) to be uploaded to the data explorer. A DSPL dataset is a bundle that contains an XML file (the schema) and a set of CSV files (the actual data). Google provides a tutorial for creating a DSPL dataset from your data, e.g., in CSV. This requires a good understanding of XML, as well as a good understanding of the data that shall be visualized and explored.

In this use case, the goal is to take statistical data published as Linked Data reusing the Data Cube Vocabulary and to transform it into DSPL for visualization and exploration with GPDE, with as little effort as possible.

For instance, Eurostat data about the unemployment rate can be downloaded from the Web and visualized as shown in the following figure:

Figure 3: An interactive chart in GPDE for visualizing Eurostat data described with DSPL


There are different possible approaches, each with advantages and disadvantages: 1) a consumer downloads the data into a triple store, where SPARQL queries can be used to transform it into DSPL, which is then uploaded to and visualized with GPDE; or 2) one or more XSLT transformations on the RDF/XML transform the data into DSPL.
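The final step of approach 1 can be sketched as follows. Only the CSV part of a DSPL bundle is produced here, the observations are invented, and a real bundle would also need the accompanying XML schema:

```python
# Sketch: after querying observations out of a triple store, write them as the
# CSV part of a DSPL bundle. Column names are invented for illustration.
import csv, io

observations = [
    {"geo": "DE", "time": "2011", "unemployment_rate": 5.9},
    {"geo": "FR", "time": "2011", "unemployment_rate": 9.2},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["geo", "time", "unemployment_rate"])
writer.writeheader()
writer.writerows(observations)
csv_part = buf.getvalue()

print(csv_part.splitlines()[0])  # geo,time,unemployment_rate
```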

Benefits

Challenges

3.11 Consumer Case Study: Analyzing published financial (XBRL) data from the SEC with common OLAP systems

(Use case taken from Financial Information Observation System (FIOS))

Online Analytical Processing (OLAP) [OLAP] is an analysis method for multidimensional data. It is an explorative analysis method that allows users to interactively view the data from different angles (rotate, select) or at different granularities (drill down, roll up), and to filter it for specific information (slice, dice).
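Two of the operations named above can be sketched on a cube held as a plain list of facts. The dimension names and figures are invented:

```python
# Sketch of OLAP slice and dice: slice fixes one dimension to a single member;
# dice filters several dimensions to subsets of their members.

facts = [
    {"geo": "DE", "year": 2010, "sex": "F", "value": 10},
    {"geo": "DE", "year": 2011, "sex": "M", "value": 12},
    {"geo": "FR", "year": 2011, "sex": "F", "value": 9},
]

def slice_(facts, dim, member):
    return [f for f in facts if f[dim] == member]

def dice(facts, conditions):
    return [f for f in facts
            if all(f[d] in allowed for d, allowed in conditions.items())]

print(len(slice_(facts, "year", 2011)))                      # 2
print(len(dice(facts, {"geo": {"DE"}, "sex": {"F", "M"}})))  # 2
```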

OLAP systems are commonly used in industry to analyze statistical data on a regular basis. They first use ETL pipelines to extract, transform, and load relevant data into a data warehouse and then provide interfaces for efficiently issuing OLAP queries on the data.

The goal in this use case is to allow analysis of published statistical data with common OLAP systems [OLAP4LD].

For that, a multidimensional model of the data needs to be generated. A multidimensional model consists of facts summarized in data cubes. Facts exhibit measures that depend on members of dimensions; members of dimensions can be further structured along hierarchies of levels.

An example scenario of this use case is the Financial Information Observation System (FIOS) [FIOS], where XBRL data provided by the SEC on the Web is re-published as Linked Data and can be explored and analyzed by stakeholders in Saiku, a Web-based OLAP client.

The following figure shows an example of using FIOS. Here, for three different companies, the Cost of Goods Sold as disclosed in XBRL documents is analyzed. Each cell gives either the number of disclosures or, if only one is available, the actual figure in USD:

Figure 4: Example of using FIOS for OLAP operations on financial data


Benefits

Challenges

3.12 Registry Use Case: Registering published statistical data in data catalogs

(Use case motivated by Data Catalog vocabulary and RDF Data Cube Vocabulary datasets in the PlanetData Wiki)

After statistics have been published as Linked Data, the question remains how to communicate the publication and to let users discover the statistics. There are catalogs to register datasets, e.g., CKAN, datacite.org, da|ra, and Pangea. Those catalogs require specific configurations to register statistical data.

The goal of this use case is to demonstrate how to expose and distribute statistics after publication, for instance by allowing automatic registration of statistical data in such catalogs so that datasets can be found and evaluated. To this end, it should be possible to transform the published statistical data into formats that data catalogs can use.

A concrete use case is the structured collection of RDF Data Cube Vocabulary datasets in the PlanetData Wiki. This list is supposed to describe statistical datasets on a higher level — for easy discovery and selection — and to provide a useful overview of RDF Data Cube deployments in the Linked Data cloud.
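Deriving a catalog entry from a dataset's metadata can be sketched as follows. The use of Dublin Core terms (dct:) on catalog entries follows common DCAT practice, but the exact mapping here is an assumption, and the metadata values are invented:

```python
# Sketch: project a published cube's dataset-level metadata onto the fields a
# data catalog expects, here named after DCAT's Dublin Core properties.

dataset_metadata = {
    "title": "Household counts by Unitary Authority",
    "publisher": "StatsWales",
    "modified": "2012-11-01",
}

catalog_entry = {
    "dct:title": dataset_metadata["title"],
    "dct:publisher": dataset_metadata["publisher"],
    "dct:modified": dataset_metadata["modified"],
}
print(catalog_entry["dct:title"])
```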

Benefits

Challenges

4. Lessons

The use cases presented in the previous section give rise to the following lessons that can motivate future work on the vocabulary as well as associated tools or services complementing the vocabulary.

4.1 There is a putative requirement to update to SDMX 2.1 if there are specific use cases that demand it

The draft version of the vocabulary builds upon SDMX Standards Version 2.0. A newer version of SDMX, SDMX Standards, Version 2.1, is available.

The requirement is to at least build upon Version 2.0; if specific use cases derived from Version 2.1 become available, the working group may consider building upon Version 2.1.

Background information:

Supporting use cases:

4.2 Publishers may need more guidance in creating and managing slices or arbitrary groups of observations

There should be a consensus on the issue of flattening or abbreviating data. One suggestion is to author data without the duplication, but have the data publication tools "flatten" the compact representation into standalone observations during the publication process.
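The suggested flattening step can be sketched as follows. The property names are illustrative:

```python
# Sketch: values stated once at the slice level are copied onto each
# observation at publication time, so every observation stands alone.

slice_level = {"refArea": "uk", "refPeriod": "2011"}
compact_observations = [
    {"sex": "F", "population": 31},
    {"sex": "M", "population": 29},
]

flattened = [{**slice_level, **obs} for obs in compact_observations]
print(flattened[0]["refArea"])  # uk
```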

Background information:

Supporting use cases:

4.3 Publishers may need more guidance to decide which representation of hierarchies is most suitable for their use case

First, hierarchical code lists may be supported via SKOS [SKOS], allowing for cross-location and cross-time analysis of statistical datasets.

Second, one can think of non-SKOS hierarchical code lists, e.g., if simple skos:narrower / skos:broader relationships are not sufficient or if a vocabulary uses specific hierarchical properties such as geo:containedIn.

Also, the use of hierarchy levels needs to be clarified. It has been suggested to allow skos:Collections as the value of qb:codeList.

Richard Cyganiak gave a summary of different options for specifying the allowed dimension values of a coded property, possibly including hierarchies (see mail):

  1. All instances of a given rdfs:Class (via rdf:type).
  2. All skos:Concepts in a given skos:ConceptScheme (via skos:inScheme).
  3. All skos:Concepts in a given skos:Collection or its subcollections (via skos:member).
  4. All resources that are roots, or children of a root, of a qb:HierarchicalCodeList.
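Option 2 above can be sketched over triples held as plain tuples. The URIs are invented:

```python
# Sketch of option 2: the allowed values of a coded dimension are all
# skos:Concepts attached to a given skos:ConceptScheme via skos:inScheme.

triples = {
    ("ex:male", "skos:inScheme", "ex:sexScheme"),
    ("ex:female", "skos:inScheme", "ex:sexScheme"),
    ("ex:uk", "skos:inScheme", "ex:areaScheme"),
}

def allowed_values(scheme, triples):
    """Collect every subject declared to be in the given concept scheme."""
    return {s for (s, p, o) in triples if p == "skos:inScheme" and o == scheme}

print("ex:male" in allowed_values("ex:sexScheme", triples))  # True
```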

Background information:

Supporting use cases:

4.4 Modelers using ISO19156 - Observations & Measurements may need clarification regarding the relationship to the Data Cube Vocabulary

A number of organizations, particularly in the Climate and Meteorological area, already have some commitment to the OGC "Observations and Measurements" (O&M) logical data model, also published as ISO 19156. Are there any statements about compatibility and interoperability between O&M and Data Cube that can be made to give guidance to such organizations?

Partly solved by the description of the publisher case study Site specific weather forecasts from Met Office, the UK's National Weather Service.

Background information:

Supporting use cases:

4.5 Publishers may need guidance in how to represent common analytical operations such as Slice, Dice, Rollup on data cubes

Background information:

Supporting use cases:

4.6 Publishers may need guidance in making transparent the pre-processing of aggregate statistics

Background information:

Supporting use cases:

4.7 Publishers and consumers may need guidance in checking and making use of well-formedness of published data using data cube

Background information:

Supporting use cases:

4.8 Publishers may need guidance in conversions from common statistical representations such as CSV, Excel, ARFF etc.

Background information:

Supporting use cases:

4.9 Consumers may need guidance in conversions into formats that can easily be displayed and further investigated in tools such as Google Data Explorer, R, Weka etc.

Background information:

Supporting use cases:

4.10 Publishers and consumers may need more guidance in efficiently processing data using the Data Cube Vocabulary

Background information:

Supporting use cases:

4.11 Publishers may need guidance in communicating the availability of published statistical data to external parties and to allow automatic discovery of statistical data

Clarify the relationship between DCAT and QB.

Background information:

Supporting use cases:

A. Acknowledgements

We thank Phil Archer, John Erickson, Rinke Hoekstra, Bernadette Hyland, Aftab Iqbal, James McKinney, Dave Reynolds, Biplav Srivastava, Boris Villazón-Terrazas for feedback and input.

We thank Hadley Beeman, Sandro Hawke, Bernadette Hyland, George Thomas for their help with publishing this document.

References

[COG]
SDMX Content Oriented Guidelines, http://sdmx.org/?page_id=11.
[COGS]
Freitas, A., Kämpgen, B., Oliveira, J. G., O’Riain, S., & Curry, E. (2012). Representing Interoperable Provenance Descriptions for ETL Workflows. ESWC 2012 Workshop Highlights (pp. 1–15). Springer Verlag, 2012 (in press). (Extended Paper published in Conf. Proceedings.). http://andrefreitas.org/papers/preprint_provenance_ETL_workflow_eswc_highlights.pdf.
[COINS]
Ian Dickinson et al., COINS as Linked Data http://data.gov.uk/resources/coins, last visited on Jan 9 2013.
[FIOS]
Andreas Harth, Sean O'Riain, Benedikt Kämpgen. Submission XBRL Challenge 2011. http://xbrl.us/research/appdev/Pages/275.aspx.
[FOWLER97]
Fowler, Martin (1997). Analysis Patterns: Reusable Object Models. Addison-Wesley. ISBN 0201895420.
[LOD]
Linked Data, http://linkeddata.org/.
[OLAP]
Online Analytical Processing Data Cubes, http://en.wikipedia.org/wiki/OLAP_cube.
[OLAP4LD]
Kämpgen, B. and Harth, A. (2011). Transforming Statistical Linked Data for Use in OLAP Systems. I-Semantics 2011. http://www.aifb.kit.edu/web/Inproceedings3211.
[QB-2010]
RDF Data Cube vocabulary, http://publishing-statistical-data.googlecode.com/svn/trunk/specs/src/main/html/cube.html.
[QB-2013]
RDF Data Cube vocabulary, http://www.w3.org/TR/vocab-data-cube/.
[QB4OLAP]
Etcheverry, Vaisman. QB4OLAP: A New Vocabulary for OLAP Cubes on the Semantic Web. http://publishing-multidimensional-data.googlecode.com/git/index.html.
[RDF]
Resource Description Framework, http://www.w3.org/RDF/.
[SCOVO]
The Statistical Core Vocabulary, http://vocab.deri.ie/scovo; SCOVO: Using Statistics on the Web of Data, http://dl.acm.org/citation.cfm?id=1561533.1561592.
[SKOS]
Simple Knowledge Organization System, http://www.w3.org/2004/02/skos/.
[SDMX]
SDMX User Guide Version 2009.1, http://sdmx.org/wp-content/uploads/2009/02/sdmx-userguide-version2009-1-71.pdf, last visited Jan 8 2013.
[SDMX 2.1]
SDMX 2.1 User Guide, Version 0.1, 19/09/2012, http://sdmx.org/wp-content/uploads/2012/11/SDMX_2-1_User_Guide_draft_0-1.pdf, last visited Jan 8 2013.
[XKOS]
Extended Knowledge Organization System (XKOS), https://github.com/linked-statistics/xkos.