Industry 4.0 layered analytics and data service orchestration

Enterprise datasets evolve continuously. They are distributed and reused across applications, analysis projects, and data science efforts with little formality; extended, modified, and tailored to address specific needs; added to meet new requirements with no thought to master data or reuse; spun off along with their business units; acquired as part of the data assets that come with acquisitions; integrated; refactored; and, eventually, abandoned to obsolescence, deprecated, or removed.

Applications that are built on data analysis, or, as in ML, trained on data sets, will increasingly require effective, coherent, and performant ways to maintain the history and provenance of those data sets, and to relate them to the performance, configuration, change management, and operation of the applications themselves.
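
As a rough illustration of what maintaining such relationships can involve, the TypeScript sketch below models a dataset snapshot together with the application run and configuration that consumed it. All type and field names here are hypothetical and are not part of any datagenous interface.

```typescript
// Hypothetical provenance model: a dataset snapshot tied to the application
// run and configuration that consumed it. All names are illustrative only.

interface DatasetVersion {
  datasetId: string;       // stable identifier of the logical dataset
  revision: string;        // content hash or tag naming this snapshot
  schemaRevision: string;  // version of the schema in force for this snapshot
  validFrom: Date;         // start of the interval this snapshot covers
  validTo?: Date;          // open-ended if the snapshot is still current
}

interface ApplicationRun {
  applicationId: string;
  configurationHash: string;  // captures the configuration the run used
  startedAt: Date;
  inputs: DatasetVersion[];   // the exact data the run analysed or trained on
}

// With records like these, "which data did this run use, at which revision and
// under which schema?" becomes a lookup over recorded provenance rather than a
// reconstruction exercise.
function inputsAsOf(run: ApplicationRun, asOf: Date): DatasetVersion[] {
  return run.inputs.filter(
    (v) => v.validFrom <= asOf && (v.validTo === undefined || asOf < v.validTo)
  );
}
```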

These applications will require capabilities to (a brief sketch of the first and third follows the list):

  1. reference and identify data at particular points in time
  2. produce views of data set and schema evolution
  3. create unified views of data with varied sample rates
  4. validate temporal data
  5. relate data set semantics to evolving applications
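
The following is a minimal TypeScript sketch of the first and third capabilities, using entirely hypothetical types and function names rather than the datagenous API: it resolves an observation as of a given instant and merges series captured at different sample rates onto a shared time grid, carrying along the dataset revisions used to build the view.

```typescript
// Hypothetical sketch of capabilities 1 and 3: resolve observations as of a
// given instant and merge series with different sample rates onto one grid.
// None of these types or functions belong to the datagenous API.

interface Observation { at: Date; value: number; }

interface Series {
  datasetId: string;
  revision: string;        // identifies the snapshot these samples came from
  samples: Observation[];  // assumed to be in ascending time order
}

// Capability 1: the latest observation at or before `asOf`.
function valueAsOf(series: Series, asOf: Date): number | undefined {
  const prior = series.samples.filter((s) => s.at <= asOf);
  return prior.length > 0 ? prior[prior.length - 1].value : undefined;
}

// Capability 3: sample every series on a shared grid, so data captured at
// different rates can be read as one table, tagged with its source revisions.
function unifiedView(seriesList: Series[], grid: Date[]) {
  return grid.map((at) => ({
    at,
    sources: seriesList.map((s) => `${s.datasetId}@${s.revision}`),
    values: seriesList.map((s) => valueAsOf(s, at)),
  }));
}
```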

The datagenous service provides these capabilities in a direct-manipulation data programming environment or through a JavaScript API, SPARQL, GraphQL, and linked data interfaces.
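
By way of illustration only, the client sketch below phrases a point-in-time provenance question over a generic SPARQL endpoint from TypeScript; the endpoint URL, vocabulary, and data layout are placeholders and are not the datagenous service's published interfaces.

```typescript
// Illustrative only: the endpoint, vocabulary, and property names below are
// placeholders, not part of the datagenous service's published interfaces.

const ENDPOINT = "https://example.org/sparql"; // placeholder SPARQL endpoint

// Which revision of a (hypothetical) sensorReadings dataset was current at a
// given instant, assuming revisions carry validFrom/validTo interval properties.
const query = `
  PREFIX ex:  <http://example.org/ns#>
  PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
  SELECT ?revision WHERE {
    ?revision ex:ofDataset ex:sensorReadings ;
              ex:validFrom ?from .
    OPTIONAL { ?revision ex:validTo ?to }
    FILTER (?from <= "2024-01-01T00:00:00Z"^^xsd:dateTime)
    FILTER (!bound(?to) || ?to > "2024-01-01T00:00:00Z"^^xsd:dateTime)
  }`;

// Standard SPARQL protocol: POST the query and ask for JSON results.
async function revisionAsOf(): Promise<string | undefined> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/sparql-query",
      Accept: "application/sparql-results+json",
    },
    body: query,
  });
  const json = await res.json();
  return json.results?.bindings?.[0]?.revision?.value;
}
```

The same shape of question could equally be posed through a GraphQL or linked data interface; the sketch only illustrates the general idea of treating dataset revisions and validity intervals as directly queryable data.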