Modern Data Management


Frequently Asked Questions


RDF is a standard model for data interchange on the Web. RDF has features that facilitate data merging even if the underlying schemas differ, and it specifically supports the evolution of schemas over time without requiring all the data consumers to be changed.

RDF extends the linking structure of the Web to use URIs to name the relationship between things as well as the two ends of the link (this is usually referred to as a “triple”). Using this simple model, it allows structured and semi-structured data to be mixed, exposed, and shared across different applications.

This linking structure forms a directed, labelled graph, where the edges represent the named link between two resources, represented by the graph nodes. This graph view is the easiest possible mental model for RDF and is often used in easy-to-understand visual explanations.
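As an illustrative sketch (not part of any RDF library or of Quorum 360), the triple model can be mimicked in plain Python by treating a graph as a set of (subject, predicate, object) tuples; merging two datasets that use different vocabularies is then just a set union, because shared URIs identify the same resources:

```python
# A minimal, illustrative triple store: each triple names the link
# (predicate) as well as its two ends (subject and object).
# URIs are abbreviated to prefixed strings for readability.

graph_a = {
    ("ex:ProductA", "ex:contains", "ex:FruitX"),
    ("ex:FruitX", "rdfs:label", "Apple"),
}

graph_b = {
    # A second source describing the same resource with a different vocabulary.
    ("ex:FruitX", "skos:prefLabel", "Apple"),
    ("ex:ProductA", "ex:soldBy", "ex:RetailerY"),
}

# Because both graphs name resources with shared identifiers,
# merging is simply set union -- no schema-alignment step is needed.
merged = graph_a | graph_b

def outgoing(graph, subject):
    """Return the labelled edges leaving a node in the graph."""
    return {(p, o) for s, p, o in graph if s == subject}

print(len(merged))                      # 4 distinct triples
print(sorted(outgoing(merged, "ex:FruitX")))
```

Note how the second source adds a new predicate (`skos:prefLabel`) without any change to the first source's schema, which is the schema-evolution property described above.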

Quorum 360 is built on the principles of RDF and linked data.

Whilst there are many types of datastore (NoSQL, relational, graph, etc.), RDF is a standards-based datastore.

The Knowledge Graph represents a collection of interlinked descriptions of entities – real-world objects, events, situations or abstract concepts – where:
• Descriptions have a formal structure that allows both people and computers to process them in an efficient and unambiguous manner;
• Entity descriptions contribute to one another, forming a network, where each entity represents part of the description of the entities related to it.
Knowledge Graphs combine characteristics of several data management paradigms. The Knowledge Graph can be seen as a specific type of:
• Database, because it can be queried via structured queries;
• Graph, because it can be analyzed as any other network data structure;
• Knowledge base, because the data in it bears formal semantics, which can be used to interpret the data and infer new facts.

Data integration in Quorum 360 is an integrated data logistics process and is fully automated, enabling fast, real-time movement across disparate systems. The integration platform is data-source agnostic and supports disparate, distributed sources of differing formats, schemas, protocols, speeds and sizes, such as machines, geo-location devices, click streams, files, social feeds, log files and videos.

Quorum 360 provides a simple, reliable, scalable foundation for streaming analytics and event-driven data processing by utilizing the Publish/Subscribe features of cloud platforms. This Pub/Sub service ingests event streams and delivers them to the Quorum Hub (data staging area), aligning the streamed data as triples. Relying on the Quorum Pub/Sub service guarantees delivery of event data in support of use cases such as:

• Real-time personalization
• Fast reporting, targeting and optimization
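The flow described above (ingest an event, stage it as triples, deliver it to subscribers) can be sketched with an in-process queue standing in for the Pub/Sub service. This is a toy illustration only; all names are hypothetical and it does not reflect the actual Quorum Hub implementation:

```python
import queue

# Illustrative sketch: a toy publish/subscribe hub that ingests source
# events and stages them as triples. The queue stands in for the
# cloud Pub/Sub service; "ex:" URIs are hypothetical.

hub = queue.Queue()  # stands in for the data staging area

def publish(event: dict) -> None:
    """Pivot a source event into triples and push them to the hub."""
    subject = f"ex:event/{event['id']}"
    for key, value in event.items():
        if key != "id":
            hub.put((subject, f"ex:{key}", value))

publish({"id": 101, "user": "u42", "action": "click", "page": "/home"})

# A subscriber drains the staged triples for downstream processing.
staged = []
while not hub.empty():
    staged.append(hub.get())

print(staged)
```

A real deployment would use a durable broker with acknowledgements to obtain the delivery guarantees mentioned above; the queue here only illustrates the event-to-triple staging step.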

Dynamic Data Modelling in Quorum 360 is a key value-add and is based on the principles of Linked Data and the Resource Description Framework (RDF). In the staging area, every data element from the source is pivoted into a standard machine-readable format and assigned a Uniform Resource Identifier (URI). URIs are then linked with each other to generate a semantic ontological data model, similar to a knowledge graph, that can be traversed, i.e. queried, for knowledge discovery.
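A minimal sketch of this pivoting step, in plain Python with hypothetical names and URIs (not the Quorum 360 implementation), shows a source record being turned into URI-identified triples that can then be traversed:

```python
# Illustrative sketch: pivot source rows into URI-identified triples,
# then traverse the resulting graph. All names and URIs are hypothetical.

BASE = "http://example.org/id/"

def mint_uri(kind: str, key: str) -> str:
    """Assign a URI to a source data element."""
    return f"{BASE}{kind}/{key}"

def pivot(row: dict) -> set:
    """Turn one source record into a set of triples."""
    s = mint_uri("product", row["sku"])
    triples = {(s, BASE + "name", row["name"])}
    for ingredient in row["ingredients"]:
        triples.add((s, BASE + "contains", mint_uri("ingredient", ingredient)))
    return triples

graph = pivot({"sku": "A1", "name": "Fruit Bar", "ingredients": ["apple", "nut"]})

def objects(graph, subject, predicate):
    """Traverse: follow a named link from a subject."""
    return {o for s, p, o in graph if s == subject and p == predicate}

contained = objects(graph, mint_uri("product", "A1"), BASE + "contains")
print(sorted(contained))
```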

Any data files representing different modules in the application will be analysed and imported into the staging area to create the data model. The methodology used to generate the model is based on Extract-Load-Transform (ELT) mechanisms, which enables a lossless state of the original data from the source.

Our approach to creating flexibility and extensibility for data modelling starts with the organisation of information and data. We do so by using:

- SKOS (Simple Knowledge Organization System)
- OWL (Web Ontology Language)
- RDFS (Resource Description Framework Schema)

At the uppermost level we use the SKOS (Simple Knowledge Organization System) model: every ‘thing’ is a Concept, such as Products, Nutrients, Fruits, Vegetables, Nuts and Drinks, with sub-concepts such as Fruit Juices, Dried Fruits, Pureed Fruits, Dried Vegetables, etc. This allows general searches to find a concept by name, by type, by browsing a hierarchy, and so on. It is also very easy to allow the SKOS model to grow organically with new ‘concepts’, even if complete details of a new concept are unknown.
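The hierarchy described above can be sketched as a SKOS-style concept scheme in a few lines of Python. The concept names follow the examples in the text; the code itself is hypothetical, not part of SKOS or Quorum 360:

```python
# Illustrative sketch of a SKOS-style concept scheme: concepts are
# linked by a broader relation (skos:broader), and a search can walk
# the hierarchy. Concept names follow the examples in the text.

broader = {
    "Fruit Juices": "Fruits",
    "Dried Fruits": "Fruits",
    "Pureed Fruits": "Fruits",
    "Dried Vegetables": "Vegetables",
    "Fruits": "Products",
    "Vegetables": "Products",
}

def ancestors(concept: str) -> list:
    """All broader concepts, nearest first (transitive skos:broader)."""
    chain = []
    while concept in broader:
        concept = broader[concept]
        chain.append(concept)
    return chain

# New concepts can be added organically, even with no further details yet:
broader["Fruit Snacks"] = "Fruits"

print(ancestors("Dried Fruits"))   # ['Fruits', 'Products']
print(ancestors("Fruit Snacks"))   # ['Fruits', 'Products']
```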

OWL allows concepts to be related to one another more precisely than with SKOS alone. For example: {ProductA contains FruitX and NutY}; yet we do not lose the simplicity of SKOS, because we can also infer that {ProductA semanticallyRelatedTo FruitX and NutY}.
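The inference mentioned above follows the sub-property pattern: the precise property is declared a sub-property of the looser SKOS relation, so the weaker triple follows from the stronger one. A toy sketch of that closure computation (hypothetical names, not an OWL reasoner):

```python
# Illustrative sketch: "ex:contains" is declared a sub-property of
# the looser skos:semanticRelation (rdfs:subPropertyOf-style), so the
# weaker triple can be inferred from the stronger one.

sub_property_of = {"ex:contains": "skos:semanticRelation"}

asserted = {
    ("ex:ProductA", "ex:contains", "ex:FruitX"),
    ("ex:ProductA", "ex:contains", "ex:NutY"),
}

def infer(triples):
    """Add the triples implied by the sub-property declarations."""
    inferred = set(triples)
    for s, p, o in triples:
        if p in sub_property_of:
            inferred.add((s, sub_property_of[p], o))
    return inferred

closure = infer(asserted)
print(len(closure))   # 4 triples: 2 asserted + 2 inferred
```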

RDFS allows us to capture the details and attributes of the individual products and ingredients, such as Energy (kJ), Saturated Fat (g), Total Sugar (g), Sodium (mg), Fruit, Veg & Nuts (%), NSP Fibre (g) or AOAC Fibre (g), and Protein (g). These are the attributes that will be used in the NPM calculations.

At the lowest level we use the PROV model: as the taxonomy grows it can be useful to add another layer of structure beyond a catalogue of concepts. Many models are not just documenting the ‘state’ of an entity; instead, they are often tracking the actions performed (e.g. blending) on entities (e.g. Products) by agents (e.g. Manufacturers). This also allows the inclusion of date/time information, enabling the tracking of how products and their ingredients may have changed and how this has affected the overall product scoring over time.

SHACL shapes (aka templates) simply define a pattern of the graph that meets the requirements for attributes and relationships. For example, SHACL shapes may be used for validation rules for manufacturers’ product claims, but may also capture the rules for score calculation and the classification of products as “less healthy”. The templates also allow date/time information to be used as the nutritional profile definitions and rules change.
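The template idea can be illustrated with a toy validator that checks an entity against a shape's required attributes, loosely mirroring how a SHACL shape constrains the graph. This is a sketch with hypothetical shape and attribute names, not SHACL itself:

```python
# Illustrative sketch of template (shape) validation: each shape lists
# the attributes a product description must carry. All names are
# hypothetical; a real system would use SHACL over the RDF graph.

shapes = {
    "ProductShape": {"required": ["name", "energy_kj", "total_sugar_g"]},
}

def validate(entity: dict, shape_name: str) -> list:
    """Return the list of constraint violations (empty if conforming)."""
    shape = shapes[shape_name]
    return [f"missing required attribute: {attr}"
            for attr in shape["required"] if attr not in entity]

ok = validate({"name": "Fruit Bar", "energy_kj": 1200, "total_sugar_g": 30},
              "ProductShape")
bad = validate({"name": "Mystery Snack"}, "ProductShape")

print(ok)    # [] -- conforms to the shape
print(bad)   # two violations
```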

Our approach to dynamic data modelling combines the simplicity of SKOS with the accuracy of OWL and RDFS and the expressivity of PROV.

Quorum 360:

• Model-driven, so that changes propagate without any code changes.
• Templates are driven by a SHACL model (which is itself part of the model).
– SHACL shapes (aka templates) simply define a pattern of the graph that meets the requirements for attributes and relationships.
– Thus the model is simultaneously open-world (unrestricted) and closed-world (conforming to predefined templates).

Our approach to solving accessibility is to:

• Provide access via a universally accepted RESTful API (OData) that has been adopted by Microsoft (Excel, SharePoint, etc.), SAP (replacing BAPI), and leading BI tool vendors (Tableau, Spotfire, etc.)
– This then enables easy access to any part of the contextualized model via, say, Excel.


OData is a REST-based open web protocol for sharing data in a standardized format for easy consumption by other systems. It uses well-known web technologies such as HTTP, AtomPub and JSON. OData provides an entire query language directly in the URL. Having a query language in the URL means that by changing the URL, the data returned from an OData feed (endpoint) also changes. Being able to control what data comes back from the consumer side means the consumer has complete control over which parts of the content to use.
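The "query language in the URL" idea can be shown by composing an OData request URL programmatically. The service root below is hypothetical; `$filter`, `$select` and `$top` are standard OData system query options:

```python
from urllib.parse import urlencode

# Illustrative sketch: an OData query lives entirely in the URL, so
# changing the URL changes the data returned. The service root is
# hypothetical; $filter/$select/$top are standard OData options.

def odata_url(service_root: str, entity_set: str, **options) -> str:
    """Build an OData request URL from keyword query options."""
    query = urlencode({f"${k}": v for k, v in options.items()})
    return f"{service_root}/{entity_set}?{query}"

url = odata_url(
    "https://example.org/odata", "Products",
    filter="TotalSugar gt 20",
    select="Name,TotalSugar",
    top=10,
)
print(url)
```

Changing `filter` or `top` yields a different URL and hence a different result set from the same feed, with no client-side code change beyond the URL itself.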


OData also offers more than just exposing content: it offers full CRUD support using the different HTTP methods:

• GET: retrieve one or many entries.
• POST: create a new entry.
• PUT: update an existing entry.
• DELETE: remove an entry.

The basic idea is to make it easy to break down data silos and let content increase in value by making it available to more people.

Comparing SPARQL with OData is somewhat misleading. After all, SPARQL has its roots as a very powerful query language for RDF data and is not intended as a RESTful protocol. Similarly, OData has its roots as an abstract interface to any type of datastore, not as a specification of that datastore. Some have said that “OData is the equivalent of ODBC for the Web”.

The data management strengths of SPARQL/RDF can be combined with the application development strengths of OData with a protocol proxy: OData2SPARQL, a Janus-point between the application development world and the semantic information world.

• Brings together the strength of a ubiquitous RESTful interface standard (OData) with the flexibility and federation ability of RDF/SPARQL.
• Allows standard queries to be published via the OData service along with the deduced classes.
• SPARQL/OData Interop is a proposed W3C interoperation proxy between OData and SPARQL.
• Opens up many popular user-interface development frameworks and tools such as Kendo UI and SAP WebIDE.
• Acts as a Janus-point between application development and data-sources.
• User interface developers are not, and do not want to be, database developers. Therefore they want to use a standardized interface that abstracts away the database, even to the extent of what type of database: RDBMS, NoSQL, or RDF/SPARQL.
• By providing an OData2SPARQL server, it opens up any SPARQL data-source to the C#/LINQ development world.
• Opens up many productivity tools, such as Excel/PowerQuery and SharePoint, to be consumers of SPARQL data such as DBpedia, ChEMBL, ChEBI, BioPAX and any of the Linked Open Data endpoints!
• Microsoft has been joined by IBM and SAP in using OData as their primary interface method, which means there will be many application developers familiar with OData as the means to communicate with a backend data source.


• The Context Model uses any standard graph store (RDF-based)
– Recommended to use HBase Halyard or AWS Neptune for scalability
• It is accessible directly via SPARQL
– Only recommended for developer usage
• It is accessible indirectly via OData RESTFul interface
– Recommended access for users of Excel, PowerQuery, etc

• The Context Schema expresses the open model using the SKOS, OWL, and PROV models
– SKOS, the Simple Knowledge Organization System, offers an easy-to-understand schema for vocabularies and taxonomies. However, modeling precision is lost when skos:semanticRelation predicates are introduced.
– Combining SKOS with RDFS/OWL allows the precision of owl:ObjectProperty to be combined with the flexibility of SKOS. However, clarity is then lost as the number of core concepts (aka owl:Class) grows.
– Many models are not just documenting the 'state' of an entity. Instead they are often tracking the actions performed on entities by agents at locations. Thus aligning the core concepts to the Activity, Entity, Agent, and Location classes of the PROV ontology provides a generic upper-ontology within which to organize the model details.

• The Context Schema is an example of an Open World Model (any property can be added to anything)
– This can be a strength because the model can grow organically
– This can be a weakness if core rules are not followed
• The SHACL W3C standard allows definition of the ‘shapes’ of the graph, adding Closed World Model concepts
• For example
– PumpShape
• Must have inlet connection
• Must have outlet connection
• Must have outlet pressure measurement
• …
– ReciprocatingPumpShape
• Derived from PumpShape
• Must have number of stages
• …
• Templates are used
– To validate any data in the model for consistency
– To define what is published via the OData service
– (Indirectly) to define the UI in the Management Applications
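The shape derivation in the pump example above (a derived shape inheriting its parent's requirements and adding its own) can be sketched as follows. This is hypothetical illustrative code, not SHACL syntax:

```python
# Illustrative sketch of shape derivation, mirroring the PumpShape /
# ReciprocatingPumpShape example: a derived shape inherits its parent's
# required relationships and adds its own. Not SHACL syntax.

shapes = {
    "PumpShape": {
        "base": None,
        "required": ["inlet connection", "outlet connection",
                     "outlet pressure measurement"],
    },
    "ReciprocatingPumpShape": {
        "base": "PumpShape",   # derived from PumpShape
        "required": ["number of stages"],
    },
}

def effective_requirements(name: str) -> list:
    """Collect requirements from the shape and all its ancestors."""
    shape = shapes[name]
    inherited = effective_requirements(shape["base"]) if shape["base"] else []
    return inherited + shape["required"]

print(effective_requirements("ReciprocatingPumpShape"))
```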

• The Context Data is that which is captured to model the underlying business

• Query services provide SPARQL access to the data stores
• SPARQL access is implicit with the Context database since it uses RDF graph
• SPARQL access to the Real Time Database will be via a SPARQL endpoint
– This endpoint converts SPARQL requests to RealTime Database API calls
• A single SPARQL request can call on one or other or both services in a single call
– This uses a SPARQL ‘SERVICE’
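A single request combining both stores uses the SPARQL `SERVICE` keyword. The sketch below only composes such a query as a string (the endpoint URL, prefixes and variable names are hypothetical):

```python
# Illustrative sketch: composing one SPARQL request that matches part
# of the pattern in the local (context) store and delegates the rest
# to a remote endpoint via SERVICE. All names are hypothetical.

def federated_query(remote_endpoint: str) -> str:
    """Build a federated SPARQL query targeting a remote endpoint."""
    return f"""
SELECT ?measurement ?value
WHERE {{
  ?sensor a ex:PressureSensor .            # matched in the context store
  SERVICE <{remote_endpoint}> {{           # delegated to the second service
    ?sensor ex:hasMeasurement ?measurement .
    ?measurement ex:value ?value .
  }}
}}
"""

query = federated_query("http://example.org/realtime/sparql")
print(query)
```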

• Provides a standards-based, RESTful interface to both the Context and Real Time databases
• Metadata is automatically derived from Context data schema and templates
• User-defined operations can also be added for create, read, update, and delete actions beyond the model-defined entities
• OData can be consumed by many applications, especially Excel, allowing users familiar access to both context and time-series data

• Lens2OData provides a flexible, model-driven user interface to the context and time-series data via the OData service.
• Lens2OData is completely open, built using the SAPUI5/OpenUI5 framework
• Lens2OData provides
– Query: find entities by text matching
– Search: links to entities and between entities
– Navigation: from one entity to another
– Explore: graph views of the entities


The Quorum 360 platform has been built to adapt to new data models with minimum effort, saving time and costs. This methodology is called “Model-Driven”, and it implies that all components of the product are driven by the underlying data model.
This methodology is further underpinned by the following sequence of steps, which are foundational for timely delivery, scalability and end-user adoption.
- Compilation
- Composition
- Configuration

The compilation activity of Quorum 360 is where the core libraries of the platform are installed; this level enables all constituents of the platform to perform at a certain standard. It constitutes the core programming, querying and transformation mechanisms, and remains standard across the multi-tenant architecture. In effect, the compilation is something that does not change, giving the platform a massive advantage in usability and scalability, as well as a swift implementation timeline.

The composition activity of Quorum 360 acts as the orchestration mechanism of the platform; at this level the design, implementation and activation of the model are enabled. The composition activity also allows the creation of the OData connections to the models, which in turn enables the display of the data supported by the model in the UI, in both tabular and graph visualization views.

Both the compilation and composition activities are significant, as they are automated to a large extent, which helps maintain shorter implementation timescales. Furthermore, a core activity generally underestimated in platform design is the ability to accommodate changes to source systems, and the impact that such changes would have on recipient systems. This is taken care of in the design of Quorum 360: in line with the model-driven methodology, any changes to the source are automatically cascaded down the value chain and hence available dynamically.

The last activity is the configuration activity; this entails the capabilities that can be handled by the power user of the platform, which include the following:
- Enabling/disabling columns to display.
- A query builder to support combining different data sets.
- Building reports to support diverse use cases.
- Building dashboards to embed reports and distribute them across user groups.
