Quorum 360 handles all types of data as standard, from enterprise applications (both on-premise and in the cloud) to social and open data sources.
The Platform Capabilities
Any and all information types, supported by optimized storage and processing.
A flexible platform that allows the use of pre-built connectors or customizable functions for bespoke applications.
Inside and outside (cloud, external) the enterprise.
Pulled together as and where needed by federation and virtualization capabilities.
A distributed architecture composed of multiple repositories.
Data integration in Quorum 360 is an integrated, fully automated data logistics process, enabling fast, real-time movement of data across disparate systems.
The integration platform is truly schema-less and data-source agnostic: it supports disparate, distributed sources of differing formats, schemas, protocols, speeds, and sizes, such as machines, geolocation devices, clickstreams, files, social feeds, log files, and video.
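To illustrate what schema-less, source-agnostic ingestion can look like, the sketch below wraps records of completely different shapes in a common envelope without agreeing a schema up front. All names here are hypothetical; this is not Quorum 360's actual ingestion API.

```python
import json
import time

def to_envelope(record, source):
    """Wrap a record of any shape in a common envelope.

    The original payload is kept untouched, so heterogeneous sources
    (clickstreams, logs, geolocation fixes) can flow through the same
    pipeline. Hypothetical sketch, not the real Quorum 360 API.
    """
    return {
        "source": source,
        "ingested_at": time.time(),
        "payload": record,
    }

# Records from disparate sources: a clickstream event, a log line, a geo fix.
click = {"user": "u42", "page": "/home"}
log_line = "ERROR disk full on node-7"
geo = (51.5074, -0.1278)

envelopes = [
    to_envelope(click, "clickstream"),
    to_envelope(log_line, "syslog"),
    to_envelope(geo, "geolocation"),
]

for e in envelopes:
    print(json.dumps(e))
```

Because the payload is opaque to the envelope, new source types can be added without changing downstream plumbing.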
Staging Area for Storage
The staging area of Quorum 360 is built on Hortonworks Data Platform (HDP).
Data at rest is handled by the Hadoop framework; HDFS provides scalable, fault-tolerant, cost-efficient storage.
Security is woven into Quorum 360 in multiple layers. Critical features for authentication, authorization, accountability, and data protection are in place to secure the solution across these key requirements.
Consistent with this approach across the enterprise Hadoop capabilities, Quorum 360 also lets enterprises integrate and extend their current security solutions, providing a single, consistent, secure umbrella over the modern data architecture.
Dynamic Data Modelling
Dynamic Data Modelling in Quorum 360 is a key value-add, based on the principles of Linked Data and the Resource Description Framework (RDF). In the staging area, every data element from the source is pivoted into a standard machine-readable format and assigned a Uniform Resource Identifier (URI).
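As a minimal sketch of this pivoting step, the snippet below turns a flat source record into RDF-style subject-predicate-object triples, minting a URI for the entity and one per attribute. The namespace and function names are illustrative assumptions, not Quorum 360's internal representation.

```python
# Hypothetical URI namespace for the example.
BASE = "http://example.org/quorum/"

def pivot_row(entity_type, key, row):
    """Pivot a flat record into (subject, predicate, object) triples.

    A URI is minted for the entity itself, and each attribute becomes
    one triple linking that URI to a value. Illustrative sketch only.
    """
    subject = f"{BASE}{entity_type}/{key}"
    triples = [(subject, f"{BASE}type", f"{BASE}{entity_type}")]
    for attr, value in row.items():
        triples.append((subject, f"{BASE}attr/{attr}", value))
    return triples

row = {"name": "Acme Corp", "country": "UK"}
for t in pivot_row("customer", "c-001", row):
    print(t)
```

In a production linked-data stack the same idea would typically be expressed with an RDF library and serialized as Turtle or N-Triples rather than Python tuples.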
URIs are then linked with each other to generate a semantic, ontological data model, similar to a knowledge graph, that can be traversed, i.e. queried, for knowledge discovery.
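The traversal described above can be sketched as a simple breadth-first walk over linked URIs. This is a toy stand-in for the graph queries the model supports (a real deployment would likely use SPARQL or a graph query engine), with made-up `urn:` identifiers.

```python
from collections import deque

def neighbors(triples, node):
    """Objects directly linked from `node` via any predicate."""
    return [o for s, p, o in triples if s == node]

def reachable(triples, start):
    """Breadth-first traversal over linked URIs: everything that can
    be discovered by following links out of `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbors(triples, node):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Tiny linked-data graph: customer -> order -> product.
triples = [
    ("urn:customer/c1", "urn:placed", "urn:order/o1"),
    ("urn:order/o1", "urn:contains", "urn:product/p9"),
]
print(reachable(triples, "urn:customer/c1"))
```

Starting from the customer, the traversal discovers the order and the product it contains, which is the essence of knowledge discovery over a linked model.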
The standard model is fully customizable to cater to different enterprise requirements, and can be extended as requirements change or new ones appear. Additionally, because of the cross-linkage the model entails, new and ad-hoc models can be derived using the query builder (akin to virtualization), allowing use-case-specific models to be created.
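Deriving a use-case-specific model can be pictured as projecting only the predicates a given use case needs out of the shared graph. The function below is a hypothetical stand-in for the query builder, operating on the same tuple-style triples as the earlier sketches.

```python
def derive_view(triples, predicates):
    """Derive an ad-hoc model by keeping only the predicates a use
    case needs (a stand-in for the query builder; not the real API)."""
    wanted = set(predicates)
    return [t for t in triples if t[1] in wanted]

triples = [
    ("urn:c1", "urn:name", "Acme"),
    ("urn:c1", "urn:ssn", "000-00-0000"),
    ("urn:c1", "urn:country", "UK"),
]

# A marketing-specific view that omits the sensitive attribute.
view = derive_view(triples, ["urn:name", "urn:country"])
print(view)
```

Because the view is computed from the shared model rather than copied, it stays consistent with the underlying data, which is the sense in which this resembles virtualization.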