In the beginning, enterprise software was about automating paper-based business processes. The processes modeled were reasonably static, and the data often fit a tabular format. Hugely successful, this approach digitized immense amounts of information, creating a plethora of siloed data sources.

As data grew, understanding the connections between these disparate sources became as important as, or arguably more important than, any single source on its own. Enabling that connectivity requires a method for resolving the identity and meaning of information elements across the sources. Today, we are increasingly interested in new types of data that are more diverse in structure and less predictable in nature. We capture interrelated information about everything happening around us and must implement systems that are considerably more dynamic than those of the past. Each year, the pace of change in business, and in the software systems that support everything we do, accelerates. Answering these needs with established traditional technologies has been driving up the cost and complexity of enterprise IT systems.

These drivers have led to the rise of new technologies, collectively referred to as NoSQL (Not Only SQL). While NoSQL databases are often called schema-less, this does not literally mean that the data they manage has no structure. It means that the data is stored in containers that are much more fluid than the relational model permits. This is useful when data lacks uniformity, and it is critical when structure and content can change unpredictably and frequently, for example as a result of new legislation, new business initiatives or new policies.
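To make that contrast concrete, here is a minimal sketch in Python using plain dictionaries to stand in for schema-less documents. The “events” collection and its fields are hypothetical, and real document stores such as MongoDB or CouchDB expose their own APIs, but the underlying idea is the same: records in one collection need not share a fixed set of columns.

```python
# A hypothetical "events" collection: each record is a self-describing
# document, so new fields can appear without altering any schema.
events = [
    {"id": 1, "type": "order", "customer": "ACME", "total": 120.50},
    # A later record carries fields a relational table would need a
    # migration for, e.g. after a new compliance policy takes effect.
    {"id": 2, "type": "order", "customer": "Globex", "total": 88.00,
     "gdpr_consent": True, "consent_date": "2018-05-25"},
]

# Consumers handle the variation explicitly instead of relying on
# a fixed table definition.
for event in events:
    consent = event.get("gdpr_consent", "not recorded")
    print(event["id"], event["customer"], "consent:", consent)
```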

The NoSQL technology landscape in some ways still resembles the “wild west” of the past, with many proprietary approaches that evolved from the initial implementations by web giants like Google, Facebook and Amazon. At the same time, relevant standards have been developed by the World Wide Web Consortium (W3C). They offer a standard approach for describing rich and flexible data models and for querying the model and the data alike. Importantly, they also offer a way to uniquely identify, connect and access data across many diverse sources. This standards-based approach is becoming known as “Linked Data”, as it enables the interconnection of rich networks of data. A growing number of standards-compliant products offer an interoperable alternative to proprietary technologies, one that is much more ‘future-proof’ in terms of accommodating unanticipated changes, additions and dynamic interconnections among data sources.
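As an illustration of these standards in practice, the sketch below uses the open-source Python library rdflib to build a tiny RDF graph and query it with SPARQL, the W3C standard query language. The http://example.org/ identifiers are hypothetical placeholders; in a real Linked Data setting they would be shared, resolvable URIs, which is what lets independently managed sources refer to the same things.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

# Every resource gets a globally unique URI, so data from different
# sources can be merged simply by loading it into the same graph.
EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))  # a typed link between resources

# SPARQL queries the model and the data alike; no fixed schema has to
# exist before new kinds of statements can be added.
results = g.query(
    """
    SELECT ?name WHERE {
        ?person a foaf:Person ;
                foaf:name ?name .
    }
    """,
    initNs={"foaf": FOAF},
)
for row in results:
    print(row.name)  # prints: Alice
```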

To further understand the concepts and technologies involved in turning enterprise data into Linked Data, click here to read our partner Taxonic’s white paper. Jan Voskuil, CEO of Taxonic, describes real examples of how this approach reduces the cost of data integration and of implementing the agile, flexible systems needed to support modern enterprises.