
The Mesh: Data as a Product

Nedim Skalonjic
Chief Technology Officer

ChatGPT's arrival over a year ago sent shockwaves through the tech landscape like nothing we have seen before, and the hype around advanced data and generative AI has dominated industry chatter ever since.

Despite the talk of this revolution, the truly transformative strides are yet to take place. This leads to the bigger question: How do we turn these grand ideas into actionable reality?

In the first part of my blog series, “The need for speed”, I covered streaming data and why it is a vital cog in unlocking the true power of generative AI. But streaming data is only one part of the equation. Highly trusted, easily discoverable, consumable data is the other.

Introducing “The Mesh”

The concept of the “data mesh”, first introduced by Zhamak Dehghani, challenges the age-old assumption that centralisation is the sole path to effective data management. Drawing inspiration from domain-driven design, it introduces a paradigm in which multi-disciplinary, domain-oriented, decentralised teams take ownership of their data and treat it as a product.

At its core, it involves shifting the traditional view of data as a mere byproduct of business operations to recognising it as a valuable and self-contained asset with its own lifecycle, ownership, and consumers.

This recognition obliges domain data producers to cater for the needs of their data consumers, some of whom sit outside the original domain and outside the systems in which the data was intended to be held or used, and to publish clean, trusted data in the shape those consumers require.
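To make this concrete, here is a minimal, purely illustrative sketch in Python of what a published data product contract might declare: an owning domain, a versioned schema, a freshness guarantee, published quality checks, and the consumers it serves. The class, fields, and values are hypothetical assumptions for illustration, not part of any specific framework.

    # Hypothetical sketch of a data product contract: ownership, lifecycle
    # guarantees, and known consumers are declared alongside the data itself.
    from dataclasses import dataclass, field

    @dataclass
    class DataProduct:
        name: str                 # discoverable identifier in the mesh
        owner_domain: str         # the domain team accountable for the data
        schema_version: str       # the contract the producer commits to
        freshness_sla: str        # e.g. how quickly source changes are reflected
        quality_checks: list[str] = field(default_factory=list)  # published, automated checks
        consumers: list[str] = field(default_factory=list)       # known downstream domains

    instrument_reference = DataProduct(
        name="reference-data.instruments",
        owner_domain="reference-data",
        schema_version="2.1.0",
        freshness_sla="updated within 5 minutes of the source change",
        quality_checks=["ISIN format is valid", "no duplicate identifiers"],
        consumers=["trade-management", "accounting", "esg"],
    )

    print(instrument_reference.owner_domain, instrument_reference.consumers)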

Treating data as a product

Treating data as a product is a strategic change in mindset which places emphasis on the importance of data as an asset that requires careful curation, responsibility, and consideration of the needs of its consumers.

And as with any product, the key to success is understanding the customer and their needs, and ultimately being able to cater for them.

As an example, let’s consider the reference data domain within a typical asset manager and how three other distinct domains rely on this data.

Trade management. The systems in this domain need reference data to enrich trade information for the purposes of confirmation, affirmation, and settlement.

Accounting. The systems in this domain need reference data to enrich or trigger transactions (meaning anything that affects positions or cash, such as a deposit, a coupon payment, or an execution in the market) and, ideally, enable real-time Book of Record (IBOR, ABOR, PBOR, etc.) views.

ESG. The systems in this domain may combine reference data with ESG data to calculate an overall ESG score for investment portfolios.

In all three examples, reference data is combined with domain-specific data to create something new and to generate further insights, aiding decision-making and reducing time to conviction. This new, combined data may itself be published by that domain to serve other domains’ data needs downstream.

It is important to note that the needs of these three domains may differ. At the very least, the needs of the ESG domain, traditionally served by OLAP (Online Analytical Processing) systems, will differ from those of the OLTP (Online Transaction Processing) systems found in trade management and accounting. The difference here is that the data is served to each of those systems in a way they can easily consume and immediately derive value from. The data, in other words, is served as a product.
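As a rough illustration of that last point, the sketch below exposes the same reference data product through two consumer-specific output ports: a keyed, OLTP-style lookup suited to trade and transaction enrichment, and a bulk, OLAP-style extract suited to portfolio-level analysis such as ESG scoring. The data, function names, and port shapes are assumptions for illustration only.

    # Hypothetical sketch: one data product, two consumer-shaped output ports.
    from typing import Iterable, Optional

    # The published reference data, illustrated here as an in-memory table.
    INSTRUMENTS = [
        {"isin": "GB00B03MLX29", "name": "Shell plc", "sector": "Energy", "currency": "GBP"},
        {"isin": "US0378331005", "name": "Apple Inc", "sector": "Technology", "currency": "USD"},
    ]

    def lookup_port(isin: str) -> Optional[dict]:
        # OLTP-shaped port: low-latency, single-record lookup for enrichment.
        return next((row for row in INSTRUMENTS if row["isin"] == isin), None)

    def analytics_port(sectors: Iterable[str]) -> list[dict]:
        # OLAP-shaped port: bulk, filtered extract for portfolio-level analysis.
        wanted = set(sectors)
        return [row for row in INSTRUMENTS if row["sector"] in wanted]

    print(lookup_port("GB00B03MLX29"))               # enrich a single trade
    print(analytics_port(["Energy", "Technology"]))  # feed a portfolio-level scoring job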

This approach is especially relevant for enterprises engaged in trade management and execution, where buy-side institutions, facing heightened regulatory scrutiny, find immense value in swiftly discovering, extracting, processing, and disseminating data for analysis and actionable insight.

The challenge

While most in the industry see the benefit of leveraging the concept of the data mesh and treating data as a product, and many even have some of the basic component parts in place, adoption remains elusive.

One big reason, and a significant challenge, is outdated architectures and technology stacks, which are unable to scale or adapt due to their monolithic, non-modular design. Another is a reliance on tribal knowledge that may no longer align with contemporary data practices.

Finally, the prevalence of fragmented data silos exacerbates the issue, impeding decision-making processes and limiting knowledge sharing across the organisation.

To truly adopt the data mesh, several significant shifts are required: from legacy systems to scalable cloud infrastructure, and from centralised data management to controlled data democratisation.

Conclusion

While the data mesh requires cultural and organisational changes, it must be underpinned and empowered by technology. The only way to data utopia is through investment in a cloud-native, self-service data platform that enables users across the organisation to own, publish, and discover trustworthy data in real time.

At HUB, we have focused heavily on data from the point of ingestion all the way to maximising its value. By embedding data mesh principles in our platform, we serve data in a fast, reliable, and decentralised way to a broad range of consumers and, in doing so, empower enterprises.

Having trustworthy, easily discoverable, and readily available data means we are ready to truly start realising the power of generative AI.

