Striim: Has the time come to “mesh” with our data?

When it comes to decentralization, nothing screams louder than what we are hearing about decentralized workforces! We may all be focused on a return to normalcy following months of isolation, but the future of business may never return to the familiar, traditional models of the past. Most important of all, consideration of decentralization extends to data, and with the growing support for an “edge to cloud platform as a service,” as espoused by HPE, it’s becoming clear that we cannot simply rely on data making it to the center.

Decentralized may just be another form of distributed, but there is a subtle difference. With distributed, it can be argued that distribution emanates out of a central point, whereas decentralized eliminates that central point entirely. “The amount of data created or replicated in 2020 reached 64.2 zettabytes — around 64 billion one terabyte hard drives worth of data.” Given such amounts, might the future of processing data lie in the creation of a different model for integrating data that is decentralized?
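As a quick sanity check on that figure, here is a small back-of-the-envelope calculation (a sketch only, assuming decimal SI units; the 64.2 zettabyte estimate itself comes from the industry research quoted above):

```python
# Back-of-the-envelope check: 64.2 zettabytes expressed as one-terabyte drives.
# Decimal (SI) units are assumed: 1 ZB = 10**21 bytes, 1 TB = 10**12 bytes.
ZB = 10**21
TB = 10**12

data_2020_bytes = 64.2 * ZB
one_tb_drives = data_2020_bytes / TB

print(f"{one_tb_drives:.1e}")  # ~6.4e+10, i.e. roughly 64 billion one-terabyte drives
```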

Up until now, the plan for most enterprises has always been to store data “in a centralized location, such as a data lake and data warehouse, (to be) managed by a specialized data team.” As the HPE community embraces the “age of insight” and as data continues to be created on HPE NonStop systems, perhaps the time has come for NonStop solutions to consider the benefits of participating in a broader mesh-like architecture.

If we do not want distributed networks where a central repository for data is almost always retained, should we be asking whether there might be another way? Should we indeed be asking whether we have reached the point where it’s time to “mesh” with our data? The above quotes come from a July 19, 2021 post to the Striim blog, What is Data Mesh (and Who Should Be Using It), in which Mariana Park said:

“Data mesh is one way out of this deadlock. It’s a decentralized approach to data architecture that enables companies to scale their operations faster and get more value out of data. Data mesh is especially useful for larger companies that collect, manage, and analyze huge data sets.

“Data mesh is a highly decentralized data architecture in which independent teams are responsible for managing data within their domains. Each domain or department, such as finance or supply chain, becomes a building block of an ecosystem of data products called mesh.”
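To make the “building block” idea concrete, the sketch below shows one way the concept could be modelled. It is purely illustrative Python, with hypothetical class and domain names rather than anything prescribed by Park’s post or by Striim: each domain team owns and serves its own data product, and the mesh is simply the catalogue of those products.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Illustrative only: a domain team owns its data and exposes it as a product.
@dataclass
class DataProduct:
    domain: str                      # owning domain, e.g. "finance" or "supply_chain"
    name: str
    fetch: Callable[[], List[Any]]   # the owning team decides how the data is produced

# The "mesh" is nothing more than the collection of independently owned products.
@dataclass
class DataMesh:
    products: Dict[str, DataProduct] = field(default_factory=dict)

    def register(self, product: DataProduct) -> None:
        self.products[f"{product.domain}.{product.name}"] = product

    def query(self, key: str) -> List[Any]:
        return self.products[key].fetch()

mesh = DataMesh()
mesh.register(DataProduct("finance", "invoices", lambda: [{"id": 1, "amount": 120.0}]))
mesh.register(DataProduct("supply_chain", "shipments", lambda: [{"id": "A7", "status": "in transit"}]))

# Consumers address data by domain, not by a single central warehouse.
print(mesh.query("finance.invoices"))
```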

Data mesh shouldn’t be confused with what has been called a data fabric, but as Park notes, there are fundamental differences both in terms of where data is stored and in who is ultimately responsible for the data:

“The data mesh concept may appear similar to data fabric as both architectures provide access to data across various platforms. But there are several differences between the two.

“For one, data fabric brings data to a unified location, while with data mesh, data sets are stored across multiple domains. Also, data fabric is tech-centric because it primarily focuses on technologies, such as purpose-built APIs and how they can be efficiently used to collect and distribute data. Data mesh, however, goes a step further. It not only requires teams to build data products by copying data into relevant data sets but also introduces organizational changes, including the decentralization of data ownership.”

Striim enables a decentralized data architecture by empowering teams to manage, analyze, and share data across hybrid cloud technologies.

Several companies, including Intuit, JPMorgan Chase, Zalando, and HSBC, have mentioned that they have either implemented or are experimenting with data mesh.

It is the recognition of these differences, together with the appreciation that the former models supporting vast amounts of data cannot continue to scale up to accommodate the volumes now being predicted, that throws the spotlight directly onto Striim and how it supports a decentralized, mesh data architecture:

“Striim is a data integration solution that combines real-time data ingestion, stream processing, pipeline monitoring, and real-time delivery with validation in a single product. 

“Striim makes it easy to synchronize and process data across diverse environments (on-premise, in the cloud, and across heterogeneous environments) in real-time. 

“Data mesh can help teams better organize their ever-growing volumes and types of data. This approach is about moving away from centralized data architecture and having a network of domain teams own the data and handle it as a product. And while data mesh might not be for everyone, more and more companies will consider decentralized data architecture as a way to get more value out of their data operations.” 
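To picture the ingest, process, and deliver pattern described in that passage, here is a generic Python sketch. It illustrates the pattern only and does not use Striim’s actual product APIs; the change-event feed, the enrichment step, and the delivery target are all stand-ins.

```python
from typing import Dict, Iterable, Iterator, List

def ingest(change_events: Iterable[Dict]) -> Iterator[Dict]:
    """Read change events as they arrive (for example, from a change-data-capture feed)."""
    for event in change_events:
        yield event

def process(events: Iterator[Dict]) -> Iterator[Dict]:
    """In-flight stream processing: filter and enrich each event before delivery."""
    for event in events:
        if event.get("op") == "INSERT":            # keep only inserts in this toy example
            event["amount_usd"] = round(event["amount"] * 1.1, 2)
            yield event

def deliver(events: Iterator[Dict], target: List[Dict]) -> None:
    """Hand each processed event to the downstream domain's data product."""
    for event in events:
        target.append(event)                       # stand-in for a cloud warehouse or topic

feed = [{"op": "INSERT", "amount": 100.0}, {"op": "DELETE", "amount": 50.0}]
domain_store: List[Dict] = []
deliver(process(ingest(feed)), domain_store)
print(domain_store)  # [{'op': 'INSERT', 'amount': 100.0, 'amount_usd': 110.0}]
```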

Major enterprises are already pursuing data mesh architectures and, given that Striim provides a solution that embraces NonStop, the challenge isn’t so much about overthinking this option but rather about taking the necessary baby steps today. Our data may truly be headed toward becoming a mess, but data mesh architectures provide a viable way out.

Should you have any questions about Striim’s ability to provide an entry into data mesh architectures, please don’t hesitate to reach out to us, the Striim team. We would be only too happy to hear from you, anytime and all the time.

Ferhat Hatay, Ph.D.
Sr. Director of Partnerships and Alliances, Striim, Inc.