Data Mesh and Event-Driven Architecture: Unleashing Healthcare Data Potential
Harness the power of data mesh and event-driven architecture. Streamline healthcare data management, drive innovation, and unlock lasting growth.
Data mesh, an innovative approach to data architecture, has become a hot topic in the world of data management.
In this interview with Michael Miller, Egen’s VP of Engineering, led by Lia Parisyan-Schmidt, Head of Content Marketing, we explore Michael’s perspective on data mesh in healthcare and how companies can gradually adopt the concept.
We delve into the importance of culture change in adopting data mesh along with the benefits of an event-driven architecture compared to traditional methods.
We also examine how the data mesh approach can help businesses better align with their goals, improve efficiency, and drive long-term growth.
What is your perspective on data mesh in healthcare, and if a company is resistant to that approach, how can they gradually adopt the data mesh concept?
I believe that data mesh will become the standard approach in the next five years. However, convincing clients to adopt it can be challenging, especially when they are entrenched in traditional methods.
One way to gradually introduce data mesh is through the Strangler Pattern.
This involves building a purpose-driven pipeline for one product with a cross-functional team. By starting with a small project and demonstrating its benefits, the client may become more open to the idea of data mesh.
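As a minimal sketch of the Strangler Pattern described above, a routing facade can send one migrated data product through a new purpose-built pipeline while the legacy pipeline keeps serving everything else. All names here (the product name, `process_legacy`, `process_purpose_driven`) are illustrative assumptions, not part of any real system:

```python
# Strangler Pattern sketch: migrate one data product at a time behind a
# routing facade, leaving the legacy pipeline in place for the rest.

def process_legacy(record):
    # Existing centralized pipeline (unchanged).
    return {"source": "legacy", **record}

def process_purpose_driven(record):
    # New pipeline owned by a cross-functional team for one product.
    return {"source": "mesh", **record}

# Start with a single product; grow this set as teams migrate more products.
MIGRATED_PRODUCTS = {"patient-readmissions"}

def route(record):
    # The "strangler" facade: migrated products take the new path,
    # everything else continues through the legacy path.
    if record["product"] in MIGRATED_PRODUCTS:
        return process_purpose_driven(record)
    return process_legacy(record)
```

As more products prove out, they are added to the migrated set until the legacy path can be retired.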
How can companies address the culture change needed to adopt data mesh?
Culture change is a challenge for many companies.
What we need to do is provide better training on how to execute culture change and show the benefits of data mesh.
As experts in the transformation process, we guide clients through the transition and demonstrate the advantages of cross-functional teams and a more decentralized approach.
Can you share some of the top problems you’ve identified with traditional data architecture?
The main issues I’ve observed are the overstretch of centralized teams and poor data platform design. Centralized teams often struggle to translate data and satisfy the requirements of both operational and analytics sides. Traditional centralized data architectures often result in suboptimal solutions; data mesh, by contrast, can offer a more flexible and efficient approach to managing data.
I understand that in your previous role, you hadn't fully implemented data mesh, and yet the event-driven part was beneficial. Can you share some insights on event-driven architecture and how it contributed to your success?
Event-driven architecture allowed us to consolidate both real-time and batch processing into a single pipeline, simplifying maintenance and improving efficiency.
By transforming batch data into events and using a centralized event chain, we were able to manage data more effectively. Companies like Uber have successfully adopted similar approaches.
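The idea of transforming batch data into events can be sketched as follows: batch rows are converted into the same event shape that real-time producers emit, so a single handler serves both paths. The event type and field names are hypothetical, chosen only for illustration:

```python
# Sketch of unifying batch and real-time processing in one pipeline:
# batch rows become the same events that streaming sources produce.

def to_event(row, source):
    # Wrap a raw record in a common event envelope.
    return {"type": "lab_result_recorded", "source": source, "payload": row}

def handle(event, store):
    # One processing path for both batch and streaming input.
    store.append(event["payload"]["value"])

store = []
# Real-time: events arrive one at a time from a streaming source.
handle(to_event({"value": 7}, source="stream"), store)
# Batch: a nightly file becomes a sequence of the same events.
for row in [{"value": 1}, {"value": 2}]:
    handle(to_event(row, source="batch"), store)
```

Because both inputs converge on one handler, there is only one pipeline to maintain rather than separate batch and streaming codebases.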
How does the data mesh approach align with business goals, and in what way does it differ from traditional data architecture?
Data mesh focuses on product thinking, which involves starting with a clear understanding of the desired data product or dashboard before building purpose-driven pipelines.
This makes data mesh more business-centric than traditional approaches, often involving sequential and siloed processes. By moving towards decentralized data products and cross-functional teams, companies can better align their technology with business objectives.
Can you elaborate on the advantages of moving from a traditional centralized approach to a more decentralized, data product-focused approach with domains and cross-functional teams?
The decentralized approach of data mesh enables companies to break down data silos and streamline their processes.
By creating domains for data products and assembling cross-functional teams, organizations can foster better collaboration and align their data initiatives more closely with business goals.
This shift from a monolithic, centralized system to a more agile, domain-driven design improves overall efficiency and adaptability.
How does the data mesh architecture benefit the development and usage of data products compared to the traditional approach?
Data mesh, along with event-driven architecture, allows data products to emerge more naturally.
It’s more experience-centric, and it requires businesses to have a clear idea of what they want to achieve before starting any development work.
Instead of just dumping all available data into a “swamp” and expecting analytics to make sense of it, this approach is purpose-driven and more user-oriented.
How does data mesh aim to improve upon the previous attempts at decentralization in data analytics?
Historically, data analytics was decentralized with data marts tailored to specific problems but lacking standardization.
Data mesh seeks to reintroduce decentralization with added standards and a focus on creating data products with a consistent approach.
This helps prevent the development of non-standard, isolated solutions and improves overall organization and accessibility of data across the company.
How does the operational system in a data mesh with event-driven architecture contribute to the system’s overall flexibility?
The operational system is responsible for processing data and emitting events into the Message Broker. This creates chains of events that are replayable and accessible to any system or data product that subscribes to them.
The flexibility comes from the ability to build multiple data products or systems based on these events without any hard-coded or inflexible pipelines. This also allows operational systems to pick up on events emitted by other data products or systems, enhancing collaboration and data exchange.
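The replayable event chains described above can be illustrated with a minimal in-memory broker sketch: an operational system appends events to a topic, any subscriber receives live events, and a newly built data product can replay history before going live. This is a conceptual toy, not a real broker implementation:

```python
# Minimal in-memory sketch of a broker with replayable event chains.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.log = defaultdict(list)         # topic -> append-only event log
        self.subscribers = defaultdict(list)

    def publish(self, topic, event):
        # Operational systems emit events; the log retains them for replay.
        self.log[topic].append(event)
        for callback in self.subscribers[topic]:
            callback(event)

    def subscribe(self, topic, callback, replay=False):
        # replay=True lets a newly built data product consume past events
        # before receiving live ones.
        if replay:
            for event in self.log[topic]:
                callback(event)
        self.subscribers[topic].append(callback)

broker = Broker()
broker.publish("admissions", {"patient": "a1"})  # emitted before subscription

seen = []
broker.subscribe("admissions", seen.append, replay=True)  # catches history
broker.publish("admissions", {"patient": "b2"})           # and live events
```

In a production system the same roles would typically be played by a durable log such as Kafka, where topics retain events and consumers choose their starting offset.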
How does the architecture contribute to establishing a single source of truth in a business environment?
The approach captures and organizes important business events as event chains. These event chains, essentially functioning as a single source of truth, are accessible to any system or data product within the organization.
They represent important business events in a format that is easily understandable by both the development and business teams. This helps create a more accurate and cohesive understanding of business processes and data across the organization.
So, how does this data mesh structure save on cost? Is it more efficient? How can we quantify its efficiency and value from a business perspective?
It’s difficult to quantify savings, but data mesh makes businesses more flexible and innovative. By having a list of event chains, businesses can replay and adapt easily. This improves their speed, agility, and time to market, which are important in this data-driven age.
Would you say that once you have the infrastructure in place, you can go faster and save on costs?
Yes, once the infrastructure is in place, you can get things done quicker, which saves costs. For example, you don’t have to talk to five different teams if you need to change something. You have one cross-functional team that can make the changes more efficiently.
Would you agree that the data mesh approach helps to nurture the next generation of talent and helps businesses prepare for the future?
Yes, data mesh is a more efficient and innovative approach. It encourages businesses to have more clarity in their vision, which enables them to work more effectively with tech teams. This creates a common language between business and development, making collaboration easier for both sides.
Can you explain how this ecosystem of data products feeds off each other in a data mesh?
Certainly, with a data mesh, you build specific pipelines for particular data products. Each data product can feed off events from other data products. This creates an interconnected ecosystem that allows businesses to make the most of their data without needing a monolithic data warehouse.
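A toy sketch of this ecosystem: one data product emits events, a second product derives its own events from them, and a third builds a dashboard purely from the second product's output. Product names, thresholds, and fields are all hypothetical:

```python
# Sketch of data products feeding off each other's events, with no
# shared monolithic warehouse in between.

# Product 1: a claims product emits raw claim events.
raw_claims = [{"id": 1, "amount": 120}, {"id": 2, "amount": 9000}]

# Product 2: an anomaly product consumes those events and emits its own
# derived events for high-value claims.
flagged = [
    {"claim_id": c["id"], "flag": "high"}
    for c in raw_claims
    if c["amount"] > 1000
]

# Product 3: a dashboard product built purely from Product 2's events.
dashboard = {"high_value_claims": len(flagged)}
```

Each product owns its own pipeline and consumes only the event streams it needs, which is what lets the ecosystem grow without a central bottleneck.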
Can you give a brief overview of how data mesh and event-driven architectures can benefit businesses, especially those struggling with their current data structures?
Data mesh and event-driven architectures provide a decentralized approach to data management, allowing teams to work independently and more efficiently.
Businesses can easily connect the dots between different data sources and improve their decision-making processes by creating well-defined events that can be replayed anytime. This approach also allows for better scalability and adaptability with changing business needs.
Can you give examples of large companies implementing data mesh and event-driven architectures, and how can smaller businesses learn from their experiences?
Companies like Netflix and Uber have successfully implemented these strategies, showcasing their ability to handle large amounts of data and remain agile in a rapidly evolving landscape.
Smaller businesses can learn from these examples by adopting similar principles and focusing on building out their domains in a more decentralized manner. This can help streamline their data processing capabilities and drive innovation in their own products and services.
What key decision-makers need to be convinced of the benefits of data mesh and event-driven architectures, and how can these ideas be made more appealing to them?
Typically, CTOs and other leadership figures are the ones who need to be convinced of the benefits of these approaches.
To make data mesh and event-driven architectures more appealing, it’s important to emphasize the potential cost savings, increased efficiency, and accelerated time to market for data products that can result from implementing these strategies.
Demonstrating the success of larger companies and offering proof-of-concept projects can also help gain their support.
How does the implementation of data mesh and event-driven architectures align with the growing interest in machine learning operations (MLOps)?
MLOps is focused on the efficient training and deployment of machine learning models, which often rely on well-structured data in tables or files.
While data mesh and event-driven architectures operate at an earlier stage in the data pipeline, they can still indirectly support MLOps by providing a more efficient and organized data environment.
However, we should recognize that transforming the industry to focus more on events rather than records and tables is an ongoing challenge.
Given the success of data mesh and event-driven architectures in larger companies, how can businesses differentiate themselves when implementing these strategies, and how can they leverage them for long-term growth?
To differentiate themselves, businesses need to focus on delivering a compelling narrative around data mesh and event-driven architectures, highlighting their benefits and showcasing their own expertise in the field.
By committing to a more innovative and agile approach to data management, businesses can attract larger clients and position themselves as thought leaders in the industry.
Emphasizing cost savings, efficiency improvements, and faster product development can allow businesses to establish themselves as pioneers in the field and drive long-term growth.
How does the data mesh concept impact multi-cloud environments?
Event-driven architecture, built on platforms like Kafka, allows for flexibility across multiple clouds. You can deploy Kafka on any cloud, making it easier to migrate data between environments.
An event-driven approach makes it possible to have a multi-cloud environment with data products in different clouds, all connected through events.
Could the ability to move data across instances without changing its structure or content be a selling point for the data mesh approach?
Yes, that’s a great idea. It can be a strong argument for the benefits of the data mesh concept, especially with the increasing popularity of multi-cloud environments.
Can you elaborate on how this event-driven approach can be used to share data between companies?
With events, we can replace traditional file interface agreements (FIAs) and simply share relevant events with clients.
By using an external-facing Kafka broker, companies can subscribe to events and receive data in real time, making the sharing process more efficient and secure.
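A minimal sketch of this sharing model: internal events are filtered down to those relevant to the client and redacted of internal-only fields before being published to an external-facing topic. The field names and filtering rule are illustrative assumptions:

```python
# Sketch of sharing data externally via events instead of file
# interface agreements: filter and redact before publishing.

def redact(event):
    # Strip internal-only fields before the event leaves the network.
    return {k: v for k, v in event.items() if k not in {"internal_id", "ssn"}}

internal_events = [
    {"order": 1, "status": "shipped", "internal_id": "x9", "ssn": "000"},
    {"order": 2, "status": "pending", "internal_id": "y3", "ssn": "111"},
]

# Only completed orders are relevant to this hypothetical client; these
# events would be published to an external-facing broker topic.
external_feed = [redact(e) for e in internal_events if e["status"] == "shipped"]
```

The client subscribes to the external topic and receives each event in real time, rather than waiting for a scheduled file drop.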
Can you quantify the cost savings of using event-driven architecture for external consumption?
I don’t have exact numbers, but event-driven architecture can save considerable time and resources, particularly compared to managing file interface agreement (FIA) related issues. It makes sharing data outside the network much easier and more flexible.
Can you build events on top of other events to create more meaningful data products for customers?
Yes, absolutely. You can combine multiple events to create a data product that makes more sense for the customer. This approach is incredibly flexible and allows for a wide range of customization.
[Interview has been edited for length and clarity]
Embracing data mesh and event-driven architectures can significantly benefit businesses struggling with their current data structures.
By adopting a more decentralized and innovative approach to data management, companies can streamline their data processing capabilities, align more closely with business objectives, and remain agile despite frequent and rapid changes.
Decision-makers must understand the potential cost savings, increased efficiency, and accelerated time to market that can result from implementing these strategies.
By learning from the successes of larger companies and focusing on building a compelling narrative around the advantages of data mesh, businesses can position themselves for long-term growth and stay ahead in the competitive landscape.
Ready to explore the potential of data mesh and event-driven architectures for your business?
Learn more about how these innovative strategies can help you streamline your data management processes, enhance efficiency, and drive long-term growth.
Contact us to schedule a consultation and start your journey toward a more agile and data-driven future.