Data Mesh in Plain English: A Finance & Telco Playbook
How telecom and banking teams create scalable data products, avoid pitfalls & drive ROI
If you're leading data or digital transformation in finance or telecom, you've probably asked yourself this:
Why is it still so hard to get the right data to the right people, at the right time?
You've invested in data lakes, warehouses, ETL tools, and dashboards. But when it comes to scaling insights across teams, enabling innovation, or keeping governance in check, things stall. Requests pile up, projects slow down, and trust in data starts to slip.
That’s where Data Mesh enters the picture. Not as a silver bullet, but as a practical shift: moving from one central team doing everything, to domain teams owning their own data products with shared standards, automation, and self-service baked in.
In this article, we break down what Data Mesh means in plain English. We’ll look at how it’s being used in finance and telco, where things go wrong, and how Intellicy’s reference design helps organisations roll it out in a way that actually works.
Whether you’re just starting to rethink your data architecture or looking for a smarter way to scale, this playbook is for you.
1. Why Data Mesh Matters in Finance & Telco
The pressure points in legacy data setups
If you’ve ever worked inside a large bank or telco, you’ve likely felt the drag of traditional data platforms. Centralised lakes and warehouses promised a single source of truth, but too often delivered the opposite: bottlenecks, data queues, and one overwhelmed IT team trying to do it all. When every request funnels through a central hub, agility takes a back seat.
This setup might work in theory, but in practice, it slows things down. Projects get delayed. Business teams get frustrated. And most importantly, the value locked in your data stays locked for far too long.
Letting domain teams take the wheel
Data Mesh flips the model. Instead of relying on one central team to serve the whole company, it gives ownership back to where it matters: the domain teams. That could be your credit risk team in a bank or the network analytics team in a telco. These teams already understand the data. They're closer to the problems, the insights, and the real-world use cases.
By letting them build and manage their own data products with the right guardrails in place, you get faster delivery, better context, and a lot less back-and-forth. It's not just about decentralising the tech. It's about putting data responsibility in the hands of the people who actually use it.
What we're seeing in the real world
In finance, companies like UniCredit are already seeing the benefits. They've moved from fragmented platforms to a unified approach powered by domain teams. By using self-service infrastructure and governance built into the platform, they’ve cut delivery time and scaled more confidently.
In telco, the need for real-time data is even sharper. Customer usage, network events, churn signals: it all changes by the minute. Data Mesh enables teams to respond faster without waiting on a centralised team that’s already stretched thin.
When your customers are moving fast, your data needs to move faster. And Data Mesh makes that possible.
2. Real-World Examples
Finance Sector: Modern banking at scale
Large banks are under constant pressure to move faster without compromising trust, compliance, or security. The challenge? Their data estates are often stitched together across multiple regions, legacy platforms, and reporting layers.
Data Mesh is helping to untangle that.
Take UniCredit, one of Europe’s major financial institutions. With 15 million customers across Italy, Germany and Eastern Europe, they were juggling disconnected data platforms: Teradata, Cloudera, Palantir, and internal lakes, all running in silos.
Instead of doing another top-down overhaul, they introduced domain-based ownership and a self-service data product platform, modelled on the Data Mesh Boost reference architecture. Business domains like credit, risk, customer channels and ESG became responsible for publishing their own data products ready for consumption, with embedded policies and quality checks.
The result?
Faster access to high-quality data for AI models, reporting, and product development
Governance applied automatically at deployment, not after the fact
Templates and automation reduced the reliance on specialist engineers in every team
Banks using Data Mesh this way are cutting delays, increasing trust in their data, and giving teams what they need to build smarter services.
Telecommunications: Silo-busting through domains
In telco, the need for fast, reliable, real-time data is even more urgent. Customer churn signals, usage events, and service metrics don’t wait for batch jobs or monthly reports. And when data lives in separate platforms (CRM, billing, network, IoT), the pain shows up in dropped calls, poor service and missed upsell opportunities.
Some telcos are starting to approach the problem differently. Instead of trying to centralise everything, they’re breaking it apart in a good way.
By shifting ownership to domain teams (like network engineering, customer service, or product analytics), telcos can produce and consume data products independently. These aren’t just API endpoints; they’re well-documented, monitored, versioned, and built to be reused across the company.
A good example of this is Microsoft’s TAP (Telco Analytics POC) Accelerator, which helps telcos stand up data mesh-ready environments using their existing Azure stack. Operators use it to:
Build real-time streaming use cases
Create customer insights dashboards with fresher, more relevant data
Monitor network events with consistent governance and performance
With this setup, domain teams can move faster, while still playing by shared rules.
For telcos, Data Mesh isn’t a buzzword. It’s a way to make the most of the data they already have without rebuilding the whole stack.
3. Common Pitfalls to Avoid
Undefined governance or inconsistent standards across domains
One of the biggest traps in early Data Mesh adoption is assuming that shifting ownership to domains means less need for governance. In reality, it means the opposite. Without shared standards, every domain builds things their own way. What looks like agility at first quickly turns into a patchwork of incompatible data products.
The result? Confusion, rework, and growing resistance from consumers who don’t know which version of the truth to trust.
What works better is a clear, enforceable governance model, ideally defined once and applied automatically through the platform. It’s less about policing and more about enabling. When teams have templates and policies baked into their workflow, consistency becomes the default.
Data quality gaps without central monitoring or contracts
When domains own their data products, quality has to travel with the product. If a dashboard breaks because the structure of a dataset changed and no one flagged it, trust erodes quickly.
This is where data contracts come in. They help set expectations between producers and consumers on what’s included, how often it updates, and how issues are handled. But contracts alone aren’t enough. They need to be supported by real monitoring and alerting.
Some platforms still leave this out, expecting manual checks or Slack messages to catch errors. That’s not sustainable.
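To make the idea concrete, here is a minimal sketch of what an automated contract check might look like. The contract fields and the code are illustrative assumptions for this article, not any particular platform's API:

```python
# Hypothetical data contract: the producer promises these fields and types.
CONTRACT = {
    "customer_id": str,
    "churn_score": float,
    "updated_at": str,  # ISO 8601 timestamp, as agreed with consumers
}

def validate_batch(rows: list[dict]) -> list[str]:
    """Return a list of contract violations; an empty list means the batch passes."""
    violations = []
    for i, row in enumerate(rows):
        for field, expected in CONTRACT.items():
            if field not in row:
                violations.append(f"row {i}: missing field '{field}'")
            elif not isinstance(row[field], expected):
                violations.append(
                    f"row {i}: '{field}' is {type(row[field]).__name__}, "
                    f"expected {expected.__name__}"
                )
    return violations

batch = [
    {"customer_id": "C-1001", "churn_score": 0.82, "updated_at": "2024-05-01T09:00:00Z"},
    {"customer_id": "C-1002", "churn_score": "high", "updated_at": "2024-05-01T09:00:00Z"},
]
for violation in validate_batch(batch):
    print(violation)  # the second row breaks the contract: churn_score is a string
```

Wired into a pipeline, a check like this fails the deployment or raises an alert before a broken dataset ever reaches a dashboard, instead of waiting for a consumer to notice.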
Talent shortages or lack of domain expertise slowing change
It’s one thing to say “let domain teams own their data.” It’s another to make it happen. Many organisations run into delays here, not because people aren’t willing, but because they’re under-resourced or don’t have the right mix of skills yet.
Central teams often get stuck filling in the gaps, which defeats the purpose of decentralisation.
That’s why platforms like Intellicy’s reference design matter. They give domain teams ready-made templates, reusable components, and automation that lowers the technical barrier. The goal isn’t to turn every analyst into a full-stack engineer; it’s to make the work manageable.
Reliance on guidelines instead of automation
Plenty of organisations document governance policies in Confluence or handbooks. But unless those policies are built into the platform, they’re rarely followed.
Human memory fades. Priorities shift. Guidelines get skipped when deadlines loom.
The better approach is computational governance. Policies are written as code and applied at deployment. That means every new data product gets checked automatically for structure, metadata, documentation, SLAs, and more. Teams can focus on what matters, and still stay compliant without adding extra friction.
This shift from written advice to automated enforcement is one of the key factors in whether Data Mesh works or fails.
4. Reference Design: Our Blueprint
If you're considering Data Mesh but don’t want to start from a blank page, that’s where our reference design comes in. Inspired by proven models like Agile Lab’s Boost platform, we've shaped a blueprint that gives organisations a real head start without forcing them to rebuild everything.
It’s built for teams that want to move fast, stay aligned, and keep complexity under control.
Here’s how it works:
Self‑serve infrastructure platform
Every domain team gets access to pre-built templates, like Lego blocks for data. Instead of writing infrastructure code from scratch, they can use these templates to spin up data products that are ready to go. The heavy lifting is handled under the hood, freeing up teams to focus on what they know best: the data itself.
This approach cuts setup time from weeks to hours. It also means fewer dependencies on central teams, so delivery doesn’t get stuck waiting for scarce DevOps or data engineers.
Declarative data product spec
We use metadata‑driven YAML specifications to define each data product. This makes everything transparent, version-controlled, and automation-friendly. Teams declare what they need (a data source, a transformation job, an output port) and the platform takes care of the rest.
It’s repeatable, auditable, and easy to evolve over time.
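To illustrate the shape of the idea (the field names below are invented for this example, not our actual schema), imagine the YAML deserialised into a plain dictionary and checked for required sections before the platform provisions anything:

```python
# Illustrative only: hypothetical field names, not a real product spec schema.
# In practice this dict would come from parsing the product's YAML file.
spec = {
    "name": "credit-risk-scores",
    "domain": "credit",
    "version": "1.2.0",
    "source": {"type": "kafka", "topic": "loan-events"},
    "transformation": {"type": "batch", "job": "score_loans"},
    "output_ports": [{"type": "table", "name": "risk_scores"}],
}

REQUIRED = ["name", "domain", "version", "source", "transformation", "output_ports"]

def missing_sections(spec: dict) -> list[str]:
    """Sections the platform would reject the product for omitting."""
    return [key for key in REQUIRED if key not in spec]

print(missing_sections(spec))  # [] means the spec is complete and can be provisioned
```

Because the spec is just structured text, it lives in version control next to the code, so every change to a data product is reviewable and auditable like any other commit.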
Federated governance as code
Rather than pushing policies after things go live, our model applies governance rules at deployment. These policies are written as code, versioned, and enforced automatically.
Think of it like guardrails: teams stay compliant without being slowed down. Whether it's security, documentation, lineage, or SLAs, each product is checked against your standards before it ever reaches a consumer.
This makes compliance consistent, without adding friction or relying on manual reviews.
Technology-agnostic provisioning via microservices
Different teams use different tools, and that’s fine. Our platform doesn’t lock anyone into a specific stack. Instead, each part of the data product (from storage to processing) is provisioned using modular microservices.
Domains can bring in new technologies when needed. Each microservice plugs into the platform cleanly, ensuring that governance and automation still apply, no matter what tech is underneath.
It’s flexibility without the chaos.
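As a rough sketch of the pattern (class and service names here are hypothetical), each technology plugs in by implementing one shared provisioning interface, so the platform's automation never needs to know what sits underneath:

```python
from typing import Protocol

class Provisioner(Protocol):
    """Contract every technology-specific provisioning microservice fulfils."""
    def provision(self, product_name: str) -> str: ...

class SnowflakeProvisioner:
    def provision(self, product_name: str) -> str:
        # A real service would call the vendor's APIs; here we just describe the action.
        return f"created schema for {product_name} in Snowflake"

class KafkaProvisioner:
    def provision(self, product_name: str) -> str:
        return f"created topic for {product_name} in Kafka"

def deploy(product_name: str, provisioners: list[Provisioner]) -> list[str]:
    """The platform runs the same automation loop regardless of the stack."""
    return [p.provision(product_name) for p in provisioners]

print(deploy("churn-signals", [SnowflakeProvisioner(), KafkaProvisioner()]))
```

Adding a new technology then means writing one more small service that honours the interface, not reworking the platform.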
Marketplace for discovery and access
Once a data product is deployed, it’s automatically published to a searchable internal marketplace. This includes documentation, data contracts, quality signals, and access controls.
Consumers, like analysts, product teams, or other domains, can browse, request access, and start using trusted data without needing to chase down the producer.
This helps increase adoption, encourages reuse, and keeps producers accountable without endless meetings.
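A toy sketch of the discovery side (product names and tags invented for the example) shows how little a consumer needs to know to find reusable data:

```python
# A toy in-memory marketplace index; a real one would sit on a metadata catalogue.
catalogue = [
    {"name": "churn-signals", "domain": "customer", "tags": ["churn", "real-time"]},
    {"name": "risk-scores", "domain": "credit", "tags": ["risk", "batch"]},
    {"name": "network-events", "domain": "network", "tags": ["real-time", "iot"]},
]

def search(tag: str) -> list[str]:
    """Names of published data products carrying a given tag."""
    return [product["name"] for product in catalogue if tag in product["tags"]]

print(search("real-time"))  # ['churn-signals', 'network-events']
```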
5. Industry Playbook: Steps to Success
Making Data Mesh work at scale doesn’t happen by accident. It takes structure, the right tooling, and a shift in how teams think about data. Here’s a practical path we’ve seen work, whether you're in finance, telco, or any large organisation with multiple data producers and consumers.
1. Map your domains
Start by understanding how your business is structured. In telco, that might mean domains like network operations, billing, customer support, and digital channels. In finance, it could be risk, credit, transactions, and product performance.
You're not just mapping systems; you’re mapping ownership. Ask: Who truly understands this data, and who’s best positioned to improve it?
2. Shift the mindset: data is a product
This step is all about accountability. Each domain team becomes responsible for delivering usable, discoverable, and trustworthy data products: not raw dumps, but packages designed to serve others.
It’s a mindset change: data isn't just collected and stored. It’s built, documented, versioned, and supported just like a software product.
3. Create templates for repeatable delivery
To help teams get moving without burning time on infrastructure, build a set of templates. These might include:
Ingestion pipelines for common sources
Transformation patterns (batch, stream, CDC)
Deployment guides for specific output types (e.g. dashboards, APIs, data lakes)
Templates remove guesswork, boost consistency, and make it easier to onboard new teams.
4. Automate governance with policy‑as‑code
Instead of relying on checklists or audits, shift your governance into the platform itself. Write your policies in code (for example, using something like QLang) and apply them during provisioning.
This lets you enforce:
Naming conventions
Metadata completeness
Quality thresholds
Data contracts
Access controls
All without slowing things down.
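As a sketch of the idea, with invented policy rules (a dedicated policy language like QLang or Rego would express them differently), a provisioning gate might look like:

```python
import re

# Invented example policies; real ones would live in versioned policy files.
def check_naming(spec: dict) -> bool:
    """Product names must be lowercase kebab-case."""
    return re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", spec.get("name", "")) is not None

def check_metadata(spec: dict) -> bool:
    """Owner and description are mandatory metadata."""
    return all(spec.get(field) for field in ("owner", "description"))

POLICIES = {"naming-convention": check_naming, "metadata-completeness": check_metadata}

def deployment_gate(spec: dict) -> list[str]:
    """Policies that fail; an empty list means the product may deploy."""
    return [name for name, check in POLICIES.items() if not check(spec)]

candidate_spec = {"name": "Churn_Signals", "owner": "customer-domain"}
print(deployment_gate(candidate_spec))  # fails both: bad name, missing description
```

The point is that the gate runs the same way for every domain, on every deployment, so compliance no longer depends on someone remembering a checklist.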
5. Launch a data product marketplace
Once teams start publishing data products, make them easy to find. A marketplace becomes the front door where consumers can search, preview, request access, and understand what’s available across the business.
It’s not just a catalogue. It builds trust, increases reuse, and helps teams avoid reinventing the wheel.
6. Iterate and improve
Don’t treat this like a one-and-done transformation. The most successful implementations treat Data Mesh as a living system. After your first few domains are up and running:
Collect feedback
Tune your templates
Adjust your policies
Add new microservices
Scale to new business areas
Over time, you build a culture where data products are normal, not special. Where teams expect quality and get it. And where insights flow faster because the pipes are finally working the way they should.
6. What's the Gap You're Closing?
Reading about architecture and governance is one thing. Seeing the impact on your teams, timelines, and bottom line is what drives change.
If you’re still:
Waiting weeks for new data products to go live
Rewriting the same pipelines across different teams
Fighting for consistency across reports, systems, and APIs
You’re not alone. These are signs the current model has reached its limit.
The good news? The shift doesn’t have to be big-bang or risky. I help organisations close this gap with a tested reference design, automation built in, and a gradual rollout model that scales as you go.
Want to explore how it could work in your business? Let’s talk.
7. Key Takeaways
Decentralised ownership is more than a technical shift; it’s a cultural one. When domain teams take full responsibility for their data products, trust grows, and delivery becomes faster and more aligned with business needs. This model empowers the people who understand the data best.
Consistency comes from structure, not from guesswork. Declarative templates and policy‑as‑code create repeatable standards that don’t rely on memory or manual checks. It’s the difference between hoping things are done right and knowing they are.
A self-service data platform is the foundation that makes Data Mesh workable. Without it, decentralisation can easily spiral into chaos. With it, teams can move quickly while staying in sync.
Start small, scale smart. Rolling out Data Mesh across a large finance or telco organisation is a journey. Begin with a few key domains, gather feedback, and evolve your templates and policies as you go. This structured onboarding approach lowers risk and builds confidence.
Finally, Intellicy brings it all together. Our Data Mesh playbook and reference design, shaped by real-world finance and telco examples, helps organisations reduce time-to-value, avoid common pitfalls, and create data products that deliver measurable results.
Wrap-up
Data Mesh doesn’t have to feel out of reach. With the right structure in place, finance and telecom teams can adopt it confidently and get real outcomes from it.
You don’t need to reinvent your architecture or start from zero. What makes the difference is putting domain ownership into practice, using self-serve platforms that reduce friction, and applying federated governance through policy‑as‑code. When those pieces come together, your teams can deliver better data products faster without the delays and confusion of centralised models.
Whether you're dealing with telco architecture challenges or shaping a finance data mesh strategy, the path forward doesn’t have to be slow or risky. The key is to start with a proven reference design and adapt it to your business, one domain at a time.
I’d love to walk you through it. No jargon. Just real, practical steps to help your team get started with Data Mesh confidently and at your own pace.
Want a Second Set of Eyes on Your Data Stack?
Before you commit to new tools or restructure your stack, get a second set of eyes. I offer a no-obligation architecture review to help you spot hidden costs, close security gaps, and line up your platform with your business goals.
It’s a practical step that can save months of trial and error.