From Stack to System: What 2025 Is Teaching Us About the Future of Data, AI, and Value Creation

September 12, 2025
3 min.
By Salma Bakouk
Co-founder and CEO at Sifflet
A reflection on 2025’s data and AI landscape, showing how the Modern Data Stack is evolving into a system of trust where metadata, observability, and AI-native products drive real business value.

With just 4 months left in the year, I like to reflect on what’s happened since January in the world of data and analytics. I usually write about trends, market shifts, or emerging categories. But this year feels different.

I’m interested in writing about the questions people are asking, the patterns I’m seeing, and how it all connects to the broader transformation underway.

What’s real? What’s broken? What’s next?

Let’s dive in.

AI Beyond the Hype: A Year of Growing Pains

I’ve made predictions in the past. In my mind, 2023 was the year we discovered Generative AI, but the real adoption curve wouldn’t kick in until 2024. That’s exactly what happened.

On paper, the numbers are impressive: McKinsey reports that 65% of companies are now using GenAI in at least one business function. But when you look closer, you see a different story. BCG found that only 26% of those companies have moved beyond proof of concept and actually generate tangible business results.

We’re still in the early innings. That’s not a bad thing — but it is important context.

One thing that did accelerate meaningfully this year was the rise of agents. GenAI stopped being just about chat and became more purposeful, from automating repetitive workflows and assisting in code review to writing copy and running queries. But with that power came complexity.

Gartner now predicts that by 2028, at least 15% of business decisions will be made autonomously by AI agents. That’s exciting and also a bit terrifying. The same research says 25% of enterprise breaches will be caused by AI agent misuse.

We’re entering the autonomous phase of AI. The next bottleneck isn’t performance or accuracy. It’s governance.

The Modern Data Stack Isn’t Dead. It’s Just Growing Up

One of the questions I still get asked most often is: Is the Modern Data Stack dead?

My honest answer? No, but it’s evolving. Or maybe mutating is the better word.

To understand what’s happening, you have to remember how it all started. The Modern Data Stack wasn’t just a group of tools. It was a philosophy: modular, cloud-native, API-driven. It gave data teams superpowers. It also gave rise to a wave of innovation, from Fivetran and dbt to Looker and Snowflake, and billions of VC dollars flowed in to fund that ecosystem.

But in the past couple of years, things have shifted. Budgets tightened. Enterprises grew tired of stitching together half a dozen tools. And most importantly, AI changed the game.

Suddenly, pipelines weren’t enough.
We needed context. Lineage. Trust. Semantic meaning.

And that’s when the Modern Data Stack started to feel a little… incomplete.

The Rise of Data Products (and the Fall of the Warehouse as Center of the Universe)

Here’s a mental model I keep coming back to:

Cloud applications were to the cloud what data products are to the modern data stack.

Just like cloud apps productized software, data products are now productizing data itself: making it ownable, governed, discoverable, observable, and above all, usable.

This is where the “data as a product” mindset goes beyond philosophy and becomes operational reality. We’re seeing it across the board (with a rough sketch in code after the list):

  • Teams tying datasets to SLAs
  • Semantic layers getting revived
  • Observability baked into pipelines
  • AI agents relying on structured metrics and lineage to reason correctly
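
To make the operational part concrete, here’s a minimal Python sketch of what a dataset-as-product contract could look like. The "revenue_daily" name, the owner team, and the six-hour freshness SLA are all made-up assumptions; the point is the shape: ownership, documentation, an explicit SLA, and known consumers.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class DataProduct:
    """A dataset treated as a product: owned, documented, bound to an SLA."""
    name: str
    owner: str                # the accountable team, not just a pipeline
    description: str
    freshness_sla: timedelta  # maximum acceptable staleness
    consumers: list[str] = field(default_factory=list)

    def freshness_ok(self, last_loaded_at: datetime) -> bool:
        """Check the freshness SLA against the dataset's last load time."""
        return datetime.now(timezone.utc) - last_loaded_at <= self.freshness_sla


# Hypothetical product: a daily revenue table with a 6-hour freshness SLA.
revenue_daily = DataProduct(
    name="revenue_daily",
    owner="finance-data",
    description="Daily recognized revenue by region, used in board reporting.",
    freshness_sla=timedelta(hours=6),
    consumers=["exec_dashboard", "forecasting_agent"],
)

# Simulate a table that last loaded eight hours ago: the SLA is breached.
last_load = datetime.now(timezone.utc) - timedelta(hours=8)
if not revenue_daily.freshness_ok(last_load):
    # In a real setup this would page the owner and flag downstream consumers.
    print(f"{revenue_daily.name} is stale; notify {revenue_daily.owner} "
          f"and consumers: {', '.join(revenue_daily.consumers)}")
```

The design choice worth noticing: the SLA and the consumer list live on the dataset itself, which is exactly what makes it ownable, governed, and observable rather than just another table.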

What started as a niche discussion on Data Mesh forums has now become the architecture underpinning modern data systems. In many ways, the data product is becoming the core unit of enterprise analytics and AI readiness.

The Shift to the Right Side of the Warehouse

The Modern Data Stack, for most of its life, catered to the left side of the data lifecycle - ingestion, transformation, orchestration. But the real action is shifting to the right: how data is consumed, interpreted, and used to make decisions.

That shift brings new expectations.
Business users no longer just want dashboards.
They want trustworthy insights, delivered in Slack, embedded in Notion, surfaced by an AI assistant, explained in context.

The old “single source of truth” doesn’t hold up in a world of decentralized architecture and real-time workflows. What’s emerging instead is a more fluid, resilient concept: a system of trust. Something that works regardless of where data lives or how it’s consumed.

That shift toward the data consumer is one of the most important cultural changes happening in enterprise data today.

Data Observability in the AI Age: From Alerting to Understanding

For the past five years, data observability has been about fixing broken pipelines. Schema changes. Null spikes. Volume drops. Important, yes - but mechanical.

Now, we need more.

We need observability that understands not just what broke, but why it matters - and to whom.

In the AI-native stack, observability isn’t just about monitoring freshness or anomalies. It’s about:

  • Surfacing downstream impact (see the sketch after this list)
  • Connecting lineage to usage
  • Enabling agents to explain and adapt
  • Helping humans trust the data they’re seeing
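
Here’s a toy Python sketch of the first two bullets: an anomaly turned into an impact-ranked alert. The lineage graph and weekly query counts are invented for illustration; the pattern, walking the lineage downstream and weighting affected assets by actual usage, is what “why it matters, and to whom” boils down to.

```python
from collections import deque

# Hypothetical lineage graph: each table maps to its direct downstream assets.
LINEAGE = {
    "raw_orders": ["stg_orders"],
    "stg_orders": ["fct_revenue"],
    "fct_revenue": ["exec_dashboard", "forecasting_agent"],
}

# Hypothetical usage signal: queries against each asset over the past week.
WEEKLY_QUERIES = {"exec_dashboard": 420, "forecasting_agent": 95, "fct_revenue": 60}


def downstream_impact(table: str) -> list[str]:
    """Breadth-first walk of the lineage graph, collecting every affected asset."""
    impacted, seen = [], set()
    queue = deque(LINEAGE.get(table, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        impacted.append(node)
        queue.extend(LINEAGE.get(node, []))
    return impacted


def describe_incident(table: str) -> str:
    """Turn 'this table broke' into 'here is who is affected, and how much'."""
    impacted = downstream_impact(table)
    busiest = max(impacted, key=lambda a: WEEKLY_QUERIES.get(a, 0), default=None)
    return (f"Anomaly on {table}: {len(impacted)} downstream assets affected; "
            f"highest-usage consumer is {busiest}.")


print(describe_incident("raw_orders"))
# -> Anomaly on raw_orders: 4 downstream assets affected;
#    highest-usage consumer is exec_dashboard.
```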

In short: observability needs to grow up.

It needs to move from noise to nuance.

From alerting to insight.

From red blinking dashboards to context-aware decision support.

As I wrote in this earlier post, data observability needs to stop thinking like infrastructure and start thinking like product.

So… Is the Stack Dead? No. But the System Is Emerging.

If the past decade was about building a modular, scalable stack, the next one will be about orchestrating a cohesive system — one that connects metadata, governance, semantic layers, observability, and AI-native interfaces.

A system that helps teams move not just data… but decisions.

A system that doesn’t just answer questions - it understands the questioner.

In that system, metadata is the new runtime. Context is currency. Trust is the platform.

Final Thoughts

We’ve spent the last 10 years building pipelines.

The next 10 will be about activating the flow - of insight, of context, of action.

The winners won’t be those who just ingest, store, or transform data better.

They’ll be the ones who deliver decision-making infrastructure that’s intelligent, explainable, and embedded in every layer of the enterprise.

That’s where the future is heading. And it’s already begun.
