3 Comments
Paul

Hi Egor,

Very interesting approach. I was wondering why cube.js is needed, and if there could be a way to interface dbt metrics directly to AI and BI tools. I guess this approach is to keep everything open source and avoid relying on the dbt Cloud Semantic Layer? Taking into account the new OSI standard (basically dbt MetricFlow at the moment), do you think skipping cube.js is feasible?

Thanks!

Egor Tarasenko

You’re right that, especially with OSI / MetricFlow, going more directly from dbt -> consumers is becoming more realistic. Standardized metric definitions are a big step forward.

The reason we still like having a layer like Cube in between is mostly architectural:

We see a difference between:

dbt = defining transformations & metrics

a semantic service = serving those metrics to many consumers

If every BI tool, app, or AI workflow reads and interprets dbt artifacts on its own, you’re back to multiple integrations and slightly different interpretations of the same logic. That’s how semantic drift creeps in again, just at a different layer.

Putting a serving layer in the middle means that metrics are exposed through one consistent interface, downstream tools query the same definitions in the same way, and you don’t have to build and maintain a custom “dbt adapter” per tool.

So it’s less about “must use Cube” and more about the pattern:

Define metrics once (dbt) -> serve them through one semantic endpoint -> many consumers
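To make that concrete, here's a minimal sketch of what "many consumers, one endpoint" looks like in practice. The endpoint URL, cube name (`orders`), and measure/dimension names are hypothetical placeholders, assuming a Cube-style REST `/v1/load` endpoint; the point is that every consumer builds the same query shape against the same service rather than parsing dbt artifacts itself:

```python
import json
from urllib.parse import urlencode

# Hypothetical deployment URL -- adjust to your own semantic service.
CUBE_URL = "http://localhost:4000/cubejs-api/v1/load"

def build_query(measures, dimensions=()):
    """Build the JSON query body that every consumer sends to the same endpoint."""
    return {"measures": list(measures), "dimensions": list(dimensions)}

# A BI tool, an internal app, and an AI agent all issue the same query shape,
# so a metric like "orders.revenue" is resolved by one definition, in one place:
query = build_query(["orders.revenue"], ["orders.status"])
request_url = f"{CUBE_URL}?{urlencode({'query': json.dumps(query)})}"
```

Each consumer then just needs a generic HTTP/SQL client, not its own interpretation of the metric logic.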

Could you skip Cube in simpler setups? Definitely.

At larger scale, though, having a dedicated serving layer tends to be cleaner than turning dbt itself into the integration point for BI tools, apps, and AI.

Comment removed (Nov 24)
Egor Tarasenko

The goal is simple: define metrics once upstream and stop re-implementing them per BI tool. If this reduces cross-tool drift even a bit, it’s already worth it.