Enterprise spending on Databricks, Snowflake, and cloud data platforms grows 20-25% every year - and that's before AI agents enter the picture.

One AI agent running on Claude, GPT, or Codex consumes 50x the tokens of a human user, turning LLM token costs into the fastest-growing line item in your data stack. Two cost curves. Compounding fast. Zero governance.

In this session, Strategy reveals the dual savings framework behind 500+ Mosaic customers: a 30%+ reduction in cloud compute costs and 40-70% lower LLM token consumption - all powered by one architectural decision: an enterprise semantic layer.

Whether you're managing Databricks or Snowflake costs, scaling agentic AI workloads across OpenAI, Anthropic, or open-source models, or trying to rein in runaway cloud consumption, you'll leave with a 3-year savings model you can take straight to your CFO.

What You'll Learn:

  • Why one AI agent consumes as many tokens as 50 human users — and why most teams have no visibility into that cost
  • A proven framework to cut 30%+ from Databricks and Snowflake compute and 40-70% from LLM token spend across Claude, GPT, Codex, and other models
  • How to build a defensible 3-year savings model covering both cloud and AI costs — ready for your CFO

What We'll Cover:

  • The two cost curves compounding against you: cloud data platform spend growing 20-25% annually and agentic AI workloads doubling year over year
  • How a universal semantic layer eliminates redundant compute, governs what your LLMs see, and bends both cost curves down from one architectural decision
  • Real benchmarks from 500+ customer deployments and a step-by-step walkthrough to model your own savings


Speakers

Asim Lilani

Vice President of Value Engineering

Strategy