How to Generate DAX Measures with AI (Without Exposing Your Data)
March 7, 2026
By Tony Thomas
TL;DR: AI can write DAX measures for you, but most approaches require pasting raw data or connecting directly to your model. For most organizations, that's a non-starter. The alternative: feed AI the structure of your semantic model — table names, column names, data types, and relationships — without any actual data. It gets enough context to write correct, schema-aware DAX while your data stays where it belongs. Here's how that works, why it produces better results than generic prompting, and how Draft BI's Model Studio puts it into practice.
The DAX Productivity Problem
Writing DAX measures is one of the slowest parts of Power BI development. Not because DAX is inherently difficult (though it can be), but because the feedback loop is punishing.
You write a measure. Test it. The numbers look wrong. You check the filter context, realize the relationship between two tables is filtering in a direction you didn't expect. Rewrite. Test again. Numbers are right for one visual but wrong when a slicer is applied. Add CALCULATE. Test again.
Experienced DAX developers can grind through this cycle. Business analysts, report builders, and BI consultants who write DAX occasionally rather than daily lose serious time to it. And even experienced developers lose time on boilerplate. Writing the fifteenth variation of a year-over-year growth measure isn't intellectually challenging, but it still takes twenty minutes once you account for testing.
AI should help here. It does — with a caveat.
The Problem with Generic AI Prompting
The most common approach to AI-generated DAX today is pasting a prompt into ChatGPT, Claude, or Copilot:
"Write a DAX measure that calculates year-over-year revenue growth."
The AI produces something like:
```dax
YoY Revenue Growth =
VAR CurrentRevenue = SUM( Sales[Revenue] )
VAR PriorRevenue =
    CALCULATE(
        SUM( Sales[Revenue] ),
        SAMEPERIODLASTYEAR( 'Date'[Date] )
    )
RETURN
    DIVIDE( CurrentRevenue - PriorRevenue, PriorRevenue )
```
Looks correct. It might even be correct — if your model happens to have a table called Sales with a column called Revenue, and a date table called Date with a column called Date.
But your model probably doesn't use those exact names. Your revenue column might be FactSales[TotalAmount]. Your date table might be DimDate[FullDate]. The AI doesn't know this because you didn't tell it.
So you copy the measure, paste it into Power BI Desktop, and get an error. Fix the table reference. Another error. Fix the column name. The measure runs but returns blank because the relationship between your fact table and date table uses a different key column than the AI assumed.
Generic prompting produces plausible DAX, not correct DAX. The gap between the two is where your time goes.
Why Data Exposure Is a Non-Starter
The obvious fix: give the AI more context. Some developers paste sample data rows alongside their prompt. Others connect AI tools directly to a live Power BI model, granting read access to the underlying data.
Both work technically. Both create a data governance problem.
Most enterprise Power BI models contain sensitive information — revenue figures, employee compensation, customer records, healthcare data, financial forecasts. Pasting sample rows into a public AI chat means that data now lives in the AI provider's training pipeline (unless you're on an enterprise plan with data retention guarantees). Connecting a tool directly to your model means granting API access to every table, including the ones you'd never expose in a report.
For organizations subject to GDPR, HIPAA, SOC 2, or internal data classification policies, this isn't a gray area. Raw data can't leave the controlled environment. Full stop.
That leaves you stuck: the AI needs context to produce accurate DAX, but providing that context through raw data is unacceptable for most professional use cases.
The Schema-Only Approach
Your semantic model has two layers: the structure (what tables exist, what columns they contain, what data types they use, how tables relate to each other) and the data (the actual values stored in those columns).
AI needs the structure. It doesn't need the data.
Think about what's actually required to write a correct year-over-year revenue measure:
- The fact table is called FactSales, not Sales
- The revenue column is TotalAmount, not Revenue
- The date table is DimDate and the date column is FullDate
- A relationship exists between FactSales and DimDate
- TotalAmount is a decimal type and FullDate is a dateTime type
None of that requires a single row of actual data. Table names, column names, data types, and relationships are metadata. They describe the shape of your data, not its contents.
This metadata already exists in a standardized format: TMDL (Tabular Model Definition Language). Every Power BI semantic model can be exported as TMDL through Tabular Editor, Fabric Git integration, or Power BI Desktop's PBIP project format. A TMDL excerpt looks like this:
```tmdl
table FactSales

    column TotalAmount
        dataType: decimal
        formatString: $ #,##0.00

    column OrderDate
        dataType: dateTime

    measure 'Total Revenue' = SUM( 'FactSales'[TotalAmount] )
        formatString: $ #,##0.00
```
Give an AI this schema — and only this schema — and it knows the exact vocabulary of your model. Every table reference, column reference, and data type in the generated DAX matches what's actually in Power BI. No hallucinated field names. No guessing at relationships.
And critically: no revenue figures, no customer names, no transaction dates. The schema tells the AI what kind of thing is in each column, not what values are in it.
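With that schema loaded, the same year-over-year request from earlier produces DAX in the model's real vocabulary. An illustrative result, assuming the FactSales[TotalAmount] and DimDate[FullDate] names described above (the exact output depends on your model):

```dax
YoY Revenue Growth =
VAR CurrentRevenue = SUM( 'FactSales'[TotalAmount] )
VAR PriorRevenue =
    CALCULATE(
        SUM( 'FactSales'[TotalAmount] ),
        SAMEPERIODLASTYEAR( 'DimDate'[FullDate] )
    )
RETURN
    DIVIDE( CurrentRevenue - PriorRevenue, PriorRevenue )
```

Same shape as the generic version, but every reference resolves on the first paste.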
How Model Studio Implements This
Draft BI's Model Studio is built entirely around this schema-only principle. The workflow has four steps:
1. Load your TMDL schema. Paste your TMDL text or upload it from a saved model. Model Studio parses it into an interactive schema tree — tables, columns (with data types), existing measures, and relationships. You can browse your entire model structure in the left panel, much like Power BI Desktop's model view.
2. Select context. Click the tables and columns relevant to the measure you want to build. If you have a 200-column model but only need a measure that references FactSales and DimDate, selecting those two tables narrows the AI's attention and produces more focused results.
You can also click an existing measure to use as a reference — useful when you need a variant ("like Total Revenue, but filtered to the current quarter").
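As a sketch of what that variant request might return, assuming an existing [Total Revenue] measure and a DimDate[FullDate] date column (your model's names will differ):

```dax
Total Revenue (Current Quarter) =
CALCULATE(
    [Total Revenue],
    YEAR( 'DimDate'[FullDate] ) = YEAR( TODAY() ),
    QUARTER( 'DimDate'[FullDate] ) = QUARTER( TODAY() )
)
```

Because the AI sees the existing measure's DAX, the variant reuses it rather than re-deriving the base calculation.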
3. Describe the measure. Type a plain-English description: "Year-over-year revenue growth, as a percentage, using the DimDate table for time intelligence." Or use the guided mode to select a measure type (SUM, AVERAGE, YTD, custom) and let the interface build the prompt for you.
4. Review, refine, and save. The AI returns a complete DAX measure with:
- The DAX expression, formatted for readability (VAR/RETURN pattern, indented arguments)
- A suggested measure name following the naming conventions already present in your model
- A plain-English explanation of what the measure calculates
- A format string (e.g., "0.00%" for a percentage measure)
- A suggested home table based on which fact table the measure primarily aggregates
The DAX appears in a Monaco editor with full syntax highlighting. Edit it directly — fix a filter, adjust a variable name, add a condition. If the measure is close but not quite right, type a follow-up in the conversation thread: "Add a filter to exclude cancelled orders." The AI refines the measure while keeping the full conversation context.
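A refinement like "exclude cancelled orders" might produce something along these lines. This is a hedged sketch: OrderStatus is a hypothetical column standing in for however your model flags cancellations.

```dax
YoY Revenue Growth =
VAR CurrentRevenue =
    CALCULATE(
        SUM( 'FactSales'[TotalAmount] ),
        'FactSales'[OrderStatus] <> "Cancelled"  -- hypothetical status column
    )
VAR PriorRevenue =
    CALCULATE(
        SUM( 'FactSales'[TotalAmount] ),
        SAMEPERIODLASTYEAR( 'DimDate'[FullDate] ),
        'FactSales'[OrderStatus] <> "Cancelled"
    )
RETURN
    DIVIDE( CurrentRevenue - PriorRevenue, PriorRevenue )
```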
Once you're satisfied, save the measure to your personal library for reuse across projects.
What Gets Sent to the AI (and What Doesn't)
Transparency matters. Here's exactly what the AI receives when you generate a DAX measure in Model Studio:
Sent:
- Table names (e.g., FactSales, DimDate, DimProduct)
- Column names and data types (e.g., TotalAmount: decimal, FullDate: dateTime)
- Existing measure names and their DAX expressions
- Relationship metadata (e.g., FactSales[DateKey] -> DimDate[DateKey], manyToOne)
- Your natural-language prompt describing the measure
- Conversation history (capped at the most recent turns to keep context focused)
Never sent:
- Row-level data (no actual revenue numbers, customer names, dates, or any cell values)
- Connection strings or server addresses
- Credentials or authentication tokens
- Data from columns not present in your TMDL export
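For reference, the relationship metadata in that list is just a few lines in a TMDL export, roughly like this (the relationship name is illustrative; real exports often use a GUID or generated name):

```tmdl
relationship FactSales_DimDate
    fromColumn: FactSales.DateKey
    toColumn: DimDate.DateKey
```

Note that the relationship describes which key columns connect, not any of the key values stored in them.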
Think of it the way you'd brief a consultant: you hand them a data dictionary and say "write me a measure." The dictionary describes the model. The consultant writes the DAX. At no point do they need to see the actual data.
Post-Generation Validation
AI-generated DAX, even with full schema context, can contain errors. Model Studio runs server-side validation on every generated measure before returning it:
- Table name quoting: All table references get single-quote syntax ('FactSales'[TotalAmount], not FactSales[TotalAmount]). Quotes are required for table names containing spaces or reserved words, and applying them everywhere keeps references unambiguous. Inconsistent quoting is a common AI habit.
- Case correction: Table and column references are matched to the exact casing in your schema. DAX is case-insensitive at runtime, but inconsistent casing creates maintenance headaches and confusing diffs in version-controlled models.
- Unknown reference detection: Any table or column name not found in the schema gets flagged. If the AI hallucinates a column name, you see a warning before you paste anything into Desktop.
- Structural checks: Mismatched parentheses and VAR blocks without RETURN statements (two of the most common syntax errors in complex DAX) are caught automatically.
These checks run in microseconds. They catch the errors that would otherwise surface as cryptic messages in Power BI Desktop minutes later.
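To illustrate the first two checks, here is a before/after sketch of how a generated reference might be normalized (illustrative names, matching the FactSales example used throughout):

```dax
-- As generated by the AI (unquoted table reference, casing drift):
Total Revenue = SUM( factsales[totalamount] )

-- After validation (quoting applied, casing matched to the schema):
Total Revenue = SUM( 'FactSales'[TotalAmount] )
```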
Who Benefits Most
Model Studio isn't trying to replace DAX expertise. If you write CALCULATE/FILTER patterns in your sleep, you're probably faster writing measures by hand than describing them in natural language.
But most Power BI work isn't done by DAX specialists.
Business analysts build reports as 30% of their job. They understand the business logic ("show me revenue growth excluding returns, by quarter") but struggle to translate that into filter context and time intelligence functions. Schema-aware AI bridges that gap without requiring them to become DAX experts.
BI consultants work across multiple client models. Every client has different table names, different naming conventions, different relationship patterns. Instead of memorizing each model's vocabulary, load the TMDL and let the AI handle the references.
Report builders inherit complex semantic models from data engineers. The model has 40 tables and 200 measures. Browsing the schema tree in Model Studio is faster than hunting through Power BI Desktop's field list when you need to understand what already exists before writing something new.
Teams standardizing their measure library can generate measures, refine them through conversation until they're correct, save them to a library, and reuse across projects. The library becomes a living reference — more useful than documentation because the measures are already tested.
Getting Started
If you have a Power BI semantic model and can export its TMDL (through Tabular Editor, Fabric Git integration, or Power BI Desktop's TMDL view), you have everything you need.
- Copy your TMDL schema text
- Open Model Studio in Draft BI
- Paste the schema — the interactive tree appears immediately
- Select the relevant tables, describe your measure, and generate
The first generation takes a few seconds. Refinements are faster because the conversation context is already loaded. Our TMDL blog post covers where to find your TMDL and how to export it if you haven't done this before.
Further Reading
- TMDL overview — Microsoft Learn
- Using TMDL to Drive Power BI Report Design — Draft BI Blog
- DAX patterns reference — SQLBI
- Tabular Editor 2 — GitHub (open-source)
Ready to generate DAX measures from your semantic model without exposing your data? Try Model Studio free — paste your TMDL schema and describe the measure you need in plain English.

Founder of Draft BI, building the design-first companion for Power BI report development. Writing about PBIR, WCAG accessibility, DAX measures, and the workflows that help Power BI developers and analysts deliver better reports faster.