How to Audit a Power BI Semantic Model with AI
March 13, 2026
By Tony Thomas
TL;DR: You inherit a Power BI model with 40 tables and no documentation. Before building reports on top of it, you need to know if the model follows best practices: star schema adherence, proper date tables, clean relationships, consistent naming. That's hours of manual work. Draft BI's Model Studio does it in seconds: paste your TMDL, get a health score across five categories, see every issue mapped on an interactive diagram, and fix problems with AI-generated rewrites you review before accepting. Your actual data never enters the picture. Only the schema.
The Model You Inherited
Every Power BI developer has been here. A colleague leaves, a client hands off a project, or your team merges two workspaces. You open the semantic model and find 40 tables, 200 columns, 80 measures, and zero documentation.
The reports on top of this model are in production. Stakeholders rely on them. You need to add a new report page, but first you need to answer a basic question: is this model any good?
Not "does it work." It clearly works, the reports are running. But does it follow the patterns that make Power BI models maintainable, performant, and correct at scale? Is there a proper date table, or is someone using a column from a fact table for time intelligence? Are the relationships clean, or is there a bidirectional cross-filter hiding somewhere that produces unexpected numbers when you add a slicer? Are there orphan tables sitting in the model that nobody uses and nobody deleted?
You could trace through the model view in Power BI Desktop. Click each table. Check the relationships panel. Search for CALCULATE patterns in the DAX. Compare naming conventions across tables. For a 40-table model, that takes an afternoon. For a model you didn't build, it takes longer because you're learning the domain at the same time.
There's a faster way.
What a Healthy Model Looks Like
Before we talk about tooling, let's define what "healthy" means. These aren't opinions. They're the consensus best practices from Microsoft's documentation, SQLBI's modeling guidance, and Marco Russo and Alberto Ferrari's Tabular modeling work:
Star schema adherence. Fact tables hold your numeric events (sales transactions, web visits, support tickets). Dimension tables describe the entities involved (customers, products, dates). Facts connect to dimensions through single-direction, many-to-one relationships. Violate this pattern (snowflaked dimensions, fact-to-fact relationships, bridge tables everywhere) and performance suffers. DAX becomes unnecessarily complex.
A dedicated date table. Time intelligence functions in DAX (TOTALYTD, SAMEPERIODLASTYEAR, DATEADD) need a continuous, gap-free date table marked with dataCategory: Time. Use a date column from a fact table instead, and those functions fail silently or produce wrong results. This is one of the most common modeling mistakes.
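In TMDL, a minimal healthy date table looks something like this (the table name, columns, and date range are illustrative, not what any particular tool generates; the key details are the dataCategory: Time marking and a continuous CALENDAR source):

```tmdl
table Date
  dataCategory: Time

  column Date
    dataType: dateTime
    isKey
    summarizeBy: none

  column Year
    dataType: int64
    summarizeBy: none

  partition Date = calculated
    source = CALENDAR(DATE(2020, 1, 1), DATE(2026, 12, 31))
```

With a table like this related to your fact table's date column, the time intelligence functions above behave as documented.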
Clean relationships. Every relationship should have a clear purpose. Many-to-many relationships cause unexpected aggregation behavior. Bidirectional cross-filtering creates ambiguous filter paths that slow performance and confuse developers. Inactive relationships should exist for a reason, not because someone forgot to clean up.
Consistent naming. Tables named tbl_Sales, DimProduct, dates, and FACT_Revenue in the same model? Four different people touched it, and nobody agreed on conventions. Inconsistent naming doesn't break anything, but it makes the model harder to understand and maintain.
No orphan tables. Tables with zero relationships are forgotten imports, abandoned experiments, or staging tables that never belonged in production. They clutter the field list and confuse report builders.
Checking all of this manually across a 40-table model is tedious but straightforward. The problem isn't complexity. It's time. Every minute auditing is a minute not building the report you were asked to deliver.
Auditing with Model Studio
Model Studio in Draft BI uses the same schema-only principle as our DAX generation post: paste your TMDL, and the tool works with your structure without touching your data.
The audit workflow has three steps: visualize, assess, and fix.
Step 1: Visualize — See the Model's Structure
Paste your TMDL into Model Studio and switch to Model View. The interactive diagram renders your entire model as a graph. Tables show up as cards, relationships as edges.
This isn't a static image. It's an interactive canvas:
- Tables are laid out by role automatically. Date tables cluster together. Orphan tables with no relationships are separated to the side so you spot them immediately. Fact and dimension tables are arranged based on their relationship patterns.
- Click any table and unrelated tables fade. Only the selected table and its direct connections stay highlighted. For a 40-table model, this is the difference between staring at a wall of boxes and actually understanding a specific part of the schema.
- Expand a table to see its contents. Columns with data types, measures, hierarchies, and calculation groups, all visible inline without leaving the diagram.
- Relationship details on every edge. Cardinality badges (1 or *), cross-filter direction arrows, and dashed lines for inactive relationships. You can see at a glance where bidirectional filtering or many-to-many relationships exist.
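Those badges map straight onto relationship properties in the TMDL itself. A sketch with illustrative names: many-to-one, single-direction is the default, and the two flags below are what produce the dashed line and the double-headed arrow on the diagram:

```tmdl
relationship SalesToProduct
  fromColumn: Sales.ProductKey
  toColumn: Product.ProductKey

relationship SalesToShipDate
  isActive: false
  fromColumn: Sales.ShipDate
  toColumn: Date.Date

relationship SalesToRegion
  crossFilteringBehavior: bothDirections
  fromColumn: Sales.RegionKey
  toColumn: Region.RegionKey
```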
For many inherited models, this step alone answers half your questions. The visual layout makes structural problems obvious. Power BI Desktop's model view, with its manual positioning and overlapping lines, doesn't.
Step 2: Assess — Get a Health Score
Click Assess Model. Model Studio runs a two-layer analysis.
Layer 1: Deterministic checks. Five rule-based checks run first. Zero false positives:
- Missing date table (no table with dataCategory: Time)
- Orphan tables (visible tables with no relationships)
- Many-to-many relationships
- Bidirectional cross-filtering
- System/internal tables that shouldn't be in a semantic model
These checks are instant and definitive. No date table marked with the time data category? That's a fact, not an interpretation.
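Draft BI doesn't publish its implementation, but checks like these are simple enough to picture. A minimal sketch of two of them, assuming the TMDL has already been parsed into table and relationship records (the field names here are hypothetical):

```python
def find_issues(tables, relationships):
    """Two rule-based checks: missing date table and orphan tables."""
    issues = []

    # Missing date table: no table carries the Time data category.
    if not any(t.get("dataCategory") == "Time" for t in tables):
        issues.append(("error", "No table is marked with dataCategory: Time"))

    # Orphan tables: visible tables that appear in no relationship.
    related = {r["fromTable"] for r in relationships} | {r["toTable"] for r in relationships}
    for t in tables:
        if not t.get("isHidden") and t["name"] not in related:
            issues.append(("warning", f"Orphan table: {t['name']}"))

    return issues

tables = [
    {"name": "Sales"},
    {"name": "Product"},
    {"name": "StagingImport"},  # no relationships: flagged as orphan
]
relationships = [{"fromTable": "Sales", "toTable": "Product"}]
print(find_issues(tables, relationships))
```

Because the checks are pure functions of the schema, the same input always produces the same findings, which is why this layer can promise zero false positives.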
Layer 2: AI analysis. The AI examines your schema in five categories:
- Star Schema & Relationships: Kimball pattern adherence, unnecessary snowflaking, fact-to-fact joins
- Naming Conventions: Table, column, and measure naming clarity and consistency
- Measure Organization: Whether measures live in a dedicated table, redundancy, logical grouping
- DAX Patterns: Expression efficiency, SUMX vs. SUM usage, VAR patterns, CALCULATE structure
- Model Complexity: Sizing, unused objects, missing helper tables
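As one example of what the DAX Patterns category looks for, an iterator used where a plain column aggregation would do (measure names are illustrative):

```dax
-- Flagged: SUMX iterates row by row just to add up a single column
Total Sales (iterator) = SUMX ( Sales, Sales[Amount] )

-- Suggested: a direct aggregation, same result, clearer intent
Total Sales = SUM ( Sales[Amount] )
```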
The results combine into a health score from 0 to 100, calculated as the average of five category sub-scores. The formula is transparent: errors cost 25 points, warnings 10 points, suggestions 3 points. No black box.
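The formula is small enough to sketch. A minimal model of the scoring, using the published weights; flooring each sub-score at zero is my assumption about edge cases, not documented behavior:

```python
def category_score(errors, warnings, suggestions):
    # Published weights: errors cost 25 points, warnings 10, suggestions 3.
    # Flooring at 0 is an assumption, not documented behavior.
    return max(0, 100 - 25 * errors - 10 * warnings - 3 * suggestions)

def health_score(categories):
    # Overall score: the plain average of the five category sub-scores.
    return sum(category_score(*c) for c in categories) / len(categories)

# (errors, warnings, suggestions) per category: one error, two warnings,
# and one suggestion spread across the five categories.
findings = [(1, 0, 0), (0, 2, 0), (0, 0, 0), (0, 0, 0), (0, 0, 1)]
print(health_score(findings))  # 90.4
```

A single error in one category costs five points overall (25 divided across five categories), which is why a handful of warnings can drag a model below the 70 threshold faster than you might expect.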
Every finding includes a severity level, a description of the issue, and the specific tables or objects affected. Hover a finding in the assessment panel and the affected table highlights on the Model View diagram so you can see exactly where the problem lives.
A score of 70 or above means the model is in good shape. Between 40 and 69, there are issues worth addressing before building on top of it. Below 40, you have structural problems that'll cause headaches.
Step 3: Fix — AI-Powered Rewrites with Diff Review
Here's where the audit becomes actionable. Select findings from the assessment results and click Fix.
Model Studio's AI generates targeted patches to your TMDL. Not a full rewrite. Precise, scoped changes that address exactly the issues you selected:
- Missing date table? The AI creates a calculated date table with a CALENDAR expression, marks it with the time data category, and adds the appropriate relationship to your fact table.
- Measures scattered across tables? It creates a dedicated measure table and moves the measures into it.
- Missing relationships? It infers the correct join columns from naming patterns and data types, and creates the relationship.
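The exact patches aren't published, but the measure-table fix, for instance, would produce TMDL along these lines: a dedicated table holding only measures, plus a hidden placeholder column (names are illustrative; the placeholder is there because a table needs at least one column):

```tmdl
table _Measures

  column Value
    dataType: int64
    isHidden

  measure 'Total Sales' = SUM ( Sales[Amount] )
    formatString: #,0

  measure 'Sales YTD' = TOTALYTD ( [Total Sales], 'Date'[Date] )
    formatString: #,0
```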
The key part: you review every change before accepting it. The rewrite opens in a side-by-side diff viewer showing exactly what changed. Original TMDL on the left, proposed revision on the right. Accept or revert. The AI never auto-applies changes.
After accepting, the assessment re-runs automatically. Your health score updates to reflect the improvements. Select the next batch of findings and fix those. The workflow is iterative: assess, fix, reassess, fix again until you're satisfied.
What the AI can't fix: Imported table and column names. Want to rename tbl_Sales to FactSales? That lives in Power Query, not TMDL. The assessment flags these as suggestions, not errors, and explains why.
What Gets Sent to the AI
Same data privacy principle as DAX generation:
Sent: Table names, column names, data types, relationship metadata, existing measure expressions, hierarchy definitions. This is your TMDL schema, the structure of your model.
Never sent: Row-level data. No revenue numbers, customer names, transaction records, or actual values from your columns. The assessment analyzes the shape of your model, not its contents.
For organizations with data classification policies, GDPR requirements, or SOC 2 compliance obligations, this distinction matters. Audit a production model's structure without exposing the data inside.
When to Audit
Not every model needs a full audit. Here's when the investment pays off:
Before building on an inherited model. Primary use case. Spend five minutes on an assessment and you might save yourself from discovering structural problems after you've built dependencies on them.
After a major model refactor. You restructured the star schema, added fact tables, or consolidated dimensions. Run the assessment to confirm you haven't introduced problems.
During code review. Use Fabric Git integration and review model changes through pull requests? The assessment gives reviewers a fast way to verify a change didn't degrade model quality.
When performance degrades. Bidirectional cross-filtering and many-to-many relationships are common culprits. The assessment flags both, plus the specific relationships involved.
Getting Started
If you can export your semantic model as TMDL through Tabular Editor, Fabric Git integration, or Power BI Desktop's developer mode, you're ready.
- Copy your TMDL schema text
- Open Model Studio in Draft BI
- Paste the schema. The model diagram and schema tree appear immediately
- Switch to Model View to visualize the structure, then click Assess Model
- Review findings, select issues to fix, and review the proposed TMDL changes in the diff viewer
The assessment takes seconds. The fixes take seconds more. The alternative, manually auditing the same model, takes an afternoon.
Further Reading
- Using TMDL to Drive Power BI Report Design — Draft BI Blog
- How to Generate DAX Measures with AI — Draft BI Blog
- Star schema and the importance for Power BI — Microsoft Learn
- TMDL overview — Microsoft Learn
- Model relationships in Power BI Desktop — Microsoft Learn
Ready to audit your Power BI semantic model? Try Model Studio free. Paste your TMDL and get a health score in seconds.

Founder of Draft BI, building the design-first companion for Power BI report development. Writing about PBIR, WCAG accessibility, DAX measures, and the workflows that help Power BI developers and analysts deliver better reports faster.