
Improve DAX performance

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models (25-30%)
--> Optimize enterprise-scale semantic models
--> Improve DAX performance

Effective DAX (Data Analysis Expressions) is essential for high-performance semantic models in Microsoft Fabric. As datasets and business logic become more complex, inefficient DAX can slow down query execution and degrade report responsiveness. This article explains why DAX performance matters, common performance pitfalls, and best practices to optimize DAX in enterprise-scale semantic models.


Why DAX Performance Matters

In Fabric semantic models (Power BI datasets + Direct Lake / Import / composite models), DAX is used to define:

  • Measures (dynamic calculations)
  • Calculated columns (row-level expressions)
  • Calculated tables (derived data structures)

When improperly written, DAX can become a bottleneck — especially on large models or highly interactive reports (many slicers, visuals, etc.). Optimizing DAX ensures:

  • Faster query execution
  • Better user experience
  • Lower compute consumption
  • More efficient use of memory

The DP-600 exam tests your ability to identify and apply performance-aware DAX patterns.


Understand DAX Execution Engines

DAX queries are executed by two engines:

  • Formula Engine (FE) — single-threaded; handles complex, row-by-row logic that the Storage Engine cannot process
  • Storage Engine (SE) — multi-threaded; performs fast columnar scans and aggregations over compressed data

Performance improves when more computation can be done in the Storage Engine (columnar operations) rather than the Formula Engine (row-by-row logic).

Rule of thumb: Favor patterns that minimize work done in the Formula Engine.


Common DAX Performance Anti-Patterns

1. Repeated Calculations Without Variables

Example:

Total Sales + Total Cost - Total Discount

If [Total Sales], [Total Cost], and [Total Discount] each recompute shared sub-expressions, the engine may evaluate the same logic several times within a single query.

Anti-Pattern:

Repeating an expression instead of storing its result once in a variable.
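
A minimal sketch of the anti-pattern, using a hypothetical banding measure over the FactSales table referenced throughout this article; the same aggregation is written (and may be evaluated) twice:

-- Hypothetical measure; FactSales[SalesAmount] is the column used elsewhere in this article
Sales Band =
IF(
    SUM(FactSales[SalesAmount]) > 1000000,
    "High",
    IF(SUM(FactSales[SalesAmount]) > 100000, "Medium", "Low")
)

The fix is to evaluate the aggregation once, store it in a variable, and reuse it, as shown in the Use Variables (VAR) section below.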


2. Nested Iterator Functions

Using iterators such as SUMX or FILTER over large tables, especially multiple times within a single measure, increases compute overhead.

Example:

SUMX(
    FILTER(FactSales, FactSales[SalesAmount] > 0),
    FactSales[Quantity] * FactSales[UnitPrice]
)

Filtering inside iterators and then iterating again adds overhead.


3. Large Row Context with Filters

Complex FILTER expressions that operate on large intermediate tables will push computation into the Formula Engine, which is slower.


4. Frequent Use of EARLIER

While useful, EARLIER is often replaced with clearer, faster patterns using variables or iterator functions.
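
As a sketch, assuming a hypothetical FactSales[OrderDate] column alongside the FactSales[SalesAmount] column used elsewhere in this article, a running-total calculated column written with EARLIER can be rewritten with a variable that captures the outer row context:

-- EARLIER pattern: nested row contexts are harder to read and optimize
Running Total =
SUMX(
    FILTER(
        FactSales,
        FactSales[OrderDate] <= EARLIER(FactSales[OrderDate])
    ),
    FactSales[SalesAmount]
)

-- Variable pattern: capture the current row's value once, then filter against it
Running Total =
VAR CurrentDate = FactSales[OrderDate]
RETURN
    SUMX(
        FILTER(
            FactSales,
            FactSales[OrderDate] <= CurrentDate
        ),
        FactSales[SalesAmount]
    )

Both versions return the same result; the variable version removes the nested row-context reference and is generally easier for both the engine and other developers to follow.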


Best Practices for Optimizing DAX


1. Use Variables (VAR)

Variables reduce redundant computations, enhance readability, and often improve performance:

Measure Optimized =
VAR BaseTotal = SUM(FactSales[SalesAmount])
RETURN
IF(BaseTotal > 0, BaseTotal, BLANK())

Benefits:

  • Computed once per filter context
  • Reduces repeated expression evaluation

2. Favor Storage Engine Over Formula Engine

Use functions that can be processed by the Storage Engine:

  • SUM, COUNT, AVERAGE, MIN, MAX run faster
  • Avoid SUMX when a plain SUM suffices

Example:

Total Sales = SUM(FactSales[SalesAmount])

Over:

Total Sales =
SUMX(FactSales, FactSales[SalesAmount])


3. Simplify Filter Expressions

When possible, use simpler filter arguments:

Better:

CALCULATE([Total Sales], DimDate[Year] = 2025)

Instead of:

CALCULATE([Total Sales], FILTER(DimDate, DimDate[Year] = 2025))

Why?
The simpler predicate is more likely to be pushed down to the Storage Engine without extra row-by-row processing.


4. Use TRUE/FALSE Filters

When filtering on a Boolean column or a simple condition, filter the column directly instead of iterating the whole table:

Better:

CALCULATE([Total Sales], FactSales[IsActive] = TRUE)

Instead of:

CALCULATE([Total Sales], FILTER(FactSales, FactSales[IsActive] = TRUE))


5. Limit Column and Table Scans

  • Remove unused columns from the model
  • Avoid high-cardinality columns in calculations where unnecessary
  • Use star schema design to improve filter propagation

6. Reuse Measures

Instead of duplicating logic:

Total Profit =
[Total Sales] - [Total Cost]

Reuse basic measures within more complex logic.
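
For instance, a hedged sketch that builds on the measures above (the Profit Margin % name is illustrative):

-- Reuses [Total Profit] and [Total Sales] instead of re-implementing their logic
Profit Margin % =
DIVIDE([Total Profit], [Total Sales])

Because the base measures are referenced rather than duplicated, any change to their logic is made in one place and flows into every dependent calculation.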


7. Prefer Measures Over Calculated Columns

Measures are calculated at query time and respect filter context; calculated columns are evaluated during refresh and stored in the model, which increases memory use. Use calculated columns only when necessary (for example, to slice or group by the result).
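
For illustration, a sketch contrasting the two approaches using the FactSales columns shown earlier (the Line Amount and Total Line Amount names are illustrative):

-- Calculated column: evaluated row by row at refresh time and stored in the model
Line Amount = FactSales[Quantity] * FactSales[UnitPrice]

-- Measure: evaluated at query time and responds to slicers and filters
Total Line Amount =
SUMX(FactSales, FactSales[Quantity] * FactSales[UnitPrice])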


8. Reduce Iterators on Large Tables

If SUMX is needed for row-level expressions, consider summarizing first or using aggregation tables.
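
One hedged sketch of this idea, assuming a DimProduct dimension related to FactSales: iterate the much smaller list of distinct products and let the Storage Engine aggregate the fact table once per product, instead of iterating every FactSales row:

-- DimProduct[ProductKey] is an assumed dimension column; the measure name is illustrative
Largest Product Sales =
MAXX(
    VALUES(DimProduct[ProductKey]),
    CALCULATE(SUM(FactSales[SalesAmount]))    -- context transition: one SUM per product
)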


9. Understand Evaluation Context

Complex measures often inadvertently alter filter context. Use functions like:

  • ALL
  • REMOVEFILTERS
  • KEEPFILTERS

…carefully, as they affect performance and results.
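
A small sketch of a typical pattern, assuming a DimProduct dimension and reusing the [Total Sales] measure defined earlier (the measure name is illustrative):

-- REMOVEFILTERS clears product filters for the denominator only
Sales % of All Products =
VAR CurrentSales = [Total Sales]
VAR AllProductSales = CALCULATE([Total Sales], REMOVEFILTERS(DimProduct))
RETURN
    DIVIDE(CurrentSales, AllProductSales)

Modifiers like this change which filters are applied during the scan, so small differences (ALL vs REMOVEFILTERS vs KEEPFILTERS) can change both the numbers and the query plan.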


10. Leverage DAX Studio or Performance Analyzer

While not directly tested with UI steps, knowing when to use tools to diagnose DAX is helpful:

  • Performance Analyzer identifies slow visuals
  • DAX Studio exposes query plans and engine timings

Performance Patterns and Anti-Patterns

Pattern | Good / Bad | Notes
VAR usage | Good | Makes measures efficient and readable
SUM over SUMX | Good if applicable | Leverages the Storage Engine
FILTER inside SUMX | Bad | Forces row context early
EARLIER / nested row context | Bad | Hard to optimize, slows performance
Simple CALCULATE filters | Good | More likely to be pushed to the Storage Engine

Example Before / After

Before (inefficient):

Measure = 
SUMX(
    FILTER(FactSales, FactSales[SalesAmount] > 1000),
    FactSales[Quantity] * FactSales[UnitPrice]
)

After (optimized):

Measure =
VAR FilteredSales =
    CALCULATETABLE(
        FactSales,
        FactSales[SalesAmount] > 1000
    )
RETURN
    SUMX(
        FilteredSales,
        FactSales[Quantity] * FactSales[UnitPrice]
    )

Why better?
Explicit filtering via CALCULATETABLE often pushes more work to the Storage Engine than wrapping an iterator around a raw FILTER over the fact table. Note that when iterating a table variable, columns are still referenced through their original table name (FactSales), because the variable keeps the source table's data lineage.


Exam-Focused Takeaways

For DP-600 questions related to DAX performance:

  • Identify inefficient row context patterns
  • Prefer variables and simple aggregations
  • Favor Storage Engine–friendly functions
  • Avoid unnecessary nested iterators
  • Recognize when a measure should be rewritten for performance

Summary

Improving DAX performance is about writing efficient calculations and avoiding patterns that force extra processing in the Formula Engine. By using variables, minimizing iterator overhead, simplifying filter expressions, and leveraging star schema design, you can significantly improve query responsiveness — a key capability for enterprise semantic models and the DP-600 exam.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for keywords in the question scenario and understand how they point to the intended usage scenario
  • Expect scenario-based questions rather than direct definitions

Question 1

You have a DAX measure that repeats the same complex calculation multiple times. Which change is most likely to improve performance?

A. Convert the calculation into a calculated column
B. Use a DAX variable (VAR) to store the calculation result
C. Replace CALCULATE with SUMX
D. Enable bidirectional relationships

Correct Answer: B

Explanation:
DAX variables evaluate their expression once per query context and reuse the result. This avoids repeated execution of the same logic and reduces Formula Engine overhead, making variables one of the most effective performance optimization techniques.


Question 2

Which aggregation function is generally the most performant when no row-by-row logic is required?

A. SUMX
B. AVERAGEX
C. SUM
D. FILTER

Correct Answer: C

Explanation:
Native aggregation functions like SUM, COUNT, and AVERAGE are optimized to run in the Storage Engine, which is much faster than iterator-based functions such as SUMX that require row-by-row evaluation in the Formula Engine.


Question 3

Why is this DAX pattern potentially slow on large tables?

CALCULATE([Total Sales], FILTER(FactSales, FactSales[SalesAmount] > 1000))

A. FILTER disables relationship filtering
B. FILTER forces evaluation in the Formula Engine
C. CALCULATE cannot push filters to the Storage Engine
D. The expression produces incorrect results

Correct Answer: B

Explanation:
The FILTER function iterates over rows, forcing Formula Engine execution. When possible, using simple Boolean expressions inside CALCULATE (e.g., FactSales[SalesAmount] > 1000) allows the Storage Engine to handle filtering more efficiently.


Question 4

Which CALCULATE filter expression is more performant?

A. FILTER(Sales, Sales[Year] = 2024)
B. Sales[Year] = 2024
C. ALL(Sales[Year])
D. VALUES(Sales[Year])

Correct Answer: B

Explanation:
Simple Boolean filters allow DAX to push work to the Storage Engine, while FILTER requires row-by-row evaluation. This distinction is frequently tested on the DP-600 exam.


Question 5

Which practice helps reduce the Formula Engine workload?

A. Using nested iterator functions
B. Replacing measures with calculated columns
C. Reusing base measures in more complex calculations
D. Increasing column cardinality

Correct Answer: C

Explanation:
Reusing base measures promotes efficient evaluation plans and avoids duplicated logic. Nested iterators and high cardinality columns increase computational complexity and slow down queries.


Question 6

Which modeling choice can indirectly improve DAX query performance?

A. Using snowflake schemas
B. Increasing the number of calculated columns
C. Removing unused columns and tables
D. Enabling bidirectional relationships by default

Correct Answer: C

Explanation:
Removing unused columns reduces memory usage, dictionary size, and scan costs. Smaller models lead to faster Storage Engine operations and improved overall query performance.


Question 7

Which DAX pattern is considered a performance anti-pattern?

A. Using measures instead of calculated columns
B. Using SUMX when SUM would suffice
C. Using star schema relationships
D. Using single-direction filters

Correct Answer: B

Explanation:
Iterator functions like SUMX should only be used when row-level logic is required. Replacing simple aggregations with iterators unnecessarily shifts work to the Formula Engine.


Question 8

Why can excessive use of EARLIER negatively impact performance?

A. It prevents relationship traversal
B. It creates complex nested row contexts
C. It only works in measures
D. It disables Storage Engine scans

Correct Answer: B

Explanation:
EARLIER introduces nested row contexts that are difficult for the DAX engine to optimize. Modern DAX best practices recommend using variables instead of EARLIER.


Question 9

Which relationship configuration can negatively affect DAX performance if overused?

A. Single-direction filtering
B. Many-to-one relationships
C. Bidirectional filtering
D. Active relationships

Correct Answer: C

Explanation:
Bidirectional relationships increase filter propagation paths and query complexity. While useful in some scenarios, overuse can significantly degrade performance in enterprise-scale models.


Question 10

Which tool should you use to identify slow visuals caused by inefficient DAX measures?

A. Power Query Editor
B. Model View
C. Performance Analyzer
D. Deployment Pipelines

Correct Answer: C

Explanation:
Performance Analyzer captures visual query durations, DAX query times, and rendering times, making it the primary tool for diagnosing DAX and visual performance issues in Power BI and Fabric semantic models.

Apply sensitivity labels to items in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Apply sensitivity labels to items

To Do:
Complete the related module for this topic in the Microsoft Learn course: Secure data access in Microsoft Fabric

Sensitivity labels are a data protection and governance feature in Microsoft Fabric that help organizations classify, protect, and control the handling of sensitive data. They integrate with Microsoft Purview Information Protection and extend data protection consistently across Fabric, Power BI, and Microsoft 365.

For the DP-600 exam, you should understand what sensitivity labels are, how they are applied, what they affect, and how they differ from access controls.

What Are Sensitivity Labels?

Sensitivity labels:

  • Classify data based on confidentiality and business impact
  • Travel with the data across supported services
  • Can trigger protection behaviors, such as encryption or usage restrictions

Common label examples include:

  • Public
  • Internal
  • Confidential
  • Highly Confidential

Labels are organizationally defined and managed centrally.

Where Sensitivity Labels Come From

Sensitivity labels in Fabric are:

  • Created and managed in Microsoft Purview
  • Defined at the tenant level by security or compliance administrators
  • Made available to Fabric and Power BI through tenant settings

Fabric users apply labels, but typically do not define them.

Items That Can Be Labeled in Microsoft Fabric

Sensitivity labels can be applied to many Fabric items, including:

  • Semantic models (datasets)
  • Reports
  • Dashboards
  • Dataflows
  • Lakehouses and Warehouses (where supported)
  • Exported artifacts (Excel, PowerPoint, PDF)

This makes labeling a cross-workload governance mechanism.

How Sensitivity Labels Are Applied

Labels can be applied:

  • Manually by item owners or authorized users
  • Automatically through inherited labeling
  • Programmatically via APIs (advanced scenarios)

Label Inheritance

In many cases:

  • Reports inherit the label from their underlying semantic model
  • Dashboards inherit labels from pinned tiles
  • Exported files inherit the label of the source item

This inheritance model is frequently tested in exam scenarios.

What Sensitivity Labels Do (and Do Not Do)

What they do:

  • Classify data for compliance and governance
  • Enable protection such as:
    • Encryption
    • Watermarking
    • Usage restrictions (e.g., block external sharing)
  • Travel with data when exported or shared

What they do NOT do:

  • Grant or restrict user access
  • Replace workspace, item-level, or data-level security
  • Filter rows or columns

Key exam distinction:
Sensitivity labels protect data after access is granted.

Sensitivity Labels vs Endorsements

These two concepts are often confused on exams.

Feature | Sensitivity Labels | Endorsements
Purpose | Data protection | Trust and quality
Enforced | Yes | No
Affects behavior | Yes (encryption, sharing rules) | No
Security-related | Yes | Governance guidance only

Governance and Compliance Benefits

Sensitivity labels support:

  • Regulatory compliance (e.g., GDPR, HIPAA)
  • Data loss prevention (DLP)
  • Auditing and reporting
  • Consistent handling of sensitive data across platforms

They are especially important in environments with:

  • Self-service analytics
  • Data exports to Excel or PowerPoint
  • External sharing scenarios

Common Exam Scenarios

You may see questions such as:

  • A report exported to Excel must remain encrypted → sensitivity label
  • Data should be classified as confidential but still shared internally → labeling, not access restriction
  • Users can view data but cannot share externally → label-driven protection
  • A report automatically inherits its dataset’s classification → label inheritance

Best Practices to Remember

  • Apply labels at the semantic model level to ensure inheritance
  • Use sensitivity labels alongside:
    • Workspace and item-level access controls
    • RLS and CLS
    • Endorsements
  • Review labeling regularly to ensure accuracy
  • Educate users on selecting the correct label

Key Exam Takeaways

  • Sensitivity labels classify and protect data
  • They are defined in Microsoft Purview
  • Labels can enforce encryption and sharing restrictions
  • Labels do not control access
  • Inheritance behavior is important for DP-600 questions

Exam Tips

  • If a question focuses on classifying, protecting, or controlling how data is shared after access, think sensitivity labels.
  • If it focuses on who can see the data, think security roles or permissions.
  • Expect scenario questions involving:
    • PII, financial data, or confidential data
    • Export restrictions
    • Label inheritance
  • Know the difference between:
    • Security (RLS, OLS, item access)
    • Governance & compliance (sensitivity labels)
  • Always associate sensitivity labels with Microsoft Purview

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of applying sensitivity labels to items in Microsoft Fabric?

A. Improve query performance
B. Control row-level data access
C. Classify and protect data based on sensitivity
D. Grant workspace permissions

Correct Answer: C

Explanation:
Sensitivity labels are used for data classification, protection, and governance, not for performance or access control.


Question 2 (Scenario-based)

Your organization requires that all reports containing customer PII automatically display a watermark and restrict external sharing. What feature enables this?

A. Row-level security
B. Sensitivity labels with protection settings
C. Item-level access controls
D. Conditional access policies

Correct Answer: B

Explanation:
Sensitivity labels can apply visual markings, encryption, and sharing restrictions when integrated with Microsoft Purview.


Question 3 (Multi-select)

Which Fabric items can have sensitivity labels applied? (Select all that apply.)

A. Power BI reports
B. Semantic models
C. Lakehouses and warehouses
D. Notebooks

Correct Answers: A, B, C, D

Explanation:
Sensitivity labels can be applied to most Fabric artifacts, enabling consistent governance across analytics assets.


Question 4 (Scenario-based)

A semantic model inherits a sensitivity label from its underlying data source. What does this behavior represent?

A. Manual labeling
B. Label inheritance
C. Workspace-level labeling
D. Object-level security

Correct Answer: B

Explanation:
Label inheritance ensures that downstream artifacts maintain appropriate sensitivity classifications automatically.


Question 5 (Single choice)

Which service must be configured to define and manage sensitivity labels used in Microsoft Fabric?

A. Azure Active Directory
B. Microsoft Defender
C. Microsoft Purview
D. Power BI Admin portal

Correct Answer: C

Explanation:
Sensitivity labels are defined and managed in Microsoft Purview, then applied across Microsoft Fabric and Power BI.


Question 6 (Scenario-based)

A report is labeled Highly Confidential, but a user attempts to export its data to Excel. What is the expected behavior?

A. Export always succeeds
B. Export is blocked or encrypted based on label policy
C. Export ignores sensitivity labels
D. Only row-level security applies

Correct Answer: B

Explanation:
Sensitivity labels can restrict exports, apply encryption, or enforce protection based on policy.


Question 7 (Multi-select)

Which actions can sensitivity labels enforce? (Select all that apply.)

A. Data encryption
B. Watermarks and headers
C. External sharing restrictions
D. Row-level filtering

Correct Answers: A, B, C

Explanation:
Sensitivity labels control protection and compliance, not data filtering.


Question 8 (Scenario-based)

You apply a sensitivity label to a lakehouse. Which downstream artifact is MOST likely to automatically inherit the label?

A. A Power BI report built on the semantic model
B. A notebook in a different workspace
C. An external CSV export
D. An Azure SQL Database

Correct Answer: A

Explanation:
Label inheritance flows through Fabric analytics artifacts, especially semantic models and reports.


Question 9 (Single choice)

Who is typically allowed to apply or change sensitivity labels on Fabric items?

A. Any workspace Viewer
B. Only Microsoft admins
C. Users with sufficient item permissions
D. External users

Correct Answer: C

Explanation:
Users must have appropriate permissions (Contributor/Owner or item-level rights) to apply labels.


Question 10 (Scenario-based)

Your compliance team wants visibility into how sensitive data is used across Fabric. Which feature supports this requirement?

A. Query caching
B. Audit logs
C. Sensitivity labels with Purview reporting
D. Direct Lake mode

Correct Answer: C

Explanation:
Sensitivity labels integrate with Microsoft Purview reporting and auditing for compliance and governance tracking.


Endorse items in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Endorse items

To Do:
Complete the related module for this topic in the Microsoft Learn course: Secure data access in Microsoft Fabric

Item endorsement is a governance feature in Microsoft Fabric that helps organizations identify trusted, high-quality, and officially supported analytics assets. Endorsements guide users toward the right data and reports, reduce duplication, and promote consistent decision-making.

For the DP-600 exam, you should understand what endorsement is, the types of endorsements available, who can apply them, and how endorsements affect user behavior (not security).

What Does It Mean to Endorse an Item?

Endorsing an item signals to users that the content is:

  • Reliable
  • Well-maintained
  • Appropriate for reuse and decision-making

Endorsement is not a security mechanism. It does not grant or restrict access—it provides trust and visibility cues within the Fabric experience.

Endorsements can be applied to:

  • Semantic models (datasets)
  • Reports
  • Dashboards
  • Other supported Fabric items

Types of Endorsements

Microsoft Fabric supports three endorsement states:

1. None

There is no endorsement on the content.

2. Promoted

Promoted items are considered:

  • Useful
  • Reviewed
  • Suitable for reuse

Key characteristics:

  • Any item owner can promote their own content
  • Indicates quality, but not official certification
  • Common for team-approved or department-level assets

Typical use cases

  • A curated dataset used by multiple analysts
  • A well-designed report shared across a department

3. Certified

Certified items represent the highest level of trust.

Key characteristics:

  • Only authorized users (often admins or designated certifiers) can certify
  • Indicates the item meets organizational standards for:
    • Data quality
    • Governance
    • Security
  • Intended for enterprise-wide consumption

Typical use cases

  • Official financial reporting datasets
  • Executive dashboards
  • Enterprise semantic models

Who Can Endorse Items?

  • Promoted: Item owners
  • Certified: Users authorized by Fabric or Power BI tenant settings (often admins or data stewards)

This distinction is important for the exam: not everyone can certify content, even if they own it.

Where Endorsements Appear

Endorsements are visible across the Fabric and Power BI experiences:

  • In search results
  • In lineage view
  • In the data hub
  • When users select data sources for report creation

Certified items are typically:

  • Ranked higher
  • More visible
  • Preferred in self-service analytics workflows

Endorsements vs Security Controls

A common exam trap is confusing endorsements with access control.

Feature | Endorsement | Access Control
Purpose | Trust and quality | Security and restriction
Limits access? | No | Yes
Affects visibility | Yes | Yes
Enforced by system | No (informational) | Yes (mandatory)

The “Make discoverable” setting

The endorsement settings dialog also includes a “Make discoverable” option. When selected, it lets users discover the content even if they do not have access to it, so that they can request access.

Summary table

None
  • What it is: There is no endorsement on the content.

Promoted
  • What it is: The content is endorsed/flagged as Promoted. Promoted items are considered useful, reviewed, and suitable for reuse.
  • Key characteristics: Any item owner can promote their own content; it indicates quality, but not official certification; it is common for team-approved or department-level assets.
  • Who can do it: Users can assign Promoted without any specific admin settings, as long as they have write permissions on the item (for example, a semantic model).
  • Typical use cases: A curated dataset used by multiple analysts; a well-designed report shared across a department.

Certified
  • What it is: The content is endorsed/flagged as Certified. Certified items represent the highest level of trust.
  • Key characteristics: Only authorized users (often admins or designated certifiers) can certify; it indicates the item meets organizational standards for data quality, governance, and security; it is intended for enterprise-wide consumption.
  • Who can do it: Certification requires admin approval; only users designated in the admin settings can apply it.
  • Typical use cases: Official financial reporting datasets; executive dashboards; enterprise semantic models.

Make discoverable
  • What it is: The content is flagged as discoverable. Discoverability can be set for selected users, the entire company, or everyone except selected users.
  • Purpose: Makes content discoverable even to users who do not currently have access, so they become aware it is available and can request access to it.

Key takeaway:
A user must still have workspace or item-level access to use an endorsed item.

Role of Endorsements in Governance

Endorsements support governance by:

  • Encouraging reuse of approved assets
  • Reducing “shadow BI”
  • Helping users choose the right data source
  • Aligning self-service analytics with enterprise standards

They are especially important in large Fabric environments with:

  • Many workspaces
  • Multiple datasets covering similar subject areas
  • Mixed technical and business users

Common Exam Scenarios

Expect questions such as:

  • When to use Promoted vs Certified
  • Who is allowed to certify an item
  • Whether certification affects access permissions (it does not)
  • How endorsements support discoverability and trust

Example scenario:

Business users are building reports from multiple datasets and need guidance on which one is authoritative.
Correct concept: Certified semantic models.

Best Practices to Remember

  • Promote items early to guide reuse
  • Reserve certification for high-value, governed assets
  • Combine endorsements with:
    • Clear workspace organization
    • Descriptions and documentation
    • Proper access controls
  • Review certifications periodically to ensure relevance

Key Exam Takeaways

  • Endorsements indicate trust, not permission
  • Two endorsement levels: Promoted and Certified
  • Certification requires special authorization
  • Endorsements improve discoverability and governance in Fabric

Final Exam Tips

  • If a question is about helping users identify trusted or official data, think endorsements.
  • If it’s about restricting access, think workspace, item-level, or data-level security.
  • Know the difference between Promoted and Certified
  • Expect scenario questions about:
    • Data trust
    • Self-service vs governed BI
    • Discoverability in Data hub
  • Remember:
    • Endorsements ≠ security
    • Endorsements ≠ performance tuning
  • Certification permissions are centrally controlled

Link to documentation on this topic: Endorse your content


Practice Questions


Question 1 (Single choice)

What is the PRIMARY purpose of endorsing items in Microsoft Fabric?

A. Improve dataset refresh performance
B. Control data access permissions
C. Identify trusted and authoritative content
D. Apply compliance policies

Correct Answer: C

Explanation:
Endorsements help users quickly identify reliable, trusted content such as official semantic models and reports.


Question 2 (Multi-select)

Which endorsement types are available in Microsoft Fabric? (Select all that apply.)

A. Certified
B. Promoted
C. Verified
D. Approved

Correct Answers: A, B

Explanation:
Fabric supports Promoted and Certified endorsements. “Verified” and “Approved” are not valid endorsement types.


Question 3 (Scenario-based)

A business analyst creates a report that is useful but not officially validated. What endorsement is MOST appropriate?

A. Certified
B. Promoted
C. Deprecated
D. Restricted

Correct Answer: B

Explanation:
Promoted indicates content that is useful and recommended, but not formally governed or validated.


Question 4 (Scenario-based)

Your organization wants only centrally governed semantic models to be marked as official sources of truth. Which endorsement should be used?

A. Promoted
B. Shared
C. Certified
D. Published

Correct Answer: C

Explanation:
Certified content represents authoritative, validated data assets approved by data owners or governance teams.


Question 5 (Single choice)

Who can typically certify an item in Microsoft Fabric?

A. Any workspace Member
B. Only the item creator
C. Users authorized by tenant or workspace settings
D. External users

Correct Answer: C

Explanation:
Certification is restricted and controlled by tenant-level or workspace-level governance policies.


Question 6 (Multi-select)

Which Fabric items can be endorsed? (Select all that apply.)

A. Semantic models
B. Reports
C. Dashboards
D. Dataflows Gen2

Correct Answers: A, B, D

Explanation:
Semantic models, reports, and dataflows can be endorsed. Dashboards are less commonly emphasized in Fabric exam contexts.


Question 7 (Scenario-based)

A user searches for datasets in the Data hub. How do endorsements help in this scenario?

A. They hide non-endorsed items
B. They improve query performance
C. They help users identify trusted content
D. They automatically grant access

Correct Answer: C

Explanation:
Endorsements improve discoverability and trust, not access or performance.


Question 8 (Single choice)

What is the relationship between endorsements and security?

A. Endorsements enforce access controls
B. Endorsements replace RLS
C. Endorsements are independent of security
D. Endorsements automatically grant read access

Correct Answer: C

Explanation:
Endorsements do not control access. Security must be handled separately via permissions and access controls.


Question 9 (Scenario-based)

Your organization wants users to prefer centrally curated datasets without blocking self-service models. What approach BEST supports this?

A. Apply row-level security
B. Disable dataset creation
C. Certify governed datasets
D. Use Direct Lake mode

Correct Answer: C

Explanation:
Certifying official datasets encourages reuse while still allowing self-service analytics.


Question 10 (Fill in the blank)

In Microsoft Fabric, ________ items represent fully validated and authoritative content, while ________ items indicate recommended but not formally governed content.

Correct Answer:
Certified, Promoted

Explanation:
Certified = authoritative source of truth
Promoted = useful and recommended, but not governed


Configure version control for a workspace in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Configure version control for a workspace

Version control in Microsoft Fabric enables teams to track changes, collaborate safely, and manage the lifecycle of analytics assets using source control practices. Fabric integrates workspace items with Git repositories, bringing DevOps discipline to analytics development.

For the DP-600 exam, you should understand how Git integration works in Fabric, what items are supported, how changes flow, and common governance scenarios.

What Is Workspace Version Control in Fabric?

Workspace version control allows you to:

  • Connect a Fabric workspace to a Git repository
  • Store item definitions as code artifacts
  • Track changes through commits, branches, and pull requests
  • Support collaborative and auditable development

This capability is often referred to as Git integration for Fabric workspaces.

Supported Source Control Platform

Microsoft Fabric supports:

  • Azure DevOps (ADO) Git repositories

Key points:

  • GitHub support is limited or evolving (exam questions typically reference Azure DevOps)
  • Repositories must already exist
  • Authentication is handled via Microsoft Entra ID

Exam note: Expect Azure DevOps to be the default answer unless stated otherwise.

What Items Can Be Version Controlled?

Common Fabric items that support version control include:

  • Semantic models
  • Reports
  • Lakehouses
  • Warehouses
  • Notebooks
  • Data pipelines
  • Dataflows Gen2

Items are serialized into files and folders in the Git repo, allowing:

  • Diffing
  • History tracking
  • Rollbacks

How to Configure Version Control for a Workspace

At a high level, the process is:

  1. Open the Fabric workspace settings
  2. Enable Git integration
  3. Select:
    • Azure DevOps organization
    • Project
    • Repository
    • Branch
  4. Choose a workspace folder structure
  5. Initialize synchronization

Once configured:

  • Workspace changes can be committed to Git
  • Repo changes can be synced back into the workspace

How Changes Flow Between Workspace and Git

From Workspace to Git

  • Users make changes in Fabric (e.g., update a report)
  • Changes are committed to the connected branch
  • Commit history tracks who changed what and when

From Git to Workspace

  • Changes merged into the branch can be pulled into Fabric
  • Enables controlled deployment across environments

Important exam concept:
Synchronization is not automatic—users must explicitly commit and sync.

Branching and Environment Strategy

A common lifecycle pattern:

  • Development workspace → linked to a dev branch
  • Test workspace → linked to a test branch
  • Production workspace → linked to a main branch

This supports:

  • Code reviews
  • Pull requests
  • Controlled promotion of changes

Permissions and Governance Considerations

To configure and use version control:

  • Users need sufficient workspace permissions (typically Admin or Member)
  • Users also need Git repository access
  • Git permissions are managed outside Fabric

Version control complements—but does not replace:

  • Workspace-level access controls
  • Item-level permissions
  • Endorsements and sensitivity labels

Benefits of Version Control in Fabric

Version control enables:

  • Collaboration among multiple developers
  • Change traceability and auditability
  • Rollback of problematic changes
  • CI/CD-style deployment patterns
  • Alignment with enterprise DevOps practices

These benefits are a frequent theme in DP-600 scenario questions.

Common Exam Scenarios

You may be asked to:

  • Identify when Git integration is appropriate
  • Choose the correct platform for source control
  • Understand how changes move between Git and Fabric
  • Design a dev/test/prod workspace strategy
  • Troubleshoot why changes are not reflected (sync not performed)

Example:

Multiple developers need to work on the same semantic model with change tracking.
Correct concept: Configure workspace version control with Git.

Key Exam Takeaways

  • Fabric supports Git-based version control at the workspace level
  • Azure DevOps is the primary supported platform
  • Changes require explicit commit and sync
  • Version control supports structured development and deployment
  • It is a core part of the analytics development lifecycle

Exam Tips

  • If a question mentions tracking changes, collaboration, rollback, or DevOps practices, think workspace version control with Git.
  • If it mentions moving changes between environments, think branches and multiple workspaces.
  • Know who can configure it → Workspace Admins
  • Understand Git integration flow
  • Expect scenario questions comparing:
    • Git vs deployment pipelines
    • Collaboration vs governance
  • Remember:
    • JSON-based artifacts
    • Not all items are supported
    • No automatic commits

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of configuring version control for a Fabric workspace?

A. Improve query execution performance
B. Enable collaboration, change tracking, and rollback
C. Enforce row-level security
D. Automatically deploy content to production

Correct Answer: B

Explanation:
Version control enables source control integration, allowing teams to track changes, collaborate safely, and roll back when needed.


Question 2 (Multi-select)

Which version control systems can be integrated with Microsoft Fabric workspaces? (Select all that apply.)

A. Azure DevOps Git repositories
B. GitHub repositories
C. OneDrive for Business
D. SharePoint document libraries

Correct Answers: A, B

Explanation:
Fabric supports Git integration using Azure DevOps and GitHub. OneDrive and SharePoint are not supported for workspace version control.


Question 3 (Scenario-based)

A team wants to manage Power BI reports, semantic models, and dataflows using pull requests and branching. What should they configure?

A. Deployment pipelines
B. Sensitivity labels
C. Workspace version control with Git
D. Incremental refresh

Correct Answer: C

Explanation:
Git-based workspace version control enables branching, pull requests, and code reviews.


Question 4 (Single choice)

Which workspace role is REQUIRED to configure version control for a workspace?

A. Viewer
B. Contributor
C. Member
D. Admin

Correct Answer: D

Explanation:
Only workspace Admins can connect a workspace to a Git repository.


Question 5 (Scenario-based)

After connecting a workspace to a Git repository, where are Fabric items stored?

A. As binary files
B. As JSON-based artifact definitions
C. As SQL scripts
D. As Excel files

Correct Answer: B

Explanation:
Fabric artifacts are stored as JSON files, making them suitable for source control and comparison.


Question 6 (Multi-select)

Which items can be included in workspace version control? (Select all that apply.)

A. Reports
B. Semantic models
C. Dataflows Gen2
D. Dashboards

Correct Answers: A, B, C

Explanation:
Reports, semantic models, and dataflows are supported. Dashboards are typically excluded from version control scenarios.


Question 7 (Scenario-based)

A developer modifies a semantic model directly in the Fabric workspace while Git integration is enabled. What happens NEXT?

A. The change is automatically committed
B. The change is rejected
C. The workspace shows uncommitted changes
D. The change is immediately deployed to production

Correct Answer: C

Explanation:
Changes made in the workspace appear as pending/uncommitted changes until explicitly committed to the repository.


Question 8 (Single choice)

What is the relationship between workspace version control and deployment pipelines?

A. They are the same feature
B. Version control replaces deployment pipelines
C. They complement each other
D. Deployment pipelines require version control

Correct Answer: C

Explanation:
Version control handles source management, while deployment pipelines manage promotion across environments.


Question 9 (Scenario-based)

Your organization wants to prevent accidental overwrites when multiple developers edit the same item. Which feature BEST helps?

A. Row-level security
B. Sensitivity labels
C. Git branching and pull requests
D. Incremental refresh

Correct Answer: C

Explanation:
Git workflows enable controlled collaboration through branches, reviews, and merges.


Question 10 (Fill in the blank)

When version control is enabled, Fabric workspace changes must be ________ to the repository and ________ to update the workspace from Git.

Correct Answer:
Committed, synced (or pulled)

Explanation:
Changes flow both ways:

  • Commit workspace → Git
  • Sync Git → workspace

Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Perform impact analysis of downstream dependencies from lakehouses,
data warehouses, dataflows, and semantic models

Impact analysis in Microsoft Fabric helps analytics engineers understand how changes to upstream data assets affect downstream items such as datasets, reports, dashboards, notebooks, and pipelines. It is a critical lifecycle practice that reduces the risk of breaking analytics solutions when making schema, logic, or data changes.

For the DP-600 exam, you should understand what impact analysis is, which Fabric tools support it, what dependencies are tracked, and how to use it in real-world lifecycle scenarios.

What Is Impact Analysis?

Impact analysis answers the question:

“If I change or delete this item, what else will be affected?”

It allows you to:

  • Identify downstream dependencies
  • Assess risk before making changes
  • Communicate potential impacts to stakeholders
  • Support safe development and deployment practices

Impact analysis is observational and informational—it does not enforce controls.

Where Impact Analysis Is Used in Fabric

Impact analysis applies across many Fabric items, including:

  • Lakehouses
  • Data Warehouses
  • Dataflows Gen2
  • Semantic models
  • Reports and dashboards
  • Notebooks and pipelines

These items form a connected analytics graph, which Fabric can visualize.

Lineage View: The Core Tool for Impact Analysis

The primary tool for impact analysis in Fabric is Lineage View.

What Lineage View Shows

  • Upstream data sources
  • Transformations and processing steps
  • Downstream consumers
  • Relationships between items

Lineage view provides a visual map of dependencies across workloads.

Impact Analysis by Asset Type

Lakehouses

Changing a Lakehouse can impact:

  • Notebooks reading tables
  • Semantic models using Direct Lake
  • Dataflows writing or reading data
  • Reports built on dependent models

Common risk: Dropping or renaming a column.

Data Warehouses

Warehouse changes may affect:

  • Views and SQL queries
  • Semantic models using DirectQuery
  • Reports and dashboards
  • External tools

Exam insight: Schema changes are a common source of downstream failures.

Dataflows Gen2

Dataflows often sit between raw data and analytics.

Changes can impact:

  • Lakehouses or Warehouses they load into
  • Semantic models consuming curated tables
  • Pipelines orchestrating refreshes

Semantic Models

Semantic models are among the most sensitive assets.

Changes may affect:

  • Reports and dashboards
  • Excel workbooks
  • Composite models
  • End-user self-service analytics

Exam note: Removing measures or renaming fields is high risk.

How to Perform Impact Analysis (High Level)

  1. Select the item (Lakehouse, Warehouse, Dataflow, or Semantic Model)
  2. Open Lineage view
  3. Review downstream dependencies
  4. Identify:
    • Reports
    • Datasets
    • Pipelines
    • Other dependent items
  5. Communicate or mitigate risk before making changes

Impact Analysis in the Development Lifecycle

Impact analysis is typically performed:

  • Before deploying changes
  • Before modifying schemas
  • Before deleting items
  • During troubleshooting

It supports:

  • Safe Git commits
  • Controlled pipeline deployments
  • Production stability

Common Exam Scenarios

You may see questions such as:

  • A column change breaks multiple reports → impact analysis was skipped
  • An engineer needs to know which reports use a dataset → lineage view
  • A Lakehouse schema update affects downstream models → review dependencies
  • A dataset should not be modified due to executive reports → high downstream impact

Example:

Before removing a table from a semantic model, what should you do?
Correct concept: Perform impact analysis using lineage view.

Impact Analysis vs Deployment Pipelines

These concepts are related but distinct.

Feature | Impact Analysis | Deployment Pipelines
Purpose | Risk assessment | Controlled promotion
Enforced | No | Yes
Timing | Before changes | During deployment
Tool | Lineage view | Pipeline UI

Best Practices to Remember

  • Always check lineage before schema changes
  • Pay extra attention to semantic models and certified items
  • Communicate impacts to report owners
  • Pair impact analysis with:
    • Version control
    • Development pipelines
    • Endorsements and certification

Key Exam Takeaways

  • Impact analysis identifies downstream dependencies
  • Lineage view is the primary tool in Fabric
  • Applies to Lakehouses, Warehouses, Dataflows, and Semantic Models
  • Supports safe lifecycle and governance practices
  • A common scenario-based exam topic

Final Exam Tip

  • If a question asks what will break if I change this, the answer is impact analysis via lineage view.
  • If it asks how to safely move changes, the answer is pipelines or Git.
  • Expect questions that test:
    • When to perform impact analysis
    • Which items are affected by changes
    • Operational decision-making before deployments
  • Common traps:
    • Confusing impact analysis with lineage documentation
    • Assuming Fabric blocks breaking changes automatically
    • Forgetting semantic models are often the most impacted layer

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of impact analysis in Microsoft Fabric?

A. Improve query performance
B. Identify downstream objects affected by a change
C. Enforce data security policies
D. Reduce data refresh frequency

Correct Answer: B

Explanation:
Impact analysis helps you understand what items depend on a given artifact, so you can assess the risk of changes.

  • ❌ A: Performance tuning is separate
  • ❌ C: Security is not the focus
  • ❌ D: Refresh tuning is unrelated

Question 2 (Multi-select)

Which Fabric items can be analyzed for downstream dependencies? (Select all that apply.)

A. Lakehouses
B. Data warehouses
C. Dataflows
D. Semantic models

Correct Answers: A, B, C, D

Explanation:
Microsoft Fabric supports dependency tracking across all major analytical artifacts, enabling end-to-end lineage visibility.


Question 3 (Scenario-based)

You plan to rename a column in a lakehouse table. Which Fabric feature should you use FIRST?

A. Version control
B. Deployment pipeline
C. Impact analysis
D. Incremental refresh

Correct Answer: C

Explanation:
Renaming a column may break:

  • Semantic models
  • SQL queries
  • Reports

Impact analysis identifies what will be affected before the change.


Question 4 (Single choice)

Where do you access impact analysis for an item in Fabric?

A. Power BI Desktop
B. Microsoft Purview portal
C. Item settings in the Fabric workspace
D. Azure DevOps

Correct Answer: C

Explanation:
Impact analysis is accessible directly from the item context or settings within a Fabric workspace.

  • ❌ Purview focuses on governance/catalog
  • ❌ DevOps is not used for lineage

Question 5 (Scenario-based)

A dataflow loads data into a lakehouse that feeds multiple semantic models. What does impact analysis show?

A. Only the lakehouse
B. Only the semantic models
C. All downstream dependencies
D. Only refresh schedules

Correct Answer: C

Explanation:
Impact analysis provides a full dependency graph, showing all downstream items affected by changes.


Question 6 (Multi-select)

Which changes typically REQUIRE impact analysis before execution? (Select all that apply.)

A. Dropping columns
B. Renaming tables
C. Changing data types
D. Adding a new report page

Correct Answers: A, B, C

Explanation:
Structural changes can break dependencies. Adding a report page does not affect downstream items.


Question 7 (Scenario-based)

A semantic model is used by several reports and dashboards. What happens if you delete the model without impact analysis?

A. Nothing; reports are cached
B. Reports automatically reconnect
C. Reports and dashboards break
D. Fabric blocks the deletion

Correct Answer: C

Explanation:
Deleting a semantic model removes the data source for:

  • Reports
  • Dashboards

Impact analysis helps prevent such disruptions.


Question 8 (Single choice)

Which view best represents impact analysis results?

A. Tabular grid
B. SQL execution plan
C. Dependency graph
D. DAX query view

Correct Answer: C

Explanation:
Impact analysis is presented as a visual dependency graph, showing upstream and downstream relationships.


Question 9 (Scenario-based)

Which role MOST benefits from performing impact analysis regularly?

A. Report consumers
B. Workspace admins and data engineers
C. End-user analysts
D. External auditors

Correct Answer: B

Explanation:
Admins and engineers are responsible for:

  • Schema changes
  • Deployments
  • Stability

Impact analysis supports safe operational changes.


Question 10 (Multi-select)

Which best practices apply when using impact analysis? (Select all that apply.)

A. Perform before structural changes
B. Use in conjunction with deployment pipelines
C. Skip for minor schema updates
D. Communicate findings to stakeholders

Correct Answers: A, B, D

Explanation:
Impact analysis should:

  • Precede schema changes
  • Inform deployment decisions
  • Be communicated to stakeholders

❌ “Minor” changes can still break dependencies.


Create and Update Reusable Assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and update reusable assets, including Power BI template (.pbit)
files, Power BI data source (.pbids) files, and shared semantic models

Reusable assets are a key lifecycle concept in Microsoft Fabric and Power BI. They enable consistency, scalability, and efficiency by allowing teams to standardize how data is connected, modeled, and visualized across multiple solutions.

For the DP-600 exam, you should understand what reusable assets are, how to create and manage them, and when each type is appropriate.

What Are Reusable Assets?

Reusable assets are analytics artifacts designed to be:

  • Used by multiple users or teams
  • Reapplied across projects
  • Centrally governed and maintained

Common reusable assets include:

  • Power BI template (.pbit) files
  • Power BI data source (.pbids) files
  • Shared semantic models

Power BI Template Files (.pbit)

What Is a PBIT File?

A .pbit file is a Power BI template that contains:

  • Report layout and visuals
  • Data model structure (tables, relationships, measures)
  • Parameters and queries (without data)

It does not include actual data.

When to Use PBIT Files

PBIT files are ideal when:

  • Standardizing report design and metrics
  • Distributing reusable report frameworks
  • Supporting self-service analytics at scale
  • Onboarding new analysts

Creating and Updating PBIT Files

  • Create a report in Power BI Desktop
  • Remove data (if present)
  • Save as Power BI Template (.pbit)
  • Store in source control or shared repository
  • Update centrally and redistribute as needed

Power BI Data Source Files (.pbids)

What Is a PBIDS File?

A .pbids file is a JSON-based file that defines:

  • Data source connection details
  • Server, database, or endpoint information
  • Authentication type (but not credentials)

Opening a PBIDS file launches Power BI Desktop and guides users through connecting to the correct data source.

When to Use PBIDS Files

PBIDS files are useful for:

  • Standardizing data connections
  • Reducing configuration errors
  • Guiding business users to approved sources
  • Supporting governed self-service analytics

Managing PBIDS Files

  • Create manually or export from Power BI Desktop
  • Store centrally (e.g., Git, SharePoint)
  • Update when connection details change
  • Pair with shared semantic models where possible

Shared Semantic Models

What Are Shared Semantic Models?

Shared semantic models are centrally managed datasets that:

  • Define business logic, measures, and relationships
  • Serve as a single source of truth
  • Are reused across multiple reports

They are one of the most important reusable assets in Fabric.

Benefits of Shared Semantic Models

  • Consistent metrics across reports
  • Reduced duplication
  • Centralized governance
  • Better performance and manageability

Managing Shared Semantic Models

Shared semantic models are:

  • Developed by analytics engineers
  • Published to Fabric workspaces
  • Shared using Build permission
  • Governed with:
    • RLS and OLS
    • Sensitivity labels
    • Endorsements (Promoted/Certified)

How These Assets Work Together

A common pattern:

  • PBIDS → Standardizes connection
  • Shared semantic model → Defines logic
  • PBIT → Standardizes report layout

This layered approach is frequently tested in exam scenarios.

Reusable Assets and the Development Lifecycle

Reusable assets support:

  • Faster development
  • Consistent deployments
  • Easier maintenance
  • Scalable self-service analytics

They align naturally with:

  • PBIP projects
  • Git version control
  • Development pipelines
  • XMLA-based automation

Common Exam Scenarios

You may be asked:

  • How to distribute a standardized report template → PBIT
  • How to ensure users connect to the correct data source → PBIDS
  • How to enforce consistent business logic → Shared semantic model
  • How to reduce duplicate datasets → Shared model + Build permission

Example:

Multiple teams need to create reports using the same metrics and layout.
Correct concepts: Shared semantic model and PBIT.

Best Practices to Remember

  • Centralize ownership of shared semantic models
  • Certify trusted reusable assets
  • Store templates and PBIDS files in source control
  • Avoid duplicating business logic in individual reports
  • Pair reusable assets with governance features

Key Exam Takeaways

  • Reusable assets improve consistency and scalability
  • PBIT files standardize report design
  • PBIDS files standardize data connections
  • Shared semantic models centralize business logic
  • All are core lifecycle tools in Fabric

Exam Tips

  • If a question focuses on standardization, reuse, or self-service at scale, think PBIT, PBIDS, and shared semantic models—and choose the one that matches the problem being solved.
  • Expect scenarios that test:
    • When to use PBIT vs PBIDS vs shared semantic models
    • Governance and consistency
    • Enterprise BI scalability
  • Quick memory aid:
    • PBIT = Layout + Model (no data)
    • PBIDS = Connection only
    • Shared model = Logic once, reports many

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of a Power BI template (.pbit) file?

A. Store report data for reuse
B. Share report layout and model structure without data
C. Store credentials securely
D. Enable real-time data refresh

Correct Answer: B

Explanation:
A .pbit file contains:

  • Report layout
  • Semantic model (tables, relationships, measures)
  • No data

It’s used to standardize report creation.


Question 2 (Multi-select)

Which components are included in a Power BI template (.pbit)? (Select all that apply.)

A. Report visuals
B. Data model schema
C. Data source credentials
D. DAX measures

Correct Answers: A, B, D

Explanation:

  • Templates include visuals, schema, relationships, and measures.
  • ❌ Credentials and data are never included.

Question 3 (Scenario-based)

Your organization wants users to quickly connect to approved data sources while preventing incorrect connection strings. Which reusable asset is BEST?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: C

Explanation:
PBIDS files:

  • Predefine connection details
  • Guide users to approved data sources
  • Improve governance and consistency

Question 4 (Single choice)

Which statement about Power BI data source (.pbids) files is TRUE?

A. They contain report visuals
B. They contain DAX measures
C. They define connection metadata only
D. They store dataset refresh schedules

Correct Answer: C

Explanation:
PBIDS files only store:

  • Data source type
  • Server/database info

They do NOT include visuals, data, or logic.
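
For illustration, a .pbids file is just a small JSON document that points to a data source. A minimal sketch is shown below; the server and database names are hypothetical, and the exact JSON shape should be confirmed against current Microsoft documentation.

{
  "version": "0.1",
  "connections": [
    {
      "details": {
        "protocol": "tds",
        "address": {
          "server": "contoso-sql.database.windows.net",
          "database": "SalesDW"
        }
      },
      "mode": "Import"
    }
  ]
}

Opening a file like this in Power BI Desktop starts a new report that is already pointed at the approved source; users still supply their own credentials.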

Question 5 (Scenario-based)

You want multiple reports to use the same curated dataset to ensure consistent KPIs. What should you implement?

A. Multiple PBIX files
B. Power BI templates
C. Shared semantic model
D. PBIDS files

Correct Answer: C

Explanation:
A shared semantic model allows:

  • Centralized logic
  • Single source of truth
  • Multiple reports connected via Live/Direct Lake

Question 6 (Multi-select)

Which benefits are provided by shared semantic models? (Select all that apply.)

A. Consistent calculations across reports
B. Reduced duplication of datasets
C. Independent refresh schedules per report
D. Centralized security management

Correct Answers: A, B, D

Explanation:

  • Shared models enforce consistency and reduce maintenance.
  • ❌ Refresh is managed at the model level, not per report.

Question 7 (Scenario-based)

You update a shared semantic model’s calculation logic. What is the impact?

A. Only new reports see the change
B. All connected reports reflect the change
C. Reports must be republished
D. Only the workspace owner sees updates

Correct Answer: B

Explanation:
All reports connected to a shared semantic model automatically reflect changes.


Question 8 (Single choice)

Which reusable asset BEST supports report creation without requiring Power BI Desktop modeling skills?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: D

Explanation:
Users can build reports directly on shared semantic models using existing fields and measures.


Question 9 (Scenario-based)

You want to standardize report branding, page layout, and slicers across teams. What should you distribute?

A. PBIDS file
B. Shared semantic model
C. PBIT file
D. XMLA script

Correct Answer: C

Explanation:
PBIT files are ideal for:

  • Visual consistency
  • Reusable layouts
  • Standard filters and slicers

Question 10 (Multi-select)

Which are BEST practices when managing reusable Power BI assets? (Select all that apply.)

A. Store PBIT and PBIDS files in version control
B. Update shared semantic models directly in production without testing
C. Document reusable asset usage
D. Combine shared semantic models with deployment pipelines

Correct Answers: A, C, D

Explanation:
Best practices emphasize:

  • Governance
  • Controlled updates
  • Documentation

❌ Direct production edits increase risk.


Data Cleaning methods

Data cleaning is an essential step in the data preprocessing pipeline when preparing data for analytics or data science. It involves identifying and then correcting or removing errors, inconsistencies, and inaccuracies in a dataset to improve its quality and reliability. Data should be cleaned before it is used for analysis, reporting, development, or integration. Here are some common data cleaning methods:

Handling missing values:

  • Delete rows or columns with a high percentage of missing values if they don’t contribute significantly to the analysis.
  • Impute missing values by replacing them with a statistical measure such as mean, median, mode, or using more advanced techniques like regression imputation or k-nearest neighbors imputation.
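
For example, a minimal pandas sketch of both approaches (the file and column names are hypothetical):

import pandas as pd

df = pd.read_csv("sales.csv")  # hypothetical input file

# Drop columns where more than half of the values are missing
df = df.dropna(axis=1, thresh=int(0.5 * len(df)))

# Impute remaining gaps: median for a numeric column, mode for a categorical one
df["order_amount"] = df["order_amount"].fillna(df["order_amount"].median())
df["region"] = df["region"].fillna(df["region"].mode()[0])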

Handling categorical variables:

  • Encode categorical variables into numerical representations using techniques like one-hot encoding, label encoding, or target encoding.
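
For example, a quick pandas sketch of one-hot and label encoding (the column name is hypothetical):

import pandas as pd

df = pd.DataFrame({"region": ["East", "West", "East", "South"]})

# One-hot encoding: one 0/1 indicator column per category
encoded = pd.get_dummies(df, columns=["region"], prefix="region")

# Label encoding alternative: map each category to an integer code
df["region_code"] = df["region"].astype("category").cat.codes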

Removing duplicates:

  • Identify and remove duplicate records based on one or more key variables.
  • Be cautious when removing duplicates, as sometimes duplicated entries may be valid and intentional.
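
For example, a minimal pandas sketch, assuming (hypothetically) that customer_id and order_date identify a record:

import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical input file

# Review suspected duplicates before deleting anything
dupes = df[df.duplicated(subset=["customer_id", "order_date"], keep=False)]
print(dupes.sort_values(["customer_id", "order_date"]))

# Remove repeats on the key columns, keeping the first occurrence
df = df.drop_duplicates(subset=["customer_id", "order_date"], keep="first")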

Handling outliers:

  • Identify outliers using statistical methods like z-scores, box plots, or domain knowledge.
  • Decide whether to remove outliers or transform them based on the nature of the data and the analysis goals.
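
For example, a simple z-score sketch in pandas (the file and column names are hypothetical, and 3 is a common but arbitrary cutoff):

import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical input file

# Flag values more than 3 standard deviations from the mean
mean, std = df["amount"].mean(), df["amount"].std()
outliers = df[((df["amount"] - mean) / std).abs() > 3]

# Alternative to removal: cap extreme values at the 3-standard-deviation limits
df["amount_capped"] = df["amount"].clip(mean - 3 * std, mean + 3 * std)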

Correcting inconsistent data:

  • Standardize data formats: Convert data into a consistent format (e.g., converting dates to a specific format).
  • Resolve inconsistencies: Identify and correct inconsistent values (e.g., correcting misspelled words, merging similar categories).
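
For example, a short pandas sketch that standardizes dates and text values (the column names and the replacement mapping are hypothetical):

import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical input file

# Convert dates to a single, consistent datetime format
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Trim whitespace and standardize casing in text values
df["country"] = df["country"].str.strip().str.title()

# Merge misspellings and variants into a single category
df["country"] = df["country"].replace({"Usa": "United States", "U.S.": "United States"})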

Dealing with irrelevant or redundant features:

  • Remove irrelevant features that do not contribute to the analysis or prediction task.
  • Identify and handle redundant features that provide similar information to avoid multicollinearity issues.
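
As a rough sketch, highly correlated numeric features can be flagged with pandas (the file and column names are hypothetical, and 0.95 is an arbitrary threshold):

import numpy as np
import pandas as pd

df = pd.read_csv("features.csv")  # hypothetical input file

# Drop a feature known to be irrelevant to the analysis (hypothetical column)
df = df.drop(columns=["internal_row_guid"])

# Flag numeric columns that are highly correlated with another column
corr = df.select_dtypes("number").corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
redundant = [col for col in upper.columns if (upper[col] > 0.95).any()]
print("Candidate redundant features:", redundant)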

Data normalization or scaling:

  • Normalize numerical features to a common scale (e.g., min-max scaling or z-score normalization) to prevent certain features from dominating the analysis due to their larger magnitudes.
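
For example, both scaling approaches in pandas (the file and column names are hypothetical):

import pandas as pd

df = pd.read_csv("measurements.csv")  # hypothetical input file
col = df["revenue"]

# Min-max scaling to the 0-1 range
df["revenue_minmax"] = (col - col.min()) / (col.max() - col.min())

# Z-score normalization (mean 0, standard deviation 1)
df["revenue_zscore"] = (col - col.mean()) / col.std()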

Data integrity issues:

Finally, address any remaining data integrity issues.

  • Check for integrity problems such as inconsistent data types, incorrect data ranges, or violations of business rules.
  • Resolve integrity issues by correcting or removing the problematic data.
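
For example, a small pandas sketch of simple integrity checks (the column names and business rules are hypothetical):

import pandas as pd

df = pd.read_csv("accounts.csv")  # hypothetical input file

# Check that data types match expectations
assert pd.api.types.is_numeric_dtype(df["balance"]), "balance should be numeric"

# Check value ranges and simple business rules
bad_balance = df[(df["balance"] < 0) | (df["balance"] > 1_000_000)]
bad_dates = df[pd.to_datetime(df["open_date"], errors="coerce").isna()]

print(f"{len(bad_balance)} rows violate the balance range rule")
print(f"{len(bad_dates)} rows have an invalid open_date")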

The specific data cleaning methods a dataset needs will vary depending on the nature of the data, the analysis goals, and domain knowledge. It is best to thoroughly understand the data, and to consult with domain experts, before performing data cleaning tasks.

Quality Assurance (QA) for Data Projects or Data Applications

This post discusses Quality Assurance (QA) activities for data projects.

What is Quality Assurance (QA)?  Simply put, Quality Assurance, also called QA, Testing or Validation, is about testing an application or solution to ensure that all the stated/promised/expected requirements are met. It is a critically important activity for all software application development or implementations. Data applications are no different. They need to be tested to ensure they work as intended.

QA stands between development and deployment, and it makes the difference between a delivered product and a high-quality delivered product.

There are a number of things to keep in mind when you plan your Quality Assurance activities for data solutions. I present some of them in this post as suggestions, considerations, or prompting questions. The things mentioned here will not apply to all data applications but can be used as a guide or a check.

People / Teams

The number of people and teams involved in a project will vary depending on the size, scope and complexity of the project.

The technical team building the application needs to perform an initial level of validation of the solution.

If there is a Quality Assurance team that performs the validation tasks, then that team will need to perform the “official” validation.

The business analysts and end users of the application also need to validate. Where possible, involve as many end users as you efficiently can. The more real users you have testing the application, the better the chances of finding issues early.

Where it makes sense, Test IDs that simulate various types of users or groups should be used to help test various usage and security scenarios. This is particularly useful in automated testing.

On large projects where there is a lot to be tested, it is best to break up the testing across multiple people or teams. This helps prevent tester fatigue and sloppy testing, and results in higher-quality coverage.

Plan ahead to ensure that access for all the relevant users is set up in the testing environments.

Communication

With all the teams and people involved, it is important to have a plan for how they will communicate. Things to consider and have a plan for include:

  • How will team members communicate within a team? Email, Microsoft Teams, SharePoint, and shared files are some options.
  • How will the various teams involved communicate with each other? In other words, how will cross-team communication be handled? The same options apply.
  • How will issues and status be communicated? Weekly meetings, status emails or documents, and shared files in shared spaces are options.
  • How will changes and resolutions be tracked? Files, SDLC applications, and change management applications are options.
  • How will teams and individuals be notified when they need to perform a task? Manual communication and automated notifications from tools are options.

Data

The most important thing to ensure in data projects is that the data is high quality, particularly the “base” data set. If the base data is incorrect, everything built on top of it will be wrong as well. The correctness of intermediate and user-facing data is just as important, but validating the base data is critical to getting correct data everywhere downstream.

  • Ensure that table counts, field counts, and row counts of key data are correct.
  • Does the data warehouse data match the source data?
  • Test detailed, low-level records with small samples of data.
  • Test that the data and its values conform to what is expected, for example, that there is no data older than three years or no account values outside a certain range. The Data Governance team may become involved in these activities across all projects.
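
As an illustration, count reconciliation between the source and the warehouse can be scripted. The sketch below assumes SQLAlchemy connectivity to both systems; the connection strings, table names, and queries are hypothetical:

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connections to the source system and the data warehouse
source = create_engine("postgresql://user:pwd@source-host/sales")
warehouse = create_engine("postgresql://user:pwd@dw-host/edw")

checks = {
    "orders": ("SELECT COUNT(*) FROM orders", "SELECT COUNT(*) FROM fact_orders"),
    "customers": ("SELECT COUNT(*) FROM customers", "SELECT COUNT(*) FROM dim_customer"),
}

for name, (src_sql, dw_sql) in checks.items():
    src_count = pd.read_sql(src_sql, source).iloc[0, 0]
    dw_count = pd.read_sql(dw_sql, warehouse).iloc[0, 0]
    status = "OK" if src_count == dw_count else "MISMATCH"
    print(f"{name}: source={src_count}, warehouse={dw_count} -> {status}")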

Next in line is the “intermediate” data such as derived metrics, aggregates, specialized subsets, and more. These will also need to be verified.

  • Are the calculated values correct?
  • Are the aggregates correct? Test aggregate data with small, medium, and large sets of data.
  • Verify metric calculations.
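
For example, a stored aggregate can be recomputed from the detail data and compared, as in this pandas sketch (the file and column names are hypothetical):

import pandas as pd

# Hypothetical extracts: detail-level rows and the pre-built monthly aggregate
detail = pd.read_csv("fact_sales_detail.csv")
aggregate = pd.read_csv("agg_sales_by_month.csv")

# Recompute the aggregate from the detail and compare it to the stored values
recomputed = (
    detail.groupby("month", as_index=False)["sales_amount"]
          .sum()
          .rename(columns={"sales_amount": "recomputed_amount"})
)
compare = aggregate.merge(recomputed, on="month", how="outer")
compare["diff"] = compare["sales_amount"] - compare["recomputed_amount"]
print(compare[compare["diff"].abs() > 0.01])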

Then the user-facing data or data prepared for self-service usage needs to be validated.

  • Does the data on the dashboard match the data in the database?
  • Are the KPIs correctly reflecting the status?

Test the full flow of the data. The validity of the data should be verified at each stage of the data flow – from the source, to the staging, to the final tables in the data warehouse, to aggregates or subsets, to the dashboard.

Take snapshots of key datasets or reports so you can compare results post data migration.

Additional data preparation might be needed in some cases.

  • Make sure that you have sourced adequate data for testing. For example, if you need to test an annual trend, it is best to have at least a year’s worth of data, preferably two.
  • You may need to scramble or redact some data for testing. Often, test data is taken from the production environment and then scrambled and/or redacted so that sensitive information is not exposed.
  • You may need to temporarily load in data for testing. For various reasons, you may need to load some production data into the QA environment just to test the solution or a particular feature, and then remove the data after the testing is complete. While this can be time-consuming, it is sometimes necessary, so it is good to be aware of the need early and plan accordingly.
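
As a rough sketch of scrambling, identifying values can be replaced with a one-way hash so that joins still line up while the real values stay hidden (the file and column names are hypothetical):

import hashlib
import pandas as pd

df = pd.read_csv("prod_extract.csv")  # hypothetical production extract

# Redact columns that testers do not need at all
df = df.drop(columns=["ssn", "credit_card_number"])

# Scramble identifying values with a one-way hash so joins remain consistent
def scramble(value):
    return hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]

df["customer_name"] = df["customer_name"].map(scramble)
df["email"] = df["email"].map(scramble)

df.to_csv("qa_extract_scrambled.csv", index=False)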

Aesthetics & Representation of Data

Presentation matters. Although the most critical thing is data correctness, how the data is presented is also very important. Good presentation helps with understanding, usability, and adoption. A few things to consider include:

  • Does the application, such as a dashboard, look good? Does it look right?
  • Are the components laid out properly so that there is no overcrowding?
  • Are the logos, colors and fonts in line with company expectations?
  • Are proper chart options used to display the various types of data and metrics?
  • Is the information provided in a way that users can digest?

Usage

The data application or solution should be user friendly, preferably intuitive, or at least have good documentation. The data must be useful to the intended audience, in that it should help them understand the information and make good decisions or take sensible actions based on it.

The application should present data in a manner that is effective – easy to access, and easy to understand.

The presentation should satisfy the analytic workflows of the various users. Users should be able to logically step through the application to find information at the appropriate level of detail that they need based on their role.

A few things that affect usability include:

  • Prompts – ensure that all the proper prompts or selections are available to users to slice and filter the data as necessary. And of course, verify that they work.
  • Drill downs and drill throughs – validate that users can drill down into and across the data to find the information they need in a simple, logical manner.
  • Easy interrogation of the data – if the application is ad-hoc in nature, validate that users can navigate it or at least verify that the documentation is comprehensive enough for users to follow.

Security

Securing the application and its data so that only authorized users have access to it is critical.

Application security comprises “authentication” (access to the application) and “authorization” (what a user is authorized to do when he or she accesses the application).

Authorization (what a user is authorized to do within the application) can be broken into “object security” – what objects or features a user has access to, and “data security” – what data elements a user has access to within the various objects or features.

For example, a user has access to an application (authenticated / can log in), and within the application the user has access to (authorized to see and use) 3 of 10 reports (object-level security). The user is not authorized to see the other 7 reports (object-level security) and, therefore, will not have access to them. Now, within the 3 reports that the user has access to, he or she can only see data related to 1 of 5 departments (data-level security).

All object-level and data-level security needs to be validated. This includes negative testing: not only should you test that users have the access they need, but you should also test that users do not have access they should not have.

  • Data for testing should be scrambled or redacted as appropriate to protect it.
  • Some extremely sensitive data may need to be filtered out entirely.
  • Can all the appropriate users access the application?
  • Are non-authorized users blocked from accessing the application?
  • Can users see the data they should be able to see to perform their jobs?

Performance

Performance of the data solution is important to user efficiency and user adoption. If users cannot get the results they need in a timely manner, they will look elsewhere to get what they need. Even if they have no choice, a poorly performing application will result in wasted time and dollars.

A few things to consider for ensuring quality around performance:

  • Application usage – is the performance acceptable? Do the results get returned in an acceptable time?
  • Data Integration – is the load performance acceptable?
  • Data processing – can the application perform all the processing it needs to do in a reasonable amount of time?
  • Stress testing – how is performance with many users? How is it with a lot of data?
  • How is performance with various selections or with no selections at all?
  • Is ad-hoc usage set up to be flexible while avoiding rogue analyses that could cripple the system?
  • Is real-time analysis needed and is the application quick enough?

These items need to be validated and any issues need to be reported to the appropriate teams for performance tuning before the application is released for general usage.

Methodology

Each organization, and even each team within an organization, will have a preferred methodology for application development and change management, including how they perform QA activities.

Some things to consider include:

  • Get QA resources involved in projects early so that they gain an early understanding of the requirements and the solution, and can assess and plan how best to test.
  • When appropriate, do not wait until all testing is complete before notifying development teams of issues discovered. Notifying them early can make the difference between your project being on time or late.
  • Create a test plan and test scripts – even if they are high-level.
  • Where possible, execute tasks in an agile, iterative manner.
  • Each environment will have unique rules and guidelines that need to be validated. For example, your application may have a special naming convention, color & font guidelines, special metadata items, and more. You need to validate that these rules and guidelines are followed.
  • Use a checklist to ensure that you validate with consistency from deliverable to deliverable.
  • When the solution being developed is replacing an existing system or dataset, use the new and old solutions in parallel to validate the new against the old.
  • Document test results. All testing participants should document what has been tested and the results. This may be as simple as a checkmark or a “Done” status, but may also include things like data entered, screenshots, results, errors, and more.
  • Update the appropriate tracking tools (such as your SDLC or Change Management tools) to document changes and validation. These tools will vary from company to company, but it is best to have a trail of the development, testing, and release to production.
  • For each company and application, there will be a specific, unique set of things that need to be done. It is best to have a standard test plan or test checklist to help you confirm that you have tested all important aspects and scenarios of the application.

This is not all-encompassing coverage of Quality Assurance for data solutions, but I hope the article gives you enough information to get started, or tips for improving what you currently have in place. You can share your questions, thoughts, and input via comments on this post. Thanks for reading!

How to generate detailed Oracle BI (OBIEE) Repository Documentation

In this post, I will show the steps for using the OBIEE “Repository Documentation” utility to generate repository (RPD) lineage information. I will also provide a couple of examples of how this documentation (output file) can be used.

To access and run the Repository Documentation utility, from the BI Admin Tool menu, select Tools -> Utilities.

[Screenshot: BI Admin Tool menu – Tools > Utilities]

From the Utilities dialog, select “Repository Documentation”, and click “Execute…”

[Screenshot: Utilities dialog]

In the “Save As” dialog, select the destination and enter the name you would like for the output file.

[Screenshot: Save Repository Documentation dialog]

When it finishes, the utility generates the output CSV file. Note that this will likely be a large file, as it contains all of your repository objects.

[Screenshot: OBIEE repository documentation output file]

The RPD documentation file will contain the following columns:
Subject Area, Presentation Table, Presentation Column, Description – Presentation Column, Business Model, Derived logical table, Derived logical column, Description – Derived Logical Column, Expression, Logical Table, Logical Column, Description – Logical Column, Logical Table Source, Expression, Initialization Block, Variable, Database, Physical Catalog, Physical Schema, Physical Table, Alias, Physical Column, Description – Physical Column

You can use this file to quickly trace lineage from the physical sources to the logical columns to the presentation columns, and to identify all of the logic and variables in between.
You can also use it to identify where, and how heavily, a specific table, column, or variable is used, which helps you identify dependencies and understand the effect of changing or deleting elements.
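
As a rough sketch of the impact-analysis use case, the exported CSV can also be filtered programmatically. The file name and the physical table and column searched for below are hypothetical; the column headers come from the output described above:

import pandas as pd

# Load the exported repository documentation (file name is hypothetical)
rpd_doc = pd.read_csv("repository_documentation.csv")

# Impact analysis: find everything that depends on a given physical column
hits = rpd_doc[
    (rpd_doc["Physical Table"] == "W_ORDER_F")    # hypothetical physical table
    & (rpd_doc["Physical Column"] == "ORDER_AMT")  # hypothetical physical column
]
print(hits[["Subject Area", "Presentation Table", "Presentation Column", "Logical Column"]])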

Development, Data Governance, and Quality Assurance teams may find this information useful in this format.