Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models in Microsoft Fabric

This post is part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub, and this topic falls under these sections:
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Perform impact analysis of downstream dependencies from lakehouses,
data warehouses, dataflows, and semantic models

Impact analysis in Microsoft Fabric helps analytics engineers understand how changes to upstream data assets affect downstream items such as datasets, reports, dashboards, notebooks, and pipelines. It is a critical lifecycle practice that reduces the risk of breaking analytics solutions when making schema, logic, or data changes.

For the DP-600 exam, you should understand what impact analysis is, which Fabric tools support it, what dependencies are tracked, and how to use it in real-world lifecycle scenarios.

What Is Impact Analysis?

Impact analysis answers the question:

“If I change or delete this item, what else will be affected?”

It allows you to:

  • Identify downstream dependencies
  • Assess risk before making changes
  • Communicate potential impacts to stakeholders
  • Support safe development and deployment practices

Impact analysis is observational and informational—it does not enforce controls.

Where Impact Analysis Is Used in Fabric

Impact analysis applies across many Fabric items, including:

  • Lakehouses
  • Data Warehouses
  • Dataflows Gen2
  • Semantic models
  • Reports and dashboards
  • Notebooks and pipelines

These items form a connected analytics graph, which Fabric can visualize.

Lineage View: The Core Tool for Impact Analysis

The primary tool for impact analysis in Fabric is Lineage View.

What Lineage View Shows

  • Upstream data sources
  • Transformations and processing steps
  • Downstream consumers
  • Relationships between items

Lineage view provides a visual map of dependencies across workloads.
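
Lineage view is visual, but the same dependency metadata can also be retrieved programmatically through the Power BI admin Scanner (WorkspaceInfo) API. The following is a minimal sketch; it assumes you already hold a Microsoft Entra access token with the required admin permissions (acquisition not shown), and the workspace ID is a placeholder.

```python
# Hedged sketch: pull lineage metadata for a workspace with the Power BI
# Scanner (WorkspaceInfo) admin API. Token acquisition (e.g., via MSAL) is
# out of scope; TOKEN and WORKSPACE_ID are placeholders.
import time
import requests

TOKEN = "<entra-access-token>"
WORKSPACE_ID = "<workspace-guid>"
BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Start a scan that includes lineage between items.
scan = requests.post(
    f"{BASE}/getInfo",
    params={"lineage": "True", "datasourceDetails": "True"},
    headers=HEADERS,
    json={"workspaces": [WORKSPACE_ID]},
).json()

# 2. Poll until the scan completes, then fetch the result.
while requests.get(f"{BASE}/scanStatus/{scan['id']}",
                   headers=HEADERS).json()["status"] != "Succeeded":
    time.sleep(2)
result = requests.get(f"{BASE}/scanResult/{scan['id']}", headers=HEADERS).json()

# 3. Each item carries upstream references that you can invert into a
#    downstream (impact) map.
for ws in result.get("workspaces", []):
    for dataset in ws.get("datasets", []):
        print(dataset["name"], "<-", dataset.get("upstreamDataflows", []))
```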

Impact Analysis by Asset Type

Lakehouses

Changing a Lakehouse can impact:

  • Notebooks reading tables
  • Semantic models using Direct Lake
  • Dataflows writing or reading data
  • Reports built on dependent models

Common risk: Dropping or renaming a column.

Data Warehouses

Warehouse changes may affect:

  • Views and SQL queries
  • Semantic models using DirectQuery
  • Reports and dashboards
  • External tools

Exam insight: Schema changes are a common source of downstream failures.

Dataflows Gen2

Dataflows often sit between raw data and analytics.

Changes can impact:

  • Lakehouses or Warehouses they load into
  • Semantic models consuming curated tables
  • Pipelines orchestrating refreshes

Semantic Models

Semantic models are among the most sensitive assets.

Changes may affect:

  • Reports and dashboards
  • Excel workbooks
  • Composite models
  • End-user self-service analytics

Exam note: Removing measures or renaming fields is high risk.

How to Perform Impact Analysis (High Level)

  1. Select the item (Lakehouse, Warehouse, Dataflow, or Semantic Model)
  2. Open Lineage view
  3. Review downstream dependencies
  4. Identify:
    • Reports
    • Datasets
    • Pipelines
    • Other dependent items
  5. Communicate or mitigate risk before making changes (see the sketch below)
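
Conceptually, steps 3 and 4 are a downstream walk over a dependency graph. The sketch below is illustrative only (not a Fabric API), with made-up item names, and shows the traversal that lineage view performs for you.

```python
# Illustrative only: impact analysis as a breadth-first downstream walk.
# Edges point from an item to the items that consume it; names are made up.
from collections import deque

downstream = {
    "lakehouse.sales": ["dataflow.curate_sales", "model.sales"],
    "dataflow.curate_sales": ["model.sales"],
    "model.sales": ["report.exec_kpis", "report.regional"],
    "report.exec_kpis": [],
    "report.regional": [],
}

def impacted_items(start: str) -> list[str]:
    """Return every item reachable downstream of `start`."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        for nxt in downstream.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                order.append(nxt)
    return order

print(impacted_items("lakehouse.sales"))
# ['dataflow.curate_sales', 'model.sales', 'report.exec_kpis', 'report.regional']
```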

Impact Analysis in the Development Lifecycle

Impact analysis is typically performed:

  • Before deploying changes
  • Before modifying schemas
  • Before deleting items
  • During troubleshooting

It supports:

  • Safe Git commits
  • Controlled pipeline deployments
  • Production stability

Common Exam Scenarios

You may see questions such as:

  • A column change breaks multiple reports → impact analysis was skipped
  • An engineer needs to know which reports use a dataset → lineage view
  • A Lakehouse schema update affects downstream models → review dependencies
  • A dataset should not be modified due to executive reports → high downstream impact

Example:

Before removing a table from a semantic model, what should you do?
Correct concept: Perform impact analysis using lineage view.

Impact Analysis vs Deployment Pipelines

These concepts are related but distinct.

Feature | Impact Analysis | Deployment Pipelines
Purpose | Risk assessment | Controlled promotion
Enforced | No | Yes
Timing | Before changes | During deployment
Tool | Lineage view | Pipeline UI

Best Practices to Remember

  • Always check lineage before schema changes
  • Pay extra attention to semantic models and certified items
  • Communicate impacts to report owners
  • Pair impact analysis with:
    • Version control
    • Development pipelines
    • Endorsements and certification

Key Exam Takeaways

  • Impact analysis identifies downstream dependencies
  • Lineage view is the primary tool in Fabric
  • Applies to Lakehouses, Warehouses, Dataflows, and Semantic Models
  • Supports safe lifecycle and governance practices
  • A common scenario-based exam topic

Final Exam Tip

  • If a question asks "What will break if I change this?", the answer is impact analysis via lineage view.
  • If it asks how to safely move changes, the answer is deployment pipelines or Git.
  • Expect questions that test:
    • When to perform impact analysis
    • Which items are affected by changes
    • Operational decision-making before deployments
  • Common traps:
    • Confusing impact analysis with lineage documentation
    • Assuming Fabric blocks breaking changes automatically
    • Forgetting semantic models are often the most impacted layer

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of impact analysis in Microsoft Fabric?

A. Improve query performance
B. Identify downstream objects affected by a change
C. Enforce data security policies
D. Reduce data refresh frequency

Correct Answer: B

Explanation:
Impact analysis helps you understand what items depend on a given artifact, so you can assess the risk of changes.

  • ❌ A: Performance tuning is separate
  • ❌ C: Security is not the focus
  • ❌ D: Refresh tuning is unrelated

Question 2 (Multi-select)

Which Fabric items can be analyzed for downstream dependencies? (Select all that apply.)

A. Lakehouses
B. Data warehouses
C. Dataflows
D. Semantic models

Correct Answers: A, B, C, D

Explanation:
Microsoft Fabric supports dependency tracking across all major analytical artifacts, enabling end-to-end lineage visibility.


Question 3 (Scenario-based)

You plan to rename a column in a lakehouse table. Which Fabric feature should you use FIRST?

A. Version control
B. Deployment pipeline
C. Impact analysis
D. Incremental refresh

Correct Answer: C

Explanation:
Renaming a column may break:

  • Semantic models
  • SQL queries
  • Reports

Impact analysis identifies what will be affected before the change.


Question 4 (Single choice)

Where do you access impact analysis for an item in Fabric?

A. Power BI Desktop
B. Microsoft Purview portal
C. Item settings in the Fabric workspace
D. Azure DevOps

Correct Answer: C

Explanation:
Impact analysis is accessible directly from the item context or settings within a Fabric workspace.

  • ❌ Purview focuses on governance/catalog
  • ❌ DevOps is not used for lineage

Question 5 (Scenario-based)

A dataflow loads data into a lakehouse that feeds multiple semantic models. What does impact analysis show?

A. Only the lakehouse
B. Only the semantic models
C. All downstream dependencies
D. Only refresh schedules

Correct Answer: C

Explanation:
Impact analysis provides a full dependency graph, showing all downstream items affected by changes.


Question 6 (Multi-select)

Which changes typically REQUIRE impact analysis before execution? (Select all that apply.)

A. Dropping columns
B. Renaming tables
C. Changing data types
D. Adding a new report page

Correct Answers: A, B, C

Explanation:
Structural changes can break dependencies. Adding a report page does not affect downstream items.


Question 7 (Scenario-based)

A semantic model is used by several reports and dashboards. What happens if you delete the model without impact analysis?

A. Nothing; reports are cached
B. Reports automatically reconnect
C. Reports and dashboards break
D. Fabric blocks the deletion

Correct Answer: C

Explanation:
Deleting a semantic model removes the data source for:

  • Reports
  • Dashboards

Impact analysis helps prevent such disruptions.


Question 8 (Single choice)

Which view best represents impact analysis results?

A. Tabular grid
B. SQL execution plan
C. Dependency graph
D. DAX query view

Correct Answer: C

Explanation:
Impact analysis is presented as a visual dependency graph, showing upstream and downstream relationships.


Question 9 (Scenario-based)

Which role MOST benefits from performing impact analysis regularly?

A. Report consumers
B. Workspace admins and data engineers
C. End-user analysts
D. External auditors

Correct Answer: B

Explanation:
Admins and engineers are responsible for:

  • Schema changes
  • Deployments
  • Stability

Impact analysis supports safe operational changes.


Question 10 (Multi-select)

Which best practices apply when using impact analysis? (Select all that apply.)

A. Perform before structural changes
B. Use in conjunction with deployment pipelines
C. Skip for minor schema updates
D. Communicate findings to stakeholders

Correct Answers: A, B, D

Explanation:
Impact analysis should:

  • Precede schema changes
  • Inform deployment decisions
  • Be communicated to stakeholders

❌ “Minor” changes can still break dependencies.


Deploy and Manage Semantic Models Using the XMLA Endpoint

This post is part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub, and this topic falls under these sections:
Maintain a data analytics solution
--> Implement security and governance
--> Deploy and manage semantic models by using the XMLA endpoint

The XMLA endpoint enables advanced, enterprise-grade management of Power BI semantic models in Microsoft Fabric. It allows analytics engineers to deploy, modify, automate, and govern semantic models using external tools and scripts—bringing full ALM (Application Lifecycle Management) capabilities to analytics solutions.

For the DP-600 exam, you should understand what the XMLA endpoint is, when to use it, what it enables, and how it fits into the analytics development lifecycle.

What Is the XMLA Endpoint?

The XMLA (XML for Analysis) endpoint is a programmatic interface that exposes semantic models in Fabric as Analysis Services-compatible models.

Through the XMLA endpoint, you can:

  • Deploy semantic models
  • Modify model metadata
  • Manage partitions and refreshes
  • Automate changes across environments
  • Integrate with DevOps workflows

Exam note:
XMLA read access is enabled by default on supported capacities; read/write access must be enabled in the capacity settings before external tools can deploy or modify models.

When to Use the XMLA Endpoint

The XMLA endpoint is used when you need:

  • Advanced model editing beyond Power BI Desktop
  • Automated deployments
  • Bulk changes across models
  • Integration with CI/CD pipelines
  • Scripted refresh and partition management

It is commonly used in enterprise and large-scale deployments.

Tools That Use the XMLA Endpoint

Several tools connect to Fabric semantic models through XMLA:

  • Tabular Editor
  • SQL Server Management Studio (SSMS)
  • PowerShell scripts
  • Azure DevOps pipelines
  • Custom automation tools

These tools operate directly on the semantic model metadata.
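
From Python, one option is the third-party pyadomd package, which wraps the ADOMD.NET client libraries. The sketch below is assumption-laden: it presumes pyadomd and the Analysis Services client libraries are installed and authentication is already configured, and the workspace and model names are placeholders. The powerbi://... address is the workspace's XMLA connection string.

```python
# Hedged sketch: query a Fabric semantic model over its XMLA endpoint using
# the third-party pyadomd package (requires the .NET ADOMD.NET client).
# Workspace and model names below are placeholders.
from pyadomd import Pyadomd

conn_str = (
    "Provider=MSOLAP;"
    # Workspace XMLA address: powerbi://api.powerbi.com/v1.0/myorg/<workspace>
    "Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics;"
    "Initial Catalog=SalesModel;"  # the semantic model (database) name
)

with Pyadomd(conn_str) as conn:
    # Run a DAX query against the deployed model.
    with conn.cursor().execute('EVALUATE ROW("Total", [Total Sales])') as cur:
        print(cur.fetchall())
```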

Common XMLA-Based Management Tasks

Deploying Semantic Models

  • Push model definitions from source control
  • Promote models across Dev, Test, and Prod
  • Align models with environment-specific settings (see the TMSL sketch below)
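
A scripted deployment typically takes the form of a TMSL createOrReplace command sent to the XMLA endpoint. The minimal sketch below builds such a command, assuming the model definition is kept in source control; executing the JSON (via SSMS, Tabular Editor, or an automation step) is not shown.

```python
# Hedged sketch: build a TMSL createOrReplace command that deploys (or
# overwrites) a semantic model at the XMLA endpoint. The model body is a
# stub; in practice it is loaded from source control (e.g., a model.bim file).
import json

model_bim = {  # stub; normally json.load(open("model.bim"))
    "name": "SalesModel",
    "compatibilityLevel": 1604,
    "model": {"tables": [], "relationships": []},
}

tmsl = {
    "createOrReplace": {
        "object": {"database": "SalesModel"},  # target model in the workspace
        "database": model_bim,                 # full definition to deploy
    }
}

print(json.dumps(tmsl, indent=2))  # feed this script to an XMLA client
```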

Managing Model Metadata

  • Create or modify:
    • Measures
    • Calculated columns
    • Relationships
    • Perspectives
  • Apply bulk changes efficiently

Managing Refresh and Partitions

  • Configure incremental refresh
  • Trigger or monitor refresh operations
  • Manage large models efficiently (see the refresh sketch below)
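
Refreshes can likewise be expressed as TMSL refresh commands, which is how partition-level processing of large models is usually automated. A hedged sketch with placeholder object names:

```python
# Hedged sketch: a TMSL refresh command that fully processes one partition
# over the XMLA endpoint. Object names are placeholders; run the JSON
# through SSMS, Invoke-ASCmd, or another XMLA client.
import json

refresh_cmd = {
    "refresh": {
        "type": "full",  # other documented types: dataOnly, calculate, clearValues, ...
        "objects": [
            {
                "database": "SalesModel",
                "table": "Sales",
                "partition": "Sales 2024",  # refresh just this partition
            }
        ],
    }
}

print(json.dumps(refresh_cmd, indent=2))
```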

XMLA Endpoint and the Development Lifecycle

XMLA plays a key role in:

  • CI/CD pipelines for analytics
  • Automated model validation
  • Environment promotion
  • Controlled production updates

It complements:

  • PBIP projects
  • Git integration
  • Development pipelines

Permissions and Requirements

To use the XMLA endpoint:

  • The workspace must be assigned to a supported capacity
  • The user needs a workspace role with write access:
    • Contributor, Member, or Admin
  • Access is authenticated and governed through Fabric and Microsoft Entra ID

Exam insight:
Viewers cannot use XMLA to modify models.

XMLA Endpoint vs Power BI Desktop

Feature | Power BI Desktop | XMLA Endpoint
Visual modeling | Yes | No
Scripted changes | No | Yes
Automation | Limited | Strong
Bulk edits | No | Yes
CI/CD integration | Limited | Yes

Key takeaway:
Power BI Desktop is for design; XMLA is for enterprise management and automation.

Common Exam Scenarios

Expect questions such as:

  • Automating semantic model deployment → XMLA
  • Making bulk changes to measures → XMLA
  • Managing partitions for large models → XMLA
  • Integrating Power BI models into DevOps → XMLA
  • Editing a production model without Desktop → XMLA

Example:

A company needs to automate semantic model deployments across environments.
Correct concept: Use the XMLA endpoint.

Best Practices to Remember

  • Use XMLA for production changes and automation
  • Combine XMLA with:
    • Git repositories
    • Tabular Editor
    • Deployment pipelines
  • Limit XMLA access to trusted roles
  • Avoid manual production edits when automation is available

Key Exam Takeaways

  • XMLA enables advanced semantic model management
  • Supports automation, scripting, and CI/CD
  • Used with tools like Tabular Editor and SSMS
  • Requires appropriate permissions and capacity
  • A core ALM feature for DP-600

Exam Tips

  • If a question mentions automation, scripting, bulk model changes, or CI/CD, the answer is almost always the XMLA endpoint.
  • If it mentions visual report design, the answer is Power BI Desktop.
  • Expect questions that test:
    • When to use XMLA vs Power BI Desktop
    • Tool selection (Tabular Editor vs pipelines)
    • Security and permissions
    • Enterprise deployment scenarios
  • High-value keywords to remember:
    • XMLA • TMSL • External tools • CI/CD • Metadata management

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of the XMLA endpoint in Microsoft Fabric?

A. Enable SQL querying of lakehouses
B. Provide programmatic management of semantic models
C. Secure data using row-level security
D. Schedule data refreshes

Correct Answer: B

Explanation:
The XMLA endpoint enables advanced management and deployment of semantic models using tools such as:

  • Tabular Editor
  • SQL Server Management Studio (SSMS)
  • PowerShell scripts

Question 2 (Multi-select)

Which tools can connect to a Fabric semantic model via the XMLA endpoint? (Select all that apply.)

A. Tabular Editor
B. SQL Server Management Studio (SSMS)
C. Power BI Desktop
D. Azure Data Studio

Correct Answers: A, B

Explanation:

  • Tabular Editor and SSMS use XMLA to manage models.
  • ❌ Power BI Desktop uses a local model, not XMLA.
  • ❌ Azure Data Studio does not manage semantic models via XMLA.

Question 3 (Scenario-based)

You want to deploy a semantic model from Development to Production while preserving model metadata. What is the BEST approach?

A. Export and re-import a PBIX file
B. Use deployment pipelines only
C. Use XMLA with model scripting
D. Rebuild the model manually

Correct Answer: C

Explanation:
XMLA enables:

  • Model scripting (TMSL)
  • Metadata-preserving deployments
  • Controlled promotion across environments

Question 4 (Single choice)

Which capability requires the XMLA endpoint to be enabled?

A. Creating reports
B. Editing DAX measures outside Power BI Desktop
C. Viewing model lineage
D. Applying sensitivity labels

Correct Answer: B

Explanation:
Editing measures, calculation groups, and partitions using external tools requires XMLA connectivity.


Question 5 (Scenario-based)

An enterprise team wants to automate semantic model deployment through CI/CD pipelines. Which XMLA-based artifact is MOST commonly used?

A. PBIP project file
B. TMSL scripts
C. DAX Studio queries
D. SQL views

Correct Answer: B

Explanation:
Tabular Model Scripting Language (TMSL) is the standard XMLA-based format for:

  • Creating
  • Updating
  • Deploying semantic models programmatically

Question 6 (Multi-select)

Which operations can be performed through the XMLA endpoint? (Select all that apply.)

A. Create and modify measures
B. Configure partitions and refresh policies
C. Apply row-level security
D. Build report visuals

Correct Answers: A, B, C

Explanation:
XMLA supports model-level operations. Report visuals are created in Power BI reports, not via XMLA.


Question 7 (Scenario-based)

You attempt to connect to a semantic model via XMLA but the connection fails. What is the MOST likely cause?

A. XMLA endpoint is disabled for the workspace
B. Dataset refresh is in progress
C. Data source credentials are missing
D. The report is unpublished

Correct Answer: A

Explanation:
XMLA must be:

  • Enabled at the capacity or workspace level
  • Supported by the Fabric SKU

Question 8 (Single choice)

Which security requirement applies when using the XMLA endpoint?

A. Viewer permissions are sufficient
B. Read permission only
C. Contributor or higher workspace role
D. Report Builder permissions

Correct Answer: C

Explanation:
Managing semantic models via XMLA requires Contributor, Member, or Admin roles.


Question 9 (Scenario-based)

A developer edits calculation groups using Tabular Editor via XMLA. What happens after saving changes?

A. Changes remain local only
B. Changes are immediately published to the semantic model
C. Changes require a dataset refresh to apply
D. Changes are stored in the PBIX file

Correct Answer: B

Explanation:
Edits made via XMLA tools apply directly to the deployed semantic model in Fabric.


Question 10 (Multi-select)

Which are BEST practices when managing semantic models using XMLA? (Select all that apply.)

A. Use source control for TMSL scripts
B. Limit XMLA access to production workspaces
C. Make direct changes in production without testing
D. Combine XMLA with deployment pipelines

Correct Answers: A, B, D

Explanation:
Best practices include:

  • Version control
  • Controlled access
  • Structured deployments

❌ Direct production changes without testing increase risk.


Create and Update Reusable Assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models in Microsoft Fabric

This post is part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub, and this topic falls under these sections:
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and update reusable assets, including Power BI template (.pbit)
files, Power BI data source (.pbids) files, and shared semantic models

Reusable assets are a key lifecycle concept in Microsoft Fabric and Power BI. They enable consistency, scalability, and efficiency by allowing teams to standardize how data is connected, modeled, and visualized across multiple solutions.

For the DP-600 exam, you should understand what reusable assets are, how to create and manage them, and when each type is appropriate.

What Are Reusable Assets?

Reusable assets are analytics artifacts designed to be:

  • Used by multiple users or teams
  • Reapplied across projects
  • Centrally governed and maintained

Common reusable assets include:

  • Power BI template (.pbit) files
  • Power BI data source (.pbids) files
  • Shared semantic models

Power BI Template Files (.pbit)

What Is a PBIT File?

A .pbit file is a Power BI template that contains:

  • Report layout and visuals
  • Data model structure (tables, relationships, measures)
  • Parameters and queries (without data)

It does not include actual data.

When to Use PBIT Files

PBIT files are ideal when:

  • Standardizing report design and metrics
  • Distributing reusable report frameworks
  • Supporting self-service analytics at scale
  • Onboarding new analysts

Creating and Updating PBIT Files

  • Create a report in Power BI Desktop
  • Remove data (if present)
  • Save as Power BI Template (.pbit)
  • Store in source control or shared repository
  • Update centrally and redistribute as needed (see the inspection sketch below)
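
Because a .pbit file is a zip container, you can sanity-check that a template ships layout and model metadata but no data. The entry names in this sketch (DataModelSchema, Report/Layout) reflect the format as commonly observed and should be treated as assumptions rather than a contract.

```python
# Hedged sketch: inspect a template's contents to confirm it stores schema
# and layout, not rows of data. The file name is hypothetical.
import zipfile

with zipfile.ZipFile("SalesTemplate.pbit") as pbit:
    for entry in pbit.infolist():
        print(f"{entry.filename:<35} {entry.file_size:>10} bytes")
    # Expect model structure (DataModelSchema) and visuals (Report/Layout),
    # but no bulky data payload -- templates store schema, not data.
```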

Power BI Data Source Files (.pbids)

What Is a PBIDS File?

A .pbids file is a JSON-based file that defines:

  • Data source connection details
  • Server, database, or endpoint information
  • Authentication type (but not credentials)

Opening a PBIDS file launches Power BI Desktop and guides users through connecting to the correct data source.
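
Because the format is plain JSON, PBIDS files are easy to generate and distribute from a script. The sketch below follows the documented PBIDS shape; the server and database values are placeholders for your environment.

```python
# Hedged sketch: generate a .pbids file pointing users at an approved SQL
# endpoint. Connection values are placeholders.
import json

pbids = {
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "tds",  # SQL Server / warehouse SQL endpoint
                "address": {
                    "server": "myserver.datawarehouse.fabric.microsoft.com",
                    "database": "SalesDW",
                },
            },
            "options": {},
            "mode": "DirectQuery",  # or "Import"
        }
    ],
}

with open("SalesDW.pbids", "w") as f:
    json.dump(pbids, f, indent=2)  # double-clicking opens Power BI Desktop
```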

When to Use PBIDS Files

PBIDS files are useful for:

  • Standardizing data connections
  • Reducing configuration errors
  • Guiding business users to approved sources
  • Supporting governed self-service analytics

Managing PBIDS Files

  • Create manually or export from Power BI Desktop
  • Store centrally (e.g., Git, SharePoint)
  • Update when connection details change
  • Pair with shared semantic models where possible

Shared Semantic Models

What Are Shared Semantic Models?

Shared semantic models are centrally managed datasets that:

  • Define business logic, measures, and relationships
  • Serve as a single source of truth
  • Are reused across multiple reports

They are one of the most important reusable assets in Fabric.

Benefits of Shared Semantic Models

  • Consistent metrics across reports
  • Reduced duplication
  • Centralized governance
  • Better performance and manageability

Managing Shared Semantic Models

Shared semantic models are:

  • Developed by analytics engineers
  • Published to Fabric workspaces
  • Shared using Build permission (see the rebind sketch after this list)
  • Governed with:
    • RLS and OLS
    • Sensitivity labels
    • Endorsements (Promoted/Certified)
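
Re-pointing an existing report at a shared semantic model can also be scripted with the Power BI REST "Rebind Report In Group" API. A minimal sketch follows; token acquisition is not shown, the GUIDs are placeholders, and the caller needs write access to the report plus Build permission on the target model.

```python
# Hedged sketch: rebind a report to a shared semantic model via the Power BI
# REST API. TOKEN and the GUIDs are placeholders.
import requests

TOKEN = "<entra-access-token>"
GROUP_ID = "<workspace-guid>"
REPORT_ID = "<report-guid>"
SHARED_MODEL_ID = "<semantic-model-guid>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/reports/{REPORT_ID}/Rebind",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"datasetId": SHARED_MODEL_ID},
)
resp.raise_for_status()  # success means the report now uses the shared model
```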

How These Assets Work Together

A common pattern:

  • PBIDS → Standardizes connection
  • Shared semantic model → Defines logic
  • PBIT → Standardizes report layout

This layered approach is frequently tested in exam scenarios.

Reusable Assets and the Development Lifecycle

Reusable assets support:

  • Faster development
  • Consistent deployments
  • Easier maintenance
  • Scalable self-service analytics

They align naturally with:

  • PBIP projects
  • Git version control
  • Development pipelines
  • XMLA-based automation

Common Exam Scenarios

You may be asked:

  • How to distribute a standardized report template → PBIT
  • How to ensure users connect to the correct data source → PBIDS
  • How to enforce consistent business logic → Shared semantic model
  • How to reduce duplicate datasets → Shared model + Build permission

Example:

Multiple teams need to create reports using the same metrics and layout.
Correct concepts: Shared semantic model and PBIT.

Best Practices to Remember

  • Centralize ownership of shared semantic models
  • Certify trusted reusable assets
  • Store templates and PBIDS files in source control
  • Avoid duplicating business logic in individual reports
  • Pair reusable assets with governance features

Key Exam Takeaways

  • Reusable assets improve consistency and scalability
  • PBIT files standardize report design
  • PBIDS files standardize data connections
  • Shared semantic models centralize business logic
  • All are core lifecycle tools in Fabric

Exam Tips

  • If a question focuses on standardization, reuse, or self-service at scale, think PBIT, PBIDS, and shared semantic models—and choose the one that matches the problem being solved.
  • Expect scenarios that test:
    • When to use PBIT vs PBIDS vs shared semantic models
    • Governance and consistency
    • Enterprise BI scalability
  • Quick memory aid:
    • PBIT = Layout + Model (no data)
    • PBIDS = Connection only
    • Shared model = Logic once, reports many

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of a Power BI template (.pbit) file?

A. Store report data for reuse
B. Share report layout and model structure without data
C. Store credentials securely
D. Enable real-time data refresh

Correct Answer: B

Explanation:
A .pbit file contains:

  • Report layout
  • Semantic model (tables, relationships, measures)
  • No data

It’s used to standardize report creation.


Question 2 (Multi-select)

Which components are included in a Power BI template (.pbit)? (Select all that apply.)

A. Report visuals
B. Data model schema
C. Data source credentials
D. DAX measures

Correct Answers: A, B, D

Explanation:

  • Templates include visuals, schema, relationships, and measures.
  • ❌ Credentials and data are never included.

Question 3 (Scenario-based)

Your organization wants users to quickly connect to approved data sources while preventing incorrect connection strings. Which reusable asset is BEST?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: C

Explanation:
PBIDS files:

  • Predefine connection details
  • Guide users to approved data sources
  • Improve governance and consistency

Question 4 (Single choice)

Which statement about Power BI data source (.pbids) files is TRUE?

A. They contain report visuals
B. They contain DAX measures
C. They define connection metadata only
D. They store dataset refresh schedules

Correct Answer: C

Explanation:
PBIDS files only store:

  • Data source type
  • Server/database info

They do NOT include visuals, data, or logic.

Question 5 (Scenario-based)

You want multiple reports to use the same curated dataset to ensure consistent KPIs. What should you implement?

A. Multiple PBIX files
B. Power BI templates
C. Shared semantic model
D. PBIDS files

Correct Answer: C

Explanation:
A shared semantic model allows:

  • Centralized logic
  • Single source of truth
  • Multiple reports connected via a live connection or Direct Lake

Question 6 (Multi-select)

Which benefits are provided by shared semantic models? (Select all that apply.)

A. Consistent calculations across reports
B. Reduced duplication of datasets
C. Independent refresh schedules per report
D. Centralized security management

Correct Answers: A, B, D

Explanation:

  • Shared models enforce consistency and reduce maintenance.
  • ❌ Refresh is managed at the model level, not per report.

Question 7 (Scenario-based)

You update a shared semantic model’s calculation logic. What is the impact?

A. Only new reports see the change
B. All connected reports reflect the change
C. Reports must be republished
D. Only the workspace owner sees updates

Correct Answer: B

Explanation:
All reports connected to a shared semantic model automatically reflect changes.


Question 8 (Single choice)

Which reusable asset BEST supports report creation without requiring Power BI Desktop modeling skills?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: D

Explanation:
Users can build reports directly on shared semantic models using existing fields and measures.


Question 9 (Scenario-based)

You want to standardize report branding, page layout, and slicers across teams. What should you distribute?

A. PBIDS file
B. Shared semantic model
C. PBIT file
D. XMLA script

Correct Answer: C

Explanation:
PBIT files are ideal for:

  • Visual consistency
  • Reusable layouts
  • Standard filters and slicers

Question 10 (Multi-select)

Which are BEST practices when managing reusable Power BI assets? (Select all that apply.)

A. Store PBIT and PBIDS files in version control
B. Update shared semantic models directly in production without testing
C. Document reusable asset usage
D. Combine shared semantic models with deployment pipelines

Correct Answers: A, C, D

Explanation:
Best practices emphasize:

  • Governance
  • Controlled updates
  • Documentation

❌ Direct production edits increase risk.