Category: Microsoft Fabric

Implement item-level access controls in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Implement item-level access controls

To Do:
Complete the related module for this topic in the Microsoft Learn course: Secure data access in Microsoft Fabric

Item-level access controls in Microsoft Fabric determine who can access or interact with specific items inside a workspace, rather than the entire workspace. Items include reports, semantic models, Lakehouses, Warehouses, notebooks, pipelines, dashboards, and other Fabric artifacts.

For the DP-600 exam, it’s important to understand how item-level permissions differ from workspace roles, when to use them, and how they interact with data-level security such as RLS.

What Are Item-Level Access Controls?

Item-level access controls:

  • Apply to individual Fabric items
  • Are more granular than workspace-level roles
  • Allow selective sharing without granting broad workspace access

They are commonly used when:

  • Users need access to one report or dataset, not the whole workspace
  • Consumers should view content without seeing development artifacts
  • External or business users need limited access

Common Items That Support Item-Level Permissions

In Microsoft Fabric, item-level permissions can be applied to:

  • Power BI reports
  • Semantic models (datasets)
  • Dashboards
  • Lakehouses and Warehouses
  • Notebooks and pipelines (via workspace + item context)

The most frequently tested scenarios in DP-600 involve reports and semantic models.

Sharing Reports and Dashboards

Report Sharing

Reports can be shared directly with users or groups.

When you share a report:

  • Users can be granted View or Reshare permissions
  • The report appears in the recipient’s “Shared with me” section
  • Access does not automatically grant workspace access

Exam considerations

  • Sharing a report does not grant edit permissions
  • Sharing does not bypass data-level security (RLS still applies)
  • Users must also have access to the underlying semantic model

Semantic Model (Dataset) Permissions

Semantic models support explicit permissions that control how users interact with data.

Common permissions include:

  • Read – View and query the model
  • Build – Create reports using the model
  • Write – Modify the model (typically for owners)
  • Reshare – Share the model with others

Typical use cases

  • Allow analysts to build their own reports (Build permission)
  • Allow consumers to view reports without building new ones
  • Restrict direct querying of datasets

Exam tips

  • Build permission is required for “Analyze in Excel” and report creation
  • RLS and OLS are enforced at the semantic model level
  • Dataset permissions can be granted independently of report sharing
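
To make this concrete, here is a minimal sketch of granting Build-level access on a semantic model through the Power BI REST API. The workspace and dataset IDs and the user UPN are placeholders, and the exact access-right string ("ReadExplore" for Read + Build) should be verified against the current API reference:

```python
# A minimal sketch: granting Build (Read + Explore) on a semantic model
# via the Power BI REST API. IDs and the UPN are placeholders; verify the
# exact access-right enum values in the API reference.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default").token

workspace_id = "<workspace-guid>"  # placeholder
dataset_id = "<dataset-guid>"      # placeholder

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/datasets/{dataset_id}/users",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "identifier": "analyst@contoso.com",      # user or group to grant
        "principalType": "User",
        "datasetUserAccessRight": "ReadExplore",  # Read + Build
    },
)
resp.raise_for_status()
```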

Item-Level Access vs Workspace-Level Roles

Understanding this distinction is critical for the exam.

| Feature | Workspace-Level Access | Item-Level Access |
|---|---|---|
| Scope | Entire workspace | Single item |
| Typical roles | Admin, Member, Contributor, Viewer | View, Build, Reshare |
| Best for | Team collaboration | Targeted sharing |
| Granularity | Coarse | Fine-grained |

Key exam insight:
Item-level access does not override workspace permissions. A user cannot edit an item if their workspace role is Viewer, even if the item is shared.
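
For contrast, assigning a workspace-level role uses the groups (workspaces) API rather than a per-item grant. A minimal sketch, with placeholder IDs:

```python
# A minimal sketch: assigning the workspace-level Viewer role with the
# Power BI "Add Group User" API. Unlike the item-level grant above, this
# applies to every item in the workspace. IDs are placeholders.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default").token

workspace_id = "<workspace-guid>"  # placeholder

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/users",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "identifier": "consumer@contoso.com",
        "principalType": "User",
        "groupUserAccessRight": "Viewer",  # a workspace role, not an item permission
    },
)
resp.raise_for_status()
```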

Interaction with Data-Level Security

Item-level access works together with:

  • Row-Level Security (RLS)
  • Column-Level Security (CLS)
  • Object-Level Security (OLS)

Important behaviors:

  • Sharing a report does not expose restricted rows or columns
  • RLS is evaluated based on the user’s identity
  • Item access only determines whether a user can query the item, not what data they see

Common Exam Scenarios

You may encounter questions such as:

  • A user can see a report but cannot build a new one → missing Build permission
  • A user has report access but sees no data → likely RLS
  • A business user needs access to one report only → item-level sharing, not workspace access
  • An analyst can’t query a dataset in Excel → lacks Build permission

Best Practices to Remember

  • Use item-level access for consumers and ad-hoc sharing
  • Use workspace roles for development teams
  • Assign permissions to Entra ID security groups when possible
  • Always pair item access with appropriate semantic model permissions

Key Exam Takeaways

  • Item-level access controls provide fine-grained security
  • Reports and semantic models are the most tested items
  • Build permission is critical for self-service analytics
  • Item-level access complements, but does not replace, workspace roles

Exam Tips

  • Think “Can they see the object at all?”
  • Combine:
    • Workspace roles → broad access
    • Item-level access → fine-grained control
    • RLS/CLS → data-level restrictions
  • Expect scenarios involving:
    • Preventing access to lakehouses
    • Separating authors from consumers
    • Protecting production assets
  • If a question asks who can view or build from a specific report or dataset without granting workspace access, the correct answer almost always involves item-level access controls.

Practice Questions:

Question 1 (Single choice)

What is the PRIMARY purpose of item-level access controls in Microsoft Fabric?

A. Control which rows a user can see
B. Control which columns a user can see
C. Control access to specific workspace items
D. Control DAX query execution speed

Correct Answer: C

Explanation:

  • Item-level access controls determine who can access specific items (lakehouses, warehouses, semantic models, notebooks, reports).
  • Row-level and column-level security are semantic model features, not item-level controls.

Question 2 (Scenario-based)

A user should be able to view reports but must NOT access the underlying lakehouse or semantic model. Which control should you use?

A. Workspace Viewer role
B. Item-level permissions on the lakehouse and semantic model
C. Row-level security
D. Column-level security

Correct Answer: B

Explanation:

  • Item-level access allows you to block direct access to specific items even when the user has workspace access.
  • Viewer role alone may still expose certain metadata.

Question 3 (Multi-select)

Which Fabric items support item-level access control? (Select all that apply.)

A. Lakehouses
B. Warehouses
C. Semantic models
D. Power BI reports

Correct Answers: A, B, C, D

Explanation:

  • Item-level access can be applied to most Fabric artifacts, including data storage, models, and reports.
  • This allows fine-grained governance beyond workspace roles.

Question 4 (Scenario-based)

You want data engineers to manage a lakehouse, but analysts should only consume a semantic model built on top of it. What is the BEST approach?

A. Assign Analysts as Workspace Viewers
B. Deny item-level access to the lakehouse for Analysts
C. Use Row-Level Security only
D. Disable SQL endpoint access

Correct Answer: B

Explanation:

  • Analysts can access the semantic model while being explicitly denied access to the lakehouse via item-level permissions.
  • This is a common enterprise pattern in Fabric.

Question 5 (Single choice)

Which permission is required for a user to edit or manage an item at the item level?

A. Read
B. View
C. Write
D. Execute

Correct Answer: C

Explanation:

  • Write permissions allow editing, updating, or managing an item.
  • Read/View permissions are consumption-only.

Question 6 (Scenario-based)

A user can see a report but receives an error when trying to connect to its semantic model using Power BI Desktop. Why?

A. XMLA endpoint is disabled
B. They lack item-level permission on the semantic model
C. The dataset is in Direct Lake mode
D. The report uses DirectQuery

Correct Answer: B

Explanation:

  • Viewing a report does not automatically grant access to the underlying semantic model.
  • Item-level access must explicitly allow it.

Question 7 (Multi-select)

Which statements about workspace access vs item-level access are TRUE? (Select all that apply.)

A. Workspace access automatically grants access to all items
B. Item-level access can further restrict workspace permissions
C. Item-level access overrides Row-Level Security
D. Workspace roles are broader than item-level permissions

Correct Answers: B, D

Explanation:

  • Workspace roles define baseline access.
  • Item-level access can tighten restrictions on specific assets.
  • RLS still applies within semantic models.

Question 8 (Scenario-based)

You want to prevent accidental modification of a production semantic model while still allowing users to query it. What should you do?

A. Assign Viewer role at the workspace level
B. Grant Read permission at the item level
C. Disable the SQL endpoint
D. Remove the semantic model

Correct Answer: B

Explanation:

  • Read item-level permission allows querying and consumption without edit rights.
  • This is safer than relying on workspace roles alone.

Question 9 (Single choice)

Which security layer is MOST appropriate for restricting access to entire objects rather than data within them?

A. Row-level security
B. Column-level security
C. Object-level security
D. Item-level access control

Correct Answer: D

Explanation:

  • Item-level access controls whether a user can access an object at all.
  • Object-level security applies inside semantic models.

Question 10 (Scenario-based)

A compliance requirement states that only approved users can access notebooks in a workspace. What is the BEST solution?

A. Place notebooks in a separate workspace
B. Apply item-level access controls to notebooks
C. Use Row-Level Security
D. Restrict workspace Viewer access

Correct Answer: B

Explanation:

  • Item-level access allows targeted restriction without restructuring workspaces.
  • This is the preferred Fabric governance approach.

Implement Row-Level, Column-Level, Object-Level, and File-Level Access Controls in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Implement row-level, column-level, object-level, and file-level access control

To Do:
Complete the related module for this topic in the Microsoft Learn course: Secure data access in Microsoft Fabric

Security and governance are foundational responsibilities of a Fabric Analytics Engineer. Microsoft Fabric provides multiple layers of access control to ensure users can only see and interact with the data they are authorized to access. For the DP-600 exam, it is important to understand what each access control type does, where it is applied, and when to use it.

1. Row-Level Security (RLS)

What it is

Row-Level Security (RLS) restricts access to specific rows in a table based on the identity or role of the user querying the data.

Where it is implemented

  • Power BI semantic models (datasets)
  • Direct Lake or Import models in Fabric
  • Applies at query time

How it works

  • You define DAX filter expressions on tables.
  • Users are assigned to roles, and those roles determine which rows are visible.
  • The filtering is enforced automatically whenever the model is queried.
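
To make the static vs dynamic distinction concrete, here is a sketch of how such role definitions might look in TMSL-style model metadata. The table and column names are hypothetical:

```python
# A sketch of RLS role definitions as TMSL-style JSON. Table and column
# names are hypothetical. The dynamic filter compares each row's rep email
# to the signed-in user's identity at query time.
dynamic_rls_role = {
    "name": "RegionalSalesReps",
    "modelPermission": "read",
    "tablePermissions": [
        {
            "name": "Sales",  # hypothetical fact table
            "filterExpression": "Sales[SalesRepEmail] = USERPRINCIPALNAME()",
        }
    ],
}

# A static role, by contrast, hard-codes the filter value:
static_rls_role = {
    "name": "WestRegionOnly",
    "modelPermission": "read",
    "tablePermissions": [
        {"name": "Sales", "filterExpression": "Sales[Region] = \"West\""}
    ],
}
```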

Common use cases

  • Sales users see only their assigned regions
  • Managers see only their department’s data
  • Multi-tenant reporting scenarios

Exam tips

  • RLS filters rows, not columns
  • RLS is evaluated dynamically based on user context
  • Know the difference between static RLS (hard-coded filters) and dynamic RLS (based on USERPRINCIPALNAME() or lookup tables)

2. Column-Level Security (CLS)

What it is

Column-Level Security (CLS) restricts access to specific columns within a table, preventing sensitive fields from being exposed.

Where it is implemented

  • Power BI semantic models
  • Defined within the model, not in reports

How it works

  • Columns are marked as hidden for certain roles
  • Users in those roles cannot query or visualize the restricted columns

Common use cases

  • Hiding personally identifiable information (PII)
  • Restricting access to salary, cost, or confidential metrics

Exam tips

  • CLS does not hide entire rows
  • Users without access cannot bypass CLS using visuals or queries
  • CLS is evaluated before data reaches the report layer

3. Object-Level Security (OLS)

What it is

Object-Level Security (OLS) controls access to entire objects within a semantic model, such as:

  • Tables
  • Columns
  • Measures

Where it is implemented

  • Power BI semantic models in Fabric
  • Typically managed using external tools or advanced model editing

How it works

  • Objects are explicitly denied to specific roles
  • Denied objects are completely invisible to the user

Common use cases

  • Hiding technical or staging tables
  • Preventing access to internal calculation measures
  • Supporting multiple audiences from the same model

Exam tips

  • OLS is stronger than CLS (objects are invisible, not just hidden)
  • OLS affects metadata discovery
  • Users cannot query objects they do not have access to
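
As a sketch of what an OLS rule looks like in TMSL-style role metadata (the table name is hypothetical; verify the exact schema before relying on it):

```python
# A sketch of an OLS rule in TMSL-style role metadata: setting
# metadataPermission to "none" makes the object invisible to the role.
# Table name is hypothetical.
ols_role = {
    "name": "BusinessUsers",
    "modelPermission": "read",
    "tablePermissions": [
        {
            "name": "StagingCosts",        # hypothetical internal table
            "metadataPermission": "none",  # table is hidden and unqueryable for this role
        }
    ],
}
```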

4. File-Level Access Controls

What it is

File-level access control governs who can access files stored in OneLake, including:

  • Lakehouse files
  • Warehouse data
  • Files accessed via notebooks or Spark jobs

Where it is implemented

  • OneLake
  • Workspace permissions
  • Underlying Azure Data Lake Gen2 permission model

How it works

  • Permissions are assigned at:
    • Workspace level
    • Item level (Lakehouse, Warehouse)
    • Folder or file level (where applicable)
  • Uses role-based access control (RBAC)
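
Because OneLake exposes an ADLS Gen2-compatible endpoint, file access can be exercised (and therefore tested) with standard Azure SDK tooling. A minimal sketch, with placeholder workspace, lakehouse, and file names; whether the read succeeds depends entirely on the caller's workspace and item permissions:

```python
# A minimal sketch: reading a lakehouse file through OneLake's ADLS Gen2-
# compatible DFS endpoint using azure-storage-file-datalake. Names below
# are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the "file system" is the workspace and paths are item-relative.
fs = service.get_file_system_client("MyWorkspace")  # placeholder workspace
file_client = fs.get_file_client("MyLakehouse.Lakehouse/Files/raw/orders.parquet")

data = file_client.download_file().readall()
print(f"Downloaded {len(data)} bytes")
```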

Common use cases

  • Restricting raw data access to engineers only
  • Allowing analysts read-only access to curated zones
  • Enforcing separation between development and production data

Exam tips

  • File-level security applies before data reaches semantic models
  • Workspace roles (Admin, Member, Contributor, Viewer) matter
  • OneLake follows a centralized storage model across Fabric workloads

Key Comparisons to Remember for the Exam

| Security Type | Scope | Enforced At | Typical Use |
|---|---|---|---|
| Row-Level (RLS) | Rows | Query time | User-specific data filtering |
| Column-Level (CLS) | Columns | Model level | Protect sensitive fields |
| Object-Level (OLS) | Tables, columns, measures | Model metadata | Hide entire objects |
| File-Level | Files and folders | Storage/workspace | Control raw and curated data access |

How This Fits into Fabric Governance

In Microsoft Fabric, these access controls work together:

  • File-level security protects data at rest
  • Object-, column-, and row-level security protect data at the semantic model layer
  • Workspace roles govern who can create, modify, or consume items

For the DP-600 exam, expect scenario-based questions that test:

  • Choosing the right level of security
  • Understanding where security is enforced
  • Knowing limitations and interactions between security types

Final Exam Tips

If the question mentions who can see which data values, think RLS or CLS.
If it mentions who can see which objects, think OLS.
If it mentions access to files or raw data, think file-level and workspace permissions.

DP-600 Exam Strategy Notes

  • Security evaluation order (exam favorite):
    1. Workspace access
    2. Item-level access
    3. Object-level security
    4. Column-level security
    5. Row-level security
  • Use:
    • RLS → Who sees which rows?
    • CLS → Who sees which columns?
    • OLS → Who sees which tables/measures?
    • File-level → Who sees which files?


Practice Questions

Question 1 (Single choice)

Which access control mechanism restricts which rows of data a user can see in a semantic model?

A. Column-level security
B. Object-level security
C. Row-level security
D. Item-level access

Correct Answer: C

Explanation:

  • Row-level security (RLS) filters rows dynamically based on user identity.
  • CLS restricts columns, OLS restricts objects, and item-level controls access to the artifact itself.

Question 2 (Scenario-based)

A sales manager should only see sales data for their assigned region across all reports. Which solution should you implement?

A. Column-level security
B. Row-level security with dynamic DAX
C. Object-level security
D. Workspace Viewer role

Correct Answer: B

Explanation:

  • Dynamic RLS uses functions like USERPRINCIPALNAME() to filter rows per user.
  • Workspace roles do not filter data.

Question 3 (Multi-select)

Which security types are configured within a Power BI semantic model? (Select all that apply.)

A. Row-level security
B. Column-level security
C. Object-level security
D. File-level security

Correct Answers: A, B, C

Explanation:

  • RLS, CLS, and OLS are semantic model features.
  • File-level security applies to OneLake files, not semantic models.

Question 4 (Scenario-based)

You want to prevent users from seeing a Salary column but still allow access to other columns in the table. What should you use?

A. Row-level security
B. Object-level security
C. Column-level security
D. Item-level access

Correct Answer: C

Explanation:

  • Column-level security hides specific columns from unauthorized users.
  • RLS filters rows, not columns.

Question 5 (Single choice)

Which access control hides entire tables or measures from users?

A. Row-level security
B. Column-level security
C. Object-level security
D. File-level security

Correct Answer: C

Explanation:

  • Object-level security (OLS) hides tables, columns, or measures completely.
  • Users won’t even see them in the field list.

Question 6 (Scenario-based)

A user should be able to query a semantic model but must not see a calculated measure used only internally. Which control is BEST?

A. Column-level security
B. Object-level security
C. Row-level security
D. Workspace permission

Correct Answer: B

Explanation:

  • OLS can hide measures entirely.
  • CLS only applies to columns, not measures.

Question 7 (Multi-select)

Which scenarios require file-level access controls in Microsoft Fabric? (Select all that apply.)

A. Restricting access to specific Parquet files in OneLake
B. Limiting access to a lakehouse table
C. Controlling access to raw ingestion files
D. Filtering rows in a semantic model

Correct Answers: A, C

Explanation:

  • File-level access applies to files and folders in OneLake.
  • Table and row access are handled elsewhere.

Question 8 (Scenario-based)

A data engineer needs access to raw files in OneLake, but analysts should only see curated tables. What should you implement?

A. Row-level security
B. Column-level security
C. File-level access controls
D. Object-level security

Correct Answer: C

Explanation:

  • File-level access ensures analysts cannot browse or access raw files.
  • RLS and CLS don’t apply at the file system level.

Question 9 (Single choice)

Which security type is evaluated first when a user attempts to access data?

A. Row-level security
B. Column-level security
C. Item-level access
D. Object-level security

Correct Answer: C

Explanation:

  • Item-level access determines whether the user can access the artifact at all.
  • If denied, other security layers are never evaluated.

Question 10 (Scenario-based)

A user can access a report but receives an error when querying a table directly from the semantic model. What is the MOST likely cause?

A. Missing Row-Level Security role
B. Column-level security blocking access
C. Object-level security hiding the table
D. File-level security restriction

Correct Answer: C

Explanation:

  • If OLS hides a table, it cannot be queried—even if reports still function.
  • Reports may rely on cached or abstracted queries.

Apply sensitivity labels to items in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Apply sensitivity labels to items

To Do:
Complete the related module for this topic in the Microsoft Learn course: Secure data access in Microsoft Fabric

Sensitivity labels are a data protection and governance feature in Microsoft Fabric that help organizations classify, protect, and control the handling of sensitive data. They integrate with Microsoft Purview Information Protection and extend data protection consistently across Fabric, Power BI, and Microsoft 365.

For the DP-600 exam, you should understand what sensitivity labels are, how they are applied, what they affect, and how they differ from access controls.

What Are Sensitivity Labels?

Sensitivity labels:

  • Classify data based on confidentiality and business impact
  • Travel with the data across supported services
  • Can trigger protection behaviors, such as encryption or usage restrictions

Common label examples include:

  • Public
  • Internal
  • Confidential
  • Highly Confidential

Labels are organizationally defined and managed centrally.

Where Sensitivity Labels Come From

Sensitivity labels in Fabric are:

  • Created and managed in Microsoft Purview
  • Defined at the tenant level by security or compliance administrators
  • Made available to Fabric and Power BI through tenant settings

Fabric users apply labels, but typically do not define them.

Items That Can Be Labeled in Microsoft Fabric

Sensitivity labels can be applied to many Fabric items, including:

  • Semantic models (datasets)
  • Reports
  • Dashboards
  • Dataflows
  • Lakehouses and Warehouses (where supported)
  • Exported artifacts (Excel, PowerPoint, PDF)

This makes labeling a cross-workload governance mechanism.

How Sensitivity Labels Are Applied

Labels can be applied:

  • Manually by item owners or authorized users
  • Automatically through inherited labeling
  • Programmatically via APIs (advanced scenarios)
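
For the programmatic path, the sketch below illustrates the general pattern of labeling items in bulk through an admin API call. Treat the endpoint path and payload shape as assumptions for illustration only; verify them against the current Power BI admin API reference before use. The label GUID itself comes from Microsoft Purview:

```python
# A heavily hedged sketch of programmatic labeling via the Power BI admin
# information-protection API. The endpoint path and payload fields are
# ASSUMPTIONS for illustration -- verify against the current API reference.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default").token

resp = requests.post(
    "https://api.powerbi.com/v1.0/myorg/admin/informationprotection/setLabels",  # assumed path
    headers={"Authorization": f"Bearer {token}"},
    json={
        "datasets": ["<dataset-guid>"],     # items to label (placeholder)
        "labelId": "<purview-label-guid>",  # Purview sensitivity label (placeholder)
    },
)
resp.raise_for_status()
```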

Label Inheritance

In many cases:

  • Reports inherit the label from their underlying semantic model
  • Dashboards inherit labels from pinned tiles
  • Exported files inherit the label of the source item

This inheritance model is frequently tested in exam scenarios.

What Sensitivity Labels Do (and Do Not Do)

What they do:

  • Classify data for compliance and governance
  • Enable protection such as:
    • Encryption
    • Watermarking
    • Usage restrictions (e.g., block external sharing)
  • Travel with data when exported or shared

What they do NOT do:

  • Grant or restrict user access
  • Replace workspace, item-level, or data-level security
  • Filter rows or columns

Key exam distinction:
Sensitivity labels protect data after access is granted.

Sensitivity Labels vs Endorsements

These two concepts are often confused on exams.

| Feature | Sensitivity Labels | Endorsements |
|---|---|---|
| Purpose | Data protection | Trust and quality |
| Enforced | Yes | No |
| Affects behavior | Yes (encryption, sharing rules) | No |
| Security-related | Yes | Governance guidance |

Governance and Compliance Benefits

Sensitivity labels support:

  • Regulatory compliance (e.g., GDPR, HIPAA)
  • Data loss prevention (DLP)
  • Auditing and reporting
  • Consistent handling of sensitive data across platforms

They are especially important in environments with:

  • Self-service analytics
  • Data exports to Excel or PowerPoint
  • External sharing scenarios

Common Exam Scenarios

You may see questions such as:

  • A report exported to Excel must remain encrypted → sensitivity label
  • Data should be classified as confidential but still shared internally → labeling, not access restriction
  • Users can view data but cannot share externally → label-driven protection
  • A report automatically inherits its dataset’s classification → label inheritance

Best Practices to Remember

  • Apply labels at the semantic model level to ensure inheritance
  • Use sensitivity labels alongside:
    • Workspace and item-level access controls
    • RLS and CLS
    • Endorsements
  • Review labeling regularly to ensure accuracy
  • Educate users on selecting the correct label

Key Exam Takeaways

  • Sensitivity labels classify and protect data
  • They are defined in Microsoft Purview
  • Labels can enforce encryption and sharing restrictions
  • Labels do not control access
  • Inheritance behavior is important for DP-600 questions

Exam Tips

  • If a question focuses on classifying, protecting, or controlling how data is shared after access, think sensitivity labels.
  • If it focuses on who can see the data, think security roles or permissions.
  • Expect scenario questions involving:
    • PII, financial data, or confidential data
    • Export restrictions
    • Label inheritance
  • Know the difference between:
    • Security (RLS, OLS, item access)
    • Governance & compliance (sensitivity labels)
  • Always associate sensitivity labels with Microsoft Purview

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of applying sensitivity labels to items in Microsoft Fabric?

A. Improve query performance
B. Control row-level data access
C. Classify and protect data based on sensitivity
D. Grant workspace permissions

Correct Answer: C

Explanation:
Sensitivity labels are used for data classification, protection, and governance, not for performance or access control.


Question 2 (Scenario-based)

Your organization requires that all reports containing customer PII automatically display a watermark and restrict external sharing. What feature enables this?

A. Row-level security
B. Sensitivity labels with protection settings
C. Item-level access controls
D. Conditional access policies

Correct Answer: B

Explanation:
Sensitivity labels can apply visual markings, encryption, and sharing restrictions when integrated with Microsoft Purview.


Question 3 (Multi-select)

Which Fabric items can have sensitivity labels applied? (Select all that apply.)

A. Power BI reports
B. Semantic models
C. Lakehouses and warehouses
D. Notebooks

Correct Answers: A, B, C, D

Explanation:
Sensitivity labels can be applied to most Fabric artifacts, enabling consistent governance across analytics assets.


Question 4 (Scenario-based)

A semantic model inherits a sensitivity label from its underlying data source. What does this behavior represent?

A. Manual labeling
B. Label inheritance
C. Workspace-level labeling
D. Object-level security

Correct Answer: B

Explanation:
Label inheritance ensures that downstream artifacts maintain appropriate sensitivity classifications automatically.


Question 5 (Single choice)

Which service must be configured to define and manage sensitivity labels used in Microsoft Fabric?

A. Azure Active Directory
B. Microsoft Defender
C. Microsoft Purview
D. Power BI Admin portal

Correct Answer: C

Explanation:
Sensitivity labels are defined and managed in Microsoft Purview, then applied across Microsoft Fabric and Power BI.


Question 6 (Scenario-based)

A report is labeled Highly Confidential, but a user attempts to export its data to Excel. What is the expected behavior?

A. Export always succeeds
B. Export is blocked or encrypted based on label policy
C. Export ignores sensitivity labels
D. Only row-level security applies

Correct Answer: B

Explanation:
Sensitivity labels can restrict exports, apply encryption, or enforce protection based on policy.


Question 7 (Multi-select)

Which actions can sensitivity labels enforce? (Select all that apply.)

A. Data encryption
B. Watermarks and headers
C. External sharing restrictions
D. Row-level filtering

Correct Answers: A, B, C

Explanation:
Sensitivity labels control protection and compliance, not data filtering.


Question 8 (Scenario-based)

You apply a sensitivity label to a lakehouse. Which downstream artifact is MOST likely to automatically inherit the label?

A. A Power BI report built on the semantic model
B. A notebook in a different workspace
C. An external CSV export
D. An Azure SQL Database

Correct Answer: A

Explanation:
Label inheritance flows through Fabric analytics artifacts, especially semantic models and reports.


Question 9 (Single choice)

Who is typically allowed to apply or change sensitivity labels on Fabric items?

A. Any workspace Viewer
B. Only Microsoft admins
C. Users with sufficient item permissions
D. External users

Correct Answer: C

Explanation:
Users must have appropriate permissions (Contributor/Owner or item-level rights) to apply labels.


Question 10 (Scenario-based)

Your compliance team wants visibility into how sensitive data is used across Fabric. Which feature supports this requirement?

A. Query caching
B. Audit logs
C. Sensitivity labels with Purview reporting
D. Direct Lake mode

Correct Answer: C

Explanation:
Sensitivity labels integrate with Microsoft Purview reporting and auditing for compliance and governance tracking.


Endorse items in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Endorse items

To Do:
Complete the related module for this topic in the Microsoft Learn course: Secure data access in Microsoft Fabric

Item endorsement is a governance feature in Microsoft Fabric that helps organizations identify trusted, high-quality, and officially supported analytics assets. Endorsements guide users toward the right data and reports, reduce duplication, and promote consistent decision-making.

For the DP-600 exam, you should understand what endorsement is, the types of endorsements available, who can apply them, and how endorsements affect user behavior (not security).

What Does It Mean to Endorse an Item?

Endorsing an item signals to users that the content is:

  • Reliable
  • Well-maintained
  • Appropriate for reuse and decision-making

Endorsement is not a security mechanism. It does not grant or restrict access—it provides trust and visibility cues within the Fabric experience.

Endorsements can be applied to:

  • Semantic models (datasets)
  • Reports
  • Dashboards
  • Other supported Fabric items

Types of Endorsements

Microsoft Fabric supports three endorsement states:

1. None

There is no endorsement on the content.

2. Promoted

Promoted items are considered:

  • Useful
  • Reviewed
  • Suitable for reuse

Key characteristics:

  • Any item owner can promote their own content
  • Indicates quality, but not official certification
  • Common for team-approved or department-level assets

Typical use cases

  • A curated dataset used by multiple analysts
  • A well-designed report shared across a department

3. Certified

Certified items represent the highest level of trust.

Key characteristics:

  • Only authorized users (often admins or designated certifiers) can certify
  • Indicates the item meets organizational standards for:
    • Data quality
    • Governance
    • Security
  • Intended for enterprise-wide consumption

Typical use cases

  • Official financial reporting datasets
  • Executive dashboards
  • Enterprise semantic models

Who Can Endorse Items?

  • Promoted: Item owners
  • Certified: Users authorized by Fabric or Power BI tenant settings (often admins or data stewards)

This distinction is important for the exam: not everyone can certify content, even if they own it.

Where Endorsements Appear

Endorsements are visible across the Fabric and Power BI experiences:

  • In search results
  • In lineage view
  • In the data hub
  • When users select data sources for report creation

Certified items are typically:

  • Ranked higher
  • More visible
  • Preferred in self-service analytics workflows

Endorsements vs Security Controls

A common exam trap is confusing endorsements with access control.

| Feature | Endorsement | Access Control |
|---|---|---|
| Purpose | Trust and quality | Security and restriction |
| Limits access? | No | Yes |
| Affects visibility | Yes | Yes |
| Enforced by system | No (informational) | Yes (mandatory) |

The “Make discoverable” setting

The endorsement settings dialog also includes a "Make discoverable" option. When selected, it allows users to discover the content even if they do not have access to it, so they can then request access.

Summary table

| Endorsement / discovery state | What it is | Who can do it | Typical use cases |
|---|---|---|---|
| None | There is no endorsement on the content | n/a | Default state for new content |
| Promoted | The content is flagged as Promoted: useful, reviewed, and suitable for reuse. Indicates quality, but not official certification | Any item owner; write permission on the item (e.g., a semantic model) is enough, and no specific admin setting is required | A curated dataset used by multiple analysts; a well-designed report shared across a department |
| Certified | The content is flagged as Certified, the highest level of trust. Indicates the item meets organizational standards for data quality, governance, and security, and is intended for enterprise-wide consumption | Only authorized users (often admins or designated certifiers); certification requires admin approval to enable | Official financial reporting datasets; executive dashboards; enterprise semantic models |
| Make discoverable | The content is flagged as findable. Discoverability can be scoped to selected users, the entire company, or all except selected users | Item owners | Making content discoverable even to users who do not currently have access, so they become aware it exists and can request access |

Key takeaway:
A user must still have workspace or item-level access to use an endorsed item.

Role of Endorsements in Governance

Endorsements support governance by:

  • Encouraging reuse of approved assets
  • Reducing “shadow BI”
  • Helping users choose the right data source
  • Aligning self-service analytics with enterprise standards

They are especially important in large Fabric environments with:

  • Many workspaces
  • Multiple datasets covering similar subject areas
  • Mixed technical and business users

Common Exam Scenarios

Expect questions such as:

  • When to use Promoted vs Certified
  • Who is allowed to certify an item
  • Whether certification affects access permissions (it does not)
  • How endorsements support discoverability and trust

Example scenario:

Business users are building reports from multiple datasets and need guidance on which one is authoritative.
Correct concept: Certified semantic models.

Best Practices to Remember

  • Promote items early to guide reuse
  • Reserve certification for high-value, governed assets
  • Combine endorsements with:
    • Clear workspace organization
    • Descriptions and documentation
    • Proper access controls
  • Review certifications periodically to ensure relevance

Key Exam Takeaways

  • Endorsements indicate trust, not permission
  • Two endorsement levels: Promoted and Certified
  • Certification requires special authorization
  • Endorsements improve discoverability and governance in Fabric

Final Exam Tips

  • If a question is about helping users identify trusted or official data, think endorsements.
  • If it’s about restricting access, think workspace, item-level, or data-level security.
  • Know the difference between Promoted and Certified
  • Expect scenario questions about:
    • Data trust
    • Self-service vs governed BI
    • Discoverability in Data hub
  • Remember:
    • Endorsements ≠ security
    • Endorsements ≠ performance tuning
  • Certification permissions are centrally controlled

Link to documentation on this topic: Endorse your content


Practice Questions


Question 1 (Single choice)

What is the PRIMARY purpose of endorsing items in Microsoft Fabric?

A. Improve dataset refresh performance
B. Control data access permissions
C. Identify trusted and authoritative content
D. Apply compliance policies

Correct Answer: C

Explanation:
Endorsements help users quickly identify reliable, trusted content such as official semantic models and reports.


Question 2 (Multi-select)

Which endorsement types are available in Microsoft Fabric? (Select all that apply.)

A. Certified
B. Promoted
C. Verified
D. Approved

Correct Answers: A, B

Explanation:
Fabric supports Promoted and Certified endorsements. “Verified” and “Approved” are not valid endorsement types.


Question 3 (Scenario-based)

A business analyst creates a report that is useful but not officially validated. What endorsement is MOST appropriate?

A. Certified
B. Promoted
C. Deprecated
D. Restricted

Correct Answer: B

Explanation:
Promoted indicates content that is useful and recommended, but not formally governed or validated.


Question 4 (Scenario-based)

Your organization wants only centrally governed semantic models to be marked as official sources of truth. Which endorsement should be used?

A. Promoted
B. Shared
C. Certified
D. Published

Correct Answer: C

Explanation:
Certified content represents authoritative, validated data assets approved by data owners or governance teams.


Question 5 (Single choice)

Who can typically certify an item in Microsoft Fabric?

A. Any workspace Member
B. Only the item creator
C. Users authorized by tenant or workspace settings
D. External users

Correct Answer: C

Explanation:
Certification is restricted and controlled by tenant-level or workspace-level governance policies.


Question 6 (Multi-select)

Which Fabric items can be endorsed? (Select all that apply.)

A. Semantic models
B. Reports
C. Dashboards
D. Dataflows Gen2

Correct Answers: A, B, D

Explanation:
Semantic models, reports, and dataflows can be endorsed. Dashboards are less commonly emphasized in Fabric exam contexts.


Question 7 (Scenario-based)

A user searches for datasets in the Data hub. How do endorsements help in this scenario?

A. They hide non-endorsed items
B. They improve query performance
C. They help users identify trusted content
D. They automatically grant access

Correct Answer: C

Explanation:
Endorsements improve discoverability and trust, not access or performance.


Question 8 (Single choice)

What is the relationship between endorsements and security?

A. Endorsements enforce access controls
B. Endorsements replace RLS
C. Endorsements are independent of security
D. Endorsements automatically grant read access

Correct Answer: C

Explanation:
Endorsements do not control access. Security must be handled separately via permissions and access controls.


Question 9 (Scenario-based)

Your organization wants users to prefer centrally curated datasets without blocking self-service models. What approach BEST supports this?

A. Apply row-level security
B. Disable dataset creation
C. Certify governed datasets
D. Use Direct Lake mode

Correct Answer: C

Explanation:
Certifying official datasets encourages reuse while still allowing self-service analytics.


Question 10 (Fill in the blank)

In Microsoft Fabric, ________ items represent fully validated and authoritative content, while ________ items indicate recommended but not formally governed content.

Correct Answer:
Certified, Promoted

Explanation:
Certified = authoritative source of truth
Promoted = useful and recommended, but not governed


Configure version control for a workspace in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Configure version control for a workspace

Version control in Microsoft Fabric enables teams to track changes, collaborate safely, and manage the lifecycle of analytics assets using source control practices. Fabric integrates workspace items with Git repositories, bringing DevOps discipline to analytics development.

For the DP-600 exam, you should understand how Git integration works in Fabric, what items are supported, how changes flow, and common governance scenarios.

What Is Workspace Version Control in Fabric?

Workspace version control allows you to:

  • Connect a Fabric workspace to a Git repository
  • Store item definitions as code artifacts
  • Track changes through commits, branches, and pull requests
  • Support collaborative and auditable development

This capability is often referred to as Git integration for Fabric workspaces.

Supported Source Control Platform

Microsoft Fabric supports:

  • Azure DevOps (ADO) Git repositories
  • GitHub repositories

Key points:

  • Azure DevOps was supported first, and exam questions typically reference it
  • Repositories must already exist
  • Azure DevOps authentication is handled via Microsoft Entra ID

Exam note: Expect Azure DevOps to be the default answer unless stated otherwise.

What Items Can Be Version Controlled?

Common Fabric items that support version control include:

  • Semantic models
  • Reports
  • Lakehouses
  • Warehouses
  • Notebooks
  • Data pipelines
  • Dataflows Gen2

Items are serialized into files and folders in the Git repo, allowing:

  • Diffing
  • History tracking
  • Rollbacks

How to Configure Version Control for a Workspace

At a high level, the process is:

  1. Open the Fabric workspace settings
  2. Enable Git integration
  3. Select:
    • Azure DevOps organization
    • Project
    • Repository
    • Branch
  4. Choose a workspace folder structure
  5. Initialize synchronization
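
For automation scenarios, the same connection can be established through the Fabric REST Git API. A minimal sketch, with placeholder organization, project, repository, and workspace values; the gitProviderDetails field names should be verified against the API reference:

```python
# A sketch of connecting a workspace to an Azure DevOps repo with the
# Fabric REST Git API. All values are placeholders; verify the payload
# shape against the current API reference.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token
workspace_id = "<workspace-guid>"  # placeholder

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/git/connect",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "gitProviderDetails": {
            "gitProviderType": "AzureDevOps",
            "organizationName": "contoso",     # placeholder
            "projectName": "Analytics",        # placeholder
            "repositoryName": "fabric-items",  # placeholder
            "branchName": "dev",
            "directoryName": "/",
        }
    },
)
resp.raise_for_status()
```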

Once configured:

  • Workspace changes can be committed to Git
  • Repo changes can be synced back into the workspace

How Changes Flow Between Workspace and Git

From Workspace to Git

  • Users make changes in Fabric (e.g., update a report)
  • Changes are committed to the connected branch
  • Commit history tracks who changed what and when

From Git to Workspace

  • Changes merged into the branch can be pulled into Fabric
  • Enables controlled deployment across environments

Important exam concept:
Synchronization is not automatic—users must explicitly commit and sync.
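
The same explicit commit-and-sync flow is exposed through the REST API, which makes the "no automatic synchronization" behavior easy to see. A sketch with simplified request bodies and placeholder IDs; both calls accept additional options that are omitted here:

```python
# A sketch of the explicit two-way sync calls in the Fabric REST Git API:
# commitToGit pushes workspace changes to the connected branch, and
# updateFromGit applies branch changes to the workspace. Bodies simplified.
import requests
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = "https://api.fabric.microsoft.com/v1/workspaces/<workspace-guid>"  # placeholder

# Workspace -> Git: commit all pending changes with a message
resp = requests.post(
    f"{base}/git/commitToGit",
    headers=headers,
    json={"mode": "All", "comment": "Update semantic model measures"},
)
resp.raise_for_status()

# Git -> Workspace: pull changes that were merged into the connected branch
resp = requests.post(
    f"{base}/git/updateFromGit",
    headers=headers,
    json={"remoteCommitHash": "<commit-sha>"},  # placeholder
)
resp.raise_for_status()
```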

Branching and Environment Strategy

A common lifecycle pattern:

  • Development workspace → linked to a dev branch
  • Test workspace → linked to a test branch
  • Production workspace → linked to a main branch

This supports:

  • Code reviews
  • Pull requests
  • Controlled promotion of changes

Permissions and Governance Considerations

To configure and use version control:

  • Users need sufficient workspace permissions (typically Admin or Member)
  • Users also need Git repository access
  • Git permissions are managed outside Fabric

Version control complements—but does not replace:

  • Workspace-level access controls
  • Item-level permissions
  • Endorsements and sensitivity labels

Benefits of Version Control in Fabric

Version control enables:

  • Collaboration among multiple developers
  • Change traceability and auditability
  • Rollback of problematic changes
  • CI/CD-style deployment patterns
  • Alignment with enterprise DevOps practices

These benefits are a frequent theme in DP-600 scenario questions.

Common Exam Scenarios

You may be asked to:

  • Identify when Git integration is appropriate
  • Choose the correct platform for source control
  • Understand how changes move between Git and Fabric
  • Design a dev/test/prod workspace strategy
  • Troubleshoot why changes are not reflected (sync not performed)

Example:

Multiple developers need to work on the same semantic model with change tracking.
Correct concept: Configure workspace version control with Git.

Key Exam Takeaways

  • Fabric supports Git-based version control at the workspace level
  • Azure DevOps is the primary supported platform
  • Changes require explicit commit and sync
  • Version control supports structured development and deployment
  • It is a core part of the analytics development lifecycle

Exam Tips

  • If a question mentions tracking changes, collaboration, rollback, or DevOps practices, think workspace version control with Git.
  • If it mentions moving changes between environments, think branches and multiple workspaces.
  • Know who can configure it → Workspace Admins
  • Understand Git integration flow
  • Expect scenario questions comparing:
    • Git vs deployment pipelines
    • Collaboration vs governance
  • Remember:
    • JSON-based artifacts
    • Not all items are supported
    • No automatic commits

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of configuring version control for a Fabric workspace?

A. Improve query execution performance
B. Enable collaboration, change tracking, and rollback
C. Enforce row-level security
D. Automatically deploy content to production

Correct Answer: B

Explanation:
Version control enables source control integration, allowing teams to track changes, collaborate safely, and roll back when needed.


Question 2 (Multi-select)

Which version control systems can be integrated with Microsoft Fabric workspaces? (Select all that apply.)

A. Azure DevOps Git repositories
B. GitHub repositories
C. OneDrive for Business
D. SharePoint document libraries

Correct Answers: A, B

Explanation:
Fabric supports Git integration using Azure DevOps and GitHub. OneDrive and SharePoint are not supported for workspace version control.


Question 3 (Scenario-based)

A team wants to manage Power BI reports, semantic models, and dataflows using pull requests and branching. What should they configure?

A. Deployment pipelines
B. Sensitivity labels
C. Workspace version control with Git
D. Incremental refresh

Correct Answer: C

Explanation:
Git-based workspace version control enables branching, pull requests, and code reviews.


Question 4 (Single choice)

Which workspace role is REQUIRED to configure version control for a workspace?

A. Viewer
B. Contributor
C. Member
D. Admin

Correct Answer: D

Explanation:
Only workspace Admins can connect a workspace to a Git repository.


Question 5 (Scenario-based)

After connecting a workspace to a Git repository, where are Fabric items stored?

A. As binary files
B. As JSON-based artifact definitions
C. As SQL scripts
D. As Excel files

Correct Answer: B

Explanation:
Fabric artifacts are stored as JSON files, making them suitable for source control and comparison.


Question 6 (Multi-select)

Which items can be included in workspace version control? (Select all that apply.)

A. Reports
B. Semantic models
C. Dataflows Gen2
D. Dashboards

Correct Answers: A, B, C

Explanation:
Reports, semantic models, and dataflows are supported. Dashboards are typically excluded from version control scenarios.


Question 7 (Scenario-based)

A developer modifies a semantic model directly in the Fabric workspace while Git integration is enabled. What happens NEXT?

A. The change is automatically committed
B. The change is rejected
C. The workspace shows uncommitted changes
D. The change is immediately deployed to production

Correct Answer: C

Explanation:
Changes made in the workspace appear as pending/uncommitted changes until explicitly committed to the repository.


Question 8 (Single choice)

What is the relationship between workspace version control and deployment pipelines?

A. They are the same feature
B. Version control replaces deployment pipelines
C. They complement each other
D. Deployment pipelines require version control

Correct Answer: C

Explanation:
Version control handles source management, while deployment pipelines manage promotion across environments.


Question 9 (Scenario-based)

Your organization wants to prevent accidental overwrites when multiple developers edit the same item. Which feature BEST helps?

A. Row-level security
B. Sensitivity labels
C. Git branching and pull requests
D. Incremental refresh

Correct Answer: C

Explanation:
Git workflows enable controlled collaboration through branches, reviews, and merges.


Question 10 (Fill in the blank)

When version control is enabled, Fabric workspace changes must be ________ to the repository and ________ to update the workspace from Git.

Correct Answer:
Committed, synced (or pulled)

Explanation:
Changes flow both ways:

  • Commit workspace → Git
  • Sync Git → workspace

Create and manage a Power BI Desktop project (.pbip) in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and manage a Power BI Desktop project (.pbip)

The Power BI Desktop project format (.pbip) is a modern, folder-based representation of a Power BI solution that enables source control, collaboration, and professional development lifecycle management. It replaces the traditional single-file .pbix model when teams adopt Git-based workflows.

For the DP-600 exam, you should understand what a PBIP is, how it is structured, how it integrates with version control, and when to use it.

What Is a Power BI Desktop Project (.pbip)?

A .pbip file is a project descriptor that points to a folder containing the full definition of a Power BI solution, including:

  • Semantic model metadata
  • Report layout and visuals
  • Connections and expressions

Unlike .pbix, a .pbip:

  • Is human-readable
  • Can be diffed and versioned
  • Works naturally with Git repositories

Key Benefits of Using PBIP

Using PBIP enables:

  • Source control integration
  • Multi-developer collaboration
  • Clear separation of model and report artifacts
  • Improved CI/CD and ALM practices
  • Easier change tracking and rollback

These benefits align directly with the analytics development lifecycle tested in DP-600.

PBIP Folder Structure (High Level)

A PBIP project typically includes:

  • A .pbip file (entry point for Power BI Desktop)
  • A SemanticModel folder
  • A Report folder

Each folder contains JSON-based definitions of:

  • Tables, relationships, measures
  • Visuals and report pages
  • Model properties and settings

Exam insight:
The semantic model and report can be versioned independently.
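
As an illustration, a freshly saved PBIP project might look roughly like this on disk. Exact file names vary across Power BI Desktop versions (for example, older builds used a .Dataset folder instead of .SemanticModel):

```
MyProject.pbip                  <- project file opened by Power BI Desktop
MyProject.Report/
    definition.pbir             <- report definition pointer
    report.json                 <- pages and visuals
MyProject.SemanticModel/
    definition.pbism            <- model definition pointer
    model.bim                   <- tables, relationships, measures
```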

Creating a PBIP Project

Option 1: Create a New PBIP

  1. Open Power BI Desktop
  2. Create or open a report
  3. Save the project using Power BI Project (.pbip) format

Option 2: Convert an Existing PBIX

  • Open the .pbix file
  • Save As → Power BI Project (.pbip)

This converts the monolithic file into a folder-based project.

Managing PBIP Projects

Working with Source Control

  • Store PBIP projects in Azure DevOps Git repositories
  • Commit changes to track history
  • Use branches and pull requests for collaboration

Multi-Developer Scenarios

  • One developer can work on the semantic model
  • Another can work on report visuals
  • Changes can be merged safely using Git

Publishing to Fabric

  • Open the .pbip file in Power BI Desktop
  • Publish to a Fabric workspace
  • Workspace Git integration can align with the same repo

PBIP and Microsoft Fabric

PBIP works naturally with Fabric development practices:

  • Supports workspace Git integration
  • Aligns with dev/test/prod workspace patterns
  • Enables repeatable deployments
  • Complements Fabric items like Lakehouses and Warehouses

For DP-600, PBIP is often referenced as the recommended format for professional analytics development.

PBIP vs PBIX (Exam Comparison)

| Feature | PBIX | PBIP |
|---|---|---|
| File structure | Single binary file | Folder-based |
| Source control friendly | No | Yes |
| Multi-developer support | Limited | Strong |
| CI/CD readiness | Low | High |
| Recommended for teams | No | Yes |

Common Exam Scenarios

You may be asked:

  • When to choose PBIP over PBIX
  • How PBIP supports Git and DevOps practices
  • How multiple developers collaborate on the same report
  • Why changes are easier to track with PBIP
  • How PBIP fits into Fabric workspace version control

Example:

A team wants to track changes to a semantic model using Git.
Correct answer: Use a PBIP project.

Best Practices to Remember

  • Use PBIP for team-based or enterprise solutions
  • Store PBIP projects in Git repositories
  • Pair PBIP with:
    • Workspace version control
    • Branching strategies
    • Separate dev/test/prod workspaces
  • Avoid PBIP for quick, ad-hoc analysis

Key Exam Takeaways

  • PBIP is a folder-based Power BI project format
  • Designed for source control and collaboration
  • Enables independent versioning of model and report
  • Strongly aligned with Fabric lifecycle management
  • Frequently tested in DP-600 ALM scenarios

Exam Tips

  • If a question mentions Git, collaboration, CI/CD, or multi-developer Power BI development, the correct concept is almost always Power BI Desktop projects (.pbip).
  • Expect comparison questions: .pbix vs .pbip
  • Know why .pbip exists → DevOps & collaboration
  • Understand:
    • Git-friendly file structure
    • No credentials stored
    • Works with Fabric workspace version control
  • Common scenario themes:
    • Multi-developer teams
    • CI/CD pipelines
    • Enterprise governance

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of using a Power BI Desktop project (.pbip) instead of a traditional .pbix file?

A. Improve report rendering performance
B. Enable better source control and collaboration
C. Reduce dataset refresh time
D. Support Direct Lake connectivity

Correct Answer: B

Explanation:
.pbip projects store report and model artifacts as multiple text-based files, making them suitable for Git version control, diffing, and team collaboration.


Question 2 (Multi-select)

Which components are stored separately when using a .pbip project? (Select all that apply.)

A. Report definition
B. Semantic model metadata
C. Data source credentials
D. Visual layout configuration

Correct Answers: A, B, D

Explanation:
.pbip breaks artifacts into JSON/text-based files for reports, models, and visuals. Credentials are not stored for security reasons.


Question 3 (Scenario-based)

A team wants multiple developers to work on the same Power BI solution using Git branches and pull requests. Which format should they use?

A. .pbix
B. .pbip
C. .pbit
D. .rdl

Correct Answer: B

Explanation:
.pbip is designed specifically for collaborative, Git-based workflows.


Question 4 (Single choice)

How do you create a Power BI Desktop project?

A. Save a report as .pbip from Power BI Service
B. Enable a setting and save from Power BI Desktop
C. Convert a .pbix automatically in Fabric
D. Import from Azure DevOps

Correct Answer: B

Explanation:
You enable Power BI Desktop Project support in Preview features, then save the report as a .pbip from Power BI Desktop.


Question 5 (Scenario-based)

After saving a report as .pbip, you notice dozens of files and folders. What is the BEST explanation?

A. The report was corrupted
B. Each artifact is stored as a separate definition
C. Temporary cache files were created
D. Power BI duplicated the dataset

Correct Answer: B

Explanation:
.pbip stores each logical artifact separately, enabling granular change tracking in source control.


Question 6 (Multi-select)

Which benefits does .pbip provide compared to .pbix? (Select all that apply.)

A. Meaningful Git diffs
B. Merge conflict resolution
C. Built-in deployment pipelines
D. Support for CI/CD workflows

Correct Answers: A, B, D

Explanation:
.pbip enables DevOps workflows, but deployment pipelines are a separate Fabric feature.


Question 7 (Scenario-based)

A developer modifies a DAX measure in a .pbip project. What happens in source control?

A. The entire report file changes
B. Only the affected model definition file changes
C. The change is ignored
D. The report must be re-imported

Correct Answer: B

Explanation:
Only the specific model file reflecting the DAX change is updated, enabling clean diffs.


Question 8 (Single choice)

Which file format is BETTER suited for enterprise development with Fabric Git integration?

A. .pbix
B. .pbip
C. .xlsx
D. .json

Correct Answer: B

Explanation:
.pbip aligns with Fabric workspace Git integration and enterprise development standards.


Question 9 (Scenario-based)

Your team wants to use .pbip but also publish reports to Fabric workspaces. What limitation should you consider?

A. .pbip reports cannot be published
B. Only Admins can publish .pbip
C. Local development requires Power BI Desktop
D. .pbip does not support semantic models

Correct Answer: C

Explanation:
.pbip is a Power BI Desktop development format; publishing still requires Desktop or pipeline automation.


Question 10 (Fill in the blank)

A .pbip project improves collaboration by storing Power BI artifacts as ________ files that work well with ________ systems.

Correct Answer:
Text-based (or JSON-based), source control (or Git)

Explanation:
Text-based files enable version tracking, branching, and code reviews.


Create and configure deployment pipelines

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and configure deployment pipelines

Deployment pipelines in Microsoft Fabric provide a structured, governed way to promote analytics content across environments (typically Development, Test, and Production). They are a core lifecycle management feature that helps teams deploy changes safely, consistently, and with minimal risk. For the DP-600 exam, you should understand what deployment pipelines are, how they are configured, what they support, and how they differ from Git-based version control.

What Are Deployment Pipelines?

A deployment pipeline is a Fabric feature that:

  • Connects multiple workspaces into an ordered promotion flow
  • Enables controlled deployment of items between environments
  • Supports validation and testing before production release

Pipelines are especially important for enterprise-scale analytics solutions.

Typical Pipeline Structure

A standard Fabric pipeline consists of three stages:

  1. Development
    • Active development
    • Frequent changes
    • Used by engineers and analysts
  2. Test
    • Validation and user acceptance testing
    • Data and logic verification
    • Limited access
  3. Production
    • Certified, trusted content
    • Broad consumer access
    • Minimal direct changes

Each stage is linked to a separate Fabric workspace.

Creating a Deployment Pipeline

At a high level, the process is:

  1. Create a deployment pipeline in Microsoft Fabric
  2. Assign a workspace to each stage:
    • Dev workspace
    • Test workspace
    • Prod workspace
  3. Configure pipeline settings
  4. Control who can deploy between stages

Once created, the pipeline provides a visual interface showing item differences across stages.
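
If you prefer to script this setup, the Power BI REST API exposes the same pipeline operations. Below is a minimal Python sketch, assuming you already hold an Entra ID access token with the appropriate Power BI scopes; the pipeline name and all IDs are placeholders:

```python
import requests

# Minimal sketch: create a deployment pipeline and assign one workspace per
# stage using the Power BI REST API. The token, name, and IDs below are
# placeholders, not values from this article.
API = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}

# 1. Create the pipeline.
resp = requests.post(
    f"{API}/pipelines",
    headers=headers,
    json={"displayName": "Sales Analytics Pipeline"},
)
resp.raise_for_status()
pipeline_id = resp.json()["id"]

# 2. Assign a workspace to each stage (0 = Dev, 1 = Test, 2 = Prod).
stage_workspaces = {
    0: "<DEV_WORKSPACE_ID>",
    1: "<TEST_WORKSPACE_ID>",
    2: "<PROD_WORKSPACE_ID>",
}
for stage_order, workspace_id in stage_workspaces.items():
    requests.post(
        f"{API}/pipelines/{pipeline_id}/stages/{stage_order}/assignWorkspace",
        headers=headers,
        json={"workspaceId": workspace_id},
    ).raise_for_status()
```

In practice, a script like this usually lives in a CI/CD job rather than being run ad hoc.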

What Items Can Be Deployed Through Pipelines?

Deployment pipelines support deployment of many Fabric items, including:

  • Semantic models
  • Reports and dashboards
  • Dataflows Gen2
  • Lakehouses and Warehouses (supported scenarios)
  • Other supported analytics artifacts

Exam note:
Not every Fabric item supports pipeline deployment equally—expect questions to focus on Power BI and core analytics items.

How Deployment Works

Comparing Changes

  • Pipelines show differences between stages
  • You can review what will change before deploying

Deploying Content

  • Deploy from Dev → Test
  • Validate
  • Deploy from Test → Prod

Deployments:

  • Copy item definitions
  • Can update existing items or create new ones
  • Do not automatically move workspace permissions
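
Deployments can also be triggered programmatically instead of from the pipeline UI. The sketch below uses the Power BI REST API's Deploy All operation; the token, pipeline ID, and option values are placeholder assumptions:

```python
import requests

# Minimal sketch: promote all supported items from one stage to the next
# via the "Deploy All" pipeline operation. IDs and the token are placeholders.
API = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}
pipeline_id = "<PIPELINE_ID>"

response = requests.post(
    f"{API}/pipelines/{pipeline_id}/deployAll",
    headers=headers,
    json={
        # sourceStageOrder 0 deploys Dev -> Test; 1 deploys Test -> Prod.
        "sourceStageOrder": 0,
        "options": {
            # Create items in the target stage if they do not exist yet,
            # and overwrite ones that do.
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    },
)
response.raise_for_status()
# The deployment runs asynchronously; the response describes an operation
# whose status you can poll.
print(response.json())
```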

Deployment Rules and Parameters

Pipelines support deployment rules, such as:

  • Changing data source connections per environment
  • Switching parameters between Dev, Test, and Prod
  • Avoiding hard-coded environment values

This is critical for:

  • Separating development and production data
  • Supporting safe testing

Pipelines vs Git Integration (Exam Comparison)

This distinction is frequently tested.

Feature            | Deployment Pipelines  | Git Integration
Purpose            | Environment promotion | Source control
Focus              | Deployment            | Versioning
Tracks history     | No                    | Yes
Supports branching | No                    | Yes
Typical use        | Dev → Test → Prod     | Code collaboration

Key insight:
They are complementary, not competing features.

Permissions and Governance

To use pipelines:

  • Users need appropriate pipeline permissions
  • Workspace access is still required
  • Production deployments are often restricted to a small group

Pipelines support governance by:

  • Reducing direct changes in production
  • Enforcing controlled release processes
  • Improving auditability

Common Exam Scenarios

You may be asked to:

  • Choose pipelines for controlled promotion of reports
  • Identify when pipelines are preferable to manual publishing
  • Combine pipelines with Git and PBIP
  • Configure different data sources per environment
  • Prevent accidental production changes

Example:

A report must be tested before being released to executives.
Correct concept: Use a deployment pipeline with Dev, Test, and Prod stages.

Best Practices to Remember

  • Use separate workspaces per environment
  • Restrict production deployment permissions
  • Combine pipelines with:
    • PBIP projects
    • Git integration
    • Endorsements and certification
  • Avoid direct editing in production

Key Exam Takeaways

  • Deployment pipelines manage content promotion across environments
  • They connect multiple Fabric workspaces
  • Pipelines support comparison, validation, and controlled deployment
  • They do not replace Git-based version control
  • A core feature of the Fabric analytics lifecycle

Exam Tips

  • If a question focuses on moving content safely from development to production, the correct answer is deployment pipelines.
  • If it focuses on tracking changes or collaboration, the answer is Git or PBIP.
  • Know how pipelines support:
    • Dev/Test/Prod lifecycle
    • Governance & change control
    • Environment-specific configuration
    • Enterprise-scale BI practices
  • Common exam traps:
    • Confusing workspace roles with deploy permissions
    • Assuming pipelines manage security or performance
    • Forgetting deployment rules

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of a deployment pipeline in Microsoft Fabric?

A. Schedule dataset refreshes
B. Promote content across lifecycle environments
C. Enable row-level security
D. Optimize DAX performance

Correct Answer: B

Explanation:
Deployment pipelines are designed to promote content across environments (for example, Development → Test → Production) in a controlled and governed manner.

  • ❌ A: Refresh scheduling is handled separately
  • ❌ C: Security is not the primary purpose
  • ❌ D: Performance tuning is unrelated

Question 2 (Multi-select)

Which stages are available by default in a Fabric deployment pipeline? (Select all that apply.)

A. Development
B. Test
C. Production
D. Sandbox

Correct Answers: A, B, C

Explanation:
Fabric deployment pipelines use a three-stage lifecycle:

  • Development
  • Test
  • Production

There is no default Sandbox stage.


Question 3 (Scenario-based)

A team wants analysts to freely modify reports, while only approved changes reach production. Which pipeline stage should analysts primarily work in?

A. Production
B. Test
C. Development
D. Any stage

Correct Answer: C

Explanation:
The Development stage is intended for:

  • Frequent changes
  • Experimentation
  • Initial validation

Higher stages are more controlled.


Question 4 (Single choice)

Which permission is required to deploy content from one stage to the next in a deployment pipeline?

A. Viewer
B. Contributor
C. Admin
D. Pipeline deploy permission

Correct Answer: D

Explanation:
Deploying content requires explicit pipeline deployment permissions, not just workspace roles.

  • ❌ Admin alone is not sufficient
  • ❌ Contributor may edit but not deploy

Question 5 (Scenario-based)

You deploy a semantic model from Test to Production. What happens to data source connections by default?

A. They are deleted
B. They remain unchanged
C. They can be overridden per stage
D. They must be manually reconfigured

Correct Answer: C

Explanation:
Deployment pipelines support parameter and data source rules, allowing environment-specific connections.


Question 6 (Multi-select)

Which items can be deployed using deployment pipelines? (Select all that apply.)

A. Reports
B. Semantic models
C. Dashboards
D. Notebooks

Correct Answers: A, B, C

Explanation:
Deployment pipelines support Power BI artifacts, including:

  • Reports
  • Semantic models
  • Dashboards

❌ Notebooks are Fabric artifacts but are not deployed via Power BI deployment pipelines.


Question 7 (Scenario-based)

A deployment shows warnings that some items are skipped. What is the MOST likely cause?

A. The workspace is full
B. Unsupported artifacts exist
C. The dataset is too large
D. Git integration is disabled

Correct Answer: B

Explanation:
Unsupported or incompatible artifacts (for example, unsupported report types) may be skipped during deployment.


Question 8 (Single choice)

Which feature allows different environments to use different data sources during deployment?

A. Row-level security
B. Dynamic format strings
C. Deployment rules
D. Incremental refresh

Correct Answer: C

Explanation:
Deployment rules allow:

  • Data source switching
  • Parameter overrides
  • Environment-specific configuration

Question 9 (Scenario-based)

You want production users to access only certified content. How do deployment pipelines help?

A. By enforcing sensitivity labels
B. By promoting tested content only
C. By encrypting production reports
D. By disabling edit access

Correct Answer: B

Explanation:
Deployment pipelines ensure:

  • Content is validated in Test
  • Only approved changes reach Production

They support trust and governance, not encryption or labeling.


Question 10 (Multi-select)

Which best practices apply when configuring deployment pipelines? (Select all that apply.)

A. Restrict deploy permissions
B. Use separate data sources per stage
C. Allow all users to deploy to Production
D. Validate content in Test before Production

Correct Answers: A, B, D

Explanation:
Best practices include:

  • Limited deploy access
  • Environment-specific configurations
  • Mandatory testing before production

❌ Allowing everyone to deploy defeats governance.


Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models

Impact analysis in Microsoft Fabric helps analytics engineers understand how changes to upstream data assets affect downstream items such as datasets, reports, dashboards, notebooks, and pipelines. It is a critical lifecycle practice that reduces the risk of breaking analytics solutions when making schema, logic, or data changes.

For the DP-600 exam, you should understand what impact analysis is, which Fabric tools support it, what dependencies are tracked, and how to use it in real-world lifecycle scenarios.

What Is Impact Analysis?

Impact analysis answers the question:

“If I change or delete this item, what else will be affected?”

It allows you to:

  • Identify downstream dependencies
  • Assess risk before making changes
  • Communicate potential impacts to stakeholders
  • Support safe development and deployment practices

Impact analysis is observational and informational—it does not enforce controls.

Where Impact Analysis Is Used in Fabric

Impact analysis applies across many Fabric items, including:

  • Lakehouses
  • Data Warehouses
  • Dataflows Gen2
  • Semantic models
  • Reports and dashboards
  • Notebooks and pipelines

These items form a connected analytics graph, which Fabric can visualize.

Lineage View: The Core Tool for Impact Analysis

The primary tool for impact analysis in Fabric is Lineage View.

What Lineage View Shows

  • Upstream data sources
  • Transformations and processing steps
  • Downstream consumers
  • Relationships between items

Lineage view provides a visual map of dependencies across workloads.
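
Lineage metadata can also be pulled programmatically. The sketch below uses the Power BI admin "scanner" API, which requires tenant admin permissions; the workspace ID and token are placeholders:

```python
import time

import requests

# Minimal sketch: request a workspace scan that includes lineage, wait for
# it to finish, then fetch the result. Requires admin API permissions.
API = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}

# 1. Start a scan with lineage information included.
scan_resp = requests.post(
    f"{API}/getInfo?lineage=True&datasourceDetails=True",
    headers=headers,
    json={"workspaces": ["<WORKSPACE_ID>"]},
)
scan_resp.raise_for_status()
scan_id = scan_resp.json()["id"]

# 2. Poll until the scan succeeds (a real script should also handle "Failed").
while True:
    status = requests.get(f"{API}/scanStatus/{scan_id}", headers=headers)
    if status.json()["status"] == "Succeeded":
        break
    time.sleep(5)

# 3. The result lists datasets, reports, dataflows, and their upstream
# relationships, which you can traverse for impact analysis.
result = requests.get(f"{API}/scanResult/{scan_id}", headers=headers).json()
```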

Impact Analysis by Asset Type

Lakehouses

Changing a Lakehouse can impact:

  • Notebooks reading tables
  • Semantic models using Direct Lake
  • Dataflows writing or reading data
  • Reports built on dependent models

Common risk: Dropping or renaming a column.

Data Warehouses

Warehouse changes may affect:

  • Views and SQL queries
  • Semantic models using DirectQuery
  • Reports and dashboards
  • External tools

Exam insight: Schema changes are a common source of downstream failures.

Dataflows Gen2

Dataflows often sit between raw data and analytics.

Changes can impact:

  • Lakehouses or Warehouses they load into
  • Semantic models consuming curated tables
  • Pipelines orchestrating refreshes

Semantic Models

Semantic models are among the most sensitive assets.

Changes may affect:

  • Reports and dashboards
  • Excel workbooks
  • Composite models
  • End-user self-service analytics

Exam note: Removing measures or renaming fields is high risk.

How to Perform Impact Analysis (High Level)

  1. Select the item (Lakehouse, Warehouse, Dataflow, or Semantic Model)
  2. Open Lineage view
  3. Review downstream dependencies
  4. Identify:
    • Reports
    • Datasets
    • Pipelines
    • Other dependent items
  5. Communicate or mitigate risk before making changes
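
Under the hood, impact analysis is simply a downstream walk of the dependency graph. The self-contained Python sketch below illustrates the idea with a made-up graph; every item name in it is hypothetical:

```python
from collections import deque

# Hypothetical dependency graph: each item maps to the items that directly
# depend on it (i.e., its immediate downstream consumers).
downstream = {
    "SalesLakehouse": ["SalesModel", "CleanupNotebook"],
    "SalesModel": ["ExecDashboard", "RegionalReport"],
    "CleanupNotebook": [],
    "ExecDashboard": [],
    "RegionalReport": [],
}

def affected_items(changed_item: str) -> set[str]:
    """Return every item downstream of the one being changed."""
    seen: set[str] = set()
    queue = deque([changed_item])
    while queue:
        for dependent in downstream.get(queue.popleft(), []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(affected_items("SalesLakehouse"))
# e.g. {'SalesModel', 'CleanupNotebook', 'ExecDashboard', 'RegionalReport'}
```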

Impact Analysis in the Development Lifecycle

Impact analysis is typically performed:

  • Before deploying changes
  • Before modifying schemas
  • Before deleting items
  • During troubleshooting

It supports:

  • Safe Git commits
  • Controlled pipeline deployments
  • Production stability

Common Exam Scenarios

You may see questions such as:

  • A column change breaks multiple reports → impact analysis was skipped
  • An engineer needs to know which reports use a dataset → lineage view
  • A Lakehouse schema update affects downstream models → review dependencies
  • A dataset should not be modified due to executive reports → high downstream impact

Example:

Before removing a table from a semantic model, what should you do?
Correct concept: Perform impact analysis using lineage view.

Impact Analysis vs Deployment Pipelines

These concepts are related but distinct.

Feature  | Impact Analysis | Deployment Pipelines
Purpose  | Risk assessment | Controlled promotion
Enforced | No              | Yes
Timing   | Before changes  | During deployment
Tool     | Lineage view    | Pipeline UI

Best Practices to Remember

  • Always check lineage before schema changes
  • Pay extra attention to semantic models and certified items
  • Communicate impacts to report owners
  • Pair impact analysis with:
    • Version control
    • Deployment pipelines
    • Endorsements and certification

Key Exam Takeaways

  • Impact analysis identifies downstream dependencies
  • Lineage view is the primary tool in Fabric
  • Applies to Lakehouses, Warehouses, Dataflows, and Semantic Models
  • Supports safe lifecycle and governance practices
  • A common scenario-based exam topic

Final Exam Tip

  • If a question asks “what will break if I change this?”, the answer is impact analysis via lineage view.
  • If it asks how to safely move changes, the answer is pipelines or Git.
  • Expect questions that test:
    • When to perform impact analysis
    • Which items are affected by changes
    • Operational decision-making before deployments
  • Common traps:
    • Confusing impact analysis with lineage documentation
    • Assuming Fabric blocks breaking changes automatically
    • Forgetting semantic models are often the most impacted layer

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of impact analysis in Microsoft Fabric?

A. Improve query performance
B. Identify downstream objects affected by a change
C. Enforce data security policies
D. Reduce data refresh frequency

Correct Answer: B

Explanation:
Impact analysis helps you understand what items depend on a given artifact, so you can assess the risk of changes.

  • ❌ A: Performance tuning is separate
  • ❌ C: Security is not the focus
  • ❌ D: Refresh tuning is unrelated

Question 2 (Multi-select)

Which Fabric items can be analyzed for downstream dependencies? (Select all that apply.)

A. Lakehouses
B. Data warehouses
C. Dataflows
D. Semantic models

Correct Answers: A, B, C, D

Explanation:
Microsoft Fabric supports dependency tracking across all major analytical artifacts, enabling end-to-end lineage visibility.


Question 3 (Scenario-based)

You plan to rename a column in a lakehouse table. Which Fabric feature should you use FIRST?

A. Version control
B. Deployment pipeline
C. Impact analysis
D. Incremental refresh

Correct Answer: C

Explanation:
Renaming a column may break:

  • Semantic models
  • SQL queries
  • Reports

Impact analysis identifies what will be affected before the change.


Question 4 (Single choice)

Where do you access impact analysis for an item in Fabric?

A. Power BI Desktop
B. Microsoft Purview portal
C. Item settings in the Fabric workspace
D. Azure DevOps

Correct Answer: C

Explanation:
Impact analysis is accessible directly from the item context or settings within a Fabric workspace.

  • ❌ Purview focuses on governance/catalog
  • ❌ DevOps is not used for lineage

Question 5 (Scenario-based)

A dataflow loads data into a lakehouse that feeds multiple semantic models. What does impact analysis show?

A. Only the lakehouse
B. Only the semantic models
C. All downstream dependencies
D. Only refresh schedules

Correct Answer: C

Explanation:
Impact analysis provides a full dependency graph, showing all downstream items affected by changes.


Question 6 (Multi-select)

Which changes typically REQUIRE impact analysis before execution? (Select all that apply.)

A. Dropping columns
B. Renaming tables
C. Changing data types
D. Adding a new report page

Correct Answers: A, B, C

Explanation:
Structural changes can break dependencies. Adding a report page does not affect downstream items.


Question 7 (Scenario-based)

A semantic model is used by several reports and dashboards. What happens if you delete the model without impact analysis?

A. Nothing; reports are cached
B. Reports automatically reconnect
C. Reports and dashboards break
D. Fabric blocks the deletion

Correct Answer: C

Explanation:
Deleting a semantic model removes the data source for:

  • Reports
  • Dashboards

Impact analysis helps prevent such disruptions.


Question 8 (Single choice)

Which view best represents impact analysis results?

A. Tabular grid
B. SQL execution plan
C. Dependency graph
D. DAX query view

Correct Answer: C

Explanation:
Impact analysis is presented as a visual dependency graph, showing upstream and downstream relationships.


Question 9 (Scenario-based)

Which role MOST benefits from performing impact analysis regularly?

A. Report consumers
B. Workspace admins and data engineers
C. End-user analysts
D. External auditors

Correct Answer: B

Explanation:
Admins and engineers are responsible for:

  • Schema changes
  • Deployments
  • Stability

Impact analysis supports safe operational changes.


Question 10 (Multi-select)

Which best practices apply when using impact analysis? (Select all that apply.)

A. Perform before structural changes
B. Use in conjunction with deployment pipelines
C. Skip for minor schema updates
D. Communicate findings to stakeholders

Correct Answers: A, B, D

Explanation:
Impact analysis should:

  • Precede schema changes
  • Inform deployment decisions
  • Be communicated to stakeholders

❌ “Minor” changes can still break dependencies.


Deploy and Manage Semantic Models Using the XMLA Endpoint

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Implement security and governance
--> Deploy and manage semantic models by using the XMLA endpoint

The XMLA endpoint enables advanced, enterprise-grade management of Power BI semantic models in Microsoft Fabric. It allows analytics engineers to deploy, modify, automate, and govern semantic models using external tools and scripts—bringing full ALM (Application Lifecycle Management) capabilities to analytics solutions.

For the DP-600 exam, you should understand what the XMLA endpoint is, when to use it, what it enables, and how it fits into the analytics development lifecycle.

What Is the XMLA Endpoint?

The XMLA (XML for Analysis) endpoint is a programmatic interface that exposes semantic models in Fabric as Analysis Services-compatible models.

Through the XMLA endpoint, you can:

  • Deploy semantic models
  • Modify model metadata
  • Manage partitions and refreshes
  • Automate changes across environments
  • Integrate with DevOps workflows

Exam note:
The XMLA endpoint is available for workspaces backed by a supported capacity; read access is enabled by default, while read/write access is controlled through the capacity settings.

When to Use the XMLA Endpoint

The XMLA endpoint is used when you need:

  • Advanced model editing beyond Power BI Desktop
  • Automated deployments
  • Bulk changes across models
  • Integration with CI/CD pipelines
  • Scripted refresh and partition management

It is commonly used in enterprise and large-scale deployments.

Tools That Use the XMLA Endpoint

Several tools connect to Fabric semantic models through XMLA:

  • Tabular Editor
  • SQL Server Management Studio (SSMS)
  • PowerShell scripts
  • Azure DevOps pipelines
  • Custom automation tools

These tools operate directly on the semantic model metadata.

Common XMLA-Based Management Tasks

Deploying Semantic Models

  • Push model definitions from source control
  • Promote models across Dev, Test, and Prod
  • Align models with environment-specific settings

Managing Model Metadata

  • Create or modify:
    • Measures
    • Calculated columns
    • Relationships
    • Perspectives
  • Apply bulk changes efficiently

Managing Refresh and Partitions

  • Configure incremental refresh
  • Trigger or monitor refresh operations
  • Manage large models efficiently
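
As a concrete illustration, the sketch below builds a TMSL refresh command for a single table of a deployed model. The database and table names are placeholders, and the script only constructs the command; you would execute it against the XMLA endpoint, for example from an XMLA query window in SSMS:

```python
import json

# Minimal sketch: a TMSL "refresh" command for one table of a semantic model.
# Database and table names are placeholders. SSMS and automation tools connect
# to the XMLA endpoint with an address of the form:
#   powerbi://api.powerbi.com/v1.0/myorg/<Workspace Name>
refresh_command = {
    "refresh": {
        "type": "full",
        "objects": [
            {"database": "SalesModel", "table": "FactSales"},
        ],
    }
}

print(json.dumps(refresh_command, indent=2))
```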

XMLA Endpoint and the Development Lifecycle

XMLA plays a key role in:

  • CI/CD pipelines for analytics
  • Automated model validation
  • Environment promotion
  • Controlled production updates

It complements:

  • PBIP projects
  • Git integration
    • Deployment pipelines

Permissions and Requirements

To use the XMLA endpoint:

  • The workspace must be on supported capacity
  • The user must have sufficient permissions:
    • Workspace Contributor, Member, or Admin
  • Access is governed by Fabric and Entra ID

Exam insight:
Viewers cannot use XMLA to modify models.

XMLA Endpoint vs Power BI Desktop

Feature           | Power BI Desktop | XMLA Endpoint
Visual modeling   | Yes              | No
Scripted changes  | No               | Yes
Automation        | Limited          | Strong
Bulk edits        | No               | Yes
CI/CD integration | Limited          | Yes

Key takeaway:
Power BI Desktop is for design; XMLA is for enterprise management and automation.

Common Exam Scenarios

Expect questions such as:

  • Automating semantic model deployment → XMLA
  • Making bulk changes to measures → XMLA
  • Managing partitions for large models → XMLA
  • Integrating Power BI models into DevOps → XMLA
  • Editing a production model without Desktop → XMLA

Example:

A company needs to automate semantic model deployments across environments.
Correct concept: Use the XMLA endpoint.

Best Practices to Remember

  • Use XMLA for production changes and automation
  • Combine XMLA with:
    • Git repositories
    • Tabular Editor
    • Deployment pipelines
  • Limit XMLA access to trusted roles
  • Avoid manual production edits when automation is available

Key Exam Takeaways

  • XMLA enables advanced semantic model management
  • Supports automation, scripting, and CI/CD
  • Used with tools like Tabular Editor and SSMS
  • Requires appropriate permissions and capacity
  • A core ALM feature for DP-600

Exam Tips

  • If a question mentions automation, scripting, bulk model changes, or CI/CD, the answer is almost always the XMLA endpoint.
  • If it mentions visual report design, the answer is Power BI Desktop.
  • Expect questions that test:
    • When to use XMLA vs Power BI Desktop
    • Tool selection (Tabular Editor vs pipelines)
    • Security and permissions
    • Enterprise deployment scenarios
  • High-value keywords to remember:
    • XMLA • TMSL • External tools • CI/CD • Metadata management

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of the XMLA endpoint in Microsoft Fabric?

A. Enable SQL querying of lakehouses
B. Provide programmatic management of semantic models
C. Secure data using row-level security
D. Schedule data refreshes

Correct Answer: B

Explanation:
The XMLA endpoint enables advanced management and deployment of semantic models using tools such as:

  • Tabular Editor
  • SQL Server Management Studio (SSMS)
  • PowerShell and other automation scripts

Question 2 (Multi-select)

Which tools can connect to a Fabric semantic model via the XMLA endpoint? (Select all that apply.)

A. Tabular Editor
B. SQL Server Management Studio (SSMS)
C. Power BI Desktop
D. Azure Data Studio

Correct Answers: A, B

Explanation:

  • Tabular Editor and SSMS use XMLA to manage models.
  • ❌ Power BI Desktop uses a local model, not XMLA.
  • ❌ Azure Data Studio does not manage semantic models via XMLA.

Question 3 (Scenario-based)

You want to deploy a semantic model from Development to Production while preserving model metadata. What is the BEST approach?

A. Export and re-import a PBIX file
B. Use deployment pipelines only
C. Use XMLA with model scripting
D. Rebuild the model manually

Correct Answer: C

Explanation:
XMLA enables:

  • Model scripting (TMSL)
  • Metadata-preserving deployments
  • Controlled promotion across environments

Question 4 (Single choice)

Which capability requires the XMLA endpoint to be enabled?

A. Creating reports
B. Editing DAX measures outside Power BI Desktop
C. Viewing model lineage
D. Applying sensitivity labels

Correct Answer: B

Explanation:
Editing measures, calculation groups, and partitions using external tools requires XMLA connectivity.


Question 5 (Scenario-based)

An enterprise team wants to automate semantic model deployment through CI/CD pipelines. Which XMLA-based artifact is MOST commonly used?

A. PBIP project file
B. TMSL scripts
C. DAX Studio queries
D. SQL views

Correct Answer: B

Explanation:
Tabular Model Scripting Language (TMSL) is the standard XMLA-based format for:

  • Creating
  • Updating
  • Deploying semantic models programmatically

Question 6 (Multi-select)

Which operations can be performed through the XMLA endpoint? (Select all that apply.)

A. Create and modify measures
B. Configure partitions and refresh policies
C. Apply row-level security
D. Build report visuals

Correct Answers: A, B, C

Explanation:
XMLA supports model-level operations. Report visuals are created in Power BI reports, not via XMLA.


Question 7 (Scenario-based)

You attempt to connect to a semantic model via XMLA but the connection fails. What is the MOST likely cause?

A. XMLA endpoint is disabled for the workspace
B. Dataset refresh is in progress
C. Data source credentials are missing
D. The report is unpublished

Correct Answer: A

Explanation:
XMLA must be:

  • Enabled at the capacity or workspace level
  • Supported by the Fabric SKU

Question 8 (Single choice)

Which security requirement applies when using the XMLA endpoint?

A. Viewer permissions are sufficient
B. Read permission only
C. Contributor or higher workspace role
D. Report Builder permissions

Correct Answer: C

Explanation:
Managing semantic models via XMLA requires Contributor, Member, or Admin roles.


Question 9 (Scenario-based)

A developer edits calculation groups using Tabular Editor via XMLA. What happens after saving changes?

A. Changes remain local only
B. Changes are immediately published to the semantic model
C. Changes require a dataset refresh to apply
D. Changes are stored in the PBIX file

Correct Answer: B

Explanation:
Edits made via XMLA tools apply directly to the deployed semantic model in Fabric.


Question 10 (Multi-select)

Which are BEST practices when managing semantic models using XMLA? (Select all that apply.)

A. Use source control for TMSL scripts
B. Limit XMLA access to production workspaces
C. Make direct changes in production without testing
D. Combine XMLA with deployment pipelines

Correct Answers: A, B, D

Explanation:
Best practices include:

  • Version control
  • Controlled access
  • Structured deployments

❌ Direct production changes without testing increase risk.


Create and Update Reusable Assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and update reusable assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models

Reusable assets are a key lifecycle concept in Microsoft Fabric and Power BI. They enable consistency, scalability, and efficiency by allowing teams to standardize how data is connected, modeled, and visualized across multiple solutions.

For the DP-600 exam, you should understand what reusable assets are, how to create and manage them, and when each type is appropriate.

What Are Reusable Assets?

Reusable assets are analytics artifacts designed to be:

  • Used by multiple users or teams
  • Reapplied across projects
  • Centrally governed and maintained

Common reusable assets include:

  • Power BI template (.pbit) files
  • Power BI data source (.pbids) files
  • Shared semantic models

Power BI Template Files (.pbit)

What Is a PBIT File?

A .pbit file is a Power BI template that contains:

  • Report layout and visuals
  • Data model structure (tables, relationships, measures)
  • Parameters and queries (without data)

It does not include actual data.

When to Use PBIT Files

PBIT files are ideal when:

  • Standardizing report design and metrics
  • Distributing reusable report frameworks
  • Supporting self-service analytics at scale
  • Onboarding new analysts

Creating and Updating PBIT Files

  • Create a report in Power BI Desktop
  • Remove data (if present)
  • Save as Power BI Template (.pbit)
  • Store in source control or shared repository
  • Update centrally and redistribute as needed

Power BI Data Source Files (.pbids)

What Is a PBIDS File?

A .pbids file is a JSON-based file that defines:

  • Data source connection details
  • Server, database, or endpoint information
  • Optional settings such as connection mode (credentials are never stored)

Opening a PBIDS file launches Power BI Desktop and guides users through connecting to the correct data source.
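
As an illustration, here is a minimal Python sketch that writes a .pbids file pointing at a hypothetical approved Azure SQL source; the server and database names are placeholders:

```python
import json

# Minimal sketch of a .pbids definition. Opening the resulting file launches
# Power BI Desktop preconnected to the approved source. Server and database
# names are placeholders; no credentials are (or can be) stored here.
pbids = {
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "tds",
                "address": {
                    "server": "contoso.database.windows.net",
                    "database": "SalesDW",
                },
            },
            "mode": "DirectQuery",
        }
    ],
}

with open("ApprovedSales.pbids", "w") as f:
    json.dump(pbids, f, indent=2)
```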

When to Use PBIDS Files

PBIDS files are useful for:

  • Standardizing data connections
  • Reducing configuration errors
  • Guiding business users to approved sources
  • Supporting governed self-service analytics

Managing PBIDS Files

  • Create manually or export from Power BI Desktop
  • Store centrally (e.g., Git, SharePoint)
  • Update when connection details change
  • Pair with shared semantic models where possible

Shared Semantic Models

What Are Shared Semantic Models?

Shared semantic models are centrally managed datasets that:

  • Define business logic, measures, and relationships
  • Serve as a single source of truth
  • Are reused across multiple reports

They are one of the most important reusable assets in Fabric.

Benefits of Shared Semantic Models

  • Consistent metrics across reports
  • Reduced duplication
  • Centralized governance
  • Better performance and manageability

Managing Shared Semantic Models

Shared semantic models are:

  • Developed by analytics engineers
  • Published to Fabric workspaces
  • Shared using Build permission
  • Governed with:
    • RLS and OLS
    • Sensitivity labels
    • Endorsements (Promoted/Certified)
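
A common management task is repointing an existing report at a shared model so teams stop maintaining duplicate datasets. Below is a minimal sketch using the Power BI REST API's report rebind operation; all IDs and the token are placeholders:

```python
import requests

# Minimal sketch: rebind an existing report to a shared semantic model so it
# consumes the centralized logic instead of a duplicated dataset.
API = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}

requests.post(
    f"{API}/groups/<WORKSPACE_ID>/reports/<REPORT_ID>/Rebind",
    headers=headers,
    json={"datasetId": "<SHARED_MODEL_ID>"},
).raise_for_status()
```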

How These Assets Work Together

A common pattern:

  • PBIDS → Standardizes connection
  • Shared semantic model → Defines logic
  • PBIT → Standardizes report layout

This layered approach is frequently tested in exam scenarios.

Reusable Assets and the Development Lifecycle

Reusable assets support:

  • Faster development
  • Consistent deployments
  • Easier maintenance
  • Scalable self-service analytics

They align naturally with:

  • PBIP projects
  • Git version control
  • Deployment pipelines
  • XMLA-based automation

Common Exam Scenarios

You may be asked:

  • How to distribute a standardized report template → PBIT
  • How to ensure users connect to the correct data source → PBIDS
  • How to enforce consistent business logic → Shared semantic model
  • How to reduce duplicate datasets → Shared model + Build permission

Example:

Multiple teams need to create reports using the same metrics and layout.
Correct concepts: Shared semantic model and PBIT.

Best Practices to Remember

  • Centralize ownership of shared semantic models
  • Certify trusted reusable assets
  • Store templates and PBIDS files in source control
  • Avoid duplicating business logic in individual reports
  • Pair reusable assets with governance features

Key Exam Takeaways

  • Reusable assets improve consistency and scalability
  • PBIT files standardize report design
  • PBIDS files standardize data connections
  • Shared semantic models centralize business logic
  • All are core lifecycle tools in Fabric

Exam Tips

  • If a question focuses on standardization, reuse, or self-service at scale, think PBIT, PBIDS, and shared semantic models—and choose the one that matches the problem being solved.
  • Expect scenarios that test:
    • When to use PBIT vs PBIDS vs shared semantic models
    • Governance and consistency
    • Enterprise BI scalability
  • Quick memory aid:
    • PBIT = Layout + Model (no data)
    • PBIDS = Connection only
    • Shared model = Logic once, reports many

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of a Power BI template (.pbit) file?

A. Store report data for reuse
B. Share report layout and model structure without data
C. Store credentials securely
D. Enable real-time data refresh

Correct Answer: B

Explanation:
A .pbit file contains:

  • Report layout
  • Semantic model (tables, relationships, measures)
  • No data

It’s used to standardize report creation.


Question 2 (Multi-select)

Which components are included in a Power BI template (.pbit)? (Select all that apply.)

A. Report visuals
B. Data model schema
C. Data source credentials
D. DAX measures

Correct Answers: A, B, D

Explanation:

  • Templates include visuals, schema, relationships, and measures.
  • ❌ Credentials and data are never included.

Question 3 (Scenario-based)

Your organization wants users to quickly connect to approved data sources while preventing incorrect connection strings. Which reusable asset is BEST?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: C

Explanation:
PBIDS files:

  • Predefine connection details
  • Guide users to approved data sources
  • Improve governance and consistency

Question 4 (Single choice)

Which statement about Power BI data source (.pbids) files is TRUE?

A. They contain report visuals
B. They contain DAX measures
C. They define connection metadata only
D. They store dataset refresh schedules

Correct Answer: C

Explanation:
PBIDS files only store:

  • Data source type
  • Server/database info

They do NOT include visuals, data, or logic.

Question 5 (Scenario-based)

You want multiple reports to use the same curated dataset to ensure consistent KPIs. What should you implement?

A. Multiple PBIX files
B. Power BI templates
C. Shared semantic model
D. PBIDS files

Correct Answer: C

Explanation:
A shared semantic model allows:

  • Centralized logic
  • Single source of truth
  • Multiple reports connected via Live/Direct Lake

Question 6 (Multi-select)

Which benefits are provided by shared semantic models? (Select all that apply.)

A. Consistent calculations across reports
B. Reduced duplication of datasets
C. Independent refresh schedules per report
D. Centralized security management

Correct Answers: A, B, D

Explanation:

  • Shared models enforce consistency and reduce maintenance.
  • ❌ Refresh is managed at the model level, not per report.

Question 7 (Scenario-based)

You update a shared semantic model’s calculation logic. What is the impact?

A. Only new reports see the change
B. All connected reports reflect the change
C. Reports must be republished
D. Only the workspace owner sees updates

Correct Answer: B

Explanation:
All reports connected to a shared semantic model automatically reflect changes.


Question 8 (Single choice)

Which reusable asset BEST supports report creation without requiring Power BI Desktop modeling skills?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: D

Explanation:
Users can build reports directly on shared semantic models using existing fields and measures.


Question 9 (Scenario-based)

You want to standardize report branding, page layout, and slicers across teams. What should you distribute?

A. PBIDS file
B. Shared semantic model
C. PBIT file
D. XMLA script

Correct Answer: C

Explanation:
PBIT files are ideal for:

  • Visual consistency
  • Reusable layouts
  • Standard filters and slicers

Question 10 (Multi-select)

Which are BEST practices when managing reusable Power BI assets? (Select all that apply.)

A. Store PBIT and PBIDS files in version control
B. Update shared semantic models directly in production without testing
C. Document reusable asset usage
D. Combine shared semantic models with deployment pipelines

Correct Answers: A, C, D

Explanation:
Best practices emphasize:

  • Governance
  • Controlled updates
  • Documentation

❌ Direct production edits increase risk.