Category: Data Development

Create Views, Functions, and Stored Procedures

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Transform data
--> Create views, functions, and stored procedures

Creating views, functions, and stored procedures is a core data transformation and modeling skill for analytics engineers working in Microsoft Fabric. These objects help abstract complexity, improve reusability, enforce business logic, and optimize downstream analytics and reporting.

This section of the DP-600 exam focuses on when, where, and how to use these objects effectively across Fabric components such as Lakehouses, Warehouses, and SQL analytics endpoints.

Views

What are Views?

A view is a virtual table defined by a SQL query. It does not store data itself but presents data dynamically from underlying tables.
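
As a minimal T-SQL sketch (the dbo.FactSales and dbo.DimProduct tables and their columns are hypothetical), a view that presents an aggregated, analytics-ready result might look like this:

CREATE VIEW dbo.vw_SalesByProduct
AS
SELECT
    p.ProductName,
    SUM(s.SalesAmount) AS TotalSales          -- computed at query time; nothing is persisted
FROM dbo.FactSales AS s
INNER JOIN dbo.DimProduct AS p
    ON s.ProductKey = p.ProductKey
GROUP BY p.ProductName;

Once created, the view is queried like any table (for example, SELECT * FROM dbo.vw_SalesByProduct) and always reflects the current contents of the underlying tables.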

Where Views Are Used in Fabric

  • Fabric Data Warehouse
  • Lakehouse SQL analytics endpoint
  • Exposed to Power BI semantic models and other consumers

Common Use Cases

  • Simplify complex joins and transformations
  • Present curated, analytics-ready datasets
  • Enforce column-level or row-level filtering logic
  • Provide a stable schema over evolving raw data

Key Characteristics

  • Always reflect the latest data
  • Can be used like tables in SELECT statements
  • Improve maintainability and readability
  • Can support security patterns when combined with permissions
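
The last characteristic above can be sketched with a simple filtering view (table and column names are hypothetical): the view exposes only selected columns and rows, and consumers are granted SELECT on the view rather than on the base table.

CREATE VIEW dbo.vw_EmployeeDirectory
AS
SELECT
    EmployeeID,
    DisplayName,
    Department            -- sensitive columns such as salary are deliberately excluded
FROM dbo.DimEmployee
WHERE IsActive = 1;       -- only active employees are exposed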

Exam Tip

Know that views are ideal for logical transformations, not heavy compute or data persistence.

Functions

What are Functions?

Functions encapsulate reusable logic and return a value or a table. They help standardize calculations and transformations across queries.

Types of Functions (SQL)

  • Scalar functions: Return a single value (e.g., formatted date, calculated metric)
  • Table-valued functions (TVFs): Return a result set that behaves like a table
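
As a sketch of both types in T-SQL (the dbo.FactSales table is hypothetical, and support for specific function types may vary across Fabric engines):

-- Scalar function: returns a single value per call
CREATE FUNCTION dbo.fn_SafeDivide (@Numerator DECIMAL(18,2), @Denominator DECIMAL(18,2))
RETURNS DECIMAL(18,6)
AS
BEGIN
    RETURN CASE WHEN @Denominator = 0 THEN NULL
                ELSE @Numerator / @Denominator END;
END;
GO  -- batch separator: each CREATE FUNCTION must run in its own batch

-- Inline table-valued function: returns a parameterized result set
CREATE FUNCTION dbo.fn_SalesSince (@StartDate DATE)
RETURNS TABLE
AS
RETURN
(
    SELECT SalesOrderID, OrderDate, SalesAmount
    FROM dbo.FactSales
    WHERE OrderDate >= @StartDate
);

The scalar function is called in a SELECT list or expression, while the table-valued function is used in a FROM clause, for example SELECT * FROM dbo.fn_SalesSince('2025-01-01').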

Where Functions Are Used in Fabric

  • Fabric Warehouses
  • SQL analytics endpoints for Lakehouses

Common Use Cases

  • Standardized business calculations
  • Reusable transformation logic
  • Parameterized filtering or calculations
  • Cleaner and more modular SQL code

Key Characteristics

  • Improve consistency across queries
  • Can be referenced in views and stored procedures
  • May impact performance if overused in large queries

Exam Tip

Functions promote reuse and consistency, but should be used thoughtfully to avoid performance overhead.

Stored Procedures

What are Stored Procedures?

Stored procedures are precompiled SQL code blocks that can accept parameters and perform multiple operations.
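
A minimal T-SQL sketch of a parameterized procedure (the stg.Sales staging table and dbo.FactSales fact table are hypothetical) that combines conditional logic with a repeatable load:

CREATE PROCEDURE dbo.sp_LoadDailySales
    @LoadDate DATE
AS
BEGIN
    -- Conditional logic: skip the load if nothing has been staged for the date
    IF NOT EXISTS (SELECT 1 FROM stg.Sales WHERE OrderDate = @LoadDate)
        RETURN;

    -- Make the load idempotent by removing rows already loaded for the date
    DELETE FROM dbo.FactSales
    WHERE OrderDate = @LoadDate;

    -- Load the day's rows from staging into the fact table
    INSERT INTO dbo.FactSales (SalesOrderID, OrderDate, ProductKey, SalesAmount)
    SELECT SalesOrderID, OrderDate, ProductKey, SalesAmount
    FROM stg.Sales
    WHERE OrderDate = @LoadDate;
END;

The procedure is executed with EXEC dbo.sp_LoadDailySales @LoadDate = '2025-01-31', either ad hoc or as a step in a pipeline-orchestrated workflow.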

Where Stored Procedures Are Used in Fabric

  • Fabric Data Warehouses
  • SQL endpoints that support procedural logic

Common Use Cases

  • Complex transformation workflows
  • Batch processing logic
  • Conditional logic and control-of-flow (IF/ELSE, loops)
  • Data loading, validation, and orchestration steps

Key Characteristics

  • Can perform multiple SQL statements
  • Can accept input and output parameters
  • Improve performance by reducing repeated compilation
  • Support automation and operational workflows

Exam Tip

Stored procedures are best for procedural logic and orchestration, not ad-hoc analytics queries.

Choosing Between Views, Functions, and Stored Procedures

Object | Best Used For
Views | Simplifying data access and shaping datasets
Functions | Reusable calculations and logic
Stored Procedures | Complex, parameter-driven workflows

Understanding why you would choose one over another is frequently tested on the DP-600 exam.

Integration with Power BI and Analytics

  • Views are commonly consumed by Power BI semantic models
  • Functions help ensure consistent calculations across reports
  • Stored procedures are typically part of data preparation or orchestration, not directly consumed by reports

Governance and Best Practices

  • Use clear naming conventions (e.g., vw_, fn_, sp_)
  • Document business logic embedded in SQL objects
  • Minimize logic duplication across objects
  • Apply permissions carefully to control access
  • Balance reusability with performance considerations

What to Know for the DP-600 Exam

You should be comfortable with:

  • When to use views vs. functions vs. stored procedures
  • How these objects support data transformation
  • Their role in analytics-ready data preparation
  • How they integrate with Lakehouses, Warehouses, and Power BI
  • Performance and governance implications

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for and understand the usage scenario of keywords in exam questions to guide you
  • Expect scenario-based questions rather than direct definitions

1. What is the primary purpose of creating a view in a Fabric lakehouse or warehouse?

A. To permanently store transformed data
B. To execute procedural logic with parameters
C. To provide a virtual, query-based representation of data
D. To orchestrate batch data loads

Correct Answer: C

Explanation:
A view is a virtual table defined by a SQL query. It does not store data but dynamically presents data from underlying tables, making it ideal for simplifying access and shaping analytics-ready datasets.

2. Which Fabric component commonly exposes views directly to Power BI semantic models?

A. Eventhouse
B. SQL analytics endpoint
C. Dataflow Gen2
D. Real-Time hub

Correct Answer: B

Explanation:
The SQL analytics endpoint (for lakehouses and warehouses) exposes tables and views that Power BI semantic models can consume using SQL-based connectivity.

3. When should you use a scalar function instead of a view?

A. When you need to return a dataset with multiple rows
B. When you need to encapsulate reusable calculation logic
C. When you need to perform batch updates
D. When you want to persist transformed data

Correct Answer: B

Explanation:
Scalar functions are designed to return a single value and are ideal for reusable calculations such as formatting, conditional logic, or standardized metrics.

4. Which object type can return a result set that behaves like a table?

A. Scalar function
B. Stored procedure
C. Table-valued function
D. View index

Correct Answer: C

Explanation:
A table-valued function (TVF) returns a table and can be used in FROM clauses, similar to a view but with parameterization support.

5. Which scenario is the best use case for a stored procedure?

A. Creating a simplified reporting dataset
B. Applying row-level filters for security
C. Running conditional logic with multiple SQL steps
D. Exposing data to Power BI reports

Correct Answer: C

Explanation:
Stored procedures are best suited for procedural logic, including conditional branching, looping, and executing multiple SQL statements as part of a workflow.

6. Why are views commonly preferred over duplicating transformation logic in reports?

A. Views improve report rendering speed automatically
B. Views centralize and standardize transformation logic
C. Views permanently store transformed data
D. Views replace semantic models

Correct Answer: B

Explanation:
Views allow transformation logic to be defined once and reused consistently across multiple reports and consumers, improving maintainability and governance.

7. What is a potential downside of overusing functions in large SQL queries?

A. Increased storage costs
B. Reduced data freshness
C. Potential performance degradation
D. Loss of security enforcement

Correct Answer: C

Explanation:
Functions, especially scalar functions, can negatively impact query performance when used extensively on large datasets due to repeated execution per row.

8. Which object is most appropriate for parameter-driven data preparation steps in a warehouse?

A. View
B. Scalar function
C. Table
D. Stored procedure

Correct Answer: D

Explanation:
Stored procedures support parameters, control-of-flow logic, and multiple statements, making them ideal for complex, repeatable data preparation tasks.

9. How do views support governance and security in Microsoft Fabric?

A. By encrypting data at rest
B. By defining workspace-level permissions
C. By exposing only selected columns or filtered rows
D. By controlling OneLake storage access

Correct Answer: C

Explanation:
Views can limit the columns and rows exposed to users, helping implement logical data access patterns when combined with permissions and security models.

10. Which statement best describes how these objects fit into Fabric’s analytics lifecycle?

A. They replace Power BI semantic models
B. They are primarily used for real-time streaming
C. They prepare and standardize data for downstream analytics
D. They manage infrastructure-level security

Correct Answer: C

Explanation:
Views, functions, and stored procedures play a key role in transforming, standardizing, and preparing data for consumption by semantic models, reports, and analytics tools.

Choose Between a Lakehouse, Warehouse, or Eventhouse

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Choose Between a Lakehouse, Warehouse, or Eventhouse

One of the most important architectural decisions a Microsoft Fabric Analytics Engineer must make is selecting the right analytical store for a given workload. For the DP-600 exam, this topic tests your ability to choose between a Lakehouse, Warehouse, or Eventhouse based on data type, query patterns, latency requirements, and user personas.

Overview of the Three Options

Microsoft Fabric provides three primary analytics storage and query experiences:

Option | Primary Purpose
Lakehouse | Flexible analytics on files and tables using Spark and SQL
Warehouse | Enterprise-grade SQL analytics and BI reporting
Eventhouse | Real-time and near-real-time analytics on streaming data

Understanding why and when to use each is critical for DP-600 success.

Lakehouse

What Is a Lakehouse?

A Lakehouse combines the flexibility of a data lake with the structure of a data warehouse. Data is stored in Delta Lake format in OneLake and can be accessed using both Spark and SQL.

When to Choose a Lakehouse

Choose a Lakehouse when you need:

  • Flexible schema (schema-on-read or schema-on-write)
  • Support for data engineering and data science
  • Access to raw, curated, and enriched data
  • Spark-based transformations and notebooks
  • Mixed workloads (batch analytics, exploration, ML)

Key Characteristics

  • Supports files and tables
  • Uses Spark SQL and T-SQL endpoints
  • Ideal for ELT and advanced transformations
  • Easy integration with notebooks and pipelines

Exam signal words: flexible, raw data, Spark, data science, experimentation

Warehouse

What Is a Warehouse?

A Warehouse is a fully managed, SQL-first analytical store optimized for business intelligence and reporting. It enforces schema-on-write and provides a traditional relational experience.

When to Choose a Warehouse

Choose a Warehouse when you need:

  • Strong SQL-based analytics
  • High-performance reporting
  • Well-defined schemas and governance
  • Centralized enterprise BI
  • Compatibility with Power BI Import or DirectQuery

Key Characteristics

  • T-SQL only (no Spark)
  • Optimized for structured data
  • Best for star/snowflake schemas
  • Familiar experience for SQL developers

Exam signal words: enterprise BI, reporting, structured, governed, SQL-first

Eventhouse

What Is an Eventhouse?

An Eventhouse is optimized for real-time and streaming analytics, built on KQL (Kusto Query Language). It is designed to handle high-velocity event data.

When to Choose an Eventhouse

Choose an Eventhouse when you need:

  • Near-real-time or real-time analytics
  • Streaming data ingestion
  • Operational or telemetry analytics
  • Event-based dashboards and alerts

Key Characteristics

  • Uses KQL for querying
  • Integrates with Eventstreams
  • Handles massive ingestion rates
  • Optimized for time-series data

Exam signal words: streaming, telemetry, IoT, real-time, events

Choosing the Right Option (Exam-Critical)

The DP-600 exam often presents scenarios where multiple options could work, but only one best fits the requirements.

Decision Matrix

Requirement | Best Choice
Raw + curated data | Lakehouse
Complex Spark transformations | Lakehouse
Enterprise BI reporting | Warehouse
Strong governance and schemas | Warehouse
Streaming or telemetry data | Eventhouse
Near-real-time dashboards | Eventhouse
SQL-only users | Warehouse
Data science workloads | Lakehouse

Common Exam Scenarios

You may be asked to:

  • Choose a storage type for a new analytics solution
  • Migrate from traditional systems to Fabric
  • Support both engineers and analysts
  • Enable real-time monitoring
  • Balance governance with flexibility

Always identify:

  1. Data type (batch vs streaming)
  2. Latency requirements
  3. User personas
  4. Query language
  5. Governance needs

Best Practices to Remember

  • Use Lakehouse as a flexible foundation for analytics
  • Use Warehouse for polished, governed BI solutions
  • Use Eventhouse for real-time operational insights
  • Avoid forcing one option to handle all workloads
  • Let business requirements—not familiarity—drive the choice

Key Takeaway
For the DP-600 exam, choosing between a Lakehouse, Warehouse, or Eventhouse is about aligning data characteristics and access patterns with the right Fabric experience. Lakehouses provide flexibility, Warehouses deliver enterprise BI performance, and Eventhouses enable real-time analytics. The correct answer is almost always the one that best fits the scenario constraints.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for and understand the usage scenario of keywords in exam questions, with the following possible associations:
    • Spark, raw, experimentation → Lakehouse
    • Enterprise BI, governed, SQL reporting → Warehouse
    • Streaming, telemetry, real-time → Eventhouse
  • Expect scenario-based questions rather than direct definitions

1. Which Microsoft Fabric component is BEST suited for flexible analytics on both files and tables using Spark and SQL?

A. Warehouse
B. Eventhouse
C. Lakehouse
D. Semantic model

Correct Answer: C

Explanation:
A Lakehouse stores data in Delta format in OneLake and supports both Spark and SQL, making it ideal for flexible analytics across files and tables.

2. A team of data scientists needs to experiment with raw and curated data using notebooks. Which option should they choose?

A. Warehouse
B. Eventhouse
C. Semantic model
D. Lakehouse

Correct Answer: D

Explanation:
Lakehouses are designed for data engineering and data science workloads, offering Spark-based notebooks and flexible schema handling.

3. Which option is MOST appropriate for enterprise BI reporting with well-defined schemas and strong governance?

A. Lakehouse
B. Warehouse
C. Eventhouse
D. OneLake

Correct Answer: B

Explanation:
Warehouses are SQL-first, schema-on-write systems optimized for structured data, governance, and high-performance BI reporting.

4. A solution must support near-real-time analytics on streaming IoT telemetry data. Which Fabric component should be used?

A. Lakehouse
B. Warehouse
C. Eventhouse
D. Dataflow Gen2

Correct Answer: C

Explanation:
Eventhouses are optimized for high-velocity streaming data and real-time analytics using KQL.

5. Which query language is primarily used to analyze data in an Eventhouse?

A. T-SQL
B. Spark SQL
C. DAX
D. KQL

Correct Answer: D

Explanation:
Eventhouses are built on KQL (Kusto Query Language), which is optimized for querying event and time-series data.

6. A business analytics team requires fast dashboard performance and is familiar only with SQL. Which option best meets this requirement?

A. Lakehouse
B. Warehouse
C. Eventhouse
D. Spark notebook

Correct Answer: B

Explanation:
Warehouses provide a traditional SQL experience optimized for BI dashboards and reporting performance.

7. Which characteristic BEST distinguishes a Lakehouse from a Warehouse?

A. Lakehouses support Power BI
B. Warehouses store data in OneLake
C. Lakehouses support Spark-based processing
D. Warehouses cannot be governed

Correct Answer: C

Explanation:
Lakehouses uniquely support Spark-based processing, enabling advanced transformations and data science workloads.

8. A solution must store structured batch data and unstructured files in the same analytical store. Which option should be selected?

A. Warehouse
B. Eventhouse
C. Semantic model
D. Lakehouse

Correct Answer: D

Explanation:
Lakehouses support both structured tables and unstructured or semi-structured files within the same environment.

9. Which scenario MOST strongly indicates the need for an Eventhouse?

A. Monthly financial reporting
B. Slowly changing dimension modeling
C. Real-time operational monitoring
D. Ad hoc SQL analysis

Correct Answer: C

Explanation:
Eventhouses are designed for real-time analytics on streaming data, making them ideal for operational monitoring scenarios.

10. When choosing between a Lakehouse, Warehouse, or Eventhouse on the DP-600 exam, which factor is MOST important?

A. Personal familiarity with the tool
B. The default Fabric option
C. Data characteristics and latency requirements
D. Workspace size

Correct Answer: C

Explanation:
DP-600 emphasizes selecting the correct component based on data type (batch vs streaming), latency needs, user personas, and governance—not personal preference.

Ingest or Access Data as Needed

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Ingest or access data as needed

A core responsibility of a Microsoft Fabric Analytics Engineer is deciding how data should be brought into Fabric—or whether it should be brought in at all. For the DP-600 exam, this topic focuses on selecting the right ingestion or access pattern based on performance, freshness, cost, and governance requirements.

Ingest vs. Access: Key Concept

Before choosing a tool or method, understand the distinction:

  • Ingest data: Physically copy data into Fabric-managed storage (OneLake)
  • Access data: Query or reference data where it already lives, without copying

The exam frequently tests your ability to choose the most appropriate option—not just a working one.

Common Data Ingestion Methods in Microsoft Fabric

1. Dataflows Gen2

Best for:

  • Low-code ingestion and transformation
  • Reusable ingestion logic
  • Business-friendly data preparation

Key characteristics:

  • Uses Power Query Online
  • Supports scheduled refresh
  • Stores results in OneLake (Lakehouse or Warehouse)
  • Ideal for centralized, governed ingestion

Exam tip:
Use Dataflows Gen2 when reuse, transformation, and governance are priorities.

2. Data Pipelines (Copy Activity)

Best for:

  • High-volume or frequent ingestion
  • Orchestration across multiple sources
  • ELT-style workflows

Key characteristics:

  • Supports many source and sink types
  • Enables scheduling, dependencies, and retries
  • Minimal transformation (primarily copy)

Exam tip:
Choose pipelines when performance and orchestration matter more than transformation.

3. Notebooks (Spark)

Best for:

  • Complex transformations
  • Data science or advanced engineering
  • Custom ingestion logic

Key characteristics:

  • Full control using Spark (PySpark, Scala, SQL)
  • Suitable for large-scale processing
  • Writes directly to OneLake

Exam tip:
Notebooks are powerful but require engineering skills—don’t choose them for simple ingestion scenarios.

Accessing Data Without Ingesting

1. OneLake Shortcuts

Best for:

  • Avoiding data duplication
  • Reusing data across workspaces
  • Accessing external storage

Key characteristics:

  • Logical reference only (no copy)
  • Supports ADLS Gen2 and Amazon S3
  • Appears native in Lakehouse tables or files

Exam tip:
Shortcuts are often the best answer when the question mentions avoiding duplication or reducing storage cost.

2. DirectQuery

Best for:

  • Near-real-time data access
  • Large datasets that cannot be imported
  • Centralized source-of-truth systems

Key characteristics:

  • Queries run against the source system
  • Performance depends on source
  • Limited modeling flexibility compared to Import

Exam tip:
Expect trade-off questions involving DirectQuery vs. Import.

3. Real-Time Access (Eventstreams / KQL)

Best for:

  • Streaming and telemetry data
  • Operational and real-time analytics

Key characteristics:

  • Event-driven ingestion
  • Supports near-real-time dashboards
  • Often discovered via Real-Time hub

Exam tip:
Use real-time ingestion when freshness is measured in seconds, not hours.

Choosing the Right Approach (Exam-Critical)

You should be able to decide based on these factors:

Requirement | Best Option
Reusable ingestion logic | Dataflows Gen2
High-volume copy | Data pipelines
Complex transformations | Notebooks
Avoid duplication | OneLake shortcuts
Near real-time reporting | DirectQuery / Eventstreams
Governance and trust | Ingestion + endorsement

Governance and Security Considerations

  • Ingested data can inherit sensitivity labels
  • Access-based methods rely on source permissions
  • Workspace roles determine who can ingest or access data
  • Endorsed datasets should be preferred for reuse

DP-600 often frames ingestion questions within a governance context.

Common Exam Scenarios

You may be asked to:

  • Choose between ingesting data or accessing it directly
  • Identify when shortcuts are preferable to ingestion
  • Select the right tool for a specific ingestion pattern
  • Balance data freshness vs. performance
  • Reduce duplication across workspaces

Best Practices to Remember

  • Ingest when performance and modeling flexibility are required
  • Access when freshness, cost, or duplication is a concern
  • Centralize ingestion logic for reuse
  • Prefer Fabric-native patterns over external tools
  • Let business requirements drive architectural decisions

Key Takeaway
For the DP-600 exam, “Ingest or access data as needed” is about making intentional, informed choices. Microsoft Fabric provides multiple ways to bring data into analytics solutions, and the correct approach depends on scale, freshness, reuse, governance, and cost. Understanding why one method is better than another is far more important than memorizing features.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for and understand the usage scenario of keywords in exam questions (for example, low code/no code, large dataset, high-volume data, reuse, complex transformations)
  • Expect scenario-based questions rather than direct definitions

Also, keep in mind that …

  • DP-600 questions often include multiple valid options, but only one that best aligns with the scenario’s constraints. Always identify and consider factors such as:
    • Data volume
    • Freshness requirements
    • Reuse and duplication concerns
    • Transformation complexity

1. What is the primary difference between ingesting data and accessing data in Microsoft Fabric?

A. Ingested data cannot be secured
B. Accessed data is always slower
C. Ingesting copies data into OneLake, while accessing queries data in place
D. Accessed data requires a gateway

Correct Answer: C

Explanation:
Ingestion physically copies data into Fabric-managed storage (OneLake), while access-based approaches query or reference data where it already exists.

2. Which option is BEST when the goal is to avoid duplicating large datasets across multiple workspaces?

A. Import mode
B. Dataflows Gen2
C. OneLake shortcuts
D. Notebooks

Correct Answer: C

Explanation:
OneLake shortcuts allow data to be referenced without copying it, making them ideal for reuse and cost control.

3. A team needs reusable, low-code ingestion logic with scheduled refresh. Which Fabric feature should they use?

A. Spark notebooks
B. Data pipelines
C. Dataflows Gen2
D. DirectQuery

Correct Answer: C

Explanation:
Dataflows Gen2 provide Power Query–based ingestion with refresh scheduling and reuse across Fabric items.

4. Which ingestion method is MOST appropriate for complex transformations requiring custom logic?

A. Dataflows Gen2
B. Copy activity in pipelines
C. OneLake shortcuts
D. Spark notebooks

Correct Answer: D

Explanation:
Spark notebooks offer full control over transformation logic and are suited for complex, large-scale processing.

5. When should DirectQuery be preferred over Import mode?

A. When the dataset is small
B. When data freshness is critical
C. When transformations are complex
D. When performance must be maximized

Correct Answer: B

Explanation:
DirectQuery is preferred when near-real-time access to data is required, even though performance depends on the source system.

6. Which Fabric component is BEST suited for orchestrating high-volume data ingestion with dependencies and retries?

A. Dataflows Gen2
B. Data pipelines
C. Semantic models
D. Power BI Desktop

Correct Answer: B

Explanation:
Data pipelines are designed for orchestration, handling large volumes of data, scheduling, and dependency management.

7. A dataset is queried infrequently but must support advanced modeling features. Which approach is most appropriate?

A. DirectQuery
B. Access via shortcut
C. Import into OneLake
D. Eventstream ingestion

Correct Answer: C

Explanation:
Import mode supports full modeling capabilities and high query performance, making it suitable even for infrequently accessed data.

8. Which scenario best fits the use of real-time ingestion methods such as Eventstreams or KQL databases?

A. Monthly financial reporting
B. Static reference data
C. IoT telemetry and operational monitoring
D. Slowly changing dimensions

Correct Answer: C

Explanation:
Real-time ingestion is designed for continuous, event-driven data such as IoT telemetry and operational metrics.

9. Why might ingesting data be preferred over accessing it directly?

A. It always reduces storage costs
B. It eliminates the need for security
C. It improves performance and modeling flexibility
D. It avoids data refresh

Correct Answer: C

Explanation:
Ingesting data into OneLake enables faster query performance and full support for modeling features.

10. Which factor is MOST important when deciding between ingesting data and accessing it?

A. The color of the dashboard
B. The number of reports
C. Business requirements such as freshness, scale, and governance
D. The Fabric region

Correct Answer: C

Explanation:
The decision to ingest or access data should be driven by business needs, including performance, freshness, cost, and governance—not technical convenience alone.

Create a Data Connection in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Create a data connection

Creating data connections is a foundational skill for a Microsoft Fabric Analytics Engineer. In the DP-600 exam, this topic focuses on how to securely and efficiently connect Fabric workloads—such as Lakehouses, Warehouses, Dataflows Gen2, and semantic models—to a wide variety of data sources.

What a Data Connection Means in Microsoft Fabric

A data connection defines how Fabric authenticates to, accesses, and retrieves data from a source system. It includes:

  • The data source type
  • Connection details (server, database, endpoint, file path, etc.)
  • Authentication method
  • Optional privacy and credential reuse settings

Once created, a data connection can often be reused across multiple items within a workspace.

Common Data Sources in Fabric

For the exam, you should be familiar with connecting to the following categories of data sources:

1. Azure and Microsoft Data Sources

  • Azure SQL Database
  • Azure Synapse (dedicated and serverless pools)
  • Azure Data Lake Storage Gen2
  • Azure Blob Storage
  • OneLake (Fabric-native storage)
  • Power BI semantic models (DirectQuery)

2. On-Premises Data Sources

  • SQL Server
  • Oracle
  • Other relational databases

These typically require an On-premises Data Gateway.

3. Files and Semi-Structured Data

  • CSV, JSON, Parquet, Excel
  • Files stored in OneLake, ADLS Gen2, SharePoint, or local file systems

Where Data Connections Are Created

In Microsoft Fabric, data connections can be created from several entry points:

  • Lakehouse: Add data via shortcuts or ingestion
  • Warehouse: Connect external data or ingest via pipelines
  • Dataflows Gen2: Define connections as part of Power Query Online
  • Pipelines: Configure source connections in copy activities
  • Semantic models: Connect via Import or DirectQuery

Understanding where the connection is configured is important for exam scenarios.

Authentication Methods

The DP-600 exam commonly tests authentication concepts. Be familiar with:

  • Microsoft Entra ID (OAuth) – Recommended and most secure
  • Service principal – Common for automation and CI/CD
  • Account key / Shared Access Signature (SAS) – Often used for storage
  • Username and password – Less secure, sometimes legacy

You should also understand when credentials are:

  • Stored at the connection level
  • Managed per workspace
  • Reused across multiple items

Gateways and Connectivity Modes

On-Premises Data Gateway

Required when connecting Fabric to on-premises sources. Key points:

  • Can be standard or personal (standard is preferred)
  • Must be online for refresh and query operations
  • Uses outbound connections only

Connectivity Modes

  • Import: Data is loaded into Fabric storage
  • DirectQuery: Queries run against the source system
  • Shortcut-based access: Data remains external but appears native in OneLake

Security and Governance Considerations

When creating data connections, Fabric enforces governance through:

  • Workspace roles (Viewer, Contributor, Member, Admin)
  • Credential isolation per workspace
  • Sensitivity labels inherited from data sources (when applicable)

Exam questions may test your ability to choose the most secure and scalable connection method.

Best Practices (Exam-Relevant)

  • Prefer Entra ID authentication over credentials or keys
  • Use OneLake shortcuts to avoid unnecessary data duplication
  • Centralize connections in Dataflows Gen2 for reuse
  • Validate gateway availability for on-premises sources
  • Align connection methods with performance needs (Import vs DirectQuery)

How This Appears on the DP-600 Exam

You may be asked to:

  • Identify the correct data connection method for a scenario
  • Choose the appropriate authentication type
  • Determine when a gateway is required
  • Decide where to create a connection for reuse and governance
  • Troubleshoot refresh or connectivity issues

Key Takeaway
Creating data connections in Microsoft Fabric is about more than just accessing data—it’s about security, performance, reusability, and governance. For the DP-600 exam, focus on understanding source types, authentication options, gateways, and where connections are defined within the Fabric ecosystem.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for and understand the usage scenario of keywords in exam questions (for example, gateway, authentication, reuse, DirectQuery vs Import)
  • Expect scenario-based questions rather than direct definitions

1. Which authentication method is generally recommended when creating data connections in Microsoft Fabric?

A. Username and password
B. Shared Access Signature (SAS)
C. Microsoft Entra ID (OAuth)
D. Account key

Correct Answer: C

Explanation:
Microsoft Entra ID (OAuth) is the recommended authentication method because it provides centralized identity management, better security, support for conditional access, and easier credential rotation compared to passwords or keys.

2. When is an On-premises Data Gateway required in Microsoft Fabric?

A. When connecting to Azure SQL Database
B. When connecting to OneLake
C. When connecting to an on-premises SQL Server
D. When connecting to Azure Data Lake Storage Gen2

Correct Answer: C

Explanation:
An On-premises Data Gateway is required when Fabric needs to access data sources that are hosted on-premises. Cloud-based sources such as Azure SQL Database or ADLS Gen2 do not require a gateway.

3. Which Fabric feature allows external data to appear as if it is stored in OneLake without copying the data?

A. Import mode
B. DirectQuery mode
C. OneLake shortcuts
D. Data pipelines

Correct Answer: C

Explanation:
OneLake shortcuts provide a logical reference to external storage locations (such as ADLS Gen2 or S3) without physically moving or duplicating the data.

4. You want multiple Fabric items in the same workspace to reuse a single data connection. Where should you create the connection?

A. In each semantic model
B. In Dataflows Gen2
C. In Power BI Desktop only
D. In Excel

Correct Answer: B

Explanation:
Dataflows Gen2 are designed for centralized data ingestion and transformation, making them ideal for creating reusable data connections across multiple Fabric items.

5. Which connectivity mode loads data into Fabric storage and provides the best query performance?

A. DirectQuery
B. Live connection
C. Shortcut-based access
D. Import

Correct Answer: D

Explanation:
Import mode copies data into Fabric-managed storage, enabling high-performance queries and full modeling capabilities at the cost of data freshness.

6. Which statement about DirectQuery connections in Fabric is true?

A. Data is stored in OneLake
B. Queries are always faster than Import mode
C. Queries are executed against the source system
D. A gateway is never required

Correct Answer: C

Explanation:
With DirectQuery, queries are sent directly to the source system at runtime. Performance depends on the source, and a gateway may be required for on-premises sources.

7. Which role is required to create or edit data connections within a Fabric workspace?

A. Viewer
B. Contributor
C. Member
D. Admin

Correct Answer: B

Explanation:
Users must have at least Contributor permissions to create or modify data connections. Viewers have read-only access and cannot manage connections.

8. Which file formats are commonly supported when creating file-based data connections in Fabric?

A. CSV only
B. CSV, JSON, Parquet, Excel
C. TXT only
D. XML only

Correct Answer: B

Explanation:
Microsoft Fabric supports a wide range of structured and semi-structured file formats, including CSV, JSON, Parquet, and Excel, especially when stored in OneLake or ADLS Gen2.

9. What is the primary security benefit of using a service principal for data connections?

A. Faster query performance
B. No need for a gateway
C. Automated, non-interactive authentication
D. Unlimited access to all workspaces

Correct Answer: C

Explanation:
Service principals enable secure, automated authentication scenarios (such as CI/CD pipelines) without relying on individual user credentials.

10. A data refresh in Fabric fails because credentials are missing. What is the most likely cause?

A. The dataset is in Import mode
B. The gateway is offline or misconfigured
C. The semantic model contains calculated columns
D. The file format is unsupported

Correct Answer: B

Explanation:
If a data source requires an On-premises Data Gateway and the gateway is offline or incorrectly configured, Fabric cannot access the credentials, causing refresh failures.

Improve DAX performance

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models (25-30%)
--> Optimize enterprise-scale semantic models
--> Improve DAX performance

Effective DAX (Data Analysis Expressions) is essential for high-performance semantic models in Microsoft Fabric. As datasets and business logic become more complex, inefficient DAX can slow down query execution and degrade report responsiveness. This article explains why DAX performance matters, common performance pitfalls, and best practices to optimize DAX in enterprise-scale semantic models.


Why DAX Performance Matters

In Fabric semantic models (Power BI datasets + Direct Lake / Import / composite models), DAX is used to define:

  • Measures (dynamic calculations)
  • Calculated columns (row-level expressions)
  • Calculated tables (derived data structures)

When improperly written, DAX can become a bottleneck — especially on large models or highly interactive reports (many slicers, visuals, etc.). Optimizing DAX ensures:

  • Faster query execution
  • Better user experience
  • Lower compute consumption
  • More efficient use of memory

The DP-600 exam tests your ability to identify and apply performance-aware DAX patterns.


Understand DAX Execution Engines

DAX queries are executed by two engines:

  • Formula Engine (FE): handles row-by-row logic and expressions that cannot be pushed to the Storage Engine
  • Storage Engine (SE): performs fast, columnar scans and aggregations over compressed data

Performance improves when more computation can be done in the Storage Engine (columnar operations) rather than the Formula Engine (row-by-row logic).

Rule of thumb: Favor patterns that minimize work done in the Formula Engine.


Common DAX Performance Anti-Patterns

1. Repeated Calculations Without Variables

Example:

[Total Sales] + [Total Cost] - [Total Discount]

If Total Sales, Total Cost, and Total Discount all compute the same sub-expressions repeatedly, the engine may evaluate redundant logic multiple times.

Anti-Pattern:

Repeated expressions without variables.


2. Nested Iterator Functions

Using iterators like SUMX or FILTER on large tables many times in a measure increases compute overhead.

Example:

SUMX(
    FILTER(FactSales, FactSales[SalesAmount] > 0),
    FactSales[Quantity] * FactSales[UnitPrice]
)

Filtering inside iterators and then iterating again adds overhead.


3. Large Row Context with Filters

Complex FILTER expressions that operate on large intermediate tables will push computation into the Formula Engine, which is slower.


4. Frequent Use of EARLIER

While useful, EARLIER is often replaced with clearer, faster patterns using variables or iterator functions.


Best Practices for Optimizing DAX


1. Use Variables (VAR)

Variables reduce redundant computations, enhance readability, and often improve performance:

Measure Optimized =
VAR BaseTotal = SUM(FactSales[SalesAmount])
RETURN
IF(BaseTotal > 0, BaseTotal, BLANK())

Benefits:

  • Computed once per filter context
  • Reduces repeated expression evaluation

2. Favor Storage Engine Over Formula Engine

Use functions that can be processed by the Storage Engine:

  • SUM, COUNT, AVERAGE, MIN, MAX run faster
  • Avoid SUMX when a plain SUM suffices

Example:

Total Sales = SUM(FactSales[SalesAmount])

Over:

Total Sales =
SUMX(FactSales, FactSales[SalesAmount])


3. Simplify Filter Expressions

When possible, use simpler filter arguments:

Better:

CALCULATE([Total Sales], DimDate[Year] = 2025)

Instead of:

CALCULATE([Total Sales], FILTER(DimDate, DimDate[Year] = 2025))

Why?
The simpler condition is more likely to be pushed to the Storage Engine without extra row processing.


4. Use TRUE/FALSE Filters

When filtering on a Boolean or condition:

Better:

CALCULATE([Total Sales], FactSales[IsActive] = TRUE())

Instead of:

CALCULATE([Total Sales], FILTER(FactSales, FactSales[IsActive] = TRUE()))


5. Limit Column and Table Scans

  • Remove unused columns from the model
  • Avoid high-cardinality columns in calculations where unnecessary
  • Use star schema design to improve filter propagation

6. Reuse Measures

Instead of duplicating logic:

Total Profit =
[Total Sales] - [Total Cost]

Reuse basic measures within more complex logic.


7. Prefer Measures Over Calculated Columns

Measures calculate at query time and respect filter context; calculated columns are evaluated during refresh. Use calculated columns only when necessary.


8. Reduce Iterators on Large Tables

If SUMX is needed for row-level expressions, consider summarizing first or using aggregation tables.


9. Understand Evaluation Context

Complex measures often inadvertently alter filter context. Use functions like:

  • ALL
  • REMOVEFILTERS
  • KEEPFILTERS

…carefully, as they affect performance and results.


10. Leverage DAX Studio or Performance Analyzer

While not directly tested with UI steps, knowing when to use tools to diagnose DAX is helpful:

  • Performance Analyzer identifies slow visuals
  • DAX Studio exposes query plans and engine timings

Performance Patterns and Anti-Patterns

Pattern | Good / Bad | Notes
VAR usage | Good | Makes measures efficient and readable
SUM over SUMX | Good if applicable | Leverages Storage Engine
FILTER inside SUMX | Bad | Forces row context early
EARLIER / nested row context | Bad | Hard to optimize, slows performance
Simple CALCULATE filters | Good | More likely to fold

Example Before / After

Before (inefficient):

Measure = 
SUMX(
    FILTER(FactSales, FactSales[SalesAmount] > 1000),
    FactSales[Quantity] * FactSales[UnitPrice]
)

After (optimized):

Measure =
VAR FilteredSales =
    CALCULATETABLE(
        FactSales,
        FactSales[SalesAmount] > 1000
    )
RETURN
SUMX(
    FilteredSales,
    FactSales[Quantity] * FactSales[UnitPrice]    -- column references keep their FactSales lineage
)

Why better?
Explicit filtering via CALCULATETABLE often pushes more work to the Storage Engine than iterating within FILTER.


Exam-Focused Takeaways

For DP-600 questions related to DAX performance:

  • Identify inefficient row context patterns
  • Prefer variables and simple aggregations
  • Favor Storage Engine–friendly functions
  • Avoid unnecessary nested iterators
  • Recognize when a measure should be rewritten for performance

Summary

Improving DAX performance is about writing efficient calculations and avoiding patterns that force extra processing in the Formula Engine. By using variables, minimizing iterator overhead, simplifying filter expressions, and leveraging star schema design, you can significantly improve query responsiveness — a key capability for enterprise semantic models and the DP-600 exam.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for and understand the usage scenario of keywords in exam questions to guide you
  • Expect scenario-based questions rather than direct definitions

Question 1

You have a DAX measure that repeats the same complex calculation multiple times. Which change is most likely to improve performance?

A. Convert the calculation into a calculated column
B. Use a DAX variable (VAR) to store the calculation result
C. Replace CALCULATE with SUMX
D. Enable bidirectional relationships

Correct Answer: B

Explanation:
DAX variables evaluate their expression once per query context and reuse the result. This avoids repeated execution of the same logic and reduces Formula Engine overhead, making variables one of the most effective performance optimization techniques.


Question 2

Which aggregation function is generally the most performant when no row-by-row logic is required?

A. SUMX
B. AVERAGEX
C. SUM
D. FILTER

Correct Answer: C

Explanation:
Native aggregation functions like SUM, COUNT, and AVERAGE are optimized to run in the Storage Engine, which is much faster than iterator-based functions such as SUMX that require row-by-row evaluation in the Formula Engine.


Question 3

Why is this DAX pattern potentially slow on large tables?

CALCULATE([Total Sales], FILTER(FactSales, FactSales[SalesAmount] > 1000))

A. FILTER disables relationship filtering
B. FILTER forces evaluation in the Formula Engine
C. CALCULATE cannot push filters to the Storage Engine
D. The expression produces incorrect results

Correct Answer: B

Explanation:
The FILTER function iterates over rows, forcing Formula Engine execution. When possible, using simple Boolean expressions inside CALCULATE (e.g., FactSales[SalesAmount] > 1000) allows the Storage Engine to handle filtering more efficiently.


Question 4

Which CALCULATE filter expression is more performant?

A. FILTER(Sales, Sales[Year] = 2024)
B. Sales[Year] = 2024
C. ALL(Sales[Year])
D. VALUES(Sales[Year])

Correct Answer: B

Explanation:
Simple Boolean filters allow DAX to push work to the Storage Engine, while FILTER requires row-by-row evaluation. This distinction is frequently tested on the DP-600 exam.


Question 5

Which practice helps reduce the Formula Engine workload?

A. Using nested iterator functions
B. Replacing measures with calculated columns
C. Reusing base measures in more complex calculations
D. Increasing column cardinality

Correct Answer: C

Explanation:
Reusing base measures promotes efficient evaluation plans and avoids duplicated logic. Nested iterators and high cardinality columns increase computational complexity and slow down queries.


Question 6

Which modeling choice can indirectly improve DAX query performance?

A. Using snowflake schemas
B. Increasing the number of calculated columns
C. Removing unused columns and tables
D. Enabling bidirectional relationships by default

Correct Answer: C

Explanation:
Removing unused columns reduces memory usage, dictionary size, and scan costs. Smaller models lead to faster Storage Engine operations and improved overall query performance.


Question 7

Which DAX pattern is considered a performance anti-pattern?

A. Using measures instead of calculated columns
B. Using SUMX when SUM would suffice
C. Using star schema relationships
D. Using single-direction filters

Correct Answer: B

Explanation:
Iterator functions like SUMX should only be used when row-level logic is required. Replacing simple aggregations with iterators unnecessarily shifts work to the Formula Engine.


Question 8

Why can excessive use of EARLIER negatively impact performance?

A. It prevents relationship traversal
B. It creates complex nested row contexts
C. It only works in measures
D. It disables Storage Engine scans

Correct Answer: B

Explanation:
EARLIER introduces nested row contexts that are difficult for the DAX engine to optimize. Modern DAX best practices recommend using variables instead of EARLIER.


Question 9

Which relationship configuration can negatively affect DAX performance if overused?

A. Single-direction filtering
B. Many-to-one relationships
C. Bidirectional filtering
D. Active relationships

Correct Answer: C

Explanation:
Bidirectional relationships increase filter propagation paths and query complexity. While useful in some scenarios, overuse can significantly degrade performance in enterprise-scale models.


Question 10

Which tool should you use to identify slow visuals caused by inefficient DAX measures?

A. Power Query Editor
B. Model View
C. Performance Analyzer
D. Deployment Pipelines

Correct Answer: C

Explanation:
Performance Analyzer captures visual query durations, DAX query times, and rendering times, making it the primary tool for diagnosing DAX and visual performance issues in Power BI and Fabric semantic models.

Choose Between Direct Lake on OneLake and Direct Lake on SQL Endpoints

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models (25-30%)
--> Optimize enterprise-scale semantic models
--> Choose between Direct Lake on OneLake and Direct Lake on SQL endpoints

In Microsoft Fabric, Direct Lake is a high-performance semantic model storage mode that allows Power BI and Fabric semantic models to query data directly from OneLake without importing it into VertiPaq. When implementing Direct Lake, you must choose where the semantic model reads from, either:

  • Direct Lake on OneLake
  • Direct Lake on SQL endpoints

Understanding the differences, trade-offs, and use cases for each option is critical for optimizing enterprise-scale semantic models, and this topic appears explicitly in the DP-600 exam blueprint.


Direct Lake on OneLake

What It Is

Direct Lake on OneLake connects the semantic model directly to Delta tables stored in OneLake, bypassing SQL engines entirely. Queries operate directly on Parquet/Delta files using the Fabric Direct Lake engine.

Key Characteristics

  • Reads Delta tables directly from OneLake
  • No dependency on a SQL query engine
  • Near-Import performance with zero data duplication
  • Minimal latency between data ingestion and reporting
  • Requires supported Delta table structures and data types

Advantages

  • Best performance for large-scale analytics
  • Always reflects the latest data written to OneLake
  • Eliminates Import refresh overhead
  • Ideal for lakehouse-centric architectures

Limitations

  • Some complex DAX patterns may cause fallback
  • Requires schema compatibility with Direct Lake
  • Less flexibility for SQL-based transformations

Typical Use Cases

  • Enterprise lakehouse analytics
  • High-volume fact tables
  • Near-real-time reporting
  • Fabric-native data pipelines

Direct Lake on SQL Endpoints

What It Is

Direct Lake on SQL endpoints connects the semantic model to the SQL analytics endpoint of a Lakehouse or Warehouse, while still using Direct Lake storage mode behind the scenes.

Instead of reading files directly, the semantic model relies on the SQL endpoint to expose the data.

Key Characteristics

  • Queries go through the SQL endpoint
  • Still benefits from Direct Lake storage
  • Enables SQL views and transformations
  • Slightly higher latency than pure OneLake access

Advantages

  • Supports SQL-based modeling (views, joins, calculated columns)
  • Easier integration with existing SQL logic
  • Familiar experience for SQL-first teams
  • Useful when business logic is already defined in SQL
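
As a sketch of the SQL reuse described above (table names are hypothetical), business logic can live in a view defined through the SQL analytics endpoint and then be surfaced to the semantic model; whether a given view is served in Direct Lake mode or falls back to DirectQuery depends on the model and the objects it references.

CREATE VIEW dbo.vw_ActiveCustomers
AS
SELECT
    c.CustomerKey,
    c.CustomerName,
    c.Region
FROM dbo.DimCustomer AS c
WHERE c.IsActive = 1;     -- business rule kept in SQL rather than re-implemented in DAX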

Limitations

  • Additional query layer may impact performance
  • Less efficient than direct file access
  • SQL endpoint availability becomes a dependency

Typical Use Cases

  • Organizations with strong SQL development practices
  • Reuse of existing SQL views and transformations
  • Gradual migration from Warehouse or SQL models
  • Mixed BI and ad-hoc SQL workloads

Key Comparison Summary

Aspect | Direct Lake on OneLake | Direct Lake on SQL Endpoint
Data access | Direct file access | Via SQL analytics endpoint
Performance | Highest | Slightly lower
SQL dependency | None | Required
Schema flexibility | Lower | Higher
Transformation style | Lakehouse / Spark | SQL-based
Ideal for | Scale & performance | SQL reuse & flexibility

Choosing Between the Two (Exam-Focused Guidance)

On the DP-600 exam, questions typically focus on architectural intent and performance optimization:

Choose Direct Lake on OneLake when:

  • Performance is the top priority
  • Data is already modeled in Delta tables
  • You want the simplest, most scalable architecture
  • Near-real-time analytics are required

Choose Direct Lake on SQL endpoints when:

  • You need SQL views or transformations
  • Existing logic already exists in SQL
  • Teams are more comfortable with SQL than Spark
  • Some flexibility is preferred over maximum performance

Exam Tip 💡

If a question emphasizes:

  • Maximum performance, minimal latency, or scalability/large-scale analytics → Direct Lake on OneLake
  • SQL views, SQL transformations, or SQL reuse → Direct Lake on SQL endpoints

Expect scenario-based questions where both options are technically valid, but only one best aligns with the business and performance requirements.


Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Look for and understand the usage scenario of keywords in exam questions to guide you
  • Expect scenario-based questions rather than direct definitions

Question 1

A company has Delta tables stored in OneLake and wants the lowest possible query latency for Power BI reports without using SQL views. Which option should they choose?

A. Import mode
B. DirectQuery on SQL endpoint
C. Direct Lake on SQL endpoint
D. Direct Lake on OneLake

Correct Answer: D

Explanation:
Direct Lake on OneLake reads Delta tables directly from OneLake without a SQL layer, delivering the best performance and lowest latency.


Question 2

Which requirement would most strongly favor Direct Lake on SQL endpoints over Direct Lake on OneLake?

A. Maximum performance
B. Real-time data visibility
C. Use of SQL views for business logic
D. Minimal infrastructure dependencies

Correct Answer: C

Explanation:
Direct Lake on SQL endpoints allows semantic models to consume SQL views and transformations, making it ideal when business logic is defined in SQL.


Question 3

What is a key architectural difference between Direct Lake on OneLake and Direct Lake on SQL endpoints?

A. Only OneLake supports Delta tables
B. SQL endpoints require data import
C. OneLake access bypasses the SQL engine
D. SQL endpoints cannot be used with semantic models

Correct Answer: C

Explanation:
Direct Lake on OneLake reads Delta files directly from storage, while SQL endpoints introduce an additional SQL query layer.


Question 4

A Fabric semantic model uses Direct Lake on OneLake. Under which condition might it fall back to DirectQuery?

A. The model contains calculated columns
B. The dataset exceeds 1 TB
C. The Delta table schema is unsupported
D. The SQL endpoint is unavailable

Correct Answer: C

Explanation:
If the Delta table schema or data types are not supported by Direct Lake, Fabric automatically falls back to DirectQuery.


Question 5

Which scenario is best suited for Direct Lake on SQL endpoints?

A. High-volume streaming telemetry
B. SQL-first team reusing existing warehouse views
C. Near-real-time dashboards on raw lake data
D. Large fact tables optimized for scan performance

Correct Answer: B

Explanation:
Direct Lake on SQL endpoints is ideal when teams rely on SQL views and want to reuse existing SQL logic.


Question 6

Which statement about performance is most accurate?

A. SQL endpoints always outperform OneLake
B. OneLake always requires Import mode
C. Direct Lake on OneLake typically offers better performance
D. Direct Lake on SQL endpoints does not use Direct Lake

Correct Answer: C

Explanation:
Direct Lake on OneLake avoids the SQL layer, resulting in faster query execution in most scenarios.


Question 7

A Power BI model must reflect new data immediately after ingestion into OneLake. Which option best supports this requirement?

A. Import mode
B. DirectQuery
C. Direct Lake on SQL endpoint
D. Direct Lake on OneLake

Correct Answer: D

Explanation:
Direct Lake on OneLake reads data directly from Delta tables and reflects changes immediately without refresh.


Question 8

Which dependency exists when using Direct Lake on SQL endpoints that does not exist with Direct Lake on OneLake?

A. Delta Lake support
B. VertiPaq compression
C. SQL analytics endpoint availability
D. Semantic model compatibility

Correct Answer: C

Explanation:
Direct Lake on SQL endpoints depends on the SQL analytics endpoint being available, while OneLake access does not.


Question 9

From a DP-600 exam perspective, which factor most often determines the correct choice between these two options?

A. Dataset size alone
B. Whether SQL transformations are required
C. Number of report users
D. Power BI license type

Correct Answer: B

Explanation:
Exam questions typically focus on whether SQL logic (views, joins, transformations) is needed, which drives the choice.


Question 10

You are designing an enterprise semantic model focused on scalability and minimal complexity. The data is already curated as Delta tables. What is the best choice?

A. Import mode
B. DirectQuery on SQL endpoint
C. Direct Lake on SQL endpoint
D. Direct Lake on OneLake

Correct Answer: D

Explanation:
Direct Lake on OneLake offers the simplest architecture with the highest scalability and performance when Delta tables are already prepared.


Configure version control for a workspace in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Configure version control for a workspace

Version control in Microsoft Fabric enables teams to track changes, collaborate safely, and manage the lifecycle of analytics assets using source control practices. Fabric integrates workspace items with Git repositories, bringing DevOps discipline to analytics development.

For the DP-600 exam, you should understand how Git integration works in Fabric, what items are supported, how changes flow, and common governance scenarios.

What Is Workspace Version Control in Fabric?

Workspace version control allows you to:

  • Connect a Fabric workspace to a Git repository
  • Store item definitions as code artifacts
  • Track changes through commits, branches, and pull requests
  • Support collaborative and auditable development

This capability is often referred to as Git integration for Fabric workspaces.

Supported Source Control Platforms

Microsoft Fabric supports:

  • Azure DevOps (ADO) Git repositories
  • GitHub repositories

Key points:

  • Azure DevOps is the most established integration and the one exam questions typically reference
  • Repositories must already exist
  • Authentication for Azure DevOps is handled via Microsoft Entra ID

Exam note: Expect Azure DevOps to be the default answer unless stated otherwise.

What Items Can Be Version Controlled?

Common Fabric items that support version control include:

  • Semantic models
  • Reports
  • Lakehouses
  • Warehouses
  • Notebooks
  • Data pipelines
  • Dataflows Gen2

Items are serialized into files and folders in the Git repo, allowing:

  • Diffing
  • History tracking
  • Rollbacks

How to Configure Version Control for a Workspace

At a high level, the process is:

  1. Open the Fabric workspace settings
  2. Enable Git integration
  3. Select:
    • Azure DevOps organization
    • Project
    • Repository
    • Branch
  4. Choose a workspace folder structure
  5. Initialize synchronization

Once configured:

  • Workspace changes can be committed to Git
  • Repo changes can be synced back into the workspace

How Changes Flow Between Workspace and Git

From Workspace to Git

  • Users make changes in Fabric (e.g., update a report)
  • Changes are committed to the connected branch
  • Commit history tracks who changed what and when

From Git to Workspace

  • Changes merged into the branch can be pulled into Fabric
  • Enables controlled deployment across environments

Important exam concept:
Synchronization is not automatic—users must explicitly commit and sync.
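For automation scenarios, this explicit commit-and-sync flow can also be scripted against the Fabric REST API Git endpoints (Get Status, Commit To Git, Update From Git). The sketch below is illustrative only: the workspace ID and token are placeholders, request bodies are abbreviated to the most common fields, and the Fabric portal remains the primary way to perform these actions.

```python
# Minimal sketch: scripting the explicit commit/sync flow with the Fabric
# REST API Git endpoints. A valid Microsoft Entra ID token with the right
# scopes is assumed; IDs are placeholders and error handling is omitted.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-guid>"
HEADERS = {"Authorization": "Bearer <entra-id-access-token>",
           "Content-Type": "application/json"}

def git_status():
    # Lists uncommitted workspace changes plus the current commit hashes
    url = f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/git/status"
    return requests.get(url, headers=HEADERS).json()

def commit_workspace_to_git(comment):
    # Workspace -> Git: commit all pending changes to the connected branch
    # (body abbreviated; the full request also supports selective commits)
    url = f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/git/commitToGit"
    return requests.post(url, headers=HEADERS,
                         json={"mode": "All", "comment": comment})

def update_workspace_from_git(workspace_head, remote_commit_hash):
    # Git -> workspace: sync the latest committed changes into the workspace;
    # both hashes come from the git/status response
    url = f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/git/updateFromGit"
    return requests.post(url, headers=HEADERS,
                         json={"workspaceHead": workspace_head,
                               "remoteCommitHash": remote_commit_hash})

status = git_status()
print(status.get("changes", []))  # nothing moves until you commit or sync explicitly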

Branching and Environment Strategy

A common lifecycle pattern:

  • Development workspace → linked to a dev branch
  • Test workspace → linked to a test branch
  • Production workspace → linked to a main branch

This supports:

  • Code reviews
  • Pull requests
  • Controlled promotion of changes

Permissions and Governance Considerations

To configure and use version control:

  • Users need sufficient workspace permissions (typically Admin or Member)
  • Users also need Git repository access
  • Git permissions are managed outside Fabric

Version control complements—but does not replace:

  • Workspace-level access controls
  • Item-level permissions
  • Endorsements and sensitivity labels

Benefits of Version Control in Fabric

Version control enables:

  • Collaboration among multiple developers
  • Change traceability and auditability
  • Rollback of problematic changes
  • CI/CD-style deployment patterns
  • Alignment with enterprise DevOps practices

These benefits are a frequent theme in DP-600 scenario questions.

Common Exam Scenarios

You may be asked to:

  • Identify when Git integration is appropriate
  • Choose the correct platform for source control
  • Understand how changes move between Git and Fabric
  • Design a dev/test/prod workspace strategy
  • Troubleshoot why changes are not reflected (sync not performed)

Example:

Multiple developers need to work on the same semantic model with change tracking.
Correct concept: Configure workspace version control with Git.

Key Exam Takeaways

  • Fabric supports Git-based version control at the workspace level
  • Azure DevOps is the primary supported platform
  • Changes require explicit commit and sync
  • Version control supports structured development and deployment
  • It is a core part of the analytics development lifecycle

Exam Tips

  • If a question mentions tracking changes, collaboration, rollback, or DevOps practices, think workspace version control with Git.
  • If it mentions moving changes between environments, think branches and multiple workspaces.
  • Know who can configure it → Workspace Admins
  • Understand Git integration flow
  • Expect scenario questions comparing:
    • Git vs deployment pipelines
    • Collaboration vs governance
  • Remember:
    • JSON-based artifacts
    • Not all items are supported
    • No automatic commits

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of configuring version control for a Fabric workspace?

A. Improve query execution performance
B. Enable collaboration, change tracking, and rollback
C. Enforce row-level security
D. Automatically deploy content to production

Correct Answer: B

Explanation:
Version control enables source control integration, allowing teams to track changes, collaborate safely, and roll back when needed.


Question 2 (Multi-select)

Which version control systems can be integrated with Microsoft Fabric workspaces? (Select all that apply.)

A. Azure DevOps Git repositories
B. GitHub repositories
C. OneDrive for Business
D. SharePoint document libraries

Correct Answers: A, B

Explanation:
Fabric supports Git integration using Azure DevOps and GitHub. OneDrive and SharePoint are not supported for workspace version control.


Question 3 (Scenario-based)

A team wants to manage Power BI reports, semantic models, and dataflows using pull requests and branching. What should they configure?

A. Deployment pipelines
B. Sensitivity labels
C. Workspace version control with Git
D. Incremental refresh

Correct Answer: C

Explanation:
Git-based workspace version control enables branching, pull requests, and code reviews.


Question 4 (Single choice)

Which workspace role is REQUIRED to configure version control for a workspace?

A. Viewer
B. Contributor
C. Member
D. Admin

Correct Answer: D

Explanation:
Only workspace Admins can connect a workspace to a Git repository.


Question 5 (Scenario-based)

After connecting a workspace to a Git repository, where are Fabric items stored?

A. As binary files
B. As JSON-based artifact definitions
C. As SQL scripts
D. As Excel files

Correct Answer: B

Explanation:
Fabric artifacts are stored as JSON files, making them suitable for source control and comparison.


Question 6 (Multi-select)

Which items can be included in workspace version control? (Select all that apply.)

A. Reports
B. Semantic models
C. Dataflows Gen2
D. Dashboards

Correct Answers: A, B, C

Explanation:
Reports, semantic models, and dataflows are supported. Dashboards are typically excluded from version control scenarios.


Question 7 (Scenario-based)

A developer modifies a semantic model directly in the Fabric workspace while Git integration is enabled. What happens NEXT?

A. The change is automatically committed
B. The change is rejected
C. The workspace shows uncommitted changes
D. The change is immediately deployed to production

Correct Answer: C

Explanation:
Changes made in the workspace appear as pending/uncommitted changes until explicitly committed to the repository.


Question 8 (Single choice)

What is the relationship between workspace version control and deployment pipelines?

A. They are the same feature
B. Version control replaces deployment pipelines
C. They complement each other
D. Deployment pipelines require version control

Correct Answer: C

Explanation:
Version control handles source management, while deployment pipelines manage promotion across environments.


Question 9 (Scenario-based)

Your organization wants to prevent accidental overwrites when multiple developers edit the same item. Which feature BEST helps?

A. Row-level security
B. Sensitivity labels
C. Git branching and pull requests
D. Incremental refresh

Correct Answer: C

Explanation:
Git workflows enable controlled collaboration through branches, reviews, and merges.


Question 10 (Fill in the blank)

When version control is enabled, Fabric workspace changes must be ________ to the repository and ________ to update the workspace from Git.

Correct Answer:
Committed, synced (or pulled)

Explanation:
Changes flow both ways:

  • Commit workspace → Git
  • Sync Git → workspace

Create and configure deployment pipelines

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and configure deployment pipelines

Deployment pipelines in Microsoft Fabric provide a structured, governed way to promote analytics content across environments—typically Development, Test, and Production. They are a core lifecycle management feature that helps teams deploy changes safely, consistently, and with minimal risk. For the DP-600 exam, you should understand what deployment pipelines are, how they are configured, what they support, and how they differ from Git-based version control.

What Are Deployment Pipelines?

A deployment pipeline is a Fabric feature that:

  • Connects multiple workspaces into an ordered promotion flow
  • Enables controlled deployment of items between environments
  • Supports validation and testing before production release

Pipelines are especially important for enterprise-scale analytics solutions.

Typical Pipeline Structure

A standard Fabric pipeline consists of three stages:

  1. Development
    • Active development
    • Frequent changes
    • Used by engineers and analysts
  2. Test
    • Validation and user acceptance testing
    • Data and logic verification
    • Limited access
  3. Production
    • Certified, trusted content
    • Broad consumer access
    • Minimal direct changes

Each stage is linked to a separate Fabric workspace.

Creating a Deployment Pipeline

At a high level, the process is:

  1. Create a deployment pipeline in Microsoft Fabric
  2. Assign a workspace to each stage:
    • Dev workspace
    • Test workspace
    • Prod workspace
  3. Configure pipeline settings
  4. Control who can deploy between stages

Once created, the pipeline provides a visual interface showing item differences across stages.
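The portal is the usual way to complete these steps, but the same setup can be scripted. Below is a minimal sketch using the Power BI REST API pipeline operations (Create Pipeline and Assign Workspace); the token and workspace GUIDs are placeholders, and stage order 0/1/2 corresponds to Development/Test/Production.

```python
# Minimal sketch: creating a deployment pipeline and assigning one
# workspace per stage through the Power BI REST API. Token acquisition
# and error handling are omitted; IDs are placeholders.
import requests

PBI_API = "https://api.powerbi.com/v1.0/myorg"
HEADERS = {"Authorization": "Bearer <entra-id-access-token>",
           "Content-Type": "application/json"}

# 1. Create the pipeline
pipeline = requests.post(
    f"{PBI_API}/pipelines",
    headers=HEADERS,
    json={"displayName": "Sales Analytics Pipeline"},
).json()
pipeline_id = pipeline["id"]

# 2. Assign a workspace to each stage (0 = Dev, 1 = Test, 2 = Prod)
stage_workspaces = {
    0: "<dev-workspace-guid>",
    1: "<test-workspace-guid>",
    2: "<prod-workspace-guid>",
}
for stage_order, workspace_id in stage_workspaces.items():
    requests.post(
        f"{PBI_API}/pipelines/{pipeline_id}/stages/{stage_order}/assignWorkspace",
        headers=HEADERS,
        json={"workspaceId": workspace_id},
    )
```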

What Items Can Be Deployed Through Pipelines?

Deployment pipelines support deployment of many Fabric items, including:

  • Semantic models
  • Reports and dashboards
  • Dataflows Gen2
  • Lakehouses and Warehouses (supported scenarios)
  • Other supported analytics artifacts

Exam note:
Not every Fabric item supports pipeline deployment equally—expect questions to focus on Power BI and core analytics items.

How Deployment Works

Comparing Changes

  • Pipelines show differences between stages
  • You can review what will change before deploying

Deploying Content

  • Deploy from Dev → Test
  • Validate
  • Deploy from Test → Prod

Deployments:

  • Copy item definitions
  • Can update existing items or create new ones
  • Do not automatically move workspace permissions
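Deployments can also be triggered programmatically. The sketch below uses the Power BI REST API Deploy All operation; it is a hedged example in which the pipeline ID and token are placeholders and the option names are assumed to match the published deployment options contract.

```python
# Minimal sketch: promoting all supported items from one stage to the
# next (Dev -> Test here) with the Power BI REST API "Deploy All"
# operation. Deployment copies item definitions, not workspace
# permissions. The token and pipeline ID are placeholders.
import requests

PBI_API = "https://api.powerbi.com/v1.0/myorg"
HEADERS = {"Authorization": "Bearer <entra-id-access-token>",
           "Content-Type": "application/json"}
PIPELINE_ID = "<pipeline-guid>"

response = requests.post(
    f"{PBI_API}/pipelines/{PIPELINE_ID}/deployAll",
    headers=HEADERS,
    json={
        "sourceStageOrder": 0,  # 0 = Development; the target is the next stage (Test)
        "note": "Promote sprint 12 changes",
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # update items that already exist
        },
    },
)
print(response.status_code)  # deployment runs asynchronously; poll the operation if needed
```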

Deployment Rules and Parameters

Pipelines support deployment rules, such as:

  • Changing data source connections per environment
  • Switching parameters between Dev, Test, and Prod
  • Avoiding hard-coded environment values

This is critical for:

  • Separating development and production data
  • Supporting safe testing

Pipelines vs Git Integration (Exam Comparison)

This distinction is frequently tested.

Feature            | Deployment Pipelines  | Git Integration
-------------------|-----------------------|--------------------
Purpose            | Environment promotion | Source control
Focus              | Deployment            | Versioning
Tracks history     | No                    | Yes
Supports branching | No                    | Yes
Typical use        | Dev → Test → Prod     | Code collaboration

Key insight:
They are complementary, not competing features.

Permissions and Governance

To use pipelines:

  • Users need appropriate pipeline permissions
  • Workspace access is still required
  • Production deployments are often restricted to a small group

Pipelines support governance by:

  • Reducing direct changes in production
  • Enforcing controlled release processes
  • Improving auditability

Common Exam Scenarios

You may be asked to:

  • Choose pipelines for controlled promotion of reports
  • Identify when pipelines are preferable to manual publishing
  • Combine pipelines with Git and PBIP
  • Configure different data sources per environment
  • Prevent accidental production changes

Example:

A report must be tested before being released to executives.
Correct concept: Use a deployment pipeline with Dev, Test, and Prod stages.

Best Practices to Remember

  • Use separate workspaces per environment
  • Restrict production deployment permissions
  • Combine pipelines with:
    • PBIP projects
    • Git integration
    • Endorsements and certification
  • Avoid direct editing in production

Key Exam Takeaways

  • Deployment pipelines manage content promotion across environments
  • They connect multiple Fabric workspaces
  • Pipelines support comparison, validation, and controlled deployment
  • They do not replace Git-based version control
  • A core feature of the Fabric analytics lifecycle

Exam Tips

  • If a question focuses on moving content safely from development to production, the correct answer is deployment pipelines.
  • If it focuses on tracking changes or collaboration, the answer is Git or PBIP.
  • Know how pipelines support:
    • Dev/Test/Prod lifecycle
    • Governance & change control
    • Environment-specific configuration
    • Enterprise-scale BI practices
  • Common exam traps:
    • Confusing workspace roles with deploy permissions
    • Assuming pipelines manage security or performance
    • Forgetting deployment rules

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of a deployment pipeline in Microsoft Fabric?

A. Schedule dataset refreshes
B. Promote content across lifecycle environments
C. Enable row-level security
D. Optimize DAX performance

Correct Answer: B

Explanation:
Deployment pipelines are designed to promote content across environments (for example, Development → Test → Production) in a controlled and governed manner.

  • ❌ A: Refresh scheduling is handled separately
  • ❌ C: Security is not the primary purpose
  • ❌ D: Performance tuning is unrelated

Question 2 (Multi-select)

Which stages are available by default in a Fabric deployment pipeline? (Select all that apply.)

A. Development
B. Test
C. Production
D. Sandbox

Correct Answers: A, B, C

Explanation:
Fabric deployment pipelines use a three-stage lifecycle:

  • Development
  • Test
  • Production

There is no default Sandbox stage.


Question 3 (Scenario-based)

A team wants analysts to freely modify reports, while only approved changes reach production. Which pipeline stage should analysts primarily work in?

A. Production
B. Test
C. Development
D. Any stage

Correct Answer: C

Explanation:
The Development stage is intended for:

  • Frequent changes
  • Experimentation
  • Initial validation

Higher stages are more controlled.


Question 4 (Single choice)

Which permission is required to deploy content from one stage to the next in a deployment pipeline?

A. Viewer
B. Contributor
C. Admin
D. Pipeline deploy permission

Correct Answer: D

Explanation:
Deploying content requires explicit pipeline deployment permissions, not just workspace roles.

  • ❌ Admin alone is not sufficient
  • ❌ Contributor may edit but not deploy

Question 5 (Scenario-based)

You deploy a semantic model from Test to Production. What happens to data source connections by default?

A. They are deleted
B. They remain unchanged
C. They can be overridden per stage
D. They must be manually reconfigured

Correct Answer: C

Explanation:
Deployment pipelines support parameter and data source rules, allowing environment-specific connections.


Question 6 (Multi-select)

Which items can be deployed using deployment pipelines? (Select all that apply.)

A. Reports
B. Semantic models
C. Dashboards
D. Notebooks

Correct Answers: A, B, C

Explanation:
Deployment pipelines support Power BI artifacts, including:

  • Reports
  • Semantic models
  • Dashboards

❌ Notebooks are Fabric artifacts but are not deployed via Power BI deployment pipelines.


Question 7 (Scenario-based)

A deployment shows warnings that some items are skipped. What is the MOST likely cause?

A. The workspace is full
B. Unsupported artifacts exist
C. The dataset is too large
D. Git integration is disabled

Correct Answer: B

Explanation:
Unsupported or incompatible artifacts (for example, unsupported report types) may be skipped during deployment.


Question 8 (Single choice)

Which feature allows different environments to use different data sources during deployment?

A. Row-level security
B. Dynamic format strings
C. Deployment rules
D. Incremental refresh

Correct Answer: C

Explanation:
Deployment rules allow:

  • Data source switching
  • Parameter overrides
  • Environment-specific configuration

Question 9 (Scenario-based)

You want production users to access only certified content. How do deployment pipelines help?

A. By enforcing sensitivity labels
B. By promoting tested content only
C. By encrypting production reports
D. By disabling edit access

Correct Answer: B

Explanation:
Deployment pipelines ensure:

  • Content is validated in Test
  • Only approved changes reach Production

They support trust and governance, not encryption or labeling.


Question 10 (Multi-select)

Which best practices apply when configuring deployment pipelines? (Select all that apply.)

A. Restrict deploy permissions
B. Use separate data sources per stage
C. Allow all users to deploy to Production
D. Validate content in Test before Production

Correct Answers: A, B, D

Explanation:
Best practices include:

  • Limited deploy access
  • Environment-specific configurations
  • Mandatory testing before production

❌ Allowing everyone to deploy defeats governance.


Perform impact analysis of downstream dependencies from lakehouses, data warehouses, dataflows, and semantic models in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Perform impact analysis of downstream dependencies from lakehouses,
data warehouses, dataflows, and semantic models

Impact analysis in Microsoft Fabric helps analytics engineers understand how changes to upstream data assets affect downstream items such as datasets, reports, dashboards, notebooks, and pipelines. It is a critical lifecycle practice that reduces the risk of breaking analytics solutions when making schema, logic, or data changes.

For the DP-600 exam, you should understand what impact analysis is, which Fabric tools support it, what dependencies are tracked, and how to use it in real-world lifecycle scenarios.

What Is Impact Analysis?

Impact analysis answers the question:

“If I change or delete this item, what else will be affected?”

It allows you to:

  • Identify downstream dependencies
  • Assess risk before making changes
  • Communicate potential impacts to stakeholders
  • Support safe development and deployment practices

Impact analysis is observational and informational—it does not enforce controls.

Where Impact Analysis Is Used in Fabric

Impact analysis applies across many Fabric items, including:

  • Lakehouses
  • Data Warehouses
  • Dataflows Gen2
  • Semantic models
  • Reports and dashboards
  • Notebooks and pipelines

These items form a connected analytics graph, which Fabric can visualize.

Lineage View: The Core Tool for Impact Analysis

The primary tool for impact analysis in Fabric is Lineage View.

What Lineage View Shows

  • Upstream data sources
  • Transformations and processing steps
  • Downstream consumers
  • Relationships between items

Lineage view provides a visual map of dependencies across workloads.

Impact Analysis by Asset Type

Lakehouses

Changing a Lakehouse can impact:

  • Notebooks reading tables
  • Semantic models using Direct Lake
  • Dataflows writing or reading data
  • Reports built on dependent models

Common risk: Dropping or renaming a column.

Data Warehouses

Warehouse changes may affect:

  • Views and SQL queries
  • Semantic models using DirectQuery
  • Reports and dashboards
  • External tools

Exam insight: Schema changes are a common source of downstream failures.

Dataflows Gen2

Dataflows often sit between raw data and analytics.

Changes can impact:

  • Lakehouses or Warehouses they load into
  • Semantic models consuming curated tables
  • Pipelines orchestrating refreshes

Semantic Models

Semantic models are among the most sensitive assets.

Changes may affect:

  • Reports and dashboards
  • Excel workbooks
  • Composite models
  • End-user self-service analytics

Exam note: Removing measures or renaming fields is high risk.

How to Perform Impact Analysis (High Level)

  1. Select the item (Lakehouse, Warehouse, Dataflow, or Semantic Model)
  2. Open Lineage view
  3. Review downstream dependencies
  4. Identify:
    • Reports
    • Datasets
    • Pipelines
    • Other dependent items
  5. Communicate or mitigate risk before making changes
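Lineage view is the primary tool for these steps, but dependency metadata can also be pulled programmatically. One complement, sketched below under the assumption that you hold tenant admin permissions, is the Power BI admin scanner API (getInfo with lineage enabled), which returns the item relationships you would otherwise read off the lineage graph.

```python
# Minimal sketch: pulling lineage metadata programmatically with the
# Power BI admin scanner API, as a complement to the Lineage view UI.
# Requires admin permissions; the token and workspace ID are placeholders
# and the polling logic is simplified.
import time
import requests

ADMIN_API = "https://api.powerbi.com/v1.0/myorg/admin"
HEADERS = {"Authorization": "Bearer <admin-access-token>",
           "Content-Type": "application/json"}

# 1. Request a scan that includes lineage between items
scan = requests.post(
    f"{ADMIN_API}/workspaces/getInfo?lineage=True&datasourceDetails=True",
    headers=HEADERS,
    json={"workspaces": ["<workspace-guid>"]},
).json()
scan_id = scan["id"]

# 2. Wait for the scan to finish, then fetch the result
while requests.get(f"{ADMIN_API}/workspaces/scanStatus/{scan_id}",
                   headers=HEADERS).json()["status"] != "Succeeded":
    time.sleep(5)

result = requests.get(f"{ADMIN_API}/workspaces/scanResult/{scan_id}",
                      headers=HEADERS).json()

# 3. Inspect which reports depend on a given semantic model (dataset)
for ws in result.get("workspaces", []):
    for report in ws.get("reports", []):
        print(report.get("name"), "->", report.get("datasetId"))
```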

Impact Analysis in the Development Lifecycle

Impact analysis is typically performed:

  • Before deploying changes
  • Before modifying schemas
  • Before deleting items
  • During troubleshooting

It supports:

  • Safe Git commits
  • Controlled pipeline deployments
  • Production stability

Common Exam Scenarios

You may see questions such as:

  • A column change breaks multiple reports → impact analysis was skipped
  • An engineer needs to know which reports use a dataset → lineage view
  • A Lakehouse schema update affects downstream models → review dependencies
  • A dataset should not be modified due to executive reports → high downstream impact

Example:

Before removing a table from a semantic model, what should you do?
Correct concept: Perform impact analysis using lineage view.

Impact Analysis vs Deployment Pipelines

These concepts are related but distinct.

Feature  | Impact Analysis | Deployment Pipelines
---------|-----------------|---------------------
Purpose  | Risk assessment | Controlled promotion
Enforced | No              | Yes
Timing   | Before changes  | During deployment
Tool     | Lineage view    | Pipeline UI

Best Practices to Remember

  • Always check lineage before schema changes
  • Pay extra attention to semantic models and certified items
  • Communicate impacts to report owners
  • Pair impact analysis with:
    • Version control
    • Development pipelines
    • Endorsements and certification

Key Exam Takeaways

  • Impact analysis identifies downstream dependencies
  • Lineage view is the primary tool in Fabric
  • Applies to Lakehouses, Warehouses, Dataflows, and Semantic Models
  • Supports safe lifecycle and governance practices
  • A common scenario-based exam topic

Final Exam Tip

  • If a question asks what will break if I change this, the answer is impact analysis via lineage view.
  • If it asks how to safely move changes, the answer is pipelines or Git.
  • Expect questions that test:
    • When to perform impact analysis
    • Which items are affected by changes
    • Operational decision-making before deployments
  • Common traps:
    • Confusing impact analysis with lineage documentation
    • Assuming Fabric blocks breaking changes automatically
    • Forgetting semantic models are often the most impacted layer

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of impact analysis in Microsoft Fabric?

A. Improve query performance
B. Identify downstream objects affected by a change
C. Enforce data security policies
D. Reduce data refresh frequency

Correct Answer: B

Explanation:
Impact analysis helps you understand what items depend on a given artifact, so you can assess the risk of changes.

  • ❌ A: Performance tuning is separate
  • ❌ C: Security is not the focus
  • ❌ D: Refresh tuning is unrelated

Question 2 (Multi-select)

Which Fabric items can be analyzed for downstream dependencies? (Select all that apply.)

A. Lakehouses
B. Data warehouses
C. Dataflows
D. Semantic models

Correct Answers: A, B, C, D

Explanation:
Microsoft Fabric supports dependency tracking across all major analytical artifacts, enabling end-to-end lineage visibility.


Question 3 (Scenario-based)

You plan to rename a column in a lakehouse table. Which Fabric feature should you use FIRST?

A. Version control
B. Deployment pipeline
C. Impact analysis
D. Incremental refresh

Correct Answer: C

Explanation:
Renaming a column may break:

  • Semantic models
  • SQL queries
  • Reports

Impact analysis identifies what will be affected before the change.


Question 4 (Single choice)

Where do you access impact analysis for an item in Fabric?

A. Power BI Desktop
B. Microsoft Purview portal
C. Item settings in the Fabric workspace
D. Azure DevOps

Correct Answer: C

Explanation:
Impact analysis is accessible directly from the item context or settings within a Fabric workspace.

  • ❌ Purview focuses on governance/catalog
  • ❌ DevOps is not used for lineage

Question 5 (Scenario-based)

A dataflow loads data into a lakehouse that feeds multiple semantic models. What does impact analysis show?

A. Only the lakehouse
B. Only the semantic models
C. All downstream dependencies
D. Only refresh schedules

Correct Answer: C

Explanation:
Impact analysis provides a full dependency graph, showing all downstream items affected by changes.


Question 6 (Multi-select)

Which changes typically REQUIRE impact analysis before execution? (Select all that apply.)

A. Dropping columns
B. Renaming tables
C. Changing data types
D. Adding a new report page

Correct Answers: A, B, C

Explanation:
Structural changes can break dependencies. Adding a report page does not affect downstream items.


Question 7 (Scenario-based)

A semantic model is used by several reports and dashboards. What happens if you delete the model without impact analysis?

A. Nothing; reports are cached
B. Reports automatically reconnect
C. Reports and dashboards break
D. Fabric blocks the deletion

Correct Answer: C

Explanation:
Deleting a semantic model removes the data source for:

  • Reports
  • Dashboards

Impact analysis helps prevent such disruptions.


Question 8 (Single choice)

Which view best represents impact analysis results?

A. Tabular grid
B. SQL execution plan
C. Dependency graph
D. DAX query view

Correct Answer: C

Explanation:
Impact analysis is presented as a visual dependency graph, showing upstream and downstream relationships.


Question 9 (Scenario-based)

Which role MOST benefits from performing impact analysis regularly?

A. Report consumers
B. Workspace admins and data engineers
C. End-user analysts
D. External auditors

Correct Answer: B

Explanation:
Admins and engineers are responsible for:

  • Schema changes
  • Deployments
  • Stability

Impact analysis supports safe operational changes.


Question 10 (Multi-select)

Which best practices apply when using impact analysis? (Select all that apply.)

A. Perform before structural changes
B. Use in conjunction with deployment pipelines
C. Skip for minor schema updates
D. Communicate findings to stakeholders

Correct Answers: A, B, D

Explanation:
Impact analysis should:

  • Precede schema changes
  • Inform deployment decisions
  • Be communicated to stakeholders

❌ “Minor” changes can still break dependencies.


Create and Update Reusable Assets, including Power BI template (.pbit) files, Power BI data source (.pbids) files, and shared semantic models in Microsoft Fabric

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Maintain a data analytics solution
--> Maintain the analytics development lifecycle
--> Create and update reusable assets, including Power BI template (.pbit)
files, Power BI data source (.pbids) files, and shared semantic models

Reusable assets are a key lifecycle concept in Microsoft Fabric and Power BI. They enable consistency, scalability, and efficiency by allowing teams to standardize how data is connected, modeled, and visualized across multiple solutions.

For the DP-600 exam, you should understand what reusable assets are, how to create and manage them, and when each type is appropriate.

What Are Reusable Assets?

Reusable assets are analytics artifacts designed to be:

  • Used by multiple users or teams
  • Reapplied across projects
  • Centrally governed and maintained

Common reusable assets include:

  • Power BI template (.pbit) files
  • Power BI data source (.pbids) files
  • Shared semantic models

Power BI Template Files (.pbit)

What Is a PBIT File?

A .pbit file is a Power BI template that contains:

  • Report layout and visuals
  • Data model structure (tables, relationships, measures)
  • Parameters and queries (without data)

It does not include actual data.

When to Use PBIT Files

PBIT files are ideal when:

  • Standardizing report design and metrics
  • Distributing reusable report frameworks
  • Supporting self-service analytics at scale
  • Onboarding new analysts

Creating and Updating PBIT Files

  • Create a report in Power BI Desktop
  • Remove data (if present)
  • Save as Power BI Template (.pbit)
  • Store in source control or shared repository
  • Update centrally and redistribute as needed

Power BI Data Source Files (.pbids)

What Is a PBIDS File?

A .pbids file is a JSON-based file that defines:

  • Data source connection details
  • Server, database, or endpoint information
  • Authentication type (but not credentials)

Opening a PBIDS file launches Power BI Desktop and guides users through connecting to the correct data source.
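As a concrete illustration, the sketch below generates a .pbids file from Python. The server and database names are hypothetical placeholders, and the JSON structure follows the published .pbids format (version, connections, connection details).

```python
# Minimal sketch: generating a .pbids file that points analysts at an
# approved SQL endpoint. Server/database values are hypothetical
# placeholders; credentials are never stored in a .pbids file.
import json

pbids = {
    "version": "0.1",
    "connections": [
        {
            "details": {
                "protocol": "tds",  # SQL Server / Azure SQL-style (TDS) endpoint
                "address": {
                    "server": "contoso-warehouse.datawarehouse.fabric.microsoft.com",
                    "database": "SalesWarehouse",
                },
            },
            "mode": "DirectQuery",  # optional hint; omit to let the user choose
        }
    ],
}

with open("ApprovedSalesWarehouse.pbids", "w", encoding="utf-8") as f:
    json.dump(pbids, f, indent=2)
```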

When to Use PBIDS Files

PBIDS files are useful for:

  • Standardizing data connections
  • Reducing configuration errors
  • Guiding business users to approved sources
  • Supporting governed self-service analytics

Managing PBIDS Files

  • Create manually or export from Power BI Desktop
  • Store centrally (e.g., Git, SharePoint)
  • Update when connection details change
  • Pair with shared semantic models where possible

Shared Semantic Models

What Are Shared Semantic Models?

Shared semantic models are centrally managed datasets that:

  • Define business logic, measures, and relationships
  • Serve as a single source of truth
  • Are reused across multiple reports

They are one of the most important reusable assets in Fabric.

Benefits of Shared Semantic Models

  • Consistent metrics across reports
  • Reduced duplication
  • Centralized governance
  • Better performance and manageability

Managing Shared Semantic Models

Shared semantic models are:

  • Developed by analytics engineers
  • Published to Fabric workspaces
  • Shared using Build permission
  • Governed with:
    • RLS and OLS
    • Sensitivity labels
    • Endorsements (Promoted/Certified)
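In a Fabric notebook, a shared model's logic can also be reused directly rather than re-implemented. The sketch below uses the semantic-link (sempy) library, assuming it is available in the notebook environment; the model, measure, and column names shown are hypothetical.

```python
# Minimal sketch (Fabric notebook): reusing a shared semantic model's
# centrally defined measures with semantic-link (sempy) instead of
# re-creating the DAX logic in each notebook or report.
import sempy.fabric as fabric

# Discover the measures the shared model already defines
measures = fabric.list_measures(dataset="Sales Model")
print(measures.head())

# Evaluate a governed measure grouped by a model column -- the logic
# lives once in the shared semantic model, not in this notebook
revenue_by_region = fabric.evaluate_measure(
    dataset="Sales Model",
    measure="Total Revenue",
    groupby_columns=["Geography[Region]"],
)
print(revenue_by_region)
```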

How These Assets Work Together

A common pattern:

  • PBIDS → Standardizes connection
  • Shared semantic model → Defines logic
  • PBIT → Standardizes report layout

This layered approach is frequently tested in exam scenarios.

Reusable Assets and the Development Lifecycle

Reusable assets support:

  • Faster development
  • Consistent deployments
  • Easier maintenance
  • Scalable self-service analytics

They align naturally with:

  • PBIP projects
  • Git version control
  • Development pipelines
  • XMLA-based automation

Common Exam Scenarios

You may be asked:

  • How to distribute a standardized report template → PBIT
  • How to ensure users connect to the correct data source → PBIDS
  • How to enforce consistent business logic → Shared semantic model
  • How to reduce duplicate datasets → Shared model + Build permission

Example:

Multiple teams need to create reports using the same metrics and layout.
Correct concepts: Shared semantic model and PBIT.

Best Practices to Remember

  • Centralize ownership of shared semantic models
  • Certify trusted reusable assets
  • Store templates and PBIDS files in source control
  • Avoid duplicating business logic in individual reports
  • Pair reusable assets with governance features

Key Exam Takeaways

  • Reusable assets improve consistency and scalability
  • PBIT files standardize report design
  • PBIDS files standardize data connections
  • Shared semantic models centralize business logic
  • All are core lifecycle tools in Fabric

Exam Tips

  • If a question focuses on standardization, reuse, or self-service at scale, think PBIT, PBIDS, and shared semantic models—and choose the one that matches the problem being solved.
  • Expect scenarios that test:
    • When to use PBIT vs PBIDS vs shared semantic models
    • Governance and consistency
    • Enterprise BI scalability
  • Quick memory aid:
    • PBIT = Layout + Model (no data)
    • PBIDS = Connection only
    • Shared model = Logic once, reports many

Practice Questions

Question 1 (Single choice)

What is the PRIMARY purpose of a Power BI template (.pbit) file?

A. Store report data for reuse
B. Share report layout and model structure without data
C. Store credentials securely
D. Enable real-time data refresh

Correct Answer: B

Explanation:
A .pbit file contains:

  • Report layout
  • Semantic model (tables, relationships, measures)
  • No data

It’s used to standardize report creation.


Question 2 (Multi-select)

Which components are included in a Power BI template (.pbit)? (Select all that apply.)

A. Report visuals
B. Data model schema
C. Data source credentials
D. DAX measures

Correct Answers: A, B, D

Explanation:

  • Templates include visuals, schema, relationships, and measures.
  • ❌ Credentials and data are never included.

Question 3 (Scenario-based)

Your organization wants users to quickly connect to approved data sources while preventing incorrect connection strings. Which reusable asset is BEST?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: C

Explanation:
PBIDS files:

  • Predefine connection details
  • Guide users to approved data sources
  • Improve governance and consistency

Question 4 (Single choice)

Which statement about Power BI data source (.pbids) files is TRUE?

A. They contain report visuals
B. They contain DAX measures
C. They define connection metadata only
D. They store dataset refresh schedules

Correct Answer: C

Explanation:
PBIDS files only store:

  • Data source type
  • Server/database info

They do NOT include visuals, data, or logic.

Question 5 (Scenario-based)

You want multiple reports to use the same curated dataset to ensure consistent KPIs. What should you implement?

A. Multiple PBIX files
B. Power BI templates
C. Shared semantic model
D. PBIDS files

Correct Answer: C

Explanation:
A shared semantic model allows:

  • Centralized logic
  • Single source of truth
  • Multiple reports connected via Live/Direct Lake

Question 6 (Multi-select)

Which benefits are provided by shared semantic models? (Select all that apply.)

A. Consistent calculations across reports
B. Reduced duplication of datasets
C. Independent refresh schedules per report
D. Centralized security management

Correct Answers: A, B, D

Explanation:

  • Shared models enforce consistency and reduce maintenance.
  • ❌ Refresh is managed at the model level, not per report.

Question 7 (Scenario-based)

You update a shared semantic model’s calculation logic. What is the impact?

A. Only new reports see the change
B. All connected reports reflect the change
C. Reports must be republished
D. Only the workspace owner sees updates

Correct Answer: B

Explanation:
All reports connected to a shared semantic model automatically reflect changes.


Question 8 (Single choice)

Which reusable asset BEST supports report creation without requiring Power BI Desktop modeling skills?

A. PBIX file
B. PBIT file
C. PBIDS file
D. Shared semantic model

Correct Answer: D

Explanation:
Users can build reports directly on shared semantic models using existing fields and measures.


Question 9 (Scenario-based)

You want to standardize report branding, page layout, and slicers across teams. What should you distribute?

A. PBIDS file
B. Shared semantic model
C. PBIT file
D. XMLA script

Correct Answer: C

Explanation:
PBIT files are ideal for:

  • Visual consistency
  • Reusable layouts
  • Standard filters and slicers

Question 10 (Multi-select)

Which are BEST practices when managing reusable Power BI assets? (Select all that apply.)

A. Store PBIT and PBIDS files in version control
B. Update shared semantic models directly in production without testing
C. Document reusable asset usage
D. Combine shared semantic models with deployment pipelines

Correct Answers: A, C, D

Explanation:
Best practices emphasize:

  • Governance
  • Controlled updates
  • Documentation

❌ Direct production edits increase risk.