Category: BI Administration

Configure and Update a Workspace App (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Manage and secure Power BI (15–20%)
--> Create and manage workspaces and assets
--> Configure and Update a Workspace App


Note that there are 10 practice questions (with answers and explanations) at the end of each topic. Also, there are 2 practice tests with 60 questions each available on the hub below all the exam topics.

Overview

In Power BI, a workspace app is a curated, read-only package of reports, dashboards, and related content that is published from a workspace and shared with a broader audience.
For the PL-300 exam, you are expected to understand when and why to use an app, how to configure it, and how to update it safely without disrupting consumers.


What Is a Workspace App?

A workspace app is:

  • A consumption layer built on top of a workspace
  • Designed for end users, not report developers
  • Read-only by default
  • Published and maintained by workspace Members or Admins

Apps help separate:

  • Development and collaboration (workspace)
  • Consumption and distribution (app)

This separation is a key design principle tested on the PL-300 exam.


Why Use a Workspace App?

Common reasons to publish an app include:

  • Providing a controlled, polished experience for business users
  • Preventing users from modifying reports or models
  • Distributing content to large audiences
  • Centralizing access to related dashboards and reports
  • Supporting versioned updates without breaking access

Apps are preferred over direct report sharing for enterprise-scale distribution.


Who Can Configure and Update an App?

Only the following workspace roles can manage apps:

  • Admin
  • Member

Contributors and Viewers cannot publish or update workspace apps.


Configuring a Workspace App

When configuring an app, you define how users experience and access content.

Key Configuration Areas

1. Content Selection

You can choose:

  • Which reports and dashboards appear
  • The order in which they appear
  • Which items are hidden from consumers

This allows you to publish only approved, production-ready assets.


2. Navigation and Layout

You can:

  • Reorder items
  • Group content logically
  • Create a clean navigation experience

This improves usability and storytelling, even though the app itself is read-only.


3. Audience Access

Apps support audience-based access, allowing you to:

  • Define different audiences
  • Control which content each audience can see
  • Apply security without duplicating reports

Audiences do not replace dataset security (such as RLS); they control visibility, not data filtering.
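
As a quick illustration of that difference, app audiences decide which reports a group can open, while RLS decides which rows they see once inside. A minimal RLS sketch in DAX, assuming a hypothetical DimRegion table and a hypothetical UserRegionSecurity mapping table (neither is part of this post's example):

  -- Static role filter defined on DimRegion: members of a "West Sales" role
  -- see only rows where the region is West.
  [Region] = "West"

  -- Dynamic role filter defined on UserRegionSecurity: each signed-in user
  -- sees only the regions mapped to their own account.
  [UserEmail] = USERPRINCIPALNAME ()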


4. Permissions

When publishing an app, you can:

  • Grant access to users or security groups
  • Allow or prevent users from resharing
  • Optionally allow users to connect to the underlying semantic model

Allowing semantic model access is important for:

  • Analyze in Excel
  • Power BI “Build” permissions
  • Self-service reporting scenarios

Updating a Workspace App

How Updates Work

Apps are not updated automatically when workspace content changes.

To update an app:

  1. Make changes in the workspace
  2. Select Update app
  3. Republish the app

This ensures:

  • Changes are intentional
  • Consumers are not impacted by unfinished work
  • Version control is maintained

What Happens to Users When an App Is Updated?

  • Users retain access
  • Bookmarks and links continue to work
  • Updated content appears after republishing
  • No re-sharing is required

This makes apps ideal for controlled release cycles.


App Updates vs Workspace Changes

Action | Workspace | App
Edit report | Yes | No
Test changes | Yes | No
Publish to users | No | Yes
Control visibility | Partial | Full

This distinction is frequently tested on the PL-300 exam.


Common Exam Scenarios

You may see questions such as:

  • When to use an app instead of sharing reports
  • Who can publish or update an app
  • How to limit what users see without duplicating content
  • How to update content without disrupting consumers

Key takeaway:
Apps are for distribution; workspaces are for collaboration.


Best Practices to Remember for the Exam

  • Use apps for broad distribution
  • Keep development content in the workspace
  • Use audiences to tailor visibility
  • Republish the app after changes
  • Assign Members or Admins to manage apps
  • Combine apps with RLS for secure data access

Summary

Configuring and updating a workspace app is a core Power BI governance skill. For the PL-300 exam, you must understand how apps:

  • Control access
  • Improve usability
  • Separate development from consumption
  • Enable safe, repeatable updates

Mastering this topic ensures you can design secure, scalable, and user-friendly Power BI solutions.


Practice Questions

Go to the practice questions for this topic.

Create and Configure a Workspace (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Manage and secure Power BI (15–20%)
--> Create and manage workspaces and assets
--> Create and Configure a Workspace


Note that there are 10 practice questions (with answers and explanations) at the end of each topic. Also, there are 2 practice tests with 60 questions each available on the hub below all the exam topics.

Exam Context

Power BI workspaces are a core governance and collaboration concept on the PL-300 exam. You are expected to understand how to create workspaces, configure settings, assign roles, and manage content in a secure and scalable way.


What Is a Power BI Workspace?

A workspace is a container in the Power BI service used to:

  • Store and manage reports, semantic models (datasets), dashboards, and dataflows
  • Control access and permissions
  • Support collaboration and deployment across teams

Workspaces are the foundation for app publishing, security, and content lifecycle management.


Creating a Workspace

How to Create a Workspace

In the Power BI Service:

  1. Select Workspaces
  2. Choose New workspace
  3. Provide:
    • Workspace name
    • Description (recommended)
    • Optional contact list
  4. Configure advanced settings (if applicable)
  5. Create the workspace

⚠️ Only users with appropriate Power BI licenses and tenant permissions can create workspaces.


Workspace Types and Capacity

Shared Capacity vs Premium Capacity

  • Shared capacity
    • Default for most workspaces
    • Limited performance and feature availability
  • Premium capacity (or Fabric capacity)
    • Required for features like:
      • Large semantic models
      • Incremental refresh (advanced scenarios)
      • Copilot
      • XMLA read/write
      • Deployment pipelines

Understanding which features require Premium is frequently tested on the exam.


Workspace Roles and Permissions

Workspace Roles

Power BI workspaces support four roles:

Role | Key Capabilities
Admin | Full control (settings, users, deletion)
Member | Create, edit, publish, and share content
Contributor | Create and modify content, but no user management
Viewer | Read-only access

Exam Tip

  • Admins manage access and settings
  • Members/Contributors build content
  • Viewers consume content only

Configuring Workspace Settings

Key workspace configuration areas include:

1. General Settings

  • Workspace name and description
  • Contact list (for support and ownership clarity)

2. Access Settings

  • Add users or security groups
  • Assign appropriate roles
  • Enforce least-privilege access

3. License and Capacity Settings

  • Assign workspace to Premium capacity
  • Required for advanced features and scalability

Managing Workspace Content

Within a workspace, users can manage:

  • Reports
  • Semantic models
  • Dashboards
  • Dataflows

Key actions include:

  • Publishing from Power BI Desktop
  • Updating datasets
  • Configuring refresh schedules
  • Setting dataset permissions
  • Endorsing content (Promoted or Certified)

Workspace Apps

Workspaces can be used to publish Power BI Apps, which:

  • Provide a curated, read-only experience for consumers
  • Separate development from consumption
  • Are commonly used for enterprise distribution

Exam Insight

  • Apps are published from workspaces
  • Viewers often access content through apps, not the workspace itself

Security and Governance Considerations

Workspaces play a central role in Power BI governance:

  • Centralized content ownership
  • Controlled collaboration
  • Reduced sharing sprawl
  • Support for deployment pipelines (Dev/Test/Prod)

Good workspace design aligns with:

  • Team boundaries
  • Business domains
  • Data ownership

Common Exam Scenarios

You may be asked to determine:

  • Which role a user needs to publish reports
  • When to use Premium capacity
  • How to restrict editing but allow viewing
  • Where apps are created and managed
  • How to organize content for multiple teams

Key Takeaways for PL-300

  • Workspaces are the primary container for Power BI content
  • Role assignment directly impacts security and collaboration
  • Premium capacity unlocks advanced enterprise features
  • Apps are built from workspaces, not standalone
  • Proper workspace configuration supports scalability and governance

Practice Questions

Go to the Practice Questions for this topic.

Implement Performance Improvements in Queries and Report Visuals (DP-600 Exam Prep)

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models (25-30%)
--> Optimize enterprise-scale semantic models
--> Implement performance improvements in queries and report visuals

Performance optimization is a critical skill for the Fabric Analytics Engineer. In enterprise-scale semantic models, poor query design, inefficient DAX, or overly complex visuals can significantly degrade report responsiveness and user experience. This exam section focuses on identifying performance bottlenecks and applying best practices to improve query execution, model efficiency, and report rendering.


1. Understand Where Performance Issues Occur

Performance problems typically fall into three layers:

a. Data & Storage Layer

  • Storage mode (Import, DirectQuery, Direct Lake, Composite)
  • Data source latency
  • Table size and cardinality
  • Partitioning and refresh strategies

b. Semantic Model & Query Layer

  • DAX calculation complexity
  • Relationships and filter propagation
  • Aggregation design
  • Use of calculation groups and measures

c. Report & Visual Layer

  • Number and type of visuals
  • Cross-filtering behavior
  • Visual-level queries
  • Use of slicers and filters

DP-600 questions often test your ability to identify the correct layer where optimization is needed.


2. Optimize Queries and Semantic Model Performance

a. Choose the Appropriate Storage Mode

  • Use Import for small-to-medium datasets requiring fast interactivity
  • Use Direct Lake for large OneLake Delta tables with high concurrency
  • Use Composite models to balance performance and real-time access
  • Avoid unnecessary DirectQuery when Import or Direct Lake is feasible

b. Reduce Data Volume

  • Remove unused columns and tables
  • Reduce column cardinality (e.g., avoid high-cardinality text columns)
  • Prefer surrogate keys over natural keys
  • Disable Auto Date/Time when not needed
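
For the last bullet above, the usual replacement for Auto Date/Time is a single explicit date table. A minimal sketch as a DAX calculated table, assuming a 2020–2030 calendar range (adjust to your data) and that you then mark it as the model's date table:

  Date =
  ADDCOLUMNS (
      CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2030, 12, 31 ) ),
      "Year", YEAR ( [Date] ),
      "Month Number", MONTH ( [Date] ),
      "Month", FORMAT ( [Date], "MMM yyyy" )
  )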

c. Optimize Relationships

  • Use single-direction relationships by default
  • Avoid unnecessary bidirectional filters
  • Ensure relationships follow a star schema
  • Avoid many-to-many relationships unless required

d. Use Aggregations

  • Create aggregation tables to pre-summarize large fact tables
  • Configure aggregations so queries hit the summary tables before scanning detailed data
  • Especially valuable in composite models
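
In practice, aggregation tables are usually materialized upstream (for example in a Lakehouse or Warehouse) and mapped with Manage aggregations. Purely to illustrate the shape of a pre-summarized table, here is a sketch as a DAX calculated table over hypothetical FactSales, DimDate, and DimProduct tables:

  SalesAgg =
  SUMMARIZECOLUMNS (
      DimDate[Year],
      DimProduct[Category],
      "Total Sales", SUM ( FactSales[SalesAmount] ),
      "Order Count", COUNTROWS ( FactSales )
  )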

3. Improve DAX Query Performance

a. Write Efficient DAX

  • Prefer measures over calculated columns
  • Use variables (VAR) to avoid repeated calculations
  • Minimize row context where possible
  • Avoid excessive iterators (SUMX, FILTER) over large tables
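
A minimal sketch of the variable pattern, assuming a hypothetical FactSales table with Quantity, UnitPrice, and UnitCost columns. Variables keep each expensive SUMX scan to a single evaluation instead of repeating it inside the final expression:

  Margin % =
  VAR TotalRevenue = SUMX ( FactSales, FactSales[Quantity] * FactSales[UnitPrice] )
  VAR TotalCost    = SUMX ( FactSales, FactSales[Quantity] * FactSales[UnitCost] )
  RETURN
      DIVIDE ( TotalRevenue - TotalCost, TotalRevenue )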

b. Use Filter Context Efficiently

  • Prefer CALCULATE with simple filters
  • Avoid complex nested FILTER expressions
  • Use KEEPFILTERS and REMOVEFILTERS intentionally
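
A short sketch of these filter-context patterns, assuming a hypothetical [Total Sales] measure and DimRegion table:

  -- Simple boolean filter instead of FILTER over the whole table
  West Sales = CALCULATE ( [Total Sales], DimRegion[Region] = "West" )

  -- KEEPFILTERS intersects with any existing Region selection instead of overriding it
  West Sales (Respect Slicers) =
  CALCULATE ( [Total Sales], KEEPFILTERS ( DimRegion[Region] = "West" ) )

  -- REMOVEFILTERS clears Region filters, e.g., as the denominator of a share-of-total
  Sales All Regions = CALCULATE ( [Total Sales], REMOVEFILTERS ( DimRegion ) )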

c. Avoid Expensive Patterns

  • Avoid EARLIER in favor of variables
  • Avoid dynamic table generation inside visuals
  • Minimize use of ALL when ALLSELECTED or scoped filters suffice
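
For the EARLIER point, a variable that captures the current row's value is both easier to read and easier for the engine to optimize. A sketch as a calculated column on a hypothetical FactSales table:

  -- Rank each row by SalesAmount without using EARLIER
  Sales Rank =
  VAR CurrentAmount = FactSales[SalesAmount]
  RETURN
      COUNTROWS ( FILTER ( FactSales, FactSales[SalesAmount] > CurrentAmount ) ) + 1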

4. Optimize Report Visual Performance

a. Reduce Visual Complexity

  • Limit the number of visuals per page
  • Avoid visuals that generate multiple queries (e.g., complex custom visuals)
  • Use summary visuals instead of detailed tables where possible

b. Control Interactions

  • Disable unnecessary visual interactions
  • Avoid excessive cross-highlighting
  • Use report-level filters instead of visual-level filters when possible

c. Optimize Slicers

  • Avoid slicers on high-cardinality columns
  • Use dropdown slicers instead of list slicers
  • Limit the number of slicers on a page

d. Prefer Measures Over Visual Calculations

  • Avoid implicit measures created by dragging numeric columns
  • Define explicit measures in the semantic model
  • Reuse measures across visuals to improve cache efficiency
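
A brief sketch of explicit, reusable measures, assuming a hypothetical FactSales table and a marked 'Date' table:

  Total Sales = SUM ( FactSales[SalesAmount] )

  Sales YoY % =
  VAR CurrentSales = [Total Sales]
  VAR PriorSales = CALCULATE ( [Total Sales], DATEADD ( 'Date'[Date], -1, YEAR ) )
  RETURN
      DIVIDE ( CurrentSales - PriorSales, PriorSales )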

5. Use Performance Analysis Tools

a. Performance Analyzer

  • Identify slow visuals
  • Measure DAX query duration
  • Distinguish between query time and visual rendering time

b. Query Diagnostics (Power BI Desktop)

  • Analyze backend query behavior
  • Identify expensive DirectQuery or Direct Lake operations

c. DAX Studio (Advanced)

  • Analyze query plans
  • Measure storage engine vs formula engine time
  • Identify inefficient DAX patterns
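
A typical workflow is to copy a slow visual's query from Performance Analyzer and replay it in DAX Studio with Server Timings enabled. The query below is a sketch of what such a captured query might look like, assuming a hypothetical [Total Sales] measure and 'Date' table:

  EVALUATE
  SUMMARIZECOLUMNS (
      'Date'[Year],
      "Total Sales", [Total Sales]
  )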

(You won’t be tested on tool UI details, but knowing when and why to use them is exam-relevant.)


6. Common DP-600 Exam Scenarios

You may be asked to:

  • Identify why a report is slow and choose the best optimization
  • Identify the bottleneck layer (model, query, or visual)
  • Select the most appropriate storage mode for performance
  • Choose the least disruptive, most effective optimization
  • Improve a slow DAX measure
  • Reduce visual rendering time without changing the data source
  • Optimize performance for enterprise-scale models
  • Apply enterprise-scale best practices, not just quick fixes

Key Exam Takeaways

  • Always optimize the model first, visuals second
  • Star schema + clean relationships = better performance
  • Efficient DAX matters more than clever DAX
  • Fewer visuals and interactions = faster reports
  • Aggregations and Direct Lake are key enterprise-scale tools

Practice Questions:

Go to the Practice Exam Questions for this topic.

Identify Use Cases for and Configure Large Semantic Model Storage Format (DP-600 Exam Prep)

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models (25-30%)
--> Design and build semantic models
--> Identify use cases for and configure large semantic model storage format

Overview

As datasets grow in size and complexity, standard semantic model storage can become a limiting factor. Microsoft Fabric (via Power BI semantic models) provides a Large Semantic Model storage format designed to support very large datasets, higher cardinality columns, and more demanding analytical workloads.

For the DP-600 exam, you are expected to understand when to use large semantic models, what trade-offs they introduce, and how to configure them correctly.


What Is the Large Semantic Model Storage Format?

The Large semantic model option changes how data is stored and managed internally by the VertiPaq engine to support:

  • Larger data volumes (beyond typical in-memory limits)
  • Higher column cardinality
  • Improved scalability for enterprise workloads

This setting is especially relevant in Fabric Lakehouse and Warehouse-backed semantic models where data size can grow rapidly.


Key Characteristics

  • Designed for enterprise-scale models
  • Supports very large tables and partitions
  • Optimized for memory management, not raw speed
  • Works best with Import mode or Direct Lake
  • Requires Premium capacity or Fabric capacity

Common Use Cases

1. Very Large Fact Tables

Use large semantic models when:

  • Fact tables contain hundreds of millions or billions of rows
  • Historical data is retained for many years
  • Aggregations alone are not sufficient

2. High-Cardinality Columns

Ideal when models include:

  • Transaction IDs
  • GUIDs
  • Timestamps at high granularity
  • User or device identifiers

Standard storage can struggle with memory pressure in these scenarios.


3. Enterprise-Wide Shared Semantic Models

Useful for:

  • Centralized datasets reused across many reports
  • Models serving hundreds or thousands of users
  • Organization-wide KPIs and analytics

4. Complex Models with Many Tables

When your model includes:

  • Numerous dimension tables
  • Multiple fact tables
  • Complex relationships

Large storage format improves stability and scalability.


5. Direct Lake Models Over OneLake

In Microsoft Fabric:

  • Large semantic models pair well with Direct Lake
  • Enable querying massive Delta tables without full data import
  • Reduce duplication of data between OneLake and the model

When NOT to Use Large Semantic Models

Avoid using large semantic models when:

  • The dataset is small or moderate in size
  • Performance is more critical than scalability
  • The model is used by a limited number of users
  • You rely heavily on fast interactive slicing

For smaller models, standard storage often provides better query performance.


Performance Trade-Offs

Aspect | Standard Storage | Large Storage
Memory efficiency | Moderate | High
Query speed | Faster | Slightly slower
Max model size | Limited | Much larger
Cardinality tolerance | Lower | Higher
Enterprise scalability | Limited | High

Exam Tip: Large semantic models favor scalability over speed.


How to Configure Large Semantic Model Storage Format

Prerequisites

  • Fabric capacity or Power BI Premium
  • Import or Direct Lake storage mode
  • Dataset ownership permissions

Configuration Steps

  1. Publish the model to a workspace backed by Premium or Fabric capacity
  2. In the Power BI service, open the semantic model's Settings
  3. Expand Large semantic model storage format
  4. Set the toggle to On and apply the change
  5. Optionally, set large storage format as the default for new semantic models in the workspace settings

Once enabled, the setting cannot be reverted to standard storage.


Important Configuration Considerations

  • Enable before model grows significantly
  • Combine with:
    • Partitioning
    • Aggregation tables
    • Proper star schema design
  • Monitor memory usage in capacity metrics
  • Plan refresh strategies carefully

Relationship to DP-600 Exam Topics

This section connects directly with:

  • Storage mode selection
  • Semantic model scalability
  • Direct Lake and OneLake integration
  • Enterprise model design decisions

Expect scenario-based questions asking you to choose the appropriate storage format based on:

  • Data volume
  • Cardinality
  • Performance requirements
  • Capacity constraints

Key Takeaways for the Exam

  • Large semantic models support very large, complex datasets
  • Use large semantic models for scale, not speed
  • Best for enterprise-scale analytics
  • Ideal for high-cardinality, high-volume, enterprise models
  • Trade performance for scalability
  • Require Premium or Fabric capacity
  • One-way configuration, so plan ahead
  • Often paired with Direct Lake

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect) — not just which one
  • Look for and understand the usage scenario of keywords in exam questions to guide you
  • Expect scenario-based questions rather than direct definitions

1. When should you enable the large semantic model storage format?

A. When the model is used by a small number of users
B. When the dataset contains very large fact tables and high-cardinality columns
C. When query performance must be maximized for small datasets
D. When using Import mode with small dimension tables

Correct Answer: B

Explanation:
Large semantic models are designed to handle very large datasets and high-cardinality columns. Small or simple models do not benefit and may experience reduced performance.


2. Which storage modes support large semantic model storage format?

A. DirectQuery only
B. Import and Direct Lake
C. Live connection only
D. All Power BI storage modes

Correct Answer: B

Explanation:
Large semantic model storage format is supported with Import and Direct Lake modes. It is not applicable to Live connections or DirectQuery-only scenarios.


3. What is a primary trade-off when using large semantic model storage format?

A. Increased query speed
B. Reduced memory usage with no downsides
C. Slightly slower query performance in exchange for scalability
D. Loss of DAX functionality

Correct Answer: C

Explanation:
Large semantic models favor scalability and memory efficiency over raw query speed, which can be slightly slower compared to standard storage.


4. Which scenario is the best candidate for a large semantic model?

A. A departmental sales report with 1 million rows
B. A personal Power BI report with static data
C. An enterprise model with billions of transaction records
D. A DirectQuery model against a SQL database

Correct Answer: C

Explanation:
Large semantic models are ideal for enterprise-scale datasets with very large row counts and complex analytics needs.


5. What happens after enabling large semantic model storage format?

A. It can be disabled at any time
B. The model automatically switches to DirectQuery
C. The setting cannot be reverted
D. Aggregation tables are created automatically

Correct Answer: C

Explanation:
Once enabled, large semantic model storage format cannot be turned off, making early planning important.


6. Which capacity requirement applies to large semantic models?

A. Power BI Free
B. Power BI Pro
C. Power BI Premium or Microsoft Fabric capacity
D. Any capacity type

Correct Answer: C

Explanation:
Large semantic models require Premium capacity or Fabric capacity due to their increased resource demands.


7. Why are high-cardinality columns a concern in standard semantic models?

A. They prevent relationships from being created
B. They increase memory usage and reduce compression efficiency
C. They disable aggregations
D. They are unsupported in Power BI

Correct Answer: B

Explanation:
High-cardinality columns reduce VertiPaq compression efficiency, increasing memory pressure—one reason to use large semantic model storage.


8. Which Fabric feature commonly pairs with large semantic models for massive datasets?

A. Power Query Dataflows
B. DirectQuery
C. Direct Lake over OneLake
D. Live connection to Excel

Correct Answer: C

Explanation:
Large semantic models pair well with Direct Lake, allowing efficient querying of large Delta tables stored in OneLake.


9. Which statement best describes large semantic model performance?

A. Always faster than standard storage
B. Optimized for small, interactive datasets
C. Optimized for scalability and memory efficiency
D. Not compatible with DAX calculations

Correct Answer: C

Explanation:
Large semantic models prioritize scalability and efficient memory management, not maximum query speed.


10. Which design practice should accompany large semantic models?

A. Flat denormalized tables only
B. Star schema, aggregations, and partitioning
C. Avoid relationships entirely
D. Disable incremental refresh

Correct Answer: B

Explanation:
Best practices such as star schema design, aggregation tables, and partitioning are critical for maintaining performance and manageability in large semantic models.


Implement Relationships, Such as Bridge Tables and Many-to-Many Relationships

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models (25-30%)
--> Design and build semantic models
--> Implement Relationships, Such as Bridge Tables and Many-to-Many Relationships

Why Relationships Matter in Semantic Models

In Microsoft Fabric and Power BI semantic models, relationships define how tables interact and how filters propagate across data. Well-designed relationships are critical for:

  • Accurate aggregations
  • Predictable filtering behavior
  • Correct DAX calculations
  • Optimal query performance

While one-to-many relationships are preferred, real-world data often requires handling many-to-many relationships using techniques such as bridge tables.


Common Relationship Types in Semantic Models

1. One-to-Many (Preferred)

  • One dimension row relates to many fact rows
  • Most common and performant relationship
  • Typical in star schemas

Example:

  • DimCustomer → FactSales

2. Many-to-Many

  • Multiple rows in one table relate to multiple rows in another
  • More complex filtering behavior
  • Can negatively impact performance if not modeled correctly

Example:

  • Customers associated with multiple regions
  • Products assigned to multiple categories

Understanding Many-to-Many Relationships

Native Many-to-Many Relationships

Power BI supports direct many-to-many relationships, but these should be used carefully.

Characteristics:

  • Cardinality: Many-to-many
  • Filters propagate ambiguously
  • DAX becomes harder to reason about

Exam Tip:
Direct many-to-many relationships are supported but not always recommended for complex models.


Bridge Tables (Best Practice)

A bridge table (also called a factless fact table) resolves many-to-many relationships by introducing an intermediate table.

What Is a Bridge Table?

A table that:

  • Contains keys from two related entities
  • Has no numeric measures
  • Enables controlled filtering paths

Example Scenario

Business case:
Products can belong to multiple categories.

Tables:

  • DimProduct (ProductID, Name)
  • DimCategory (CategoryID, CategoryName)
  • BridgeProductCategory (ProductID, CategoryID)

Relationships:

  • DimProduct → BridgeProductCategory (one-to-many)
  • DimCategory → BridgeProductCategory (one-to-many)

This converts a many-to-many relationship into two one-to-many relationships.
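
With single-direction relationships, selecting a category filters the bridge table but not DimProduct, so a measure may need to open that one hop explicitly. A sketch, assuming a hypothetical FactSales table related to DimProduct and a hypothetical [Total Sales] measure:

  -- CROSSFILTER enables the bridge-to-DimProduct direction only inside this measure,
  -- so the category selection reaches the fact table without a model-wide bidirectional filter.
  Sales by Category =
  CALCULATE (
      [Total Sales],
      CROSSFILTER ( BridgeProductCategory[ProductID], DimProduct[ProductID], BOTH )
  )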


Benefits of Using Bridge Tables

Benefit | Description
Predictable filtering | Clear filter paths
Better DAX control | Easier to write and debug measures
Improved performance | Avoids ambiguous joins
Scalability | Handles complex relationships cleanly

Filter Direction Considerations

Single vs Bidirectional Filters

  • Single direction (recommended):
    Filters flow from dimension → bridge → fact
  • Bidirectional:
    Can simplify some scenarios but increases ambiguity

Exam Guidance:

  • Use single-direction filters by default
  • Enable bidirectional filtering only when required and understood

Many-to-Many and DAX Implications

When working with many-to-many relationships:

  • Measures may return unexpected results
  • DISTINCTCOUNT is commonly required
  • Explicit filtering using DAX functions may be necessary

Common DAX patterns:

  • CALCULATE
  • TREATAS
  • CROSSFILTER (advanced)
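
As a sketch of the TREATAS pattern on this post's bridge example: the category selection already filters the bridge table, and TREATAS pushes the surviving ProductID values onto DimProduct as a virtual filter, which then flows to an assumed FactSales table through the normal DimProduct relationship ([Total Sales] is also assumed):

  Sales for Selected Categories =
  CALCULATE (
      [Total Sales],
      TREATAS ( VALUES ( BridgeProductCategory[ProductID] ), DimProduct[ProductID] )
  )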

Relationship Best Practices for DP-600

  • Favor star schemas with one-to-many relationships
  • Use bridge tables instead of direct many-to-many when possible
  • Avoid unnecessary bidirectional filters
  • Validate relationship cardinality and direction
  • Test measures under different filtering scenarios

Common Exam Scenarios

You may see questions like:

  • “How do you model a relationship where products belong to multiple categories?”
  • “What is the purpose of a bridge table?”
  • “What are the risks of many-to-many relationships?”

Correct answers typically emphasize:

  • Bridge tables
  • Controlled filter propagation
  • Avoiding ambiguous relationships

Star Schema vs Many-to-Many Models

Feature | Star Schema | Many-to-Many
Complexity | Low | Higher
Performance | Better | Lower
DAX simplicity | High | Lower
Use cases | Most analytics | Specialized scenarios

Summary

Implementing relationships correctly is foundational to building reliable semantic models in Microsoft Fabric:

  • One-to-many relationships are preferred
  • Many-to-many relationships should be handled carefully
  • Bridge tables provide a scalable, exam-recommended solution
  • Clear relationships lead to accurate analytics and simpler DAX

Exam Tip

If a question involves multiple entities relating to each other, or many-to-many relationships, the most likely answer usually includes using a “bridge table”.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect) — not just which one
  • Look for and understand the usage scenario of keywords in exam questions to guide you
  • Expect scenario-based questions rather than direct definitions

1. Which relationship type is generally preferred in Power BI semantic models?

A. Many-to-many
B. One-to-one
C. One-to-many
D. Bidirectional many-to-many

Correct Answer: C

Explanation:
One-to-many relationships provide predictable filter propagation, better performance, and simpler DAX calculations.


2. What is the primary purpose of a bridge table?

A. Store aggregated metrics
B. Normalize dimension attributes
C. Resolve many-to-many relationships
D. Improve data refresh performance

Correct Answer: C

Explanation:
Bridge tables convert many-to-many relationships into two one-to-many relationships, improving model clarity and control.


3. Which characteristic best describes a bridge table?

A. Contains numeric measures
B. Stores transactional data
C. Contains keys from related tables only
D. Is always filtered bidirectionally

Correct Answer: C

Explanation:
Bridge tables typically contain only keys (foreign keys) and no measures, enabling relationship resolution.


4. What is a common risk of using native many-to-many relationships directly?

A. They cannot be refreshed
B. They cause data duplication
C. They create ambiguous filter propagation
D. They are unsupported in Fabric

Correct Answer: C

Explanation:
Native many-to-many relationships can result in ambiguous filtering and unpredictable aggregation results.


5. In a bridge table scenario, how are relationships typically defined?

A. Many-to-many on both sides
B. One-to-one from both dimensions
C. One-to-many from each dimension to the bridge
D. Bidirectional many-to-one

Correct Answer: C

Explanation:
Each dimension connects to the bridge table using a one-to-many relationship.


6. When should bidirectional filtering be enabled?

A. Always, for simplicity
B. Only when necessary and well-understood
C. Only on fact tables
D. Never in semantic models

Correct Answer: B

Explanation:
Bidirectional filters can be useful but introduce complexity and ambiguity if misused.


7. Which scenario is best handled using a bridge table?

A. A customer has one address
B. A sale belongs to one product
C. A product belongs to multiple categories
D. A date table relates to a fact table

Correct Answer: C

Explanation:
Products belonging to multiple categories is a classic many-to-many scenario requiring a bridge table.


8. How does a properly designed bridge table affect DAX measures?

A. Makes measures harder to write
B. Requires custom SQL logic
C. Enables predictable filter behavior
D. Eliminates the need for CALCULATE

Correct Answer: C

Explanation:
Bridge tables create clear filter paths, making DAX behavior more predictable and reliable.


9. Which DAX function is commonly used to handle complex many-to-many filtering scenarios?

A. SUMX
B. RELATED
C. TREATAS
D. LOOKUPVALUE

Correct Answer: C

Explanation:
TREATAS is often used to apply filters across tables that are not directly related.


10. For DP-600 exam questions involving many-to-many relationships, which solution is typically preferred?

A. Direct many-to-many relationships
B. Denormalized fact tables
C. Bridge tables with one-to-many relationships
D. Duplicate dimension tables

Correct Answer: C

Explanation:
The exam emphasizes scalable, maintainable modeling practices — bridge tables are the recommended solution.


Choose a storage mode – additional information

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Implement and manage semantic models
--> Design and build semantic models
--> Choose a storage mode

This is supplemental information to what is included in the "Choose a storage mode" post.

DP-600 Cheat Sheet: Choosing a Storage Mode in Microsoft Fabric

Storage Mode Decision Matrix

Requirement / Scenario | Import | DirectQuery | Direct Lake | Composite
Best query performance | ✅ Excellent | ❌ Depends on source | ✅ Excellent | ✅ Very good
Near real-time data | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes
Large datasets (TB-scale) | ❌ Limited | ✅ Yes | ✅ Yes | ✅ Yes
Minimal refresh overhead | ❌ Requires refresh | ✅ No refresh | ✅ No refresh | ⚠ Partial
Uses OneLake Delta tables | ❌ Not required | ❌ Not required | ✅ Required | ✅ Optional
Full DAX & modeling features | ✅ Full support | ⚠ Limited | ⚠ Limited | ✅ Full
Calculated tables supported | ✅ Yes | ❌ No | ❌ No | ✅ Yes (Import tables only)
Lowest data duplication | ❌ High | ✅ None | ✅ None | ⚠ Mixed
Simple to manage | ✅ Yes | ⚠ Depends on source | ⚠ Fabric-specific | ❌ More complex

When to Choose Each Storage Mode

✅ Import Mode — Choose when:

  • Dataset fits comfortably in memory
  • You need complex DAX, calculated tables, or calculated columns
  • Performance is the top priority
  • Data freshness can be managed via scheduled refresh

Exam clue words: fastest, complex calculations, small to medium data


✅ DirectQuery — Choose when:

  • Data must always be current
  • Source system is highly optimized (SQL, Synapse, etc.)
  • Data volume is very large
  • You want zero data duplication

Exam clue words: real-time, source system, no refresh


✅ Direct Lake — Choose when:

  • Data is stored as Delta tables in OneLake
  • Dataset is large and frequently updated
  • You want Import-like performance without refresh
  • You’re working fully within Fabric

Exam clue words: OneLake, Delta, no refresh, Fabric-optimized


✅ Composite Model — Choose when:

  • You need flexibility across different tables
  • Fact tables are large and live (Direct Lake / DirectQuery)
  • Dimension tables are small and stable (Import)
  • You want performance and modeling flexibility

Exam clue words: hybrid, mix storage modes, dimension vs fact


Fast Exam Inclusion/Elimination Tips

  • Calculated tables required? → Import or Composite
  • OneLake + Delta tables? → Direct Lake
  • Real-time + external source? → DirectQuery
  • Best balance of flexibility and scale? → Composite

One-Sentence Exam Rule

If it’s in OneLake and too big to refresh, Direct Lake is usually the right answer.

Implement OneLake Integration for Eventhouse and Semantic Models

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Implement OneLake Integration for Eventhouse and Semantic Models

Microsoft Fabric is designed around the principle of OneLake as a single, unified data foundation. For the DP-600 exam, the topic “Implement OneLake integration for Eventhouse and semantic models” focuses on how both streaming data and analytical models can integrate with OneLake to enable reuse, governance, and multi-workload analytics.

This topic frequently appears in architecture and scenario-based questions, not as a pure feature checklist.

Why OneLake Integration Is Important

OneLake integration enables:

  • A single copy of data to support multiple analytics workloads
  • Reduced data duplication and ingestion complexity
  • Consistent governance and security
  • Seamless movement between real-time, batch, and BI analytics

For the exam, this is about understanding how data flows across Fabric experiences, not just where it lives.

OneLake Integration for Eventhouse

Eventhouse Recap

An Eventhouse is optimized for:

  • Real-time and near-real-time analytics
  • Streaming and telemetry data
  • High-ingestion rates
  • Querying with KQL (Kusto Query Language)

By default, Eventhouse is focused on real-time querying—but many solutions require more.

How Eventhouse Integrates with OneLake

When OneLake integration is implemented for an Eventhouse:

  • Streaming data ingested into the Eventhouse is persisted in OneLake
  • The same data becomes available for:
    • Lakehouses (Spark / SQL)
    • Warehouses (T-SQL reporting)
    • Notebooks
    • Semantic models
  • Real-time and historical analytics can coexist

This allows streaming data to participate in downstream analytics without re-ingestion.

Exam Signals for Eventhouse + OneLake

Look for phrases like:

  • Persist streaming data
  • Reuse event data
  • Combine real-time and batch analytics
  • Avoid duplicate ingestion pipelines

These strongly indicate OneLake integration for Eventhouse.

OneLake Integration for Semantic Models

Semantic Models Recap

A semantic model (Power BI dataset) defines:

  • Business-friendly tables and relationships
  • Measures and calculations (DAX)
  • Security rules (RLS, OLS)
  • A curated layer for reporting and analysis

Semantic models do not store raw data themselves—they rely on underlying data sources.

How Semantic Models Integrate with OneLake

Semantic models integrate with OneLake when their data source is:

  • A Lakehouse
  • A Warehouse
  • Eventhouse data persisted to OneLake

In these cases:

  • Data physically resides in OneLake
  • The semantic model acts as a logical abstraction
  • Multiple reports can reuse the same curated model

This supports the Fabric design pattern of shared semantic models over shared data.

Import vs DirectQuery (Exam-Relevant)

Semantic models can connect to OneLake-backed data using:

  • Import mode – best performance, scheduled refresh
  • DirectQuery – near-real-time access, source-dependent performance

DP-600 often tests your ability to choose the appropriate mode based on:

  • Data freshness requirements
  • Dataset size
  • Performance expectations

Eventhouse + OneLake + Semantic Models (End-to-End View)

A common DP-600 architecture looks like this:

  1. Streaming data is ingested into an Eventhouse
  2. Event data is persisted to OneLake
  3. Data is accessed by:
    • Lakehouse (for transformations)
    • Warehouse (for BI-friendly schemas)
  4. A shared semantic model is built on top
  5. Multiple Power BI reports reuse the model

This architecture supports real-time insights and historical analysis from the same data.

Governance and Security Benefits

OneLake integration ensures:

  • Centralized security and permissions
  • Sensitivity labels applied consistently
  • Reduced risk of shadow datasets
  • Clear lineage across streaming, batch, and BI layers

Exam questions often frame this as a governance or compliance requirement.

Common Exam Scenarios

You may be asked to:

  • Enable downstream analytics from streaming data
  • Avoid duplicating event ingestion
  • Support real-time dashboards and historical reports
  • Reuse a semantic model across teams
  • Align streaming analytics with enterprise BI

Always identify:

  • Where the data is persisted
  • Who needs access
  • How fresh the data must be
  • Which query language is required

Best Practices (DP-600 Focus)

  • Use Eventhouse for real-time ingestion and KQL analytics
  • Enable OneLake integration for reuse and persistence
  • Build shared semantic models on OneLake-backed data
  • Avoid multiple ingestion paths for the same data
  • Let OneLake act as the single source of truth

Key Takeaway
For the DP-600 exam, implementing OneLake integration for Eventhouse and semantic models is about enabling streaming data to flow seamlessly into governed, reusable analytical solutions. Eventhouse delivers real-time insights, OneLake provides a unified storage layer, and semantic models expose trusted, business-ready analytics—all without unnecessary duplication.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect) — not just which one
  • Look for and understand the usage scenario of keywords in exam questions to guide you
  • Expect scenario-based questions rather than direct definitions

And also keep in mind …

  • When you see streaming data + reuse + BI or ML, think:
    Eventhouse → OneLake → Lakehouse/Warehouse → Semantic model

1. What is the primary benefit of integrating an Eventhouse with OneLake?

A. Faster Power BI rendering
B. Ability to query event data using DAX
C. Persistence and reuse of streaming data across Fabric workloads
D. Elimination of real-time ingestion

Correct Answer: C

Explanation:
OneLake integration allows streaming data ingested into an Eventhouse to be persisted and reused by Lakehouses, Warehouses, notebooks, and semantic models—without re-ingestion.

2. Which query language is used for real-time analytics directly in an Eventhouse?

A. T-SQL
B. Spark SQL
C. DAX
D. KQL

Correct Answer: D

Explanation:
Eventhouses are built on KQL (Kusto Query Language), which is optimized for querying streaming and time-series data.

3. A team wants to combine real-time event data with historical batch data in Power BI. What is the BEST approach?

A. Build separate semantic models for each data source
B. Persist event data to OneLake and build a semantic model on top
C. Use DirectQuery to the Eventhouse only
D. Export event data to Excel

Correct Answer: B

Explanation:
Persisting event data to OneLake allows it to be combined with historical data and exposed through a single semantic model.

4. How do semantic models integrate with OneLake in Microsoft Fabric?

A. Semantic models store data directly in OneLake
B. Semantic models replace OneLake storage
C. Semantic models reference OneLake-backed sources such as Lakehouses and Warehouses
D. Semantic models only support streaming data

Correct Answer: C

Explanation:
Semantic models do not store raw data; they reference OneLake-backed sources like Lakehouses, Warehouses, or persisted Eventhouse data.

5. Which scenario MOST strongly indicates the need for OneLake integration for Eventhouse?

A. Ad hoc SQL reporting on static data
B. Monthly batch ETL processing
C. Reusing streaming data for BI, ML, and historical analysis
D. Creating a single real-time dashboard

Correct Answer: C

Explanation:
OneLake integration is most valuable when streaming data must be reused across multiple analytics workloads beyond real-time querying.

6. Which storage principle best describes the benefit of OneLake integration?

A. Multiple copies for better performance
B. One copy of data, many analytics experiences
C. Schema-on-read only
D. Real-time only storage

Correct Answer: B

Explanation:
Microsoft Fabric promotes the principle of storing one copy of data in OneLake and enabling multiple analytics experiences on top of it.

7. Which connectivity mode should be chosen for a semantic model when near-real-time access to event data is required?

A. Import
B. Cached mode
C. DirectQuery
D. Snapshot mode

Correct Answer: C

Explanation:
DirectQuery enables near-real-time access to the underlying data, making it suitable when freshness is critical.

8. What governance advantage does OneLake integration provide?

A. Automatic deletion of sensitive data
B. Centralized security and sensitivity labeling
C. Removal of workspace permissions
D. Unlimited data access

Correct Answer: B

Explanation:
OneLake integration supports centralized governance, including consistent permissions and sensitivity labels across streaming and batch data.

9. Which end-to-end architecture BEST supports both real-time dashboards and historical reporting?

A. Eventhouse only
B. Lakehouse only
C. Eventhouse with OneLake integration and a shared semantic model
D. Warehouse without ingestion

Correct Answer: C

Explanation:
This architecture enables real-time ingestion via Eventhouse, persistence in OneLake, and curated reporting through a shared semantic model.

10. On the DP-600 exam, which phrase is MOST likely to indicate the need for OneLake integration for Eventhouse?

A. “SQL-only reporting solution”
B. “Single-user analysis”
C. “Avoid duplicating streaming ingestion pipelines”
D. “Static reference data”

Correct Answer: C

Explanation:
Avoiding duplication and enabling reuse of streaming data across analytics workloads is a key signal for OneLake integration.

Choose Between a Lakehouse, Warehouse, or Eventhouse

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Choose Between a Lakehouse, Warehouse, or Eventhouse

One of the most important architectural decisions a Microsoft Fabric Analytics Engineer must make is selecting the right analytical store for a given workload. For the DP-600 exam, this topic tests your ability to choose between a Lakehouse, Warehouse, or Eventhouse based on data type, query patterns, latency requirements, and user personas.

Overview of the Three Options

Microsoft Fabric provides three primary analytics storage and query experiences:

Option | Primary Purpose
Lakehouse | Flexible analytics on files and tables using Spark and SQL
Warehouse | Enterprise-grade SQL analytics and BI reporting
Eventhouse | Real-time and near-real-time analytics on streaming data

Understanding why and when to use each is critical for DP-600 success.

Lakehouse

What Is a Lakehouse?

A Lakehouse combines the flexibility of a data lake with the structure of a data warehouse. Data is stored in Delta Lake format in OneLake and can be accessed using both Spark and SQL.

When to Choose a Lakehouse

Choose a Lakehouse when you need:

  • Flexible schema (schema-on-read or schema-on-write)
  • Support for data engineering and data science
  • Access to raw, curated, and enriched data
  • Spark-based transformations and notebooks
  • Mixed workloads (batch analytics, exploration, ML)

Key Characteristics

  • Supports files and tables
  • Uses Spark SQL and T-SQL endpoints
  • Ideal for ELT and advanced transformations
  • Easy integration with notebooks and pipelines

Exam signal words: flexible, raw data, Spark, data science, experimentation

Warehouse

What Is a Warehouse?

A Warehouse is a fully managed, SQL-first analytical store optimized for business intelligence and reporting. It enforces schema-on-write and provides a traditional relational experience.

When to Choose a Warehouse

Choose a Warehouse when you need:

  • Strong SQL-based analytics
  • High-performance reporting
  • Well-defined schemas and governance
  • Centralized enterprise BI
  • Compatibility with Power BI Import or DirectQuery

Key Characteristics

  • T-SQL only (no Spark)
  • Optimized for structured data
  • Best for star/snowflake schemas
  • Familiar experience for SQL developers

Exam signal words: enterprise BI, reporting, structured, governed, SQL-first

Eventhouse

What Is an Eventhouse?

An Eventhouse is optimized for real-time and streaming analytics, built on KQL (Kusto Query Language). It is designed to handle high-velocity event data.

When to Choose an Eventhouse

Choose an Eventhouse when you need:

  • Near-real-time or real-time analytics
  • Streaming data ingestion
  • Operational or telemetry analytics
  • Event-based dashboards and alerts

Key Characteristics

  • Uses KQL for querying
  • Integrates with Eventstreams
  • Handles massive ingestion rates
  • Optimized for time-series data

Exam signal words: streaming, telemetry, IoT, real-time, events

Choosing the Right Option (Exam-Critical)

The DP-600 exam often presents scenarios where multiple options could work, but only one best fits the requirements.

Decision Matrix

Requirement | Best Choice
Raw + curated data | Lakehouse
Complex Spark transformations | Lakehouse
Enterprise BI reporting | Warehouse
Strong governance and schemas | Warehouse
Streaming or telemetry data | Eventhouse
Near-real-time dashboards | Eventhouse
SQL-only users | Warehouse
Data science workloads | Lakehouse

Common Exam Scenarios

You may be asked to:

  • Choose a storage type for a new analytics solution
  • Migrate from traditional systems to Fabric
  • Support both engineers and analysts
  • Enable real-time monitoring
  • Balance governance with flexibility

Always identify:

  1. Data type (batch vs streaming)
  2. Latency requirements
  3. User personas
  4. Query language
  5. Governance needs

Best Practices to Remember

  • Use Lakehouse as a flexible foundation for analytics
  • Use Warehouse for polished, governed BI solutions
  • Use Eventhouse for real-time operational insights
  • Avoid forcing one option to handle all workloads
  • Let business requirements—not familiarity—drive the choice

Key Takeaway
For the DP-600 exam, choosing between a Lakehouse, Warehouse, or Eventhouse is about aligning data characteristics and access patterns with the right Fabric experience. Lakehouses provide flexibility, Warehouses deliver enterprise BI performance, and Eventhouses enable real-time analytics. The correct answer is almost always the one that best fits the scenario constraints.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect) — not just which one
  • Look for and understand the usage scenario of keywords in exam questions, with the following possible associations:
    • Spark, raw, experimentation → Lakehouse
    • Enterprise BI, governed, SQL reporting → Warehouse
    • Streaming, telemetry, real-time → Eventhouse
  • Expect scenario-based questions rather than direct definitions

1. Which Microsoft Fabric component is BEST suited for flexible analytics on both files and tables using Spark and SQL?

A. Warehouse
B. Eventhouse
C. Lakehouse
D. Semantic model

Correct Answer: C

Explanation:
A Lakehouse stores data in Delta format in OneLake and supports both Spark and SQL, making it ideal for flexible analytics across files and tables.

2. A team of data scientists needs to experiment with raw and curated data using notebooks. Which option should they choose?

A. Warehouse
B. Eventhouse
C. Semantic model
D. Lakehouse

Correct Answer: D

Explanation:
Lakehouses are designed for data engineering and data science workloads, offering Spark-based notebooks and flexible schema handling.

3. Which option is MOST appropriate for enterprise BI reporting with well-defined schemas and strong governance?

A. Lakehouse
B. Warehouse
C. Eventhouse
D. OneLake

Correct Answer: B

Explanation:
Warehouses are SQL-first, schema-on-write systems optimized for structured data, governance, and high-performance BI reporting.

4. A solution must support near-real-time analytics on streaming IoT telemetry data. Which Fabric component should be used?

A. Lakehouse
B. Warehouse
C. Eventhouse
D. Dataflow Gen2

Correct Answer: C

Explanation:
Eventhouses are optimized for high-velocity streaming data and real-time analytics using KQL.

5. Which query language is primarily used to analyze data in an Eventhouse?

A. T-SQL
B. Spark SQL
C. DAX
D. KQL

Correct Answer: D

Explanation:
Eventhouses are built on KQL (Kusto Query Language), which is optimized for querying event and time-series data.

6. A business analytics team requires fast dashboard performance and is familiar only with SQL. Which option best meets this requirement?

A. Lakehouse
B. Warehouse
C. Eventhouse
D. Spark notebook

Correct Answer: B

Explanation:
Warehouses provide a traditional SQL experience optimized for BI dashboards and reporting performance.

7. Which characteristic BEST distinguishes a Lakehouse from a Warehouse?

A. Lakehouses support Power BI
B. Warehouses store data in OneLake
C. Lakehouses support Spark-based processing
D. Warehouses cannot be governed

Correct Answer: C

Explanation:
Lakehouses uniquely support Spark-based processing, enabling advanced transformations and data science workloads.

8. A solution must store structured batch data and unstructured files in the same analytical store. Which option should be selected?

A. Warehouse
B. Eventhouse
C. Semantic model
D. Lakehouse

Correct Answer: D

Explanation:
Lakehouses support both structured tables and unstructured or semi-structured files within the same environment.

9. Which scenario MOST strongly indicates the need for an Eventhouse?

A. Monthly financial reporting
B. Slowly changing dimension modeling
C. Real-time operational monitoring
D. Ad hoc SQL analysis

Correct Answer: C

Explanation:
Eventhouses are designed for real-time analytics on streaming data, making them ideal for operational monitoring scenarios.

10. When choosing between a Lakehouse, Warehouse, or Eventhouse on the DP-600 exam, which factor is MOST important?

A. Personal familiarity with the tool
B. The default Fabric option
C. Data characteristics and latency requirements
D. Workspace size

Correct Answer: C

Explanation:
DP-600 emphasizes selecting the correct component based on data type (batch vs streaming), latency needs, user personas, and governance—not personal preference.

Ingest or Access Data as Needed

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Ingest or access data as needed

A core responsibility of a Microsoft Fabric Analytics Engineer is deciding how data should be brought into Fabric—or whether it should be brought in at all. For the DP-600 exam, this topic focuses on selecting the right ingestion or access pattern based on performance, freshness, cost, and governance requirements.

Ingest vs. Access: Key Concept

Before choosing a tool or method, understand the distinction:

  • Ingest data: Physically copy data into Fabric-managed storage (OneLake)
  • Access data: Query or reference data where it already lives, without copying

The exam frequently tests your ability to choose the most appropriate option—not just a working one.

Common Data Ingestion Methods in Microsoft Fabric

1. Dataflows Gen2

Best for:

  • Low-code ingestion and transformation
  • Reusable ingestion logic
  • Business-friendly data preparation

Key characteristics:

  • Uses Power Query Online
  • Supports scheduled refresh
  • Stores results in OneLake (Lakehouse or Warehouse)
  • Ideal for centralized, governed ingestion

Exam tip:
Use Dataflows Gen2 when reuse, transformation, and governance are priorities.

2. Data Pipelines (Copy Activity)

Best for:

  • High-volume or frequent ingestion
  • Orchestration across multiple sources
  • ELT-style workflows

Key characteristics:

  • Supports many source and sink types
  • Enables scheduling, dependencies, and retries
  • Minimal transformation (primarily copy)

Exam tip:
Choose pipelines when performance and orchestration matter more than transformation.

3. Notebooks (Spark)

Best for:

  • Complex transformations
  • Data science or advanced engineering
  • Custom ingestion logic

Key characteristics:

  • Full control using Spark (PySpark, Scala, SQL)
  • Suitable for large-scale processing
  • Writes directly to OneLake

Exam tip:
Notebooks are powerful but require engineering skills—don’t choose them for simple ingestion scenarios.

Accessing Data Without Ingesting

1. OneLake Shortcuts

Best for:

  • Avoiding data duplication
  • Reusing data across workspaces
  • Accessing external storage

Key characteristics:

  • Logical reference only (no copy)
  • Supports ADLS Gen2 and Amazon S3
  • Appears native in Lakehouse tables or files

Exam tip:
Shortcuts are often the best answer when the question mentions avoiding duplication or reducing storage cost.

2. DirectQuery

Best for:

  • Near-real-time data access
  • Large datasets that cannot be imported
  • Centralized source-of-truth systems

Key characteristics:

  • Queries run against the source system
  • Performance depends on source
  • Limited modeling flexibility compared to Import

Exam tip:
Expect trade-off questions involving DirectQuery vs. Import.

3. Real-Time Access (Eventstreams / KQL)

Best for:

  • Streaming and telemetry data
  • Operational and real-time analytics

Key characteristics:

  • Event-driven ingestion
  • Supports near-real-time dashboards
  • Often discovered via Real-Time hub

Exam tip:
Use real-time ingestion when freshness is measured in seconds, not hours.
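
To make the pattern concrete, the sketch below shows an upstream producer pushing telemetry to Azure Event Hubs, the kind of source a Fabric Eventstream can then consume for near-real-time analytics. It uses the azure-eventhub Python SDK; the connection string and hub name are placeholders, not values from this topic.

```python
# Hypothetical telemetry producer writing to Azure Event Hubs, which an
# Eventstream can ingest downstream. Connection details are placeholders.
import json
import time

from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>",  # placeholder
    eventhub_name="device-telemetry",           # hypothetical hub name
)

# Send one small batch of JSON telemetry events
batch = producer.create_batch()
batch.add(EventData(json.dumps({
    "deviceId": "sensor-01",
    "temperatureC": 21.7,
    "timestamp": time.time(),
})))
producer.send_batch(batch)
producer.close()
```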

Choosing the Right Approach (Exam-Critical)

You should be able to decide based on these factors:

Requirement | Best Option
Reusable ingestion logic | Dataflows Gen2
High-volume copy | Data pipelines
Complex transformations | Notebooks
Avoid duplication | OneLake shortcuts
Near real-time reporting | DirectQuery / Eventstreams
Governance and trust | Ingestion + endorsement

Governance and Security Considerations

  • Ingested data can inherit sensitivity labels
  • Access-based methods rely on source permissions
  • Workspace roles determine who can ingest or access data
  • Endorsed datasets should be preferred for reuse

DP-600 often frames ingestion questions within a governance context.

Common Exam Scenarios

You may be asked to:

  • Choose between ingesting data or accessing it directly
  • Identify when shortcuts are preferable to ingestion
  • Select the right tool for a specific ingestion pattern
  • Balance data freshness vs. performance
  • Reduce duplication across workspaces

Best Practices to Remember

  • Ingest when performance and modeling flexibility are required
  • Access when freshness, cost, or duplication is a concern
  • Centralize ingestion logic for reuse
  • Prefer Fabric-native patterns over external tools
  • Let business requirements drive architectural decisions

Key Takeaway
For the DP-600 exam, “Ingest or access data as needed” is about making intentional, informed choices. Microsoft Fabric provides multiple ways to bring data into analytics solutions, and the correct approach depends on scale, freshness, reuse, governance, and cost. Understanding why one method is better than another is far more important than memorizing features.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Watch for scenario keywords in exam questions (for example, low code/no code, large dataset, high-volume data, reuse, complex transformations) and understand what each implies
  • Expect scenario-based questions rather than direct definitions

Also, keep in mind that …

  • DP-600 questions often include multiple valid options, but only one that best aligns with the scenario’s constraints. Always identify and consider factors such as:
    • Data volume
    • Freshness requirements
    • Reuse and duplication concerns
    • Transformation complexity

1. What is the primary difference between ingesting data and accessing data in Microsoft Fabric?

A. Ingested data cannot be secured
B. Accessed data is always slower
C. Ingesting copies data into OneLake, while accessing queries data in place
D. Accessed data requires a gateway

Correct Answer: C

Explanation:
Ingestion physically copies data into Fabric-managed storage (OneLake), while access-based approaches query or reference data where it already exists.

2. Which option is BEST when the goal is to avoid duplicating large datasets across multiple workspaces?

A. Import mode
B. Dataflows Gen2
C. OneLake shortcuts
D. Notebooks

Correct Answer: C

Explanation:
OneLake shortcuts allow data to be referenced without copying it, making them ideal for reuse and cost control.

3. A team needs reusable, low-code ingestion logic with scheduled refresh. Which Fabric feature should they use?

A. Spark notebooks
B. Data pipelines
C. Dataflows Gen2
D. DirectQuery

Correct Answer: C

Explanation:
Dataflows Gen2 provide Power Query–based ingestion with refresh scheduling and reuse across Fabric items.

4. Which ingestion method is MOST appropriate for complex transformations requiring custom logic?

A. Dataflows Gen2
B. Copy activity in pipelines
C. OneLake shortcuts
D. Spark notebooks

Correct Answer: D

Explanation:
Spark notebooks offer full control over transformation logic and are suited for complex, large-scale processing.

5. When should DirectQuery be preferred over Import mode?

A. When the dataset is small
B. When data freshness is critical
C. When transformations are complex
D. When performance must be maximized

Correct Answer: B

Explanation:
DirectQuery is preferred when near-real-time access to data is required, even though performance depends on the source system.

6. Which Fabric component is BEST suited for orchestrating high-volume data ingestion with dependencies and retries?

A. Dataflows Gen2
B. Data pipelines
C. Semantic models
D. Power BI Desktop

Correct Answer: B

Explanation:
Data pipelines are designed for orchestration, handling large volumes of data, scheduling, and dependency management.

7. A dataset is queried infrequently but must support advanced modeling features. Which approach is most appropriate?

A. DirectQuery
B. Access via shortcut
C. Import into OneLake
D. Eventstream ingestion

Correct Answer: C

Explanation:
Import mode supports full modeling capabilities and high query performance, making it suitable even for infrequently accessed data.

8. Which scenario best fits the use of real-time ingestion methods such as Eventstreams or KQL databases?

A. Monthly financial reporting
B. Static reference data
C. IoT telemetry and operational monitoring
D. Slowly changing dimensions

Correct Answer: C

Explanation:
Real-time ingestion is designed for continuous, event-driven data such as IoT telemetry and operational metrics.

9. Why might ingesting data be preferred over accessing it directly?

A. It always reduces storage costs
B. It eliminates the need for security
C. It improves performance and modeling flexibility
D. It avoids data refresh

Correct Answer: C

Explanation:
Ingesting data into OneLake enables faster query performance and full support for modeling features.

10. Which factor is MOST important when deciding between ingesting data and accessing it?

A. The color of the dashboard
B. The number of reports
C. Business requirements such as freshness, scale, and governance
D. The Fabric region

Correct Answer: C

Explanation:
The decision to ingest or access data should be driven by business needs, including performance, freshness, cost, and governance—not technical convenience alone.

Discover Data by Using OneLake Catalog and Real-Time Hub

This post is a part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub; and this topic falls under these sections: 
Prepare data
--> Get data
--> Discover data by using OneLake catalog and Real-Time hub

Discovering existing data assets efficiently is a critical capability for a Microsoft Fabric Analytics Engineer. For the DP-600 exam, this topic emphasizes how to find, understand, and evaluate data sources using Fabric’s built-in discovery experiences: OneLake catalog and Real-Time hub.

Purpose of Data Discovery in Microsoft Fabric

In large Fabric environments, data already exists across:

  • Lakehouses
  • Warehouses
  • Semantic models
  • Streaming and event-based sources

The goal of data discovery is to:

  • Avoid duplicate ingestion
  • Promote reuse of trusted data
  • Understand data ownership, sensitivity, and freshness
  • Accelerate analytics development

OneLake Catalog

What Is the OneLake Catalog?

The OneLake catalog is a centralized metadata and discovery experience that allows users to browse and search data assets stored in OneLake, Fabric’s unified data lake.

It provides visibility into:

  • Lakehouses and Warehouses
  • Tables, views, and files
  • Shortcuts to external data
  • Endorsement and sensitivity metadata

Key Capabilities of the OneLake Catalog

For the exam, you should understand that the OneLake catalog enables users to:

  • Search and filter data assets across workspaces
  • View schema details (columns, data types)
  • Identify endorsed (Certified or Promoted) assets
  • See sensitivity labels applied to data
  • Discover data ownership and location
  • Reuse existing data rather than re-ingesting it

This supports both governance and efficiency.
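
As a practical follow-up, once a candidate table has been located in the OneLake catalog, you might confirm its schema and contents from a notebook before reusing it. This is only a sketch: the table name is a hypothetical placeholder, and it assumes the Lakehouse containing the table is attached to the notebook.

```python
# Quick verification of a table discovered via the OneLake catalog.
# "sales_clean" is a hypothetical table name.

# Column names and data types (should match the schema shown in the catalog)
spark.sql("DESCRIBE TABLE sales_clean").show(truncate=False)

# Small sample to confirm the data fits the intended use before reusing it
spark.read.table("sales_clean").limit(5).show()
```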

Endorsement and Trust Signals

Within the OneLake catalog, users can quickly identify:

  • Certified items (approved and governed)
  • Promoted items (recommended but not formally certified)

These trust signals are important in exam scenarios that ask how to guide users toward reliable data sources.

Shortcuts and External Data

The catalog also exposes OneLake shortcuts, which allow data from:

  • Azure Data Lake Storage Gen2
  • Amazon S3
  • Other Fabric workspaces

to appear as native OneLake data without duplication. This is a key discovery mechanism tested in DP-600.

Real-Time Hub

What Is the Real-Time Hub?

The Real-Time hub is a discovery experience focused on streaming and event-driven data sources in Microsoft Fabric.

It centralizes access to:

  • Eventstreams
  • Azure Event Hubs
  • Azure IoT Hub
  • Azure Data Explorer (KQL databases)
  • Other real-time data producers

Key Capabilities of the Real-Time Hub

For exam purposes, understand that the Real-Time hub allows users to:

  • Discover available streaming data sources
  • Preview live event data
  • Subscribe to or reuse existing event streams
  • Understand data velocity and schema
  • Reduce duplication of real-time ingestion pipelines

This is especially important in architectures involving operational analytics or near real-time reporting.

OneLake Catalog vs. Real-Time Hub

Feature | OneLake Catalog | Real-Time Hub
Primary focus | Stored data | Streaming / event data
Data types | Tables, files, shortcuts | Events, streams, telemetry
Use case | Analytical and historical data | Real-time and operational analytics
Governance signals | Endorsement, sensitivity | Ownership, stream metadata

Understanding when to use each is a common exam theme.

Security and Governance Considerations

Data discovery respects Fabric security:

  • Users only see items they have permission to access
  • Sensitivity labels are visible in discovery views
  • Workspace roles control discovery depth

This ensures compliance while still promoting self-service analytics.

Exam-Relevant Scenarios

On the DP-600 exam, you may be asked to:

  • Identify how users can discover existing datasets before ingesting new data
  • Choose between OneLake catalog and Real-Time hub based on data type
  • Locate endorsed or certified data assets
  • Reduce duplication by reusing existing tables or streams
  • Enable self-service discovery while maintaining governance

Best Practices (Aligned to DP-600)

  • Use OneLake catalog first before creating new data connections
  • Encourage use of endorsed and certified assets
  • Use Real-Time hub to discover existing event streams
  • Leverage shortcuts to reuse data without copying
  • Combine discovery with proper labeling and endorsement

Key Takeaway
For the DP-600 exam, discovering data in Microsoft Fabric is about visibility, trust, and reuse. The OneLake catalog helps users find and understand stored analytical data, while the Real-Time hub enables discovery of live streaming sources. Together, they reduce redundancy, improve governance, and accelerate analytics development.

Practice Questions:

Here are 10 questions to test and help solidify your learning and knowledge. As you review these and other questions in your preparation, make sure to …

  • Identify and understand why an option is correct (or incorrect), not just which one
  • Pay close attention to when to use the OneLake catalog vs. the Real-Time hub
  • Watch for scenario keywords in exam questions (for example, discover, reuse, streaming, endorsed, shortcut) and understand what each implies
  • Expect scenario-based questions that test architecture choices rather than direct definitions

1. What is the primary purpose of the OneLake catalog in Microsoft Fabric?

A. To ingest streaming data
B. To schedule data refreshes
C. To discover and explore data stored in OneLake
D. To manage workspace permissions

Correct Answer: C

Explanation:
The OneLake catalog is a centralized discovery and metadata experience that helps users find, understand, and reuse data stored in OneLake across Fabric workspaces.

2. Which type of data is the Real-Time hub primarily designed to help users discover?

A. Historical data in Lakehouses
B. Structured warehouse tables
C. Streaming and event-driven data sources
D. Power BI semantic models

Correct Answer: C

Explanation:
The Real-Time hub focuses on streaming and event-based data such as Eventstreams, Azure Event Hubs, IoT Hub, and KQL databases.

3. A user wants to avoid re-ingesting data that already exists in another workspace. Which Fabric feature best supports this goal?

A. Data pipelines
B. OneLake shortcuts
C. Import mode
D. DirectQuery

Correct Answer: B

Explanation:
OneLake shortcuts allow data stored externally or in another workspace to appear as native OneLake data without physically copying it.

4. Which metadata element in the OneLake catalog helps users identify trusted and approved data assets?

A. Workspace name
B. File size
C. Endorsement status
D. Refresh schedule

Correct Answer: C

Explanation:
Endorsements (Promoted and Certified) act as trust signals, helping users quickly identify reliable and governed data assets.

5. Which statement about data visibility in the OneLake catalog is true?

A. All users can see all data across the tenant
B. Only workspace admins can see catalog entries
C. Users can only see items they have permission to access
D. Sensitivity labels hide data from discovery

Correct Answer: C

Explanation:
The OneLake catalog respects Fabric security boundaries—users only see data assets they are authorized to access.

6. A team is building a real-time dashboard and wants to see what streaming data already exists. Where should they look first?

A. OneLake catalog
B. Power BI Service
C. Dataflows Gen2
D. Real-Time hub

Correct Answer: D

Explanation:
The Real-Time hub centralizes discovery of streaming and event-based data sources, making it the best starting point for real-time analytics scenarios.

7. Which of the following items is most likely discovered through the Real-Time hub?

A. Parquet files in OneLake
B. Lakehouse Delta tables
C. Azure Event Hub streams
D. Warehouse SQL views

Correct Answer: C

Explanation:
Azure Event Hubs and other event-driven sources are exposed through the Real-Time hub, not the OneLake catalog.

8. What advantage does data discovery provide in large Fabric environments?

A. Faster Power BI rendering
B. Reduced licensing costs
C. Reduced data duplication and improved reuse
D. Automatic data modeling

Correct Answer: C

Explanation:
Discovering existing data assets helps teams reuse trusted data, reducing redundant ingestion and improving governance.

9. Which information is commonly visible when browsing an asset in the OneLake catalog?

A. User passwords
B. Column-level schema details
C. Tenant-wide permissions
D. Gateway configuration

Correct Answer: B

Explanation:
The OneLake catalog exposes metadata such as table schemas, column names, and data types to help users evaluate suitability before use.

10. Which scenario best demonstrates correct use of OneLake catalog and Real-Time hub together?

A. Using DirectQuery for all reports
B. Creating a new pipeline for every dataset
C. Discovering historical data in OneLake and live events in Real-Time hub
D. Applying sensitivity labels to dashboards

Correct Answer: C

Explanation:
OneLake catalog is optimized for discovering stored analytical data, while Real-Time hub is designed for discovering live streaming sources. Using both ensures comprehensive data discovery.