
This post is part of the DP-600: Implementing Analytics Solutions Using Microsoft Fabric Exam Prep Hub. Bookmark this hub and use it as a guide to help you prepare for the DP-600 certification exam.
This is a practice exam for the DP-600: Implementing Analytics Solutions Using Microsoft Fabric certification exam.
– It contains 60 questions of varying type and difficulty.
– The answer key is located at the end of the exam, after all the questions. We recommend attempting each question before checking the answers.
– Upon successful completion of the official certification exam, you earn the Fabric Analytics Engineer Associate certification.
Good luck!
SECTION A – Prepare Data (Questions 1–24)
Question 1 (Single Choice)
You need to ingest CSV files from an Azure Data Lake Gen2 account into a Lakehouse with minimal transformation. Which option is most appropriate?
A. Power BI Desktop
B. Dataflow Gen2
C. Warehouse COPY INTO
D. Spark notebook
Question 2 (Multi-Select – Choose TWO)
Which Fabric components support both ingestion and transformation of data?
A. Dataflow Gen2
B. Eventhouse
C. Spark notebooks
D. SQL analytics endpoint
E. Power BI Desktop
Question 3 (Scenario – Single Choice)
Your team wants to browse datasets across workspaces and understand lineage and ownership before using them. Which feature should you use?
A. Deployment pipelines
B. OneLake catalog
C. Power BI lineage view
D. XMLA endpoint
Question 4 (Single Choice)
Which statement best describes Direct Lake?
A. Data is cached in VertiPaq during refresh
B. Queries run directly against Delta tables in OneLake
C. Queries always fall back to DirectQuery
D. Requires incremental refresh
Question 5 (Matching)
Match the Fabric item to its primary use case:
| Item | Use Case |
|---|---|
| 1. Lakehouse | A. High-concurrency SQL analytics |
| 2. Warehouse | B. Event streaming and time-series |
| 3. Eventhouse | C. Open data storage + Spark |
Question 6 (Single Choice)
Which ingestion option is best for append-only, high-volume streaming telemetry?
A. Dataflow Gen2
B. Eventstream to Eventhouse
C. Warehouse COPY INTO
D. Power Query
Question 7 (Scenario – Single Choice)
You want to join two large datasets without materializing the result. Which approach is most appropriate?
A. Power Query merge
B. SQL VIEW
C. Calculated table in DAX
D. Dataflow Gen2 output table
Question 8 (Multi-Select – Choose TWO)
Which actions help reduce data duplication in Fabric?
A. Using shortcuts in OneLake
B. Creating multiple Lakehouses per workspace
C. Sharing semantic models
D. Importing the same data into multiple models
Question 9 (Single Choice)
Which column type is required for incremental refresh?
A. Integer
B. Text
C. Boolean
D. Date/DateTime
Question 10 (Scenario – Single Choice)
Your dataset contains nulls in a numeric column used for aggregation. What is the best place to handle this?
A. DAX measure
B. Power Query
C. Report visual
D. RLS filter
Question 11 (Single Choice)
Which Power Query transformation is foldable in most SQL sources?
A. Adding an index column
B. Filtering rows
C. Custom M function
D. Merging with fuzzy match
Question 12 (Multi-Select – Choose TWO)
Which scenarios justify denormalizing data?
A. Star schema reporting
B. OLTP transactional workloads
C. High-performance analytics
D. Reducing DAX complexity
Question 13 (Single Choice)
Which operation increases cardinality the most?
A. Removing unused columns
B. Splitting a text column
C. Converting text to integer keys
D. Aggregating rows
Question 14 (Scenario – Single Choice)
You need reusable transformations across multiple datasets. What should you create?
A. Calculated columns
B. Shared semantic model
C. Dataflow Gen2
D. Power BI template
Question 15 (Fill in the Blank)
The two required Power Query parameters for incremental refresh are __________ and __________.
Question 16 (Single Choice)
Which Fabric feature allows querying data without copying it into a workspace?
A. Shortcut
B. Snapshot
C. Deployment pipeline
D. Calculation group
Question 17 (Scenario – Single Choice)
Your SQL query performance degrades after adding many joins. What is the most likely cause?
A. Low concurrency
B. Snowflake schema
C. Too many measures
D. Too many visuals
Question 18 (Multi-Select – Choose TWO)
Which tools can be used to query Lakehouse data?
A. Spark SQL
B. T-SQL via SQL endpoint
C. KQL
D. DAX Studio
Question 19 (Single Choice)
Which language is used primarily with Eventhouse?
A. SQL
B. Python
C. KQL
D. DAX
Question 20 (Scenario – Single Choice)
You want to analyze slowly changing dimensions historically. Which approach is best?
A. Overwrite rows
B. Incremental refresh
C. Type 2 dimension design
D. Dynamic RLS
Question 21 (Single Choice)
Which feature helps understand downstream dependencies?
A. Impact analysis
B. Endorsement
C. Sensitivity labels
D. Git integration
Question 22 (Multi-Select – Choose TWO)
Which options support data aggregation before reporting?
A. SQL views
B. DAX calculated columns
C. Power Query group by
D. Report-level filters
Question 23 (Single Choice)
Which scenario best fits a Warehouse?
A. Machine learning experimentation
B. Real-time telemetry
C. High-concurrency BI queries
D. File-based storage only
Question 24 (Scenario – Single Choice)
You want to reuse report layouts without embedding credentials. What should you use?
A. PBIX
B. PBIP
C. PBIT
D. PBIDS
SECTION B – Implement & Manage Semantic Models (Questions 25–48)
Question 25 (Single Choice)
Which schema is recommended for semantic models?
A. Snowflake
B. Star
C. Fully normalized
D. Graph
Question 26 (Scenario – Single Choice)
You have a many-to-many relationship between Sales and Promotions. What should you implement?
A. Bi-directional filters
B. Bridge table
C. Calculated column
D. Duplicate dimension
Question 27 (Multi-Select – Choose TWO)
Which storage modes support composite models?
A. Import
B. DirectQuery
C. Direct Lake
D. Live connection
Question 28 (Single Choice)
What is the primary purpose of calculation groups?
A. Reduce model size
B. Replace measures
C. Apply reusable calculations
D. Improve refresh speed
Question 29 (Scenario – Single Choice)
You need users to switch between metrics dynamically in visuals. What should you use?
A. Bookmarks
B. Calculation groups
C. Field parameters
D. Perspectives
Question 30 (Single Choice)
Which DAX pattern generally performs best?
A. SUMX(FactTable, [Column])
B. FILTER + CALCULATE
C. Simple aggregations
D. Nested iterators
Question 31 (Multi-Select – Choose TWO)
Which actions improve DAX performance?
A. Use variables
B. Increase cardinality
C. Avoid unnecessary iterators
D. Use bi-directional filters everywhere
Question 32 (Scenario – Single Choice)
Your model exceeds memory limits but queries are fast. What should you configure?
A. Incremental refresh
B. Large semantic model storage
C. DirectQuery fallback
D. Composite model
Question 33 (Single Choice)
Which tool is best for diagnosing slow visuals?
A. Tabular Editor
B. Performance Analyzer
C. Fabric Monitor
D. SQL Profiler
Question 34 (Scenario – Single Choice)
A Direct Lake model fails to read data. What happens next if fallback is enabled?
A. Query fails
B. Switches to Import
C. Switches to DirectQuery
D. Rebuilds partitions
Question 35 (Single Choice)
Which feature enables version control for Power BI artifacts?
A. Deployment pipelines
B. Git integration
C. XMLA endpoint
D. Endorsements
Question 36 (Matching)
Match the DAX function type to its example:
| Type | Function |
|---|---|
| 1. Iterator | A. CALCULATE |
| 2. Filter modifier | B. SUMX |
| 3. Information | C. ISFILTERED |
Question 37 (Scenario – Single Choice)
You want recent data queried in real time and historical data cached. What should you use?
A. Import only
B. DirectQuery only
C. Hybrid table
D. Calculated table
Question 38 (Single Choice)
Which relationship direction is recommended by default?
A. Both
B. Single
C. None
D. Many-to-many
Question 39 (Multi-Select – Choose TWO)
Which features help enterprise-scale governance?
A. Sensitivity labels
B. Endorsements
C. Personal bookmarks
D. Private datasets
Question 40 (Scenario – Single Choice)
Which setting most affects model refresh duration?
A. Number of measures
B. Incremental refresh policy
C. Number of visuals
D. Report theme
Question 41 (Single Choice)
What does XMLA primarily enable?
A. Real-time streaming
B. Advanced model management
C. Data ingestion
D. Visualization authoring
Question 42 (Fill in the Blank)
Direct Lake reads data directly from __________ stored in __________.
Question 43 (Scenario – Single Choice)
Your composite model uses both Import and DirectQuery. What is this called?
A. Live model
B. Hybrid model
C. Large model
D. Calculated model
Question 44 (Single Choice)
Which optimization reduces relationship ambiguity?
A. Snowflake schema
B. Bridge tables
C. Bidirectional filters
D. Hidden columns
Question 45 (Scenario – Single Choice)
Which feature allows formatting measures dynamically (e.g., %, currency)?
A. Perspectives
B. Field parameters
C. Dynamic format strings
D. Aggregation tables
Question 46 (Multi-Select – Choose TWO)
Which features support reuse across reports?
A. Shared semantic models
B. PBIT files
C. PBIX imports
D. Report-level measures
Question 47 (Single Choice)
Which modeling choice most improves query speed?
A. Snowflake schema
B. High-cardinality columns
C. Star schema
D. Many calculated columns
Question 48 (Scenario – Single Choice)
You want to prevent unnecessary refreshes when data hasn’t changed. What should you enable?
A. Large model
B. Detect data changes
C. Direct Lake fallback
D. XMLA read-write
SECTION C – Maintain & Govern (Questions 49–60)
Question 49 (Single Choice)
Which role provides full control over a Fabric workspace?
A. Viewer
B. Contributor
C. Admin
D. Member
Question 50 (Multi-Select – Choose TWO)
Which security mechanisms are item-level?
A. RLS
B. CLS
C. Workspace roles
D. Object-level security
Question 51 (Scenario – Single Choice)
You want to mark a dataset as trusted. What should you apply?
A. Sensitivity label
B. Endorsement
C. Certification
D. RLS
Question 52 (Single Choice)
Which pipeline stage is typically used for validation?
A. Development
B. Test
C. Production
D. Sandbox
Question 53 (Single Choice)
Which access control restricts specific tables or columns?
A. Workspace role
B. RLS
C. Object-level security
D. Sensitivity label
Question 54 (Scenario – Single Choice)
Which feature allows reviewing downstream report impact before changes?
A. Lineage view
B. Impact analysis
C. Git diff
D. Performance Analyzer
Question 55 (Multi-Select – Choose TWO)
Which actions help enforce data governance?
A. Sensitivity labels
B. Certified datasets
C. Personal workspaces
D. Shared capacities
Question 56 (Single Choice)
Which permission is required to deploy content via pipelines?
A. Viewer
B. Contributor
C. Admin
D. Member
Question 57 (Fill in the Blank)
Row-level security filters data at the __________ level.
Question 58 (Scenario – Single Choice)
You want Power BI Desktop artifacts to integrate cleanly with Git. What format should you use?
A. PBIX
B. PBIP
C. PBIT
D. PBIDS
Question 59 (Single Choice)
Which governance feature integrates with Microsoft Purview?
A. Endorsements
B. Sensitivity labels
C. Deployment pipelines
D. Field parameters
Question 60 (Scenario – Single Choice)
Which role can certify a dataset?
A. Viewer
B. Contributor
C. Dataset owner or admin
D. Any workspace member
DP-600 PRACTICE EXAM
FULL ANSWER KEY & EXPLANATIONS
SECTION A – Prepare Data (1–24)
Question 1
✅ Correct Answer: B – Dataflow Gen2
Explanation:
Dataflow Gen2 is designed for low-code ingestion and transformation from files, including CSVs, into Fabric Lakehouses.
Why others are wrong:
- A: Power BI Desktop is not an ingestion tool for Lakehouses
- C: COPY INTO loads data into a Warehouse, not a Lakehouse
- D: Spark is overkill for simple ingestion
Question 2
✅ Correct Answers: A and C
Explanation:
- Dataflow Gen2 supports ingestion + transformation via Power Query
- Spark notebooks support ingestion and complex transformations
Why others are wrong:
- B: Eventhouse is optimized for streaming analytics
- D: SQL endpoint is query-only
- E: Power BI Desktop doesn’t ingest into Fabric storage
Question 3
✅ Correct Answer: B – OneLake catalog
Explanation:
The OneLake catalog allows discovery, metadata browsing, and cross-workspace visibility.
Why others are wrong:
- A: Pipelines manage deployment
- C: Lineage view shows dependencies within a workspace, not cross-workspace discovery
- D: XMLA is for model management
Question 4
✅ Correct Answer: B
Explanation:
Direct Lake queries Delta tables directly in OneLake without importing data into VertiPaq.
Why others are wrong:
- A: That describes Import mode
- C: Fallback is optional
- D: Incremental refresh is not required
Question 5
✅ Correct Matching:
- 1 → C
- 2 → A
- 3 → B
Explanation:
- Lakehouse = open storage + Spark
- Warehouse = high-concurrency SQL
- Eventhouse = streaming/time-series
Question 6
✅ Correct Answer: B
Explanation:
Eventstream → Eventhouse is optimized for high-volume streaming telemetry.
Question 7
✅ Correct Answer: B – SQL VIEW
Explanation:
Views allow joins without materializing data.
Why others are wrong:
- A/C/D materialize or duplicate data
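To make the distinction concrete, here is a minimal sketch (using SQLite via Python's standard `sqlite3` module, not Fabric itself; the table and column names are invented for illustration). The view stores only a query definition, so the join runs at query time and no joined result is ever written out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 10, 50.0), (2, 10, 25.0), (3, 11, 40.0)])
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(10, "Avery"), (11, "Blake")])

# The view holds only the SELECT definition; the join executes on demand,
# so the combined result set is never materialized.
cur.execute("""
    CREATE VIEW customer_sales AS
    SELECT c.name, SUM(o.amount) AS total
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.name
""")
rows = cur.execute("SELECT name, total FROM customer_sales ORDER BY name").fetchall()
print(rows)  # [('Avery', 75.0), ('Blake', 40.0)]
```

A Power Query merge or a Dataflow Gen2 output table, by contrast, physically writes the joined rows to storage on every refresh.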
Question 8
✅ Correct Answers: A and C
Explanation:
- Shortcuts avoid copying data
- Shared semantic models reduce duplication
Question 9
✅ Correct Answer: D
Explanation:
Incremental refresh requires a Date or DateTime column.
Question 10
✅ Correct Answer: B
Explanation:
Handling nulls in Power Query ensures clean data before modeling.
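As a rough illustration of why upstream handling matters (a plain-Python sketch, not Power Query; the column values are invented), replacing nulls before the data reaches the model makes every downstream aggregation behave predictably:

```python
# Hypothetical numeric column containing nulls (None).
amounts = [100.0, None, 50.0, None, 150.0]

# The Power Query analogue: replace nulls with 0 upstream, before modeling.
# Whether to replace with 0 or drop the rows is a business decision.
cleaned = [0.0 if v is None else v for v in amounts]

# Aggregations now see a complete column: the nulls count as zeros.
average = sum(cleaned) / len(cleaned)
print(average)  # 60.0
```

Patching the same gap in a DAX measure or a visual fixes only that one measure or visual; the cleansing step fixes it once for the whole model.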
Question 11
✅ Correct Answer: B
Explanation:
Row filtering translates directly to a WHERE clause, so it folds back to the source in most SQL connectors.
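A sketch of what "foldable" means in practice (the helper function and table names are hypothetical, not Power Query internals): a foldable step rewrites the source query instead of pulling rows to the client, whereas a step with no SQL equivalent, such as adding an index column, breaks the fold at that point:

```python
def fold_filter(base_sql: str, column: str, op: str, value) -> str:
    """Push a row filter down into the source query (illustrative helper)."""
    literal = f"'{value}'" if isinstance(value, str) else str(value)
    return f"SELECT * FROM ({base_sql}) AS q WHERE {column} {op} {literal}"

# The filter step becomes part of the SQL the source executes,
# so only matching rows ever leave the database.
sql = fold_filter("SELECT * FROM Sales", "Region", "=", "West")
print(sql)
# SELECT * FROM (SELECT * FROM Sales) AS q WHERE Region = 'West'
```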
Question 12
✅ Correct Answers: A and C
Explanation:
Denormalization flattens related tables into wide dimensions, which suits star-schema reporting and high-performance analytics.
Question 13
✅ Correct Answer: B
Explanation:
Splitting text columns increases cardinality dramatically.
Question 14
✅ Correct Answer: C
Explanation:
Dataflow Gen2 enables reusable transformations.
Question 15
✅ Correct Answer:
RangeStart and RangeEnd
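These two DateTime parameters bound the query for each refresh partition. A plain-Python sketch of the filter logic they drive (the row shape and `OrderDate` column are invented for illustration); note the half-open interval, which is the documented pattern so that adjacent partitions neither overlap nor drop a row:

```python
from datetime import datetime

def filter_partition(rows, range_start, range_end):
    """Keep rows whose OrderDate falls in [RangeStart, RangeEnd) --
    one bound inclusive, the other exclusive, so partitions tile cleanly."""
    return [r for r in rows if range_start <= r["OrderDate"] < range_end]

rows = [
    {"OrderDate": datetime(2024, 1, 15), "Amount": 10},
    {"OrderDate": datetime(2024, 2, 3),  "Amount": 20},
    {"OrderDate": datetime(2024, 2, 28), "Amount": 30},
]
# The "February" partition refreshes only February's rows.
feb = filter_partition(rows, datetime(2024, 2, 1), datetime(2024, 3, 1))
print([r["Amount"] for r in feb])  # [20, 30]
```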
Question 16
✅ Correct Answer: A – Shortcut
Explanation:
Shortcuts allow querying data without copying it.
Question 17
✅ Correct Answer: B
Explanation:
Snowflake schemas introduce excessive joins, hurting performance.
Question 18
✅ Correct Answers: A and B
Explanation:
Lakehouse data can be queried via Spark SQL or SQL endpoint.
Question 19
✅ Correct Answer: C – KQL
Question 20
✅ Correct Answer: C
Explanation:
Type 2 dimensions preserve historical changes.
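A minimal sketch of the Type 2 pattern (plain Python with invented column names, not a specific Fabric API): instead of overwriting a changed attribute, the current row is expired and a new version is appended, so history survives:

```python
from datetime import date

def scd2_update(dim_rows, key, new_city, today):
    """Type 2 upsert: expire the current version, append the new one."""
    for row in dim_rows:
        if row["customer_id"] == key and row["is_current"]:
            if row["city"] == new_city:
                return dim_rows          # attribute unchanged, nothing to do
            row["is_current"] = False    # close out the old version
            row["valid_to"] = today
    dim_rows.append({"customer_id": key, "city": new_city,
                     "valid_from": today, "valid_to": None,
                     "is_current": True})
    return dim_rows

dim = [{"customer_id": 1, "city": "Oslo",
        "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_update(dim, 1, "Bergen", date(2024, 6, 1))
print(len(dim), dim[-1]["city"])  # 2 Bergen
```

Facts dated before June 2024 still join to the Oslo version of the customer; newer facts join to Bergen.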
Question 21
✅ Correct Answer: A – Impact analysis
Question 22
✅ Correct Answers: A and C
Question 23
✅ Correct Answer: C
Question 24
✅ Correct Answer: C – PBIT
Explanation:
Templates exclude data and credentials.
SECTION B – Semantic Models (25–48)
Question 25
✅ Correct Answer: B – Star schema
Question 26
✅ Correct Answer: B – Bridge table
Question 27
✅ Correct Answers: A and B
Explanation:
Composite models combine Import + DirectQuery.
Question 28
✅ Correct Answer: C
Question 29
✅ Correct Answer: C – Field parameters
Question 30
✅ Correct Answer: C
Explanation:
Simple aggregations (e.g., SUM) are evaluated by the storage engine, while iterators evaluate row by row in the formula engine, so simple aggregations generally perform best.
Question 31
✅ Correct Answers: A and C
Question 32
✅ Correct Answer: B
Question 33
✅ Correct Answer: B – Performance Analyzer
Question 34
✅ Correct Answer: C
Question 35
✅ Correct Answer: B – Git integration
Question 36
✅ Correct Matching:
- 1 → B
- 2 → A
- 3 → C
Question 37
✅ Correct Answer: C – Hybrid table
Question 38
✅ Correct Answer: B – Single
Question 39
✅ Correct Answers: A and B
Question 40
✅ Correct Answer: B
Question 41
✅ Correct Answer: B
Question 42
✅ Correct Answer:
Delta tables stored in OneLake
Question 43
✅ Correct Answer: B – Hybrid model
Question 44
✅ Correct Answer: B – Bridge tables
Question 45
✅ Correct Answer: C
Question 46
✅ Correct Answers: A and B
Question 47
✅ Correct Answer: C
Question 48
✅ Correct Answer: B – Detect data changes
SECTION C – Maintain & Govern (49–60)
Question 49
✅ Correct Answer: C – Admin
Question 50
✅ Correct Answers: B and D
Question 51
✅ Correct Answer: B – Endorsement
Question 52
✅ Correct Answer: B – Test
Question 53
✅ Correct Answer: C
Question 54
✅ Correct Answer: B – Impact analysis
Question 55
✅ Correct Answers: A and B
Question 56
✅ Correct Answer: C – Admin
Question 57
✅ Correct Answer:
Row level
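Conceptually, RLS is a predicate applied per row at query time. A plain-Python sketch of the idea (the table, the user-to-region mapping, and the user names are invented for illustration):

```python
# Hypothetical fact table and user-to-region security mapping.
sales = [
    {"region": "East", "amount": 100},
    {"region": "West", "amount": 200},
    {"region": "East", "amount": 50},
]
user_region = {"alice@contoso.com": "East", "bob@contoso.com": "West"}

def apply_rls(rows, user):
    """Every query the user issues sees only rows passing the filter."""
    allowed = user_region[user]
    return [r for r in rows if r["region"] == allowed]

# Alice's totals include only East rows, transparently to her reports.
print(sum(r["amount"] for r in apply_rls(sales, "alice@contoso.com")))  # 150
```

Column- and object-level security, by contrast, hide whole columns or tables rather than filtering rows.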
Question 58
✅ Correct Answer: B – PBIP
Question 59
✅ Correct Answer: B – Sensitivity labels
Question 60
✅ Correct Answer: C
