Tag: Power BI

Practice Questions: Create and modify parameters (PL-300 Exam Prep)

This post is part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; this topic falls under the following sections:
Prepare the data (25–30%)
--> Get or connect to data
--> Create and modify parameters


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

Question 1

What is the primary purpose of using parameters in Power Query?

A. To improve visual rendering performance
B. To store calculated DAX expressions
C. To replace hard-coded values with reusable variables
D. To control report-level security

✅ Correct Answer: C

Explanation:
Parameters allow you to define reusable values that replace hard-coded inputs such as server names, file paths, or filter values.
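As a sketch, replacing a hard-coded file path with a parameter looks like this in M (the parameter name FilePath and the path are hypothetical):

```m
// Before: the path is hard-coded inside the query.
// Source = Csv.Document(File.Contents("C:\Data\Sales.csv"))

// After: FilePath is a Text parameter created via Manage Parameters,
// so the query follows the file when it moves.
let
    Source = Csv.Document(File.Contents(FilePath))
in
    Source
```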


Question 2

Which scenario is the best use case for Power Query parameters?

A. Creating dynamic slicers for end users
B. Switching between development and production databases
C. Defining relationships between tables
D. Creating calculated measures

✅ Correct Answer: B

Explanation:
Parameters are commonly used to manage environment-specific values like server and database names.
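For example, with two hypothetical text parameters ServerName and DatabaseName, the connection step becomes:

```m
// Switching between development and production only requires
// changing the parameter values, not the query logic.
let
    Source = Sql.Database(ServerName, DatabaseName)
in
    Source
```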


Question 3

Where are parameters created and managed in Power BI Desktop?

A. Report view
B. Model view
C. DAX editor
D. Power Query Editor

✅ Correct Answer: D

Explanation:
Parameters are created and modified in the Power Query Editor under Manage Parameters.


Question 4

Which data types can be assigned to a Power Query parameter?

A. Text only
B. Numeric only
C. Any DAX-supported data type
D. Text, Number, Date, DateTime, and Boolean

✅ Correct Answer: D

Explanation:
Power Query parameters support multiple data types, including text, numeric, and date-based types.


Question 5

What happens when you change the current value of a parameter that is used in a query?

A. The report visuals automatically change without refresh
B. Only the parameter value updates, not the data
C. The dependent queries are refreshed using the new value
D. The parameter is deleted and recreated

✅ Correct Answer: C

Explanation:
Changing a parameter’s current value causes any dependent queries to refresh using the updated value.


Question 6

Which option helps restrict users to predefined parameter values?

A. Required
B. Suggested values
C. Parameter type
D. Query folding

✅ Correct Answer: B

Explanation:
The Suggested Values setting, when set to "List of values", restricts input to a predefined list of acceptable values.


Question 7

A Power BI dataset uses a parameter to define a file path. The file location changes. What is the most efficient way to update the dataset?

A. Rewrite the M code
B. Replace the query entirely
C. Update the parameter value
D. Create a calculated column

✅ Correct Answer: C

Explanation:
Updating the parameter value avoids editing query logic and ensures all dependent queries update automatically.


Question 8

Which statement about Power Query parameters is true?

A. Parameters can only be used for filtering rows
B. Parameters must be created using M code
C. Parameters appear as queries in Power Query
D. Parameters are only supported in Import mode

✅ Correct Answer: C

Explanation:
Parameters appear as special queries in Power Query and can be referenced by other queries.
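In the Advanced Editor, a parameter is simply a query that returns a single value, tagged with metadata; a sketch for a text parameter (value and metadata fields as Power Query typically generates them):

```m
// A Text parameter whose current value is "dev-sql01".
"dev-sql01" meta [
    IsParameterQuery = true,
    Type = "Text",
    IsParameterQueryRequired = true
]
```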


Question 9

A parameter is used to filter data in a query. What happens if the parameter’s value does not match the expected data type?

A. Power BI automatically converts the value
B. The report loads with blank visuals
C. The query refresh may fail
D. The parameter is ignored

✅ Correct Answer: C

Explanation:
A mismatch between parameter type and usage can cause query errors during refresh.


Question 10

How are parameters most commonly tested on the PL-300 exam?

A. Memorizing the M syntax used to create them
B. Identifying UI navigation steps
C. Selecting the correct scenario where parameters improve flexibility
D. Writing custom functions

✅ Correct Answer: C

Explanation:
The exam focuses on when and why to use parameters rather than detailed syntax or UI steps.


Exam Readiness Summary ✅

You are well prepared for this topic if you can:

  • Explain why parameters are used
  • Identify environment and filtering scenarios
  • Understand how parameter changes affect refresh
  • Recognize parameters as a Power Query feature, not DAX

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Evaluate Data including Data Statistics & Column Properties (PL-300 Exam Prep)

This post is part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; this topic falls under the following sections:
Prepare the data (25–30%)
--> Profile and clean the data
--> Evaluate Data including Data Statistics & Column Properties


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

Question 1

Which Power Query feature helps you quickly identify null, error, and valid values in a column?

A. Column distribution
B. Column profile
C. Column quality
D. Data type detection

✅ Correct Answer: C

Explanation:
Column quality visually shows the percentage of valid, error, and empty (null) values in a column.


Question 2

You want to understand how often each value appears in a categorical column. Which tool should you use?

A. Column quality
B. Column distribution
C. Column profile
D. Default summarization

✅ Correct Answer: B

Explanation:
Column distribution displays value frequency and helps identify duplicates or unexpected values.


Question 3

Which Power Query tool provides statistics such as minimum, maximum, and average values?

A. Column quality
B. Column distribution
C. Column profile
D. Data view

✅ Correct Answer: C

Explanation:
Column profile provides detailed statistical information for numeric and date columns.


Question 4

A numeric column cannot be aggregated in a visual because it is treated as text. What should you evaluate first?

A. Column distribution
B. Column format
C. Column data type
D. Default summarization

✅ Correct Answer: C

Explanation:
Incorrect data types prevent aggregation. The data type should be corrected before formatting or summarization.


Question 5

Which setting determines how a numeric column behaves by default when added to a visual?

A. Column profile
B. Data type
C. Column format
D. Default summarization

✅ Correct Answer: D

Explanation:
Default summarization controls whether a numeric column sums, averages, counts, or does not summarize.


Question 6

When should data evaluation and profiling primarily take place?

A. In Report view
B. In Model view
C. In Power Query Editor
D. In the Power BI Service

✅ Correct Answer: C

Explanation:
Data profiling and evaluation are performed in Power Query before data is loaded into the model.


Question 7

You suspect a column contains unexpected negative values. Which feature helps confirm this?

A. Column quality
B. Column distribution
C. Column profile
D. Column format

✅ Correct Answer: C

Explanation:
Column profile provides min/max statistics, making it ideal for detecting invalid ranges.


Question 8

Which column property affects how values are displayed but not how they are calculated?

A. Data type
B. Column format
C. Column profile
D. Default summarization

✅ Correct Answer: B

Explanation:
Formatting controls visual appearance (currency, percentage, date format) but does not affect calculations.


Question 9

A column shows a high percentage of error values during evaluation. What is the most likely implication?

A. The column has duplicate values
B. The column contains invalid or incompatible data
C. The column is incorrectly summarized
D. The column has incorrect formatting

✅ Correct Answer: B

Explanation:
Error values typically indicate issues such as invalid conversions or incompatible data types.


Question 10

Which pairing of Power Query tools and purpose is correct?

A. Column quality → value frequency analysis
B. Column distribution → identifying null values
C. Column profile → statistical analysis
D. Column format → detecting outliers

✅ Correct Answer: C

Explanation:
Column profile is used for statistical analysis, including min, max, and average values.


Exam Readiness Check ✅

You’re well prepared for this PL-300 topic if you can:

  • Distinguish between column quality, distribution, and profile
  • Identify which tool answers which question
  • Understand how column properties affect modeling and visuals
  • Recognize that data evaluation happens before modeling

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Resolve inconsistencies, unexpected or null values, and data quality issues (PL-300 Exam Prep)

This post is part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; this topic falls under the following sections:
Prepare the data (25–30%)
--> Profile and clean the data
--> Resolve inconsistencies, unexpected or null values, and data quality issues


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

1. Identifying Null Values

You open Power Query and want to quickly identify which columns contain null values. Which feature should you use?

A. Column distribution
B. Column quality
C. Column profile
D. Query dependencies

Correct Answer: B

Explanation:
Column Quality displays the percentage of Valid, Error, and Empty (null) values, making it the fastest way to identify nulls in a column.


2. Replacing Null Values Appropriately

A numeric column contains null values that should be treated as zero for reporting purposes. What is the BEST approach?

A. Remove rows with null values
B. Replace null values with 0
C. Fill down the column
D. Change the column data type

Correct Answer: B

Explanation:
Replacing nulls with zero is appropriate when the business meaning supports it. Removing rows would result in data loss.
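In M, this is a single Replace Values step scoped to the column (the column name Amount is hypothetical):

```m
// Replace null with 0 in the Amount column only; other columns are untouched.
Table.ReplaceValue(Source, null, 0, Replacer.ReplaceValue, {"Amount"})
```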


3. Fixing Inconsistent Text Values

A column contains values such as “USA”, “usa”, and “Usa”. What is the MOST efficient way to standardize these values?

A. Replace values manually
B. Create a conditional column
C. Apply text formatting (Uppercase)
D. Remove duplicate rows

Correct Answer: C

Explanation:
Applying a text format such as Uppercase standardizes casing efficiently across the entire column.
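This corresponds to Transform → Format → UPPERCASE, which generates roughly the following step (Country is a hypothetical column name):

```m
// Standardize casing across the entire column in one step.
Table.TransformColumns(Source, {{"Country", Text.Upper, type text}})
```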


4. Detecting Unexpected Values

You suspect a column contains invalid numeric values outside the expected range. Which Power Query feature helps identify this?

A. Column quality
B. Column distribution
C. Column profile
D. Data type conversion

Correct Answer: C

Explanation:
Column Profile shows statistics such as minimum, maximum, and average, making it ideal for detecting outliers and unexpected values.


5. Understanding Data Profiling Scope

Why might column profiling results appear inaccurate in Power Query?

A. Profiling only works for text columns
B. Profiling is disabled by default
C. Profiling is based on a data sample
D. Profiling ignores null values

Correct Answer: C

Explanation:
By default, Power Query profiles only a sample of rows. You must enable “Column profiling based on entire dataset” for full accuracy.


6. Handling Error Values

A column contains error values caused by invalid data type conversions. What is the BEST first step?

A. Remove rows with errors
B. Replace errors with null
C. Fix the transformation causing the error
D. Change the column name

Correct Answer: C

Explanation:
Best practice is to address the root cause of errors rather than simply removing or replacing them.


7. Removing Leading and Trailing Spaces

A text column appears to have duplicate values due to extra spaces. Which transformation should you apply?

A. Clean
B. Trim
C. Replace values
D. Capitalize Each Word

Correct Answer: B

Explanation:
Trim removes leading and trailing spaces, which commonly cause hidden inconsistencies in text data.


8. Cleaning Non-Printable Characters

Some text values contain hidden characters that affect filtering and grouping. Which transformation should you use?

A. Trim
B. Replace errors
C. Clean
D. Format → Lowercase

Correct Answer: C

Explanation:
The Clean transformation removes non-printable characters that are not visible but can impact data quality.
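Trim and Clean map to Text.Trim and Text.Clean in M, and are often applied together when scrubbing text (CustomerName is a hypothetical column):

```m
// Trim removes leading/trailing spaces; Clean strips non-printable characters.
Table.TransformColumns(Source,
    {{"CustomerName", each Text.Clean(Text.Trim(_)), type text}})
```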


9. Resolving Missing Values in Hierarchical Data

A dataset contains a category value only in the first row of each group, with subsequent rows being null. What is the BEST solution?

A. Replace nulls with “Unknown”
B. Fill down
C. Remove null rows
D. Merge columns

Correct Answer: B

Explanation:
Fill Down propagates the previous non-null value and is commonly used for hierarchical or grouped data structures.
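The generated step is a one-liner (Category is a hypothetical column name):

```m
// Propagate the last non-null value downward through the column.
Table.FillDown(Source, {"Category"})
```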


10. Examining Category Inconsistencies

You want to identify inconsistent category names such as misspellings or variations. Which tool is MOST useful?

A. Column quality
B. Column distribution
C. Query parameters
D. Conditional formatting

Correct Answer: B

Explanation:
Column Distribution shows value frequency, making it easy to spot variations and inconsistencies in categorical data.


✅ PL-300 Exam Takeaways

  • Know which profiling tool answers which question
  • Understand why a transformation is used, not just how
  • Prefer fixing issues in Power Query instead of DAX
  • Expect scenario-based questions asking for the best action

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Resolve Data Import Errors (PL-300 Exam Prep)

This post is part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; this topic falls under the following sections:
Prepare the data (25–30%)
--> Profile and clean the data
--> Resolve Data Import Errors


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

1. Identifying the Cause of an Import Error

A column fails to load because Power BI cannot convert certain values to a numeric data type. What is the MOST likely cause?

A. Duplicate values in the column
B. Text values mixed with numeric values
C. The column contains too many rows
D. The column is hidden

Correct Answer: B

Explanation:
Type conversion errors commonly occur when text values (such as "N/A" or "Unknown") exist in a column expected to be numeric.


2. Viewing Only Error Rows

You want to see only the rows that caused an import error in Power Query. What should you do?

A. Enable Column distribution
B. Use Keep Errors on the column
C. Change the column data type
D. Open Advanced Editor

Correct Answer: B

Explanation:
Keep Errors filters the column to show only rows containing error values, making troubleshooting easier.
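Keep Errors generates a Table.SelectRowsWithErrors step; passing a column list narrows the check to the columns you care about (Amount is hypothetical):

```m
// Keep only the rows where the Amount column contains an error value.
Table.SelectRowsWithErrors(Source, {"Amount"})
```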


3. Best First Step When Errors Occur

A column shows error values after changing its data type. What is the BEST initial action?

A. Remove all error rows
B. Replace errors with null
C. Identify and fix the source values
D. Disable query refresh

Correct Answer: C

Explanation:
Best practice is to fix the root cause (invalid source values) before applying type conversion or replacing errors.


4. Handling Errors Without Losing Rows

You want to keep all rows but avoid breaking visuals caused by error values. What should you do?

A. Remove rows with errors
B. Replace errors with null
C. Replace values with zero
D. Delete the column

Correct Answer: B

Explanation:
Replacing errors with null preserves rows while preventing calculation and visualization failures.
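The equivalent M step, with a hypothetical Amount column:

```m
// Substitute error values in Amount with null; all rows are preserved.
Table.ReplaceErrorValues(Source, {{"Amount", null}})
```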


5. Import Errors After Removing a Column

A query fails after a column is removed earlier in the applied steps. What is the MOST likely reason?

A. The column had duplicate values
B. A later step references the removed column
C. The column contained nulls
D. The column data type was incorrect

Correct Answer: B

Explanation:
Applied steps are sequential. If a later step references a removed or renamed column, the query will fail.


6. Merge Query Errors

A merge query fails because matching columns have different data types. What should you do?

A. Replace errors with null
B. Change both columns to the same data type
C. Remove duplicate rows
D. Use a conditional column

Correct Answer: B

Explanation:
Merge keys must have matching data types. Mismatches commonly cause merge errors.


7. Fixing Date Conversion Errors

Dates import as text and generate errors when converted to Date. The issue is caused by regional formatting differences. What is the BEST solution?

A. Replace errors with today’s date
B. Remove the column
C. Change data type using locale
D. Fill down the column

Correct Answer: C

Explanation:
Using Locale allows Power BI to correctly interpret date formats based on regional settings.
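Change Type → Using Locale adds a culture argument to the type-change step; for example, parsing day-first dates (column name and culture are illustrative):

```m
// Interpret "31/01/2024"-style text as dates using the en-GB culture.
Table.TransformColumnTypes(Source, {{"OrderDate", type date}}, "en-GB")
```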


8. Understanding Replace Errors

What does the Replace Errors transformation do?

A. Removes rows with errors
B. Replaces null values only
C. Substitutes error values with a specified value
D. Fixes the underlying data issue automatically

Correct Answer: C

Explanation:
Replace Errors allows you to replace error values with a defined value (often null), but it does not fix the root cause.


9. Diagnosing Transformation Errors

Which Power Query feature helps identify which step in the query caused an error?

A. Column quality
B. Query dependencies
C. Applied Steps pane
D. Data view

Correct Answer: C

Explanation:
The Applied Steps pane shows each transformation and highlights where errors occur.


10. Best Practice for Preventing Import Errors

Which approach BEST reduces the risk of data import errors?

A. Converting data types as early as possible
B. Cleaning and validating data before type conversion
C. Removing all null values
D. Importing fewer columns

Correct Answer: B

Explanation:
Cleaning data before assigning data types prevents conversion errors and ensures stable transformations.


✅ PL-300 Exam Takeaways

  • Import errors usually stem from type mismatches, invalid values, or broken applied steps
  • Fixing the root cause is preferred over removing data
  • Know when to use Replace Errors, Keep Errors, and Remove Errors
  • Expect scenario-based questions that test transformation order and reasoning

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Select the appropriate column data type (PL-300 Exam Prep)

This post is part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; this topic falls under the following sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Select the appropriate column data type


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

1. Choosing the Correct Type for IDs

A column contains values such as 100245, 100312, and 100587. These values are used only to uniquely identify records. What data type should you use?

A. Whole Number
B. Decimal Number
C. Text
D. Fixed Decimal Number

Correct Answer: C

Explanation:
Identifiers should be stored as Text rather than a numeric type: they are not used in calculations, and values may contain leading zeros that a numeric type would drop.


2. Fixing Incorrect Aggregation Behavior

A column representing revenue does not aggregate correctly in visuals. The column’s data type is set to Text. What should you do?

A. Change the data type to Decimal Number
B. Replace null values
C. Remove duplicate rows
D. Create a calculated column

Correct Answer: A

Explanation:
Text columns cannot be aggregated. Changing the data type to Decimal Number enables proper numeric calculations.


3. When to Use Fixed Decimal Number

Which scenario is BEST suited for the Fixed Decimal Number data type?

A. Product quantity
B. Currency values requiring precision
C. Customer ID numbers
D. Percentage values used in visuals

Correct Answer: B

Explanation:
Fixed Decimal Number is designed for financial data where precision is critical, such as currency.
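In M, Fixed Decimal Number corresponds to Currency.Type (UnitPrice is a hypothetical column):

```m
// Fixed Decimal Number stores exact values to four decimal places,
// avoiding floating-point rounding in financial calculations.
Table.TransformColumnTypes(Source, {{"UnitPrice", Currency.Type}})
```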


4. Handling Dates Imported as Text

A column contains date values stored as text due to regional formatting differences. What is the BEST way to convert them?

A. Change data type directly to Date
B. Replace values manually
C. Change data type using locale
D. Create a calculated column

Correct Answer: C

Explanation:
Using Change Type → Using Locale ensures Power BI interprets dates correctly based on regional formats.


5. Preventing Data Type Errors

You are experiencing errors after converting a column’s data type. What should you do FIRST?

A. Remove rows with errors
B. Replace errors with null
C. Clean and validate the source values
D. Disable automatic type detection

Correct Answer: C

Explanation:
Errors often occur because the data is not clean. Always clean data before converting types.


6. Relationship Creation Failure

You are unable to create a relationship between two tables on a key column. What is the MOST likely reason?

A. The column contains duplicate values
B. The column contains null values
C. The data types of the columns do not match
D. The column is hidden

Correct Answer: C

Explanation:
Relationship columns must have matching data types for Power BI to create the relationship.


7. Choosing Between Date and Date/Time

A column contains dates only, with no time component. Which data type is MOST appropriate?

A. Text
B. Date/Time
C. Date
D. Date/Time/Timezone

Correct Answer: C

Explanation:
Using Date avoids unnecessary time components and improves filtering and grouping accuracy.


8. Automatic Type Detection Issues

Power BI automatically assigns a Whole Number type to a column that should be text. Why is this a problem?

A. It increases model size
B. It prevents the column from being filtered
C. It can remove leading zeros
D. It causes duplicate values

Correct Answer: C

Explanation:
Numeric types do not preserve leading zeros, which can be critical for codes or IDs.


9. Best Practice for Applying Data Types

When should you apply data type changes in Power Query?

A. Immediately after loading the data
B. After creating relationships
C. After cleaning and transforming the data
D. After publishing the report

Correct Answer: C

Explanation:
Applying data types after cleaning prevents conversion errors and ensures consistent transformations.


10. Diagnosing Model Performance Issues

A Power BI model is larger than expected and performs poorly. One cause is unnecessary Date/Time columns where only Date is needed. What should you do?

A. Convert Date/Time columns to Date
B. Convert Date columns to Text
C. Remove the columns
D. Disable auto date/time

Correct Answer: A

Explanation:
Using Date instead of Date/Time reduces model size and improves performance when time data is not required.


✅ PL-300 Exam Takeaways

  • Data type selection affects calculations, relationships, and performance
  • IDs should almost always be Text
  • Clean data before converting data types
  • Know when to use Decimal vs Fixed Decimal
  • Expect subtle, behavior-based questions on the exam

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Create and transform columns (PL-300 Exam Prep)

This post is part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; this topic falls under the following sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Create and transform columns


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

1. Choosing the Correct Tool

You need to split a column named FullName into FirstName and LastName before loading the data into the model. What is the BEST approach?

A. Create two DAX calculated columns
B. Use Split Column in Power Query
C. Use a measure
D. Use a visual-level calculation

Correct Answer: B

Explanation:
Structural changes like splitting columns should be done in Power Query before the data loads into the model.
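A sketch of the generated step, splitting on the space delimiter (column names are illustrative):

```m
// Split FullName on the space delimiter into two new columns.
Table.SplitColumn(Source, "FullName",
    Splitter.SplitTextByDelimiter(" "), {"FirstName", "LastName"})
```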


2. Fixing Inconsistent Text Data

A column contains values with inconsistent casing (e.g., “north”, “North”, “NORTH”). What transformation should you apply?

A. Replace values
B. Trim
C. Format → Uppercase
D. Conditional column

Correct Answer: C

Explanation:
Formatting text to Uppercase (or Lowercase) standardizes values efficiently and avoids manual replacements.


3. Extracting Date Components

You need a column containing only the year from an existing Date column. What is the BEST option?

A. Create a DAX measure
B. Extract → Year in Power Query
C. Convert the date to text
D. Use a conditional column

Correct Answer: B

Explanation:
Power Query provides built-in Extract options for date components such as Year, Month, and Day.


4. Creating Business Logic Columns

You want to classify sales as “High” when the amount is greater than 1,000 and “Low” otherwise. What should you use?

A. Replace values
B. Split column
C. Conditional column
D. Index column

Correct Answer: C

Explanation:
Conditional columns are designed for simple business logic and can be created through the UI without writing code.
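The UI produces roughly this step (Amount and SalesBand are hypothetical names):

```m
// High when Amount is greater than 1,000; otherwise Low.
Table.AddColumn(Source, "SalesBand",
    each if [Amount] > 1000 then "High" else "Low", type text)
```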


5. Removing Hidden Characters

A text column contains hidden, non-printable characters that affect filtering. Which transformation should you apply?

A. Trim
B. Clean
C. Replace errors
D. Capitalize Each Word

Correct Answer: B

Explanation:
Clean removes non-printable characters that are not visible but can cause data issues.


6. Merging Text Values into One Column

You need to combine City and State columns into a single column called Location. What should you use?

A. Split Column
B. Replace Values
C. Merge Columns
D. Custom Column

Correct Answer: C

Explanation:
Merge Columns combines multiple columns into one using a specified delimiter.
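The generated step, assuming a comma-space delimiter:

```m
// Combine City and State into a single Location column.
Table.CombineColumns(Source, {"City", "State"},
    Combiner.CombineTextByDelimiter(", "), "Location")
```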


7. When to Use a Custom Column

When should you create a Custom Column instead of a Conditional Column?

A. When no logic is required
B. When simple IF/ELSE logic is needed
C. When more complex formulas or functions are required
D. When renaming columns

Correct Answer: C

Explanation:
Custom columns use Power Query M code, allowing more complex logic than conditional columns.


8. Power Query vs DAX Calculated Columns

Which scenario is BEST suited for a DAX calculated column instead of a Power Query column?

A. Cleaning inconsistent text values
B. Splitting a column by delimiter
C. Creating a column dependent on model relationships
D. Removing leading spaces

Correct Answer: C

Explanation:
DAX calculated columns can reference model relationships, while Power Query columns cannot.


9. Creating Row Identifiers

You need to add a sequential number to each row to preserve order. What should you use?

A. Conditional column
B. Custom column
C. Index column
D. Replace values

Correct Answer: C

Explanation:
An Index Column assigns a sequential number to each row and is commonly used for sorting or tracking order.
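The generated step, starting the index at 1:

```m
// Add a 1-based sequential index to preserve the original row order.
Table.AddIndexColumn(Source, "Index", 1, 1, Int64.Type)
```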


10. Best Practice for Column Creation

Which is the BEST practice when creating and transforming columns?

A. Create all columns using DAX
B. Transform columns after publishing the report
C. Clean data before creating derived columns
D. Avoid renaming columns

Correct Answer: C

Explanation:
Derived columns should be created from clean, validated data to avoid compounding errors.


✅ PL-300 Exam Takeaways

  • Use Power Query for structural transformations
  • Use DAX calculated columns when logic depends on model context
  • Know when to split, merge, extract, or derive columns
  • Expect scenario-based questions testing tool choice and order of operations

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Group and aggregate rows (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Group and aggregate rows


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

Question 1

You have a sales table with one row per transaction. You need a table that shows total sales per customer, and this logic will not change. Where should you perform this aggregation?

A. Create a DAX measure using SUM()
B. Use Power Query Group By on Customer
C. Create a calculated column in DAX
D. Use a visual-level aggregation

Correct Answer: B

Explanation:
Power Query Group By is ideal for static aggregations that do not need to respond to slicers. This reduces data volume and improves performance, which aligns with PL-300 best practices.
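The effect of Group By on Customer with a Sum of Sales can be sketched in plain Python (illustrative data; not Power Query M):

```python
from collections import defaultdict

# Python analogy of Group By Customer with Sum of Sales: the output
# has one row per customer, and the transaction-level rows are gone.
transactions = [
    {"Customer": "Contoso", "Sales": 100},
    {"Customer": "Fabrikam", "Sales": 250},
    {"Customer": "Contoso", "Sales": 50},
]
totals = defaultdict(int)
for t in transactions:
    totals[t["Customer"]] += t["Sales"]

print(dict(totals))   # {'Contoso': 150, 'Fabrikam': 250}
```

Note that three transaction rows collapse into two customer rows — the reduction in row count is exactly where the performance benefit comes from.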


Question 2

Which aggregation option in Power Query counts the number of rows per group, regardless of column values?

A. Count
B. Count Distinct
C. Count Rows
D. Sum

Correct Answer: C

Explanation:
Count Rows counts all rows in each group. This is commonly tested and often confused with Count Distinct.


Question 3

You group a fact table in Power Query by Product and Year. What is the resulting grain of the table?

A. One row per product
B. One row per year
C. One row per product and year
D. One row per transaction

Correct Answer: C

Explanation:
When grouping by multiple columns, the output grain is the unique combination of those columns.
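A small Python sketch (illustrative data) makes the resulting grain concrete — one output row per unique (Product, Year) pair:

```python
# Python analogy of grouping by Product and Year: the keys of the
# result are the unique (Product, Year) combinations — the new grain.
rows = [
    ("Bike", 2023, 10), ("Bike", 2023, 5),
    ("Bike", 2024, 7), ("Helmet", 2023, 3),
]
grain = {}
for product, year, qty in rows:
    grain[(product, year)] = grain.get((product, year), 0) + qty

print(sorted(grain))   # [('Bike', 2023), ('Bike', 2024), ('Helmet', 2023)]
```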


Question 4

What is a key impact on the data model when you group and aggregate rows in Power Query?

A. Measures become faster but less accurate
B. Relationships are automatically removed
C. Detailed transaction-level data is no longer available
D. DAX calculations are no longer required

Correct Answer: C

Explanation:
Grouping in Power Query permanently removes lower-level detail, which can limit drill-down and analytical flexibility.


Question 5

Which scenario indicates you should NOT group data in Power Query?

A. The dataset is very large
B. Aggregation logic is fixed
C. Users need slicer-driven calculations
D. You want to reduce model size

Correct Answer: C

Explanation:
If calculations must respond dynamically to slicers or filters, aggregation should be handled with DAX measures, not Power Query grouping.


Question 6

A Power BI report requires time intelligence calculations such as Year-to-Date sales. Where should aggregation occur?

A. Power Query Group By
B. Calculated columns
C. DAX measures
D. Visual-level filters

Correct Answer: C

Explanation:
Time intelligence requires dynamic context, which only DAX measures can provide. Grouping in Power Query would prevent proper time-based calculations.


Question 7

What does the All Rows aggregation option produce in Power Query?

A. A single aggregated value
B. A calculated column
C. A nested table per group
D. A distinct count

Correct Answer: C

Explanation:
All Rows creates a nested table containing all rows for each group, often used for advanced transformations or custom calculations.
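In Python terms (a sketch with made-up data), All Rows is like keeping a list of each group's original rows instead of a scalar aggregate:

```python
from collections import defaultdict

# Python analogy of the All Rows option: each group keeps a nested
# "table" (here, a list of its original rows) rather than one value.
rows = [
    {"Region": "West", "Sales": 10},
    {"Region": "East", "Sales": 20},
    {"Region": "West", "Sales": 30},
]
nested = defaultdict(list)
for row in rows:
    nested[row["Region"]].append(row)

print(len(nested["West"]))   # 2 rows nested under the West group
```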


Question 8

You mistakenly use Sum instead of Count Rows during a Group By operation. What is the most likely issue?

A. Incorrect totals due to data type mismatch
B. Missing relationships in the model
C. Duplicate rows created
D. Slower refresh times

Correct Answer: A

Explanation:
Sum expects a numeric, additive column. Applying Sum where Count Rows was intended produces incorrect totals — summing an ID or code column, for example, yields a meaningless number — and can fail outright if the column's data type is not numeric.


Question 9

Which benefit is most directly associated with grouping data in Power Query?

A. Improved visual formatting
B. Reduced dataset size
C. Increased DAX flexibility
D. Automatic relationship creation

Correct Answer: B

Explanation:
Grouping reduces the number of rows loaded into the model, improving performance and memory usage.


Question 10

Which statement best reflects a PL-300 best practice regarding grouping and aggregation?

A. Always aggregate data before loading
B. Group data only when DAX cannot be used
C. Group data when the required grain is known and fixed
D. Use grouping to replace all measures

Correct Answer: C

Explanation:
Grouping is appropriate when the required grain is known, stable, and does not require interactivity. Knowing when not to group is just as important for the exam.


Final Exam Tips for This Topic

  • Expect decision-based questions, not just “how-to”
  • Be clear on Power Query vs DAX responsibilities
  • Understand grain, performance, and flexibility trade-offs
  • Watch for Count vs Count Rows vs Count Distinct
  • Remember: grouping in Power Query is irreversible — the detail rows are never loaded into the model

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Pivot, Unpivot, and Transpose Data (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Pivot, Unpivot, and Transpose Data


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions


Question 1

A dataset contains monthly sales stored in separate columns named Jan, Feb, Mar, and so on. You need to create a time-series visual that supports filtering by month. What should you do?

A. Pivot the Month columns
B. Unpivot the Month columns
C. Transpose the table
D. Create calculated columns in DAX

Correct Answer: B

Explanation:
Unpivoting converts repeated value columns into a single attribute–value pair, producing a normalized structure suitable for time-series analysis.
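The reshaping can be sketched in plain Python (illustrative data; in Power Query this is the Unpivot Columns / Unpivot Other Columns command):

```python
# Python analogy of Unpivot: the Jan/Feb/Mar value columns become
# attribute-value rows, while Product is kept as an identifier column.
wide = [{"Product": "Bike", "Jan": 10, "Feb": 12, "Mar": 9}]
month_cols = ["Jan", "Feb", "Mar"]
tall = [
    {"Product": row["Product"], "Month": m, "Sales": row[m]}
    for row in wide
    for m in month_cols
]
print(len(tall))   # 3 rows: one per (Product, Month)
```

The tall shape is what lets a single Month column drive a date axis and slicers.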


Question 2

You want to convert unique values from a column into individual columns, displaying aggregated results per category. Which transformation should you use?

A. Unpivot Columns
B. Transpose Table
C. Pivot Columns
D. Group By

Correct Answer: C

Explanation:
Pivoting turns values from one column into multiple columns, often requiring an aggregation function such as Sum or Count.
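The need for aggregation becomes obvious in a Python sketch (illustrative data): when two tall rows land in the same new column, something must combine them:

```python
# Python analogy of Pivot Columns with Sum: unique Category values
# become columns, and duplicate values per cell are aggregated.
tall = [
    {"Category": "Bikes", "Sales": 10},
    {"Category": "Helmets", "Sales": 4},
    {"Category": "Bikes", "Sales": 6},
]
pivoted = {}
for row in tall:
    pivoted[row["Category"]] = pivoted.get(row["Category"], 0) + row["Sales"]

print(pivoted)   # {'Bikes': 16, 'Helmets': 4}
```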


Question 3

Which transformation flips all rows into columns and columns into rows?

A. Pivot
B. Unpivot
C. Group By
D. Transpose

Correct Answer: D

Explanation:
Transpose rotates the entire table structure and is typically used when the table orientation is incorrect.


Question 4

After transposing a table, the first row now contains the desired column headers. What should you do next?

A. Unpivot Columns
B. Rename columns manually
C. Promote the first row to headers
D. Create a calculated table

Correct Answer: C

Explanation:
Promoting the first row to headers finalizes the transformation after transposing data.
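The two steps together can be sketched in Python (illustrative data; `zip(*table)` plays the role of Transpose):

```python
# Python analogy of Transpose followed by "Use First Row as Headers":
# zip(*table) flips rows and columns, then the first transposed row
# supplies the column names.
table = [
    ["Metric", "Q1", "Q2"],
    ["Sales",  100,  120],
]
transposed = [list(col) for col in zip(*table)]
headers, data = transposed[0], transposed[1:]
records = [dict(zip(headers, row)) for row in data]

print(records[0])   # {'Metric': 'Q1', 'Sales': 100}
```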


Question 5

Which scenario best indicates that unpivoting is required?

A. Values in a column need to be summarized
B. Categories should become column headers
C. Repeating attribute columns must become rows
D. Data needs to be grouped by multiple keys

Correct Answer: C

Explanation:
Unpivoting is used to normalize data where multiple columns represent the same attribute (such as months or measures).


Question 6

What is a potential negative impact of pivoting data in Power Query?

A. Increased DAX complexity
B. Reduced model performance due to many columns
C. Loss of relationships
D. Automatic removal of measures

Correct Answer: B

Explanation:
Pivoting can significantly increase the number of columns, bloating the model and negatively affecting performance.


Question 7

You receive a dataset where column headers are stored in the first column instead of the first row. Which transformation is most appropriate?

A. Unpivot
B. Pivot
C. Transpose
D. Group By

Correct Answer: C

Explanation:
Transposing the table re-orients rows and columns so headers can be promoted correctly.


Question 8

Which transformation often requires you to specify an aggregation function?

A. Transpose
B. Unpivot
C. Pivot
D. Remove Columns

Correct Answer: C

Explanation:
Pivoting may result in multiple values for a new column and therefore requires aggregation (Sum, Count, Average, etc.).


Question 9

You want to keep identifier columns (such as ProductID and Region) while converting all other columns into rows. Which option is best?

A. Pivot Selected Columns
B. Unpivot Other Columns
C. Transpose Table
D. Group By

Correct Answer: B

Explanation:
Unpivot Other Columns preserves key identifier columns while normalizing all remaining columns.


Question 10

Which statement reflects a PL-300 best practice for reshaping data?

A. Always pivot data before loading
B. Use transpose for large fact tables
C. Normalize data before modeling when possible
D. Avoid unpivoting to improve performance

Correct Answer: C

Explanation:
Normalized (unpivoted) data supports better filtering, relationships, and DAX flexibility, which aligns with PL-300 modeling principles.


Final Exam Tips for This Topic

  • Identify whether the data is wide or tall
  • Pivot = rows → columns
  • Unpivot = columns → rows
  • Transpose = rotate entire table
  • Watch for aggregation requirements when pivoting
  • The exam emphasizes choosing the right transformation, not UI memorization

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Convert Semi-Structured Data to a Table (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Convert Semi-Structured Data to a Table


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions


Question 1

You load a JSON file into Power BI. The resulting table contains a single column where each row shows List. What is the first step to analyze the data?

A. Expand the column
B. Convert the list to a table
C. Promote headers
D. Split the column by delimiter

Correct Answer: B

Explanation:
Lists must be converted into tables before they can be expanded or analyzed as rows.
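A hedged Python sketch of the same idea (Power Query does this with the "To Table" command; the `Column1` name mirrors its default output):

```python
import json

# Python analogy of "Convert to Table" on a JSON list: each list
# element becomes one row before any further expansion happens.
raw = '["North", "South", "East"]'
values = json.loads(raw)                  # a list, not yet tabular
table = [{"Column1": v} for v in values]  # one row per element

print(len(table))   # 3
```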


Question 2

A column in Power Query displays Record in each row. What does this indicate?

A. The column contains duplicated values
B. The column contains nested structured fields
C. The column contains multiple rows per record
D. The column contains untyped data

Correct Answer: B

Explanation:
A Record represents a nested structure with named fields that can be expanded into columns.


Question 3

Which Power Query action is used to expose fields stored inside a record?

A. Convert to Table
B. Pivot Column
C. Expand Column
D. Transpose Table

Correct Answer: C

Explanation:
Expanding a record reveals its internal fields as individual columns.
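In Python terms (illustrative data; the dotted column names mirror Power Query's default expansion naming):

```python
import json

# Python analogy of expanding a Record column: the nested fields of
# each record become their own columns on the parent row.
raw = '{"OrderID": 1, "Customer": {"Name": "Contoso", "City": "Seattle"}}'
order = json.loads(raw)
expanded = {
    "OrderID": order["OrderID"],
    "Customer.Name": order["Customer"]["Name"],
    "Customer.City": order["Customer"]["City"],
}
print(expanded["Customer.City"])   # Seattle
```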


Question 4

An API response loads as a table with a column containing lists of values. What is the correct transformation sequence?

A. Expand → Promote Headers
B. Convert to Table → Expand
C. Split Column → Fill Down
D. Group By → Expand

Correct Answer: B

Explanation:
Lists must be converted into tables first, after which they can be expanded.


Question 5

After expanding nested data, you notice duplicate rows in your fact table. What is the most likely cause?

A. Incorrect data type
B. Expanding without understanding data granularity
C. Missing relationships
D. Failure to promote headers

Correct Answer: B

Explanation:
Expanding nested structures without considering the grain can duplicate rows and inflate fact tables.
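A Python sketch (made-up order data) shows exactly how the grain inflates — and why order-level values become dangerous afterwards:

```python
# Python analogy of expanding a nested list column: one order row with
# three line items becomes three rows, so an order-level value such as
# Freight would be triple-counted if summed after expansion.
orders = [{"OrderID": 1, "Freight": 9.0, "Items": ["Bike", "Helmet", "Lock"]}]
expanded = [
    {"OrderID": o["OrderID"], "Freight": o["Freight"], "Item": item}
    for o in orders
    for item in o["Items"]
]
print(len(expanded))   # 3 rows from 1 order
```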


Question 6

You import an Excel file where headers appear in multiple rows instead of a single row. What is the most appropriate approach?

A. Expand the column
B. Convert the table to a list
C. Transpose the table and promote headers
D. Group rows by column

Correct Answer: C

Explanation:
Transposing realigns rows and columns so headers can be promoted properly.


Question 7

Which Power Query feature is most useful when category labels appear only once and apply to multiple rows below?

A. Replace Values
B. Fill Down
C. Unpivot Columns
D. Merge Queries

Correct Answer: B

Explanation:
Fill Down propagates a value into the blank rows beneath it — a pattern common in semi-structured spreadsheets where a category label appears only once above its detail rows.
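The behavior can be sketched in plain Python (illustrative data; `None` stands in for the nulls Power Query fills):

```python
# Python analogy of Fill Down: carry the last non-null category value
# forward into the blank rows beneath it.
rows = [
    {"Category": "Bikes", "Product": "Road"},
    {"Category": None,    "Product": "Mountain"},
    {"Category": "Gear",  "Product": "Helmet"},
    {"Category": None,    "Product": "Gloves"},
]
last = None
for row in rows:
    if row["Category"] is not None:
        last = row["Category"]
    else:
        row["Category"] = last

print(rows[1]["Category"], rows[3]["Category"])   # Bikes Gear
```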


Question 8

Why is it recommended to expand only required fields when converting semi-structured data?

A. To reduce report refresh frequency
B. To improve visual formatting
C. To reduce model size and complexity
D. To enable DirectQuery mode

Correct Answer: C

Explanation:
Expanding unnecessary fields increases model size and can negatively impact performance and usability.


Question 9

Which transformation should be completed before creating relationships in the data model?

A. Creating measures
B. Flattening semi-structured data
C. Formatting visuals
D. Applying row-level security

Correct Answer: B

Explanation:
Relationships require clean, tabular data. Semi-structured data must be flattened first.


Question 10

Which statement best reflects a PL-300 best practice for handling semi-structured data?

A. Leave nested data unexpanded until report creation
B. Use DAX to flatten semi-structured data
C. Normalize and flatten data in Power Query
D. Always transpose semi-structured tables

Correct Answer: C

Explanation:
Power Query is the correct place to normalize and flatten semi-structured data before modeling and analysis.


Final Exam Tips for This Topic

  • Recognize lists vs records vs tables
  • Lists → Convert to table
  • Records → Expand
  • Inspect data grain before expanding
  • Clean data before flattening
  • This topic is about recognition and transformation choices, not memorizing UI clicks

Go back to the PL-300 Exam Prep Hub main page