Tag: PL-300 Exam Hub

Group and Aggregate Rows (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Group and Aggregate Rows


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Grouping and aggregating rows is a foundational data preparation task used to summarize detailed data into meaningful totals before it is loaded into the Power BI data model. For the PL-300: Microsoft Power BI Data Analyst exam, Microsoft evaluates your understanding of how, when, and why to group data in Power Query, and how those decisions affect the data model and reporting outcomes.


Why Group and Aggregate Rows?

Grouping and aggregation are used to:

  • Summarize transactional or granular data
  • Reduce dataset size and improve performance
  • Shape fact tables to the correct grain
  • Prepare data for simpler reporting
  • Offload static calculations from DAX into Power Query

Exam Focus: The exam often tests decision-making—specifically whether aggregation should occur in Power Query or later in DAX.


Where Grouping Happens in Power BI

Grouping and aggregation for this exam objective occur in Power Query Editor, using:

  • Home → Group By
  • Transform → Group By

This transformation physically reshapes the dataset before it is loaded into the model.

Key Distinction: Power Query grouping changes the stored data. DAX measures calculate results dynamically at query time.


The Group By Operation

When using Group By, you define:

1. Group By Columns

Columns that determine how rows are grouped, such as:

  • Customer
  • Product
  • Date
  • Region

Each unique combination of these columns produces one row in the output.

2. Aggregation Columns

New columns created using aggregation functions applied to grouped rows.


Common Aggregation Functions (Exam-Relevant)

Power Query supports several aggregation functions frequently referenced on the PL-300 exam:

  • Sum – Adds numeric values
  • Count Rows – Counts rows in each group
  • Count Distinct Rows – Counts unique values
  • Average – Calculates the mean
  • Min / Max – Returns lowest or highest values
  • All Rows – Produces nested tables for advanced scenarios

Exam Tip: Be clear on the difference between Count Rows and Count Distinct—this is commonly tested.
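
Behind the Group By dialog, Power Query writes a Table.Group step in M. The sketch below uses invented sample data and column names; it shows Sum, Count Rows, and Count Distinct side by side and can be pasted into a blank query:

    let
        // Illustrative source data; in practice this comes from your connector
        Source = #table(
            type table [Customer = text, Product = text, Sales = number],
            {{"Alice", "Bike", 100}, {"Alice", "Lock", 150}, {"Bob", "Bike", 40}}
        ),
        // One output row per Customer, with three aggregation columns
        Grouped = Table.Group(
            Source,
            {"Customer"},
            {
                {"Total Sales", each List.Sum([Sales]), type number},                         // Sum
                {"Row Count", each Table.RowCount(_), Int64.Type},                            // Count Rows
                {"Distinct Products", each List.Count(List.Distinct([Product])), Int64.Type}  // Count Distinct
            }
        )
    in
        Grouped

Grouping on {"Customer", "Product"} instead of {"Customer"} would change the output grain to one row per customer and product combination.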


Grouping by One vs Multiple Columns

Grouping by a Single Column

Used to create high-level summaries such as:

  • Total sales per customer
  • Number of orders per product

Results in one row per unique value.


Grouping by Multiple Columns

Used when summaries must retain more detail, such as:

  • Sales by customer and year
  • Quantity by product and region

The output grain is defined by the combination of columns.


Impact on the Data Model

Grouping and aggregating rows in Power Query has a direct impact on the data model, which is an important exam consideration.

Key Impacts:

  • Reduced row count improves model performance
  • Changes the grain of fact tables
  • May eliminate the need for certain DAX measures
  • Can simplify relationships by reducing cardinality

Important Trade-Off:

Once data is aggregated in Power Query:

  • You cannot recover lower-level detail
  • You lose flexibility for drill-down analysis
  • Time intelligence and slicer-driven behavior may be limited

Exam Insight: Microsoft expects you to recognize when aggregation improves performance and when it limits analytical flexibility.


Group and Aggregate vs DAX Measures (Highly Tested)

Understanding where aggregation belongs is a core PL-300 skill.

Group in Power Query When:

  • Aggregation logic is fixed
  • You want to reduce data volume
  • Performance optimization is required
  • The dataset should load at a specific grain

Use DAX Measures When:

  • Aggregations must respond to slicers
  • Time intelligence is required
  • Users need flexible, dynamic calculations

Common Mistakes (Often Tested)

These are frequent pitfalls that appear in exam scenarios:

  • Grouping too early, eliminating needed detail
  • Aggregating data that should remain transactional
  • Using Sum on columns that should be counted
  • Confusing Count Rows with Count Distinct
  • Grouping in Power Query when a DAX measure is more appropriate
  • Forgetting to validate results after grouping
  • Incorrect data types causing aggregation errors

Exam Pattern: Many questions present a “wrong but plausible” grouping choice—look carefully at reporting requirements.


Best Practices for PL-300 Candidates

  • Understand the grain of your data before grouping
  • Group only when it adds clear value
  • Validate totals after aggregation
  • Prefer Power Query grouping for static summaries
  • Use DAX for dynamic, filter-aware calculations
  • Know when not to group:
    • When users need drill-down capability
    • When calculations must respond to slicers
    • When time intelligence is required
    • When future reporting needs are unknown

How This Appears on the PL-300 Exam

Expect scenario-based questions such as:

  • You need to reduce model size and improve performance. Where should aggregation occur?
  • Which aggregation produces unique counts per group?
  • What is the impact of grouping data before loading it into the model?
  • Why would grouping in Power Query be inappropriate in this scenario?

Key Takeaways

✔ Grouping is performed in Power Query, not DAX
✔ Aggregation reshapes data before modeling
✔ Grouping impacts performance, flexibility, and grain
✔ Know both when to group and when not to
✔ This topic tests data modeling judgment, not just mechanics


Practice Questions

Go to the Practice Exam Questions for this topic.

Create and Transform Columns (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Create and Transform Columns


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Columns are the foundation of data analysis in Power BI. The ability to create new columns and transform existing ones is essential for shaping your dataset into a structure that supports meaningful insights and accurate reports.

In the PL-300 exam, Microsoft tests not only whether you can perform transformations but also whether you understand when and why to apply them.


Why Create and Transform Columns?

Before data can be modeled and visualized:

  • It must be clean, consistent, and in the right format
  • New columns may be needed to support business logic
  • Transformations ensure data is reliable and analysis-ready

For example:

  • Converting text dates into true Date types
  • Extracting parts of a string (e.g., Year from a date)
  • Splitting a full name into first and last names
  • Normalizing inconsistent text values

These are not just useful—they are often necessary for accurate DAX measures and reporting.


Where Column Transformations Happen

Most column creation and transformation tasks happen in Power Query Editor (before the data loads into the model).

Key places include:

  • Transform tab
  • Add Column tab
  • Applied Steps pane
  • Advanced Editor (for M code)

Power BI also allows column creation after loading the data through:

  • DAX Calculated Columns (in the data model)

The exam may present scenarios where you choose which tool (Power Query vs DAX) to use.


Common Column Transformations

Here are the main categories of column operations you should be ready to apply:


1. Basic Transformations

These change existing columns:

  • Rename columns
  • Change data types
  • Trim, clean, or format text
  • Replace values
  • Remove columns

These are the bread-and-butter tasks that clean and standardize data.


2. Splitting and Merging Columns

When data is combined within one field:

  • Split Column (by delimiter or number of characters)
    • Example: Split Full Name into First Name and Last Name
  • Merge Columns
    • Example: Combine City and State into a single location field

This is essential when data needs to be restructured for modeling.
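
In M, these operations become Table.SplitColumn and Table.CombineColumns steps. A minimal sketch, with invented names and sample rows:

    let
        Source = #table(
            type table [#"Full Name" = text, City = text, State = text],
            {{"Ada Lovelace", "Austin", "TX"}, {"Alan Turing", "Boston", "MA"}}
        ),
        // Split "Full Name" on the space into two new text columns
        SplitName = Table.SplitColumn(
            Source, "Full Name",
            Splitter.SplitTextByDelimiter(" "),
            {"First Name", "Last Name"}
        ),
        // Merge City and State back into a single "Location" column
        Merged = Table.CombineColumns(
            SplitName, {"City", "State"},
            Combiner.CombineTextByDelimiter(", "),
            "Location"
        )
    in
        Merged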


3. Extracting Components

Examples include:

  • Extracting Year, Month, or Day from a Date column
  • Taking the first/last characters from a text string
  • Extracting text before or after a specific character

These operations prepare granular fields needed for grouping or calculations.


4. Calculations Using “Add Column”

You can create derived columns based on logic:

  • Custom Columns (via M formulas)
  • Conditional Columns
    • Example: Flag High Value sales where sales > $1,000
  • Index Columns
    • Useful for row ordering

These columns often support business metrics or classifications.


Text Transformations

Text columns commonly require cleaning and standardization:

  • Uppercase / Lowercase
  • Trim (removes leading/trailing spaces)
  • Clean (removes non-printable characters)
  • Replace Values (e.g., “N/A” → null)

The exam often tests whether you know how to fix inconsistent text data.
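
A rough sketch of these fixes in M, assuming an invented Country column: Trim, Clean, and Uppercase applied through Table.TransformColumns, followed by a Replace Values step that turns the text placeholder "N/A" into a true null:

    let
        Source = #table(type table [Country = text], {{"  usa "}, {"N/A"}, {"Usa"}}),
        // Trim spaces, strip non-printable characters, then standardize casing
        Cleaned = Table.TransformColumns(
            Source,
            {{"Country", each Text.Upper(Text.Clean(Text.Trim(_))), type text}}
        ),
        // Replace the "N/A" placeholder with a true null
        Replaced = Table.ReplaceValue(Cleaned, "N/A", null, Replacer.ReplaceValue, {"Country"})
    in
        Replaced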


Date and Time Transformations

Working with dates is core to analysis:

  • Change text to date/time type
  • Extract Year, Quarter, Month, Day
  • Add custom time intelligence columns
  • Use locale conversion for date parsing

This enables time-based grouping and accurate measures like YTD (Year-to-Date).
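
As a sketch (assuming a text column named OrderDate holding ISO-formatted dates), converting to a true Date type and extracting Year and Month looks like this:

    let
        Source = #table(type table [OrderDate = text], {{"2024-03-15"}, {"2024-11-02"}}),
        // Convert the text column to a real Date type
        Typed = Table.TransformColumnTypes(Source, {{"OrderDate", type date}}),
        // Extract components for grouping and time-based analysis
        WithYear = Table.AddColumn(Typed, "Year", each Date.Year([OrderDate]), Int64.Type),
        WithMonth = Table.AddColumn(WithYear, "Month", each Date.Month([OrderDate]), Int64.Type)
    in
        WithMonth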


Conditional and Custom Columns

Conditional Columns

  • Created through UI (Add Column → Conditional Column)
  • Define logic visually (e.g., if Sales > 500 then "High" else "Low")

Custom Columns

  • Created using Power Query M code
  • More advanced logic and functions

Both are useful depending on the complexity of your requirement. Exam questions often compare these approaches.
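
Both end up as Table.AddColumn steps in M. A small sketch, using an invented Sales column and the 500 threshold from the example above:

    let
        Source = #table(type table [Sales = number], {{750}, {120}}),
        // Conditional Column (UI-generated logic): if Sales > 500 then "High" else "Low"
        WithFlag = Table.AddColumn(Source, "Sales Band",
            each if [Sales] > 500 then "High" else "Low", type text),
        // Custom Column (hand-written M): a derived numeric value
        WithCommission = Table.AddColumn(WithFlag, "Commission",
            each [Sales] * 0.05, type number)
    in
        WithCommission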


Column Transformations vs DAX Calculated Columns

Power Query Column

  • Transformation occurs before data loads into model
  • Changes physical data shape
  • Useful for cleaning and structuring data

DAX Calculated Column

  • Created after data loads into the model
  • Evaluated per row in the model
  • Useful for measures and relationships tied to data model context

Exam insight:
Use Power Query transformations for structural cleanup. Use DAX calculated columns when the logic depends on model relationships or evaluation context.


Best Practices for the Exam

  • Clean data before creating columns.
    Don’t derive new data from dirty input.
  • Apply the right transformation tool.
    Power Query for structural cleanup; DAX for model-aware calculations.
  • Name columns clearly.
    Report consumers and measures depend on intuitive names.
  • Avoid unnecessary columns.
    Only keep what’s needed for reporting to improve model performance.
  • Group related transformations logically.
    Use Query folding where possible (especially for large datasets).

How This Appears on the PL-300 Exam

You might see scenarios like:

You need to split a full address column into street, city, and postal code for better filtering. Which transformation should you use?

This tests:

  • Knowledge of Split Column
  • When to apply it
  • How to maintain data type integrity afterward

Or:

Your date column is text and not aggregating correctly. What do you do?

This tests:

  • Understanding of data types
  • Ability to convert to proper Date/Time

Most questions are scenario-based, requiring both decision and action reasoning.


Key Takeaways

✔ Column transformations are a core part of shaping data
✔ Power Query is the primary environment for creating and transforming columns
✔ Use Add Column for new fields and Transform for modifying existing fields
✔ Know the difference between Power Query and DAX calculated columns
✔ Common transformations include text, date, splitting/merging, conditional logic, and custom formulas


Practice Questions

Go to the Practice Exam Questions for this topic.

Select Appropriate Column Data Types (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Select Appropriate Column Data Types


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Selecting the correct column data types is a foundational step in preparing data for analysis in Power BI. In the PL-300 exam, Microsoft evaluates your ability to choose, validate, and apply appropriate data types to ensure accurate calculations, efficient models, and reliable visuals.

Although Power BI often detects data types automatically, exam scenarios frequently test when automatic detection is wrong—and how to fix it.


Why Column Data Types Matter

Correct data types directly impact:

  • Aggregation behavior (sum, average, count)
  • Filtering and sorting
  • Relationship creation
  • Model performance and storage
  • DAX calculations and measures
  • Visual behavior in reports

An incorrect data type can silently produce wrong results without throwing an obvious error—something the exam loves to test.


Where Data Types Are Set

Column data types are primarily managed in Power Query Editor, not in the report or model view.

You can set or change data types by:

  • Using the Data Type dropdown in the column header
  • Using Transform → Data Type
  • Using Transform → Using Locale
  • Editing the Changed Type step in Applied Steps

Exam Tip: Power BI automatically inserts a Changed Type step—know when to keep it, modify it, or remove it.
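
The Changed Type step is simply a Table.TransformColumnTypes call that you can edit. A sketch with invented columns, keeping an ID as Text, setting an amount to Fixed Decimal (Currency.Type), and converting a date:

    let
        Source = #table(
            type table [CustomerID = text, Amount = text, OrderDate = text],
            {{"00042", "19.99", "2024-05-01"}}
        ),
        // Keep the ID as Text (leading zeros preserved), make Amount Fixed Decimal, OrderDate a Date
        ChangedType = Table.TransformColumnTypes(
            Source,
            {{"CustomerID", type text}, {"Amount", Currency.Type}, {"OrderDate", type date}}
        )
    in
        ChangedType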


Common Column Data Types in Power BI

Numeric Types

  • Whole Number – IDs, counts, quantities
  • Decimal Number – Currency, percentages, measurements
  • Fixed Decimal Number – Financial data requiring precision

Text

  • Used for names, descriptions, categories, codes
  • Avoid using text for numeric or date data whenever possible

Date and Time Types

  • Date
  • Time
  • Date/Time
  • Date/Time/Timezone

Correct date types enable:

  • Time intelligence in DAX
  • Date hierarchies
  • Accurate filtering and grouping

Boolean

  • True/False values
  • Useful for flags and status indicators

Choosing the Correct Data Type (Exam Focus)

Numbers vs Text

A column containing numeric-looking values (e.g., "1001", "1002") might actually represent:

  • An ID → should remain Text, especially if values may contain leading zeros now or in the future
  • A measure → should be Whole or Decimal Number

Exam Insight: If you keep IDs numeric, confirm they contain no leading zeros or decimals and that they are not inadvertently summarized or used in calculations. Numeric IDs perform better, but storing them as Text is a valid option when the scenario requires it.


Dates Stored as Text

Dates often import as text due to:

  • Regional formats
  • Inconsistent source systems

Correct approach:

  • Clean the text first
  • Convert using Change Type

Avoid using text dates in the model—this breaks time intelligence.


Decimal vs Fixed Decimal

  • Decimal Number: More flexible, faster
  • Fixed Decimal Number: Better for financial accuracy

Exam Insight: Use Fixed Decimal when precision matters (e.g., currency).


Avoiding Common Data Type Mistakes

❌ Converting Too Early

Changing data types before cleaning:

  • Can cause errors
  • Makes transformations fail

Best practice: Clean first, then convert.


❌ Using Text Instead of Numeric

Numeric data stored as text:

  • Cannot be aggregated
  • Breaks calculations
  • Causes visuals to behave unexpectedly

❌ Incorrect Date Types

Using Date/Time when only Date is needed:

  • Increases model size
  • Causes grouping issues

Data Types and Relationships

For relationships to work:

  • Data types must match
  • Text ↔ Text
  • Number ↔ Number
  • Date ↔ Date

If data types don’t match:

  • Relationships cannot be created
  • Merge queries may fail
  • Filter propagation breaks

Using “Change Type with Locale”

This is especially important for:

  • International datasets
  • CSV files
  • Date formats like DD/MM/YYYY

Why it matters for the exam:
Microsoft frequently includes scenarios where date conversion fails due to regional formatting.
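
Using Locale just adds a culture argument to the type-change step. A minimal sketch, assuming UK-style DD/MM/YYYY text dates:

    let
        Source = #table(type table [OrderDate = text], {{"31/01/2024"}, {"05/02/2024"}}),
        // The third argument tells Power Query which regional format to parse with
        TypedWithLocale = Table.TransformColumnTypes(Source, {{"OrderDate", type date}}, "en-GB")
    in
        TypedWithLocale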


Verifying Data Types Before Load

Before loading data into the model:

  • Review all Changed Type steps
  • Confirm numeric columns aggregate correctly
  • Confirm date columns create hierarchies
  • Ensure IDs are not numeric by mistake

Best Practices for PL-300 Candidates

  • Always review auto-detected data types
  • Treat IDs and codes as text
  • Use Fixed Decimal for financial data
  • Convert data types after cleaning
  • Match data types before relationships or merges
  • Understand when to use locale-based conversion

How This Appears on the PL-300 Exam

Expect questions that ask you to:

  • Choose the correct data type for a scenario
  • Identify problems caused by incorrect data types
  • Fix failed merges or relationships
  • Resolve aggregation or date intelligence issues

These questions are often subtle—the data loads successfully, but behaves incorrectly.


Key Takeaways

  • Selecting appropriate data types is essential for correct analysis
  • Automatic detection is helpful but not foolproof
  • Power Query is the correct place to manage data types
  • Understanding why a type is wrong is more important than memorizing steps

Practice Questions

Go to the Practice Exam Questions for this topic.

Resolve Data Import Errors (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Profile and clean the data
--> Resolve data import errors


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub's main page.

Data import errors are a common issue when bringing data into Power BI. These errors typically arise during the Power Query stage and must be resolved before data can be successfully loaded into the data model. The PL-300 exam tests your ability to identify, interpret, and fix these errors using Power Query’s built-in tools and transformations.


What Are Data Import Errors?

Import errors occur when Power BI cannot process or convert incoming data as expected. These errors can arise from:

  • Invalid data formats
  • Incompatible data types
  • Data corruption
  • Unexpected null or missing values
  • Transformation steps that fail

Identifying and resolving these errors early ensures that your dataset is clean, consistent, and ready for modeling and reporting.


Where Import Errors Occur

Import errors are most commonly encountered:

🧩 During Data Type Conversion

When the source value cannot be converted to the target type
(e.g., text "N/A" converted to number)

🧩 In Applied Steps

If a transformation step references a column that doesn’t exist
or expects a format that isn’t present

🧩 While Combining Queries

When merging or appending tables with mismatched structures

🧩 When Parsing Complex Formats

Such as dates in nonstandard formats or malformed JSON


How Power BI Signals Import Errors

In Power Query Editor, import errors are typically shown as:

  • Error icons in the preview cells
  • A warning message in the query results (“Error” link)
  • Red dotted underlines or warnings in applied steps
  • The “Load failed” message when refreshing

The first step in resolving errors is to examine the error details.


Viewing Error Details

When an error appears in Power Query:

  1. Click the Error indicator in the cell or
  2. Use View → Column quality / Column profile

You can also isolate rows containing errors using Home → Keep Rows → Keep Errors.

Exam tip:
Power BI often shows technical error messages, so part of the task is interpreting what the underlying issue is (e.g., type mismatch, invalid format, null where not expected).


Common Import Errors & How to Fix Them

1. Type Conversion Errors

Scenario: A column expected to be numeric contains text such as "Unknown".

Fix Options:

  • Use Replace Errors to substitute a default value
  • Use Replace Values to convert specific text to numeric (e.g., "Unknown" → 0)
  • Adjust data type after cleaning

Key Idea: Always fix the root cause before changing the data type.
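
A sketch of the clean-then-convert order in M, using the "Unknown" placeholder from the scenario above (column name invented), with Replace Errors as a safety net for any values that still fail conversion:

    let
        Source = #table(type table [Quantity = text], {{"12"}, {"Unknown"}, {"7"}}),
        // Fix the root cause first: map the placeholder text to a usable value
        ReplacedValues = Table.ReplaceValue(Source, "Unknown", "0", Replacer.ReplaceValue, {"Quantity"}),
        // Now the type change succeeds
        Typed = Table.TransformColumnTypes(ReplacedValues, {{"Quantity", Int64.Type}}),
        // Safety net: replace any remaining conversion errors with null instead of failing the load
        NoErrors = Table.ReplaceErrorValues(Typed, {{"Quantity", null}})
    in
        NoErrors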


2. Unexpected Null Values

Scenario: A key column has nulls where values are required, causing subsequent transformations to fail.

Fix Options:

  • Replace nulls with default values via Replace Values
  • Remove rows where the column is null
  • Use conditional logic (Add Column → Conditional Column) to handle nulls appropriately

Key Idea: Nulls can break transformations (like merges) if not handled first.


3. Transformation Step Errors

Scenario: A transformation step refers to a column removed or renamed earlier in the applied steps.

Fix Options:

  • Review and reorder steps in the APPLIED STEPS pane
  • Rename the column consistently before referencing it
  • Delete the problematic step and reapply it correctly

Key Idea: Power BI applies steps sequentially. A downstream step can fail if an upstream change invalidates assumptions.


4. Merge/Append Structure Errors

Scenario: You merge or append tables that don’t share compatible column structures (e.g., mismatched data types).

Fix Options:

  • Ensure columns used for the merge or join have identical data types
  • Rename or reorder columns to match structures
  • Preclean individual tables before combining

Key Idea: Always validate structure and types before merging or appending tables.


5. Parsing & Date Format Errors

Scenario: Date values import as text due to regional format differences (MM/DD/YYYY vs DD/MM/YYYY).

Fix Options:

  • Change the column data type to Date after validating format
  • Use Transform → Using Locale to define the correct regional format
  • Use Custom Columns to parse dates manually with Date.FromText

Key Idea: Locale-aware parsing helps resolve ambiguous date formats.


Tools to Help Diagnose Import Errors

Power BI provides several tools to help you locate and fix import errors:

🔍 Error Filtering

Filter columns to show only error rows.

📊 Column Quality / Distribution / Profile

Use profiling tools to identify patterns, nulls, and anomalies.

🧠 Step Validation

Hover over each Applied Step to see whether it is valid or failing.

📝 Advanced Editor

Review M code for logic errors or incorrect references.


Best Practices for Fixing Import Errors

1. Clean Before Converting Types
Always fix textual anomalies and nulls before assigning data types.

2. Avoid Hard-Coding Values
Replace problematic values using conditional logic or parameters for maintenance.

3. Inspect Impact of Each Step
Use the Applied Steps pane to ensure each transformation is valid.

4. Test Incrementally
Fix errors one at a time and refresh often to confirm success.

5. Document Assumptions
Add comments or descriptive step names to make logic clearer.


How This Appears on the PL-300 Exam

The exam commonly tests your ability to:

✔ Identify why a query fails (type mismatch, nulls, missing column)
✔ Choose the correct sequence to fix the issue
✔ Understand the difference between Replace Errors and Remove Errors
✔ Apply transformations in the correct order (clean → convert → transform)

Most questions are scenario-based, asking what action you would take next to successfully import data.


Key Exam Takeaways

  • Import errors can be caused by data type mismatches, unexpected nulls, invalid formats, and broken transformation steps.
  • Use Power Query tools to diagnose and resolve errors before loading data into the model.
  • Always understand the root cause before applying a fix.
  • Knowing how to use Replace Errors, Replace Values, Conditional Columns, and Data Type changes is essential.

Practice Questions

Go to the Practice Exam Questions for this topic.

Resolve inconsistencies, unexpected or null values, and data quality issues (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Profile and clean the data
--> Resolve inconsistencies, unexpected or null values, and data quality issues


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub's main page.

High-quality data is essential for accurate analysis and trustworthy reports. In the PL-300 exam, Microsoft expects candidates to understand how to identify and resolve common data quality problems using Power Query before data is loaded into the model.

This section focuses on handling inconsistencies, unexpected values, nulls, and errors—all of which can negatively impact calculations, relationships, and visuals if left unresolved.


Why This Topic Matters for the Exam

From an exam perspective, this topic tests your ability to:

  • Diagnose data quality problems using profiling tools
  • Choose the correct transformation to fix an issue
  • Understand when to remove, replace, or transform data
  • Prevent downstream modeling and reporting issues

Most questions are scenario-based, asking what action you should take to fix a specific data issue.


Common Data Quality Issues You Must Recognize

1. Null (Blank) Values

Nulls represent missing or unknown data and can cause:

  • Incorrect aggregations
  • Broken relationships
  • Visuals that behave unexpectedly

Common causes:

  • Incomplete source data
  • Left joins with no matching rows
  • Data entry gaps

2. Unexpected or Invalid Values

These include:

  • Negative values where only positives make sense
  • Text values in numeric columns
  • Dates outside expected ranges
  • Misspelled or inconsistent category names

3. Inconsistent Data

Inconsistencies often appear as:

  • Mixed casing (USA vs usa)
  • Trailing or leading spaces
  • Multiple spellings for the same value
  • Different date or number formats

4. Error Values

Errors usually occur when:

  • Converting data types
  • Performing calculations
  • Parsing malformed data

Examples include:

  • Conversion failed
  • Divide by zero
  • Invalid date format

Identifying Data Quality Issues in Power Query

Power Query provides built-in data profiling tools to quickly detect problems:

Column Quality

  • Shows percentages of Valid, Error, and Empty values
  • Ideal for spotting nulls and errors

Column Distribution

  • Displays value frequency and distinct counts
  • Helps identify unexpected or inconsistent values

Column Profile

  • Provides min, max, average, and other statistics
  • Useful for detecting outliers and invalid ranges

Exam Tip: Profiling tools only analyze a sample by default. You may need to enable “Column profiling based on entire dataset” for accuracy.


Techniques to Resolve Null Values

Remove Rows

  • Used when nulls make a record unusable
  • Common for missing primary keys or required fields

Replace Values

  • Replace nulls with:
    • 0 (for numeric measures)
    • “Unknown” or “Not Provided” (for text)
    • A default date

Fill Down / Fill Up

  • Used for hierarchical or grouped data
  • Common in spreadsheets with merged cells

Exam Insight: Replacing nulls should be a business-justified decision, not automatic.
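
A hedged sketch of the three approaches, using invented OrderID, Region, and Sales columns: remove rows with a null key, replace null measures with 0, and fill a grouped label downward:

    let
        Source = #table(
            type table [OrderID = nullable text, Region = nullable text, Sales = nullable number],
            {{"A1", "West", 100}, {null, "West", 50}, {"A3", null, null}}
        ),
        // Remove rows where the required key is missing
        KeptRows = Table.SelectRows(Source, each [OrderID] <> null),
        // Replace null measures with 0 (only if the business agrees null means zero)
        ReplacedSales = Table.ReplaceValue(KeptRows, null, 0, Replacer.ReplaceValue, {"Sales"}),
        // Fill a grouped or merged-cell style label downward
        Filled = Table.FillDown(ReplacedSales, {"Region"})
    in
        Filled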


Resolving Inconsistencies

Standardizing Text

  • Use Transform → Format:
    • Uppercase
    • Lowercase
    • Capitalize Each Word

Trimming and Cleaning

  • Trim removes leading/trailing spaces
  • Clean removes non-printable characters

Replacing Values

  • Normalize spelling differences (e.g., “US”, “USA”, “United States”)

Handling Unexpected or Invalid Values

Filtering

  • Remove values outside acceptable ranges
  • Exclude invalid categories

Conditional Columns

  • Create logic to flag or correct invalid data
  • Example: Replace negative sales with null or zero

Data Type Corrections

  • Ensure columns use appropriate data types
  • Prevents aggregation and calculation errors later

Fixing Error Values

Replace Errors

  • Replace with null or a default value

Remove Errors

  • Used when rows are unreliable

Fix the Root Cause

  • Change transformation order
  • Adjust data type conversion
  • Clean data before applying calculations

Exam Tip: Microsoft often tests whether you know why an error occurs, not just how to remove it.


Best Practices for PL-300 Candidates

  • Always profile before transforming
  • Fix issues in Power Query, not DAX, when possible
  • Understand the impact of removing vs replacing data
  • Keep transformations repeatable and documented
  • Prefer clean data models over complex report logic

Key Takeaways for the Exam

You should be able to:

  • Identify different types of data quality issues
  • Choose the correct Power Query tool to resolve them
  • Understand the downstream impact on models and visuals
  • Interpret profiling results correctly

Mastering this topic ensures cleaner datasets, better models, and fewer surprises during analysis—exactly what the PL-300 exam is designed to validate.


Practice Questions

Go to the Practice Exam Questions for this topic.

Evaluate Data including Data Statistics & Column Properties (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Profile and clean the data
--> Evaluate data, including data statistics and column properties


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Before cleaning, transforming, or modeling data, a Power BI Data Analyst must first evaluate the quality and structure of the data. The PL-300 exam tests your ability to profile data, interpret data statistics, and understand column properties to identify issues such as missing values, incorrect data types, outliers, and inconsistent formats.

This topic lives under Profile and clean the data because effective data preparation starts with understanding what the data looks like and how it behaves.


What Does “Evaluate Data” Mean in Power BI?

Evaluating data means using Power BI (specifically Power Query) to:

  • Understand data distribution and completeness
  • Identify data quality issues
  • Verify correct data types and formats
  • Decide what cleaning or transformation steps are required

Rather than guessing, Power BI provides built-in profiling tools that summarize data characteristics automatically.


Data Profiling Tools in Power Query

Power BI includes several profiling features that appear in the Power Query Editor, primarily within the View tab.

Key Data Profiling Options

  • Column quality
  • Column distribution
  • Column profile

These tools help you quickly assess whether a column is usable, trustworthy, and correctly defined.


Column Quality

Column quality provides a high-level overview of data completeness and validity.

It visually displays:

  • Valid values
  • Error values
  • Empty (null) values

Why Column Quality Matters

  • Quickly highlights missing or broken data
  • Helps determine whether rows should be filtered, fixed, or removed
  • Useful for early detection of refresh or ingestion issues

📌 Exam insight:
Questions often test whether you can identify which tool reveals missing or invalid values—column quality is the answer.


Column Distribution

Column distribution shows how values are spread across a column.

It provides:

  • Frequency of values
  • Distinct vs unique counts
  • A histogram-style visualization (for numeric fields)

Common Uses

  • Spotting unexpected duplicates
  • Identifying skewed data
  • Detecting outliers
  • Validating categorical values

📌 Exam insight:
Column distribution is used to understand value frequency, not just nulls or errors.


Column Profile

Column profile gives the most detailed statistical view of a column.

Depending on the data type, it may include:

  • Minimum and maximum values
  • Average
  • Standard deviation
  • Count and distinct count
  • Null count

Typical Use Cases

  • Verifying numeric ranges (e.g., negative values where none should exist)
  • Checking date ranges
  • Understanding overall data shape before modeling

📌 Exam insight:
Column profile helps validate statistical characteristics, not formatting or naming.
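
If you want the same kind of statistics as a query rather than a View-tab toggle, M's Table.Profile function returns one row per column with Min, Max, Average, StandardDeviation, Count, NullCount, and DistinctCount. A minimal sketch with invented sample data (the numeric statistics stay blank for text columns):

    let
        Source = #table(
            type table [Sales = number, Category = text],
            {{120, "Bikes"}, {null, "Helmets"}, {-30, "Bikes"}}
        ),
        // One row per source column with its summary statistics
        Stats = Table.Profile(Source)
    in
        Stats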


Understanding Column Properties

Beyond statistics, Power BI also evaluates column properties, which affect how data behaves in the model and visuals.

Key Column Properties to Evaluate

Data Type

Examples:

  • Whole number
  • Decimal number
  • Text
  • Date / DateTime
  • Boolean

Incorrect data types can:

  • Break visuals
  • Prevent aggregations
  • Cause relationship issues

📌 Exam tip:
Always verify data types before applying transformations or creating measures.


Format

Controls how values appear (e.g., currency, percentage, date format).

  • Affects display, not calculation logic
  • Often adjusted after validating data type

Default Summarization

Determines how numeric columns aggregate in visuals:

  • Sum
  • Average
  • Count
  • Do not summarize

📌 Exam insight:
Default summarization is evaluated when deciding how columns behave in visuals—not during Power Query transformations.


Column Name & Description

  • Clear names improve usability
  • Descriptions help report consumers understand the data

While not deeply technical, the exam may include best-practice questions around data clarity and usability.


Evaluating Data at the Right Stage

Most evaluation tasks occur in Power Query, before data is loaded into the model.

Why?

  • Faster detection of issues
  • Prevents poor-quality data from entering the model
  • Reduces downstream modeling complexity

📌 Key distinction for the exam:

  • Power Query → data evaluation & cleaning
  • Model view → relationships & behavior
  • Report view → visualization

Common Exam Scenarios

You may encounter questions like:

Scenario 1

You need to quickly identify columns with missing or invalid values.

Correct concept: Column quality


Scenario 2

You want to understand how frequently values appear in a categorical column.

Correct concept: Column distribution


Scenario 3

You need to verify numeric ranges and detect outliers.

Correct concept: Column profile


Scenario 4

A numeric column is being treated as text and cannot be aggregated.

Correct concept: Incorrect data type (column property)


Best Practices to Remember

  • Enable profiling tools early in data preparation
  • Validate data types before transformations
  • Use statistics to guide cleaning decisions
  • Don’t rely on visuals alone to detect data quality issues

Key Exam Takeaways

For the PL-300 exam, remember:

  • Column quality → valid, error, and null values
  • Column distribution → frequency and distinct values
  • Column profile → statistical insights
  • Column properties affect aggregation, relationships, and visuals
  • Data evaluation happens primarily in Power Query

Understanding how to interpret what Power BI is telling you about your data is just as important as knowing how to clean it.


Practice Questions

Go to the Practice Exam Questions for this topic.

Create and Modify Parameters (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Get or connect to data
--> Create and Modify Parameters


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

As you prepare for the PL-300: Microsoft Power BI Data Analyst exam, one important skill to master is using parameters in Power Query to build flexible, reusable, and dynamic data connections and transformations. Parameters help you abstract static values—such as server names, file paths, or filter values—and manage them in a centralized way.


What Are Parameters in Power BI?

A parameter is a named value that you can use in Power Query to influence how queries run. Instead of hard-coding values directly into your query steps, a parameter lets you declare the value once and reference it throughout your queries so you can:

  • Change values globally without editing each query step.
  • Switch between environments (Dev, Test, Prod) easily.
  • Apply consistent filters or thresholds.
  • Support dynamic filtering and modular query design.

In many ways, parameters act like variables in the Power Query (M) engine—reusable building blocks you can plug into various parts of your query logic.


Where You Use Parameters

Parameters are especially useful when connecting and shaping data:

At the data source level

  • Define the server, database, folder, or file path.
  • Change the current environment without rewriting source steps.

In query logic

  • Filter tables based on parameter values.
  • Customize operations like date range filters or limit values.

For reusability

  • Use the same parameter value across multiple queries, reducing maintenance.
  • Make Power BI solutions easier to support if data locations change.

How to Create Parameters

In Power BI Desktop:

  1. Go to Home → Transform data to open the Power Query Editor.
  2. In the Power Query Editor, choose Manage Parameters → New Parameter.
  3. In the Manage Parameters dialog, provide:
    • Name (meaningful identifier for the parameter)
    • Description (optional, useful for documentation)
    • Required (whether a value must be entered)
    • Type (Text, Number, Date, etc.)
    • Suggested values (optional: helps users pick from list, default, etc.)
    • Current value (the value Power Query uses when the parameter is applied)

Once created, the parameter appears in Power Query as its own query and can be referenced in other queries.


Example Uses of Parameters

Here are typical scenarios where parameters are valuable:

1. Dynamic Data Source Connection

If a report needs to point to Development in one workflow and Production in another, you can create parameters for server name and database name, then reference them in the data source step.

This avoids hard-coding connection strings and simplifies environment switches without editing M code.
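
A sketch of what that source step can look like in M, assuming two required Text parameters named ServerName and DatabaseName and a SQL Server source; the server, database, and table names below are placeholders:

    // A parameter is itself a small query with metadata, e.g. (values illustrative):
    //   ServerName   = "dev-sql01" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]
    //   DatabaseName = "SalesDW"   meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]
    let
        // The source step references the parameters instead of hard-coded strings
        Source = Sql.Database(ServerName, DatabaseName),
        // Navigate to an illustrative table
        Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data]
    in
        Sales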


2. Reusable Filters

Suppose you want to filter a sales table by a minimum sales amount that might change often. Instead of editing the filter step directly each time, you use a parameter called MinSalesThreshold. If business needs change, you update the parameter once, and all queries referencing it update accordingly.


3. Parameter-Driven Queries

You can embed a parameter into custom functions or use it directly in the M query logic (e.g., in Table.SelectRows), making your query logic adaptable without editing M code every time.
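
For example, a minimal sketch assuming a Number parameter named MinSalesThreshold (current value 500) and an invented Sales table:

    let
        Sales = #table(
            type table [Customer = text, SalesAmount = number],
            {{"Alice", 1200}, {"Bob", 300}, {"Cara", 650}}
        ),
        // Rows are kept or dropped based on the parameter's current value
        Filtered = Table.SelectRows(Sales, each [SalesAmount] >= MinSalesThreshold)
    in
        Filtered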


Modifying Parameters

Once a parameter is created:

  • Return to Manage Parameters in Power Query Editor to change metadata (name, description, type).
  • Update the Current Value to change how queries refresh with different values.
  • If you change suggested values, you may also redefine the list of allowed inputs.

Changing a parameter’s current value typically triggers a refresh of any queries that reference it, applying the new logic or sources immediately.


Best Practices for Parameters

Use clear names and descriptions
Someone else reviewing your model should immediately understand the purpose of each parameter.

Keep parameter types consistent
Ensure the data type matches how you intend to use it in queries to avoid type mismatch errors.

Leverage suggested values
If a parameter is intended to allow only certain options (e.g., environment names), defining suggested values improves usability and reduces errors.

Document your parameters
Include descriptions so others know what each parameter does and why it exists.


How This Appears on the PL-300 Exam

You may see exam scenarios such as:

  • “You need to allow end users to easily switch data sources without editing M code.”
  • “Modify a query so that multiple environments use the same report with different connection settings.”
  • “Filter the data using a parameter and explain how the data refresh behaves.”

To answer confidently:

  • Know how to create and configure parameters.
  • Understand how parameters change the behavior of queries and filtering.
  • Recognize when parameters simplify maintenance and governance.

This skill not only helps you build robust and flexible Power BI solutions but also aligns with the Prepare the data objectives in the PL-300 exam.


Practice Questions

Go to the Practice Exam Questions for this topic.

Choose Between DirectQuery and Import (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Get or connect to data
--> Choose Between DirectQuery and Import


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

One of the critical decisions a Power BI Data Analyst makes is how Power BI retrieves and interacts with data. The choice between DirectQuery and Import affects performance, modeling flexibility, scalability, and how data refresh is managed. This topic is core to the Get or connect to data objective and frequently appears in scenario-based exam questions.


What Are Import and DirectQuery?

Before comparing them, it’s important to understand what each mode does:

  • Import: Loads data from a data source into the Power BI dataset’s in-memory storage. Once imported, Power BI interacts with the data locally and refreshes it periodically.
  • DirectQuery: Does not store data in Power BI. Instead, it queries the source system in real time whenever the report needs data.

Import vs. DirectQuery – Conceptual Overview

Here’s the high-level conceptual difference:

Aspect | Import | DirectQuery
Where data is stored | In Power BI’s in-memory engine | In the source system
Performance | Very fast (in-memory) | Depends on source performance
Data freshness | Requires scheduled refresh | Always up-to-date at query time
Modeling flexibility | Full | Limited
Query folding benefit | Yes | Limited (source dependent)
Refresh required | Yes | No
Suitable for large tables? | Depends on size | Yes
Depends on gateway? | Only for on-premises sources | Yes, for on-premises sources

When to Choose Import

Choose Import when:

🟢 Performance and Interactivity Matter

Because data is loaded into memory, visuals and filters respond very quickly, which is ideal for dashboards and interactive exploration.

🟢 Full Modeling Flexibility Is Needed

Import supports:

  • Complex DAX
  • Calculated tables & columns
  • Relationship editing
  • Role-level security

This makes it the most flexible and commonly used mode.

🟢 Data Volume is Manageable

For moderately sized datasets that fit comfortably in memory, Import is typically preferred.

Example Scenario

A finance team needs a dashboard that refreshes nightly and delivers fast slicing and dicing of historical sales data.

In this case, Import is the likely choice.


When to Choose DirectQuery

Opt for DirectQuery when:

🔵 Data Must Stay at Source

Some data may be too large or sensitive to copy into Power BI. With DirectQuery, data remains on the original system.

🔵 Real-Time or Near-Real-Time Data is Required

If reports must reflect the most current operational data instantly, DirectQuery ensures up-to-date results.

🔵 Data Volume Is Massive

For very large tables (e.g., billions of rows), Import is impractical. DirectQuery avoids memory constraints by querying only needed data.

🔵 Source System Can Handle Query Load

DirectQuery performance is heavily dependent on the source system’s ability to process queries quickly.

Example Scenario

A company needs a report showing up-to-the-second inventory levels from an operational database.

In this case, DirectQuery is a better fit.


Trade-Offs: What You Lose With DirectQuery

While DirectQuery offers real-time access and handles large data without importing, it has limitations:

Limitation | DirectQuery Impact
Modeling flexibility | Reduced (no calculated tables; limited DAX)
Performance | Depends on underlying source and network
Query load | Heavy impact on source system if not optimized
Transformation options | Limited Power Query support

Because of these trade-offs, exam scenarios often ask you to weigh performance vs. flexibility vs. freshness.


Composite Models: Best of Both Worlds

Power BI supports composite models, which allow mixing Import and DirectQuery within the same dataset.

Use cases:

  • Import smaller, static reference tables
  • DirectQuery larger, frequently updated fact tables

Composite models provide:

  • Query optimization
  • Flexibility across mixed scenarios

Exam questions may include composite model scenarios, so understanding both modes is helpful.


Decision Criteria (Exam-Ready Framework)

When a question asks “Should you use DirectQuery or Import?”, use this decision checklist:

  1. How current must the data be?
    • Real-time → DirectQuery
    • Static or refreshed periodically → Import
  2. Can the source handle query load?
    • Yes → DirectQuery
    • No → Import
  3. Is high performance critical?
    • Yes and data is manageable → Import
  4. Does modeling complexity matter?
    • Yes → Import
    • Minimal modeling needed → DirectQuery
  5. Is dataset very large?
    • Yes → DirectQuery
    • No → Import

Typical Exam Scenarios

Here are some realistic patterns that may appear in exam questions:

📌 Scenario: Fast Interactive Reporting

Data refreshes once per day, users need fast slicing.
Answer: Import.

📌 Scenario: Very Large Operational Tables

Real-time insight into an operational system required.
Answer: DirectQuery.

📌 Scenario: Need Most Recent Data Without Refresh Schedule

Source changes constantly; data must reflect current state.
Answer: DirectQuery.

📌 Scenario: Mixed Requirements

A small lookup table and a large facts table.
Answer: Composite model (Import + DirectQuery).


Pitfalls and How to Avoid Them

❌ Mistake: Choosing DirectQuery “just in case”

DirectQuery reduces modeling capability and can slow performance if the source isn’t optimized.

Tip: Choose DirectQuery for specific reasons—not by default.

❌ Mistake: Ignoring refresh schedule

Import requires scheduled refreshes. Forgetting this can lead to stale visuals.

Tip: Always confirm refresh requirements before choosing Import.


Key Exam Takeaways

  • Import = data loaded into memory (fast & flexible)
  • DirectQuery = data queried live (real-time & scalable)
  • Composite models combine both
  • Use decision criteria based on performance, freshness, modeling, and source capability
  • Always consider source capacity and query load

Understanding why you choose one mode over the other is more important than memorizing UI steps—especially on the PL-300 exam.


Practice Questions

Go to the Practice Exam Questions for this topic.

Change Data Source Settings (including Credentials & Privacy Levels) (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Get or connect to data
--> Change Data Source Settings, including Credentials & Privacy Levels


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Managing data source settings is a core responsibility of a Power BI Data Analyst and a frequently tested skill on the PL-300: Microsoft Power BI Data Analyst exam. This topic focuses on understanding how Power BI connects to data, how credentials are stored and updated, and how privacy levels affect data combination and security.

On the exam, you’re rarely asked where to click. The emphasis will likely be on conceptual understanding, decision-making, and troubleshooting. You’re likely to be tested on things like what happens when credentials are wrong, why a refresh fails, or how privacy levels affect query behavior.


What Are Data Source Settings in Power BI?

Data source settings define how Power BI authenticates, accesses, and combines data from one or more sources. These settings exist at two primary levels:

  • Power BI Desktop (development and modeling)
  • Power BI Service (publishing, refresh, and sharing)

Key components include:

  • Authentication credentials
  • Privacy levels
  • Data source permissions
  • Refresh behavior

Managing Credentials

What Are Credentials?

Credentials determine who Power BI is when it connects to a data source. If credentials are invalid, expired, or mismatched between environments, refreshes will fail.

Common Authentication Types (Exam-Relevant)

  • Windows authentication
    • On-premises SQL Server
    • Often paired with gateways
  • Database authentication
    • SQL login stored in Power BI
    • Independent of user identity
  • Microsoft Entra ID (OAuth)
    • Cloud sources (Azure SQL, Power BI datasets)
    • Most common for Power BI Service
  • Anonymous
    • Public web data
    • Rare in enterprise scenarios

Where Credentials Are Managed

In Power BI Desktop

  • File → Options and settings → Data source settings
  • Credentials are stored locally
  • Used during development and testing

In Power BI Service

  • Managed per dataset
  • Required for scheduled refresh
  • Often paired with a gateway for on-premises data

📌 Exam tip:
Credentials in Desktop do not automatically carry over to the Power BI Service.


Updating or Changing Credentials

You may need to update credentials when:

  • Passwords expire
  • Data sources move environments
  • Ownership of a dataset changes
  • Refresh failures occur after publishing

On the exam, expect scenarios like:

“A dataset refresh fails after being published. It worked in Power BI Desktop.”

The likely solution:

  • Update credentials in Power BI Service
  • Configure or reconfigure the gateway

Understanding Privacy Levels

Privacy levels control how Power BI isolates data sources when combining data to prevent unintended data leakage.

The Three Privacy Levels

Public

  • Data is publicly available
  • Lowest level of isolation
  • Examples: public websites

Organizational

  • Internal company data
  • Can be shared within the organization
  • Common for corporate databases

Private

  • Highly sensitive data
  • Strict isolation
  • Cannot be freely combined with other sources

Why Privacy Levels Matter

When Power BI combines multiple data sources:

  • It enforces privacy isolation
  • It may block query folding
  • It may require additional processing

📌 Key exam concept:
Privacy levels affect data combination behavior, not just security.


Privacy Levels and Query Folding

Privacy levels can:

  • Prevent Power BI from pushing transformations back to the source
  • Force local evaluation of queries
  • Reduce performance

Example:

  • Combining a Private SQL database with a Public web source may prevent query folding.

📌 Exam takeaway:
Privacy settings can impact performance, not just access.


The “Ignore Privacy Levels” Setting

Power BI Desktop allows you to:

  • Ignore privacy levels (for development or testing)

⚠️ Important for the exam:

  • This setting does not apply in Power BI Service
  • Service refresh always enforces privacy levels

Common Exam Scenarios and What They’re Testing

Scenario 1: Refresh Fails in Service

Tests:

  • Understanding of credential scope
  • Knowledge of service-side configuration

Correct thinking:

  • Credentials must be set in the Power BI Service
  • Gateway may be required

Scenario 2: Combining Multiple Data Sources

Tests:

  • Privacy level understanding
  • Data leakage prevention

Correct thinking:

  • Choose appropriate privacy levels
  • Understand impact on query folding

Scenario 3: Dataset Ownership Changes

Tests:

  • Credential reassignment
  • Dataset security awareness

Correct thinking:

  • Credentials may need to be re-entered
  • Refresh ownership matters

Key Exam Takeaways

For the PL-300 exam, remember:

  • Credentials define who Power BI is when accessing data
  • Desktop and Service credentials are separate
  • Privacy levels control data isolation
  • Privacy levels affect data combination and performance
  • Ignoring privacy levels works only in Desktop
  • Refresh failures often point to credential or gateway issues

If you understand why Power BI behaves the way it does with credentials and privacy, you’ll answer these questions confidently—even when the wording is tricky.


Practice Questions

Go to the Practice Exam Questions for this topic.

Identify and Connect to Data Sources or a Shared Semantic Model (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Get or connect to data
--> Identify and connect to data sources or a shared semantic model


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

One of the first and most foundational skills tested in the PL-300: Microsoft Power BI Data Analyst exam is the ability to identify, select, and connect to appropriate data sources. This skill lives within the Prepare the data domain and underpins everything that follows—modeling, visualization, and analysis.

A Power BI Data Analyst must understand where data lives, how to connect to it efficiently, and when to reuse existing models instead of building new ones.


Understanding Data Source Types in Power BI

Power BI supports a wide variety of data sources, and the exam expects familiarity with common enterprise and self-service scenarios rather than obscure connectors.

Common Data Source Categories

File-based sources

  • Excel workbooks
  • CSV and text files
  • XML and JSON files
  • PDF files (structured tables)

Database sources

  • SQL Server
  • Azure SQL Database
  • Azure Synapse Analytics
  • Oracle
  • MySQL / PostgreSQL

Cloud and SaaS sources

  • SharePoint Online lists and files
  • OneDrive
  • Dataverse
  • Azure Blob Storage
  • Azure Data Lake Storage Gen2

Power Platform and Power BI sources

  • Power BI datasets (shared semantic models)
  • Dataflows
  • Datamarts

Exam tip: You are not expected to memorize every connector—focus on recognizing the correct source for a given scenario.


Choosing the Right Connectivity Mode

When connecting to data, Power BI offers three primary connectivity modes, and the exam frequently tests your understanding of when to use each one.

Import Mode

  • Data is loaded into the Power BI model
  • Fast performance
  • Supports full DAX and modeling capabilities
  • Requires dataset refreshes

Use when:
Data size is manageable and performance is critical.


DirectQuery Mode

  • Queries data directly at the source
  • Minimal data stored in Power BI
  • Performance depends on the source system
  • Some modeling and DAX limitations

Use when:
Data is very large or must remain in the source system.


Live Connection

  • Used with shared semantic models (Power BI datasets) or Analysis Services
  • No local model created
  • Modeling is done in the source dataset

Use when:
Connecting to a centrally governed dataset managed by another team.


Connecting to a Shared Semantic Model

A shared semantic model (formerly called a “dataset”) allows analysts to reuse existing data models instead of duplicating logic.

What Is a Shared Semantic Model?

  • A centrally managed Power BI dataset
  • Contains curated tables, relationships, measures, and security
  • Published to the Power BI Service
  • Designed for reuse across multiple reports

Benefits of Using Shared Semantic Models

  • Consistent metrics and definitions
  • Reduced duplication of logic
  • Centralized governance and security
  • Faster report development

How to Connect

In Power BI Desktop:

  1. Select Get data
  2. Choose Power BI datasets
  3. Select an existing dataset from the workspace
  4. Create a report using a Live connection

Exam tip:
When connected to a shared semantic model:

  • You cannot modify relationships
  • You cannot add calculated tables
  • You can create report-level measures (thin reports)

Authentication and Access Considerations

The exam also expects awareness of authentication methods, especially in enterprise environments.

Common authentication types include:

  • Windows authentication
  • Database authentication
  • Microsoft Entra ID (Azure AD)
  • OAuth

Key concept:
Your ability to connect depends on permissions at the source, not just in Power BI.


Identifying the Best Data Source for a Scenario

You may see exam questions that describe a business requirement and ask you to identify the most appropriate data source or connection method.

Example Scenarios

  • A finance team needs fast, interactive reports → Import mode
  • A large transactional database updated every minute → DirectQuery
  • Multiple teams need consistent KPIs → Shared semantic model
  • Files stored in SharePoint and updated weekly → SharePoint Folder connector

Key Exam Takeaways

For the PL-300 exam, remember the following:

  • Understand major data source categories
  • Know Import vs DirectQuery vs Live connection
  • Recognize when to use shared semantic models
  • Be aware of limitations when using Live connections
  • Choose data sources based on performance, governance, and scale

This topic is less about clicking buttons and more about making the right architectural decision—a critical skill for a Power BI Data Analyst.


Power BI Connection Modes – Comparison Table (PL-300 Exam Focus)

Feature / Aspect | Import | DirectQuery | Live Connection
Where data is stored | Loaded into Power BI dataset | Remains in source system | Remains in source dataset or Analysis Services
Performance | Fastest (in-memory) | Depends on source performance | Depends on source dataset
Data refresh required | Yes (scheduled or manual) | No (queries run live) | No (always live)
Data volume support | Limited by dataset size | Very large datasets | Very large datasets
Modeling allowed | Full modeling supported | Limited modeling | No model changes allowed
Create relationships | Yes | Limited | No
Create calculated tables | Yes | Limited | No
Create calculated columns | Yes | Limited | No
Create measures (DAX) | Yes | Yes (with limitations) | Yes (report-level only)
Power Query transformations | Fully supported | Limited (query folding dependent) | Not supported
Row-level security (RLS) | Supported | Supported | Inherited from source
Typical sources | Excel, CSV, SQL, SharePoint | Large SQL, Azure SQL, Synapse | Power BI datasets, Analysis Services
Best use case | High performance, small-to-medium data | Near-real-time or massive data | Reusing governed enterprise models
Exam frequency | Very high | High | Very high

When to Choose Each Mode (Exam Scenarios)

Import Mode

Choose Import when:

  • Performance is the top priority
  • Data size is manageable
  • Full modeling flexibility is needed

📌 Most common and safest answer on the exam unless constraints are stated.


DirectQuery

Choose DirectQuery when:

  • Data is extremely large
  • Data must stay at the source
  • Near real-time reporting is required

📌 Expect questions about performance trade-offs and modeling limitations.


Live Connection

Choose Live Connection when:

  • Using a shared semantic model
  • Consistent metrics are required across reports
  • Centralized governance is in place

📌 Often paired with “thin report” exam scenarios.


High-Value Exam Tips ⭐

  • If the question mentions enterprise governance or shared datasets → think Live Connection
  • If it mentions large, frequently changing data → think DirectQuery
  • If it mentions fast visuals and full modeling → think Import
  • Live connections do not allow model changes
  • Import mode requires refresh
  • DirectQuery relies on query folding

Practice Exam Questions

Go to the Practice Exam Questions for this section.