Category: Business Intelligence

Implement Time Intelligence Measures (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Model the data (25–30%)
--> Create model calculations by using DAX
--> Implement Time Intelligence Measures


Note that there are 10 practice questions (with answers and explanations) at the end of each topic. Also, there are 2 practice tests with 60 questions each available on the hub below all the exam topics.

Exam Context

Time intelligence is a core DAX competency on the PL-300 exam. Microsoft frequently tests a candidate’s ability to calculate values across time, such as year-to-date, prior period comparisons, rolling totals, and growth metrics.


What Are Time Intelligence Measures?

Time intelligence measures are DAX calculations that:

  • Analyze data over time
  • Compare values across different periods
  • Accumulate results over a date range

These measures rely on:

  • A proper date table
  • Correct relationships
  • The CALCULATE function

Prerequisites for Time Intelligence (Frequently Tested)

Before time intelligence will work correctly, the model must include:

1. A Dedicated Date Table

  • One row per date
  • Continuous date range (no gaps)
  • Marked as a Date table in Power BI

2. Proper Relationships

  • Date table related to fact tables
  • Relationship uses the date column (not datetime, if possible)

3. Correct Data Types

  • Date column must be of type Date
  • Not text or integer

⚠️ Exam Tip:
Many PL-300 questions are trick questions where time intelligence fails because one of these prerequisites is missing.


Role of CALCULATE in Time Intelligence

All built-in time intelligence functions work by modifying filter context using CALCULATE.

Example:

Sales YTD =
CALCULATE(
    [Total Sales],
    DATESYTD('Date'[Date])
)

👉 CALCULATE changes the filter context to include all dates from the start of the year through the current date.


Common Time Intelligence Functions (PL-300 Focus)

Year-to-Date (YTD)

Sales YTD =
CALCULATE(
    [Total Sales],
    DATESYTD('Date'[Date])
)

Month-to-Date (MTD)

Sales MTD =
CALCULATE(
    [Total Sales],
    DATESMTD('Date'[Date])
)

Quarter-to-Date (QTD)

Sales QTD =
CALCULATE(
    [Total Sales],
    DATESQTD('Date'[Date])
)


Previous Period Comparisons

Previous Year

Sales PY =
CALCULATE(
    [Total Sales],
    SAMEPERIODLASTYEAR('Date'[Date])
)

Previous Month

Sales PM =
CALCULATE(
    [Total Sales],
    DATEADD('Date'[Date], -1, MONTH)
)

Exam Insight:
SAMEPERIODLASTYEAR requires a continuous date table—a common failure point on the exam.


Rolling and Moving Averages

Rolling 12 Months

Sales Rolling 12M =
CALCULATE(
    [Total Sales],
    DATESINPERIOD(
        'Date'[Date],
        MAX('Date'[Date]),
        -12,
        MONTH
    )
)

This pattern is commonly tested in scenario-based questions.
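
Since this section covers moving averages as well, here is a minimal sketch of a rolling 3-month average. It assumes the date table has a 'Date'[YearMonth] column (not part of the model described above), so the measure averages monthly totals rather than daily rows:

Sales Rolling 3M Avg =
CALCULATE(
    AVERAGEX(
        VALUES('Date'[YearMonth]),   -- one value per month in the window
        [Total Sales]
    ),
    DATESINPERIOD(
        'Date'[Date],
        MAX('Date'[Date]),
        -3,
        MONTH
    )
)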


Growth and Variance Measures

Year-over-Year Growth

Sales YoY Growth =
[Total Sales] - [Sales PY]

Year-over-Year Percentage

Sales YoY % =
DIVIDE(
    [Total Sales] - [Sales PY],
    [Sales PY]
)

⚠️ Exam Tip:
Always use DIVIDE() instead of / to safely handle divide-by-zero scenarios.


Time Intelligence vs Custom Date Logic

Built-in Time Intelligence | Custom Logic
Requires date table | Can work without one
Simpler syntax | More flexible
Optimized by engine | More complex
Preferred for PL-300 | Tested less often

👉 For PL-300, Microsoft prefers built-in time intelligence functions.


Common Mistakes (Often Tested)

  • Using time intelligence without marking a date table
  • Using text-based dates
  • Missing dates in the calendar
  • Using fact table dates instead of a shared date dimension
  • Expecting time intelligence to work in calculated columns

Best Practices for PL-300 Candidates

  • Always create and mark a common date table
  • Build reusable base measures
  • Use built-in time intelligence when possible
  • Validate results at different grain levels (year, month, day)
  • Avoid time intelligence in calculated columns

How This Appears on the Exam

Expect questions that:

  • Ask why a YTD or PY measure returns incorrect results
  • Test which function to use for a specific time comparison
  • Require selecting the correct DAX pattern
  • Identify missing prerequisites in a data model

Key Takeaways

  • Time intelligence is a high-value exam topic
  • Depends on a proper date table and relationships
  • Uses CALCULATE to modify filter context
  • Enables YTD, PY, rolling totals, and growth analysis
  • Frequently appears in scenario-based questions

Practice Questions

Go to the Practice Exam Questions for this topic.

Create a Common Date Table (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Model the data (25–30%)
--> Design and implement a data model
--> Create a Common Date Table


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

A common date table (also called a calendar table) is one of the most critical components of a well-designed Power BI data model. It enables consistent time-intelligence across measures, ensures accurate filter behavior, and supports meaningful reporting.

For the PL-300: Microsoft Power BI Data Analyst exam, Microsoft expects you to understand why a common date table is needed, how to create one, and how to use it correctly in relationships and time-based calculations.


What Is a Common Date Table?

A common date table is a standalone table that contains every date (and associated date attributes) used in your fact data over the analytical time span.

It typically includes columns such as:

  • Date
  • Year
  • Quarter
  • Month
  • Day
  • Month Name
  • Fiscal Year / Fiscal Quarter
  • Week Number
  • IsWeekend / IsHoliday flags

This table becomes the hub for time-intelligence calculations.


Why Use a Common Date Table?

A common date table provides:

1. Consistent Time Intelligence Across the Model

DAX time-intelligence functions (like TOTALYTD, SAMEPERIODLASTYEAR, etc.) work reliably only with a proper date table.

2. Single Point of Truth

Each date attribute (e.g., month, quarter) should come from one place, not from duplicated date fields scattered across fact tables.

3. Correct Filtering

Relationships from the date table to fact tables ensure slicers and filters behave consistently.

4. Support for Multiple Date Roles

When facts have different date fields (e.g., Order Date, Ship Date), you use role-playing dimensions based on the common date table.


Where the Date Table Fits in a Power BI Model

In a star schema, the common date table acts as a dimension table connected to one or more fact tables via date fields:

         DimDate
            |
  OrderDate |--- FactSales
  ShipDate  |--- FactSales

This pattern eliminates ambiguity and supports multi-date filtering.


Creating a Common Date Table

There are several ways to create a date table in Power BI:

1. Auto Date/Time (Basic)

Power BI can automatically generate internal date tables, but this is not recommended for enterprise models or time-intelligence functions because:

  • Limited control over attributes
  • Cannot be customized or extended easily

For PL-300, assume you will create your own date table.


2. Using DAX (Recommended)

You can create a date table with DAX in Power BI Desktop:

Date = 
CALENDAR (
    DATE ( 2018, 1, 1 ),
    DATE ( 2025, 12, 31 )
)

You then add calculated columns:

Year = YEAR ( [Date] )
MonthNumber = MONTH ( [Date] )
MonthName = FORMAT ( [Date], "MMMM" )
Quarter = "Q" & FORMAT ( [Date], "Q" )

This gives you a fully controlled and reusable date table.
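
If you prefer a single table expression over separate calculated columns, the same attributes can be added with ADDCOLUMNS. A minimal sketch using the columns defined above:

Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2018, 1, 1 ), DATE ( 2025, 12, 31 ) ),
    "Year", YEAR ( [Date] ),
    "MonthNumber", MONTH ( [Date] ),
    "MonthName", FORMAT ( [Date], "MMMM" ),
    "Quarter", "Q" & FORMAT ( [Date], "Q" )
)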


3. Using Power Query

You can also generate the date table in Power Query: build a list of dates with List.Dates, convert it to a table, and then add the attribute columns.

Example M pattern:

let
    StartDate = #date(2018, 1, 1),
    EndDate   = #date(2025, 12, 31),
    DatesList = List.Dates(StartDate, Duration.Days(EndDate - StartDate) + 1, #duration(1,0,0,0)),
    DateTable = Table.FromList(DatesList, Splitter.SplitByNothing(), {"Date"})
in
    DateTable

Then add columns for Year, Month, Quarter, etc.
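
Continuing the pattern, one way to type the Date column and add the attribute columns in M (a sketch; the step names are arbitrary):

let
    StartDate = #date(2018, 1, 1),
    EndDate   = #date(2025, 12, 31),
    DatesList = List.Dates(StartDate, Duration.Days(EndDate - StartDate) + 1, #duration(1,0,0,0)),
    DateTable = Table.FromList(DatesList, Splitter.SplitByNothing(), {"Date"}),
    Typed     = Table.TransformColumnTypes(DateTable, {{"Date", type date}}),
    WithYear  = Table.AddColumn(Typed, "Year", each Date.Year([Date]), Int64.Type),
    WithMonth = Table.AddColumn(WithYear, "MonthNumber", each Date.Month([Date]), Int64.Type),
    WithName  = Table.AddColumn(WithMonth, "MonthName", each Date.MonthName([Date]), type text)
in
    WithName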


Marking a Table as a Date Table

Power BI has a special property:

Modeling → Mark as Date Table → Select the Date column

This signals to Power BI that the table is a valid date dimension. It enables full use of time-intelligence functions and prevents errors in DAX.

A valid date table:

  • Must contain contiguous dates with no gaps
  • Must contain one row per date (no duplicates)
  • Must have a column of type Date designated as the date column

Role-Playing Dimensions for Dates

In many models, the same date table will serve multiple fact date fields, such as:

  • Order Date
  • Ship Date
  • Promotion Date
  • Invoice Date

This is typically handled by duplicating the date table (e.g., Date – Order, Date – Ship) and creating separate relationships.
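
An alternative to duplicated tables is a single date table with one active and several inactive relationships, activated per measure with USERELATIONSHIP. A minimal sketch (the FactSales[ShipDate] column is an assumed name):

Sales by Ship Date =
CALCULATE (
    [Total Sales],
    USERELATIONSHIP ( FactSales[ShipDate], 'Date'[Date] )
)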


Common Date Table Attributes

Here are common attributes you might include:

Attribute | Purpose
Date | Primary key
Year | Slicing by year
Month | Grouping and visuals
Month Name | User-friendly label
Quarter | Time buckets
Week Number | Weekly analysis
Fiscal Year / Period | Organization’s fiscal structure
IsWeekend | Custom filtering
ISOWeek | International week numbering

Exam questions may refer to building or using these attributes.


Best Practices for PL-300 Candidates

  • Always create your own date table — don’t rely on auto date/time
  • Mark the table as a date table in the model
  • Include all relevant attributes required for slicing
  • Build the table to span the full date range of all fact data
  • Use role-playing duplicates when necessary (e.g., Ship vs Order date)
  • Name the table clearly (e.g., DimDate, DateCalendar)

How This Appears on the PL-300 Exam

Expect scenario questions like:

  • Why does a time-intelligence measure return blank?
    (often because the model has no valid date table)
  • How do you create a date table that supports fiscal calculations?
  • Which table property enables built-in DAX functions to work correctly?
    (answer: Mark as Date Table)
  • How should multiple date fields in a fact table be modeled?
    (answer: role-playing dimensions using a common date table)

The correct answers require understanding both modeling and Power BI features — not just memorizing menu locations.


Common Mistakes (Often Tested)

❌ Using a fact table’s date column as the only date source
❌ Forgetting to mark the date table as a date table
❌ Leaving gaps in the date sequence
❌ Relying solely on auto date/time
❌ Not handling multiple fact date roles properly


Key Takeaways

  • A common date table is essential for reliable time-intelligence results.
  • You can build a date table via DAX or Power Query.
  • Always Mark as Date Table in Power BI Desktop.
  • Include useful attributes for analysis (Year, Month, Quarter, etc.).
  • Plan for role-playing dimensions (multiple date roles).
  • This topic is heavily scenario-driven on the PL-300 exam.

Practice Questions

Go to the Practice Exam Questions for this topic.

Create Fact Tables and Dimension Tables (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Create Fact Tables and Dimension Tables


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Creating fact tables and dimension tables is a foundational step in preparing data for analysis in Power BI. For the PL-300: Microsoft Power BI Data Analyst exam, this topic tests your understanding of data modeling principles, especially how to structure data into a star schema using Power Query before loading it into the data model.

Microsoft emphasizes not just what fact and dimension tables are, but how and when to create them during data preparation.


Why Fact and Dimension Tables Matter

Well-designed fact and dimension tables:

  • Improve model performance
  • Simplify DAX measures
  • Enable accurate relationships
  • Support consistent filtering and slicing
  • Reduce ambiguity and calculation errors

Exam insight: Many PL-300 questions test whether you recognize when raw data should be split into facts and dimensions instead of remaining as a single flat table.


What Is a Fact Table?

A fact table stores quantitative, measurable data that you want to analyze.

Common Characteristics

  • Contains numeric measures (Sales Amount, Quantity, Cost)
  • Includes foreign keys to dimension tables
  • Has many rows (high granularity)
  • Represents business events (sales, orders, transactions)

Examples

  • Sales transactions
  • Inventory movements
  • Website visits
  • Financial postings

What Is a Dimension Table?

A dimension table stores descriptive attributes used to filter, group, and label facts.

Common Characteristics

  • Contains textual or categorical data
  • Has unique values per key
  • Fewer rows than fact tables
  • Provides business context

Examples

  • Customer
  • Product
  • Date
  • Geography
  • Employee

Star Schema (Exam Favorite)

The recommended modeling approach in Power BI is the star schema:

  • One central fact table
  • Multiple surrounding dimension tables
  • One-to-many relationships from dimensions to facts
  • Single-direction filtering (typically)

Exam insight: If a question asks how to optimize performance or simplify DAX, the answer is often “create a star schema.”


Creating Fact and Dimension Tables in Power Query

Starting Point: Raw or Flat Data

Many data sources arrive as a single wide table containing both measures and descriptive columns.

Typical Transformation Approach

  1. Identify measures
    • Numeric columns that should remain in the fact table
  2. Identify dimensions
    • Descriptive attributes (Product Name, Category, Customer City)
  3. Create dimension tables
    • Reference the original query
    • Remove non-relevant columns
    • Remove duplicates
    • Rename columns clearly
    • Ensure a unique key
  4. Create the fact table
    • Keep foreign keys and measures
    • Remove descriptive text fields now handled by dimensions
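
As a concrete illustration of steps 3 and 4, here is a minimal M sketch that derives a product dimension from a flat query. The query name RawSales and the column names are assumptions:

let
    Source      = RawSales,
    ProductCols = Table.SelectColumns(Source, {"ProductID", "Product Name", "Category"}),
    DimProduct  = Table.Distinct(ProductCols)   // one row per product key
in
    DimProduct

The fact table is then the same source with the descriptive columns removed (for example, via Table.RemoveColumns), keeping ProductID as the foreign key.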

Keys and Relationships

Dimension Keys

  • Primary key in the dimension table
  • Must be unique and non-null

Fact Table Keys

  • Foreign keys referencing dimension tables
  • May repeat many times

Exam insight: PL-300 questions often test your understanding of cardinality (one-to-many) and correct relationship direction.


Common Dimension Types

Date Dimension

  • Often created separately
  • Supports time intelligence
  • Includes Year, Quarter, Month, Day, etc.

Role-Playing Dimensions

  • Same dimension used multiple times (e.g., Order Date, Ship Date)
  • Requires separate relationships

Impact on the Data Model

Creating proper fact and dimension tables results in:

  • Cleaner Fields pane
  • Easier measure creation
  • Improved query performance
  • Predictable filter behavior

Poorly designed models (single flat tables or snowflake schemas) can lead to:

  • Complex DAX
  • Ambiguous relationships
  • Slower performance
  • Incorrect results

Common Mistakes (Often Tested)

❌ Leaving Data in a Single Flat Table

This often leads to duplicated descriptive data and poor performance.


❌ Creating Dimensions Without Removing Duplicates

Dimension tables must contain unique keys.


❌ Including Measures in Dimension Tables

Measures belong in fact tables, not dimensions.


❌ Using Bi-Directional Filtering Unnecessarily

Often used to compensate for poor model design.


Best Practices for PL-300 Candidates

  • Design with a star schema mindset
  • Keep fact tables narrow and tall
  • Keep dimension tables descriptive
  • Use Power Query to shape tables before loading
  • Rename tables and columns clearly
  • Know when not to split (very small or static datasets)

Know when not to over-model: If the dataset is extremely small or used for a simple report, splitting into facts and dimensions may not add value.


How This Appears on the PL-300 Exam

Expect scenario-based questions such as:

  • A dataset contains sales values and product details — how should it be structured?
  • Which table should store numeric measures?
  • Why should descriptive columns be moved to dimension tables?
  • What relationship should exist between fact and dimension tables?

These questions test modeling decisions, not just terminology.


Quick Comparison

Fact Table | Dimension Table
Stores measurements | Stores descriptive attributes
Many rows | Fewer rows
Contains foreign keys | Contains primary keys
Central table | Surrounding tables
Used for aggregation | Used for filtering

Final Exam Takeaways

  • Fact and dimension tables are essential for scalable Power BI models
  • Create them during data preparation, not after modeling
  • The PL-300 exam emphasizes model clarity, performance, and correctness
  • Star schema design is a recurring exam theme

Practice Questions

Go to the Practice Exam Questions for this topic.

Group and Aggregate Rows (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Group and Aggregate Rows


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Grouping and aggregating rows is a foundational data preparation task used to summarize detailed data into meaningful totals before it is loaded into the Power BI data model. For the PL-300: Microsoft Power BI Data Analyst exam, Microsoft evaluates your understanding of how, when, and why to group data in Power Query, and how those decisions affect the data model and reporting outcomes.


Why Group and Aggregate Rows?

Grouping and aggregation are used to:

  • Summarize transactional or granular data
  • Reduce dataset size and improve performance
  • Shape fact tables to the correct grain
  • Prepare data for simpler reporting
  • Offload static calculations from DAX into Power Query

Exam Focus: The exam often tests decision-making—specifically whether aggregation should occur in Power Query or later in DAX.


Where Grouping Happens in Power BI

Grouping and aggregation for this exam objective occur in Power Query Editor, using:

  • Home → Group By
  • Transform → Group By

This transformation physically reshapes the dataset before it is loaded into the model.

Key Distinction: Power Query grouping changes the stored data. DAX measures calculate results dynamically at query time.


The Group By Operation

When using Group By, you define:

1. Group By Columns

Columns that determine how rows are grouped, such as:

  • Customer
  • Product
  • Date
  • Region

Each unique combination of these columns produces one row in the output.

2. Aggregation Columns

New columns created using aggregation functions applied to grouped rows.


Common Aggregation Functions (Exam-Relevant)

Power Query supports several aggregation functions frequently referenced on the PL-300 exam:

  • Sum – Adds numeric values
  • Count Rows – Counts rows in each group
  • Count Distinct Rows – Counts unique values
  • Average – Calculates the mean
  • Min / Max – Returns lowest or highest values
  • All Rows – Produces nested tables for advanced scenarios

Exam Tip: Be clear on the difference between Count Rows and Count Distinct—this is commonly tested.


Grouping by One vs Multiple Columns

Grouping by a Single Column

Used to create high-level summaries such as:

  • Total sales per customer
  • Number of orders per product

Results in one row per unique value.


Grouping by Multiple Columns

Used when summaries must retain more detail, such as:

  • Sales by customer and year
  • Quantity by product and region

The output grain is defined by the combination of columns.


Impact on the Data Model

Grouping and aggregating rows in Power Query has a direct impact on the data model, which is an important exam consideration.

Key Impacts:

  • Reduced row count improves model performance
  • Changes the grain of fact tables
  • May eliminate the need for certain DAX measures
  • Can simplify relationships by reducing cardinality

Important Trade-Off:

Once data is aggregated in Power Query:

  • You cannot recover lower-level detail
  • You lose flexibility for drill-down analysis
  • Time intelligence and slicer-driven behavior may be limited

Exam Insight: Microsoft expects you to recognize when aggregation improves performance and when it limits analytical flexibility.


Group and Aggregate vs DAX Measures (Highly Tested)

Understanding where aggregation belongs is a core PL-300 skill.

Group in Power Query When:

  • Aggregation logic is fixed
  • You want to reduce data volume
  • Performance optimization is required
  • The dataset should load at a specific grain

Use DAX Measures When:

  • Aggregations must respond to slicers
  • Time intelligence is required
  • Users need flexible, dynamic calculations

Common Mistakes (Often Tested)

These are frequent pitfalls that appear in exam scenarios:

  • Grouping too early, eliminating needed detail
  • Aggregating data that should remain transactional
  • Using Sum on columns that should be counted
  • Confusing Count Rows with Count Distinct
  • Grouping in Power Query when a DAX measure is more appropriate
  • Forgetting to validate results after grouping
  • Incorrect data types causing aggregation errors

Exam Pattern: Many questions present a “wrong but plausible” grouping choice—look carefully at reporting requirements.


Best Practices for PL-300 Candidates

  • Understand the grain of your data before grouping
  • Group only when it adds clear value
  • Validate totals after aggregation
  • Prefer Power Query grouping for static summaries
  • Use DAX for dynamic, filter-aware calculations
  • Know when not to group:
    • When users need drill-down capability
    • When calculations must respond to slicers
    • When time intelligence is required
    • When future reporting needs are unknown

How This Appears on the PL-300 Exam

Expect scenario-based questions such as:

  • You need to reduce model size and improve performance. Where should aggregation occur?
  • Which aggregation produces unique counts per group?
  • What is the impact of grouping data before loading it into the model?
  • Why would grouping in Power Query be inappropriate in this scenario?

Key Takeaways

✔ Grouping is performed in Power Query, not DAX
✔ Aggregation reshapes data before modeling
✔ Grouping impacts performance, flexibility, and grain
✔ Know both when to group and when not to
✔ This topic tests data modeling judgment, not just mechanics


Practice Questions

Go to the Practice Exam Questions for this topic.

Create and Transform Columns (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Create and Transform Columns


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Columns are the foundation of data analysis in Power BI. The ability to create new columns and transform existing ones is essential for shaping your dataset into a structure that supports meaningful insights and accurate reports.

In the PL-300 exam, Microsoft tests not only whether you can perform transformations but also whether you understand when and why to apply them.


Why Create and Transform Columns?

Before data can be modeled and visualized:

  • It must be clean, consistent, and in the right format
  • New columns may be needed to support business logic
  • Transformations ensure data is reliable and analysis-ready

For example:

  • Converting text dates into true Date types
  • Extracting parts of a string (e.g., Year from a date)
  • Splitting a full name into first and last names
  • Normalizing inconsistent text values

These are not just useful—they are often necessary for accurate DAX measures and reporting.


Where Column Transformations Happen

Most column creation and transformation tasks happen in Power Query Editor (before the data loads into the model).

Key places include:

  • Transform tab
  • Add Column tab
  • Applied Steps pane
  • Advanced Editor (for M code)

Power BI also allows column creation after loading the data through:

  • DAX Calculated Columns (in the data model)

The exam may present scenarios where you choose which tool (Power Query vs DAX) to use.


Common Column Transformations

Here are the main categories of column operations you should be ready to apply:


1. Basic Transformations

These change existing columns:

  • Rename columns
  • Change data types
  • Trim, clean, or format text
  • Replace values
  • Remove columns

These are the bread-and-butter tasks that clean and standardize data.


2. Splitting and Merging Columns

When data is combined within one field:

  • Split Column (by delimiter or number of characters)
    • Example: Split Full Name into First Name and Last Name
  • Merge Columns
    • Example: Combine City and State into a single location field

This is essential when data needs to be restructured for modeling.


3. Extracting Components

Examples include:

  • Extracting Year, Month, or Day from a Date column
  • Taking the first/last characters from a text string
  • Extracting text before or after a specific character

These operations prepare granular fields needed for grouping or calculations.


4. Calculations Using “Add Column”

You can create derived columns based on logic:

  • Custom Columns (via M formulas)
  • Conditional Columns
    • Example: Flag High Value sales where sales > $1,000
  • Index Columns
    • Useful for row ordering

These columns often support business metrics or classifications.
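
For example, the High Value flag above can be produced with a conditional column; the UI writes a Table.AddColumn step similar to this sketch (query and column names assumed):

let
    Source  = Sales,
    Flagged = Table.AddColumn(
        Source,
        "High Value",
        each if [Sales Amount] > 1000 then "High" else "Standard",
        type text
    )
in
    Flagged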


Text Transformations

Text columns commonly require cleaning and standardization:

  • Uppercase / Lowercase
  • Trim (removes leading/trailing spaces)
  • Clean (removes non-printable characters)
  • Replace Values (e.g., “N/A” → null)

The exam often tests whether you know how to fix inconsistent text data.


Date and Time Transformations

Working with dates is core to analysis:

  • Change text to date/time type
  • Extract Year, Quarter, Month, Day
  • Add custom time intelligence columns
  • Use locale conversion for date parsing

This enables time-based grouping and accurate measures like YTD (Year-to-Date).


Conditional and Custom Columns

Conditional Columns

  • Created through UI (Add Column → Conditional Column)
  • Define logic visually (e.g., if Sales > 500 then "High" else "Low")

Custom Columns

  • Created using Power Query M code
  • More advanced logic and functions

Both are useful depending on the complexity of your requirement. Exam questions often compare these approaches.


Column Transformations vs DAX Calculated Columns

Power Query Column

  • Transformation occurs before data loads into model
  • Changes physical data shape
  • Useful for cleaning and structuring data

DAX Calculated Column

  • Created after data loads into the model
  • Evaluated per row in the model
  • Useful for measures and relationships tied to data model context

Exam insight:
Use Power Query transformations for structural cleanup. Use DAX calculated columns when the logic depends on model relationships or evaluation context.


Best Practices for the Exam

  • Clean data before creating columns.
    Don’t derive new data from dirty input.
  • Apply the right transformation tool.
    Power Query for structural cleanup; DAX for model-aware calculations.
  • Name columns clearly.
    Report consumers and measures depend on intuitive names.
  • Avoid unnecessary columns.
    Only keep what’s needed for reporting to improve model performance.
  • Group related transformations logically.
    Use Query folding where possible (especially for large datasets).

How This Appears on the PL-300 Exam

You might see scenarios like:

You need to split a full address column into street, city, and postal code for better filtering. Which transformation should you use?

This tests:

  • Knowledge of Split Column
  • When to apply it
  • How to maintain data type integrity afterward

Or:

Your date column is text and not aggregating correctly. What do you do?

This tests:

  • Understanding of data types
  • Ability to convert to proper Date/Time

Most questions are scenario-based, requiring both decision and action reasoning.


Key Takeaways

✔ Column transformations are a core part of shaping data
✔ Power Query is the primary environment for creating and transforming columns
✔ Use Add Column for new fields and Transform for modifying existing fields
✔ Know the difference between Power Query and DAX calculated columns
✔ Common transformations include text, date, splitting/merging, conditional logic, and custom formulas


Practice Questions

Go to the Practice Exam Questions for this topic.

Resolve inconsistencies, unexpected or null values, and data quality issues (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Profile and clean the data
--> Resolve inconsistencies, unexpected or null values, and data quality issues


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub's main page.

High-quality data is essential for accurate analysis and trustworthy reports. In the PL-300 exam, Microsoft expects candidates to understand how to identify and resolve common data quality problems using Power Query before data is loaded into the model.

This section focuses on handling inconsistencies, unexpected values, nulls, and errors—all of which can negatively impact calculations, relationships, and visuals if left unresolved.


Why This Topic Matters for the Exam

From an exam perspective, this topic tests your ability to:

  • Diagnose data quality problems using profiling tools
  • Choose the correct transformation to fix an issue
  • Understand when to remove, replace, or transform data
  • Prevent downstream modeling and reporting issues

Most questions are scenario-based, asking what action you should take to fix a specific data issue.


Common Data Quality Issues You Must Recognize

1. Null (Blank) Values

Nulls represent missing or unknown data and can cause:

  • Incorrect aggregations
  • Broken relationships
  • Visuals that behave unexpectedly

Common causes:

  • Incomplete source data
  • Left joins with no matching rows
  • Data entry gaps

2. Unexpected or Invalid Values

These include:

  • Negative values where only positives make sense
  • Text values in numeric columns
  • Dates outside expected ranges
  • Misspelled or inconsistent category names

3. Inconsistent Data

Inconsistencies often appear as:

  • Mixed casing (USA vs usa)
  • Trailing or leading spaces
  • Multiple spellings for the same value
  • Different date or number formats

4. Error Values

Errors usually occur when:

  • Converting data types
  • Performing calculations
  • Parsing malformed data

Examples include:

  • Conversion failed
  • Divide by zero
  • Invalid date format

Identifying Data Quality Issues in Power Query

Power Query provides built-in data profiling tools to quickly detect problems:

Column Quality

  • Shows percentages of Valid, Error, and Empty values
  • Ideal for spotting nulls and errors

Column Distribution

  • Displays value frequency and distinct counts
  • Helps identify unexpected or inconsistent values

Column Profile

  • Provides min, max, average, and other statistics
  • Useful for detecting outliers and invalid ranges

Exam Tip: Profiling tools analyze only the first 1,000 rows by default. You may need to enable “Column profiling based on entire data set” for accuracy.


Techniques to Resolve Null Values

Remove Rows

  • Used when nulls make a record unusable
  • Common for missing primary keys or required fields

Replace Values

  • Replace nulls with:
    • 0 (for numeric measures)
    • “Unknown” or “Not Provided” (for text)
    • A default date

Fill Down / Fill Up

  • Used for hierarchical or grouped data
  • Common in spreadsheets with merged cells

Exam Insight: Replacing nulls should be a business-justified decision, not automatic.
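
A minimal M sketch combining these techniques (the Orders query and column names are assumptions):

let
    Source        = Orders,
    NoMissingKeys = Table.SelectRows(Source, each [CustomerID] <> null),     // drop rows missing a required key
    Replaced      = Table.ReplaceValue(NoMissingKeys, null, "Unknown", Replacer.ReplaceValue, {"Region"}),
    FilledDown    = Table.FillDown(Replaced, {"Category"})                   // fill gaps from merged-cell style input
in
    FilledDown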


Resolving Inconsistencies

Standardizing Text

  • Use Transform → Format:
    • Uppercase
    • Lowercase
    • Capitalize Each Word

Trimming and Cleaning

  • Trim removes leading/trailing spaces
  • Clean removes non-printable characters

Replacing Values

  • Normalize spelling differences (e.g., “US”, “USA”, “United States”)

Handling Unexpected or Invalid Values

Filtering

  • Remove values outside acceptable ranges
  • Exclude invalid categories

Conditional Columns

  • Create logic to flag or correct invalid data
  • Example: Replace negative sales with null or zero

Data Type Corrections

  • Ensure columns use appropriate data types
  • Prevents aggregation and calculation errors later

Fixing Error Values

Replace Errors

  • Replace with null or a default value

Remove Errors

  • Used when rows are unreliable

Fix the Root Cause

  • Change transformation order
  • Adjust data type conversion
  • Clean data before applying calculations
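
A minimal sketch of the replace and remove options (query and column names assumed):

let
    Source   = Sales,
    Retyped  = Table.TransformColumnTypes(Source, {{"Order Date", type date}}),  // conversion errors surface here
    NoErrors = Table.ReplaceErrorValues(Retyped, {{"Order Date", null}})         // keep rows, null out failed conversions
    // When whole rows are unreliable, use Table.RemoveRowsWithErrors(Retyped, {"Order Date"}) instead
in
    NoErrors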

Exam Tip: Microsoft often tests whether you know why an error occurs, not just how to remove it.


Best Practices for PL-300 Candidates

  • Always profile before transforming
  • Fix issues in Power Query, not DAX, when possible
  • Understand the impact of removing vs replacing data
  • Keep transformations repeatable and documented
  • Prefer clean data models over complex report logic

Key Takeaways for the Exam

You should be able to:

  • Identify different types of data quality issues
  • Choose the correct Power Query tool to resolve them
  • Understand the downstream impact on models and visuals
  • Interpret profiling results correctly

Mastering this topic ensures cleaner datasets, better models, and fewer surprises during analysis—exactly what the PL-300 exam is designed to validate.


Practice Questions

Go to the Practice Exam Questions for this topic.

Evaluate Data including Data Statistics & Column Properties (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Profile and clean the data
--> Evaluate data, including data statistics and column properties


Note that there are 10 practice questions (with answers and explanations) at the end of each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Before cleaning, transforming, or modeling data, a Power BI Data Analyst must first evaluate the quality and structure of the data. The PL-300 exam tests your ability to profile data, interpret data statistics, and understand column properties to identify issues such as missing values, incorrect data types, outliers, and inconsistent formats.

This topic lives under Profile and clean the data because effective data preparation starts with understanding what the data looks like and how it behaves.


What Does “Evaluate Data” Mean in Power BI?

Evaluating data means using Power BI (specifically Power Query) to:

  • Understand data distribution and completeness
  • Identify data quality issues
  • Verify correct data types and formats
  • Decide what cleaning or transformation steps are required

Rather than guessing, Power BI provides built-in profiling tools that summarize data characteristics automatically.


Data Profiling Tools in Power Query

Power BI includes several profiling features that appear in the Power Query Editor, primarily within the View tab.

Key Data Profiling Options

  • Column quality
  • Column distribution
  • Column profile

These tools help you quickly assess whether a column is usable, trustworthy, and correctly defined.


Column Quality

Column quality provides a high-level overview of data completeness and validity.

It visually displays:

  • Valid values
  • Error values
  • Empty (null) values

Why Column Quality Matters

  • Quickly highlights missing or broken data
  • Helps determine whether rows should be filtered, fixed, or removed
  • Useful for early detection of refresh or ingestion issues

📌 Exam insight:
Questions often test whether you can identify which tool reveals missing or invalid values—column quality is the answer.


Column Distribution

Column distribution shows how values are spread across a column.

It provides:

  • Frequency of values
  • Distinct vs unique counts
  • A histogram-style visualization (for numeric fields)

Common Uses

  • Spotting unexpected duplicates
  • Identifying skewed data
  • Detecting outliers
  • Validating categorical values

📌 Exam insight:
Column distribution is used to understand value frequency, not just nulls or errors.


Column Profile

Column profile gives the most detailed statistical view of a column.

Depending on the data type, it may include:

  • Minimum and maximum values
  • Average
  • Standard deviation
  • Count and distinct count
  • Null count

Typical Use Cases

  • Verifying numeric ranges (e.g., negative values where none should exist)
  • Checking date ranges
  • Understanding overall data shape before modeling

📌 Exam insight:
Column profile helps validate statistical characteristics, not formatting or naming.


Understanding Column Properties

Beyond statistics, Power BI also evaluates column properties, which affect how data behaves in the model and visuals.

Key Column Properties to Evaluate

Data Type

Examples:

  • Whole number
  • Decimal number
  • Text
  • Date / DateTime
  • Boolean

Incorrect data types can:

  • Break visuals
  • Prevent aggregations
  • Cause relationship issues

📌 Exam tip:
Always verify data types before applying transformations or creating measures.


Format

Controls how values appear (e.g., currency, percentage, date format).

  • Affects display, not calculation logic
  • Often adjusted after validating data type

Default Summarization

Determines how numeric columns aggregate in visuals:

  • Sum
  • Average
  • Count
  • Do not summarize

📌 Exam insight:
Default summarization is evaluated when deciding how columns behave in visuals—not during Power Query transformations.


Column Name & Description

  • Clear names improve usability
  • Descriptions help report consumers understand the data

While not deeply technical, the exam may include best-practice questions around data clarity and usability.


Evaluating Data at the Right Stage

Most evaluation tasks occur in Power Query, before data is loaded into the model.

Why?

  • Faster detection of issues
  • Prevents poor-quality data from entering the model
  • Reduces downstream modeling complexity

📌 Key distinction for the exam:

  • Power Query → data evaluation & cleaning
  • Model view → relationships & behavior
  • Report view → visualization

Common Exam Scenarios

You may encounter questions like:

Scenario 1

You need to quickly identify columns with missing or invalid values.

Correct concept: Column quality


Scenario 2

You want to understand how frequently values appear in a categorical column.

Correct concept: Column distribution


Scenario 3

You need to verify numeric ranges and detect outliers.

Correct concept: Column profile


Scenario 4

A numeric column is being treated as text and cannot be aggregated.

Correct concept: Incorrect data type (column property)


Best Practices to Remember

  • Enable profiling tools early in data preparation
  • Validate data types before transformations
  • Use statistics to guide cleaning decisions
  • Don’t rely on visuals alone to detect data quality issues

Key Exam Takeaways

For the PL-300 exam, remember:

  • Column quality → valid, error, and null values
  • Column distribution → frequency and distinct values
  • Column profile → statistical insights
  • Column properties affect aggregation, relationships, and visuals
  • Data evaluation happens primarily in Power Query

Understanding how to interpret what Power BI is telling you about your data is just as important as knowing how to clean it.


Practice Questions

Go to the Practice Exam Questions for this topic.

Create and Modify Parameters (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Get or connect to data
--> Create and Modify Parameters


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

As you prepare for the PL-300: Microsoft Power BI Data Analyst exam, one important skill to master is using parameters in Power Query to build flexible, reusable, and dynamic data connections and transformations. Parameters help you abstract static values—such as server names, file paths, or filter values—and manage them in a centralized way.


What Are Parameters in Power BI?

A parameter is a named value that you can use in Power Query to influence how queries run. Instead of hard-coding values directly into your query steps, a parameter lets you declare the value once and reference it throughout your queries so you can:

  • Change values globally without editing each query step.
  • Switch between environments (Dev, Test, Prod) easily.
  • Apply consistent filters or thresholds.
  • Support dynamic filtering and modular query design.

In many ways, parameters act like variables in the Power Query (M) engine—reusable building blocks you can plug into various parts of your query logic.


Where You Use Parameters

Parameters are especially useful when connecting and shaping data:

At the data source level

  • Define the server, database, folder, or file path.
  • Change the current environment without rewriting source steps.

In query logic

  • Filter tables based on parameter values.
  • Customize operations like date range filters or limit values.

For reusability

  • Use the same parameter value across multiple queries, reducing maintenance.
  • Make Power BI solutions easier to support if data locations change.

How to Create Parameters

In Power BI Desktop:

  1. Go to Home → Transform data to open the Power Query Editor.
  2. In the Power Query Editor, choose Manage Parameters → New Parameter.
  3. In the Manage Parameters dialog, provide:
    • Name (meaningful identifier for the parameter)
    • Description (optional, useful for documentation)
    • Required (whether a value must be entered)
    • Type (Text, Number, Date, etc.)
    • Suggested values (optional: helps users pick from list, default, etc.)
    • Current value (the value Power Query uses when the parameter is applied)

Once created, the parameter appears in Power Query as its own query and can be referenced in other queries.


Example Uses of Parameters

Here are typical scenarios where parameters are valuable:

1. Dynamic Data Source Connection

If a report needs to point to Development in one workflow and Production in another, you can create parameters for server name and database name, then reference them in the data source step.

This avoids hard-coding connection strings and simplifies environment switches without editing M code.
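
A minimal sketch of such a source step, assuming two text parameters named ServerName and DatabaseName created via Manage Parameters:

let
    Source = Sql.Database(ServerName, DatabaseName)   // parameters swap environments without code edits
in
    Source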


2. Reusable Filters

Suppose you want to filter a sales table by a minimum sales amount that might change often. Instead of editing the filter step directly each time, you use a parameter called MinSalesThreshold. If business needs change, you update the parameter once, and all queries referencing it update accordingly.


3. Parameter-Driven Queries

You can embed a parameter into custom functions or use it directly in the M query logic (e.g., in Table.SelectRows), making your query logic adaptable without editing M code every time.
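
A minimal sketch of that pattern (the Sales query and column name are assumptions; MinSalesThreshold is the parameter):

let
    Source   = Sales,
    Filtered = Table.SelectRows(Source, each [Sales Amount] >= MinSalesThreshold)   // parameter drives the filter
in
    Filtered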


Modifying Parameters

Once a parameter is created:

  • Return to Manage Parameters in Power Query Editor to change metadata (name, description, type).
  • Update the Current Value to change how queries refresh with different values.
  • If you change suggested values, you may also redefine the list of allowed inputs.

Changing a parameter’s current value typically triggers a refresh of any queries that reference it, applying the new logic or sources immediately.


Best Practices for Parameters

Use clear names and descriptions
Someone else reviewing your model should immediately understand the purpose of each parameter.

Keep parameter types consistent
Ensure the data type matches how you intend to use it in queries to avoid type mismatch errors.

Leverage suggested values
If a parameter is intended to allow only certain options (e.g., environment names), defining suggested values improves usability and reduces errors.

Document your parameters
Include descriptions so others know what each parameter does and why it exists.


How This Appears on the PL-300 Exam

You may see exam scenarios such as:

  • “You need to allow end users to easily switch data sources without editing M code.”
  • “Modify a query so that multiple environments use the same report with different connection settings.”
  • “Filter the data using a parameter and explain how the data refresh behaves.”

To answer confidently:

  • Know how to create and configure parameters.
  • Understand how parameters change the behavior of queries and filtering.
  • Recognize when parameters simplify maintenance and governance.

This skill not only helps you build robust and flexible Power BI solutions but also aligns with the Prepare the data objectives in the PL-300 exam.


Practice Questions

Go to the Practice Exam Questions for this topic.

Identify and Connect to Data Sources or a Shared Semantic Model (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections: 
Prepare the data (25–30%)
--> Get or connect to data
--> Identify and connect to data sources or a shared semantic model


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

One of the first and most foundational skills tested in the PL-300: Microsoft Power BI Data Analyst exam is the ability to identify, select, and connect to appropriate data sources. This skill lives within the Prepare the data domain and underpins everything that follows—modeling, visualization, and analysis.

A Power BI Data Analyst must understand where data lives, how to connect to it efficiently, and when to reuse existing models instead of building new ones.


Understanding Data Source Types in Power BI

Power BI supports a wide variety of data sources, and the exam expects familiarity with common enterprise and self-service scenarios rather than obscure connectors.

Common Data Source Categories

File-based sources

  • Excel workbooks
  • CSV and text files
  • XML and JSON files
  • PDF files (structured tables)

Database sources

  • SQL Server
  • Azure SQL Database
  • Azure Synapse Analytics
  • Oracle
  • MySQL / PostgreSQL

Cloud and SaaS sources

  • SharePoint Online lists and files
  • OneDrive
  • Dataverse
  • Azure Blob Storage
  • Azure Data Lake Storage Gen2

Power Platform and Power BI sources

  • Power BI datasets (shared semantic models)
  • Dataflows
  • Datamarts

Exam tip: You are not expected to memorize every connector—focus on recognizing the correct source for a given scenario.


Choosing the Right Connectivity Mode

When connecting to data, Power BI offers three primary connectivity modes, and the exam frequently tests your understanding of when to use each one.

Import Mode

  • Data is loaded into the Power BI model
  • Fast performance
  • Supports full DAX and modeling capabilities
  • Requires dataset refreshes

Use when:
Data size is manageable and performance is critical.


DirectQuery Mode

  • Queries data directly at the source
  • Minimal data stored in Power BI
  • Performance depends on the source system
  • Some modeling and DAX limitations

Use when:
Data is very large or must remain in the source system.


Live Connection

  • Used with shared semantic models (Power BI datasets) or Analysis Services
  • No local model created
  • Modeling is done in the source dataset

Use when:
Connecting to a centrally governed dataset managed by another team.


Connecting to a Shared Semantic Model

A shared semantic model (formerly called a “dataset”) allows analysts to reuse existing data models instead of duplicating logic.

What Is a Shared Semantic Model?

  • A centrally managed Power BI dataset
  • Contains curated tables, relationships, measures, and security
  • Published to the Power BI Service
  • Designed for reuse across multiple reports

Benefits of Using Shared Semantic Models

  • Consistent metrics and definitions
  • Reduced duplication of logic
  • Centralized governance and security
  • Faster report development

How to Connect

In Power BI Desktop:

  1. Select Get data
  2. Choose Power BI datasets
  3. Select an existing dataset from the workspace
  4. Create a report using a Live connection

Exam tip:
When connected to a shared semantic model:

  • You cannot modify relationships
  • You cannot add calculated tables
  • You can create report-level measures (thin reports)

Authentication and Access Considerations

The exam also expects awareness of authentication methods, especially in enterprise environments.

Common authentication types include:

  • Windows authentication
  • Database authentication
  • Microsoft Entra ID (Azure AD)
  • OAuth

Key concept:
Your ability to connect depends on permissions at the source, not just in Power BI.


Identifying the Best Data Source for a Scenario

You may see exam questions that describe a business requirement and ask you to identify the most appropriate data source or connection method.

Example Scenarios

  • A finance team needs fast, interactive reports → Import mode
  • A large transactional database updated every minute → DirectQuery
  • Multiple teams need consistent KPIs → Shared semantic model
  • Files stored in SharePoint and updated weekly → SharePoint Folder connector

Key Exam Takeaways

For the PL-300 exam, remember the following:

  • Understand major data source categories
  • Know Import vs DirectQuery vs Live connection
  • Recognize when to use shared semantic models
  • Be aware of limitations when using Live connections
  • Choose data sources based on performance, governance, and scale

This topic is less about clicking buttons and more about making the right architectural decision—a critical skill for a Power BI Data Analyst.


Power BI Connection Modes – Comparison Table (PL-300 Exam Focus)

Feature / Aspect | Import | DirectQuery | Live Connection
Where data is stored | Loaded into Power BI dataset | Remains in source system | Remains in source dataset or Analysis Services
Performance | Fastest (in-memory) | Depends on source performance | Depends on source dataset
Data refresh required | Yes (scheduled or manual) | No (queries run live) | No (always live)
Data volume support | Limited by dataset size | Very large datasets | Very large datasets
Modeling allowed | Full modeling supported | Limited modeling | No model changes allowed
Create relationships | Yes | Limited | No
Create calculated tables | Yes | Limited | No
Create calculated columns | Yes | Limited | No
Create measures (DAX) | Yes | Yes (with limitations) | Yes (report-level only)
Power Query transformations | Fully supported | Limited (query folding dependent) | Not supported
Row-level security (RLS) | Supported | Supported | Inherited from source
Typical sources | Excel, CSV, SQL, SharePoint | Large SQL, Azure SQL, Synapse | Power BI datasets, Analysis Services
Best use case | High performance, small-to-medium data | Near-real-time or massive data | Reusing governed enterprise models
Exam frequency | Very high | High | Very high

When to Choose Each Mode (Exam Scenarios)

Import Mode

Choose Import when:

  • Performance is the top priority
  • Data size is manageable
  • Full modeling flexibility is needed

📌 Most common and safest answer on the exam unless constraints are stated.


DirectQuery

Choose DirectQuery when:

  • Data is extremely large
  • Data must stay at the source
  • Near real-time reporting is required

📌 Expect questions about performance trade-offs and modeling limitations.


Live Connection

Choose Live Connection when:

  • Using a shared semantic model
  • Consistent metrics are required across reports
  • Centralized governance is in place

📌 Often paired with “thin report” exam scenarios.


High-Value Exam Tips ⭐

  • If the question mentions enterprise governance or shared datasets → think Live Connection
  • If it mentions large, frequently changing data → think DirectQuery
  • If it mentions fast visuals and full modeling → think Import
  • Live connections do not allow model changes
  • Import mode requires refresh
  • DirectQuery relies on query folding

Practice Exam Questions

Go to the Practice Exam Questions for this section.

Practice Questions: Choose between DirectQuery and Import (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Get or connect to data
--> Choose between DirectQuery and Import


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

Question 1

A Power BI report must deliver the fastest possible visual response. The dataset is moderate in size and refreshed once per day. Which connectivity mode should you choose?

A. DirectQuery
B. Live connection
C. Import
D. Composite model

✅ Correct Answer: C

Explanation:
Import mode stores data in memory, providing the fastest performance and full modeling capabilities.


Question 2

A report must show up-to-the-minute transaction data from a large operational database. Data must remain in the source system. What is the best option?

A. Import
B. DirectQuery
C. Live connection
D. Power BI dataflow

✅ Correct Answer: B

Explanation:
DirectQuery retrieves data directly from the source in real time and avoids importing large datasets.


Question 3

Which limitation is most commonly associated with DirectQuery?

A. No scheduled refresh support
B. Reduced modeling and DAX capabilities
C. Inability to use row-level security
D. Inability to connect to SQL Server

✅ Correct Answer: B

Explanation:
DirectQuery limits certain modeling features, including calculated tables and some DAX expressions.


Question 4

A dataset contains a small product lookup table and a very large fact table that updates continuously. What is the most appropriate solution?

A. Import both tables
B. Use DirectQuery for both tables
C. Use a composite model
D. Use a live connection

✅ Correct Answer: C

Explanation:
Composite models allow importing small static tables while using DirectQuery for large, frequently updated tables.


Question 5

Which factor has the greatest impact on report performance when using DirectQuery?

A. Number of visuals on the page
B. Power BI Desktop version
C. Performance of the source system
D. Dataset refresh frequency

✅ Correct Answer: C

Explanation:
DirectQuery sends queries to the source system, so performance depends heavily on the source’s ability to handle queries.


Question 6

When is Import mode generally not recommended?

A. When modeling flexibility is required
B. When dataset size exceeds practical memory limits
C. When reports need fast interactivity
D. When refresh can occur on a schedule

✅ Correct Answer: B

Explanation:
Very large datasets may exceed memory constraints, making Import impractical.


Question 7

Which statement about data freshness is true?

A. Import mode always shows real-time data
B. DirectQuery requires scheduled refresh
C. Import mode relies on dataset refresh
D. DirectQuery stores data in memory

✅ Correct Answer: C

Explanation:
Import mode displays data as of the last refresh, while DirectQuery retrieves data at query time.


Question 8

A Power BI report must support complex DAX measures and calculated tables. Data updates hourly and does not need real-time accuracy. What should you choose?

A. DirectQuery
B. Import
C. Live connection
D. Streaming dataset

✅ Correct Answer: B

Explanation:
Import mode supports full DAX and modeling flexibility and is appropriate when real-time data is not required.


Question 9

Which scenario is the best candidate for DirectQuery?

A. Monthly financial reporting
B. Historical trend analysis
C. Real-time inventory monitoring
D. Static reference data

✅ Correct Answer: C

Explanation:
Real-time or near-real-time monitoring scenarios are ideal for DirectQuery.


Question 10

Why might a Power BI Data Analyst avoid DirectQuery unless necessary?

A. It cannot connect to cloud data sources
B. It disables report sharing
C. It can negatively impact performance and modeling flexibility
D. It does not support security

✅ Correct Answer: C

Explanation:
DirectQuery introduces performance dependencies on the source system and limits modeling features, making Import preferable when possible.


Exam Readiness Check ✅

You’re well prepared for this PL-300 objective if you can:

  • Identify real-time vs scheduled refresh needs
  • Balance performance vs flexibility
  • Recognize large-scale data scenarios
  • Explain why DirectQuery is chosen—not just when

Go back to the PL-300 Exam Prep Hub main page