
Practice Questions: Identify Computer Vision Workloads (AI-900 Exam Prep)

Practice Questions


Question 1

A retail company wants to automatically assign categories such as shirt, shoes, or hat to product photos uploaded by sellers.

Which type of AI workload is this?

A. Natural language processing
B. Image classification
C. Object detection
D. Anomaly detection

Correct Answer: B

Explanation: Image classification assigns one or more labels to an entire image. In this scenario, each product photo is classified into a category.


Question 2

A city uses traffic cameras to identify vehicles and pedestrians and draw boxes around them in each image.

Which computer vision capability is being used?

A. Image tagging
B. Image classification
C. Object detection
D. OCR

Correct Answer: C

Explanation: Object detection identifies multiple objects within an image and locates them using bounding boxes.


Question 3

A company wants to extract text from scanned invoices and store the text in a database for searching.

Which computer vision workload is required?

A. Image description
B. Optical Character Recognition (OCR)
C. Face detection
D. Language translation

Correct Answer: B

Explanation: OCR is used to extract printed or handwritten text from images or scanned documents.


Question 4

An application analyzes photos and generates captions such as “A group of people standing on a beach.”

Which computer vision capability is this?

A. Image classification
B. Image tagging and description
C. Object detection
D. Video analysis

Correct Answer: B

Explanation: Image tagging and description focuses on understanding the overall content of an image and generating descriptive text.


Question 5

A security system needs to determine whether a human face is present in images captured at building entrances.

Which workload is most appropriate?

A. Facial recognition
B. Face detection
C. Image classification
D. Speech recognition

Correct Answer: B

Explanation: Face detection determines whether a face exists in an image. Identity verification (facial recognition) is not the focus of AI-900.


Question 6

A media company wants to analyze recorded videos to identify scenes, objects, and motion over time.

Which Azure AI workload does this represent?

A. Image classification
B. Video analysis
C. OCR
D. Text analytics

Correct Answer: B

Explanation: Video analysis processes visual data across multiple frames, enabling object detection, motion tracking, and scene analysis.


Question 7

A manufacturing company wants to detect defective products by locating scratches or dents in photos taken on an assembly line.

Which computer vision workload should be used?

A. Image classification
B. Object detection
C. Anomaly detection
D. Natural language processing

Correct Answer: B

Explanation: Object detection can be used to locate defects within an image by identifying specific problem areas.


Question 8

A developer needs to train a model using their own labeled images because prebuilt vision models are not sufficient.

Which Azure AI service is most appropriate?

A. Azure AI Vision
B. Azure AI Video Indexer
C. Azure AI Custom Vision
D. Azure AI Language

Correct Answer: C

Explanation: Azure AI Custom Vision allows users to train custom image classification and object detection models using their own data.


Question 9

Which clue in a scenario most strongly indicates a computer vision workload?

A. Audio recordings are analyzed
B. Large amounts of numerical data are processed
C. Images or videos are the primary input
D. Text documents are translated

Correct Answer: C

Explanation: Computer vision workloads always involve visual input such as images or video.


Question 10

An organization wants to ensure responsible use of AI when analyzing images of people.

Which consideration is most relevant for computer vision workloads?

A. Query performance tuning
B. Data normalization
C. Privacy and consent
D. Indexing strategies

Correct Answer: C

Explanation: Privacy, consent, and bias are key responsible AI considerations when working with images and facial data.


Final Exam Tip

If a question mentions photos, images, scanned documents, cameras, or video, think computer vision first, then determine the specific capability (classification, detection, OCR, or description).


Go to the PL-300 Exam Prep Hub main page.

Identify Computer Vision Workloads (AI-900 Exam Prep)

Overview

Computer vision is a branch of Artificial Intelligence (AI) that enables machines to interpret, analyze, and understand visual information such as images and videos. In the context of the AI-900: Microsoft Azure AI Fundamentals exam, you are not expected to build complex models or write code. Instead, the focus is on recognizing computer vision workloads, understanding what problems they solve, and knowing which Azure AI services are appropriate for each scenario.

This topic falls under:

  • Describe Artificial Intelligence workloads and considerations (15–20%)
    • Identify features of common AI workloads

A strong conceptual understanding here will help you confidently answer many scenario-based exam questions.


What Is a Computer Vision Workload?

A computer vision workload involves extracting meaningful insights from visual data. These workloads allow systems to:

  • Identify objects, people, or text in images
  • Analyze facial features or emotions
  • Understand the content of photos or videos
  • Detect changes, anomalies, or motion

Common inputs include:

  • Images (JPEG, PNG, etc.)
  • Video streams (live or recorded)

Common outputs include:

  • Labels or tags
  • Bounding boxes around detected objects
  • Extracted text
  • Descriptions of image content

Common Computer Vision Use Cases

On the AI-900 exam, computer vision workloads are usually presented as real-world scenarios. Below are the most common ones you should recognize.

Image Classification

What it does: Assigns a category or label to an image.

Example scenarios:

  • Determining whether an image contains a cat, dog, or bird
  • Classifying products in an online store
  • Identifying whether a photo shows food, people, or scenery

Key idea: The entire image is classified as one or more categories.


Object Detection

What it does: Detects and locates multiple objects within an image.

Example scenarios:

  • Detecting cars, pedestrians, and traffic signs in street images
  • Counting people in a room
  • Identifying damaged items in a warehouse

Key idea: Unlike classification, object detection identifies where objects appear using bounding boxes.


Face Detection and Facial Analysis

What it does: Detects human faces and analyzes facial attributes.

Example scenarios:

  • Detecting whether a face is present in an image
  • Estimating age or emotion
  • Identifying facial landmarks (eyes, nose, mouth)

Important exam note:

  • AI-900 focuses on face detection and analysis, not facial recognition for identity verification.
  • Be aware of ethical and privacy considerations when working with facial data.

Optical Character Recognition (OCR)

What it does: Extracts printed or handwritten text from images and documents.

Example scenarios:

  • Reading text from scanned documents
  • Extracting information from receipts or invoices
  • Recognizing license plate numbers

Key idea: OCR turns unstructured visual text into machine-readable text.


Image Description and Tagging

What it does: Generates descriptive text or tags that summarize image content.

Example scenarios:

  • Automatically tagging photos in a digital library
  • Creating alt text for accessibility
  • Generating captions for images

Key idea: This workload focuses on understanding the overall context of an image rather than specific objects.


Video Analysis

What it does: Analyzes video content frame by frame.

Example scenarios:

  • Detecting motion or anomalies in security footage
  • Tracking objects over time
  • Summarizing video content

Key idea: Video analysis extends image analysis across time, not just a single frame.


Azure Services Commonly Associated with Computer Vision

For the AI-900 exam, you should recognize which Azure AI services support computer vision workloads at a high level.

Azure AI Vision

Supports:

  • Image analysis
  • Object detection
  • OCR
  • Face detection
  • Image tagging and description

This is the most commonly referenced service for computer vision scenarios on the exam.
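
The AI-900 exam never asks you to write code, but seeing a single request can make the capabilities above more concrete. The sketch below is a minimal illustration, assuming a Python environment with the requests package, a placeholder Azure AI Vision endpoint and key, and the classic v3.2 Image Analysis REST path; check the current Azure documentation for the exact endpoint and API version of your resource.

```python
import requests

# Placeholder values for an Azure AI Vision resource; substitute your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"

def describe_image(image_url: str) -> None:
    """Request tags and a caption for one image (v3.2 Image Analysis REST API)."""
    url = f"{ENDPOINT}/vision/v3.2/analyze"
    params = {"visualFeatures": "Description,Tags"}
    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    response = requests.post(url, params=params, headers=headers, json={"url": image_url})
    response.raise_for_status()
    result = response.json()

    # The description feature returns candidate captions; tags summarize the content.
    captions = result.get("description", {}).get("captions", [])
    if captions:
        print("Caption:", captions[0]["text"])
    for tag in result.get("tags", []):
        print(f"Tag: {tag['name']} (confidence {tag['confidence']:.2f})")

if __name__ == "__main__":
    describe_image("https://example.com/sample-photo.jpg")
```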


Azure AI Custom Vision

Supports:

  • Custom image classification
  • Custom object detection

Used when prebuilt models are not sufficient and you need to train a model using your own images.


Azure AI Video Indexer

Supports:

  • Video analysis
  • Object, face, and scene detection in videos

Typically appears in scenarios involving video content.


How Computer Vision Differs from Other AI Workloads

Understanding what is not computer vision is just as important on the exam.

AI Workload Type | Focus Area
Computer Vision | Images and videos
Natural Language Processing | Text and speech
Speech AI | Audio and voice
Anomaly Detection | Patterns in numerical or time-series data

Exam tip: If the input data is visual (images or video), you are almost certainly dealing with a computer vision workload.
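
If it helps to internalize that rule, the toy function below restates the exam tip as code. The keyword lists are arbitrary study-aid examples rather than an official Microsoft taxonomy, and nothing like this appears on the exam.

```python
def classify_ai_workload(scenario: str) -> str:
    """Toy heuristic mirroring the exam tip: look at the input data type first."""
    text = scenario.lower()
    visual = ("image", "photo", "video", "camera", "scanned")
    audio = ("audio", "voice", "speech", "recording")
    language = ("text", "document", "translate", "sentiment")
    if any(word in text for word in visual):
        return "Computer vision"
    if any(word in text for word in audio):
        return "Speech AI"
    if any(word in text for word in language):
        return "Natural language processing"
    return "Another workload (e.g., anomaly detection on numeric data)"

print(classify_ai_workload("Traffic cameras capture images of vehicles"))  # Computer vision
print(classify_ai_workload("Analyze call-center audio recordings"))        # Speech AI
```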


Responsible AI Considerations

Microsoft emphasizes responsible AI, and AI-900 includes high-level awareness of these principles.

For computer vision workloads, key considerations include:

  • Privacy and consent when capturing images or video
  • Avoiding bias in facial analysis
  • Transparency in how visual data is collected and used

You will not be tested on implementation details, but you may see conceptual questions about ethical use.


Exam Tips for Identifying Computer Vision Workloads

  • Focus on keywords like image, photo, video, camera, scanned document
  • Look for actions such as detect, recognize, classify, extract text
  • Match the scenario to the simplest appropriate workload
  • Remember: AI-900 tests understanding, not coding

Summary

To succeed on the AI-900 exam, you should be able to:

  • Recognize when a problem is a computer vision workload
  • Identify common use cases such as image classification, object detection, and OCR
  • Understand which Azure AI services are commonly used
  • Distinguish computer vision from other AI workloads

Mastering this topic will give you a strong foundation for many questions in the Describe Artificial Intelligence workloads and considerations domain.


Go to the Practice Exam Questions for this topic.

Go to the PL-300 Exam Prep Hub main page.

PL-300: Microsoft Power BI Data Analyst certification exam – Frequently Asked Questions (FAQs)

Below are some commonly asked questions about the PL-300: Microsoft Power BI Data Analyst certification exam. Upon successfully passing this exam, you earn the Microsoft Certified: Power BI Data Analyst Associate certification.


What is the PL-300 certification exam?

The PL-300: Microsoft Power BI Data Analyst exam validates your ability to prepare, model, visualize, analyze, and secure data using Microsoft Power BI.

Candidates who pass the exam demonstrate proficiency in:

  • Connecting to and transforming data from multiple sources
  • Designing and building efficient data models
  • Creating compelling and insightful reports and dashboards
  • Applying DAX calculations and measures
  • Implementing security, governance, and deployment best practices in Power BI

This certification is designed for professionals who work with data and use Power BI to deliver business insights. Upon successfully passing this exam, candidates earn the Microsoft Certified: Power BI Data Analyst Associate certification.


Is the PL-300 certification exam worth it?

The short answer is yes.

Preparing for the PL-300 exam provides significant value, even beyond the certification itself. The study process exposes you to Power BI features, patterns, and best practices that you may not encounter in day-to-day work. This often results in:

  • Stronger data modeling and DAX skills
  • Better-performing and more maintainable Power BI solutions
  • Increased confidence when designing analytics solutions
  • Greater credibility with stakeholders, employers, and clients

For many professionals, the exam also serves as a structured learning path that fills in knowledge gaps and reinforces real-world experience.


How many questions are on the PL-300 exam?

The PL-300 exam typically contains between 40 and 60 questions.

The questions may appear in several formats, including:

  • Single-choice and multiple-choice questions
  • Multi-select questions
  • Drag-and-drop or matching questions
  • Case studies with multiple questions

The exact number and format can vary slightly from exam to exam.


How hard is the PL-300 exam?

The PL-300 exam is considered moderately to highly challenging, especially for candidates without hands-on Power BI experience.

The difficulty comes from:

  • The breadth of topics covered
  • Scenario-based questions that test applied knowledge
  • Time pressure during the exam

However, the challenge is also what gives the certification its value. With proper preparation and practice, the exam is very achievable.

Helpful preparation resources include:


How much does the PL-300 certification exam cost?

As of January 1, 2026, the standard exam pricing is:

  • United States: $165 USD
  • Australia: $140 USD
  • Canada: $140 USD
  • India: $4,865 INR
  • China: $83 USD
  • United Kingdom: £106 GBP
  • Other countries: Pricing varies based on country and region

Microsoft occasionally offers discounts, student pricing, or exam vouchers, so it is worth checking the official Microsoft certification site before scheduling your exam.


How do I prepare for the Microsoft PL-300 certification exam?

The most important piece of advice is simple: do not rush to sit the exam. Take time to cover all topic areas thoroughly first.

Recommended preparation steps:

  1. Review the official PL-300 exam skills outline.
  2. Complete the free Microsoft Learn PL-300 learning path.
  3. Practice building Power BI reports end-to-end using real or sample data.
  4. Strengthen weak areas such as DAX, data modeling, or security.
  5. Take practice exams to validate your readiness. Microsoft Learn offers an official PL-300 practice assessment, and there are 2 practice exams available on The Data Community’s PL-300 Exam Prep Hub.

Additional learning resources include:

Hands-on experience with Power BI Desktop and the Power BI Service is essential.


How do I pass the PL-300 exam?

To maximize your chances of passing:

  • Focus on understanding concepts, not memorization
  • Practice common Power BI patterns and scenarios
  • Pay close attention to question wording during the exam
  • Manage your time carefully and avoid spending too long on a single question

Consistently scoring well on reputable practice exams is usually a good indicator that you are ready for the real exam.


What is the best site for PL-300 certification dumps?

Using exam dumps is not recommended and may violate Microsoft’s exam policies.

Instead, use legitimate preparation resources such as:

Legitimate practice materials help you build real skills that are valuable beyond the exam itself.


How long should I study for the PL-300 exam?

Study time varies depending on your background and experience.

General guidelines:

  • Experienced Power BI users: 4–6 weeks of focused preparation
  • Moderate experience: 6–8 weeks of focused preparation
  • Beginners or limited experience: 8–12 weeks or more of focused preparation

Rather than focusing on time alone (it varies widely with your background and experience), aim to fully understand all exam topics and perform well on practice exams before scheduling the test.


Where can I find training or a course for the PL-300 exam?

Training options include:

  • Microsoft Learn: Free, official learning path
  • Online learning platforms: Udemy, Coursera, and similar providers
  • YouTube: Free playlists and walkthroughs covering PL-300 topics
  • Subscription platforms: Datacamp and others offering Power BI courses
  • Microsoft partners: Instructor-led and enterprise-focused training

A combination of structured learning and hands-on practice tends to work best.


What skills should I have before taking the PL-300 exam?

Before attempting the exam, you should be comfortable with:

  • Basic data concepts (tables, relationships, measures)
  • Power BI Desktop and Power BI Service
  • Power Query for data transformation
  • DAX fundamentals
  • Basic understanding of data modeling and analytics concepts

You do not need to be an expert in all areas, but hands-on familiarity is important.


What score do I need to pass the PL-300 exam?

Microsoft exams are scored on a scale of 1–1000, and a score of 700 or higher is required to pass.

The score is scaled, meaning it is based on question difficulty rather than a simple percentage of correct answers.


How long is the PL-300 exam?

You are given approximately 120 minutes to complete the exam, including time to review instructions and case studies.

Time management is very important, especially for scenario-based questions.


How long is the PL-300 certification valid?

The Microsoft Certified: Power BI Data Analyst Associate certification is valid for one year.

To maintain your certification, you must complete a free online renewal assessment before the expiration date.


Is PL-300 suitable for beginners?

PL-300 is beginner-friendly in structure but assumes some hands-on experience.

Beginners can absolutely pass the exam, but they should expect to spend additional time practicing with Power BI and learning foundational concepts.


What roles benefit most from the PL-300 certification?

The PL-300 certification is especially valuable for:

  • Data Analysts
  • Business Intelligence Developers
  • Reporting and Analytics Professionals
  • Data Engineers working with Power BI
  • Consultants and Power BI practitioners

It is also useful for professionals transitioning into analytics-focused roles.


What languages is the PL-300 exam offered in?

The PL-300 certification exam is offered in the following languages:

English, Japanese, Chinese (Simplified), Korean, German, French, Spanish, Portuguese (Brazil), Chinese (Traditional), Italian


Have additional questions? Post them in the comments.

Good luck on your data journey!

Glossary – 100 “Data Science” Terms

Below is a glossary that includes 100 “Data Science” terms and phrases, along with their definitions and examples, in alphabetical order. Enjoy!

Term | Definition & Example
A/B Testing | Comparing two variants. Example: Website layout test.
Accuracy | Overall correct predictions rate. Example: 90% accuracy.
Actionable Insight | Insight leading to action. Example: Improve onboarding.
Algorithm | Procedure used to train models. Example: Decision trees.
Alternative Hypothesis | Assumption opposing the null hypothesis. Example: Group A performs better than B.
AUC | Area under ROC curve. Example: Model ranking metric.
Bayesian Inference | Updating probabilities with new evidence. Example: Prior and posterior beliefs.
Bias-Variance Tradeoff | Balance between simplicity and flexibility. Example: Model tuning.
Bootstrapping | Resampling technique for estimation. Example: Estimating confidence intervals.
Business Problem | Decision-focused question. Example: Why churn increased.
Causation | One variable directly affects another. Example: Price drop causes sales increase.
Classification | Predicting categories. Example: Spam detection.
Clustering | Grouping similar observations. Example: Market segmentation.
Computer Vision | Interpreting images and video. Example: Image classification.
Confidence Interval | Range likely containing the true value. Example: 95% CI for average revenue.
Confusion Matrix | Table evaluating classification results. Example: True positives vs false positives.
Correlation | Strength of relationship between variables. Example: Ad spend vs revenue.
Cross-Validation | Repeated training/testing splits. Example: k-fold CV.
Data Drift | Change in input data distribution. Example: New demographics.
Data Imputation | Replacing missing values. Example: Median imputation.
Data Leakage | Training model with future information. Example: Using post-event data.
Data Science | Interdisciplinary field combining statistics, programming, and domain knowledge to extract insights from data. Example: Predicting customer churn.
Data Storytelling | Communicating insights effectively. Example: Executive dashboards.
Dataset | A structured collection of data for analysis. Example: Customer transactions table.
Deep Learning | Multi-layer neural networks. Example: Speech recognition.
Descriptive Statistics | Summary statistics of data. Example: Mean, median.
Dimensionality Reduction | Reducing number of features. Example: PCA.
Effect Size | Magnitude of difference or relationship. Example: Lift in conversion rate.
Ensemble Learning | Combining multiple models. Example: Boosting techniques.
Ethics in Data Science | Responsible use of data and models. Example: Avoiding biased predictions.
Experimentation | Testing hypotheses with data. Example: A/B testing.
Explainable AI (XAI) | Techniques to explain predictions. Example: SHAP values.
Exploratory Data Analysis (EDA) | Initial data investigation using statistics and visuals. Example: Distribution plots.
F1 Score | Balance of precision and recall. Example: Imbalanced datasets.
Feature | An input variable used in modeling. Example: Customer age.
Feature Engineering | Creating new features from raw data. Example: Tenure calculated from signup date.
Forecasting | Predicting future values. Example: Demand forecasting.
Generalization | Model performance on unseen data. Example: Stable test accuracy.
Hazard Function | Instantaneous event rate. Example: Churn risk over time.
Holdout Set | Data reserved for final evaluation. Example: Final test dataset.
Hyperparameter | Pre-set model configuration. Example: Learning rate.
Hypothesis | A testable assumption about data. Example: Discounts increase conversion rates.
Hypothesis Testing | Statistical method to evaluate assumptions. Example: t-test for average sales.
Insight | Meaningful analytical finding. Example: High churn among new users.
Label | Known output used in supervised learning. Example: Fraud or not fraud.
Likelihood | Probability of data given parameters. Example: Used in Bayesian models.
Loss Function | Measures prediction error. Example: Mean squared error.
Mean | Arithmetic average. Example: Average sales value.
Median | Middle value of ordered data. Example: Median income.
Missing Values | Absent data points. Example: Null customer age.
Mode | Most frequent value. Example: Most common category.
Model | Mathematical representation learned from data. Example: Logistic regression.
Model Drift | Performance degradation over time. Example: Changing customer behavior.
Model Interpretability | Understanding model decisions. Example: Feature importance.
Monte Carlo Simulation | Random sampling to model uncertainty. Example: Risk modeling.
Natural Language Processing (NLP) | Analyzing human language. Example: Sentiment analysis.
Neural Network | Model inspired by the human brain. Example: Image recognition.
Null Hypothesis | Default assumption of no effect. Example: No difference between two groups.
Optimization | Process of minimizing loss. Example: Gradient descent.
Outlier | Value significantly different from others. Example: Unusually large purchase.
Overfitting | Model memorizes training data. Example: Poor test performance.
Pipeline | End-to-end data science workflow. Example: Ingest → train → deploy.
Population | Entire group of interest. Example: All customers.
Posterior Probability | Updated belief after observing data. Example: Updated churn likelihood.
Precision | Correct positive prediction rate. Example: Fraud detection precision.
Principal Component Analysis (PCA) | Linear dimensionality reduction technique. Example: Visualizing high-dimensional data.
Prior Probability | Initial belief before observing data. Example: Baseline churn rate.
p-value | Probability of observing results under the null hypothesis. Example: p < 0.05 indicates significance.
Recall | Ability to identify all positives. Example: Medical diagnosis.
Regression | Predicting numeric values. Example: Sales forecasting.
Reinforcement Learning | Learning via rewards and penalties. Example: Game-playing AI.
Reproducibility | Ability to recreate results. Example: Fixed random seeds.
ROC Curve | Classifier performance visualization. Example: Threshold comparison.
Sampling | Selecting subset of data. Example: Survey sample.
Sampling Bias | Non-representative sampling. Example: Surveying only active users.
Seasonality | Repeating time-based patterns. Example: Holiday sales.
Semi-Structured Data | Data with flexible structure. Example: JSON files.
Stacking | Ensemble method using meta-models. Example: Combining classifiers.
Standard Deviation | Average distance from the mean. Example: Price volatility.
Stationarity | Stable statistical properties over time. Example: Mean doesn’t change.
Statistical Power | Probability of detecting a true effect. Example: Larger sample sizes increase power.
Statistical Significance | Evidence results are unlikely due to chance. Example: Rejecting the null hypothesis.
Structured Data | Data with a fixed schema. Example: SQL tables.
Supervised Learning | Learning with labeled data. Example: Credit risk prediction.
Survival Analysis | Modeling time-to-event data. Example: Customer churn timing.
Target Variable | The outcome a model predicts. Example: Loan default indicator.
Test Data | Data used to evaluate model performance. Example: Held-out validation set.
Text Mining | Extracting insights from text. Example: Topic modeling.
Time Series | Data indexed by time. Example: Daily stock prices.
Tokenization | Splitting text into units. Example: Words or subwords.
Training Data | Data used to train a model. Example: Historical transactions.
Transfer Learning | Reusing pretrained models. Example: Image models for medical scans.
Trend | Long-term direction in data. Example: Growing user base.
Underfitting | Model too simple to capture patterns. Example: High bias.
Unstructured Data | Data without predefined structure. Example: Text, images.
Unsupervised Learning | Learning without labels. Example: Customer clustering.
Uplift Modeling | Measuring treatment impact. Example: Marketing campaign effectiveness.
Validation Set | Data used for tuning models. Example: Hyperparameter selection.
Variance | Measure of data spread. Example: Sales variability.
Word Embeddings | Numerical text representations. Example: Word2Vec.

Exam Prep Hub for PL-300: Microsoft Power BI Data Analyst

Welcome to the one-stop hub with information for preparing for the PL-300: Microsoft Power BI Data Analyst certification exam. Upon successful completion of the exam, you earn the Microsoft Certified: Power BI Data Analyst Associate certification.

This hub provides information directly here (topic-by-topic), links to a number of external resources, tips for preparing for the exam, practice tests, and section questions to help you prepare. Bookmark this page and use it as a guide to ensure that you are fully covering all relevant topics for the PL-300 exam and making use of as many of the resources available as possible.


Skills tested at a glance (as specified in the official study guide)

  • Prepare the data (25–30%)
  • Model the data (25–30%)
  • Visualize and analyze the data (25–30%)
  • Manage and secure Power BI (15–20%)

Click each hyperlinked topic below to go to the preparation content and practice questions for that topic. There are also 2 practice exams provided below.

Prepare the data (25–30%)

Get or connect to data

Profile and clean the data

Transform and load the data

Model the data (25–30%)

Design and implement a data model

Create model calculations by using DAX

Optimize model performance

Visualize and analyze the data (25–30%)

Create reports

Enhance reports for usability and storytelling

Identify patterns and trends

Manage and secure Power BI (15–20%)

Create and manage workspaces and assets

Secure and govern Power BI items


Practice Exams

We have provided 2 practice exams (with answer keys) to help you prepare:


Important PL-300 Resources

To Do’s:

  • Schedule time to learn, study, perform labs, and do practice exams and questions
  • Schedule the exam for when you think you will be ready; a scheduled date gives you a target and keeps you working toward it, and it can usually be rescheduled under the provider’s rules.
  • Use the various resources above and below to learn
  • Take the free Microsoft Learn practice test, any other available practice tests, and do the practice questions in each section and the two practice tests available on this hub.

Good luck to you passing the PL-300: Microsoft Power BI Data Analyst certification exam and earning the Microsoft Certified: Power BI Data Analyst Associate certification!

Publish, Import, or Update Items in a Workspace (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Manage and secure Power BI (15–20%)
--> Create and manage workspaces and assets
--> Publish, Import, or Update Items in a Workspace


There are 10 practice questions (with answers and explanations) for each topic, including this one. There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Overview

Power BI workspaces are the central location for managing and collaborating on Power BI assets such as reports, semantic models (datasets), dashboards, dataflows, and apps.
For the PL-300 exam, you are expected to understand how content gets into a workspace, how it is updated, and how different publishing and import options affect governance, collaboration, and security.


What Are Workspace Items?

Common items managed within a Power BI workspace include:

  • Reports
  • Semantic models (datasets)
  • Dashboards
  • Dataflows
  • Paginated reports
  • Apps

Knowing how these items are published, imported, and updated is a core administrative and lifecycle skill tested on the exam.


Publishing Items to a Workspace

Publish from Power BI Desktop

The most common way to publish content is from Power BI Desktop:

  • You publish a .pbix file
  • A report and semantic model are created (or updated) in the workspace
  • Requires Contributor, Member, or Admin role

Key exam point:

  • Publishing a PBIX overwrites any existing report and semantic model with the same name in the target workspace

Publish to Different Workspaces

When publishing from Power BI Desktop, you can:

  • Choose the target workspace
  • Publish to My Workspace or a shared workspace
  • Publish the same PBIX to multiple workspaces (e.g., Dev, Test, Prod)

This supports deployment and lifecycle management scenarios.
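
Publishing is normally done interactively from Power BI Desktop, but the same upload can be scripted. The sketch below assumes an Azure AD access token with Power BI API permissions, a target workspace ID, and a local PBIX file, and uses the Imports endpoint of the Power BI REST API; the nameConflict parameter is what controls the overwrite behavior noted above. Treat it as an illustrative sketch rather than a production deployment script.

```python
import requests

# Assumed inputs (placeholders): an Azure AD access token with Power BI scope,
# the target workspace (group) ID, and a local PBIX file to publish.
ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "<workspace-guid>"
PBIX_PATH = "SalesReport.pbix"

def publish_pbix(dataset_display_name: str) -> None:
    """Upload a PBIX to a workspace via the Power BI REST API Imports endpoint."""
    url = (
        f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/imports"
        f"?datasetDisplayName={dataset_display_name}&nameConflict=CreateOrOverwrite"
    )
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    with open(PBIX_PATH, "rb") as pbix:
        files = {"file": (PBIX_PATH, pbix)}  # multipart/form-data upload
        response = requests.post(url, headers=headers, files=files)
    response.raise_for_status()
    print("Import accepted:", response.json().get("id"))

if __name__ == "__main__":
    publish_pbix("Sales Report")
```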


Importing Items into a Workspace

Import from Power BI Service

You can import content directly into a workspace using:

  • Upload a file (PBIX, Excel, JSON theme files)
  • Import from OneDrive or SharePoint
  • Import from another workspace (via reuse or copy)

Imported content becomes a managed workspace asset, subject to workspace permissions.


Import from External Sources

You can import:

  • Excel workbooks (creates reports and datasets)
  • Paginated report files (.rdl)
  • Power BI templates (.pbit)

Exam note:

  • Imported items behave similarly to published items but may require credential configuration after import.

Updating Items in a Workspace

Updating Reports and Semantic Models

Common update methods include:

  • Republish the PBIX from Power BI Desktop
  • Replace the dataset connection
  • Modify report visuals in the Power BI Service (if permitted)

Important behavior:

  • Republishing replaces the existing version
  • App users will not see updates until the workspace app is updated

Updating Dataflows

Dataflows can be:

  • Edited directly in the Power BI Service
  • Refreshed manually or on a schedule
  • Reused across multiple datasets

This supports centralized data preparation.


Updating Paginated Reports

Paginated reports can be updated by:

  • Uploading a revised .rdl file
  • Editing via Power BI Report Builder
  • Republishing to the same workspace

Permissions and Roles Impacting Publishing

Workspace roles determine what actions users can take:

Role | Publish | Import | Update
Viewer | No | No | No
Contributor | Yes | Yes | Yes (limited)
Member | Yes | Yes | Yes
Admin | Yes | Yes | Yes

Exam focus:

  • Viewers cannot publish or update
  • Contributors cannot manage workspace settings or apps

Publishing vs Importing: Key Differences

Action | Publish | Import
Source | Power BI Desktop | Service or external files
Creates dataset | Yes | Yes
Overwrites content | Yes (same name) | Depends
Common use | Development lifecycle | Content onboarding

Common Exam Scenarios

You may be asked:

  • How to move reports between environments
  • Who can publish or update content
  • What happens when a PBIX is republished
  • How imported content behaves in a workspace
  • How updates affect workspace apps

If the question mentions content lifecycle, governance, or collaboration, it is likely testing this topic.


Best Practices to Remember for PL-300

  • Use workspaces for collaboration and asset management
  • Publish from Power BI Desktop for controlled updates
  • Import external files when onboarding content
  • Use separate workspaces for Dev/Test/Prod
  • Remember that apps require manual updates
  • Assign appropriate workspace roles

Summary

Publishing, importing, and updating items in a workspace is fundamental to managing Power BI solutions at scale. For the PL-300 exam, focus on:

  • How content enters a workspace
  • Who can manage it
  • How updates are controlled
  • How changes affect downstream users

Understanding these workflows ensures you can design secure, maintainable, and enterprise-ready Power BI environments.


Practice Questions

Go to the Practice Questions for this topic.

Practice Questions: Choose between DirectQuery and Import (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Get or connect to data
--> Choose between DirectQuery and Import


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

Question 1

A Power BI report must deliver the fastest possible visual response. The dataset is moderate in size and refreshed once per day. Which connectivity mode should you choose?

A. DirectQuery
B. Live connection
C. Import
D. Composite model

✅ Correct Answer: C

Explanation:
Import mode stores data in memory, providing the fastest performance and full modeling capabilities.


Question 2

A report must show up-to-the-minute transaction data from a large operational database. Data must remain in the source system. What is the best option?

A. Import
B. DirectQuery
C. Live connection
D. Power BI dataflow

✅ Correct Answer: B

Explanation:
DirectQuery retrieves data directly from the source in real time and avoids importing large datasets.


Question 3

Which limitation is most commonly associated with DirectQuery?

A. No scheduled refresh support
B. Reduced modeling and DAX capabilities
C. Inability to use row-level security
D. Inability to connect to SQL Server

✅ Correct Answer: B

Explanation:
DirectQuery limits certain modeling features, including calculated tables and some DAX expressions.


Question 4

A dataset contains a small product lookup table and a very large fact table that updates continuously. What is the most appropriate solution?

A. Import both tables
B. Use DirectQuery for both tables
C. Use a composite model
D. Use a live connection

✅ Correct Answer: C

Explanation:
Composite models allow importing small static tables while using DirectQuery for large, frequently updated tables.


Question 5

Which factor has the greatest impact on report performance when using DirectQuery?

A. Number of visuals on the page
B. Power BI Desktop version
C. Performance of the source system
D. Dataset refresh frequency

✅ Correct Answer: C

Explanation:
DirectQuery sends queries to the source system, so performance depends heavily on the source’s ability to handle queries.


Question 6

When is Import mode generally not recommended?

A. When modeling flexibility is required
B. When dataset size exceeds practical memory limits
C. When reports need fast interactivity
D. When refresh can occur on a schedule

✅ Correct Answer: B

Explanation:
Very large datasets may exceed memory constraints, making Import impractical.


Question 7

Which statement about data freshness is true?

A. Import mode always shows real-time data
B. DirectQuery requires scheduled refresh
C. Import mode relies on dataset refresh
D. DirectQuery stores data in memory

✅ Correct Answer: C

Explanation:
Import mode displays data as of the last refresh, while DirectQuery retrieves data at query time.


Question 8

A Power BI report must support complex DAX measures and calculated tables. Data updates hourly and does not need real-time accuracy. What should you choose?

A. DirectQuery
B. Import
C. Live connection
D. Streaming dataset

✅ Correct Answer: B

Explanation:
Import mode supports full DAX and modeling flexibility and is appropriate when real-time data is not required.


Question 9

Which scenario is the best candidate for DirectQuery?

A. Monthly financial reporting
B. Historical trend analysis
C. Real-time inventory monitoring
D. Static reference data

✅ Correct Answer: C

Explanation:
Real-time or near-real-time monitoring scenarios are ideal for DirectQuery.


Question 10

Why might a Power BI Data Analyst avoid DirectQuery unless necessary?

A. It cannot connect to cloud data sources
B. It disables report sharing
C. It can negatively impact performance and modeling flexibility
D. It does not support security

✅ Correct Answer: C

Explanation:
DirectQuery introduces performance dependencies on the source system and limits modeling features, making Import preferable when possible.


Exam Readiness Check ✅

You’re well prepared for this PL-300 objective if you can:

  • Identify real-time vs scheduled refresh needs
  • Balance performance vs flexibility
  • Recognize large-scale data scenarios
  • Explain why DirectQuery is chosen—not just when

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Convert Semi-Structured Data to a Table (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Convert Semi-Structured Data to a Table


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions


Question 1

You load a JSON file into Power BI. The resulting table contains a single column where each row shows List. What is the first step to analyze the data?

A. Expand the column
B. Convert the list to a table
C. Promote headers
D. Split the column by delimiter

Correct Answer: B

Explanation:
Lists must be converted into tables before they can be expanded or analyzed as rows.


Question 2

A column in Power Query displays Record in each row. What does this indicate?

A. The column contains duplicated values
B. The column contains nested structured fields
C. The column contains multiple rows per record
D. The column contains untyped data

Correct Answer: B

Explanation:
A Record represents a nested structure with named fields that can be expanded into columns.


Question 3

Which Power Query action is used to expose fields stored inside a record?

A. Convert to Table
B. Pivot Column
C. Expand Column
D. Transpose Table

Correct Answer: C

Explanation:
Expanding a record reveals its internal fields as individual columns.


Question 4

An API response loads as a table with a column containing lists of values. What is the correct transformation sequence?

A. Expand → Promote Headers
B. Convert to Table → Expand
C. Split Column → Fill Down
D. Group By → Expand

Correct Answer: B

Explanation:
Lists must be converted into tables first, after which they can be expanded.


Question 5

After expanding nested data, you notice duplicate rows in your fact table. What is the most likely cause?

A. Incorrect data type
B. Expanding without understanding data granularity
C. Missing relationships
D. Failure to promote headers

Correct Answer: B

Explanation:
Expanding nested structures without considering the grain can duplicate rows and inflate fact tables.


Question 6

You import an Excel file where headers appear in multiple rows instead of a single row. What is the most appropriate approach?

A. Expand the column
B. Convert the table to a list
C. Transpose the table and promote headers
D. Group rows by column

Correct Answer: C

Explanation:
Transposing realigns rows and columns so headers can be promoted properly.


Question 7

Which Power Query feature is most useful when category labels appear only once and apply to multiple rows below?

A. Replace Values
B. Fill Down
C. Unpivot Columns
D. Merge Queries

Correct Answer: B

Explanation:
Fill Down propagates header or category values to related rows, common in semi-structured spreadsheets.


Question 8

Why is it recommended to expand only required fields when converting semi-structured data?

A. To reduce report refresh frequency
B. To improve visual formatting
C. To reduce model size and complexity
D. To enable DirectQuery mode

Correct Answer: C

Explanation:
Expanding unnecessary fields increases model size and can negatively impact performance and usability.


Question 9

Which transformation should be completed before creating relationships in the data model?

A. Creating measures
B. Flattening semi-structured data
C. Formatting visuals
D. Applying row-level security

Correct Answer: B

Explanation:
Relationships require clean, tabular data. Semi-structured data must be flattened first.


Question 10

Which statement best reflects a PL-300 best practice for handling semi-structured data?

A. Leave nested data unexpanded until report creation
B. Use DAX to flatten semi-structured data
C. Normalize and flatten data in Power Query
D. Always transpose semi-structured tables

Correct Answer: C

Explanation:
Power Query is the correct place to normalize and flatten semi-structured data before modeling and analysis.


Final Exam Tips for This Topic

  • Recognize lists vs records vs tables (see the flattening sketch after this list)
  • Lists → Convert to table
  • Records → Expand
  • Inspect data grain before expanding
  • Clean data before flattening
  • This topic is about recognition and transformation choices, not memorizing UI clicks
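
This hub does not include Power Query M samples, but the same flatten-then-model idea can be illustrated in pandas. The sketch below uses invented sample data, and json_normalize plays the role of Convert to Table plus Expand: nested lists become rows, nested records become columns, and the grain changes to one row per item (the duplication concern from Question 5).

```python
import pandas as pd

# Invented API-style response: each order is a record, and "items" is a nested list,
# mirroring the List/Record values you would see in Power Query.
orders = [
    {"order_id": 1, "customer": {"name": "Avery", "country": "US"},
     "items": [{"sku": "A-100", "qty": 2}, {"sku": "B-200", "qty": 1}]},
    {"order_id": 2, "customer": {"name": "Blake", "country": "CA"},
     "items": [{"sku": "A-100", "qty": 5}]},
]

# record_path expands the nested list into rows; meta keeps the parent fields and
# flattens the nested customer record into columns. The result is one row per item.
flat = pd.json_normalize(
    orders,
    record_path="items",
    meta=["order_id", ["customer", "name"], ["customer", "country"]],
)
print(flat)
```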

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Create Fact Tables and Dimension Tables (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Create Fact Tables and Dimension Tables


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions


Question 1

A table contains SalesAmount, Quantity, ProductName, ProductCategory, CustomerName, and OrderDate. Which columns should remain in the fact table?

A. ProductName, ProductCategory
B. CustomerName, OrderDate
C. SalesAmount, Quantity
D. ProductName, CustomerName

Correct Answer: C

Explanation:
Fact tables store numeric measures that are aggregated, such as SalesAmount and Quantity. Descriptive attributes belong in dimension tables.


Question 2

What is the primary purpose of a dimension table?

A. Store transaction-level data
B. Provide descriptive context for facts
C. Improve visual formatting
D. Store calculated measures

Correct Answer: B

Explanation:
Dimension tables provide descriptive attributes (such as names, categories, and dates) that are used to filter and group fact data.


Question 3

Which relationship type is most appropriate between a dimension table and a fact table?

A. Many-to-many
B. One-to-one
C. One-to-many
D. Bi-directional

Correct Answer: C

Explanation:
A dimension table contains unique keys, while the fact table contains repeated foreign keys, creating a one-to-many relationship.


Question 4

You create a Product dimension table but forget to remove duplicate ProductID values. What issue is most likely?

A. Measures will return blank values
B. Relationships cannot be created correctly
C. Visuals will fail to render
D. DAX functions will not work

Correct Answer: B

Explanation:
Dimension tables must have unique key values. Duplicates prevent proper one-to-many relationships.


Question 5

Which schema design is recommended by Microsoft for Power BI models?

A. Snowflake schema
B. Flat table schema
C. Galaxy schema
D. Star schema

Correct Answer: D

Explanation:
The star schema is recommended for performance, simplicity, and easier DAX calculations in Power BI.


Question 6

Where should fact and dimension tables typically be created?

A. In DAX measures
B. In Power Query during data preparation
C. In visuals after loading data
D. In the Power BI Service

Correct Answer: B

Explanation:
Fact and dimension tables should be shaped in Power Query before loading into the data model.


Question 7

A model uses the same Date table for Order Date and Ship Date. What type of dimension is this?

A. Slowly changing dimension
B. Degenerate dimension
C. Role-playing dimension
D. Bridge table

Correct Answer: C

Explanation:
A role-playing dimension is used multiple times in different roles, such as Order Date and Ship Date.


Question 8

Which is a valid reason not to split a dataset into fact and dimension tables?

A. The dataset is extremely small and static
B. The dataset contains numeric measures
C. The model requires relationships
D. The data will be refreshed regularly

Correct Answer: A

Explanation:
For very small or simple datasets, splitting into facts and dimensions may add unnecessary complexity.


Question 9

What is the primary performance benefit of separating fact and dimension tables?

A. Faster visual rendering due to fewer measures
B. Reduced memory usage and simpler filter paths
C. Automatic indexing of columns
D. Improved DirectQuery support

Correct Answer: B

Explanation:
Star schemas reduce duplication of descriptive data and create efficient filter paths, improving performance.


Question 10

Which modeling mistake often leads to the unnecessary use of bi-directional relationships?

A. Using too many measures
B. Poor star schema design
C. Too many dimension tables
D. Using calculated columns

Correct Answer: B

Explanation:
Bi-directional relationships are often used to compensate for poor model design. A clean star schema usually requires only single-direction filtering.


Final Exam Tips for This Topic

  • Measures → Fact tables
  • Descriptive attributes → Dimension tables
  • Use Power Query to shape tables before modeling (a pandas sketch of the same split follows this list)
  • Ensure unique keys in dimension tables
  • Prefer star schema over flat or snowflake models
  • Know when not to over-model
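
On the exam this shaping is done in Power Query, but the split itself is easy to illustrate in pandas with invented sample data: keep keys and numeric measures in the fact table, and move deduplicated descriptive attributes into dimension tables with unique keys.

```python
import pandas as pd

# Invented flat extract: measures mixed with descriptive attributes.
sales = pd.DataFrame({
    "OrderDate": ["2024-01-05", "2024-01-05", "2024-01-06"],
    "ProductID": [10, 11, 10],
    "ProductName": ["Desk", "Chair", "Desk"],
    "ProductCategory": ["Furniture", "Furniture", "Furniture"],
    "CustomerID": [1, 2, 1],
    "CustomerName": ["Avery", "Blake", "Avery"],
    "SalesAmount": [250.0, 120.0, 250.0],
    "Quantity": [1, 2, 1],
})

# Dimensions: descriptive attributes, deduplicated so each key value is unique.
dim_product = sales[["ProductID", "ProductName", "ProductCategory"]].drop_duplicates()
dim_customer = sales[["CustomerID", "CustomerName"]].drop_duplicates()

# Fact table: foreign keys plus numeric measures only (one-to-many from each dimension).
fact_sales = sales[["OrderDate", "ProductID", "CustomerID", "SalesAmount", "Quantity"]]

print(dim_product)
print(dim_customer)
print(fact_sales)
```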

Go back to the PL-300 Exam Prep Hub main page

Practice Questions: Identify when to use reference or duplicate queries and the resulting impact (PL-300 Exam Prep)

This post is a part of the PL-300: Microsoft Power BI Data Analyst Exam Prep Hub; and this topic falls under these sections:
Prepare the data (25–30%)
--> Transform and load the data
--> Identify when to use reference or duplicate queries and the resulting impact


Below are 10 practice questions (with answers and explanations) for this topic of the exam.
There are also 2 practice tests for the PL-300 exam with 60 questions each (with answers) available on the hub.

Practice Questions

Question 1

You have a query that cleans and standardizes sales data. You need to create several dimension tables from this cleaned dataset. Which option should you use?

A. Duplicate the query for each dimension
B. Reference the query for each dimension
C. Import the source data multiple times
D. Merge the query with itself

Correct Answer: B

Explanation:
Referencing allows multiple tables to inherit the same cleaned logic from a single base query. This ensures consistency and reduces repeated transformation steps, which is a recommended best practice for production models.


Question 2

What is the primary difference between a referenced query and a duplicated query?

A. Referenced queries refresh faster
B. Duplicated queries do not support transformations
C. Referenced queries depend on the original query
D. Duplicated queries cannot be loaded to the model

Correct Answer: C

Explanation:
A referenced query is dependent on its source query and will reflect any changes made to it. A duplicated query is an independent copy with no dependency.


Question 3

A change made to a base query causes multiple downstream queries to fail during refresh. What is the most likely reason?

A. The downstream queries were duplicated
B. The downstream queries were referenced
C. The model relationships were deleted
D. The data source credentials expired

Correct Answer: B

Explanation:
Referenced queries rely on the base query. If a breaking change is introduced (such as removing or renaming a column), all dependent referenced queries may fail.


Question 4

When should you duplicate a query instead of referencing it?

A. When you want transformations to stay consistent
B. When creating multiple dimension tables
C. When experimenting with major changes
D. When reducing refresh dependencies

Correct Answer: C

Explanation:
Duplicating a query is ideal when testing or experimenting, because changes will not affect other queries or downstream dependencies.


Question 5

Which impact is most commonly associated with excessive query duplication?

A. Improved refresh reliability
B. Reduced data volume
C. Increased maintenance effort
D. Better data lineage visibility

Correct Answer: C

Explanation:
Duplicating queries can lead to repeated transformation logic, making the model harder to maintain and increasing the risk of inconsistent data shaping.


Question 6

How does Power BI’s View Lineage represent referenced queries?

A. As independent branches
B. As disconnected tables
C. As upstream and downstream dependencies
D. As hidden queries

Correct Answer: C

Explanation:
Referenced queries appear as downstream dependencies in View Lineage, clearly showing how data flows from base queries to derived queries.


Question 7

You want to ensure that a change to data cleansing logic automatically applies to all derived tables. What should you do?

A. Duplicate the query
B. Reference the query
C. Disable query loading
D. Create calculated tables

Correct Answer: B

Explanation:
Referencing ensures that any change to the base query propagates to all dependent queries automatically.


Question 8

Which of the following is a common mistake when using referenced queries?

A. Using them for experimentation
B. Using them for dimension creation
C. Forgetting that changes propagate downstream
D. Using them to centralize data cleaning

Correct Answer: C

Explanation:
A frequent mistake is forgetting that changes to a referenced base query can unintentionally affect multiple dependent queries.


Question 9

Which approach generally results in a cleaner and more maintainable data model?

A. Duplicating all queries
B. Referencing a well-designed base query
C. Importing data separately for each table
D. Performing transformations in DAX

Correct Answer: B

Explanation:
Using a base query with referenced downstream queries centralizes transformation logic and simplifies maintenance, which aligns with Microsoft’s recommended modeling practices.


Question 10

Which scenario best illustrates when NOT to use a referenced query?

A. Creating a product dimension
B. Applying consistent formatting rules
C. Testing a new transformation approach
D. Creating multiple tables from a single source

Correct Answer: C

Explanation:
Referenced queries should not be used when testing or experimenting with transformations, because changes may impact other dependent queries. Duplicating is safer in this case.


PL-300 Exam Tip

Expect Microsoft to test:

  • Dependency awareness
  • Impact of changes
  • Maintainability vs flexibility
  • Correct use of Reference vs Duplicate

Go back to the PL-300 Exam Prep Hub main page