Tag: AI-900 Exam Prep Hub

Exam Prep Hubs available on The Data Community

Below are the free Exam Prep Hubs currently available on The Data Community.
Bookmark the hubs you are interested in and use them to ensure you are fully prepared for the respective exam.

Each hub contains:

  1. Topic-by-topic coverage of the material (following the official study guide), making it easy to ensure you cover every aspect of the exam.
  2. Practice exam questions for each section.
  3. Bonus material to help you prepare.
  4. Two (2) Practice Exams with 60 questions each, along with answer keys.
  5. Links to useful resources, such as Microsoft Learn content, YouTube video series, and more.




Practice Questions: Describe considerations for privacy and security in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What is the primary goal of privacy in an AI solution?

Answer: To protect personal and sensitive data used by or generated from the AI system.

Explanation: Privacy focuses on how personal data is collected, used, stored, and shared to protect individuals’ rights and expectations.


Question 2

Which scenario represents a security concern rather than a privacy concern?

Answer: Unauthorized access to an AI model and its training data.

Explanation: Security focuses on protecting systems, models, and data from unauthorized access, attacks, or misuse.


Question 3

An AI-powered chatbot stores users’ conversations, including names and addresses. Which Responsible AI principle is most directly involved?

Answer: Privacy

Explanation: Storing personal information requires careful handling to protect user privacy.


Question 4

Which AI workload is most likely to raise privacy concerns?

Answer: Any AI workload that processes personal or sensitive data.

Explanation: Privacy considerations apply across all AI workloads when personal data is involved.


Question 5

Why is user consent important in AI solutions?

Answer: Because users should understand and agree to how their data is collected and used.

Explanation: Obtaining consent helps ensure responsible and ethical use of personal data.


Question 6

A document processing system analyzes contracts containing confidential information. What is the main concern?

Answer: Protecting sensitive data from unauthorized exposure.

Explanation: Handling confidential documents requires strong privacy and security considerations.


Question 7

Which practice supports privacy in an AI solution?

Answer: Collecting only the data necessary for the intended purpose.

Explanation: Limiting data collection reduces privacy risks and potential misuse.


Question 8

Why must AI systems be protected against unauthorized access?

Answer: To prevent data breaches, misuse, or manipulation of AI models.

Explanation: Unauthorized access can compromise both the security and trustworthiness of AI systems.


Question 9

Which Microsoft framework includes privacy and security as core principles?

Answer: Responsible AI

Explanation: Privacy and security are fundamental principles within Microsoft’s Responsible AI framework.


Question 10

An organization wants to ensure its generative AI system does not expose sensitive training data. Which consideration should guide this effort?

Answer: Privacy and security

Explanation: Preventing unintended disclosure of sensitive data is central to protecting privacy and maintaining system security.


Exam tip

For AI-900 questions, privacy usually involves personal data and consent, while security involves protecting systems and data from unauthorized access or misuse. Scenario keywords often make the distinction clear.


Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe considerations for inclusiveness in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What is the primary goal of inclusiveness in an AI solution?

Answer: To ensure AI systems are usable and beneficial for people with diverse abilities, backgrounds, and needs.

Explanation: Inclusiveness focuses on empowering all users and avoiding designs that unintentionally exclude certain groups.


Question 2

Which scenario best represents an inclusiveness concern?

Answer: A speech recognition system that performs poorly for users with strong accents.

Explanation: Failing to support diverse accents and speech patterns limits accessibility and inclusiveness.


Question 3

An AI application does not support screen readers or accessibility tools. Which Responsible AI principle is most impacted?

Answer: Inclusiveness

Explanation: Lack of accessibility features can exclude users with disabilities, which is a core inclusiveness issue.


Question 4

Which AI workload is most likely to require inclusiveness considerations?

Answer: All AI workloads intended for use by people.

Explanation: Inclusiveness applies to any AI system that interacts with or affects users, regardless of workload type.


Question 5

Why is it important to test AI systems with diverse user groups?

Answer: To identify usability issues that may affect different abilities, languages, or contexts.

Explanation: Testing with diverse users helps ensure the AI solution works well for a broader population.


Question 6

A computer vision system fails to recognize assistive devices such as wheelchairs. What is the main concern?

Answer: The system is not inclusive of users with disabilities.

Explanation: Inclusive AI should consider assistive technologies and diverse physical environments.


Question 7

Which practice best supports inclusiveness in AI solutions?

Answer: Designing AI systems that support accessibility standards and tools.

Explanation: Accessibility support helps ensure AI systems can be used by people with varying abilities.


Question 8

Why is inclusiveness important for AI systems used by a global audience?

Answer: Because users may have different languages, cultures, abilities, and access needs.

Explanation: Inclusive design helps ensure AI solutions are usable and relevant across diverse populations.


Question 9

Which Microsoft framework includes inclusiveness as a core principle?

Answer: Responsible AI

Explanation: Inclusiveness is one of Microsoft’s six Responsible AI principles.


Question 10

An organization wants its AI system to avoid assumptions about users’ abilities or environments. Which principle should guide this effort?

Answer: Inclusiveness

Explanation: Inclusiveness encourages designing AI systems that adapt to diverse user needs rather than assuming a single type of user.


Exam tip

For AI-900 questions, inclusiveness is usually indicated by scenarios involving accessibility, disabilities, language support, accents, or diverse user needs. When the question is about who can or cannot use the AI system, inclusiveness is often the correct principle.


Go to the AI-900 Exam Prep Hub main page.

Describe considerations for inclusiveness in an AI solution (AI-900 Exam Prep)

Overview

Inclusiveness is a key guiding principle of Responsible AI and an important concept on the AI-900: Microsoft Azure AI Fundamentals exam. Inclusiveness focuses on designing AI solutions that empower and benefit all people, including individuals with different abilities, backgrounds, cultures, and access needs.

For the AI-900 exam, candidates are expected to understand what inclusiveness means in the context of AI, recognize inclusive and non-inclusive design scenarios, and identify why inclusiveness is essential for responsible AI solutions.


What does inclusiveness mean in AI?

Inclusiveness in AI refers to designing systems that:

  • Are usable by people with diverse abilities and needs
  • Consider different languages, cultures, and contexts
  • Avoid excluding or disadvantaging specific groups
  • Provide accessible experiences whenever possible

An inclusive AI solution aims to expand access and opportunity, rather than unintentionally limiting who can benefit from the technology.


Why inclusiveness matters

If inclusiveness is not considered, AI systems may:

  • Be difficult or impossible for some people to use
  • Exclude individuals with disabilities
  • Fail to support diverse languages or accents
  • Work well only for a narrow group of users

Inclusive AI helps ensure that technology benefits a broader population and does not reinforce existing barriers.


Examples of inclusiveness concerns

Common real-world examples include:

  • Speech recognition systems that struggle with certain accents or speech patterns
  • Computer vision systems that fail to recognize assistive devices such as wheelchairs
  • Chatbots or applications that do not support screen readers or accessibility tools
  • AI systems that assume all users have the same physical, cognitive, or technical abilities

In each case, the concern is whether the AI solution accommodates diverse user needs.


Inclusiveness across AI workloads

Inclusiveness applies across all AI workloads, including:

  • Speech AI, ensuring support for different accents, languages, and speech styles
  • Computer vision, accounting for varied physical environments and assistive technologies
  • Natural language processing, supporting multiple languages and inclusive language use
  • Generative AI, producing content that is accessible and usable by diverse audiences

Any AI system intended for broad use should consider inclusiveness.


Designing for inclusiveness

While AI-900 does not test technical design methods, it is important to recognize high-level inclusive practices:

  • Considering a wide range of users during design
  • Supporting accessibility tools and standards
  • Testing AI systems with diverse user groups
  • Avoiding assumptions about user abilities or contexts

These practices help ensure AI solutions are usable by more people.


Microsoft’s approach to inclusiveness

Inclusiveness is one of Microsoft’s Responsible AI principles, emphasizing the importance of designing AI systems that empower people and respect human diversity.

Microsoft encourages building AI solutions that are accessible, adaptable, and beneficial to individuals with varying needs and abilities.


Key takeaways for the AI-900 exam

  • Inclusiveness focuses on accessibility and diversity
  • AI systems should accommodate users with different abilities, languages, and contexts
  • Lack of inclusiveness can unintentionally exclude groups of people
  • Inclusiveness applies to all AI workloads
  • Inclusiveness is a core principle of Microsoft’s Responsible AI framework

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe considerations for transparency in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What does transparency mean in the context of responsible AI?

A. Ensuring AI models are open source
B. Making AI systems explainable and understandable to users
C. Encrypting all data used by AI systems
D. Ensuring AI decisions are always correct

Correct Answer: B

Explanation:
Transparency focuses on helping users understand how and why an AI system produces its results. This includes explainability, documentation, and clear communication of system capabilities and limitations.


Question 2

Why is transparency especially important in AI systems that affect people’s lives?

A. It reduces infrastructure costs
B. It improves model training speed
C. It helps users trust and appropriately rely on AI decisions
D. It guarantees fairness

Correct Answer: C

Explanation:
Transparency builds trust by allowing users to understand AI decisions, especially in sensitive areas like hiring, lending, or healthcare. It does not automatically guarantee fairness, but it supports accountability.


Question 3

Which scenario best demonstrates a lack of transparency?

A. A chatbot explains it is an AI system before starting a conversation
B. A loan approval model provides approval probabilities
C. A facial recognition system provides results without explanation
D. A recommendation engine displays confidence scores

Correct Answer: C

Explanation:
Providing results without explanation limits transparency. Users should understand what the system is doing and why, especially when outcomes affect them.


Question 4

Which Microsoft Responsible AI principle is most closely associated with explaining AI decisions to users?

A. Fairness
B. Reliability and Safety
C. Transparency
D. Privacy and Security

Correct Answer: C

Explanation:
Transparency focuses on making AI systems understandable, including explanations of decisions, limitations, and confidence levels.


Question 5

What is one way to improve transparency in an AI solution?

A. Increase model complexity
B. Hide training data sources
C. Provide explanations and confidence levels
D. Remove human oversight

Correct Answer: C

Explanation:
Providing explanations, confidence scores, and documentation helps users understand how the AI operates and how much to trust its output.


Question 6

An AI system is used to classify customer feedback sentiment. Which feature best supports transparency?

A. Faster inference time
B. Clear documentation describing how sentiment is determined
C. Larger training dataset
D. Automatic retraining

Correct Answer: B

Explanation:
Transparency is supported through documentation and explanations, not performance optimizations or automation alone.


Question 7

Which of the following is a transparency-related question a user might ask?

A. How quickly does the model run?
B. How was this decision made?
C. How much does the service cost?
D. How often is the model retrained?

Correct Answer: B

Explanation:
Transparency addresses questions about how decisions are made and what factors influence AI outputs.


Question 8

Why should AI systems clearly communicate their limitations?

A. To reduce model accuracy
B. To discourage user adoption
C. To help users make informed decisions
D. To meet performance benchmarks

Correct Answer: C

Explanation:
Communicating limitations ensures users do not over-rely on AI systems and understand when human judgment is required.


Question 9

Which Azure AI practice supports transparency for end users?

A. Automatically scaling compute resources
B. Logging system errors
C. Providing model explanations and confidence scores
D. Encrypting data at rest

Correct Answer: C

Explanation:
Model explanations and confidence scores directly help users understand AI predictions, supporting transparency.


Question 10

How does transparency contribute to responsible AI usage?

A. By removing bias from models
B. By ensuring AI systems never fail
C. By enabling accountability and informed trust
D. By improving training speed

Correct Answer: C

Explanation:
Transparency enables accountability, helps users trust AI appropriately, and supports ethical decision-making. It complements, but does not replace, other principles like fairness or reliability.


Go to the AI-900 Exam Prep Hub main page.

Describe considerations for transparency in an AI solution (AI-900 Exam Prep)

Overview

Transparency is a core guiding principle of Responsible AI and an important concept tested on the AI-900: Microsoft Azure AI Fundamentals exam. Transparency focuses on ensuring that people understand how and why AI systems make decisions, what data they use, and what their limitations are.

For AI-900, candidates are expected to recognize transparency concerns in AI scenarios and understand why transparent AI systems are critical for trust, accountability, and responsible use.


What does transparency mean in AI?

Transparency in AI means that:

  • Users are informed when they are interacting with an AI system
  • Decisions and outputs can be explained in understandable terms
  • The purpose and limitations of the AI system are clearly communicated
  • Stakeholders understand how AI impacts decisions

Transparency does not require users to understand complex algorithms, but it does require clarity about what the AI is doing and why.


Why transparency matters

Without transparency, AI systems can:

  • Appear unpredictable or untrustworthy
  • Make decisions that users cannot understand or challenge
  • Hide biases, errors, or limitations
  • Reduce confidence in AI-driven outcomes

Transparent AI systems help build trust, enable informed decision-making, and support ethical AI use.


Examples of transparency concerns

Common real-world scenarios include:

  • Users not being told that an AI system is making recommendations or decisions
  • Automated decisions without explanations, such as loan approvals or rejections
  • Chatbots that appear human without disclosing they are AI
  • AI systems that do not explain their confidence or uncertainty

In these cases, the concern is whether users can understand and appropriately trust the AI system.


Transparency across AI workloads

Transparency considerations apply to all AI workloads, including:

  • Machine learning models making predictions or classifications
  • Computer vision systems interpreting images or video
  • Natural language processing systems analyzing or generating text
  • Generative AI systems producing content or recommendations

Any AI system that influences decisions or user behavior should be transparent about its role.


Key transparency practices

At a high level, transparency includes:

  • Informing users when AI is involved
  • Providing explanations for AI outputs where possible
  • Communicating limitations, accuracy expectations, and risks
  • Enabling users to question or review AI-driven decisions

While AI-900 does not test technical explainability methods, candidates should recognize these concepts in exam scenarios.
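As a simple illustration (hypothetical values, not an Azure API), a transparency-minded application might pair a prediction with an AI disclosure and a confidence score:

```python
# Hypothetical sketch: surfacing an AI disclosure, a prediction, and a
# confidence score is one simple transparency practice. Values are made up.
prediction = {"label": "Positive sentiment", "confidence": 0.87}

# Tell the user AI is involved, show the result, and state its certainty.
print("Note: this result was produced by an AI system.")
print(f"Predicted: {prediction['label']} "
      f"(confidence: {prediction['confidence']:.0%})")
```

Even this small amount of context helps a user decide how much to rely on the output.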


Microsoft’s approach to transparency

Transparency is one of Microsoft’s Responsible AI principles. Microsoft emphasizes clear communication about AI capabilities, limitations, and use cases to help users make informed decisions.

Azure AI services include documentation, guidance, and features that support transparent AI usage.


Transparency vs trust

A key exam concept is that transparency:

  • Builds trust in AI systems
  • Supports accountability and ethical use
  • Helps users understand when AI assistance is appropriate

Transparent systems make it easier for users to rely on AI responsibly.


Key takeaways for the AI-900 exam

  • Transparency means clarity about how and why AI systems make decisions
  • Users should know when they are interacting with AI
  • AI systems should communicate limitations and uncertainty
  • Transparency applies across all AI workloads
  • Transparency is a core principle of Microsoft’s Responsible AI framework

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe Considerations for Accountability in an AI Solution (AI-900 Exam Prep)

Practice Exam Questions


Question 1

An organization deploys an AI system that automatically approves or rejects loan applications. To meet Microsoft’s Responsible AI principles, the organization requires employees to review rejected applications when customers appeal a decision.

Which Responsible AI principle does this best demonstrate?

A. Fairness
B. Transparency
C. Accountability
D. Inclusiveness

Correct Answer: C

Explanation:
Accountability ensures that humans remain responsible for AI decisions. Allowing human review and intervention demonstrates human oversight, a core accountability requirement.


Question 2

Which action best supports accountability in an AI solution?

A. Encrypting training data
B. Providing explanations for model predictions
C. Assigning a team responsible for monitoring AI behavior
D. Increasing the size of the training dataset

Correct Answer: C

Explanation:
Accountability requires clear ownership and responsibility. Assigning a team to monitor and manage AI outcomes ensures humans are accountable for system behavior.


Question 3

An AI-based hiring system logs every candidate ranking decision and allows auditors to review historical outcomes.

Which accountability consideration is being addressed?

A. Human oversight
B. Monitoring and auditing
C. Inclusiveness
D. Data minimization

Correct Answer: B

Explanation:
Logging decisions and enabling audits supports monitoring and auditing, which helps organizations remain accountable for AI behavior over time.


Question 4

Which scenario best illustrates a lack of accountability in an AI solution?

A. The AI system provides confidence scores with predictions
B. The organization cannot explain who owns the AI system
C. Training data is encrypted at rest
D. Users are informed that AI is being used

Correct Answer: B

Explanation:
If no one is responsible for the AI system, accountability is missing. Ownership and responsibility are core elements of accountability.


Question 5

A healthcare AI solution flags high-risk patients. Final treatment decisions are always made by doctors.

Which concept does this scenario demonstrate?

A. Transparency
B. Fairness
C. Human-in-the-loop accountability
D. Privacy

Correct Answer: C

Explanation:
Human-in-the-loop systems ensure humans make final decisions, reinforcing accountability in high-impact scenarios.


Question 6

Which statement best describes accountability in AI?

A. AI systems should never make automated decisions
B. AI models must be open source
C. Humans remain responsible for AI outcomes
D. AI decisions must be unbiased

Correct Answer: C

Explanation:
Accountability means humans and organizations remain responsible, even when AI systems are automated.


Question 7

An organization deploys an AI chatbot but ensures complex or sensitive issues are escalated to human agents.

Which Responsible AI principle is primarily demonstrated?

A. Inclusiveness
B. Reliability and safety
C. Accountability
D. Transparency

Correct Answer: C

Explanation:
Escalating decisions to humans ensures human oversight and responsibility, which is central to accountability.


Question 8

Which of the following is NOT primarily related to accountability?

A. Audit trails
B. Governance policies
C. Human review processes
D. Data anonymization

Correct Answer: D

Explanation:
Data anonymization relates to privacy, not accountability. The other options ensure human responsibility and oversight.


Question 9

After deployment, an AI model’s performance degrades, but no process exists to review or correct its behavior.

Which Responsible AI principle is most at risk?

A. Fairness
B. Accountability
C. Transparency
D. Inclusiveness

Correct Answer: B

Explanation:
Without monitoring or corrective processes, no one is accountable for the AI system’s ongoing behavior.


Question 10

On the AI-900 exam, which keyword most strongly indicates an accountability-related question?

A. Encryption
B. Accessibility
C. Ownership
D. Explainability

Correct Answer: C

Explanation:
Ownership is a key indicator of accountability. Accountability questions focus on who is responsible for AI systems and decisions.


Exam-Day Tip

If a question mentions:

  • Human review
  • Oversight
  • Governance
  • Auditing
  • Ownership
  • Responsibility

👉 The correct answer is most likely Accountability.


Go to the AI-900 Exam Prep Hub main page.

Describe Considerations for Accountability in an AI Solution (AI-900 Exam Prep)

Where This Fits in the Exam

  • Exam Domain: Describe Artificial Intelligence workloads and considerations (15–20%)
  • Sub-Domain: Identify guiding principles for responsible AI
  • Topic: Describe considerations for accountability in an AI solution

On the AI-900 exam, accountability focuses on the idea that humans remain responsible for AI systems, even when decisions are automated.


What Is Accountability in AI?

Accountability means ensuring that people are responsible for the behavior, outcomes, and impact of AI systems.

Even though AI systems can make predictions or recommendations automatically, AI does not replace human responsibility. Organizations must be able to:

  • Explain who owns the AI system
  • Monitor and audit AI decisions
  • Intervene when AI behaves incorrectly or harmfully

Key idea for the exam:
AI systems must have human oversight and clear ownership.


Why Accountability Is Important

AI systems can impact critical areas such as:

  • Hiring and recruitment
  • Loan approvals
  • Healthcare decisions
  • Law enforcement
  • Customer service

Without accountability:

  • Errors may go unnoticed
  • Bias may persist
  • Harmful decisions may not be corrected
  • Trust in AI systems is reduced

Accountability ensures ethical use, legal compliance, and user trust.


Key Accountability Considerations

Human Oversight

AI systems should allow humans to:

  • Review AI decisions
  • Override or correct outcomes
  • Handle exceptions and edge cases

This is often referred to as human-in-the-loop or human-on-the-loop decision-making.


Clear Ownership and Responsibility

An organization should clearly define:

  • Who designed the AI system
  • Who deployed it
  • Who maintains and monitors it
  • Who is responsible when issues occur

On the exam, accountability always points back to people and organizations, not the model itself.


Monitoring and Auditing

Accountable AI solutions include:

  • Logging of AI decisions
  • Performance monitoring over time
  • Bias and drift detection
  • Periodic reviews of outcomes

This helps ensure the AI system continues to behave as intended after deployment.
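As an illustration of what such decision logging might look like (a hypothetical structure, not an Azure feature), each AI decision can be recorded with a timestamp and a named responsible owner:

```python
# Hypothetical sketch: an audit log that records each AI decision with a
# timestamp and a named responsible owner. Field names are illustrative.
from datetime import datetime, timezone

audit_log = []

def record_decision(owner, inputs, decision):
    """Append an auditable record of one AI decision."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "owner": owner,               # accountability: a named responsible party
        "inputs": inputs,
        "decision": decision,
        "reviewed_by_human": False,   # set to True once a person reviews it
    })

record_decision("credit-risk-team", {"income": 52_000}, "rejected")
print(len(audit_log))  # → 1
```

A record like this is what makes later auditing and human review possible.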


Governance and Controls

Accountability includes governance practices such as:

  • Approval processes for AI use
  • Policies for acceptable AI behavior
  • Compliance with laws and regulations
  • Documentation of design decisions

These controls ensure AI solutions align with organizational and ethical standards.


Accountability vs Other Responsible AI Principles

Understanding how accountability differs from related principles is very important for AI-900.

  • Accountability: Humans are responsible for AI outcomes
  • Transparency: Explaining how AI makes decisions
  • Fairness: Avoiding bias and discrimination
  • Reliability & Safety: Consistent and safe system behavior
  • Privacy & Security: Protecting data and systems
  • Inclusiveness: Designing for diverse users

Exam tip:
If the question mentions human review, ownership, audits, or responsibility, the answer is Accountability.


Practical Examples of Accountability

  • A loan approval system allows staff to review and override AI decisions
  • An organization keeps logs of AI predictions for audits
  • A chatbot escalates sensitive issues to a human agent
  • A company assigns a team responsible for monitoring AI performance

All of these reinforce human responsibility over AI behavior.


Common AI-900 Exam Scenarios

You may see questions like:

  • Who is responsible when an AI system makes an incorrect decision?
  • Which principle ensures AI decisions can be reviewed by humans?
  • Which Responsible AI principle emphasizes governance and oversight?

In these cases, Accountability is the correct answer.


Key Takeaways for the Exam

  • Accountability ensures humans remain responsible for AI systems
  • AI does not eliminate organizational or ethical responsibility
  • Human oversight, auditing, and governance are central concepts
  • Accountability is about ownership and control, not explainability or fairness

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Regression Machine Learning Scenarios (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A real estate company wants to predict the selling price of a house based on its size, location, and age.

Which machine learning technique should be used?

A. Classification
B. Clustering
C. Regression
D. Anomaly detection

Correct Answer: C

Explanation:
The output is a numeric value (price), which makes this a regression scenario.


Question 2

A business wants to estimate the number of hours it will take to complete a project based on historical project data.

Which type of machine learning is most appropriate?

A. Regression
B. Classification
C. Clustering
D. Association

Correct Answer: A

Explanation:
Estimating time in hours is predicting a numeric value, which is a regression task.


Question 3

Which scenario is best suited for regression?

A. Determining whether a transaction is fraudulent
B. Grouping customers based on purchasing behavior
C. Predicting monthly sales revenue
D. Assigning customers to loyalty tiers

Correct Answer: C

Explanation:
Monthly sales revenue is a continuous numeric value, making regression the correct choice.


Question 4

An AI model predicts tomorrow’s temperature based on historical weather data.

What type of machine learning problem is this?

A. Classification
B. Regression
C. Clustering
D. Anomaly detection

Correct Answer: B

Explanation:
Temperature is a numeric measurement, so this is a regression problem.


Question 5

A company wants to predict how many units of a product will be sold next month.

Which machine learning technique should be used?

A. Regression
B. Classification
C. Clustering
D. Natural language processing

Correct Answer: A

Explanation:
The output is a quantity (number of units), which is best handled by regression.


Question 6

Which statement best describes a regression model?

A. It assigns data points to categories
B. It predicts continuous numeric values
C. It groups unlabeled data
D. It identifies unusual data points

Correct Answer: B

Explanation:
Regression models are used to predict numeric values, such as prices or quantities.


Question 7

An organization uses historical data to estimate the fuel consumption of delivery vehicles.

What type of machine learning scenario is this?

A. Classification
B. Clustering
C. Regression
D. Recommendation

Correct Answer: C

Explanation:
Fuel consumption is a numeric measurement, making this a regression scenario.


Question 8

Which output value most strongly indicates a regression problem?

A. Approved / Rejected
B. High / Medium / Low
C. Fraud / Not Fraud
D. 245.7

Correct Answer: D

Explanation:
A precise numeric output (245.7) indicates a regression scenario.


Question 9

A model predicts delivery times in hours based on distance, traffic, and weather.

Which machine learning technique is being used?

A. Classification
B. Regression
C. Clustering
D. Anomaly detection

Correct Answer: B

Explanation:
Delivery time in hours is a continuous numeric value, so regression is appropriate.


Question 10

On the AI-900 exam, which keyword most often signals a regression scenario?

A. Classify
B. Group
C. Detect
D. Estimate

Correct Answer: D

Explanation:
Words like estimate, predict, or forecast typically indicate regression problems.


Exam-Day Tip

If a machine learning question asks “how much,” “how many,” or “how long,” the answer is typically Regression.


Go to the AI-900 Exam Prep Hub main page.

Identify Regression Machine Learning Scenarios (AI-900 Exam Prep)

Where This Fits in the Exam

  • Exam Domain: Describe fundamental principles of machine learning on Azure (15–20%)
  • Sub-Domain: Identify common machine learning techniques
  • Topic: Identify regression machine learning scenarios

On the AI-900 exam, regression questions are about recognizing when regression is the appropriate technique, not building or tuning models.


What Is Regression in Machine Learning?

Regression is a type of supervised machine learning used to predict a numerical (continuous) value.

  • The model learns from labeled training data
  • The output is a number, not a category
  • The goal is to predict how much, how many, or how long

Key exam rule:
If the output is a number, the scenario is almost always regression.
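Although AI-900 requires no coding, a minimal sketch can make the idea concrete. Below, a one-feature linear regression is fitted with the classic least-squares formulas to predict a numeric house price from size; the data is invented for illustration:

```python
# Illustrative sketch (AI-900 itself requires no code): a one-feature
# linear regression fitted by least squares, predicting a numeric value
# (house price) from house size. All data values are made up.

def fit_linear_regression(xs, ys):
    """Return the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: house size (m^2) -> known selling price.
sizes = [50, 80, 120, 150]
prices = [150_000, 240_000, 360_000, 450_000]

slope, intercept = fit_linear_regression(sizes, prices)

# The output is a continuous number -- that is what makes this regression.
predicted = slope * 100 + intercept
print(round(predicted))  # → 300000 (estimated price for a 100 m^2 house)
```

The key point for the exam is the output: a continuous number, not a category.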


Characteristics of Regression Scenarios

A regression machine learning workload typically involves:

  • Historical data with known outcomes
  • One or more input features
  • A continuous numeric output
  • Predicting future values based on patterns in data

Examples of numeric outputs:

  • Price
  • Temperature
  • Revenue
  • Distance
  • Duration
  • Quantity

Common Regression Use Cases

Price and Cost Prediction

  • Predicting house prices
  • Estimating insurance premiums
  • Forecasting product costs

Forecasting and Trends

  • Predicting future sales revenue
  • Estimating energy consumption
  • Forecasting website traffic

Measurements and Quantities

  • Predicting delivery time
  • Estimating fuel efficiency
  • Calculating demand levels

All of these scenarios involve predicting a numeric value, making them regression problems.


Regression vs Other Machine Learning Techniques

Understanding the difference between regression and other ML techniques is critical for AI-900.

  • Regression: Numeric value (example: predicting house price)
  • Classification: Category or label (example: approving or denying a loan)
  • Clustering: Group assignment (example: segmenting customers)
  • Anomaly detection: Unusual behavior (example: detecting fraud)

Exam tip:
“Yes/No”, “True/False”, or named labels → Classification
A number or measurement → Regression
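The tip above can be sketched as a toy check on the prediction's type (an illustrative heuristic with made-up outputs, not a real model):

```python
# Toy sketch mirroring the exam tip: the type of a model's output hints at
# which technique a scenario calls for. Example outputs are invented.
def technique_for(output):
    """Map an example prediction to the ML technique it suggests."""
    if isinstance(output, (int, float)) and not isinstance(output, bool):
        return "Regression"      # a number or measurement
    return "Classification"      # a named label such as "Approved"

print(technique_for(245.7))       # → Regression
print(technique_for("Approved"))  # → Classification
```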


Example Exam Scenarios

Scenario 1

A company wants to predict the monthly electricity usage of buildings based on historical data.

  • Output: Electricity usage (kWh)
  • ML Technique: Regression

Scenario 2

A real estate company wants to estimate the selling price of homes based on size, location, and age.

  • Output: Price
  • ML Technique: Regression

Scenario 3

A logistics company wants to estimate delivery time for packages.

  • Output: Time (hours or days)
  • ML Technique: Regression

Azure Context for AI-900

On the AI-900 exam, regression scenarios are often framed using Azure Machine Learning concepts:

  • Training models using historical datasets
  • Predicting numeric outcomes
  • Evaluating prediction accuracy

You are not expected to:

  • Write code
  • Choose algorithms
  • Tune hyperparameters

Focus on recognition, not implementation.


Common Exam Traps and Misconceptions

  • ❌ Predicting categories like high / medium / low → Classification
  • ❌ Grouping similar items without labels → Clustering
  • ❌ Detecting rare events → Anomaly detection
  • ✅ Predicting a number → Regression

Key Takeaways for the Exam

  • Regression predicts numeric values
  • It is a supervised learning technique
  • Look for words like predict, estimate, forecast
  • Outputs are continuous values, not categories
  • Regression is commonly used for prices, quantities, and time

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.