Category: AI-900

Describe considerations for fairness in an AI solution (AI-900 Exam Prep)

Overview

Fairness is one of the core guiding principles of Responsible AI and a key concept tested on the AI-900: Microsoft Azure AI Fundamentals exam. In the context of AI solutions, fairness focuses on ensuring that AI systems treat all people and groups equitably and do not create or reinforce bias.

For the AI-900 exam, you are not expected to implement fairness techniques, but you are expected to recognize fairness-related risks, understand why they matter, and identify when fairness considerations apply to an AI workload.


What does fairness mean in AI?

An AI solution is considered fair when its predictions, recommendations, or decisions do not systematically disadvantage individuals or groups based on personal characteristics.

Bias in AI systems can arise from:

  • Biased or unrepresentative training data
  • Historical or societal inequalities reflected in data
  • Imbalanced datasets (over-representation of some groups and under-representation of others)
  • Design choices made during model development

Fairness considerations aim to reduce these issues and ensure consistent, equitable outcomes.


Examples of fairness concerns

Understanding real-world scenarios is critical for the exam. Common examples include:

  • Hiring systems that favor one gender or demographic over others
  • Loan approval models that unfairly reject applicants from certain groups
  • Facial recognition systems that perform better for some skin tones than others
  • Credit scoring systems influenced by historical discrimination

In each case, the concern is not whether the AI is accurate overall, but whether it behaves equitably across different groups.


Fairness across AI workloads

Fairness considerations apply to all types of AI workloads, including:

  • Machine learning models making predictions or classifications
  • Computer vision systems identifying people or objects
  • Natural language processing systems analyzing or generating text
  • Generative AI systems creating content or recommendations

Any AI system that impacts people directly or indirectly should be evaluated for fairness.


Measuring and assessing fairness

Fairness is not always obvious and often requires analysis. Typical approaches include:

  • Comparing model outcomes across different demographic groups
  • Evaluating error rates for different populations
  • Monitoring model performance after deployment

On the AI-900 exam, you should recognize that fairness must be assessed and monitored, not assumed.
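
AI-900 does not require you to write code, but a short sketch can make the idea concrete. The hypothetical Python example below compares positive-outcome rates and error rates across two made-up groups; Microsoft's open-source Fairlearn toolkit supports this kind of disaggregated analysis at scale.

```python
# Minimal sketch (beyond AI-900 scope): comparing outcomes across groups.
# All records are hypothetical: (group, actual_label, predicted_label).
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

stats = defaultdict(lambda: {"n": 0, "positive": 0, "errors": 0})
for group, actual, predicted in records:
    s = stats[group]
    s["n"] += 1
    s["positive"] += predicted                 # how often the model says "yes"
    s["errors"] += int(actual != predicted)    # how often the model is wrong

for group, s in sorted(stats.items()):
    print(f"{group}: positive rate {s['positive'] / s['n']:.0%}, "
          f"error rate {s['errors'] / s['n']:.0%}")
```

In this invented data, both groups receive positive outcomes at the same rate, yet one group's error rate is three times higher, which is exactly why outcomes must be compared group by group rather than assumed from overall accuracy.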


Microsoft’s approach to fairness

Microsoft emphasizes fairness as part of its Responsible AI principles, which include:

  • Fairness
  • Reliability and safety
  • Privacy and security
  • Inclusiveness
  • Transparency
  • Accountability

Azure AI services are designed with these principles in mind, and Microsoft provides tools and guidance to help organizations identify and mitigate unfair outcomes.


Fairness vs accuracy

A key exam concept is understanding that:

  • A highly accurate model can still be unfair
  • Improving fairness may sometimes require trade-offs with overall accuracy or other performance metrics

For example, a model that is 95% accurate overall can still have a far higher error rate for one demographic group than for the rest. The goal is to balance performance with ethical responsibility.


Key takeaways for the AI-900 exam

  • Fairness ensures AI systems do not disadvantage individuals or groups
  • Bias often originates from training data or historical inequalities
  • Fairness applies across all AI workloads
  • Fairness must be evaluated and monitored continuously
  • Microsoft treats fairness as a core Responsible AI principle

Being able to recognize fairness concerns in AI scenarios is essential for success on the AI-900 exam.



Describe considerations for reliability and safety in an AI solution (AI-900 Exam Prep)

Overview

Reliability and safety are core principles of Responsible AI and an important topic on the AI-900: Microsoft Azure AI Fundamentals exam. These considerations focus on ensuring that AI systems behave as expected, perform consistently under normal and unexpected conditions, and do not cause harm to people, organizations, or systems.

For the AI-900 exam, candidates are expected to understand what reliability and safety mean, recognize scenarios where these considerations apply, and identify why they are critical when deploying AI solutions.


What do reliability and safety mean in AI?

  • Reliability refers to an AI system’s ability to perform consistently and accurately over time and across different conditions.
  • Safety refers to protecting people and systems from harm caused by incorrect, unpredictable, or inappropriate AI behavior.

An AI system should work as intended, handle edge cases gracefully, and fail safely when problems occur.


Why reliability and safety matter

AI systems are increasingly used in situations where incorrect outputs can have serious consequences. Unreliable or unsafe AI systems can:

  • Produce incorrect or misleading results
  • Behave unpredictably in unfamiliar situations
  • Cause physical, financial, or emotional harm
  • Reduce trust in AI solutions

Ensuring reliability and safety helps organizations deploy AI responsibly and confidently.


Examples of reliability and safety concerns

Understanding practical examples is essential for the exam:

  • Autonomous systems misinterpreting sensor data
  • Medical AI tools providing incorrect diagnoses
  • AI-powered chatbots giving harmful or unsafe advice
  • Computer vision systems failing in poor lighting or weather conditions
  • Generative AI systems producing harmful, misleading, or unsafe content

In these scenarios, the concern is whether the AI behaves predictably and safely, especially in edge cases.


Reliability and safety across AI workloads

Reliability and safety considerations apply to all AI workloads, including:

  • Machine learning models making predictions or classifications
  • Computer vision systems detecting objects or people
  • Natural language processing systems interpreting or generating text
  • Generative AI systems creating responses, images, or content

Any AI system that influences decisions, actions, or content should be evaluated for reliability and safety.


Designing for reliability and safety

While AI-900 does not test implementation details, it is important to recognize high-level approaches:

  • Testing AI systems under different conditions
  • Handling unexpected inputs or edge cases
  • Monitoring systems after deployment
  • Implementing safeguards to prevent harmful outputs

These practices help ensure that AI systems remain dependable over time.
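
As an illustration only, the hypothetical Python sketch below combines three of these practices: validating inputs, failing safely with a controlled fallback, and deferring low-confidence results to a human. The classify_image function is a stand-in for any real model call.

```python
# Minimal sketch (beyond AI-900 scope): input validation and safe failure.
MAX_IMAGE_BYTES = 4 * 1024 * 1024     # reject unexpectedly large inputs
CONFIDENCE_THRESHOLD = 0.80           # below this, defer to a human

def classify_image(image_bytes: bytes) -> tuple[str, float]:
    """Hypothetical stand-in for any model call; returns (label, confidence)."""
    return "cat", 0.92

def safe_classify(image_bytes: bytes) -> str:
    # 1. Handle unexpected inputs before they ever reach the model.
    if not image_bytes or len(image_bytes) > MAX_IMAGE_BYTES:
        return "rejected: input failed validation"
    try:
        label, confidence = classify_image(image_bytes)
    except Exception:
        # 2. Fail safely: return a controlled fallback, never an unhandled crash.
        return "unavailable: please retry later"
    # 3. Route low-confidence results to human review instead of acting on them.
    if confidence < CONFIDENCE_THRESHOLD:
        return "deferred: sent for human review"
    return label

print(safe_classify(b""))            # rejected: input failed validation
print(safe_classify(b"\x89PNG..."))  # cat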


Microsoft’s approach to reliability and safety

Microsoft includes reliability and safety as one of its Responsible AI principles, alongside fairness, privacy and security, inclusiveness, transparency, and accountability.

Azure AI services are designed with built-in safeguards and guidance to help organizations deploy reliable and safe AI solutions.


Key takeaways for the AI-900 exam

  • Reliability means consistent and predictable AI behavior
  • Safety focuses on preventing harm to people and systems
  • Reliability and safety apply to all AI workloads
  • AI systems should be tested, monitored, and designed to fail safely
  • Microsoft treats reliability and safety as core Responsible AI principles

Being able to identify reliability and safety concerns in AI scenarios is critical for success on the AI-900 exam.



Practice Questions: Describe considerations for reliability and safety in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What does reliability mean in the context of an AI solution?

Answer: The ability of an AI system to perform consistently and predictably over time and across different conditions.

Explanation: Reliability focuses on consistent behavior and dependable performance, even when conditions change.


Question 2

Which situation best represents a safety concern in an AI system?

Answer: An AI-powered medical tool providing incorrect treatment recommendations.

Explanation: Safety relates to preventing harm to people or systems caused by incorrect or unsafe AI behavior.


Question 3

An AI system works well in testing but produces unexpected results when exposed to real-world data. Which principle is most relevant?

Answer: Reliability

Explanation: Unpredictable behavior in real-world conditions indicates a lack of reliability.


Question 4

Which AI workload is most likely to require reliability and safety considerations?

Answer: All AI workloads

Explanation: Any AI system that influences decisions, actions, or content should be evaluated for reliability and safety.


Question 5

Why is it important for AI systems to handle edge cases safely?

Answer: To prevent unexpected or harmful outcomes in unusual situations.

Explanation: Edge cases can cause failures if not handled properly, making safe behavior essential.


Question 6

A chatbot occasionally generates misleading or harmful advice. Which Responsible AI principle is most directly affected?

Answer: Reliability and safety

Explanation: Producing unsafe or unreliable content poses a risk to users and violates safety expectations.


Question 7

Which practice helps improve the reliability of an AI system?

Answer: Testing the system under different conditions and scenarios.

Explanation: Testing helps identify weaknesses and ensures consistent performance across varied inputs.


Question 8

Why should AI systems be monitored after deployment?

Answer: Because real-world data and usage patterns can change over time.

Explanation: Ongoing monitoring helps detect reliability or safety issues that emerge after deployment.


Question 9

Which Microsoft concept includes reliability and safety alongside fairness, transparency, and accountability?

Answer: Responsible AI

Explanation: Reliability and safety are core principles within Microsoft’s Responsible AI framework.


Question 10

An organization wants its AI system to fail gracefully instead of producing harmful outputs when errors occur. Which consideration does this reflect?

Answer: Safety

Explanation: Failing safely reduces the risk of harm when AI systems encounter problems or unexpected inputs.


Exam tip

On the AI-900 exam, reliability relates to consistent and predictable behavior, while safety focuses on preventing harm. Scenario-based questions often include words like unexpected, incorrect, harmful, or unpredictable.



Describe considerations for privacy and security in an AI solution (AI-900 Exam Prep)

Overview

Privacy and security are foundational principles of Responsible AI and a key topic on the AI-900: Microsoft Azure AI Fundamentals exam. These considerations focus on protecting personal data, maintaining user trust, and safeguarding AI systems from unauthorized access or misuse.

For AI-900, candidates are expected to understand why privacy and security matter, recognize scenarios where they apply, and identify how they relate to the responsible use of AI — not to implement technical security controls.


What do privacy and security mean in AI?

  • Privacy refers to protecting personal and sensitive data used by or generated from AI systems.
  • Security refers to protecting AI systems, data, and models from unauthorized access, attacks, or misuse.

AI solutions often rely on large volumes of data, which makes safeguarding that data critical.


Why privacy and security are important

AI systems frequently process sensitive information such as:

  • Personal identifiers (names, addresses, IDs)
  • Images or videos of people
  • Voice recordings
  • Text containing confidential or proprietary information

If privacy and security are not properly considered, AI solutions can expose personal data, violate regulations, and lose user trust.


Examples of privacy and security concerns

Common real-world scenarios include:

  • Facial recognition systems collecting biometric data without consent
  • Chatbots storing or exposing personal information shared by users
  • Document processing systems handling confidential financial or legal documents
  • Generative AI systems unintentionally revealing sensitive training data
  • Unauthorized access to AI models or datasets

In each case, the concern is how data is collected, stored, protected, and used.


Privacy and security across AI workloads

Privacy and security considerations apply to all AI workloads, including:

  • Machine learning models trained on personal or sensitive data
  • Computer vision systems analyzing images or video of people
  • Natural language processing systems processing user text or conversations
  • Speech AI systems handling voice recordings
  • Generative AI systems creating or using content based on user input

Any AI system that uses personal or sensitive data must prioritize privacy and security.


Key privacy considerations

High-level privacy concepts tested on AI-900 include:

  • Collecting only the data that is necessary
  • Using data responsibly and for intended purposes
  • Protecting user consent and expectations
  • Preventing unintended data exposure

These considerations help ensure ethical and lawful use of data.
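
To make the first point concrete, here is a minimal Python sketch of data minimization; all field names and values are invented.

```python
# Minimal sketch (beyond AI-900 scope): data minimization.
# Field names and values below are hypothetical.
ALLOWED_FIELDS = {"feedback_text", "product_id"}   # only what the task needs

def minimize(record: dict) -> dict:
    # Drop fields that are not required for the intended purpose,
    # so they can never be exposed or misused downstream.
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

record = {
    "customer_name": "A. Example",     # personal data; not needed for sentiment
    "home_address": "123 Example St",  # personal data; not needed
    "feedback_text": "The device stopped charging after a week.",
    "product_id": "SKU-1042",
}
print(minimize(record))  # only feedback_text and product_id remain
```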


Key security considerations

Security-related concepts include:

  • Preventing unauthorized access to AI systems and data
  • Protecting AI models from tampering or misuse
  • Ensuring secure storage and transmission of data

While AI-900 does not test technical security mechanisms, you should recognize when security is a concern in AI scenarios.
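
The following hypothetical sketch illustrates two of these ideas in Python: reading a credential from the environment rather than hardcoding it, and refusing to send data over an unencrypted connection. The endpoint and variable name are invented; this is a general pattern, not Azure-specific code.

```python
# Minimal sketch (beyond AI-900 scope): keep secrets out of source code
# and require encryption in transit. Names below are hypothetical.
import os

ENDPOINT = "https://example-ai-service.example.com/analyze"

def get_api_key() -> str:
    # Read the credential from the environment (or a secret store),
    # never from a value hardcoded in the repository.
    key = os.environ.get("AI_SERVICE_KEY")
    if key is None:
        raise RuntimeError("AI_SERVICE_KEY is not set; refusing to call the service")
    return key

# Refuse to send data over an unencrypted connection.
if not ENDPOINT.startswith("https://"):
    raise RuntimeError("endpoint must use HTTPS so data is encrypted in transit")
```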


Microsoft’s approach to privacy and security

Privacy and security are core components of Microsoft’s Responsible AI principles. Azure AI services are designed to meet enterprise-grade security and compliance standards, helping organizations build AI solutions that protect data and users.


Key takeaways for the AI-900 exam

  • Privacy protects personal and sensitive data
  • Security protects AI systems and data from unauthorized access
  • Privacy and security apply across all AI workloads
  • AI systems must handle data responsibly and securely
  • Privacy and security are essential to building trustworthy AI solutions

Recognizing privacy and security concerns in AI scenarios is essential for success on the AI-900 exam.



Practice Questions: Describe considerations for privacy and security in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What is the primary goal of privacy in an AI solution?

Answer: To protect personal and sensitive data used by or generated from the AI system.

Explanation: Privacy focuses on how personal data is collected, used, stored, and shared to protect individuals’ rights and expectations.


Question 2

Which scenario represents a security concern rather than a privacy concern?

Answer: Unauthorized access to an AI model and its training data.

Explanation: Security focuses on protecting systems, models, and data from unauthorized access, attacks, or misuse.


Question 3

An AI-powered chatbot stores users’ conversations, including names and addresses. Which Responsible AI principle is most directly involved?

Answer: Privacy

Explanation: Storing personal information requires careful handling to protect user privacy.


Question 4

Which AI workload is most likely to raise privacy concerns?

Answer: Any AI workload that processes personal or sensitive data.

Explanation: Privacy considerations apply across all AI workloads when personal data is involved.


Question 5

Why is user consent important in AI solutions?

Answer: Because users should understand and agree to how their data is collected and used.

Explanation: Obtaining consent helps ensure responsible and ethical use of personal data.


Question 6

A document processing system analyzes contracts containing confidential information. What is the main concern?

Answer: Protecting sensitive data from unauthorized exposure.

Explanation: Handling confidential documents requires strong privacy and security considerations.


Question 7

Which practice supports privacy in an AI solution?

Answer: Collecting only the data necessary for the intended purpose.

Explanation: Limiting data collection reduces privacy risks and potential misuse.


Question 8

Why must AI systems be protected against unauthorized access?

Answer: To prevent data breaches, misuse, or manipulation of AI models.

Explanation: Unauthorized access can compromise both the security and trustworthiness of AI systems.


Question 9

Which Microsoft framework includes privacy and security as core principles?

Answer: Responsible AI

Explanation: Privacy and security are fundamental principles within Microsoft’s Responsible AI framework.


Question 10

An organization wants to ensure its generative AI system does not expose sensitive training data. Which consideration should guide this effort?

Answer: Privacy and security

Explanation: Preventing unintended disclosure of sensitive data is central to protecting privacy and maintaining system security.


Exam tip

For AI-900 questions, privacy usually involves personal data and consent, while security involves protecting systems and data from unauthorized access or misuse. Scenario keywords often make the distinction clear.



Practice Questions: Describe considerations for inclusiveness in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What is the primary goal of inclusiveness in an AI solution?

Answer: To ensure AI systems are usable and beneficial for people with diverse abilities, backgrounds, and needs.

Explanation: Inclusiveness focuses on empowering all users and avoiding designs that unintentionally exclude certain groups.


Question 2

Which scenario best represents an inclusiveness concern?

Answer: A speech recognition system that performs poorly for users with strong accents.

Explanation: Failing to support diverse accents and speech patterns limits accessibility and inclusiveness.


Question 3

An AI application does not support screen readers or accessibility tools. Which Responsible AI principle is most impacted?

Answer: Inclusiveness

Explanation: Lack of accessibility features can exclude users with disabilities, which is a core inclusiveness issue.


Question 4

Which AI workload is most likely to require inclusiveness considerations?

Answer: All AI workloads intended for use by people

Explanation: Inclusiveness applies to any AI system that interacts with or affects users, regardless of workload type.


Question 5

Why is it important to test AI systems with diverse user groups?

Answer: To identify usability issues that may affect different abilities, languages, or contexts.

Explanation: Testing with diverse users helps ensure the AI solution works well for a broader population.


Question 6

A computer vision system fails to recognize assistive devices such as wheelchairs. What is the main concern?

Answer: The system is not inclusive of users with disabilities.

Explanation: Inclusive AI should consider assistive technologies and diverse physical environments.


Question 7

Which practice best supports inclusiveness in AI solutions?

Answer: Designing AI systems that support accessibility standards and tools.

Explanation: Accessibility support helps ensure AI systems can be used by people with varying abilities.


Question 8

Why is inclusiveness important for AI systems used by a global audience?

Answer: Because users may have different languages, cultures, abilities, and access needs.

Explanation: Inclusive design helps ensure AI solutions are usable and relevant across diverse populations.


Question 9

Which Microsoft framework includes inclusiveness as a core principle?

Answer: Responsible AI

Explanation: Inclusiveness is one of Microsoft’s six Responsible AI principles.


Question 10

An organization wants its AI system to avoid assumptions about users’ abilities or environments. Which principle should guide this effort?

Answer: Inclusiveness

Explanation: Inclusiveness encourages designing AI systems that adapt to diverse user needs rather than assuming a single type of user.


Exam tip

For AI-900 questions, inclusiveness is usually indicated by scenarios involving accessibility, disabilities, language support, accents, or diverse user needs. When the question is about who can or cannot use the AI system, inclusiveness is often the correct principle.



Describe considerations for inclusiveness in an AI solution (AI-900 Exam Prep)

Overview

Inclusiveness is a key guiding principle of Responsible AI and an important concept on the AI-900: Microsoft Azure AI Fundamentals exam. Inclusiveness focuses on designing AI solutions that empower and benefit all people, including individuals with different abilities, backgrounds, cultures, and access needs.

For the AI-900 exam, candidates are expected to understand what inclusiveness means in the context of AI, recognize inclusive and non-inclusive design scenarios, and identify why inclusiveness is essential for responsible AI solutions.


What does inclusiveness mean in AI?

Inclusiveness in AI refers to designing systems that:

  • Are usable by people with diverse abilities and needs
  • Consider different languages, cultures, and contexts
  • Avoid excluding or disadvantaging specific groups
  • Provide accessible experiences whenever possible

An inclusive AI solution aims to expand access and opportunity, rather than unintentionally limiting who can benefit from the technology.


Why inclusiveness matters

If inclusiveness is not considered, AI systems may:

  • Be difficult or impossible for some people to use
  • Exclude individuals with disabilities
  • Fail to support diverse languages or accents
  • Work well only for a narrow group of users

Inclusive AI helps ensure that technology benefits a broader population and does not reinforce existing barriers.


Examples of inclusiveness concerns

Common real-world examples include:

  • Speech recognition systems that struggle with certain accents or speech patterns
  • Computer vision systems that fail to recognize assistive devices such as wheelchairs
  • Chatbots or applications that do not support screen readers or accessibility tools
  • AI systems that assume all users have the same physical, cognitive, or technical abilities

In each case, the concern is whether the AI solution accommodates diverse user needs.


Inclusiveness across AI workloads

Inclusiveness applies across all AI workloads, including:

  • Speech AI, ensuring support for different accents, languages, and speech styles
  • Computer vision, accounting for varied physical environments and assistive technologies
  • Natural language processing, supporting multiple languages and inclusive language use
  • Generative AI, producing content that is accessible and usable by diverse audiences

Any AI system intended for broad use should consider inclusiveness.


Designing for inclusiveness

While AI-900 does not test technical design methods, it is important to recognize high-level inclusive practices:

  • Considering a wide range of users during design
  • Supporting accessibility tools and standards
  • Testing AI systems with diverse user groups
  • Avoiding assumptions about user abilities or contexts

These practices help ensure AI solutions are usable by more people.
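
As a hypothetical illustration of the testing practice above, the Python sketch below compares a speech system's transcription accuracy across accent groups and flags any group that falls below a target; the numbers are invented.

```python
# Minimal sketch (beyond AI-900 scope): checking that a speech system works
# comparably well across accent groups. All figures are hypothetical.
TARGET_ACCURACY = 0.85

test_results = {            # accent group -> (correct transcriptions, attempts)
    "accent_a": (96, 100),
    "accent_b": (71, 100),
}

for group, (correct, attempts) in test_results.items():
    accuracy = correct / attempts
    note = "  <- below target: this group is underserved" if accuracy < TARGET_ACCURACY else ""
    print(f"{group}: {accuracy:.0%}{note}")
```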


Microsoft’s approach to inclusiveness

Inclusiveness is one of Microsoft’s Responsible AI principles, emphasizing the importance of designing AI systems that empower people and respect human diversity.

Microsoft encourages building AI solutions that are accessible, adaptable, and beneficial to individuals with varying needs and abilities.


Key takeaways for the AI-900 exam

  • Inclusiveness focuses on accessibility and diversity
  • AI systems should accommodate users with different abilities, languages, and contexts
  • Lack of inclusiveness can unintentionally exclude groups of people
  • Inclusiveness applies to all AI workloads
  • Inclusiveness is a core principle of Microsoft’s Responsible AI framework


Practice Questions: Describe considerations for transparency in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

What does transparency mean in the context of responsible AI?

A. Ensuring AI models are open source
B. Making AI systems explainable and understandable to users
C. Encrypting all data used by AI systems
D. Ensuring AI decisions are always correct

Correct Answer: B

Explanation:
Transparency focuses on helping users understand how and why an AI system produces its results. This includes explainability, documentation, and clear communication of system capabilities and limitations.


Question 2

Why is transparency especially important in AI systems that affect people’s lives?

A. It reduces infrastructure costs
B. It improves model training speed
C. It helps users trust and appropriately rely on AI decisions
D. It guarantees fairness

Correct Answer: C

Explanation:
Transparency builds trust by allowing users to understand AI decisions, especially in sensitive areas like hiring, lending, or healthcare. It does not automatically guarantee fairness, but it supports accountability.


Question 3

Which scenario best demonstrates a lack of transparency?

A. A chatbot explains it is an AI system before starting a conversation
B. A loan approval model provides approval probabilities
C. A facial recognition system provides results without explanation
D. A recommendation engine displays confidence scores

Correct Answer: C

Explanation:
Providing results without explanation limits transparency. Users should understand what the system is doing and why, especially when outcomes affect them.


Question 4

Which Microsoft Responsible AI principle is most closely associated with explaining AI decisions to users?

A. Fairness
B. Reliability and Safety
C. Transparency
D. Privacy and Security

Correct Answer: C

Explanation:
Transparency focuses on making AI systems understandable, including explanations of decisions, limitations, and confidence levels.


Question 5

What is one way to improve transparency in an AI solution?

A. Increase model complexity
B. Hide training data sources
C. Provide explanations and confidence levels
D. Remove human oversight

Correct Answer: C

Explanation:
Providing explanations, confidence scores, and documentation helps users understand how the AI operates and how much to trust its output.


Question 6

An AI system is used to classify customer feedback sentiment. Which feature best supports transparency?

A. Faster inference time
B. Clear documentation describing how sentiment is determined
C. Larger training dataset
D. Automatic retraining

Correct Answer: B

Explanation:
Transparency is supported through documentation and explanations, not performance optimizations or automation alone.


Question 7

Which of the following is a transparency-related question a user might ask?

A. How quickly does the model run?
B. How was this decision made?
C. How much does the service cost?
D. How often is the model retrained?

Correct Answer: B

Explanation:
Transparency addresses questions about how decisions are made and what factors influence AI outputs.


Question 8

Why should AI systems clearly communicate their limitations?

A. To reduce model accuracy
B. To discourage user adoption
C. To help users make informed decisions
D. To meet performance benchmarks

Correct Answer: C

Explanation:
Communicating limitations ensures users do not over-rely on AI systems and understand when human judgment is required.


Question 9

Which Azure AI practice supports transparency for end users?

A. Automatically scaling compute resources
B. Logging system errors
C. Providing model explanations and confidence scores
D. Encrypting data at rest

Correct Answer: C

Explanation:
Model explanations and confidence scores directly help users understand AI predictions, supporting transparency.


Question 10

How does transparency contribute to responsible AI usage?

A. By removing bias from models
B. By ensuring AI systems never fail
C. By enabling accountability and informed trust
D. By improving training speed

Correct Answer: C

Explanation:
Transparency enables accountability, helps users trust AI appropriately, and supports ethical decision-making. It complements, but does not replace, other principles like fairness or reliability.



Describe considerations for transparency in an AI solution (AI-900 Exam Prep)

Overview

Transparency is a core guiding principle of Responsible AI and an important concept tested on the AI-900: Microsoft Azure AI Fundamentals exam. Transparency focuses on ensuring that people understand how and why AI systems make decisions, what data they use, and what their limitations are.

For AI-900, candidates are expected to recognize transparency concerns in AI scenarios and understand why transparent AI systems are critical for trust, accountability, and responsible use.


What does transparency mean in AI?

Transparency in AI means that:

  • Users are informed when they are interacting with an AI system
  • Decisions and outputs can be explained in understandable terms
  • The purpose and limitations of the AI system are clearly communicated
  • Stakeholders understand how AI impacts decisions

Transparency does not require users to understand complex algorithms, but it does require clarity about what the AI is doing and why.


Why transparency matters

Without transparency, AI systems can:

  • Appear unpredictable or untrustworthy
  • Make decisions that users cannot understand or challenge
  • Hide biases, errors, or limitations
  • Reduce confidence in AI-driven outcomes

Transparent AI systems help build trust, enable informed decision-making, and support ethical AI use.


Examples of transparency concerns

Common real-world scenarios include:

  • Users not being told that an AI system is making recommendations or decisions
  • Automated decisions without explanations, such as loan approvals or rejections
  • Chatbots that appear human without disclosing they are AI
  • AI systems that do not explain their confidence or uncertainty

In these cases, the concern is whether users can understand and appropriately trust the AI system.


Transparency across AI workloads

Transparency considerations apply to all AI workloads, including:

  • Machine learning models making predictions or classifications
  • Computer vision systems interpreting images or video
  • Natural language processing systems analyzing or generating text
  • Generative AI systems producing content or recommendations

Any AI system that influences decisions or user behavior should be transparent about its role.


Key transparency practices

At a high level, transparency includes:

  • Informing users when AI is involved
  • Providing explanations for AI outputs where possible
  • Communicating limitations, accuracy expectations, and risks
  • Enabling users to question or review AI-driven decisions

While AI-900 does not test technical explainability methods, candidates should recognize these concepts in exam scenarios.
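
A minimal, hypothetical Python sketch of these practices might package a prediction together with a disclosure, a confidence score, known limitations, and a path to human review; every string and value below is invented.

```python
# Minimal sketch (beyond AI-900 scope): packaging a prediction with
# transparency information. All strings and values are hypothetical.
def present_result(prediction: str, confidence: float) -> dict:
    return {
        "disclosure": "This result was produced by an automated AI system.",
        "prediction": prediction,
        "confidence": round(confidence, 2),  # helps users calibrate their trust
        "limitations": "Accuracy is lower for very short documents.",
        "review": "You can request a human review of this result.",
    }

print(present_result("approved", 0.874))
```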


Microsoft’s approach to transparency

Transparency is one of Microsoft’s Responsible AI principles. Microsoft emphasizes clear communication about AI capabilities, limitations, and use cases to help users make informed decisions.

Azure AI services include documentation, guidance, and features that support transparent AI usage.


Transparency vs trust

A key exam concept is that transparency:

  • Builds trust in AI systems
  • Supports accountability and ethical use
  • Helps users understand when AI assistance is appropriate

Transparent systems make it easier for users to rely on AI responsibly.


Key takeaways for the AI-900 exam

  • Transparency means clarity about how and why AI systems make decisions
  • Users should know when they are interacting with AI
  • AI systems should communicate limitations and uncertainty
  • Transparency applies across all AI workloads
  • Transparency is a core principle of Microsoft’s Responsible AI framework


Practice Questions: Describe considerations for accountability in an AI solution (AI-900 Exam Prep)

Practice Questions


Question 1

An organization deploys an AI system that automatically approves or rejects loan applications. To meet Microsoft’s Responsible AI principles, the organization requires employees to review rejected applications when customers appeal a decision.

Which Responsible AI principle does this best demonstrate?

A. Fairness
B. Transparency
C. Accountability
D. Inclusiveness

Correct Answer: C

Explanation:
Accountability ensures that humans remain responsible for AI decisions. Allowing human review and intervention demonstrates human oversight, a core accountability requirement.


Question 2

Which action best supports accountability in an AI solution?

A. Encrypting training data
B. Providing explanations for model predictions
C. Assigning a team responsible for monitoring AI behavior
D. Increasing the size of the training dataset

Correct Answer: C

Explanation:
Accountability requires clear ownership and responsibility. Assigning a team to monitor and manage AI outcomes ensures humans are accountable for system behavior.


Question 3

An AI-based hiring system logs every candidate ranking decision and allows auditors to review historical outcomes.

Which accountability consideration is being addressed?

A. Human oversight
B. Monitoring and auditing
C. Inclusiveness
D. Data minimization

Correct Answer: B

Explanation:
Logging decisions and enabling audits supports monitoring and auditing, which helps organizations remain accountable for AI behavior over time.


Question 4

Which scenario best illustrates a lack of accountability in an AI solution?

A. The AI system provides confidence scores with predictions
B. The organization cannot explain who owns the AI system
C. Training data is encrypted at rest
D. Users are informed that AI is being used

Correct Answer: B

Explanation:
If no one is responsible for the AI system, accountability is missing. Ownership and responsibility are core elements of accountability.


Question 5

A healthcare AI solution flags high-risk patients. Final treatment decisions are always made by doctors.

Which concept does this scenario demonstrate?

A. Transparency
B. Fairness
C. Human-in-the-loop accountability
D. Privacy

Correct Answer: C

Explanation:
Human-in-the-loop systems ensure humans make final decisions, reinforcing accountability in high-impact scenarios.


Question 6

Which statement best describes accountability in AI?

A. AI systems should never make automated decisions
B. AI models must be open source
C. Humans remain responsible for AI outcomes
D. AI decisions must be unbiased

Correct Answer: C

Explanation:
Accountability means humans and organizations remain responsible, even when AI systems are automated.


Question 7

An organization deploys an AI chatbot but ensures complex or sensitive issues are escalated to human agents.

Which Responsible AI principle is primarily demonstrated?

A. Inclusiveness
B. Reliability and safety
C. Accountability
D. Transparency

Correct Answer: C

Explanation:
Escalating decisions to humans ensures human oversight and responsibility, which is central to accountability.


Question 8

Which of the following is NOT primarily related to accountability?

A. Audit trails
B. Governance policies
C. Human review processes
D. Data anonymization

Correct Answer: D

Explanation:
Data anonymization relates to privacy, not accountability. The other options ensure human responsibility and oversight.


Question 9

After deployment, an AI model’s performance degrades, but no process exists to review or correct its behavior.

Which Responsible AI principle is most at risk?

A. Fairness
B. Accountability
C. Transparency
D. Inclusiveness

Correct Answer: B

Explanation:
Without monitoring or corrective processes, no one is accountable for the AI system’s ongoing behavior.


Question 10

On the AI-900 exam, which keyword most strongly indicates an accountability-related question?

A. Encryption
B. Accessibility
C. Ownership
D. Explainability

Correct Answer: C

Explanation:
Ownership is a key indicator of accountability. Accountability questions focus on who is responsible for AI systems and decisions.


Exam tip

If a question mentions:

  • Human review
  • Oversight
  • Governance
  • Auditing
  • Ownership
  • Responsibility

👉 The correct answer is most likely Accountability.

