Tag: Responsible AI

Practice Questions: Describe Considerations for Accountability in an AI Solution (AI-900 Exam Prep)

Practice Exam Questions


Question 1

An organization deploys an AI system that automatically approves or rejects loan applications. To meet Microsoft’s Responsible AI principles, the organization requires employees to review rejected applications when customers appeal a decision.

Which Responsible AI principle does this best demonstrate?

A. Fairness
B. Transparency
C. Accountability
D. Inclusiveness

Correct Answer: C

Explanation:
Accountability ensures that humans remain responsible for AI decisions. Allowing human review and intervention demonstrates human oversight, a core accountability requirement.


Question 2

Which action best supports accountability in an AI solution?

A. Encrypting training data
B. Providing explanations for model predictions
C. Assigning a team responsible for monitoring AI behavior
D. Increasing the size of the training dataset

Correct Answer: C

Explanation:
Accountability requires clear ownership and responsibility. Assigning a team to monitor and manage AI outcomes ensures humans are accountable for system behavior.


Question 3

An AI-based hiring system logs every candidate ranking decision and allows auditors to review historical outcomes.

Which accountability consideration is being addressed?

A. Human oversight
B. Monitoring and auditing
C. Inclusiveness
D. Data minimization

Correct Answer: B

Explanation:
Logging decisions and enabling audits supports monitoring and auditing, which helps organizations remain accountable for AI behavior over time.


Question 4

Which scenario best illustrates a lack of accountability in an AI solution?

A. The AI system provides confidence scores with predictions
B. The organization cannot explain who owns the AI system
C. Training data is encrypted at rest
D. Users are informed that AI is being used

Correct Answer: B

Explanation:
If no one is responsible for the AI system, accountability is missing. Ownership and responsibility are core elements of accountability.


Question 5

A healthcare AI solution flags high-risk patients. Final treatment decisions are always made by doctors.

Which concept does this scenario demonstrate?

A. Transparency
B. Fairness
C. Human-in-the-loop accountability
D. Privacy

Correct Answer: C

Explanation:
Human-in-the-loop systems ensure humans make final decisions, reinforcing accountability in high-impact scenarios.


Question 6

Which statement best describes accountability in AI?

A. AI systems should never make automated decisions
B. AI models must be open source
C. Humans remain responsible for AI outcomes
D. AI decisions must be unbiased

Correct Answer: C

Explanation:
Accountability means humans and organizations remain responsible, even when AI systems are automated.


Question 7

An organization deploys an AI chatbot but ensures complex or sensitive issues are escalated to human agents.

Which Responsible AI principle is primarily demonstrated?

A. Inclusiveness
B. Reliability and safety
C. Accountability
D. Transparency

Correct Answer: C

Explanation:
Escalating decisions to humans ensures human oversight and responsibility, which is central to accountability.


Question 8

Which of the following is NOT primarily related to accountability?

A. Audit trails
B. Governance policies
C. Human review processes
D. Data anonymization

Correct Answer: D

Explanation:
Data anonymization relates to privacy, not accountability. The other options ensure human responsibility and oversight.


Question 9

After deployment, an AI model’s performance degrades, but no process exists to review or correct its behavior.

Which Responsible AI principle is most at risk?

A. Fairness
B. Accountability
C. Transparency
D. Inclusiveness

Correct Answer: B

Explanation:
Without monitoring or corrective processes, no one is accountable for the AI system’s ongoing behavior.


Question 10

On the AI-900 exam, which keyword most strongly indicates an accountability-related question?

A. Encryption
B. Accessibility
C. Ownership
D. Explainability

Correct Answer: C

Explanation:
Ownership is a key indicator of accountability. Accountability questions focus on who is responsible for AI systems and decisions.


Exam-Day Tip

If a question mentions:

  • Human review
  • Oversight
  • Governance
  • Auditing
  • Ownership
  • Responsibility

👉 The correct answer is most likely Accountability.


Go to the AI-900 Exam Prep Hub main page.

Describe Considerations for Accountability in an AI Solution (AI-900 Exam Prep)

Where This Fits in the Exam

  • Exam Domain: Describe Artificial Intelligence workloads and considerations (15–20%)
  • Sub-Domain: Identify guiding principles for responsible AI
  • Topic: Describe considerations for accountability in an AI solution

On the AI-900 exam, accountability focuses on the idea that humans remain responsible for AI systems, even when decisions are automated.


What Is Accountability in AI?

Accountability means ensuring that people are responsible for the behavior, outcomes, and impact of AI systems.

Even though AI systems can make predictions or recommendations automatically, AI does not replace human responsibility. Organizations must be able to:

  • Explain who owns the AI system
  • Monitor and audit AI decisions
  • Intervene when AI behaves incorrectly or harmfully

Key idea for the exam:
AI systems must have human oversight and clear ownership.


Why Accountability Is Important

AI systems can impact critical areas such as:

  • Hiring and recruitment
  • Loan approvals
  • Healthcare decisions
  • Law enforcement
  • Customer service

Without accountability:

  • Errors may go unnoticed
  • Bias may persist
  • Harmful decisions may not be corrected
  • Trust in AI systems is reduced

Accountability ensures ethical use, legal compliance, and user trust.


Key Accountability Considerations

Human Oversight

AI systems should allow humans to:

  • Review AI decisions
  • Override or correct outcomes
  • Handle exceptions and edge cases

This is often referred to as human-in-the-loop or human-on-the-loop decision-making.
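The oversight pattern above can be sketched in code. The following is a minimal, hypothetical illustration (the function names and the 0.85 threshold are invented for this example, not part of any Azure API): confident predictions are applied automatically, while low-confidence ones are escalated to a human reviewer.

```python
# Hypothetical human-in-the-loop routing sketch: predictions below a
# confidence threshold are queued for human review instead of being
# auto-applied. Names and threshold are illustrative only.

REVIEW_THRESHOLD = 0.85

def route_decision(prediction: str, confidence: float) -> dict:
    """Auto-apply confident decisions; escalate uncertain ones to a human."""
    if confidence >= REVIEW_THRESHOLD:
        return {"decision": prediction, "decided_by": "model"}
    # Low confidence: a person makes the final call (human-in-the-loop)
    return {"decision": "pending_human_review", "decided_by": "human"}

print(route_decision("approve", 0.95))  # applied automatically
print(route_decision("reject", 0.60))   # escalated to a reviewer
```

In a human-on-the-loop variant, the model's decision would be applied immediately and the reviewer would monitor and override after the fact.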


Clear Ownership and Responsibility

An organization should clearly define:

  • Who designed the AI system
  • Who deployed it
  • Who maintains and monitors it
  • Who is responsible when issues occur

On the exam, accountability always points back to people and organizations, not the model itself.


Monitoring and Auditing

Accountable AI solutions include:

  • Logging of AI decisions
  • Performance monitoring over time
  • Bias and drift detection
  • Periodic reviews of outcomes

This helps ensure the AI system continues to behave as intended after deployment.
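A minimal sketch of the audit-trail idea, assuming an in-memory list stands in for durable storage (real systems would write to a database or log service; all names here are illustrative):

```python
# Minimal audit-trail sketch: every model decision is recorded with a
# timestamp and model version so auditors can review historical outcomes.
import datetime

AUDIT_LOG = []  # stand-in for durable, tamper-evident storage

def log_decision(model_version: str, inputs: dict, output: str) -> None:
    """Append one auditable record per AI decision."""
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

log_decision("loan-model-v3", {"income": 52000, "score": 710}, "approved")
print(AUDIT_LOG[-1]["output"])  # approved
```

Recording the model version alongside each decision is what lets a later audit connect a questionable outcome back to the exact system that produced it.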


Governance and Controls

Accountability includes governance practices such as:

  • Approval processes for AI use
  • Policies for acceptable AI behavior
  • Compliance with laws and regulations
  • Documentation of design decisions

These controls ensure AI solutions align with organizational and ethical standards.


Accountability vs Other Responsible AI Principles

Understanding how accountability differs from related principles is essential for AI-900.

  • Accountability: Humans are responsible for AI outcomes
  • Transparency: Explaining how AI makes decisions
  • Fairness: Avoiding bias and discrimination
  • Reliability & Safety: Consistent and safe system behavior
  • Privacy & Security: Protecting data and systems
  • Inclusiveness: Designing for diverse users

Exam tip:
If the question mentions human review, ownership, audits, or responsibility, the answer is Accountability.


Practical Examples of Accountability

  • A loan approval system allows staff to review and override AI decisions
  • An organization keeps logs of AI predictions for audits
  • A chatbot escalates sensitive issues to a human agent
  • A company assigns a team responsible for monitoring AI performance

All of these reinforce human responsibility over AI behavior.


Common AI-900 Exam Scenarios

You may see questions like:

  • Who is responsible when an AI system makes an incorrect decision?
  • Which principle ensures AI decisions can be reviewed by humans?
  • Which Responsible AI principle emphasizes governance and oversight?

In these cases, Accountability is the correct answer.


Key Takeaways for the Exam

  • Accountability ensures humans remain responsible for AI systems
  • AI does not eliminate organizational or ethical responsibility
  • Human oversight, auditing, and governance are central concepts
  • Accountability is about ownership and control, not explainability or fairness

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Responsible AI Considerations for Generative AI (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A company uses a generative AI model to create marketing content. They want to ensure the model does not produce offensive or harmful language.

Which Responsible AI principle is being addressed?

A. Transparency
B. Fairness
C. Reliability and Safety
D. Accountability

Correct Answer: C

Explanation:
Preventing harmful or offensive outputs is a core aspect of reliability and safety, which ensures AI systems behave safely under expected conditions.


Question 2

A chatbot powered by generative AI informs users that responses are created by an AI system and may contain errors.

Which Responsible AI principle does this demonstrate?

A. Privacy and Security
B. Transparency
C. Inclusiveness
D. Fairness

Correct Answer: B

Explanation:
Clearly communicating that content is AI-generated and may be inaccurate supports transparency, helping users understand the system’s limitations.


Question 3

A developer ensures that AI-generated job descriptions do not favor or exclude any gender, ethnicity, or age group.

Which Responsible AI principle is being applied?

A. Accountability
B. Fairness
C. Reliability and Safety
D. Privacy

Correct Answer: B

Explanation:
Avoiding bias and discrimination in generated content aligns with the fairness principle.


Question 4

An organization requires a human reviewer to approve all AI-generated responses before they are published on a public website.

Which Responsible AI principle does this represent?

A. Transparency
B. Reliability and Safety
C. Accountability
D. Inclusiveness

Correct Answer: C

Explanation:
Ensuring humans remain responsible for AI outputs demonstrates accountability.


Question 5

A generative AI system is designed so that user prompts and outputs are not stored or used to retrain the model.

Which Responsible AI principle is primarily addressed?

A. Transparency
B. Privacy and Security
C. Fairness
D. Inclusiveness

Correct Answer: B

Explanation:
Protecting user data and preventing unauthorized use of information supports privacy and security.


Question 6

Which feature in Azure AI services helps prevent generative AI models from producing unsafe or inappropriate content?

A. Model training
B. Content filters
C. Data labeling
D. Feature engineering

Correct Answer: B

Explanation:
Content filters are used to block harmful, unsafe, or inappropriate AI-generated outputs.


Question 7

A generative AI model supports multiple languages and produces accessible text for diverse user groups.

Which Responsible AI principle does this best represent?

A. Fairness
B. Transparency
C. Inclusiveness
D. Accountability

Correct Answer: C

Explanation:
Supporting diverse languages and accessibility aligns with the inclusiveness principle.


Question 8

Which scenario best illustrates a Responsible AI concern specific to generative AI?

A. A model classifies images into categories
B. A model predicts future sales
C. A model generates false but confident answers
D. A model stores structured data in a database

Correct Answer: C

Explanation:
Generative AI can produce hallucinations—incorrect but plausible outputs—which is a key Responsible AI concern.


Question 9

Why is Responsible AI especially important for generative AI workloads?

A. Generative AI requires more computing power
B. Generative AI creates new content that can cause harm if uncontrolled
C. Generative AI only works with unstructured data
D. Generative AI replaces traditional machine learning

Correct Answer: B

Explanation:
Because generative AI creates new content, it can introduce bias, misinformation, or harmful outputs if not properly governed.


Question 10

A company uses Azure OpenAI Service and wants to ensure ethical use of generative AI.

Which action best supports Responsible AI practices?

A. Removing all system prompts
B. Enabling content moderation and human review
C. Increasing model size
D. Disabling user authentication

Correct Answer: B

Explanation:
Combining content moderation with human oversight helps ensure safe, ethical, and responsible use of generative AI.


Final Exam Tips for This Topic

  • Expect scenario-based questions
  • Focus on principles, not technical configuration
  • Watch for keywords: bias, harm, safety, privacy, transparency
  • If the question mentions risk or trust, think Responsible AI

Go to the AI-900 Exam Prep Hub main page.

Identify Responsible AI Considerations for Generative AI (AI-900 Exam Prep)

Overview

Generative AI systems are powerful because they can create new content, such as text, images, code, and audio. However, this power also introduces ethical, legal, and societal risks. For this reason, Responsible AI is a core concept tested in the AI-900 exam, especially for generative AI workloads on Azure.

Microsoft emphasizes Responsible AI to ensure that AI systems are:

  • Fair
  • Reliable and safe
  • Private and secure
  • Transparent
  • Inclusive
  • Accountable

Understanding these principles — and how they apply specifically to generative AI — is essential for passing the exam.


What Is Responsible AI?

Responsible AI refers to designing, developing, and deploying AI systems in ways that:

  • Minimize harm
  • Promote fairness and trust
  • Respect privacy and security
  • Provide transparency and accountability

Microsoft has formalized this through its Responsible AI Principles, which are directly reflected in Azure AI services and exam questions.


Why Responsible AI Matters for Generative AI

Generative AI introduces unique risks, including:

  • Producing biased or harmful content
  • Generating incorrect or misleading information (hallucinations)
  • Exposing sensitive or copyrighted data
  • Being misused for impersonation or misinformation

Because generative AI creates content dynamically, guardrails and safeguards are critical.


Microsoft’s Responsible AI Principles (Exam-Relevant)

1. Fairness

Definition:
AI systems should treat all people fairly and avoid bias.

Generative AI Example:
A text-generation model should not produce discriminatory language based on race, gender, age, or religion.

Azure Support:

  • Bias evaluation
  • Content filtering
  • Prompt design best practices

Exam Clue Words: bias, discrimination, fairness


2. Reliability and Safety

Definition:
AI systems should perform consistently and safely under expected conditions.

Generative AI Example:
A chatbot should avoid generating dangerous instructions or harmful advice.

Azure Support:

  • Content moderation
  • Safety filters
  • System message controls
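The system-message control can be illustrated with a short sketch. The message structure below follows the common chat-completions format (a list of role/content messages); the assistant instructions and function name are assumptions for illustration, and the actual call to the service is omitted.

```python
# Illustrative sketch: a system message sets behavioral boundaries before
# any user input is processed. Payload shape follows the common
# chat-completions format; endpoint and credentials are omitted.

def build_request(user_prompt: str) -> list[dict]:
    """Prepend a safety-oriented system message to the user's prompt."""
    return [
        {"role": "system",
         "content": "You are a support assistant. Refuse requests for "
                    "dangerous instructions or harmful advice."},
        {"role": "user", "content": user_prompt},
    ]

messages = build_request("How do I reset my password?")
print(messages[0]["role"])  # system
```

Because the system message is supplied by the application rather than the end user, it acts as a guardrail the user cannot simply overwrite with a normal prompt.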

Exam Clue Words: safety, harmful output, reliability


3. Privacy and Security

Definition:
AI systems must protect user data and respect privacy.

Generative AI Example:
A model should not store or reveal personal or confidential information provided in prompts.

Azure Support:

  • Data isolation
  • No training on customer prompts (Azure OpenAI)
  • Enterprise-grade security

Exam Clue Words: privacy, personal data, security


4. Transparency

Definition:
Users should understand how AI systems are being used and their limitations.

Generative AI Example:
Informing users that responses are AI-generated and may contain errors.

Azure Support:

  • Model documentation
  • Clear service descriptions
  • Usage disclosures

Exam Clue Words: explainability, transparency, disclosure


5. Accountability

Definition:
Humans must remain responsible for AI system outcomes.

Generative AI Example:
A human reviews AI-generated content before publishing it externally.

Azure Support:

  • Human-in-the-loop design
  • Monitoring and logging
  • Responsible deployment guidance

Exam Clue Words: human oversight, accountability


6. Inclusiveness

Definition:
AI systems should empower everyone and avoid excluding groups.

Generative AI Example:
Supporting multiple languages or accessibility-friendly outputs.

Azure Support:

  • Multilingual models
  • Accessibility-aware services

Exam Clue Words: inclusivity, accessibility


Responsible AI Controls for Generative AI on Azure

Azure provides built-in mechanisms to help organizations use generative AI responsibly.

Key Controls to Know for AI-900

  • Content filters: Prevent harmful, unsafe, or inappropriate outputs
  • Prompt engineering: Guide model behavior safely
  • System messages: Set boundaries for AI behavior
  • Human review: Validate outputs before use
  • Usage monitoring: Detect misuse or anomalies
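The control flow of "content filter, then human review" can be sketched as below. This is a deliberately simplified stand-in: Azure's real content filters run inside the service using trained classifiers, not a wordlist, so the blocklist and function names here are purely illustrative.

```python
# Simplified stand-in for a content-filter-plus-human-review pipeline.
# Azure's actual content filters are service-side classifiers; this
# wordlist check only illustrates the control flow.

BLOCKED_TERMS = {"offensive-term"}  # placeholder list, not a real filter

def moderate(generated_text: str) -> str:
    """Reject unsafe output; route everything else to a human reviewer."""
    if any(term in generated_text.lower() for term in BLOCKED_TERMS):
        return "blocked"            # content filter: unsafe output rejected
    return "needs_human_review"     # passed the filter; a person approves it

print(moderate("Our new product launches Friday!"))  # needs_human_review
```

Note that passing the automated filter does not publish the content; the final approval still rests with a human, which is the accountability half of the pipeline.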

Common Responsible AI Scenarios (Exam Focus)

You are very likely to see scenarios like these:

  • Preventing a chatbot from generating offensive language
  • Ensuring AI-generated content is reviewed by humans
  • Avoiding bias in generated job descriptions
  • Protecting personal data in prompts and outputs
  • Informing users that AI-generated content may be inaccurate

If the question mentions risk, harm, bias, safety, or trust, it is almost always testing Responsible AI.


Generative AI vs Responsible AI (Exam Framing)

  • Generative AI: Creates new content
  • Responsible AI: Ensures that content is safe, fair, and trustworthy

👉 Generative AI answers what AI can do
👉 Responsible AI answers how AI should be used


Key Takeaways for the AI-900 Exam

  • Responsible AI is not optional — it is a core design principle
  • Generative AI introduces new ethical risks
  • Microsoft’s Responsible AI principles guide Azure AI services
  • Expect scenario-based questions, not deep technical ones
  • Focus on concepts, not implementation details

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.