In the AI-900: Microsoft Azure AI Fundamentals exam, generative AI represents a significant and growing focus area. This topic assesses your ability to recognize when generative AI is the appropriate solution and how it differs from traditional AI and predictive machine learning.
Generative AI models are designed to create new content—such as text, images, audio, or code—based on patterns learned from large datasets and guided by user prompts.
This article explains common real-world scenarios where generative AI is used, how those scenarios appear on the AI-900 exam, and how they map to Azure services.
What Makes a Scenario “Generative AI”?
A workload is a generative AI scenario when:
The output is newly generated content, not just a prediction or classification
The model responds to natural language prompts or instructions
The output can vary creatively, even for similar inputs
If the task is to predict, classify, or extract, it is not generative AI. If the task is to create, compose, or generate, it is.
Common Generative AI Scenarios (AI-900 Focus)
1. Text Generation
Scenario examples:
Writing emails, reports, or marketing copy
Drafting blog posts or documentation
Generating summaries from bullet points
Why this is generative AI: The model creates original text based on a prompt rather than selecting from predefined responses.
2. Conversational AI and Chatbots
Scenario examples:
AI-powered customer support chatbots
Virtual assistants that answer open-ended questions
Knowledge assistants that explain concepts conversationally
Why this is generative AI: Responses are dynamically generated and context-aware, rather than rule-based or scripted.
3. Text Summarization
Scenario examples:
Summarizing long documents
Creating executive summaries
Condensing meeting transcripts
Why this is generative AI: The model produces a new, concise version of the original content while preserving meaning.
4. Translation and Language Transformation
Scenario examples:
Translating text between languages
Rewriting text to be simpler or more formal
Paraphrasing content
Why this is generative AI: The output text is newly generated rather than extracted or classified.
5. Code Generation and Assistance
Scenario examples:
Generating code from natural language descriptions
Explaining existing code
Refactoring or optimizing code snippets
Why this is generative AI: The model creates original source code based on intent expressed in a prompt.
6. Image Generation
Scenario examples:
Creating images from text prompts
Generating artwork or design concepts
Producing visual content for marketing
Why this is generative AI: The model synthesizes entirely new images rather than identifying objects in existing ones.
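To make this concrete, here is a minimal sketch of generating an image from a text prompt. It assumes an Azure OpenAI resource with a DALL-E image deployment; the endpoint, key, and the deployment name "dall-e-3" are placeholders rather than values from this article, and other image-generation services could be used instead.

```python
# Minimal sketch: generating an image from a text prompt with an Azure OpenAI DALL-E deployment.
# Endpoint, key, and the deployment name "dall-e-3" are placeholders for illustration.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

result = client.images.generate(
    model="dall-e-3",  # name of your image model deployment
    prompt="A watercolor illustration of a lighthouse at sunrise for a travel brochure",
    n=1,
    size="1024x1024",
)

# The service returns a URL (or base64 data) for the newly synthesized image.
print(result.data[0].url)
```

The returned URL points to a brand-new image synthesized from the prompt, which is exactly what distinguishes this workload from image analysis.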
7. Audio and Speech Generation
Scenario examples:
Converting text into natural-sounding speech
Generating voiceovers
Creating spoken responses for virtual assistants
Why this is generative AI: The audio output is generated dynamically from text input.
Azure Services Commonly Used for Generative AI
For the AI-900 exam, generative AI scenarios are most commonly associated with:
Azure OpenAI Service
Large language models (LLMs)
Text, code, and image generation
Conversational AI
Other Azure services (such as Azure AI Speech or Language) may support generative capabilities, but Azure OpenAI Service is the primary service to associate with generative AI workloads.
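As a rough illustration of that association, the sketch below sends a text-generation prompt to a chat model deployed in Azure OpenAI Service using the openai Python SDK. The environment variables and the deployment name "gpt-4o-mini" are assumptions for illustration only.

```python
# Minimal sketch: generating text with Azure OpenAI Service (openai Python SDK, v1+).
# Endpoint, key, and deployment name are placeholders for illustration only.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the name of your deployment, not necessarily this model
    messages=[
        {"role": "system", "content": "You write short, friendly marketing copy."},
        {"role": "user", "content": "Write a two-sentence product announcement for a reusable water bottle."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern covers most of the text scenarios above: the prompt changes (write, summarize, rewrite, translate), but the call stays the same.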
Generative AI vs Other AI Approaches (Quick Contrast)
Task Type | AI Approach
Predict a value or category | Predictive Machine Learning
Follow predefined rules | Traditional AI
Create new text, images, or code | Generative AI
How This Appears on the AI-900 Exam
On the exam, generative AI scenarios are typically described using words such as:
Generate
Create
Write
Summarize
Compose
Respond conversationally
If the question emphasizes creative or open-ended output, generative AI is likely the correct choice.
Key Takeaways for Exam Day
Generative AI is about creation, not prediction
Outputs are flexible and context-aware
Azure OpenAI Service is the primary Azure service for generative AI
If the output did not previously exist, generative AI is likely the answer
Question 1
A company uses a generative AI model to create marketing content. They want to ensure the model does not produce offensive or harmful language.
Which Responsible AI principle is being addressed?
A. Transparency B. Fairness C. Reliability and Safety D. Accountability
Correct Answer: C
Explanation: Preventing harmful or offensive outputs is a core aspect of reliability and safety, which ensures AI systems behave safely under expected conditions.
Question 2
A chatbot powered by generative AI informs users that responses are created by an AI system and may contain errors.
Which Responsible AI principle does this demonstrate?
A. Privacy and Security B. Transparency C. Inclusiveness D. Fairness
Correct Answer: B
Explanation: Clearly communicating that content is AI-generated and may be inaccurate supports transparency, helping users understand the system’s limitations.
Question 3
A developer ensures that AI-generated job descriptions do not favor or exclude any gender, ethnicity, or age group.
Which Responsible AI principle is being applied?
A. Accountability B. Fairness C. Reliability and Safety D. Privacy
Correct Answer: B
Explanation: Avoiding bias and discrimination in generated content aligns with the fairness principle.
Question 4
An organization requires a human reviewer to approve all AI-generated responses before they are published on a public website.
Which Responsible AI principle does this represent?
A. Transparency B. Reliability and Safety C. Accountability D. Inclusiveness
Correct Answer: C
Explanation: Ensuring humans remain responsible for AI outputs demonstrates accountability.
Question 5
A generative AI system is designed so that user prompts and outputs are not stored or used to retrain the model.
Which Responsible AI principle is primarily addressed?
A. Transparency B. Privacy and Security C. Fairness D. Inclusiveness
Correct Answer: B
Explanation: Protecting user data and preventing unauthorized use of information supports privacy and security.
Question 6
Which feature in Azure AI services helps prevent generative AI models from producing unsafe or inappropriate content?
A. Model training B. Content filters C. Data labeling D. Feature engineering
Correct Answer: B
Explanation: Content filters are used to block harmful, unsafe, or inappropriate AI-generated outputs.
Question 7
A generative AI model supports multiple languages and produces accessible text for diverse user groups.
Which Responsible AI principle does this best represent?
A. Fairness B. Transparency C. Inclusiveness D. Accountability
Correct Answer: C
Explanation: Supporting diverse languages and accessibility aligns with the inclusiveness principle.
Question 8
Which scenario best illustrates a Responsible AI concern specific to generative AI?
A. A model classifies images into categories B. A model predicts future sales C. A model generates false but confident answers D. A model stores structured data in a database
Correct Answer: C
Explanation: Generative AI can produce hallucinations—incorrect but plausible outputs—which is a key Responsible AI concern.
Question 9
Why is Responsible AI especially important for generative AI workloads?
A. Generative AI requires more computing power B. Generative AI creates new content that can cause harm if uncontrolled C. Generative AI only works with unstructured data D. Generative AI replaces traditional machine learning
Correct Answer: B
Explanation: Because generative AI creates new content, it can introduce bias, misinformation, or harmful outputs if not properly governed.
Question 10
A company uses Azure OpenAI Service and wants to ensure ethical use of generative AI.
Which action best supports Responsible AI practices?
A. Removing all system prompts B. Enabling content moderation and human review C. Increasing model size D. Disabling user authentication
Correct Answer: B
Explanation: Combining content moderation with human oversight helps ensure safe, ethical, and responsible use of generative AI.
Final Exam Tips for This Topic
Expect scenario-based questions
Focus on principles, not technical configuration
Watch for keywords: bias, harm, safety, privacy, transparency
If the question mentions risk or trust, think Responsible AI
Generative AI systems are powerful because they can create new content, such as text, images, code, and audio. However, this power also introduces ethical, legal, and societal risks. For this reason, Responsible AI is a core concept tested in the AI-900 exam, especially for generative AI workloads on Azure.
Microsoft emphasizes Responsible AI to ensure that AI systems are:
Fair
Reliable and safe
Private and secure
Transparent
Inclusive
Accountable
Understanding these principles — and how they apply specifically to generative AI — is essential for passing the exam.
What Is Responsible AI?
Responsible AI refers to designing, developing, and deploying AI systems in ways that:
Minimize harm
Promote fairness and trust
Respect privacy and security
Provide transparency and accountability
Microsoft has formalized this through its Responsible AI Principles, which are directly reflected in Azure AI services and exam questions.
Why Responsible AI Matters for Generative AI
Generative AI introduces unique risks, including:
Producing biased or harmful content
Generating incorrect or misleading information (hallucinations)
Exposing sensitive or copyrighted data
Being misused for impersonation or misinformation
Because generative AI creates content dynamically, guardrails and safeguards are critical.
Microsoft’s Responsible AI Principles (Exam-Relevant)
1. Fairness
Definition: AI systems should treat all people fairly and avoid bias.
Generative AI Example: A text-generation model should not produce discriminatory language based on race, gender, age, or religion.
Azure Support:
Bias evaluation
Content filtering
Prompt design best practices
Exam Clue Words: bias, discrimination, fairness
2. Reliability and Safety
Definition: AI systems should perform consistently and safely under expected conditions.
Generative AI Example: A chatbot should avoid generating dangerous instructions or harmful advice.
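As a rough illustration of such a guardrail, the sketch below screens a generated reply with Azure AI Content Safety before it is shown to a user. The azure-ai-contentsafety package is assumed, the endpoint and key are placeholders, and the severity threshold is an arbitrary example rather than a recommended setting.

```python
# Minimal sketch: screening generated text with Azure AI Content Safety before display.
# Endpoint and key are placeholders; the severity threshold of 2 is an arbitrary example.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

generated_reply = "Here is the chatbot's draft answer..."
result = client.analyze_text(AnalyzeTextOptions(text=generated_reply))

# Each entry reports a harm category (e.g. Hate, Violence) and a severity score.
if any(item.severity and item.severity >= 2 for item in result.categories_analysis):
    print("Response blocked by content filter.")
else:
    print(generated_reply)
```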
Question 1
What is the primary purpose of Azure AI Foundry?
A. To provide pre-trained computer vision models only B. To host virtual machines for AI workloads C. To provide a unified platform for building, customizing, and managing generative AI solutions D. To replace Azure Machine Learning
✅ Correct Answer: C
Explanation: Azure AI Foundry is a unified platform designed to help teams build, customize, deploy, and manage generative AI applications at scale. It does not replace Azure ML but complements it.
Question 2
Which capability of Azure AI Foundry allows organizations to compare and select the most appropriate model for a specific use case?
A. Role-Based Access Control (RBAC) B. Model catalog and benchmarking C. Azure Monitor integration D. Speech synthesis APIs
✅ Correct Answer: B
Explanation: Azure AI Foundry includes a model catalog with tools to compare and benchmark multiple models, helping teams choose the best model based on performance, cost, or task suitability.
Question 3
A development team wants to create an AI system that can autonomously perform tasks and collaborate with other AI components. Which Azure AI Foundry capability supports this scenario?
A. Image classification B. Agent orchestration C. Text analytics D. Speech recognition
✅ Correct Answer: B
Explanation: Azure AI Foundry supports AI agents and multi-agent workflows, enabling autonomous task execution and collaboration across agents.
Question 4
Which feature makes Azure AI Foundry suitable for enterprise environments?
A. Open-source licensing B. Built-in gaming engines C. Governance, monitoring, and role-based access controls D. Support for only a single AI model
✅ Correct Answer: C
Explanation: Enterprise readiness comes from security, governance, RBAC, monitoring, and compliance controls, all of which are core features of Azure AI Foundry.
Question 5
Which task can be performed using Azure AI Foundry?
A. Only training custom neural networks from scratch B. Managing physical AI hardware C. Fine-tuning generative AI models for domain-specific use cases D. Replacing Azure App Service
✅ Correct Answer: C
Explanation: Azure AI Foundry allows fine-tuning and optimization of generative AI models to adapt them to specific business or domain requirements.
Question 6
What stage of the AI lifecycle is supported by Azure AI Foundry?
A. Only model training B. Only deployment C. Only monitoring D. The full lifecycle from experimentation to production and monitoring
✅ Correct Answer: D
Explanation: Azure AI Foundry supports the entire AI lifecycle, including experimentation, development, deployment, monitoring, and continuous improvement.
Question 7
Which scenario best matches the use of Azure AI Foundry?
A. Classifying images of animals B. Translating text between languages C. Building an enterprise chatbot that uses multiple AI models and enforces governance D. Running batch SQL queries
✅ Correct Answer: C
Explanation: Azure AI Foundry is designed for complex generative AI scenarios, such as enterprise chatbots that require multiple models, orchestration, and governance.
Question 8
How does Azure AI Foundry integrate with other Azure services?
A. It operates completely independently B. It only integrates with Azure OpenAI C. It integrates with services like Azure App Service, Cosmos DB, and Logic Apps D. It replaces all other Azure AI services
✅ Correct Answer: C
Explanation: Azure AI Foundry integrates deeply with the Azure ecosystem, allowing generative AI solutions to be embedded into broader applications and workflows.
Question 9
Which feature helps control access and usage of AI resources in Azure AI Foundry?
A. Prompt engineering B. Role-Based Access Control (RBAC) C. Image tagging D. Speech transcription
✅ Correct Answer: B
Explanation: RBAC ensures that users and teams only have access to the resources and actions they are authorized to use, supporting secure enterprise deployments.
Question 10
On the AI-900 exam, when should you select Azure AI Foundry as the correct answer?
A. When the question focuses on basic image processing B. When the question mentions simple sentiment analysis C. When the scenario describes building, managing, and governing generative AI applications at scale D. When the question requires only translation services
✅ Correct Answer: C
Explanation: Azure AI Foundry is the best choice when the scenario involves enterprise-scale generative AI, including model selection, agents, lifecycle management, and governance.
Azure AI Foundry — now commonly referred to as Microsoft Foundry — is a unified Azure platform for developing, managing, and scaling enterprise-grade generative AI applications. It brings together models, tools, governance, and infrastructure into a single, interoperable environment, making it easier for teams to build, deploy, and operate AI apps and agents securely and consistently.
For AI-900 purposes, think of Foundry as a comprehensive hub for generative AI development on Azure — far beyond just model hosting — that enables rapid innovation with governance and enterprise readiness built in.
Core Capabilities of Azure AI Foundry
📌 1. Unified AI Development Platform
Foundry provides a single platform for AI teams and developers to:
Explore and compare a broad catalog of foundational models
Build, test, and customize generative AI solutions
Monitor and refine models over time
This reduces complexity and streamlines workflows compared with managing disparate tools.
🧠 2. Vast Model Catalog & Interoperability
Foundry gives access to thousands of models from multiple sources:
Frontier and open models from Microsoft
Models from OpenAI
Third-party models (e.g., Meta, Mistral)
Partner and community models
Teams can benchmark and compare models for specific tasks before selecting one for production.
⚙️ 3. Customization and Optimization
Foundry provides tools to help you:
Fine-tune models for specific domain needs
Distill or upgrade models to improve quality or reduce cost
Route workloads to the best performing model for a given request
Automated routing helps balance performance vs cost in production AI applications.
🤖 4. Build Agents and Intelligent Workflows
With Foundry, developers can build:
AI agents that perform tasks autonomously
Multi-agent systems where agents collaborate to solve complex problems
RPA-like automation and AI-driven business logic
These agents can be integrated into apps, bots, or workflow systems to respond, act, and collaborate with users.
🔐 5. Enterprise-Ready Governance and Security
Foundry includes enterprise-grade tools to manage:
Role-Based Access Control (RBAC)
Monitoring, logging, and audit trails
Secure access and isolation between teams
Compliance with organizational policies
This makes it suitable for large teams and critical use cases.
🛠 6. Integrated Tools and Templates
Foundry includes:
Pre-built solution templates for common AI patterns (e.g., Q&A bots, document assistants)
SDKs and APIs for Python, C#, and other languages
IDE integrations (e.g., Visual Studio Code extensions)
These accelerate development and reduce the learning curve.
🔄 7. End-to-End Lifecycle Support
Foundry supports the full AI project lifecycle:
Experimentation with models
Development of applications or workflows
Testing and evaluation
Deployment to production
Monitoring and refinement for optimization
This means teams can start with prototypes and scale seamlessly.
🧩 8. Integration with Azure Ecosystem
Foundry is not limited to AI models — it integrates with other Azure services, such as:
Azure App Service
Azure Container Apps
Azure Cosmos DB
Azure Logic Apps
Microsoft 365 and Teams
This allows generative AI features to be embedded into broader enterprise systems.
Scenarios Where Azure AI Foundry Is Used
Foundry supports many generative AI workloads, including:
Conversational agents and bots
Knowledge-powered search and assistants
Context-aware automation
Enterprise RAG (Retrieval-Augmented Generation)
AI-powered workflows and multi-agent orchestration
Its focus on flexibility and scale makes it suitable for both prototyping and enterprise production.
How Foundry Relates to Other Azure Generative AI Services
Capability | Azure AI Foundry | Other Azure Services
Model hosting & comparison | ✅ | Azure OpenAI / Azure AI services
Multi-model catalog | ✅ | Individual service catalogs
Fine-tuning & optimization | ✅ | Azure Machine Learning
Build agents & workflows | ✅ | Azure AI Language / Bots
Governance & enterprise features | ✅ | Core Azure security services
Rapid prototyping templates | ✅ | Individual service templates
Foundry’s value is in bringing these capabilities together into a unified platform.
Exam Tips for AI-900
Foundry is the answer when a question describes building, customizing, and governing enterprise generative AI solutions at scale.
It is not just a model API, but a platform for development, deployment, and lifecycle management of generative AI apps.
If a question mentions agents, workflows, integrated governance, or multi-model support for generative workloads, think Azure AI Foundry / Microsoft Foundry.
Key Takeaways
Azure AI Foundry (Microsoft Foundry) is a unified enterprise AI platform for generative AI development on Azure.
It provides model catalogs, customization, development tools, agents, governance, and integrations.
It supports the full AI application lifecycle — from prototype to production.
It integrates deeply with the Azure ecosystem and supports enterprise-grade governance and security.
Question 1
You need to build a chatbot that can generate natural, human-like responses and maintain context across multiple user interactions. Which Azure service should you use?
A. Azure AI Language B. Azure AI Speech C. Azure OpenAI Service D. Azure AI Vision
Correct Answer: C
Explanation: Azure OpenAI Service provides large language models capable of multi-turn conversational AI. Azure AI Language supports traditional NLP tasks but not advanced generative conversations.
Question 2
Which feature of Azure OpenAI Service enables semantic search by representing text as numerical vectors?
A. Prompt engineering B. Text completion C. Embeddings D. Tokenization
Correct Answer: C
Explanation: Embeddings convert text into vectors that capture semantic meaning, enabling similarity search and retrieval-augmented generation (RAG).
Question 3
An organization wants to generate summaries of long internal documents while ensuring their data is not used to train public models. Which service meets this requirement?
A. Open-source LLM hosted on a VM B. Azure AI Language C. Azure OpenAI Service D. Azure Cognitive Search
Correct Answer: C
Explanation: Azure OpenAI ensures customer data isolation and does not use customer data to retrain models, making it suitable for enterprise and regulated environments.
Question 4
Which type of workload is Azure OpenAI Service primarily designed to support?
A. Predictive analytics B. Generative AI C. Rule-based automation D. Image preprocessing
Correct Answer: B
Explanation: Azure OpenAI focuses on generative AI workloads, including text generation, conversational AI, code generation, and embeddings.
Question 5
A developer wants to build an AI assistant that can explain code, generate new code snippets, and translate code between programming languages. Which Azure service should be used?
A. Azure AI Language B. Azure Machine Learning C. Azure OpenAI Service D. Azure AI Vision
Correct Answer: C
Explanation: Azure OpenAI supports code-capable large language models designed for code generation, explanation, and translation.
Question 6
Which Azure OpenAI capability is MOST useful for building retrieval-augmented generation (RAG) solutions?
A. Chat completion B. Embeddings C. Image generation D. Speech synthesis
Correct Answer: B
Explanation: RAG solutions rely on embeddings to retrieve relevant content based on semantic similarity before generating responses.
Question 7
Which security feature is a key benefit of using Azure OpenAI Service instead of public OpenAI endpoints?
A. Anonymous access B. Built-in image labeling C. Azure Active Directory integration D. Automatic data labeling
Correct Answer: C
Explanation: Azure OpenAI integrates with Azure Active Directory and RBAC, providing enterprise-grade authentication and access control.
Question 8
A solution requires generating marketing copy, summarizing customer feedback, and answering user questions in natural language. Which Azure service best supports all these requirements?
A. Azure AI Language B. Azure OpenAI Service C. Azure AI Vision D. Azure AI Search
Correct Answer: B
Explanation: Azure OpenAI excels at generating and transforming text using large language models, covering all described scenarios.
Question 9
Which statement BEST describes how Azure OpenAI Service handles customer data?
A. Customer data is used to retrain models globally B. Customer data is publicly accessible C. Customer data is isolated and not used for model training D. Customer data is stored permanently without controls
Correct Answer: C
Explanation: Azure OpenAI ensures data isolation and does not use customer prompts or responses to retrain foundation models.
Question 10
When should you choose Azure OpenAI Service instead of Azure AI Language?
A. When performing key phrase extraction B. When detecting named entities C. When generating original text or conversational responses D. When identifying sentiment polarity
Correct Answer: C
Explanation: Azure AI Language is designed for traditional NLP tasks, while Azure OpenAI is used for generative AI tasks such as text generation and conversational AI.
Final Exam Tip
If the scenario involves creating new content, chatting naturally, generating code, or semantic understanding at scale, the correct answer is likely related to Azure OpenAI Service.
The Azure OpenAI Service provides access to powerful OpenAI large language models (LLMs)—such as GPT models—directly within the Microsoft Azure cloud environment. It enables organizations to build generative AI applications while benefiting from Azure’s security, compliance, governance, and enterprise integration capabilities.
For the AI-900 exam, Azure OpenAI is positioned as Microsoft’s primary service for generative AI workloads, especially those involving text, code, and conversational AI.
What Is Azure OpenAI Service?
Azure OpenAI Service allows developers to deploy, customize, and consume OpenAI models using Azure-native tooling, APIs, and security controls.
Key characteristics:
Hosted and managed by Microsoft Azure
Provides enterprise-grade security and compliance
Uses REST APIs and SDKs
Integrates seamlessly with other Azure services
👉 On the exam, Azure OpenAI is the correct answer when a scenario describes generative AI powered by large language models.
Core Capabilities of Azure OpenAI Service
1. Access to Large Language Models (LLMs)
Azure OpenAI provides access to advanced models such as:
GPT models for text generation and understanding
Chat models for conversational AI
Embedding models for semantic search and retrieval
Code-focused models for programming assistance
These models can:
Generate human-like text
Answer questions
Summarize content
Write code
Explain concepts
Generate creative content
2. Text and Content Generation
Azure OpenAI can generate:
Articles, emails, and reports
Chatbot responses
Marketing copy
Knowledge base answers
Product descriptions
Exam tip: If the question mentions writing, summarizing, or generating text, Azure OpenAI is likely the answer.
3. Conversational AI (Chatbots)
Azure OpenAI supports natural, multi-turn conversations, making it ideal for:
Customer support chatbots
Virtual assistants
Internal helpdesk bots
AI copilots
These chatbots (see the sketch after this list):
Maintain conversation context
Generate natural responses
Can be grounded in enterprise data
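Here is a minimal sketch of how that context is typically maintained: the prior turns are resent with each request so the model can interpret follow-up questions. The endpoint, key, and deployment name are placeholders, as in the earlier sketches.

```python
# Minimal sketch: maintaining multi-turn context by resending prior messages on each call.
# Endpoint, key, and deployment name are placeholders for illustration only.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

conversation = [
    {"role": "system", "content": "You are a helpful internal helpdesk assistant."},
    {"role": "user", "content": "How do I reset my VPN password?"},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=conversation)
conversation.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up only makes sense because the earlier turns are included in the request.
conversation.append({"role": "user", "content": "Does that also work from a personal laptop?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=conversation)
print(second.choices[0].message.content)
```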
4. Code Generation and Assistance
Azure OpenAI can:
Generate code snippets
Explain existing code
Translate code between languages
Assist with debugging
This makes it valuable for developer productivity tools and AI-assisted coding scenarios.
5. Embeddings and Semantic Search
Azure OpenAI can create vector embeddings that represent the meaning of text.
Use cases include:
Semantic search
Document similarity
Recommendation systems
Retrieval-augmented generation (RAG)
Exam tip: If the scenario mentions searching based on meaning rather than keywords, think embeddings + Azure OpenAI.
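The following sketch illustrates the idea: a query and a few documents are embedded with an Azure OpenAI embedding deployment and ranked by cosine similarity. The deployment name "text-embedding-3-small" and the environment variables are assumptions for illustration.

```python
# Minimal sketch: semantic search with Azure OpenAI embeddings and cosine similarity.
# Endpoint, key, and the embedding deployment name are placeholders.
import math
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def embed(text: str) -> list[float]:
    """Return the embedding vector for a piece of text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return result.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = [
    "How to submit an expense report",
    "Resetting your VPN password",
    "Office holiday calendar",
]
query_vec = embed("I forgot my remote access credentials")

# Rank documents by semantic similarity to the query, not by keyword overlap.
ranked = sorted(docs, key=lambda d: cosine(query_vec, embed(d)), reverse=True)
print(ranked[0])
```

In a RAG solution, the top-ranked documents would then be passed to a chat model as grounding context.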
6. Enterprise Security and Compliance
One of the most important exam points:
Azure OpenAI provides:
Data isolation
No training on customer data
Azure Active Directory integration
Role-Based Access Control (RBAC)
Compliance with Microsoft standards
This makes it suitable for regulated industries.
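One way this shows up in practice is keyless authentication: instead of an API key, the client obtains a Microsoft Entra ID (Azure AD) token through the azure-identity package. This is a minimal sketch, assuming the signed-in identity has an appropriate role assignment on the Azure OpenAI resource; the endpoint and deployment name are placeholders.

```python
# Minimal sketch: authenticating to Azure OpenAI with Microsoft Entra ID (Azure AD)
# instead of an API key, using the azure-identity package.
import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # your deployment name
    messages=[{"role": "user", "content": "Summarize why keyless authentication matters for compliance."}],
)
print(response.choices[0].message.content)
```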
7. Integration with Azure Services
Azure OpenAI integrates with:
Azure AI Foundry
Azure AI Search
Azure Machine Learning
Azure App Service
Azure Functions
Azure Logic Apps
This allows organizations to build end-to-end generative AI solutions within Azure.
Common Use Cases Tested on AI-900
You should associate Azure OpenAI with:
Chatbots and conversational agents
Text generation and summarization
AI copilots
Semantic search
Code generation
Enterprise generative AI solutions
Azure OpenAI vs Other Azure AI Services (Exam Perspective)
Service | Primary Focus
Azure OpenAI | Generative AI using large language models
Azure AI Language | Traditional NLP (sentiment, entities, key phrases)
Azure AI Vision | Image analysis and OCR
Azure AI Speech | Speech-to-text and text-to-speech
Azure AI Foundry | End-to-end generative AI app lifecycle
Key Exam Takeaways
For AI-900, remember:
Azure OpenAI = Generative AI
Best for text, chat, code, and embeddings
Enterprise-ready with security and compliance
Uses pre-trained OpenAI models
Integrates with the broader Azure ecosystem
One-Line Exam Rule
If the question describes generating new content using large language models in Azure, the answer is likely related to Azure OpenAI Service.
Question 1
What is the primary purpose of the Azure AI Foundry model catalog?
A. To store training datasets for Azure Machine Learning B. To centrally discover, compare, and deploy AI models C. To monitor AI model performance in production D. To automatically fine-tune all deployed models
✅ Correct Answer: B
Explanation: The Azure AI Foundry model catalog is a centralized repository that allows users to discover, evaluate, compare, and deploy AI models from Microsoft and partner providers. It is not primarily used for dataset storage or monitoring.
Question 2
Which types of models are available in the Azure AI Foundry model catalog?
A. Only Microsoft-built models B. Only open-source community models C. Models from Microsoft and multiple third-party providers D. Only models trained within Azure Machine Learning
✅ Correct Answer: C
Explanation: The model catalog includes models from Microsoft, OpenAI, Meta, Anthropic, Cohere, and other partners, giving users access to a diverse range of generative and AI models.
Question 3
Which feature helps users compare models within the Azure AI Foundry model catalog?
A. Azure Cost Management B. Model leaderboards and benchmarking C. AutoML pipelines D. Feature engineering tools
✅ Correct Answer: B
Explanation: The model catalog includes leaderboards and benchmark metrics, allowing users to compare models based on performance characteristics and suitability for specific tasks.
Question 4
What information is typically included in a model card in the Azure AI Foundry model catalog?
A. Only pricing details B. Only deployment scripts C. Metadata such as capabilities, limitations, and licensing D. Only training dataset information
✅ Correct Answer: C
Explanation: Model cards provide descriptive metadata, including model purpose, supported tasks, licensing terms, and usage considerations, helping users make informed decisions.
Question 5
Which deployment option allows you to consume a model without managing infrastructure?
A. Managed compute B. Dedicated virtual machines C. Serverless API deployment D. On-premises deployment
✅ Correct Answer: C
Explanation: Serverless API deployment (Models-as-a-Service) allows users to call models via APIs without managing underlying infrastructure, making it ideal for rapid development and scalability.
Question 6
What is a key benefit of having search and filtering in the model catalog?
A. It automatically selects the best model B. It restricts models to one provider C. It helps users quickly find models that match specific needs D. It enforces Responsible AI policies
✅ Correct Answer: C
Explanation: Search and filtering features allow users to narrow down models based on capabilities, provider, task type, and deployment options, speeding up model selection.
Question 7
Which AI workload is the Azure AI Foundry model catalog most closely associated with?
A. Traditional rule-based automation B. Predictive analytics dashboards C. Generative AI solutions D. Network security monitoring
✅ Correct Answer: C
Explanation: The model catalog is a core capability supporting generative AI workloads, such as text generation, chat, summarization, and multimodal applications.
Question 8
Why might an organization choose managed compute instead of a serverless API deployment?
A. To avoid version control B. To reduce accuracy C. To gain more control over performance and resources D. To eliminate licensing requirements
✅ Correct Answer: C
Explanation: Managed compute provides greater control over performance, scaling, and resource allocation, which can be important for predictable workloads or specialized use cases.
Question 9
Which scenario best illustrates the use of the Azure AI Foundry model catalog?
A. Writing SQL queries for data analysis B. Comparing multiple large language models before deployment C. Creating Power BI dashboards D. Training image classification models from scratch
✅ Correct Answer: B
Explanation: The model catalog is designed to help users evaluate and compare models before deploying them into generative AI applications.
Question 10
For the AI-900 exam, which statement best describes the Azure AI Foundry model catalog?
A. A low-level training engine for custom neural networks B. A centralized hub for discovering and deploying AI models C. A compliance auditing tool D. A replacement for Azure Machine Learning
✅ Correct Answer: B
Explanation: For AI-900, the key takeaway is that the model catalog acts as a central hub that simplifies model discovery, comparison, and deployment within Azure’s generative AI ecosystem.
🔑 Exam Tip
If an AI-900 question mentions:
Choosing between multiple generative models
Evaluating model performance or benchmarks
Using models from different providers in Azure
👉 The correct answer is very likely related to the Azure AI Foundry model catalog.
The Azure AI Foundry model catalog (also known as Microsoft Foundry Models) is a centralized, searchable repository of AI models that developers and organizations can use to build generative AI solutions on Azure. It contains hundreds to thousands of models from multiple providers — including Microsoft, OpenAI, Anthropic, Meta, Cohere, DeepSeek, NVIDIA, and more — and provides tools to explore, compare, and deploy them for various AI workloads.
The model catalog is a key feature of Azure AI Foundry because it lets teams discover and evaluate the right models for specific tasks before integrating them into applications.
Key Capabilities of the Model Catalog
🌐 1. Wide and Diverse Model Selection
The catalog includes a broad set of models, such as:
Large language models (LLMs) for text generation and chat
Domain-specific models for legal, medical, or industry tasks
Multimodal models that handle text + images
Reasoning and specialized task models
These models come from multiple providers, including Microsoft, OpenAI, Anthropic, Meta, Mistral AI, and more.
This diversity ensures that developers can find models that fit a wide range of use cases, from simple text completion to advanced multi-agent workflows.
🔍 2. Search and Filtering Tools
The model catalog provides tools to help you find the right model by:
Keyword search
Provider and collection filters
Filtering by capabilities (e.g., reasoning, tool calling)
Deployment type (e.g., serverless API vs managed compute)
Inference and fine-tune task types
Industry or domain tags
These filters make it easier to match models to specific AI workloads.
📊 3. Comparison and Benchmarking
The catalog includes features like:
Model performance leaderboards
Benchmark metrics for selected models
Side-by-side comparison tools
This lets organizations evaluate and compare models based on real-world performance metrics before deployment.
This is especially useful when choosing between models for accuracy, cost, or task suitability.
📄 4. Model Cards with Metadata
Each model in the catalog has a model card that provides:
Quick facts about the model
A description
Version and supported data types
Licenses and legal information
Benchmark results (if available)
Deployment status and options
Model cards help users understand model capabilities, constraints, and appropriate use cases.
🚀 5. Multiple Deployment Options
Models in the Foundry catalog can be deployed using:
Serverless API: A “Models as a Service” approach where the model is hosted and managed by Azure, and you pay per API call
Managed compute: Dedicated virtual machines for predictable performance and long-running applications
This gives teams flexibility in choosing cost and performance trade-offs.
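As a rough sketch of the serverless option, the snippet below calls a model deployed as a serverless API using the azure-ai-inference package. The endpoint URL and key are placeholders, and the exact endpoint format depends on the deployment.

```python
# Minimal sketch: calling a model deployed as a serverless API (Models-as-a-Service)
# from Azure AI Foundry, using the azure-ai-inference package. Endpoint and key are placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-serverless-endpoint>.inference.ai.azure.com",  # placeholder URL
    credential=AzureKeyCredential("<your-endpoint-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="In one sentence, what is a serverless model deployment?"),
    ],
)

# No infrastructure is managed by the caller; billing is per API call.
print(response.choices[0].message.content)
```

With managed compute, by contrast, the model runs on dedicated infrastructure that you provision and size yourself, which is the cost/performance trade-off described above.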
⚙️ 6. Integration and Customization
The model catalog isn’t just for discovery — it also supports:
Fine-tuning of models based on your data
Custom deployments within your enterprise environment
Integration with other Azure tools and services, like Azure AI Foundry deployment workflows and AI development tooling
This makes the catalog a foundational piece of end-to-end generative AI development on Azure.
Model Categories in the Catalog
The model catalog is organized into key categories such as:
Models sold directly by Azure: Models hosted and supported by Microsoft with enterprise-grade integration, support, and compliant terms.
Partner and community models: Models developed by external organizations like OpenAI, Anthropic, Meta, or Cohere. These often extend capabilities or offer domain-specific strengths.
This structure helps teams select between fully supported enterprise models and innovative third-party models.
Scenarios Where You Would Use the Model Catalog
The Azure AI Foundry model catalog is especially useful when:
Exploring models for text generation, chat, summarization, or reasoning
Comparing multiple models for accuracy vs cost
Deploying models in different formats (serverless API vs compute)
Integrating models from multiple providers in a single AI pipeline
It is a central discovery and evaluation hub for generative AI on Azure.
How This Relates to AI-900
For the AI-900 exam, you should understand:
The model catalog is a core capability of Azure AI Foundry
It allows discovering, comparing, and deploying models
It supports multiple model providers
It offers deployment options and metadata to guide selection
If a question mentions finding the right generative model for a use case, evaluating model performance, or using a variety of models in Azure, then the Azure AI Foundry model catalog is likely being described.
Summary (Exam Highlights)
Azure AI Foundry model catalog provides discoverability for thousands of AI models.
Models can be filtered, compared, and evaluated.
Catalog entries include useful metadata (model cards) and benchmarking.
Models come from Microsoft and partner providers like OpenAI, Anthropic, Meta, etc.
Deployment options vary between serverless APIs and managed compute.
Practice Exam 1 – 60 Questions (with Answer key at the end)
Note: This practice exam is divided into topic sections to provide context and aid preparation; the real exam does not group questions by topic.
SECTION 1: Describe Artificial Intelligence workloads and considerations (Questions 1–10)
Question 1 (Single choice)
Which scenario is the best example of an AI workload?
A. A rules-based system that routes emails based on keywords B. A dashboard that displays historical sales data C. A system that predicts customer churn based on historical behavior D. A script that automatically renames files
Question 2 (Multi-select – Choose TWO)
Which characteristics are commonly associated with AI solutions?
A. Deterministic outputs B. Ability to improve with experience C. Dependence on labeled or unlabeled data D. Use of static business rules
Question 3 (Scenario – Single choice)
A company wants to automatically approve or reject loan applications based on past decisions and applicant attributes. Which AI workload type does this represent?
A. Computer vision B. Anomaly detection C. Classification D. Natural language processing
Question 4 (Matching)
Match each AI workload to its correct description:
AI Workload | Description
1. Classification | A. Identify unusual patterns
2. Regression | B. Assign items to categories
3. Clustering | C. Group similar items without labels
4. Anomaly detection | D. Predict numeric values
Question 5 (Single choice)
Which factor is most important when evaluating the ethical impact of an AI solution?
A. Processing speed B. Model size C. Potential bias in training data D. Storage cost
Question 6 (Scenario – Single choice)
An AI system used for hiring consistently favors one demographic group. Which Responsible AI principle is most directly violated?
A. Reliability B. Transparency C. Fairness D. Privacy
Question 7 (Multi-select – Choose TWO)
Which scenarios would typically require human oversight when deploying AI solutions?
A. Medical diagnosis recommendations B. Image resizing C. Credit approval decisions D. Log file compression
Question 8 (Fill in the blank)
The ability for users to understand how an AI model makes decisions relates to the principle of __________.
Question 9 (Single choice)
Which workload is best suited for predicting future sales revenue?
A. Classification B. Regression C. Clustering D. Object detection
Question 10 (Scenario – Single choice)
A system groups customers into segments without predefined labels. Which AI approach is being used?
A. Supervised learning B. Reinforcement learning C. Unsupervised learning D. Deep learning
SECTION 2: Describe fundamental principles of machine learning on Azure (Questions 11–20)
Question 11 (Single choice)
Which Azure service is primarily used to build, train, and deploy machine learning models?
A. Azure AI Vision B. Azure Machine Learning C. Azure OpenAI D. Azure Cognitive Search
Question 12 (Multi-select – Choose TWO)
Which elements are required to train a supervised machine learning model?
A. Labeled data B. Feature engineering C. Pretrained transformers D. Inference endpoints
Question 13 (Scenario – Single choice)
You want to predict house prices based on size, location, and age. Which type of machine learning model should you use?
A. Classification B. Regression C. Clustering D. Anomaly detection
Question 14 (Single choice)
Which term describes input variables used by a machine learning model?