
Practice Questions: Describe capabilities of the Azure AI Vision service (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A company wants to automatically generate short descriptions such as “A group of people standing on a beach” for images uploaded to its website. No model training is required.

Which Azure service should be used?

A. Azure Machine Learning
B. Azure AI Vision image analysis
C. Azure Custom Vision
D. Azure OpenAI Service

Correct Answer: B

Explanation:
Azure AI Vision image analysis can generate natural language descriptions of images using prebuilt models. Azure Machine Learning and Custom Vision require model training, and Azure OpenAI Service focuses on generative AI rather than prebuilt image analysis.
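
Although the AI-900 exam never requires writing code, a short illustration can make "prebuilt, no training" concrete. The sketch below is a minimal example that assumes the azure-ai-vision-imageanalysis Python package and placeholder endpoint, key, and image URL values; it asks the prebuilt image analysis model for a caption.

```python
# pip install azure-ai-vision-imageanalysis
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Vision resource.
client = ImageAnalysisClient(
    endpoint="https://<your-vision-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Hypothetical image URL; the prebuilt model returns a caption with a confidence score.
result = client.analyze_from_url(
    image_url="https://example.com/beach.jpg",
    visual_features=[VisualFeatures.CAPTION],
)

print(result.caption.text, result.caption.confidence)
```

No model is trained or deployed by the caller; the caption comes directly from the prebuilt service.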


Question 2

Which Azure AI Vision capability extracts printed and handwritten text from scanned documents and images?

A. Image tagging
B. Object detection
C. Optical Character Recognition (OCR)
D. Facial analysis

Correct Answer: C

Explanation:
OCR is specifically designed to detect and extract text from images, including scanned documents and handwritten content.


Question 3

A developer needs to identify objects in an image and return their locations using bounding boxes.

Which Azure AI Vision feature should be used?

A. Image classification
B. Image tagging
C. Object detection
D. Image description

Correct Answer: C

Explanation:
Object detection identifies what objects are present and where they are located using bounding boxes and confidence scores.


Question 4

Which capability of Azure AI Vision can detect faces and return attributes such as estimated age and facial expression?

A. Facial recognition
B. Facial detection and facial analysis
C. Image classification
D. Custom Vision

Correct Answer: B

Explanation:
Azure AI Vision supports facial detection and analysis, which provides facial attributes but does not identify individuals.


Question 5

A solution must automatically assign keywords like “outdoor”, “food”, or “animal” to images for search and organization.

Which Azure AI Vision feature meets this requirement?

A. OCR
B. Object detection
C. Image tagging
D. Facial analysis

Correct Answer: C

Explanation:
Image tagging assigns descriptive labels to images to improve categorization and searchability.


Question 6

Which statement best describes Azure AI Vision?

A. It requires training a custom model for each scenario
B. It provides prebuilt computer vision capabilities through APIs
C. It is only used for facial recognition
D. It can only analyze video streams

Correct Answer: B

Explanation:
Azure AI Vision offers prebuilt computer vision models accessed via APIs, requiring no model training.


Question 7

A company wants to analyze images quickly without building or training a machine learning model.

Which Azure service is most appropriate?

A. Azure Machine Learning
B. Azure Custom Vision
C. Azure AI Vision
D. Azure Databricks

Correct Answer: C

Explanation:
Azure AI Vision is designed for quick deployment using prebuilt models, making it ideal when no custom training is required.


Question 8

Which task is NOT a capability of Azure AI Vision?

A. Detecting objects in an image
B. Extracting text from images
C. Identifying specific individuals in photos
D. Generating image descriptions

Correct Answer: C

Explanation:
Azure AI Vision does not identify individuals. Facial recognition and identity verification are restricted and not required for AI-900.


Question 9

A scenario mentions analyzing images while following Microsoft’s Responsible AI principles, particularly around privacy and fairness.

Which Azure AI Vision feature is most closely associated with these considerations?

A. Image tagging
B. Facial detection and analysis
C. OCR
D. Object detection

Correct Answer: B

Explanation:
Facial detection and analysis involve human data and are closely tied to privacy, fairness, and transparency considerations.


Question 10

When should Azure AI Vision be used instead of Azure Custom Vision?

A. When you need a highly specialized image classification model
B. When you want full control over training data
C. When you need prebuilt image analysis without training
D. When labeling thousands of custom images

Correct Answer: C

Explanation:
Azure AI Vision is ideal for prebuilt, general-purpose image analysis scenarios. Custom Vision is used when custom training is required.


Final Exam Tips for This Topic

  • Think prebuilt vs custom
  • Azure AI Vision = no training
  • OCR = text extraction
  • Object detection = what + where
  • Facial analysis ≠ facial recognition

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe Capabilities of the Azure AI Face Detection Service (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A company wants to detect whether human faces appear in uploaded images and draw bounding boxes around them. The solution must not identify individuals.

Which Azure service should be used?

A. Azure Custom Vision
B. Azure AI Vision image classification
C. Azure AI Face detection
D. Azure OpenAI Service

Correct Answer: C

Explanation:
Azure AI Face detection is designed to detect faces and return their locations without identifying individuals. This aligns with privacy requirements and AI-900 expectations.


Question 2

Which task is supported by Azure AI Face detection?

A. Verifying a person’s identity against a database
B. Detecting the presence of human faces in an image
C. Training a custom facial recognition model
D. Authenticating users using facial biometrics

Correct Answer: B

Explanation:
Azure AI Face detection can detect faces and analyze facial attributes, but it does not perform identity verification or authentication.


Question 3

What type of information can Azure AI Face detection return for each detected face?

A. Person’s name and ID
B. Bounding box and facial attributes
C. Social media profile matches
D. Voice and speech characteristics

Correct Answer: B

Explanation:
The service returns face location (bounding box) and facial attributes such as estimated age or expression, not personal identity data.


Question 4

A scenario requires estimating whether people in an image appear to be smiling.

Which Azure AI Face detection capability supports this requirement?

A. Face identification
B. Facial attribute analysis
C. Image classification
D. Object detection

Correct Answer: B

Explanation:
Facial attribute analysis provides descriptive information such as facial expression, including whether a face appears to be smiling.


Question 5

Which statement best describes Azure AI Face detection for the AI-900 exam?

A. It requires training a custom dataset
B. It identifies known individuals in photos
C. It uses prebuilt models to analyze faces
D. It can only analyze video streams

Correct Answer: C

Explanation:
Azure AI Face detection uses pretrained models and requires no custom training, which is a key exam concept.


Question 6

A developer wants to count how many people appear in a group photo.

Which Azure AI service capability should be used?

A. OCR
B. Image tagging
C. Face detection
D. Image classification

Correct Answer: C

Explanation:
Face detection can identify multiple faces in a single image, making it suitable for counting people.


Question 7

Why is Azure AI Face detection closely associated with Responsible AI principles?

A. It uses unsupervised learning
B. It processes sensitive human biometric data
C. It requires large datasets
D. It supports only public images

Correct Answer: B

Explanation:
Facial data is considered sensitive personal data, so privacy, fairness, and transparency are especially important.


Question 8

Which scenario would be inappropriate for Azure AI Face detection?

A. Detecting faces in event photos
B. Estimating facial expressions
C. Identifying a person by name from an image
D. Drawing bounding boxes around faces

Correct Answer: C

Explanation:
Azure AI Face detection does not identify individuals. Identity recognition is outside the scope of AI-900 and restricted for ethical reasons.


Question 9

Which principle ensures users are informed when facial analysis is being used?

A. Reliability
B. Transparency
C. Inclusiveness
D. Sustainability

Correct Answer: B

Explanation:
Transparency requires that people understand when and how AI systems, such as facial detection, are being used.


Question 10

When comparing Azure AI Face detection with object detection, which statement is correct?

A. Object detection returns facial attributes
B. Face detection identifies any object in an image
C. Face detection focuses specifically on human faces
D. Both services identify individuals

Correct Answer: C

Explanation:
Face detection is specialized for human faces, while object detection identifies general objects like cars, animals, or furniture.


Exam Tip Recap 🔑

  • Face detection ≠ face recognition
  • Detects faces, locations, and attributes
  • Uses prebuilt models
  • Strong ties to Responsible AI

Go to the AI-900 Exam Prep Hub main page.

Describe Capabilities of the Azure AI Face Detection Service (AI-900 Exam Prep)

Overview

The Azure AI Face Detection service (part of Azure AI Vision) provides prebuilt computer vision capabilities to detect human faces in images and return structured information about those faces. For the AI-900: Microsoft Azure AI Fundamentals exam, the focus is on understanding what the service can do, what it cannot do, and how it aligns with Responsible AI principles.

This service uses pretrained models and can be accessed through REST APIs or SDKs without building or training a custom machine learning model.


What Is Face Detection (at the AI-900 level)?

Face detection answers the question:

“Is there a human face in this image, and what are its characteristics?”

It does not answer:

“Who is this person?”

This distinction is critical for the AI-900 exam.


Core Capabilities of Azure AI Face Detection

1. Face Detection

The service can:

  • Detect one or more human faces in an image
  • Return the location of each face using bounding boxes
  • Assign a confidence score to each detected face

This capability is commonly used for:

  • Photo moderation
  • Counting people in images
  • Identifying whether faces are present at all

2. Facial Attribute Analysis

For each detected face, the service can analyze and return attributes such as:

  • Estimated age range
  • Facial expression (for example, neutral or smiling)
  • Head pose (orientation of the face)
  • Glasses or accessories
  • Hair-related attributes

These attributes are descriptive and probabilistic, not definitive.
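
The exam does not test implementation, but for context, here is a minimal sketch that calls the Face detect REST endpoint using the requests library. The endpoint, key, and image URL are placeholders for your own resource; the response is a list of detected faces with bounding boxes and no identity information.

```python
# pip install requests
import requests

# Placeholder values for an Azure AI Face (or Vision) resource.
endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"
key = "<your-key>"

response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={"detectionModel": "detection_03", "returnFaceId": "false"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/group-photo.jpg"},  # hypothetical image URL
)
response.raise_for_status()

# Each detected face includes a bounding box; no names or identities are returned.
for face in response.json():
    print(face["faceRectangle"])  # top, left, width, height
```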


3. Multiple Face Detection

Azure AI Face Detection can:

  • Detect multiple faces in a single image
  • Return attributes for each detected face independently

This is useful in scenarios like:

  • Group photos
  • Crowd analysis
  • Event imagery

What Azure AI Face Detection Does NOT Do

Understanding limitations is frequently tested on AI-900.

The service does NOT:

  • Identify or verify individuals
  • Perform facial recognition for authentication
  • Match faces against a database of known people

Any functionality related to identity recognition falls outside the scope of AI-900 and is intentionally restricted due to privacy and ethical considerations.


Responsible AI Considerations

Facial analysis involves human biometric data, so Microsoft strongly emphasizes Responsible AI principles.

Key considerations include:

  • Privacy: Faces are sensitive personal data
  • Fairness: Models must work consistently across different demographics
  • Transparency: Users should be informed when facial analysis is used
  • Accountability: Humans remain responsible for how outputs are used

For AI-900, you are expected to recognize that facial detection requires extra care compared to other vision tasks like object detection or OCR.


Common AI-900 Exam Scenarios

You may see questions that describe:

  • Detecting whether people appear in an image
  • Returning bounding boxes around faces
  • Analyzing facial attributes without identifying individuals

Correct answers will typically reference:

  • Azure AI Face Detection
  • Prebuilt models
  • No custom training required

Azure AI Face Detection vs Other Vision Capabilities

  • Image classification: Assigns a single label to an image
  • Object detection: Identifies objects and their locations
  • OCR: Extracts text from images
  • Face detection: Detects faces and analyzes attributes

Key Takeaways for the AI-900 Exam

  • Azure AI Face Detection detects faces, not identities
  • It returns locations and attributes, not names
  • It uses pretrained models with no training required
  • Facial analysis requires Responsible AI awareness

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features and Uses for Key Phrase Extraction (AI-900 Exam Prep)

Practice Questions


Question 1

A company wants to automatically identify the main topics discussed in thousands of customer reviews without determining whether the reviews are positive or negative.

Which NLP capability should be used?

A. Sentiment analysis
B. Language detection
C. Key phrase extraction
D. Entity recognition

Correct Answer: C

Explanation:
Key phrase extraction identifies important topics and concepts in text without analyzing emotional tone, making it ideal for summarizing review content.


Question 2

Which output is most likely returned by a key phrase extraction service?

A. A sentiment score between –1 and 1
B. A list of important words or short phrases
C. A detected language code
D. A classification label

Correct Answer: B

Explanation:
Key phrase extraction returns a list of relevant words or phrases that summarize the main ideas of the text.


Question 3

Which Azure service provides key phrase extraction using prebuilt models?

A. Azure Machine Learning
B. Azure AI Vision
C. Azure AI Language
D. Azure Cognitive Search

Correct Answer: C

Explanation:
Key phrase extraction is part of Azure AI Language, which offers prebuilt NLP models accessible via APIs.


Question 4

A support team wants to automatically tag incoming support tickets with topics such as billing, login issues, or performance.

Which NLP capability should they use?

A. Named entity recognition
B. Key phrase extraction
C. Sentiment analysis
D. Speech-to-text

Correct Answer: B

Explanation:
Key phrase extraction identifies important topics in unstructured text, making it suitable for tagging and categorization.


Question 5

Which scenario is NOT a typical use of key phrase extraction?

A. Summarizing the main topics of documents
B. Improving document search and indexing
C. Detecting the emotional tone of text
D. Identifying trending discussion topics

Correct Answer: C

Explanation:
Detecting emotional tone is handled by sentiment analysis, not key phrase extraction.


Question 6

Which statement best describes key phrase extraction for the AI-900 exam?

A. It requires labeled training data
B. It extracts names and dates only
C. It uses pretrained models on unstructured text
D. It classifies text into predefined categories

Correct Answer: C

Explanation:
Key phrase extraction uses pretrained NLP models and works directly on unstructured text without training.


Question 7

A multinational company wants to extract key topics from documents written in multiple languages.

Which feature of Azure AI Language supports this requirement?

A. Custom model training
B. Multi-language support
C. Facial recognition
D. Object detection

Correct Answer: B

Explanation:
Azure AI Language supports multiple languages for key phrase extraction, enabling global text analysis.


Question 8

Which NLP capability focuses on identifying specific items such as names, locations, and dates?

A. Key phrase extraction
B. Sentiment analysis
C. Language detection
D. Entity recognition

Correct Answer: D

Explanation:
Entity recognition extracts specific entities, while key phrase extraction focuses on main topics and concepts.


Question 9

A business wants to quickly understand what large volumes of text are about, without reading every document.

Which benefit of key phrase extraction addresses this need?

A. Emotion detection
B. Automatic topic identification
C. Speech recognition
D. Image analysis

Correct Answer: B

Explanation:
Key phrase extraction automatically identifies important topics, allowing rapid understanding of large text collections.


Question 10

Which responsible AI consideration is most relevant when using key phrase extraction?

A. Identity verification
B. Avoiding misinterpretation of extracted phrases
C. Biometric data protection
D. Facial bias detection

Correct Answer: B

Explanation:
Key phrase extraction outputs are contextual summaries, so users must avoid treating them as definitive conclusions.


Exam Tip Recap 🔑

  • Key phrase extraction = What is this text about?
  • It does not analyze sentiment
  • Uses prebuilt models in Azure AI Language
  • Often paired with search, tagging, and trend analysis


Go to the AI-900 Exam Prep Hub main page.

Identify Features and Uses for Key Phrase Extraction (AI-900 Exam Prep)

Overview

Key phrase extraction is a Natural Language Processing (NLP) capability that identifies the main topics or important terms within unstructured text. In the context of the AI-900: Microsoft Azure AI Fundamentals exam, you are expected to understand what key phrase extraction does, when to use it, and how it differs from other NLP workloads.

In Azure, key phrase extraction is provided through Azure AI Language using prebuilt models, requiring no custom training.


What Is Key Phrase Extraction?

Key phrase extraction answers the question:

“What is this text mainly about?”

It analyzes text and returns a list of relevant words or short phrases that summarize the core ideas.

Example:

“Azure AI provides cloud-based artificial intelligence services for developers.”

Extracted key phrases might include:

  • Azure AI
  • artificial intelligence services
  • cloud-based
  • developers

Core Features of Key Phrase Extraction

1. Automatic Topic Identification

The service automatically identifies:

  • Important concepts
  • Repeated or emphasized terms
  • Meaningful noun phrases

This helps users quickly understand large volumes of text.


2. Works with Unstructured Text

Key phrase extraction can be applied to:

  • Customer reviews
  • Support tickets
  • Emails
  • Social media posts
  • Articles and documents

No formatting or labeling is required.


3. Prebuilt NLP Models

For AI-900 purposes:

  • No model training is required
  • No labeled datasets are needed
  • The service is accessed via API calls or SDKs

This makes it ideal for rapid implementation.
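
To make "accessed via API calls or SDKs" concrete, here is a minimal sketch (not required for AI-900) that assumes the azure-ai-textanalytics Python package and placeholder endpoint and key values; it extracts key phrases from the earlier example sentence.

```python
# pip install azure-ai-textanalytics
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

documents = [
    "Azure AI provides cloud-based artificial intelligence services for developers."
]

# The prebuilt model returns a list of key phrases per document; no training is involved.
for doc in client.extract_key_phrases(documents):
    if not doc.is_error:
        print(doc.key_phrases)
```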


4. Multi-Language Support

Azure AI Language supports multiple languages for key phrase extraction, making it suitable for global applications.


Common Use Cases

Summarizing Customer Feedback

Organizations can extract key phrases from thousands of customer comments to identify:

  • Common complaints
  • Popular features
  • Emerging issues

Search and Indexing

Key phrases can be used to:

  • Improve document search
  • Tag content automatically
  • Enhance content discoverability

Trend and Topic Analysis

By aggregating extracted phrases, businesses can:

  • Identify trending topics
  • Monitor brand mentions
  • Analyze public sentiment themes

Key Phrase Extraction vs Other NLP Workloads

  • Key phrase extraction: Identify main topics in text
  • Sentiment analysis: Determine emotional tone
  • Language detection: Identify the language used
  • Entity recognition: Extract specific entities (names, dates, locations)

Understanding these distinctions is critical for AI-900 exam questions.


Typical AI-900 Exam Scenarios

You may see questions describing:

  • Analyzing large amounts of feedback text
  • Automatically tagging documents
  • Identifying main discussion points without understanding emotion

Correct answers will reference:

  • Key phrase extraction
  • Azure AI Language
  • Prebuilt NLP models

Responsible AI Considerations

Although key phrase extraction does not directly analyze people, responsible usage still includes:

  • Avoiding misinterpretation of extracted phrases
  • Understanding that output is contextual, not definitive
  • Using extracted phrases as decision support, not final judgment

Key Takeaways for the AI-900 Exam

  • Key phrase extraction identifies important topics, not sentiment
  • It works on unstructured text
  • It uses pretrained models in Azure AI Language
  • It complements other NLP workloads rather than replacing them

A strong grasp of when to use key phrase extraction will help you confidently answer AI-900 questions related to Natural Language Processing workloads.


Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify features and uses for sentiment analysis (AI-900 Exam Prep)

Practice Questions


Question 1

What is the primary purpose of sentiment analysis in Natural Language Processing?

A. To identify people, places, and organizations in text
B. To determine the emotional tone of text
C. To translate text between languages
D. To summarize large documents

Correct Answer: B

Explanation:
Sentiment analysis evaluates the emotional tone or opinion expressed in text, such as positive, negative, neutral, or mixed. Entity recognition, translation, and summarization are different NLP tasks.


Question 2

Which Azure service provides sentiment analysis capabilities?

A. Azure Machine Learning
B. Azure AI Vision
C. Azure AI Language
D. Azure Cognitive Search

Correct Answer: C

Explanation:
Sentiment analysis is part of Azure AI Language, which provides pretrained NLP models for analyzing text sentiment, key phrases, entities, and more.


Question 3

A company wants to analyze customer reviews to determine whether feedback is positive or negative. Which AI capability should they use?

A. Key phrase extraction
B. Sentiment analysis
C. Entity recognition
D. Language detection

Correct Answer: B

Explanation:
Sentiment analysis is designed to classify text based on emotional tone, making it ideal for customer reviews and feedback analysis.


Question 4

Which sentiment classifications can Azure AI Language return?

A. Happy, Sad, Angry
B. Positive, Negative, Neutral, Mixed
C. True, False, Unknown
D. Approved, Rejected, Pending

Correct Answer: B

Explanation:
Azure sentiment analysis classifies text into positive, negative, neutral, or mixed sentiments.


Question 5

Which additional information is returned with sentiment analysis results?

A. Translation accuracy
B. Confidence scores
C. Named entities
D. Text summaries

Correct Answer: B

Explanation:
Sentiment analysis includes confidence scores, indicating how strongly the model believes the sentiment classification applies.


Question 6

A support team wants to automatically identify angry customer emails for escalation. Which NLP feature is most appropriate?

A. Entity recognition
B. Key phrase extraction
C. Sentiment analysis
D. Language detection

Correct Answer: C

Explanation:
Sentiment analysis helps detect negative or frustrated emotions, enabling automated prioritization of customer support requests.


Question 7

Which scenario is NOT an appropriate use case for sentiment analysis?

A. Measuring public opinion on social media
B. Identifying dissatisfaction in survey responses
C. Extracting product names from reviews
D. Monitoring brand perception

Correct Answer: C

Explanation:
Extracting product names is a task for entity recognition, not sentiment analysis.


Question 8

Does sentiment analysis in Azure AI Language require custom model training?

A. Yes, labeled data is required
B. Yes, but only for large datasets
C. No, it uses pretrained models
D. Only when using multiple languages

Correct Answer: C

Explanation:
Azure AI Language uses pretrained models, allowing sentiment analysis without building or training custom machine learning models.


Question 9

At which levels can sentiment analysis be applied?

A. Document level only
B. Sentence level only
C. Word level only
D. Document and sentence level

Correct Answer: D

Explanation:
Azure sentiment analysis evaluates sentiment at both the document level and sentence level, allowing more detailed insights.


Question 10

A business wants to understand how customers feel about a product, not what the product is. Which NLP capability should be used?

A. Key phrase extraction
B. Entity recognition
C. Sentiment analysis
D. Language detection

Correct Answer: C

Explanation:
Sentiment analysis focuses on emotional tone, while key phrase extraction and entity recognition focus on content and structure.


Final Exam Tip 🎯

For AI-900, always ask yourself:

“Am I being asked about emotion or opinion?”

If the answer is yes → Sentiment analysis


Go to the AI-900 Exam Prep Hub main page.

Identify Features and Uses for Sentiment Analysis (AI-900 Exam Prep)

Overview

Sentiment analysis is a Natural Language Processing (NLP) capability that determines the emotional tone or opinion expressed in text. In the context of the AI-900 exam, sentiment analysis is tested as a foundational NLP workload and is typically associated with scenarios involving customer feedback, reviews, social media posts, and support interactions.

On Azure, sentiment analysis is provided through Azure AI Language, which offers pretrained models that can analyze text without requiring machine learning expertise.


What Is Sentiment Analysis?

Sentiment analysis evaluates text to identify:

  • Overall sentiment (positive, negative, neutral, or mixed)
  • Confidence scores indicating how strongly the sentiment is expressed
  • Sentence-level sentiment (in addition to document-level sentiment)
  • Opinion mining (identifying sentiment about specific aspects, at a high level)

Example:

“The product works great, but the delivery was slow.”

Sentiment analysis can identify:

  • Positive sentiment about the product
  • Negative sentiment about the delivery
  • An overall mixed sentiment for the entire text

Azure Service Used for Sentiment Analysis

Sentiment analysis is a feature of:

Azure AI Language

Part of Azure AI Services, Azure AI Language provides several NLP capabilities, including:

  • Sentiment analysis
  • Key phrase extraction
  • Entity recognition
  • Language detection

For AI-900:

  • No custom model training is required
  • Prebuilt models are used
  • Text can be analyzed via REST APIs or SDKs

Key Features of Sentiment Analysis

1. Sentiment Classification

Text is classified into:

  • Positive
  • Negative
  • Neutral
  • Mixed

This classification applies at both:

  • Document level
  • Sentence level

2. Confidence Scores

Each sentiment classification includes a confidence score, indicating how strongly the model believes the sentiment applies.

Example:

  • Positive: 0.92
  • Neutral: 0.05
  • Negative: 0.03

Higher confidence scores indicate stronger sentiment.
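
As an optional illustration (AI-900 does not require code), the sketch below assumes the azure-ai-textanalytics Python package and placeholder credentials; it prints the document-level sentiment with its confidence scores, then the sentiment of each sentence.

```python
# pip install azure-ai-textanalytics
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

documents = ["The product works great, but the delivery was slow."]

for doc in client.analyze_sentiment(documents):
    # Document-level sentiment (positive, negative, neutral, or mixed) with confidence scores.
    print(doc.sentiment, doc.confidence_scores)
    # Sentence-level sentiment gives more detailed insight.
    for sentence in doc.sentences:
        print(" ", sentence.sentiment, "-", sentence.text)
```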


3. Multi-Language Support

Azure AI Language supports sentiment analysis across multiple languages, making it suitable for global applications.


4. Pretrained Models

Sentiment analysis:

  • Uses pretrained AI models
  • Requires no labeled data
  • Can be implemented quickly

This aligns with the AI-900 focus on using AI services rather than building models.


Common Use Cases for Sentiment Analysis

1. Customer Feedback Analysis

Analyze:

  • Product reviews
  • Surveys
  • Net Promoter Score (NPS) comments

Goal: Understand customer satisfaction trends at scale.


2. Social Media Monitoring

Organizations analyze social media posts to:

  • Track brand perception
  • Identify emerging issues
  • Measure reaction to announcements or campaigns

3. Support Ticket Prioritization

Sentiment analysis can help:

  • Identify frustrated or angry customers
  • Escalate negative interactions automatically
  • Improve response times

4. Market Research

Sentiment analysis helps companies understand:

  • Public opinion about competitors
  • Trends in consumer sentiment
  • Product reception after launch

What Sentiment Analysis Is NOT Used For

This distinction is commonly tested on the exam.

  • Extract names or dates: Entity recognition
  • Identify important topics: Key phrase extraction
  • Translate text: Translation
  • Detect emotional tone: Sentiment analysis

Sentiment Analysis vs Related NLP Features

Sentiment Analysis vs Key Phrase Extraction

  • Sentiment analysis: How does the user feel?
  • Key phrase extraction: What is the text about?

Sentiment Analysis vs Entity Recognition

  • Sentiment analysis: Emotional tone
  • Entity recognition: Specific items (people, places, dates)

AI-900 Exam Tips 💡

  • Focus on when to use sentiment analysis, not how to implement it
  • Expect scenario-based questions (customer reviews, feedback, tweets)
  • Remember: Sentiment analysis is part of Azure AI Language
  • No training, tuning, or ML pipelines are required for AI-900

Summary

Sentiment analysis is a core NLP workload that enables organizations to automatically evaluate opinions and emotions in text. For the AI-900 exam, you should understand:

  • What sentiment analysis does
  • Common real-world use cases
  • How it differs from other NLP features
  • That it is delivered through Azure AI Language using pretrained models

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Identify Features and Uses for Language Modeling (AI-900 Exam Prep)

Overview

Language modeling is a core concept in Natural Language Processing (NLP) that focuses on enabling machines to understand, generate, and predict human language. In the context of the AI-900 exam, language modeling is not about building models from scratch, but about recognizing what language models do, what problems they solve, and how Azure provides access to them.

Language models power many modern AI experiences, including chatbots, text generation, summarization, translation, and question answering.


What Is a Language Model?

A language model is a type of AI model that learns patterns in language so it can:

  • Predict the next word or token in a sequence
  • Understand context and meaning
  • Generate coherent and contextually relevant text

At a fundamental level, language models calculate the probability of word sequences, which allows them to both interpret and generate language.


Key Features of Language Modeling

1. Text Prediction and Generation

Language models can:

  • Predict the next word in a sentence
  • Generate full sentences, paragraphs, or documents
  • Produce human-like responses in conversations

Example:

“The weather today is very…” → sunny


2. Context Awareness

Modern language models (especially transformer-based models) consider context, not just individual words.

This allows them to:

  • Understand sentence meaning
  • Maintain coherence across multiple sentences
  • Respond appropriately based on prior text

3. Natural Language Understanding and Generation

Language models support both:

  • Understanding text (reading and interpreting meaning)
  • Generating text (writing responses, summaries, or explanations)

This dual capability is central to many NLP workloads.


4. Pretrained Models

In Azure, language modeling typically relies on pretrained models, meaning:

  • No custom training is required
  • Models are already trained on large text datasets
  • Users can immediately apply them to common NLP tasks

This aligns with the AI-900 focus on consuming AI services, not building models.


Common Uses of Language Modeling

1. Chatbots and Virtual Assistants

Language models enable conversational AI by:

  • Understanding user input
  • Generating natural responses
  • Maintaining conversation context

Azure Example:
Chatbots built using Azure OpenAI Service or language-based Azure AI services.


2. Text Completion and Content Generation

Language models can:

  • Auto-complete sentences
  • Generate emails, reports, or documentation
  • Assist with creative writing or code comments

3. Question Answering

Language models can:

  • Interpret natural language questions
  • Generate relevant answers based on context or provided data

This is commonly used in:

  • Help desks
  • Knowledge bases
  • Internal support tools

4. Text Summarization

Language models can:

  • Condense long documents
  • Extract key points
  • Provide concise summaries

This helps users quickly understand large volumes of text.


5. Language Translation and Adaptation

While translation is often a separate NLP workload, language models:

  • Understand sentence structure
  • Preserve meaning across languages
  • Adapt phrasing naturally

Language Modeling in Azure

In Azure, language modeling capabilities are available through services such as:

Azure OpenAI Service

  • Provides access to powerful large language models
  • Supports text generation, chat, summarization, and reasoning tasks
  • Uses pretrained transformer-based models
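
As an optional illustration only (the exam does not test code), the sketch below assumes the openai Python package configured for Azure OpenAI, with placeholder endpoint, key, API version, and deployment name values; it sends a chat request to a deployed language model and prints the generated reply.

```python
# pip install openai
from openai import AzureOpenAI

# Placeholder connection details for an Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the name you gave your model deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what a language model does."},
    ],
)

# The model generates a contextually relevant reply; no custom training is performed.
print(response.choices[0].message.content)
```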

Azure AI Language

  • Focuses on structured NLP tasks
  • Complements language modeling with features like sentiment analysis and entity recognition

For AI-900, it’s important to recognize what language models enable, not the underlying implementation details.


Language Modeling vs Other NLP Tasks (Exam Tip)

  • Sentiment analysis: Emotional tone
  • Entity recognition: Identifying names, places, organizations
  • Key phrase extraction: Important terms
  • Language modeling: Understanding and generating language

If the question involves predicting, generating, or responding with text, language modeling is likely the correct concept.


Why Language Modeling Matters for AI-900

Microsoft includes language modeling in AI-900 to ensure candidates understand:

  • How modern AI systems interact with human language
  • Why conversational AI is possible
  • How Azure provides ready-to-use NLP capabilities

You are not expected to train models — only to identify features, uses, and scenarios.


Exam Takeaway

If a question mentions:

  • Text generation
  • Conversational AI
  • Predicting words or sentences
  • Understanding context in language

👉 Think Language Modeling


Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Identify Features and Uses for Speech Recognition and Synthesis (AI-900 Exam Prep)

Where This Fits in the Exam

  • Exam area: Describe features of Natural Language Processing (NLP) workloads on Azure (15–20%)
  • Sub-area: Identify features of common NLP workload scenarios
  • Key focus: Understanding what speech recognition and synthesis do, when to use them, and which Azure services support them

This topic is highly scenario-driven on the exam.


Overview: Speech in NLP Workloads

Speech-related NLP workloads allow AI systems to:

  • Understand spoken language (speech recognition)
  • Generate spoken language (speech synthesis)

Together, these capabilities enable voice-based interactions such as virtual assistants, voice bots, dictation tools, and accessibility solutions.


Speech Recognition

What Is Speech Recognition?

Speech recognition (also called speech-to-text) is the process of converting spoken audio into written text.

The AI system analyzes:

  • Audio signals
  • Phonemes and pronunciation
  • Language patterns
  • Context

And produces text that represents what was spoken.


Key Features of Speech Recognition

Speech recognition solutions can:

  • Convert live or recorded audio into text
  • Support real-time transcription
  • Handle multiple languages and accents
  • Apply noise reduction
  • Recognize custom vocabulary (e.g., medical or technical terms)
  • Provide timestamps for spoken words or phrases

Common Uses of Speech Recognition

Speech recognition is used when users speak instead of type.

Common scenarios include:

  • Voice commands (e.g., “Turn on the lights”)
  • Call center transcription
  • Meeting and lecture transcription
  • Voice-controlled applications
  • Accessibility tools for users with limited mobility
  • Voice input for chatbots and virtual assistants

Azure Services for Speech Recognition

In Azure, speech recognition is provided by:

Azure AI Speech (Speech service)

Capabilities include:

  • Speech-to-text
  • Real-time and batch transcription
  • Language detection
  • Custom speech models
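
For context only (not required for the exam), a minimal speech-to-text sketch using the azure-cognitiveservices-speech package might look like the following; the key and region are placeholders, and the default microphone is used as the audio input.

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for an Azure AI Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Uses the default microphone; recorded audio can be supplied instead via an AudioConfig.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

result = recognizer.recognize_once()  # listens for a single utterance
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)  # the spoken audio converted to text
```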

Speech Synthesis

What Is Speech Synthesis?

Speech synthesis (also called text-to-speech) is the process of converting written text into spoken audio.

The goal is to produce natural, human-like speech that sounds fluent and expressive.


Key Features of Speech Synthesis

Speech synthesis solutions can:

  • Convert text into spoken audio
  • Use natural-sounding neural voices
  • Support multiple languages and accents
  • Adjust:
    • Pitch
    • Speed
    • Tone
  • Apply SSML (Speech Synthesis Markup Language) for fine control
  • Generate speech for audio files or real-time playback

Common Uses of Speech Synthesis

Speech synthesis is used when systems need to speak to users.

Common scenarios include:

  • Virtual assistants and chatbots
  • Navigation and GPS systems
  • Accessibility tools for visually impaired users
  • Audiobooks and e-learning content
  • Automated announcements
  • Customer service voice bots

Azure Services for Speech Synthesis

In Azure, speech synthesis is also provided by:

Azure AI Speech (Speech service)

Capabilities include:

  • Text-to-speech
  • Neural voices
  • Voice customization
  • Multilingual speech output
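
A matching text-to-speech sketch (again illustrative only, with placeholder credentials and a commonly available neural voice) shows the same Speech service reading text aloud through the default speaker.

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for an Azure AI Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # one of the neural voices

# Plays the synthesized audio through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Your order has shipped and will arrive tomorrow.").get()
```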

Speech Recognition vs Speech Synthesis

Speech recognition:

  • Direction: Speech → Text
  • Input: Audio
  • Output: Text
  • Common name: Speech-to-text
  • Example: Transcribing a call

Speech synthesis:

  • Direction: Text → Speech
  • Input: Text
  • Output: Audio
  • Common name: Text-to-speech
  • Example: Reading text aloud

Combined Speech Workloads

Many real-world solutions use both capabilities together.

Example:

  1. User speaks a question (speech recognition)
  2. System processes the text using NLP or AI logic
  3. System responds verbally (speech synthesis)

This is the foundation of:

  • Voice assistants
  • Conversational AI
  • Interactive voice response (IVR) systems

Exam-Focused Clues to Watch For 👀

On the AI-900 exam, speech workloads are usually described using phrases like:

  • “Convert spoken audio into text” → Speech recognition
  • “Generate spoken responses from text” → Speech synthesis
  • “Voice-enabled application” → Azure AI Speech
  • “Real-time transcription” → Speech recognition
  • “Reads text aloud” → Speech synthesis

Key Takeaways for AI-900

  • Speech recognition converts speech to text
  • Speech synthesis converts text to speech
  • Both are part of NLP workloads
  • Azure AI Speech is the primary Azure service for both
  • Common exam scenarios involve:
    • Voice assistants
    • Transcription
    • Accessibility
    • Customer service automation

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features and Uses for Translation (AI-900 Exam Prep)

Practice Questions


Question 1

Which Azure service is primarily used to translate text between languages?

A. Azure Speech Service
B. Azure Language Service
C. Azure Translator
D. Azure OpenAI Service

Correct Answer: C. Azure Translator

Explanation:
Azure Translator (part of Azure AI Services) is specifically designed for text translation across multiple languages. While other services handle NLP or speech, Translator focuses on multilingual text conversion.


Question 2

A company wants to translate product descriptions on a website in real time for international users. Which feature of Azure Translator best supports this scenario?

A. Batch transcription
B. Real-time REST API translation
C. Sentiment analysis
D. Custom question answering

Correct Answer: B. Real-time REST API translation

Explanation:
Azure Translator provides REST APIs that allow applications and websites to translate text dynamically as users access content.
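
As an optional illustration (the exam does not require code), the sketch below calls the Translator REST API with the requests library; the key, region, and sample text are placeholders, and the source language is detected automatically.

```python
# pip install requests
import requests

# Placeholder key and region for an Azure Translator resource.
key = "<your-key>"
region = "<your-region>"

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": ["fr", "de"]},  # source language is auto-detected
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    },
    json=[{"Text": "Free shipping on all orders this weekend."}],
)
response.raise_for_status()

# The response includes the detected source language and the requested translations.
print(response.json())
```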


Question 3

Which scenario is the best example of using machine translation?

A. Detecting the emotional tone of customer feedback
B. Extracting key phrases from documents
C. Translating an email from English to French
D. Identifying people and locations in text

Correct Answer: C. Translating an email from English to French

Explanation:
Machine translation focuses on converting text from one language to another, which is exactly what this scenario describes.


Question 4

What type of translation does Azure Translator perform by default?

A. Rule-based translation
B. Human-assisted translation
C. Statistical translation
D. Neural machine translation

Correct Answer: D. Neural machine translation

Explanation:
Azure Translator uses Neural Machine Translation (NMT) models, which rely on deep learning to produce more natural and accurate translations.


Question 5

A travel application needs to detect the source language of user input before translating it. Can Azure Translator support this requirement?

A. No, language detection requires Azure Language Service
B. Yes, language detection is built into Azure Translator
C. Only if custom models are trained
D. Only for speech input

Correct Answer: B. Yes, language detection is built into Azure Translator

Explanation:
Azure Translator can automatically detect the source language of text before translating it, which is a common real-world scenario.


Question 6

Which of the following is a common use case for translation in Azure?

A. Voice-controlled virtual assistants
B. Multilingual customer support chatbots
C. Facial recognition systems
D. Predictive maintenance systems

Correct Answer: B. Multilingual customer support chatbots

Explanation:
Translation enables chatbots and support systems to communicate with users in multiple languages, improving global accessibility.


Question 7

A company needs consistent translation for industry-specific terminology (for example, legal or medical terms). What Azure Translator feature helps with this?

A. Language detection
B. Speech synthesis
C. Custom Translator
D. Sentiment scoring

Correct Answer: C. Custom Translator

Explanation:
Custom Translator allows organizations to train translation models using their own terminology, improving accuracy for specialized domains.


Question 8

Which input format is supported by Azure Translator?

A. Text only
B. Audio only
C. Text and images
D. Text only (speech requires another service)

Correct Answer: D. Text only (speech requires another service)

Explanation:
Azure Translator works with text input. For speech-to-speech translation, Azure Speech Service is used in combination with translation.


Question 9

Which Azure service would you combine with Azure Translator to build a speech-to-speech translation application?

A. Azure Vision Service
B. Azure Speech Service
C. Azure Language Service
D. Azure Bot Service only

Correct Answer: B. Azure Speech Service

Explanation:
Speech-to-speech translation requires speech recognition (speech-to-text) and speech synthesis (text-to-speech), which are handled by Azure Speech Service, alongside translation.


Question 10

Why is translation considered a core Natural Language Processing (NLP) workload?

A. It analyzes numerical data patterns
B. It processes and understands human language
C. It detects objects in images
D. It forecasts future values

Correct Answer: B. It processes and understands human language

Explanation:
Translation involves understanding and generating human language, making it a foundational NLP workload alongside sentiment analysis, entity recognition, and language modeling.


Go to the AI-900 Exam Prep Hub main page.