Tag: Microsoft Certified: Azure AI Fundamentals

Practice Questions: Identify Features of Object Detection Solutions (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A city wants to analyze traffic camera images to identify and count cars and bicycles. The solution must determine where each vehicle appears in the image. Which computer vision solution should be used?

A. Image classification
B. Image segmentation
C. Object detection
D. Facial recognition

Correct Answer: C

Explanation:
Object detection identifies objects and their locations using bounding boxes, making it ideal for counting and tracking vehicles.


Question 2

Which output is characteristic of an object detection solution?

A. A single label for the entire image
B. Bounding boxes with labels and confidence scores
C. Pixel-level classification masks
D. Text extracted from images

Correct Answer: B

Explanation:
Object detection returns bounding boxes for detected objects, along with labels and confidence scores.


Question 3

Which scenario best fits object detection rather than image classification?

A. Tagging photos as indoor or outdoor
B. Determining if an image contains a dog
C. Identifying the locations of multiple people in an image
D. Categorizing images by color theme

Correct Answer: C

Explanation:
Object detection is required when identifying and locating multiple objects within an image.


Question 4

Which Azure service provides prebuilt object detection models without requiring custom training?

A. Azure Machine Learning
B. Azure AI Custom Vision
C. Azure AI Vision
D. Azure Cognitive Search

Correct Answer: C

Explanation:
Azure AI Vision offers prebuilt computer vision models, including object detection, that require no training.


Question 5

What is the main difference between object detection and image segmentation?

A. Object detection identifies pixel-level boundaries
B. Image segmentation uses bounding boxes
C. Object detection locates objects using bounding boxes
D. Image segmentation does not use machine learning

Correct Answer: C

Explanation:
Object detection locates objects using bounding boxes, while segmentation classifies each pixel in the image.


Question 6

Which requirement would make object detection the most appropriate solution?

A. Classifying images into predefined categories
B. Identifying precise pixel boundaries of objects
C. Locating and counting multiple objects in an image
D. Detecting sentiment in text

Correct Answer: C

Explanation:
Object detection is best when both identification and location of objects are required.


Question 7

A team needs to detect custom manufacturing defects in images of products. Which Azure service should they use?

A. Azure AI Vision (prebuilt models)
B. Azure AI Custom Vision with object detection
C. Azure OpenAI
D. Azure Text Analytics

Correct Answer: B

Explanation:
Azure AI Custom Vision allows training custom object detection models using labeled images with bounding boxes.


Question 8

Which phrase in an exam question most strongly indicates an object detection solution?

A. “Assign a label to the image”
B. “Extract text from the image”
C. “Identify and locate objects”
D. “Classify image sentiment”

Correct Answer: C

Explanation:
Keywords such as identify, locate, and bounding box clearly point to object detection.


Question 9

An object detection model returns a confidence score for each detected object. What does this score represent?

A. The size of the object
B. The number of objects detected
C. The model’s certainty in the prediction
D. The training accuracy of the model

Correct Answer: C

Explanation:
Confidence scores indicate how certain the model is about each detected object.


Question 10

Which statement correctly describes object detection solutions on Azure?

A. They only support single-object images
B. They cannot be used in real-time scenarios
C. They return labels and bounding boxes
D. They do not use machine learning models

Correct Answer: C

Explanation:
Object detection solutions return both object labels and bounding boxes and support real-time and batch scenarios.


Final AI-900 Exam Pointers 🎯

  • Object detection = what + where
  • Look for counting, locating, bounding boxes
  • Azure AI Vision = prebuilt detection
  • Azure AI Custom Vision = custom detection models

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Object Detection Solutions (AI-900 Exam Prep)

Overview

Object detection is a key computer vision workload tested on the AI-900 exam. It goes beyond identifying what appears in an image by also determining where those objects are located. Object detection solutions analyze images (or video frames) and return labels, bounding boxes, and confidence scores.

On the AI-900 exam, you must be able to:

  • Recognize object detection scenarios
  • Distinguish object detection from image classification and image segmentation
  • Identify Azure services that support object detection

What Is Object Detection?

Object detection is a computer vision technique that:

  • Identifies multiple objects in an image
  • Assigns labels to each object
  • Returns bounding boxes showing object locations

It answers the question:

“What objects are in this image, and where are they?”


Key Characteristics of Object Detection

1. Bounding Boxes

  • Objects are located using rectangular boxes
  • Each bounding box defines:
    • Position (x, y coordinates)
    • Size (width and height)

This is the clearest differentiator from image classification.
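
The (x, y, width, height) format above can be sketched with a small helper. The tuple layout here is illustrative, not the exact schema of any Azure API response:

```python
# Illustrative bounding-box representation: position (x, y) plus size
# (width, height). The tuple layout is hypothetical, not an exact Azure schema.

def boxes_overlap(a, b):
    """Return True if two boxes, given as (x, y, width, height), intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

car = (50, 40, 100, 60)      # x, y, width, height
bicycle = (120, 70, 40, 40)  # partially overlaps the car box
person = (300, 10, 30, 80)   # far away, no overlap

print(boxes_overlap(car, bicycle))  # True
print(boxes_overlap(car, person))   # False
```

Overlap checks like this are a common post-processing step, for example when merging duplicate detections of the same object.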


2. Multiple Objects per Image

Object detection can:

  • Detect multiple objects
  • Identify different object types in the same image

Example:

  • Person
  • Bicycle
  • Car

Each with its own bounding box.


3. Labels with Confidence Scores

For each detected object, the solution returns:

  • A label (for example, Car)
  • A confidence score indicating prediction certainty
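
Putting labels, boxes, and confidence scores together: a minimal sketch of consuming detection output, filtering out low-certainty predictions, and counting objects per label. The dictionary structure is a simplified stand-in for a real service response, not an exact schema.

```python
# Sketch of post-processing object detection results: keep confident
# detections, then count objects per label (e.g., counting vehicles).
# The result structure below is illustrative, not an exact Azure schema.
from collections import Counter

detections = [
    {"label": "car", "confidence": 0.94, "box": (10, 20, 80, 40)},
    {"label": "car", "confidence": 0.61, "box": (200, 30, 70, 35)},
    {"label": "bicycle", "confidence": 0.88, "box": (120, 60, 30, 50)},
    {"label": "car", "confidence": 0.32, "box": (5, 5, 40, 20)},  # likely noise
]

THRESHOLD = 0.5  # discard low-certainty predictions

confident = [d for d in detections if d["confidence"] >= THRESHOLD]
counts = Counter(d["label"] for d in confident)

print(counts["car"], counts["bicycle"])  # 2 1
```

Choosing the confidence threshold is a trade-off: a higher value reduces false positives but may miss real objects.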

4. Real-Time and Batch Use

Object detection can be used for:

  • Real-time scenarios (video feeds, camera streams)
  • Batch processing (analyzing stored images)

Common Object Detection Scenarios

Object detection is appropriate when location matters.

Typical Use Cases

  • Counting people or vehicles
  • Security and surveillance
  • Retail analytics (products on shelves)
  • Traffic monitoring
  • Autonomous systems (identifying obstacles)

Object Detection vs Image Classification

Understanding this difference is critical for AI-900.

Feature                        Image Classification    Object Detection
Labels the entire image        Yes                     No
Identifies object locations    No                      Yes
Uses bounding boxes            No                      Yes
Detects multiple objects       No                      Yes

Exam Tip:
If a question mentions “count,” “locate,” “draw boxes,” or “find all,” object detection is the correct choice.


Azure Services for Object Detection

Azure AI Vision (Prebuilt Models)

  • Provides ready-to-use object detection
  • Detects common objects
  • No training required
  • Accessible via REST APIs

Azure AI Custom Vision

  • Supports custom object detection models
  • Requires:
    • Labeled images
    • Bounding box annotations
  • Ideal for domain-specific objects

Features of Object Detection Solutions on Azure

Cloud-Based Inference

  • Runs in Azure
  • Scales automatically
  • Accessible via APIs

Custom vs Prebuilt Models

  • Prebuilt models for general use
  • Custom models for specialized scenarios

Integration with Applications

  • Can be embedded into:
    • Web apps
    • Mobile apps
    • IoT solutions
  • Often used with camera feeds or uploaded images

When to Use Object Detection

Use object detection when:

  • You need to find and locate objects
  • Multiple objects may exist
  • You need counts or spatial awareness

When Not to Use It

  • When only overall image labels are required
  • When pixel-level accuracy is needed (segmentation)

Responsible AI Considerations

At a high level, AI-900 expects awareness of:

  • Bias in training images
  • Privacy when detecting people
  • Transparency in how results are used

Key Exam Takeaways

  • Object detection identifies what and where
  • Uses bounding boxes + labels
  • Supports multiple objects per image
  • Azure AI Vision = prebuilt
  • Azure AI Custom Vision = custom models
  • Watch for keywords: detect, locate, count, bounding box

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features of Optical Character Recognition (OCR) Solutions (AI-900 Exam Prep)

Practice Questions


Question 1

A company wants to convert scanned paper documents into searchable digital text. Which computer vision solution should be used?

A. Image classification
B. Object detection
C. Optical character recognition (OCR)
D. Image segmentation

Correct Answer: C

Explanation:
OCR extracts text from images and scanned documents, converting it into machine-readable text.


Question 2

Which output is typically produced by an OCR solution?

A. Image labels with confidence scores
B. Bounding boxes around detected objects
C. Extracted text and its location in the image
D. Pixel-level image masks

Correct Answer: C

Explanation:
OCR outputs recognized text along with positional information, often as bounding boxes.


Question 3

Which scenario is the best fit for OCR?

A. Counting vehicles in traffic images
B. Categorizing images as indoor or outdoor
C. Extracting invoice numbers from scanned receipts
D. Detecting faces in photos

Correct Answer: C

Explanation:
OCR is designed to extract text, such as invoice numbers, from images or documents.


Question 4

Which Azure service provides prebuilt OCR capabilities without requiring model training?

A. Azure AI Vision
B. Azure Machine Learning
C. Azure AI Custom Vision
D. Azure OpenAI

Correct Answer: A

Explanation:
Azure AI Vision includes prebuilt OCR features that can recognize text in images and documents.


Question 5

What is a key difference between OCR and object detection?

A. OCR identifies object locations
B. Object detection extracts text
C. OCR converts visual text into machine-readable text
D. Object detection does not use machine learning

Correct Answer: C

Explanation:
OCR focuses on extracting and converting text, while object detection identifies and locates objects.


Question 6

Which type of text can OCR solutions typically recognize?

A. Printed text only
B. Handwritten text only
C. Printed and handwritten text
D. Spoken language

Correct Answer: C

Explanation:
Modern OCR solutions can recognize both printed and handwritten text, though accuracy may vary.


Question 7

Which Azure service builds on OCR to extract structured information from forms and documents?

A. Azure AI Vision
B. Azure AI Document Intelligence
C. Azure Cognitive Search
D. Azure Machine Learning

Correct Answer: B

Explanation:
Azure AI Document Intelligence extends OCR capabilities to analyze forms, invoices, and receipts.


Question 8

Which phrase in an exam question most strongly indicates an OCR solution?

A. “Classify images by category”
B. “Detect and locate objects”
C. “Extract text from scanned documents”
D. “Analyze facial expressions”

Correct Answer: C

Explanation:
Keywords such as extract text, recognize text, or scan documents point directly to OCR.


Question 9

What responsible AI consideration is most relevant when using OCR on documents?

A. Object bias
B. Data privacy and security
C. Bounding box accuracy
D. Image resolution

Correct Answer: B

Explanation:
OCR often processes documents containing sensitive personal or business information, making privacy and security critical.


Question 10

Which statement correctly describes OCR solutions on Azure?

A. They only work with handwritten documents
B. They require custom training for every use case
C. They convert images of text into digital text
D. They are used to detect objects in images

Correct Answer: C

Explanation:
OCR solutions convert visual representations of text into machine-readable digital text.


Final AI-900 Exam Pointers

  • OCR = read text from images
  • Look for keywords: scan, read, extract text, digitize
  • Azure AI Vision = prebuilt OCR
  • Azure AI Document Intelligence = structured document extraction

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Optical Character Recognition (OCR) Solutions (AI-900 Exam Prep)

Overview

Optical Character Recognition (OCR) is a core computer vision workload tested on the AI-900 exam. OCR solutions are designed to extract printed or handwritten text from images and documents and convert it into machine-readable text.

On the AI-900 exam, you are expected to:

  • Recognize OCR use cases
  • Understand what OCR does and does not do
  • Identify Azure services that provide OCR capabilities

What Is Optical Character Recognition (OCR)?

OCR is a computer vision technique that:

  • Detects text within images
  • Extracts characters, words, and lines
  • Converts visual text into digital text

It answers the question:

“What text appears in this image or document?”


Key Characteristics of OCR Solutions

1. Text Extraction

OCR solutions can extract:

  • Printed text
  • Handwritten text (depending on the service)
  • Numbers, symbols, and punctuation

The output is searchable and editable text.


2. Language Support

OCR solutions typically:

  • Support multiple languages
  • Automatically detect language in many cases

This is important for global document processing scenarios.


3. Layout and Structure Awareness

Advanced OCR solutions can identify:

  • Lines and paragraphs
  • Tables
  • Forms
  • Key-value pairs

This enables downstream document processing and automation.
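
As a toy sketch of the key-value pair idea, splitting OCR'd lines on the first colon can recover simple fields. Real document-intelligence services use trained layout models; this only illustrates the concept.

```python
# Toy key-value extraction from OCR'd lines: split each line on the first
# colon. Real services (e.g., document analysis) use layout models instead.

lines = ["Invoice Number: 1042", "Date: 2024-03-01", "Total: $99.00"]

fields = {}
for line in lines:
    key, _, value = line.partition(":")  # split on the first colon only
    fields[key.strip()] = value.strip()

print(fields["Invoice Number"])  # 1042
```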


4. Bounding Boxes for Text

OCR can return:

  • Extracted text
  • Bounding boxes showing where text appears

This allows applications to highlight or validate text locations.


5. Image and Document Input

OCR works with:

  • Images (JPG, PNG)
  • Scanned documents
  • PDFs
  • Photos taken by mobile devices

Common OCR Scenarios

OCR is the correct solution when text extraction is the primary goal.

Typical Use Cases

  • Invoice and receipt processing
  • Digitizing scanned documents
  • License plate recognition
  • Form processing
  • Reading text from signs or labels

OCR vs Other Computer Vision Workloads

Understanding this distinction is critical for AI-900.

Task                    Primary Purpose
Image classification    Categorize entire images
Object detection        Locate and identify objects
OCR                     Extract text from images
Image segmentation      Classify pixels

Exam Tip:
If the question mentions read, extract, recognize text, or digitize documents, OCR is the correct answer.


Azure Services for OCR

Azure AI Vision (OCR Capabilities)

  • Provides prebuilt OCR models
  • Extracts printed and handwritten text
  • Supports multiple languages
  • No training required
  • Accessible via REST APIs

Azure AI Document Intelligence (formerly Form Recognizer)

  • Builds on OCR to:
    • Extract structured data
    • Analyze forms and documents
  • Commonly used for:
    • Invoices
    • Receipts
    • Business documents

Features of OCR Solutions on Azure

Prebuilt Models

  • Ready to use
  • No custom training needed
  • Ideal for common document scenarios

Scalable Cloud Processing

  • Runs in Azure
  • Handles large document volumes
  • Integrates with automation workflows

Integration with Other Services

OCR outputs are often used with:

  • Search services
  • Databases
  • Business process automation
  • AI-powered document workflows

When to Use OCR

Use OCR when:

  • Text needs to be extracted from images or documents
  • Manual data entry must be reduced
  • Documents need to be searchable

When Not to Use OCR

  • When identifying objects rather than text
  • When categorizing images without text extraction
  • When pixel-level image analysis is required

Responsible AI Considerations

At a fundamentals level, AI-900 expects awareness of:

  • Privacy when processing documents with personal data
  • Security of stored text and documents
  • Accuracy limitations, especially with handwritten or low-quality images

Key Exam Takeaways

  • OCR extracts text from images
  • Converts visual content into machine-readable text
  • Supports multiple languages
  • Azure AI Vision provides OCR capabilities
  • Azure AI Document Intelligence extends OCR for forms
  • Watch for keywords: read, extract, recognize text, scan

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Facial Detection and Facial Analysis Solutions (AI-900 Exam Prep)

Overview

Facial detection and facial analysis are computer vision capabilities that enable applications to locate human faces in images and extract non-identifying attributes about those faces. In Azure, these capabilities are provided through Azure AI Vision and are commonly used in scenarios such as photo moderation, accessibility tools, demographic analysis, and privacy-preserving image processing.

For the AI-900 exam, it’s critical to understand:

  • The difference between facial detection and facial analysis
  • What these solutions can and cannot do
  • Typical use cases
  • How they align with Responsible AI principles

Importantly, facial recognition (identity verification) is not part of this topic and is intentionally excluded from AI-900.


What Is Facial Detection?

Definition

Facial detection is the process of identifying whether human faces are present in an image and determining where they are located.

Key Features

Facial detection solutions can:

  • Detect one or more faces in an image
  • Return bounding box coordinates for each detected face
  • Identify facial landmarks (such as eyes, nose, and mouth positions)
  • Work on still images (not identity matching)

What Facial Detection Does Not Do

  • It does not identify individuals
  • It does not verify or authenticate users
  • It does not infer emotions, age, or gender

Common Use Cases

  • Blurring or masking faces for privacy
  • Counting people in images
  • Applying filters or effects to faces
  • Ensuring faces are present before further processing
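
The privacy-blurring use case above reduces to overwriting detected face regions. To stay self-contained, this sketch treats the image as a plain 2D list of pixel values; real code would use an imaging library and real detection results.

```python
# Privacy-masking sketch: given face bounding boxes from facial detection,
# black out those regions. The 2D-list "image" keeps the example
# self-contained; real code would use an imaging library.

def mask_regions(image, boxes, fill=0):
    """Overwrite each (x, y, width, height) region with a fill value."""
    for x, y, w, h in boxes:
        for row in range(y, min(y + h, len(image))):
            for col in range(x, min(x + w, len(image[0]))):
                image[row][col] = fill
    return image

image = [[255] * 6 for _ in range(4)]  # a tiny all-white "image"
faces = [(1, 1, 2, 2)]                 # one detected face region

masked = mask_regions(image, faces)
print(masked[1][1], masked[0][0])  # 0 255
```

Note that only the face *locations* are needed here; no identity information is involved, which is exactly why detection alone suffices for this scenario.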

What Is Facial Analysis?

Definition

Facial analysis builds on facial detection by extracting descriptive attributes from detected faces, without identifying who the person is.

Key Features

Facial analysis solutions can infer attributes such as:

  • Estimated age range
  • Facial expression (e.g., smiling, neutral)
  • Presence of accessories (glasses, face masks)
  • Head pose and orientation
  • Facial landmarks and geometry

These features help applications understand facial characteristics, not identity.
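
A facial-analysis result can be pictured as per-face attributes with no identity field. The attribute names below are hypothetical, not an exact Azure schema:

```python
# Illustrative facial-analysis output: descriptive attributes per detected
# face, with no identity information. Attribute names are hypothetical.

faces = [
    {"box": (30, 20, 50, 50), "age_range": (25, 32), "smiling": True,  "glasses": False},
    {"box": (120, 25, 48, 48), "age_range": (40, 50), "smiling": False, "glasses": True},
]

smiling_count = sum(1 for f in faces if f["smiling"])
print(smiling_count, len(faces))  # 1 2
```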


Facial Detection vs Facial Analysis

Feature                          Facial Detection    Facial Analysis
Detects faces in images          Yes                 Yes
Returns face location            Yes                 Yes
Estimates age or expression      No                  Yes
Identifies individuals           No                  No
Requires model training          No                  No
Uses prebuilt Azure AI models    Yes                 Yes

Azure Services Used

For AI-900 purposes, these capabilities are delivered through:

Azure AI Vision

  • Prebuilt computer vision models
  • REST APIs and SDKs
  • Supports image-based facial detection and analysis
  • No machine learning expertise required

Candidates should recognize that custom model training is not required for facial detection or analysis in Azure.


Responsible AI and Facial Technologies

Microsoft places strong emphasis on Responsible AI, particularly for facial technologies due to their sensitive nature.

Key Responsible AI Principles Applied

  • Privacy & Security: Facial data is biometric information
  • Transparency: Users should understand how facial data is used
  • Fairness: Models should avoid bias across demographics
  • Accountability: Clear governance and consent are required

Exam Tip

Expect questions that test:

  • Awareness of ethical considerations
  • Understanding of appropriate vs inappropriate use cases
  • Clear distinction between analysis and identification

What AI-900 Explicitly Does NOT Cover

To avoid common exam traps, remember:

  • Facial recognition (identity matching) is not included
  • Authentication and surveillance scenarios are out of scope
  • Custom face datasets are not required
  • Training facial models from scratch is not tested

Typical AI-900 Exam Scenarios

You may be asked to identify which capability to use when:

  • Blurring faces for privacy → Facial detection
  • Estimating whether people are smiling → Facial analysis
  • Counting faces in a photo → Facial detection
  • Inferring accessories like glasses → Facial analysis

Key Takeaways for the Exam

  • Facial detection answers “Where are the faces?”
  • Facial analysis answers “What attributes do these faces have?”
  • Neither identifies who a person is
  • Both are prebuilt Azure AI Vision capabilities
  • Responsible AI considerations matter and are always relevant

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify features of facial detection and facial analysis solutions (AI-900 Exam Prep)

Practice Questions


Question 1

You need to determine whether an image contains one or more human faces and identify where those faces are located.
Which computer vision capability should you use?

A. Image classification
B. Object detection
C. Facial detection
D. Facial recognition

Correct Answer: C

Explanation:
Facial detection is designed to identify the presence and location of faces in an image using bounding boxes. It does not identify individuals, which rules out facial recognition.


Question 2

Which output is typically returned by a facial detection solution?

A. Person’s name
B. Bounding box coordinates of faces
C. Sentiment score
D. Object category labels

Correct Answer: B

Explanation:
Facial detection returns the location of detected faces, usually as bounding boxes or facial landmarks. It does not return identity or sentiment.


Question 3

An application estimates whether people in a photo are smiling and whether they are wearing glasses.
Which capability is being used?

A. Image classification
B. Facial recognition
C. Facial analysis
D. Object detection

Correct Answer: C

Explanation:
Facial analysis extracts descriptive attributes such as facial expressions and accessories. Facial recognition would attempt to identify individuals, which is not required here.


Question 4

Which statement best describes the difference between facial detection and facial analysis?

A. Facial detection identifies people; facial analysis detects faces
B. Facial detection finds faces; facial analysis extracts attributes
C. Facial detection requires training; facial analysis does not
D. Facial analysis works only on video

Correct Answer: B

Explanation:
Facial detection locates faces, while facial analysis builds on detection by inferring attributes such as age estimates or expressions.


Question 5

Which Azure service provides prebuilt facial detection and facial analysis capabilities?

A. Azure Machine Learning
B. Azure Custom Vision
C. Azure AI Vision
D. Azure OpenAI Service

Correct Answer: C

Explanation:
Azure AI Vision provides prebuilt APIs for facial detection and analysis without requiring custom model training.


Question 6

A company wants to blur all faces in uploaded images to protect user privacy.
Which capability should be used?

A. Facial recognition
B. Facial analysis
C. Facial detection
D. Image classification

Correct Answer: C

Explanation:
Facial detection identifies the location of faces, which allows the application to blur or mask them without identifying individuals.


Question 7

Which of the following is NOT a capability of facial analysis?

A. Estimating age range
B. Detecting facial landmarks
C. Identifying a person by name
D. Detecting facial expressions

Correct Answer: C

Explanation:
Facial analysis does not identify individuals. Identifying a person by name would require facial recognition, which is outside the scope of AI-900.


Question 8

Why are facial detection and facial analysis considered sensitive AI capabilities?

A. They require expensive hardware
B. They always identify individuals
C. They involve biometric data and privacy concerns
D. They only work in controlled environments

Correct Answer: C

Explanation:
Facial data is biometric information, so its use raises privacy, fairness, and transparency concerns addressed by Responsible AI principles.


Question 9

Which Responsible AI principle is most directly related to ensuring users understand how facial data is being used?

A. Reliability and safety
B. Transparency
C. Performance optimization
D. Scalability

Correct Answer: B

Explanation:
Transparency ensures that users are informed about how facial detection or analysis systems work and how their data is processed.


Question 10

An exam question asks which scenario is appropriate for facial analysis.
Which option should you choose?

A. Authenticating a user for secure login
B. Matching a face to a passport database
C. Determining whether people in an image are smiling
D. Tracking individuals across multiple cameras

Correct Answer: C

Explanation:
Facial analysis is suitable for extracting non-identifying attributes such as expressions. Authentication, identity matching, and tracking involve facial recognition and are not covered in AI-900.


Exam Tips Recap

  • Responsible AI considerations are fair game on the exam
  • Facial detection → “Where are the faces?”
  • Facial analysis → “What attributes do the faces have?”
  • Neither identifies individuals; identity recognition is not part of AI-900 facial analysis
  • Azure uses prebuilt AI Vision models
  • Watch for privacy and ethics–based questions

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe capabilities of the Azure AI Vision service (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A company wants to automatically generate short descriptions such as “A group of people standing on a beach” for images uploaded to its website. No model training is required.

Which Azure service should be used?

A. Azure Machine Learning
B. Azure AI Vision image analysis
C. Azure Custom Vision
D. Azure OpenAI Service

Correct Answer: B

Explanation:
Azure AI Vision image analysis can generate natural language descriptions of images using prebuilt models. Azure Machine Learning and Custom Vision require training, and Azure OpenAI is not designed for image analysis tasks.


Question 2

Which Azure AI Vision capability extracts printed and handwritten text from scanned documents and images?

A. Image tagging
B. Object detection
C. Optical Character Recognition (OCR)
D. Facial analysis

Correct Answer: C

Explanation:
OCR is specifically designed to detect and extract text from images, including scanned documents and handwritten content.


Question 3

A developer needs to identify objects in an image and return their locations using bounding boxes.

Which Azure AI Vision feature should be used?

A. Image classification
B. Image tagging
C. Object detection
D. Image description

Correct Answer: C

Explanation:
Object detection identifies what objects are present and where they are located using bounding boxes and confidence scores.


Question 4

Which capability of Azure AI Vision can detect faces and return attributes such as estimated age and facial expression?

A. Facial recognition
B. Facial detection and facial analysis
C. Image classification
D. Custom Vision

Correct Answer: B

Explanation:
Azure AI Vision supports facial detection and analysis, which provides facial attributes but does not identify individuals.


Question 5

A solution must automatically assign keywords like “outdoor”, “food”, or “animal” to images for search and organization.

Which Azure AI Vision feature meets this requirement?

A. OCR
B. Object detection
C. Image tagging
D. Facial analysis

Correct Answer: C

Explanation:
Image tagging assigns descriptive labels to images to improve categorization and searchability.


Question 6

Which statement best describes Azure AI Vision?

A. It requires training a custom model for each scenario
B. It provides prebuilt computer vision capabilities through APIs
C. It is only used for facial recognition
D. It can only analyze video streams

Correct Answer: B

Explanation:
Azure AI Vision offers prebuilt computer vision models accessed via APIs, requiring no model training.


Question 7

A company wants to analyze images quickly without building or training a machine learning model.

Which Azure service is most appropriate?

A. Azure Machine Learning
B. Azure Custom Vision
C. Azure AI Vision
D. Azure Databricks

Correct Answer: C

Explanation:
Azure AI Vision is designed for quick deployment using prebuilt models, making it ideal when no custom training is required.


Question 8

Which task is NOT a capability of Azure AI Vision?

A. Detecting objects in an image
B. Extracting text from images
C. Identifying specific individuals in photos
D. Generating image descriptions

Correct Answer: C

Explanation:
Azure AI Vision does not identify individuals. Facial recognition and identity verification are restricted and not required for AI-900.


Question 9

A scenario mentions analyzing images while following Microsoft’s Responsible AI principles, particularly around privacy and fairness.

Which Azure AI Vision feature is most closely associated with these considerations?

A. Image tagging
B. Facial detection and analysis
C. OCR
D. Object detection

Correct Answer: B

Explanation:
Facial detection and analysis involve human data and are closely tied to privacy, fairness, and transparency considerations.


Question 10

When should Azure AI Vision be used instead of Azure Custom Vision?

A. When you need a highly specialized image classification model
B. When you want full control over training data
C. When you need prebuilt image analysis without training
D. When labeling thousands of custom images

Correct Answer: C

Explanation:
Azure AI Vision is ideal for prebuilt, general-purpose image analysis scenarios. Custom Vision is used when custom training is required.


Final Exam Tips for This Topic

  • Think prebuilt vs custom
  • Azure AI Vision = no training
  • OCR = text extraction
  • Object detection = what + where
  • Facial analysis ≠ facial recognition

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe Capabilities of the Azure AI Face Detection Service (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A company wants to detect whether human faces appear in uploaded images and draw bounding boxes around them. The solution must not identify individuals.

Which Azure service should be used?

A. Azure Custom Vision
B. Azure AI Vision image classification
C. Azure AI Face detection
D. Azure OpenAI Service

Correct Answer: C

Explanation:
Azure AI Face detection is designed to detect faces and return their locations without identifying individuals. This aligns with privacy requirements and AI-900 expectations.


Question 2

Which task is supported by Azure AI Face detection?

A. Verifying a person’s identity against a database
B. Detecting the presence of human faces in an image
C. Training a custom facial recognition model
D. Authenticating users using facial biometrics

Correct Answer: B

Explanation:
Azure AI Face detection can detect faces and analyze facial attributes, but it does not perform identity verification or authentication.


Question 3

What type of information can Azure AI Face detection return for each detected face?

A. Person’s name and ID
B. Bounding box and facial attributes
C. Social media profile matches
D. Voice and speech characteristics

Correct Answer: B

Explanation:
The service returns face location (bounding box) and facial attributes such as estimated age or expression, not personal identity data.


Question 4

A scenario requires estimating whether people in an image appear to be smiling.

Which Azure AI Face detection capability supports this requirement?

A. Face identification
B. Facial attribute analysis
C. Image classification
D. Object detection

Correct Answer: B

Explanation:
Facial attribute analysis provides descriptive information such as facial expression, including whether a face appears to be smiling.


Question 5

Which statement best describes Azure AI Face detection for the AI-900 exam?

A. It requires training a custom dataset
B. It identifies known individuals in photos
C. It uses prebuilt models to analyze faces
D. It can only analyze video streams

Correct Answer: C

Explanation:
Azure AI Face detection uses pretrained models and requires no custom training, which is a key exam concept.


Question 6

A developer wants to count how many people appear in a group photo.

Which Azure AI service capability should be used?

A. OCR
B. Image tagging
C. Face detection
D. Image classification

Correct Answer: C

Explanation:
Face detection can identify multiple faces in a single image, making it suitable for counting people.


Question 7

Why is Azure AI Face detection closely associated with Responsible AI principles?

A. It uses unsupervised learning
B. It processes sensitive human biometric data
C. It requires large datasets
D. It supports only public images

Correct Answer: B

Explanation:
Facial data is considered sensitive personal data, so privacy, fairness, and transparency are especially important.


Question 8

Which scenario would be inappropriate for Azure AI Face detection?

A. Detecting faces in event photos
B. Estimating facial expressions
C. Identifying a person by name from an image
D. Drawing bounding boxes around faces

Correct Answer: C

Explanation:
Azure AI Face detection does not identify individuals. Identity recognition is outside the scope of AI-900 and restricted for ethical reasons.


Question 9

Which principle ensures users are informed when facial analysis is being used?

A. Reliability
B. Transparency
C. Inclusiveness
D. Sustainability

Correct Answer: B

Explanation:
Transparency requires that people understand when and how AI systems, such as facial detection, are being used.


Question 10

When comparing Azure AI Face detection with object detection, which statement is correct?

A. Object detection returns facial attributes
B. Face detection identifies any object in an image
C. Face detection focuses specifically on human faces
D. Both services identify individuals

Correct Answer: C

Explanation:
Face detection is specialized for human faces, while object detection identifies general objects like cars, animals, or furniture.


Exam Tip Recap 🔑

  • Face detection ≠ face recognition
  • Detects faces, locations, and attributes
  • Uses prebuilt models
  • Strong ties to Responsible AI

Go to the AI-900 Exam Prep Hub main page.

Describe Capabilities of the Azure AI Face Detection Service (AI-900 Exam Prep)

Overview

The Azure AI Face Detection service (part of Azure AI Vision) provides prebuilt computer vision capabilities to detect human faces in images and return structured information about those faces. For the AI-900: Microsoft Azure AI Fundamentals exam, the focus is on understanding what the service can do, what it cannot do, and how it aligns with Responsible AI principles.

This service uses pretrained models and can be accessed through REST APIs or SDKs without building or training a custom machine learning model.
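As a hedged illustration, such a REST call can be sketched as below. The endpoint, key, and image URL are placeholders, and the parameter names follow the classic Face API v1.0 request shape, which may differ in newer API versions; verify against the current documentation before use.

```python
# Sketch: assembling a Face detect REST request (no network call is made).
# Endpoint and key are placeholders; parameter names follow the classic
# Face API v1.0 shape and may differ in newer versions.

def build_detect_request(endpoint: str, key: str, image_url: str) -> dict:
    """Assemble the pieces of a face-detection REST call."""
    return {
        "url": f"{endpoint}/face/v1.0/detect",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,            # Azure resource key
            "Content-Type": "application/json",
        },
        "params": {
            "returnFaceAttributes": "headPose,glasses",  # attributes to analyze
            "detectionModel": "detection_03",            # prebuilt model, no training
        },
        "body": {"url": image_url},                      # image to analyze
    }

req = build_detect_request(
    "https://my-resource.cognitiveservices.azure.com",  # hypothetical endpoint
    "<api-key>",
    "https://example.com/group-photo.jpg",              # hypothetical image
)
```

An actual call would POST `body` to `url` with the given headers and query parameters, for example via the `requests` library or one of the Azure SDKs.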


What Is Face Detection (at the AI-900 level)?

Face detection answers the question:

“Is there a human face in this image, and what are its characteristics?”

It does not answer:

“Who is this person?”

This distinction is critical for the AI-900 exam.


Core Capabilities of Azure AI Face Detection

1. Face Detection

The service can:

  • Detect one or more human faces in an image
  • Return the location of each face using bounding boxes
  • Assign a confidence score to each detected face

This capability is commonly used for:

  • Photo moderation
  • Counting people in images
  • Identifying whether faces are present at all
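
The capabilities above can be illustrated with a short sketch. The detect response is a JSON array with one entry per detected face; the `faceRectangle` fields below follow the Face API response shape, while the values themselves are invented for illustration.

```python
# Sketch: counting faces and reading bounding boxes from a detect response.
# `sample_response` mimics the JSON array returned by a Face detect call
# (faceRectangle field names are real; the values are made up).

sample_response = [
    {"faceId": "a1", "faceRectangle": {"top": 54, "left": 120, "width": 90, "height": 90}},
    {"faceId": "b2", "faceRectangle": {"top": 60, "left": 310, "width": 85, "height": 88}},
]

face_count = len(sample_response)  # one entry per detected face

boxes = [
    (f["faceRectangle"]["left"], f["faceRectangle"]["top"],
     f["faceRectangle"]["width"], f["faceRectangle"]["height"])
    for f in sample_response
]
print(face_count)  # 2
```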

2. Facial Attribute Analysis

For each detected face, the service can analyze and return attributes such as:

  • Estimated age range
  • Facial expression (for example, neutral or smiling)
  • Head pose (orientation of the face)
  • Glasses or accessories
  • Hair-related attributes

These attributes are descriptive and probabilistic, not definitive.
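
A minimal sketch of reading these attributes, assuming the classic `faceAttributes` schema; the 0.5 smile threshold below is an arbitrary illustration, not a service default.

```python
# Sketch: reading facial attributes from one detected face.
# The attribute names (smile, glasses, headPose) follow the classic Face
# attribute schema; the scores are probabilistic estimates, not facts.

face = {
    "faceRectangle": {"top": 54, "left": 120, "width": 90, "height": 90},
    "faceAttributes": {
        "smile": 0.92,                  # 0.0-1.0 likelihood of smiling
        "glasses": "NoGlasses",
        "headPose": {"roll": 1.2, "yaw": -4.0, "pitch": 0.0},
    },
}

attrs = face["faceAttributes"]
appears_smiling = attrs["smile"] > 0.5  # illustrative threshold, app-specific
print(appears_smiling)  # True
```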


3. Multiple Face Detection

Azure AI Face Detection can:

  • Detect multiple faces in a single image
  • Return attributes for each detected face independently

This is useful in scenarios like:

  • Group photos
  • Crowd analysis
  • Event imagery

What Azure AI Face Detection Does NOT Do

Understanding limitations is frequently tested on AI-900.

The service does NOT:

  • Identify or verify individuals
  • Perform facial recognition for authentication
  • Match faces against a database of known people

Any functionality related to identity recognition falls outside the scope of AI-900 and is intentionally restricted due to privacy and ethical considerations.


Responsible AI Considerations

Facial analysis involves human biometric data, so Microsoft strongly emphasizes Responsible AI principles.

Key considerations include:

  • Privacy: Faces are sensitive personal data
  • Fairness: Models must work consistently across different demographics
  • Transparency: Users should be informed when facial analysis is used
  • Accountability: Humans remain responsible for how outputs are used

For AI-900, you are expected to recognize that facial detection requires extra care compared to other vision tasks like object detection or OCR.


Common AI-900 Exam Scenarios

You may see questions that describe:

  • Detecting whether people appear in an image
  • Returning bounding boxes around faces
  • Analyzing facial attributes without identifying individuals

Correct answers will typically reference:

  • Azure AI Face Detection
  • Prebuilt models
  • No custom training required

Azure AI Face Detection vs Other Vision Capabilities

  • Image classification: assigns a single label to an image
  • Object detection: identifies objects and their locations
  • OCR: extracts text from images
  • Face detection: detects faces and analyzes attributes

Key Takeaways for the AI-900 Exam

  • Azure AI Face Detection detects faces, not identities
  • It returns locations and attributes, not names
  • It uses pretrained models with no training required
  • Facial analysis requires Responsible AI awareness

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features and Uses for Key Phrase Extraction (AI-900 Exam Prep)

Practice Questions


Question 1

A company wants to automatically identify the main topics discussed in thousands of customer reviews without determining whether the reviews are positive or negative.

Which NLP capability should be used?

A. Sentiment analysis
B. Language detection
C. Key phrase extraction
D. Entity recognition

Correct Answer: C

Explanation:
Key phrase extraction identifies important topics and concepts in text without analyzing emotional tone, making it ideal for summarizing review content.


Question 2

Which output is most likely returned by a key phrase extraction service?

A. A sentiment score between –1 and 1
B. A list of important words or short phrases
C. A detected language code
D. A classification label

Correct Answer: B

Explanation:
Key phrase extraction returns a list of relevant words or phrases that summarize the main ideas of the text.


Question 3

Which Azure service provides key phrase extraction using prebuilt models?

A. Azure Machine Learning
B. Azure AI Vision
C. Azure AI Language
D. Azure Cognitive Search

Correct Answer: C

Explanation:
Key phrase extraction is part of Azure AI Language, which offers prebuilt NLP models accessible via APIs.


Question 4

A support team wants to automatically tag incoming support tickets with topics such as billing, login issues, or performance.

Which NLP capability should they use?

A. Named entity recognition
B. Key phrase extraction
C. Sentiment analysis
D. Speech-to-text

Correct Answer: B

Explanation:
Key phrase extraction identifies important topics in unstructured text, making it suitable for tagging and categorization.


Question 5

Which scenario is NOT a typical use of key phrase extraction?

A. Summarizing the main topics of documents
B. Improving document search and indexing
C. Detecting the emotional tone of text
D. Identifying trending discussion topics

Correct Answer: C

Explanation:
Detecting emotional tone is handled by sentiment analysis, not key phrase extraction.


Question 6

Which statement best describes key phrase extraction for the AI-900 exam?

A. It requires labeled training data
B. It extracts names and dates only
C. It uses pretrained models on unstructured text
D. It classifies text into predefined categories

Correct Answer: C

Explanation:
Key phrase extraction uses pretrained NLP models and works directly on unstructured text without training.


Question 7

A multinational company wants to extract key topics from documents written in multiple languages.

Which feature of Azure AI Language supports this requirement?

A. Custom model training
B. Multi-language support
C. Facial recognition
D. Object detection

Correct Answer: B

Explanation:
Azure AI Language supports multiple languages for key phrase extraction, enabling global text analysis.


Question 8

Which NLP capability focuses on identifying specific items such as names, locations, and dates?

A. Key phrase extraction
B. Sentiment analysis
C. Language detection
D. Entity recognition

Correct Answer: D

Explanation:
Entity recognition extracts specific entities, while key phrase extraction focuses on main topics and concepts.


Question 9

A business wants to quickly understand what large volumes of text are about, without reading every document.

Which benefit of key phrase extraction addresses this need?

A. Emotion detection
B. Automatic topic identification
C. Speech recognition
D. Image analysis

Correct Answer: B

Explanation:
Key phrase extraction automatically identifies important topics, allowing rapid understanding of large text collections.


Question 10

Which responsible AI consideration is most relevant when using key phrase extraction?

A. Identity verification
B. Avoiding misinterpretation of extracted phrases
C. Biometric data protection
D. Facial bias detection

Correct Answer: B

Explanation:
Key phrase extraction outputs are contextual summaries, so users must avoid treating them as definitive conclusions.


Exam Tip Recap 🔑

  • Key phrase extraction = What is this text about?
  • It does not analyze sentiment
  • Uses prebuilt models in Azure AI Language
  • Often paired with search, tagging, and trend analysis
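
As a hedged sketch of what the prebuilt service returns, the payload below mirrors the shape of a key phrase extraction result from the Azure AI Language analyze-text API; the document ids and phrases are invented for illustration.

```python
# Sketch: reading key phrases from an Azure AI Language analyze-text
# response. The payload mirrors the KeyPhraseExtraction result shape;
# the ids and phrases here are invented for illustration.

sample_response = {
    "kind": "KeyPhraseExtractionResults",
    "results": {
        "documents": [
            {"id": "1", "keyPhrases": ["billing issue", "mobile app", "monthly invoice"]},
            {"id": "2", "keyPhrases": ["login error", "password reset"]},
        ],
        "errors": [],
    },
}

# Collect phrases per document id -- useful for tagging or indexing.
phrases_by_doc = {
    doc["id"]: doc["keyPhrases"]
    for doc in sample_response["results"]["documents"]
}
print(phrases_by_doc["2"])  # ['login error', 'password reset']
```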


Go to the AI-900 Exam Prep Hub main page.