Category: AI-900

Describe Model Management and Deployment Capabilities in Azure Machine Learning (AI-900 Exam Prep)

Where this fits in the exam

  • Exam domain: Describe fundamental principles of machine learning on Azure (15–20%)
  • Sub-area: Describe Azure Machine Learning capabilities
  • Skill level: Conceptual understanding (no deep implementation details)

For AI-900, Microsoft expects you to understand what Azure Machine Learning can do for managing and deploying models — not how to write code or configure infrastructure in detail.


What Is Model Management in Azure Machine Learning?

Model management refers to how machine learning models are:

  • Stored
  • Versioned
  • Tracked
  • Prepared for deployment

Azure Machine Learning provides built-in tools to manage the entire model lifecycle, from training to production.


Key Model Management Capabilities

1. Model Registration

After a model is trained, it can be registered in Azure Machine Learning.

What model registration provides:

  • Centralized model storage
  • Model versioning
  • Metadata tracking (name, version, description)
  • Easy reuse across experiments and deployments

📌 Exam tip:
Registration allows multiple versions of the same model to be stored and compared.


2. Model Versioning

Azure Machine Learning automatically assigns versions to registered models.

Why this matters:

  • Compare performance between model versions
  • Roll back to a previous version if a newer model performs poorly
  • Support continuous improvement and experimentation

📌 AI-900 focus:
You only need to know that versioning exists and why it’s useful, not how to configure it.
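The registration-plus-versioning idea can be sketched with a tiny in-memory registry. This is illustrative only: the real Azure ML registry is a managed service, and the class and method names below are hypothetical.

```python
class ModelRegistry:
    """Toy stand-in for a model registry: stores entries and auto-assigns versions."""

    def __init__(self):
        self._models = {}  # name -> list of {"version", "path", "description"}

    def register(self, name, path, description=""):
        versions = self._models.setdefault(name, [])
        # New registrations of the same name get the next version number,
        # so older versions remain available for comparison or rollback.
        entry = {"version": len(versions) + 1, "path": path, "description": description}
        versions.append(entry)
        return entry

    def get(self, name, version=None):
        versions = self._models[name]
        if version is None:  # default to the latest version
            return versions[-1]
        return versions[version - 1]


registry = ModelRegistry()
registry.register("churn-model", "outputs/model_v1.pkl", "baseline")
latest = registry.register("churn-model", "outputs/model_v2.pkl", "tuned")
print(latest["version"])  # 2
```

Rolling back is then just asking the registry for an earlier version number, which is the behavior the exam expects you to recognize.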


3. Experiment Tracking

Azure Machine Learning tracks:

  • Training runs
  • Parameters
  • Metrics (accuracy, error, etc.)
  • Output artifacts

This helps data scientists:

  • Compare models
  • Reproduce results
  • Understand how a model was created
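Comparing tracked runs amounts to sorting records by a logged metric. The run structure below is a minimal sketch, not the actual Azure ML run schema.

```python
# Toy run records like those an experiment tracker stores (field names are made up).
runs = [
    {"run_id": "run-01", "params": {"max_depth": 3}, "metrics": {"accuracy": 0.81}},
    {"run_id": "run-02", "params": {"max_depth": 5}, "metrics": {"accuracy": 0.86}},
    {"run_id": "run-03", "params": {"max_depth": 8}, "metrics": {"accuracy": 0.84}},
]

# "Compare models" boils down to ranking runs by a tracked metric.
best = max(runs, key=lambda r: r["metrics"]["accuracy"])
print(best["run_id"], best["params"])  # run-02 {'max_depth': 5}
```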

Model Deployment in Azure Machine Learning

Once a model is trained and registered, it can be deployed so applications can use it to make predictions.


Deployment Options in Azure Machine Learning

1. Real-Time Endpoints

Used for on-demand predictions.

Key characteristics:

  • Low-latency responses
  • Exposed via a REST API
  • Commonly used for web or application integrations

Typical compute targets:

  • Azure Kubernetes Service (AKS)
  • Azure Container Instances (ACI)

📌 Exam tip:
Real-time endpoints are used when predictions are needed immediately.
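The request/response flow of a real-time endpoint can be illustrated without touching Azure at all. The JSON field names below (`data`, `predictions`) are hypothetical; the actual schema depends on the deployed scoring script, and a real call would also need the endpoint URL and an authentication key.

```python
import json

def build_request(features):
    """Package feature rows as the JSON body a REST scoring call would send."""
    return json.dumps({"data": features})

def parse_response(body):
    """Pull predictions out of a JSON response body."""
    return json.loads(body)["predictions"]

body = build_request([[5.1, 3.5, 1.4, 0.2]])
# Pretend this came back from the endpoint (no network call is made here):
fake_response = '{"predictions": ["setosa"]}'
print(parse_response(fake_response))  # ['setosa']
```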


2. Batch Endpoints

Used for large-scale, offline predictions.

Key characteristics:

  • Processes large datasets at once
  • Not time-sensitive
  • Often scheduled or run periodically

Example use cases:

  • Scoring customer records overnight
  • Generating predictions for reports
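Batch scoring in miniature looks like this: the whole dataset is processed in one pass and the results collected for a report, rather than answering one request at a time. The "model" and field names here are stand-ins for illustration.

```python
def score(record):
    """Placeholder model: segment customers by spend."""
    return "high_value" if record["spend"] > 100 else "standard"

dataset = [
    {"customer_id": 1, "spend": 250},
    {"customer_id": 2, "spend": 40},
    {"customer_id": 3, "spend": 130},
]

# Every record is scored in a single offline pass, typically on a schedule.
results = [{"customer_id": r["customer_id"], "segment": score(r)} for r in dataset]
print(results)
```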

Managed Deployment Features

Azure Machine Learning simplifies deployment by providing:

  • Containerized deployments
    Models are packaged into containers for consistency.
  • Scaling support
    Automatically handles increasing or decreasing load.
  • Monitoring and logging
    Tracks performance and usage after deployment.

📌 AI-900 emphasis:
You should understand that Azure ML manages infrastructure complexity for you; the low-level configuration details are not tested.


Model Management vs Deployment (At a Glance)

  • Model registration: Store and organize trained models
  • Versioning: Track changes and improvements
  • Experiment tracking: Compare training runs and metrics
  • Real-time deployment: Immediate predictions via API
  • Batch deployment: Large-scale, offline predictions

Why This Matters for AI-900

For the AI-900 exam, Microsoft wants you to recognize that:

  • Azure Machine Learning supports the full ML lifecycle
  • Models can be managed, versioned, and deployed without custom infrastructure
  • Deployment can be real-time or batch, depending on the scenario

You are not expected to:

  • Write deployment scripts
  • Configure Kubernetes clusters
  • Optimize production pipelines

Key Takeaways for the Exam

  • Azure Machine Learning provides centralized model management
  • Models can be registered and versioned
  • Deployment options include real-time endpoints and batch endpoints
  • Azure ML simplifies scaling, monitoring, and management

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Image Classification Solutions (AI-900 Exam Prep)

Overview

Image classification is one of the most common computer vision workloads assessed on the AI-900 exam. It focuses on assigning one or more labels to an image based on its visual content. Unlike object detection, image classification does not identify locations within the image — it answers the question:

“What is this image?”

On the AI-900 exam, you are expected to recognize when image classification is the correct solution, understand its core features, and know which Azure services support it.


What Is Image Classification?

Image classification is a computer vision technique that analyzes an image and categorizes it into predefined classes or labels.

Key Characteristics

  • Operates on the entire image
  • Produces labels or categories
  • Does not draw bounding boxes
  • Often uses deep learning models (convolutional neural networks)

Simple Examples

  • Classifying photos as cat, dog, or bird
  • Determining whether an image contains food, landscape, or people
  • Categorizing medical images as normal or abnormal

Common Image Classification Scenarios

Image classification is appropriate when the goal is overall categorization, not detailed localization.

Typical Use Cases

  • Product categorization (e.g., retail images)
  • Content moderation (safe vs unsafe images)
  • Quality inspection (defective vs non-defective)
  • Medical imaging classification
  • Scene recognition (indoor vs outdoor)

Image Classification vs Other Computer Vision Tasks

Understanding how image classification differs from related workloads is critical for the AI-900 exam.

  • Image classification: Assigns labels to an entire image
  • Object detection: Identifies and locates objects with bounding boxes
  • Image segmentation: Classifies each pixel in an image
  • Facial recognition: Identifies or verifies people

Exam Tip:
If the question mentions counting, locating, or drawing boxes, image classification is not the correct answer.


Azure Services for Image Classification

On the AI-900 exam, Microsoft primarily expects familiarity with Azure AI Vision and Custom Vision.

Azure AI Vision (Prebuilt Models)

  • Provides ready-to-use image classification
  • Can identify:
    • Objects
    • Scenes
    • Tags
  • Requires no model training
  • Ideal for general-purpose scenarios

Azure AI Custom Vision

  • Allows you to train your own image classification model
  • Supports:
    • Custom labels
    • Domain-specific images
  • Requires labeled training data
  • Useful when prebuilt models are insufficient

Features of Image Classification Solutions

1. Label-Based Output

Image classification solutions return:

  • One or more labels
  • Confidence scores for each label

Example output:

  • Dog – 92%
  • Animal – 99%

2. Whole-Image Analysis

  • The model evaluates the entire image
  • No spatial location information is returned

This is a common AI-900 trick — don’t confuse classification with detection.


3. Confidence Scores

Predictions are typically accompanied by:

  • Probability or confidence values
  • Useful for decision-making thresholds
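Using a confidence score as a decision-making threshold can be sketched in a few lines. The labels and scores below are illustrative, not output from an actual Azure service.

```python
# Classification output as label -> confidence (illustrative values).
predictions = {"dog": 0.92, "animal": 0.99, "cat": 0.12}

THRESHOLD = 0.5  # only keep labels the model is reasonably confident about

accepted = {label: score for label, score in predictions.items() if score >= THRESHOLD}
print(accepted)  # {'dog': 0.92, 'animal': 0.99}
```

Choosing the threshold is an application decision: a content-moderation system might accept only very high scores, while a tagging system might keep lower ones.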

4. Model Training Options

Depending on the service:

  • Prebuilt models require no training
  • Custom Vision models require:
    • Labeled images
    • Training and evaluation cycles

5. Cloud-Based Inference

Azure image classification solutions:

  • Run in the cloud
  • Are accessed via REST APIs
  • Scale automatically

When to Use Image Classification

Image classification is the best choice when:

  • You only need to know what is in the image
  • Object location is not required
  • Labels are predefined or can be trained

When Not to Use It

  • When you need to count objects
  • When you need bounding boxes
  • When identifying specific individuals

Responsible AI Considerations

While AI-900 does not go deep technically, you should understand high-level considerations:

  • Bias in training images can affect predictions
  • Transparency in how labels are applied
  • Privacy concerns when images contain people

Key Exam Takeaways

  • Image classification assigns labels to entire images
  • It does not locate or count objects
  • Azure AI Vision and Custom Vision are the primary services
  • Look for keywords like categorize, classify, label
  • Be careful not to confuse classification with object detection

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features of Image Classification Solutions (AI-900 Exam Prep)

Practice Questions


Question 1

A company wants to automatically categorize uploaded photos as landscape, food, or people. The location of objects in the image is not required. Which computer vision solution should be used?

A. Object detection
B. Image segmentation
C. Image classification
D. Facial recognition

Correct Answer: C

Explanation:
Image classification assigns one or more labels to an entire image without identifying object locations.


Question 2

Which output is typically returned by an image classification model?

A. Bounding boxes and coordinates
B. Pixel-level masks
C. Labels with confidence scores
D. Audio transcripts

Correct Answer: C

Explanation:
Image classification returns labels that describe the image, usually with confidence or probability scores.


Question 3

Which scenario is the best fit for image classification?

A. Counting the number of people in an image
B. Identifying where objects appear in an image
C. Determining whether an image contains a cat or a dog
D. Tracking a moving object in a video

Correct Answer: C

Explanation:
Image classification is ideal when determining what is in the image, not where it appears.


Question 4

Which Azure service allows you to train a custom image classification model using labeled images?

A. Azure AI Vision
B. Azure OpenAI
C. Azure AI Custom Vision
D. Azure Cognitive Search

Correct Answer: C

Explanation:
Azure AI Custom Vision enables training custom image classification models using user-provided labeled datasets.


Question 5

What is a key difference between image classification and object detection?

A. Image classification requires training; object detection does not
B. Image classification identifies object locations
C. Object detection assigns labels only
D. Image classification analyzes the entire image

Correct Answer: D

Explanation:
Image classification evaluates the whole image and assigns labels, while object detection also locates objects using bounding boxes.


Question 6

Which Azure service provides prebuilt image classification capabilities without requiring model training?

A. Azure AI Custom Vision
B. Azure AI Vision
C. Azure Machine Learning
D. Azure Blob Storage

Correct Answer: B

Explanation:
Azure AI Vision offers prebuilt computer vision models that can classify images without custom training.


Question 7

An image classification solution returns a confidence score of 0.95 for the label Animal. What does this indicate?

A. The model has been retrained
B. The label is incorrect
C. The model is highly confident in the prediction
D. The image contains multiple objects

Correct Answer: C

Explanation:
Confidence scores indicate how certain the model is about its prediction.


Question 8

Which requirement would make image classification insufficient as a solution?

A. Categorizing images by content
B. Identifying whether images contain people
C. Locating objects within an image
D. Tagging images with labels

Correct Answer: C

Explanation:
Image classification does not provide spatial location data. Object detection would be required instead.


Question 9

Which type of machine learning model is most commonly used for image classification?

A. Decision trees
B. Linear regression
C. Convolutional neural networks
D. K-means clustering

Correct Answer: C

Explanation:
Convolutional neural networks (CNNs) are widely used for image classification due to their effectiveness with visual data.


Question 10

Which phrase in an exam question is the strongest indicator that image classification is the correct solution?

A. “Identify and count objects”
B. “Detect faces and emotions”
C. “Assign a category to an image”
D. “Draw bounding boxes”

Correct Answer: C

Explanation:
Keywords such as classify, label, or categorize strongly indicate image classification.


Final AI-900 Exam Reminders

  • Image classification = labels, not locations
  • Prebuilt models → Azure AI Vision
  • Custom labels → Azure AI Custom Vision
  • Watch for exam “traps” involving bounding boxes

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features of Object Detection Solutions (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A city wants to analyze traffic camera images to identify and count cars and bicycles. The solution must determine where each vehicle appears in the image. Which computer vision solution should be used?

A. Image classification
B. Image segmentation
C. Object detection
D. Facial recognition

Correct Answer: C

Explanation:
Object detection identifies objects and their locations using bounding boxes, making it ideal for counting and tracking vehicles.


Question 2

Which output is characteristic of an object detection solution?

A. A single label for the entire image
B. Bounding boxes with labels and confidence scores
C. Pixel-level classification masks
D. Text extracted from images

Correct Answer: B

Explanation:
Object detection returns bounding boxes for detected objects, along with labels and confidence scores.


Question 3

Which scenario best fits object detection rather than image classification?

A. Tagging photos as indoor or outdoor
B. Determining if an image contains a dog
C. Identifying the locations of multiple people in an image
D. Categorizing images by color theme

Correct Answer: C

Explanation:
Object detection is required when identifying and locating multiple objects within an image.


Question 4

Which Azure service provides prebuilt object detection models without requiring custom training?

A. Azure Machine Learning
B. Azure AI Custom Vision
C. Azure AI Vision
D. Azure Cognitive Search

Correct Answer: C

Explanation:
Azure AI Vision offers prebuilt computer vision models, including object detection, that require no training.


Question 5

What is the main difference between object detection and image segmentation?

A. Object detection identifies pixel-level boundaries
B. Image segmentation uses bounding boxes
C. Object detection locates objects using bounding boxes
D. Image segmentation does not use machine learning

Correct Answer: C

Explanation:
Object detection locates objects using bounding boxes, while segmentation classifies each pixel in the image.


Question 6

Which requirement would make object detection the most appropriate solution?

A. Classifying images into predefined categories
B. Identifying precise pixel boundaries of objects
C. Locating and counting multiple objects in an image
D. Detecting sentiment in text

Correct Answer: C

Explanation:
Object detection is best when both identification and location of objects are required.


Question 7

A team needs to detect custom manufacturing defects in images of products. Which Azure service should they use?

A. Azure AI Vision (prebuilt models)
B. Azure AI Custom Vision with object detection
C. Azure OpenAI
D. Azure Text Analytics

Correct Answer: B

Explanation:
Azure AI Custom Vision allows training custom object detection models using labeled images with bounding boxes.


Question 8

Which phrase in an exam question most strongly indicates an object detection solution?

A. “Assign a label to the image”
B. “Extract text from the image”
C. “Identify and locate objects”
D. “Classify image sentiment”

Correct Answer: C

Explanation:
Keywords such as identify, locate, and bounding box clearly point to object detection.


Question 9

An object detection model returns a confidence score for each detected object. What does this score represent?

A. The size of the object
B. The number of objects detected
C. The model’s certainty in the prediction
D. The training accuracy of the model

Correct Answer: C

Explanation:
Confidence scores indicate how certain the model is about each detected object.


Question 10

Which statement correctly describes object detection solutions on Azure?

A. They only support single-object images
B. They cannot be used in real-time scenarios
C. They return labels and bounding boxes
D. They do not use machine learning models

Correct Answer: C

Explanation:
Object detection solutions return both object labels and bounding boxes and support real-time and batch scenarios.


Final AI-900 Exam Pointers 🎯

  • Object detection = what + where
  • Look for counting, locating, bounding boxes
  • Azure AI Vision = prebuilt detection
  • Azure AI Custom Vision = custom detection models

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Object Detection Solutions (AI-900 Exam Prep)

Overview

Object detection is a key computer vision workload tested on the AI-900 exam. It goes beyond identifying what appears in an image by also determining where those objects are located. Object detection solutions analyze images (or video frames) and return labels, bounding boxes, and confidence scores.

On the AI-900 exam, you must be able to:

  • Recognize object detection scenarios
  • Distinguish object detection from image classification and image segmentation
  • Identify Azure services that support object detection

What Is Object Detection?

Object detection is a computer vision technique that:

  • Identifies multiple objects in an image
  • Assigns labels to each object
  • Returns bounding boxes showing object locations

It answers the question:

“What objects are in this image, and where are they?”


Key Characteristics of Object Detection

1. Bounding Boxes

  • Objects are located using rectangular boxes
  • Each bounding box defines:
    • Position (x, y coordinates)
    • Size (width and height)

This is the clearest differentiator from image classification.


2. Multiple Objects per Image

Object detection can:

  • Detect multiple objects
  • Identify different object types in the same image

Example:

  • Person
  • Bicycle
  • Car

Each with its own bounding box.


3. Labels with Confidence Scores

For each detected object, the solution returns:

  • A label (for example, Car)
  • A confidence score indicating prediction certainty
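A detection result can be pictured as a list of label/confidence/box records, which makes "counting and locating" a simple filter. The structure and values below are illustrative; real Azure AI Vision responses use their own JSON schema.

```python
from collections import Counter

# Each detection: label, confidence, and a bounding box as (x, y, width, height).
detections = [
    {"label": "car",     "confidence": 0.91, "box": (34, 50, 120, 80)},
    {"label": "car",     "confidence": 0.42, "box": (300, 60, 110, 75)},
    {"label": "bicycle", "confidence": 0.88, "box": (210, 140, 60, 40)},
]

# Keep confident detections, then count objects per label.
confident = [d for d in detections if d["confidence"] >= 0.5]
counts = Counter(d["label"] for d in confident)
print(dict(counts))  # {'car': 1, 'bicycle': 1}
```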

4. Real-Time and Batch Use

Object detection can be used for:

  • Real-time scenarios (video feeds, camera streams)
  • Batch processing (analyzing stored images)

Common Object Detection Scenarios

Object detection is appropriate when location matters.

Typical Use Cases

  • Counting people or vehicles
  • Security and surveillance
  • Retail analytics (products on shelves)
  • Traffic monitoring
  • Autonomous systems (identifying obstacles)

Object Detection vs Image Classification

Understanding this difference is critical for AI-900.

  • Image classification: labels the entire image; it does not identify object locations or use bounding boxes
  • Object detection: identifies object locations, uses bounding boxes, and detects multiple objects per image

Exam Tip:
If a question mentions “count,” “locate,” “draw boxes,” or “find all”, object detection is the correct choice.


Azure Services for Object Detection

Azure AI Vision (Prebuilt Models)

  • Provides ready-to-use object detection
  • Detects common objects
  • No training required
  • Accessible via REST APIs

Azure AI Custom Vision

  • Supports custom object detection models
  • Requires:
    • Labeled images
    • Bounding box annotations
  • Ideal for domain-specific objects

Features of Object Detection Solutions on Azure

Cloud-Based Inference

  • Runs in Azure
  • Scales automatically
  • Accessible via APIs

Custom vs Prebuilt Models

  • Prebuilt models for general use
  • Custom models for specialized scenarios

Integration with Applications

  • Can be embedded into:
    • Web apps
    • Mobile apps
    • IoT solutions
  • Often used with camera feeds or uploaded images

When to Use Object Detection

Use object detection when:

  • You need to find and locate objects
  • Multiple objects may exist
  • You need counts or spatial awareness

When Not to Use It

  • When only overall image labels are required
  • When pixel-level accuracy is needed (segmentation)

Responsible AI Considerations

At a high level, AI-900 expects awareness of:

  • Bias in training images
  • Privacy when detecting people
  • Transparency in how results are used

Key Exam Takeaways

  • Object detection identifies what and where
  • Uses bounding boxes + labels
  • Supports multiple objects per image
  • Azure AI Vision = prebuilt
  • Azure AI Custom Vision = custom models
  • Watch for keywords: detect, locate, count, bounding box

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify Features of Optical Character Recognition (OCR) Solutions (AI-900 Exam Prep)

Practice Questions


Question 1

A company wants to convert scanned paper documents into searchable digital text. Which computer vision solution should be used?

A. Image classification
B. Object detection
C. Optical character recognition (OCR)
D. Image segmentation

Correct Answer: C

Explanation:
OCR extracts text from images and scanned documents, converting it into machine-readable text.


Question 2

Which output is typically produced by an OCR solution?

A. Image labels with confidence scores
B. Bounding boxes around detected objects
C. Extracted text and its location in the image
D. Pixel-level image masks

Correct Answer: C

Explanation:
OCR outputs recognized text along with positional information, often as bounding boxes.


Question 3

Which scenario is the best fit for OCR?

A. Counting vehicles in traffic images
B. Categorizing images as indoor or outdoor
C. Extracting invoice numbers from scanned receipts
D. Detecting faces in photos

Correct Answer: C

Explanation:
OCR is designed to extract text, such as invoice numbers, from images or documents.


Question 4

Which Azure service provides prebuilt OCR capabilities without requiring model training?

A. Azure AI Vision
B. Azure Machine Learning
C. Azure AI Custom Vision
D. Azure OpenAI

Correct Answer: A

Explanation:
Azure AI Vision includes prebuilt OCR features that can recognize text in images and documents.


Question 5

What is a key difference between OCR and object detection?

A. OCR identifies object locations
B. Object detection extracts text
C. OCR converts visual text into machine-readable text
D. Object detection does not use machine learning

Correct Answer: C

Explanation:
OCR focuses on extracting and converting text, while object detection identifies and locates objects.


Question 6

Which type of text can OCR solutions typically recognize?

A. Printed text only
B. Handwritten text only
C. Printed and handwritten text
D. Spoken language

Correct Answer: C

Explanation:
Modern OCR solutions can recognize both printed and handwritten text, though accuracy may vary.


Question 7

Which Azure service builds on OCR to extract structured information from forms and documents?

A. Azure AI Vision
B. Azure AI Document Intelligence
C. Azure Cognitive Search
D. Azure Machine Learning

Correct Answer: B

Explanation:
Azure AI Document Intelligence extends OCR capabilities to analyze forms, invoices, and receipts.


Question 8

Which phrase in an exam question most strongly indicates an OCR solution?

A. “Classify images by category”
B. “Detect and locate objects”
C. “Extract text from scanned documents”
D. “Analyze facial expressions”

Correct Answer: C

Explanation:
Keywords such as extract text, recognize text, or scan documents point directly to OCR.


Question 9

What responsible AI consideration is most relevant when using OCR on documents?

A. Object bias
B. Data privacy and security
C. Bounding box accuracy
D. Image resolution

Correct Answer: B

Explanation:
OCR often processes documents containing sensitive personal or business information, making privacy and security critical.


Question 10

Which statement correctly describes OCR solutions on Azure?

A. They only work with handwritten documents
B. They require custom training for every use case
C. They convert images of text into digital text
D. They are used to detect objects in images

Correct Answer: C

Explanation:
OCR solutions convert visual representations of text into machine-readable digital text.


Final AI-900 Exam Pointers

  • OCR = read text from images
  • Look for keywords: scan, read, extract text, digitize
  • Azure AI Vision = prebuilt OCR
  • Azure AI Document Intelligence = structured document extraction

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Optical Character Recognition (OCR) Solutions (AI-900 Exam Prep)

Overview

Optical Character Recognition (OCR) is a core computer vision workload tested on the AI-900 exam. OCR solutions are designed to extract printed or handwritten text from images and documents and convert it into machine-readable text.

On the AI-900 exam, you are expected to:

  • Recognize OCR use cases
  • Understand what OCR does and does not do
  • Identify Azure services that provide OCR capabilities

What Is Optical Character Recognition (OCR)?

OCR is a computer vision technique that:

  • Detects text within images
  • Extracts characters, words, and lines
  • Converts visual text into digital text

It answers the question:

“What text appears in this image or document?”


Key Characteristics of OCR Solutions

1. Text Extraction

OCR solutions can extract:

  • Printed text
  • Handwritten text (depending on the service)
  • Numbers, symbols, and punctuation

The output is searchable and editable text.


2. Language Support

OCR solutions typically:

  • Support multiple languages
  • Automatically detect language in many cases

This is important for global document processing scenarios.


3. Layout and Structure Awareness

Advanced OCR solutions can identify:

  • Lines and paragraphs
  • Tables
  • Forms
  • Key-value pairs

This enables downstream document processing and automation.


4. Bounding Boxes for Text

OCR can return:

  • Extracted text
  • Bounding boxes showing where text appears

This allows applications to highlight or validate text locations.
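Because each recognized line carries a bounding box, an application can rebuild reading order from positions. The line records below are a sketch, not the actual Azure AI Vision OCR response format.

```python
# OCR output in miniature: each recognized line has text and a bounding box
# given as (x, y, width, height). Sorting by y rebuilds top-to-bottom order.
lines = [
    {"text": "Total: $42.00",    "box": (20, 300, 180, 24)},
    {"text": "Invoice #1001",    "box": (20, 40, 200, 24)},
    {"text": "Date: 2024-05-01", "box": (20, 80, 210, 24)},
]

ordered = sorted(lines, key=lambda l: l["box"][1])  # top-to-bottom by y coordinate
document_text = "\n".join(l["text"] for l in ordered)
print(document_text)
```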


5. Image and Document Input

OCR works with:

  • Images (JPG, PNG)
  • Scanned documents
  • PDFs
  • Photos taken by mobile devices

Common OCR Scenarios

OCR is the correct solution when text extraction is the primary goal.

Typical Use Cases

  • Invoice and receipt processing
  • Digitizing scanned documents
  • License plate recognition
  • Form processing
  • Reading text from signs or labels

OCR vs Other Computer Vision Workloads

Understanding this distinction is critical for AI-900.

  • Image classification: Categorize entire images
  • Object detection: Locate and identify objects
  • OCR: Extract text from images
  • Image segmentation: Classify pixels

Exam Tip:
If the question mentions read, extract, recognize text, or digitize documents, OCR is the correct answer.


Azure Services for OCR

Azure AI Vision (OCR Capabilities)

  • Provides prebuilt OCR models
  • Extracts printed and handwritten text
  • Supports multiple languages
  • No training required
  • Accessible via REST APIs

Azure AI Document Intelligence (formerly Form Recognizer)

  • Builds on OCR to:
    • Extract structured data
    • Analyze forms and documents
  • Commonly used for:
    • Invoices
    • Receipts
    • Business documents

Features of OCR Solutions on Azure

Prebuilt Models

  • Ready to use
  • No custom training needed
  • Ideal for common document scenarios

Scalable Cloud Processing

  • Runs in Azure
  • Handles large document volumes
  • Integrates with automation workflows

Integration with Other Services

OCR outputs are often used with:

  • Search services
  • Databases
  • Business process automation
  • AI-powered document workflows

When to Use OCR

Use OCR when:

  • Text needs to be extracted from images or documents
  • Manual data entry must be reduced
  • Documents need to be searchable

When Not to Use OCR

  • When identifying objects rather than text
  • When categorizing images without text extraction
  • When pixel-level image analysis is required

Responsible AI Considerations

At a fundamentals level, AI-900 expects awareness of:

  • Privacy when processing documents with personal data
  • Security of stored text and documents
  • Accuracy limitations, especially with handwritten or low-quality images

Key Exam Takeaways

  • OCR extracts text from images
  • Converts visual content into machine-readable text
  • Supports multiple languages
  • Azure AI Vision provides OCR capabilities
  • Azure AI Document Intelligence extends OCR for forms
  • Watch for keywords: read, extract, recognize text, scan

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Identify Features of Facial Detection and Facial Analysis Solutions (AI-900 Exam Prep)

Overview

Facial detection and facial analysis are computer vision capabilities that enable applications to locate human faces in images and extract non-identifying attributes about those faces. In Azure, these capabilities are provided through Azure AI Vision and are commonly used in scenarios such as photo moderation, accessibility tools, demographic analysis, and privacy-preserving image processing.

For the AI-900 exam, it’s critical to understand:

  • The difference between facial detection and facial analysis
  • What these solutions can and cannot do
  • Typical use cases
  • How they align with Responsible AI principles

Importantly, facial recognition (identity verification) is not part of this topic and is intentionally excluded from AI-900.


What Is Facial Detection?

Definition

Facial detection is the process of identifying whether human faces are present in an image and determining where they are located.

Key Features

Facial detection solutions can:

  • Detect one or more faces in an image
  • Return bounding box coordinates for each detected face
  • Identify facial landmarks (such as eyes, nose, and mouth positions)
  • Work on still images (no identity matching is performed)

What Facial Detection Does Not Do

  • It does not identify individuals
  • It does not verify or authenticate users
  • It does not infer emotions, age, or gender

Common Use Cases

  • Blurring or masking faces for privacy
  • Counting people in images
  • Applying filters or effects to faces
  • Ensuring faces are present before further processing
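The privacy-masking use case follows directly from the bounding boxes that detection returns. As a minimal sketch (a real implementation would blur pixels in an actual image with an imaging library), here a tiny grayscale "image" is a list of rows and each face box is blacked out:

```python
# Face boxes are (x, y, width, height); the image is 8 pixels wide, 6 tall.
image = [[128] * 8 for _ in range(6)]   # uniform gray image
faces = [(2, 1, 3, 2)]                  # one detected face region

for x, y, w, h in faces:
    for row in range(y, y + h):
        for col in range(x, x + w):
            image[row][col] = 0         # mask every pixel inside the face box

print(image[1][2], image[0][0])  # 0 128
```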

What Is Facial Analysis?

Definition

Facial analysis builds on facial detection by extracting descriptive attributes from detected faces, without identifying who the person is.

Key Features

Facial analysis solutions can infer attributes such as:

  • Estimated age range
  • Facial expression (e.g., smiling, neutral)
  • Presence of accessories (glasses, face masks)
  • Head pose and orientation
  • Facial landmarks and geometry

These features help applications understand facial characteristics, not identity.


Facial Detection vs Facial Analysis

  • Facial detection: detects faces in images and returns each face's location; it does not estimate attributes
  • Facial analysis: detects faces and also estimates attributes such as age range or expression
  • Neither capability identifies individuals, and neither requires model training; both use prebuilt Azure AI models

Azure Services Used

For AI-900 purposes, these capabilities are delivered through:

Azure AI Vision

  • Prebuilt computer vision models
  • REST APIs and SDKs
  • Supports image-based facial detection and analysis
  • No machine learning expertise required

Candidates should recognize that custom model training is not required for facial detection or analysis in Azure.
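Because these are prebuilt models exposed as REST APIs, using them is a matter of assembling an HTTP request rather than training anything. The sketch below builds such a request; the endpoint path, header name, and feature list are based on the Image Analysis REST pattern but should be treated as illustrative placeholders, and AI-900 does not require knowing them:

```python
# Sketch of assembling a REST call to a prebuilt vision endpoint.
# Path, API version, and feature names are assumptions for illustration;
# consult the current Azure AI Vision REST reference for exact values.

def build_request(endpoint: str, key: str, features: list) -> dict:
    """Assemble the URL, headers, and query parameters for an analyze call."""
    return {
        "url": f"{endpoint}/computervision/imageanalysis:analyze",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,            # resource key
            "Content-Type": "application/octet-stream",  # raw image bytes
        },
        "params": {
            "api-version": "2023-10-01",
            "features": ",".join(features),
        },
    }

req = build_request("https://myresource.cognitiveservices.azure.com",
                    "<key>", ["people"])
```

The takeaway for the exam is the shape of this workflow: send an image, get structured results back, no model training involved.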


Responsible AI and Facial Technologies

Microsoft places strong emphasis on Responsible AI, particularly for facial technologies due to their sensitive nature.

Key Responsible AI Principles Applied

  • Privacy & Security: Facial data is biometric information
  • Transparency: Users should understand how facial data is used
  • Fairness: Models should avoid bias across demographics
  • Accountability: Clear governance and consent are required

Exam Tip

Expect questions that test:

  • Awareness of ethical considerations
  • Understanding of appropriate vs inappropriate use cases
  • Clear distinction between analysis and identification

What AI-900 Explicitly Does NOT Cover

To avoid common exam traps, remember:

  • Facial recognition (identity matching) is not included
  • Authentication and surveillance scenarios are out of scope
  • Custom face datasets are not required
  • Training facial models from scratch is not tested

Typical AI-900 Exam Scenarios

You may be asked to identify which capability to use when:

  • Blurring faces for privacy → Facial detection
  • Estimating whether people are smiling → Facial analysis
  • Counting faces in a photo → Facial detection
  • Inferring accessories like glasses → Facial analysis

Key Takeaways for the Exam

  • Facial detection answers “Where are the faces?”
  • Facial analysis answers “What attributes do these faces have?”
  • Neither identifies who a person is
  • Both are prebuilt Azure AI Vision capabilities
  • Responsible AI considerations are always relevant

Go to the Practice Exam Questions for this topic.

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Identify features of facial detection and facial analysis solutions (AI-900 Exam Prep)

Practice Questions


Question 1

You need to determine whether an image contains one or more human faces and identify where those faces are located.
Which computer vision capability should you use?

A. Image classification
B. Object detection
C. Facial detection
D. Facial recognition

Correct Answer: C

Explanation:
Facial detection is designed to identify the presence and location of faces in an image using bounding boxes. It does not identify individuals, which rules out facial recognition.


Question 2

Which output is typically returned by a facial detection solution?

A. Person’s name
B. Bounding box coordinates of faces
C. Sentiment score
D. Object category labels

Correct Answer: B

Explanation:
Facial detection returns the location of detected faces, usually as bounding boxes or facial landmarks. It does not return identity or sentiment.


Question 3

An application estimates whether people in a photo are smiling and whether they are wearing glasses.
Which capability is being used?

A. Image classification
B. Facial recognition
C. Facial analysis
D. Object detection

Correct Answer: C

Explanation:
Facial analysis extracts descriptive attributes such as facial expressions and accessories. Facial recognition would attempt to identify individuals, which is not required here.


Question 4

Which statement best describes the difference between facial detection and facial analysis?

A. Facial detection identifies people; facial analysis detects faces
B. Facial detection finds faces; facial analysis extracts attributes
C. Facial detection requires training; facial analysis does not
D. Facial analysis works only on video

Correct Answer: B

Explanation:
Facial detection locates faces, while facial analysis builds on detection by inferring attributes such as age estimates or expressions.


Question 5

Which Azure service provides prebuilt facial detection and facial analysis capabilities?

A. Azure Machine Learning
B. Azure Custom Vision
C. Azure AI Vision
D. Azure OpenAI Service

Correct Answer: C

Explanation:
Azure AI Vision provides prebuilt APIs for facial detection and analysis without requiring custom model training.


Question 6

A company wants to blur all faces in uploaded images to protect user privacy.
Which capability should be used?

A. Facial recognition
B. Facial analysis
C. Facial detection
D. Image classification

Correct Answer: C

Explanation:
Facial detection identifies the location of faces, which allows the application to blur or mask them without identifying individuals.


Question 7

Which of the following is NOT a capability of facial analysis?

A. Estimating age range
B. Detecting facial landmarks
C. Identifying a person by name
D. Detecting facial expressions

Correct Answer: C

Explanation:
Facial analysis does not identify individuals. Identifying a person by name would require facial recognition, which is outside the scope of AI-900.


Question 8

Why are facial detection and facial analysis considered sensitive AI capabilities?

A. They require expensive hardware
B. They always identify individuals
C. They involve biometric data and privacy concerns
D. They only work in controlled environments

Correct Answer: C

Explanation:
Facial data is biometric information, so its use raises privacy, fairness, and transparency concerns addressed by Responsible AI principles.


Question 9

Which Responsible AI principle is most directly related to ensuring users understand how facial data is being used?

A. Reliability and safety
B. Transparency
C. Performance optimization
D. Scalability

Correct Answer: B

Explanation:
Transparency ensures that users are informed about how facial detection or analysis systems work and how their data is processed.


Question 10

An exam question asks which scenario is appropriate for facial analysis.
Which option should you choose?

A. Authenticating a user for secure login
B. Matching a face to a passport database
C. Determining whether people in an image are smiling
D. Tracking individuals across multiple cameras

Correct Answer: C

Explanation:
Facial analysis is suitable for extracting non-identifying attributes such as expressions. Authentication, identity matching, and tracking involve facial recognition and are not covered in AI-900.


Exam Tips Recap

  • Responsible AI considerations are fair game on the exam
  • Facial detection → “Where are the faces?”
  • Facial analysis → “What attributes do the faces have?”
  • Neither identifies individuals; identity recognition is not part of AI-900
  • Azure uses prebuilt AI Vision models
  • Watch for privacy- and ethics-based questions

Go to the AI-900 Exam Prep Hub main page.

Practice Questions: Describe capabilities of the Azure AI Vision service (AI-900 Exam Prep)

Practice Exam Questions


Question 1

A company wants to automatically generate short descriptions such as “A group of people standing on a beach” for images uploaded to its website. No model training is required.

Which Azure service should be used?

A. Azure Machine Learning
B. Azure AI Vision image analysis
C. Azure Custom Vision
D. Azure OpenAI Service

Correct Answer: B

Explanation:
Azure AI Vision image analysis can generate natural language descriptions of images using prebuilt models. Azure Machine Learning and Custom Vision require training, and Azure OpenAI is not designed for image analysis tasks.


Question 2

Which Azure AI Vision capability extracts printed and handwritten text from scanned documents and images?

A. Image tagging
B. Object detection
C. Optical Character Recognition (OCR)
D. Facial analysis

Correct Answer: C

Explanation:
OCR is specifically designed to detect and extract text from images, including scanned documents and handwritten content.


Question 3

A developer needs to identify objects in an image and return their locations using bounding boxes.

Which Azure AI Vision feature should be used?

A. Image classification
B. Image tagging
C. Object detection
D. Image description

Correct Answer: C

Explanation:
Object detection identifies what objects are present and where they are located using bounding boxes and confidence scores.


Question 4

Which capability of Azure AI Vision can detect faces and return attributes such as estimated age and facial expression?

A. Facial recognition
B. Facial detection and facial analysis
C. Image classification
D. Custom Vision

Correct Answer: B

Explanation:
Azure AI Vision supports facial detection and analysis, which provides facial attributes but does not identify individuals.


Question 5

A solution must automatically assign keywords like “outdoor”, “food”, or “animal” to images for search and organization.

Which Azure AI Vision feature meets this requirement?

A. OCR
B. Object detection
C. Image tagging
D. Facial analysis

Correct Answer: C

Explanation:
Image tagging assigns descriptive labels to images to improve categorization and searchability.


Question 6

Which statement best describes Azure AI Vision?

A. It requires training a custom model for each scenario
B. It provides prebuilt computer vision capabilities through APIs
C. It is only used for facial recognition
D. It can only analyze video streams

Correct Answer: B

Explanation:
Azure AI Vision offers prebuilt computer vision models accessed via APIs, requiring no model training.


Question 7

A company wants to analyze images quickly without building or training a machine learning model.

Which Azure service is most appropriate?

A. Azure Machine Learning
B. Azure Custom Vision
C. Azure AI Vision
D. Azure Databricks

Correct Answer: C

Explanation:
Azure AI Vision is designed for quick deployment using prebuilt models, making it ideal when no custom training is required.


Question 8

Which task is NOT a capability of Azure AI Vision?

A. Detecting objects in an image
B. Extracting text from images
C. Identifying specific individuals in photos
D. Generating image descriptions

Correct Answer: C

Explanation:
Azure AI Vision does not identify individuals. Facial recognition and identity verification are restricted and not required for AI-900.


Question 9

A scenario mentions analyzing images while following Microsoft’s Responsible AI principles, particularly around privacy and fairness.

Which Azure AI Vision feature is most closely associated with these considerations?

A. Image tagging
B. Facial detection and analysis
C. OCR
D. Object detection

Correct Answer: B

Explanation:
Facial detection and analysis involve human data and are closely tied to privacy, fairness, and transparency considerations.


Question 10

When should Azure AI Vision be used instead of Azure Custom Vision?

A. When you need a highly specialized image classification model
B. When you want full control over training data
C. When you need prebuilt image analysis without training
D. When labeling thousands of custom images

Correct Answer: C

Explanation:
Azure AI Vision is ideal for prebuilt, general-purpose image analysis scenarios. Custom Vision is used when custom training is required.


Final Exam Tips for This Topic

  • Think prebuilt vs custom
  • Azure AI Vision = no training
  • OCR = text extraction
  • Object detection = what + where
  • Facial analysis ≠ facial recognition

Go to the AI-900 Exam Prep Hub main page.