Tag: DP-900 Exam Prep

Practice Questions: Describe Features of Analytical Workloads (DP-900 Exam Prep)

Practice Questions


Question 1

Which scenario best represents an analytical workload?

A. Recording a new customer order
B. Updating inventory quantities
C. Generating a yearly sales trends report
D. Processing a credit card payment

Answer: C

Explanation:
Analytical workloads focus on reporting and historical analysis, not real-time operations.


Question 2

Analytical workloads primarily involve which type of queries?

A. Short insert and update statements
B. Point lookups by primary key
C. Complex queries with aggregations
D. Transactional batch commits

Answer: C

Explanation:
Analytical workloads typically use complex SELECT queries with GROUP BY, SUM, AVG, etc.


Question 3

Which characteristic is MOST associated with analytical workloads?

A. Many small write operations
B. ACID transaction enforcement
C. Read-heavy access to large datasets
D. Millisecond response requirements

Answer: C

Explanation:
Analytical systems mainly read and aggregate large volumes of data.


Question 4

Which schema design is commonly used for analytical workloads?

A. Fully normalized schema
B. Hierarchical schema
C. Denormalized star schema
D. Key-value schema

Answer: C

Explanation:
Analytical systems often use denormalized schemas (such as star schemas) to improve query performance.


Question 5

Which Azure service is MOST appropriate for enterprise-scale analytical reporting?

A. Azure SQL Database
B. Azure Synapse Analytics
C. Azure Table Storage
D. Azure Queue Storage

Answer: B

Explanation:
Azure Synapse Analytics is designed for large-scale analytical and data warehousing workloads.


Question 6

Which statement about analytical workloads is TRUE?

A. They prioritize low-latency updates
B. They mainly support operational business processes
C. They often analyze historical data
D. They require normalized schemas

Answer: C

Explanation:
Analytical workloads typically analyze historical and aggregated data for insights.


Question 7

Which storage format is commonly used to optimize analytical queries?

A. Row-based text files
B. Columnar formats such as Parquet
C. Binary key-value files
D. XML documents

Answer: B

Explanation:
Columnar formats like Parquet improve performance for analytical queries by minimizing I/O.


Question 8

Which workload characteristic differentiates analytical systems from transactional systems?

A. Use of indexes
B. Support for SQL
C. Focus on throughput over latency
D. Ability to store structured data

Answer: C

Explanation:
Analytical systems prioritize processing large volumes of data efficiently rather than ultra-fast response times.


Question 9

A company combines data from sales, marketing, and customer systems to build dashboards in Power BI.

What type of workload is this?

A. Transactional
B. Streaming
C. Analytical
D. Operational

Answer: C

Explanation:
Combining multiple sources for dashboards and insights is an analytical workload.


Question 10

Which activity is LEAST likely to be part of an analytical workload?

A. Running aggregate queries
B. Creating executive dashboards
C. Performing nightly ETL jobs
D. Updating individual customer records

Answer: D

Explanation:
Updating individual records is transactional, not analytical.


✅ Exam Memory Anchors

For DP-900, remember analytical workloads as:

✔ OLAP
✔ Large datasets
✔ Complex read-heavy queries
✔ Aggregations & reporting
✔ Historical analysis
✔ Denormalized schemas
✔ Columnar storage
✔ Azure Synapse + Data Lake
✔ Power BI consumers


Go to the DP-900 Exam Prep Hub main page.

Describe Features of Analytical Workloads (DP-900 Exam Prep)

This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. 
This topic falls under these sections:
Describe core data concepts (25–30%)
--> Describe common data workloads
--> Describe features of analytical workloads


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Analytical workloads are essential for deriving insights from data. Unlike transactional workloads — which support day-to-day operations — analytical workloads focus on querying, aggregating, summarizing, and analyzing large volumes of data to support reporting, decision making, and trend analysis.


What Is an Analytical Workload?

An analytical workload refers to data processing that is oriented toward analysis, rather than operational updates. It is optimized for:

  • Complex queries
  • Aggregations across large datasets
  • Historical analysis and reporting
  • Business intelligence (BI)

Analytical workloads are often associated with OLAP (Online Analytical Processing) systems.


Key Features of Analytical Workloads

1. Large Volumes of Data

Analytical systems often operate on datasets that are:

  • Much larger than transactional tables
  • Historical — spanning months or years of records
  • Combined from multiple sources (e.g., transactional systems, logs, external data)

These datasets can be stored in data warehouses, data lakes, or big data systems.


2. Complex, Read-Heavy Queries

Analytical workloads are dominated by complex SELECT queries, often involving:

  • Aggregations (SUM, AVG, COUNT)
  • Grouping by categories
  • Filtering on multiple dimensions
  • Joining large tables

These queries can be computationally intensive and are often used for reporting and dashboards.
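This query shape can be sketched with Python's built-in sqlite3 module; the Sales table and its columns below are hypothetical, used only to illustrate the aggregation pattern (GROUP BY plus SUM/COUNT over many rows).

```python
import sqlite3

# In-memory database with a hypothetical Sales table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (Region TEXT, Year INTEGER, Amount REAL)")
conn.executemany(
    "INSERT INTO Sales VALUES (?, ?, ?)",
    [("West", 2023, 100.0), ("West", 2023, 250.0), ("East", 2023, 80.0)],
)

# A typical analytical query: aggregate and group rather than fetch one record.
rows = conn.execute(
    """
    SELECT Region, Year, SUM(Amount) AS Total, COUNT(*) AS Orders
    FROM Sales
    GROUP BY Region, Year
    ORDER BY Total DESC
    """
).fetchall()

for region, year, total, orders in rows:
    print(region, year, total, orders)
```

Note that the query touches every row in the table — the opposite of a transactional point lookup by primary key.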


3. Denormalized or Columnar Storage

Unlike transactional systems that use normalized schemas, analytical workloads often use:

  • Denormalized schemas (e.g., star or snowflake schemas)
  • Columnar storage formats (e.g., Parquet, ORC)

These formats improve query performance by minimizing I/O and enabling efficient aggregation.
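The row-versus-column layout idea can be sketched in pure Python (illustrative only; real columnar formats such as Parquet add compression and encoding on top of this basic layout):

```python
# Row-oriented: each record is stored together; a SUM over one column
# still has to touch every field of every record.
rows = [
    {"id": 1, "region": "West", "amount": 100.0},
    {"id": 2, "region": "East", "amount": 80.0},
    {"id": 3, "region": "West", "amount": 250.0},
]

# Column-oriented: each column is stored contiguously; an aggregate
# reads only the one column it needs, minimizing I/O.
columns = {
    "id": [1, 2, 3],
    "region": ["West", "East", "West"],
    "amount": [100.0, 80.0, 250.0],
}

total_row_store = sum(r["amount"] for r in rows)   # scans whole records
total_col_store = sum(columns["amount"])           # reads just 'amount'
print(total_row_store, total_col_store)
```

Both layouts give the same answer; the columnar one simply skips the data the query never asked for, which is why analytical engines prefer it.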


4. Longer Query Response Times (But High Throughput)

Queries in analytical systems are not always expected to return results in milliseconds, as they:

  • Scan large amounts of data
  • Compute aggregates and summaries
  • May be optimized for throughput rather than low latency

This contrasts with transactional systems where fast, small transactions are critical.


5. Batch or Bulk Processing

Analytical workloads often rely on:

  • Batch ingestion of data (e.g., nightly ETL jobs)
  • Data transformation pipelines (cleaning, aggregating, enriching)
  • Tools like Azure Data Factory, Databricks, or Synapse pipelines

These pipelines prepare data for analytics and reporting.
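A toy batch-transformation step in pure Python, with hypothetical record fields; in practice this cleaning logic would live in an Azure Data Factory, Databricks, or Synapse pipeline rather than a script.

```python
# Raw extracted records: inconsistent types, messy strings, one dirty row.
raw_orders = [
    {"order_id": 1, "amount": "100.50", "region": " west "},
    {"order_id": 2, "amount": "80.25", "region": "EAST"},
    {"order_id": 3, "amount": None, "region": "west"},  # missing amount
]

def transform(record):
    """Clean one record; drop records with missing amounts."""
    if record["amount"] is None:
        return None
    return {
        "order_id": record["order_id"],
        "amount": float(record["amount"]),
        "region": record["region"].strip().title(),
    }

# The "T" of ETL: apply the transformation to the whole batch.
cleaned = [t for r in raw_orders if (t := transform(r)) is not None]
print(cleaned)
```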


6. Support for BI and Reporting Tools

Analytical workloads integrate with business intelligence tools, such as:

  • Power BI
  • Excel
  • Azure Synapse Analytics Studio

These tools connect directly to analytical stores to produce dashboards, charts, and insights.


Analytical vs Transactional Workloads — Quick Comparison

Feature | Transactional | Analytical
Primary Purpose | Operational processing (OLTP) | Decision support & reporting (OLAP)
Data Size | Small to moderate | Large or very large
Workload Type | Frequent inserts/updates/deletes | Complex queries/aggregations
Schema | Normalized | Often denormalized
Query Focus | Single record operations | Scanning many records
Typical Tools | Relational OLTP databases | Data warehouses, big data systems

Where Analytical Workloads Run in Azure

Azure offers several services optimized for analytical workloads:

Azure Synapse Analytics

A unified analytics service that enables:

  • Data warehousing
  • Big data processing
  • Integration with Spark and SQL
  • High-performance analytics

It is ideal for large-scale reporting and BI scenarios.


Azure Data Lake Storage + Analytics

Azure Data Lake Storage Gen2 works with:

  • Apache Spark
  • Azure Databricks
  • Synapse Analytics

This combination supports big data analytics, machine learning, and data science workloads.


Azure SQL Data Warehouse (Synapse Dedicated SQL Pools)

This is the former SQL DW offering (now part of Synapse) optimized for:

  • Massively parallel processing (MPP)
  • Distributed query execution
  • High-volume analytical queries

Why Analytical Workloads Matter for DP-900

For DP-900, you should be able to:

  • Define analytical workloads and distinguish them from transactional workloads
  • Recognize use cases where analytical workloads are appropriate
  • Identify Azure services designed for analytical processing
  • Understand schema design and storage options that support analytics

Being able to describe these features shows your understanding of how modern data ecosystems support business intelligence and analytics.


Summary — Exam-Relevant Takeaways

✔ Analytical workloads focus on complex queries and analysis across large datasets
✔ They use denormalized schemas and columnar storage to boost performance
✔ They are optimized for throughput and summarization, not real-time transactions
✔ They typically support reports, dashboards, and insights
✔ Azure services like Azure Synapse Analytics, Azure Data Lake, and Databricks support these workloads


Go to the Practice Exam Questions for this topic.

Go to the DP-900 Exam Prep Hub main page.

Practice Questions: Describe Features of Transactional Workloads (DP-900 Exam Prep)

Practice Questions


Question 1

Which scenario best represents a transactional workload?

A. Generating monthly sales reports
B. Training a machine learning model
C. Recording a customer purchase in real time
D. Visualizing historical trends

Answer: C

Explanation:
Transactional workloads capture operational business events as they occur.


Question 2

Which characteristic is most closely associated with transactional workloads?

A. Large batch queries
B. Complex aggregations
C. Frequent small read/write operations
D. Historical trend analysis

Answer: C

Explanation:
Transactional systems perform many small, fast inserts, updates, and deletes.


Question 3

Which ACID property ensures that completed transactions are permanently saved?

A. Atomicity
B. Consistency
C. Isolation
D. Durability

Answer: D

Explanation:
Durability guarantees that once a transaction commits, it remains stored even after failures.


Question 4

A banking system transfers money between accounts. If either debit or credit fails, both must roll back.

Which ACID property does this demonstrate?

A. Consistency
B. Isolation
C. Atomicity
D. Durability

Answer: C

Explanation:
Atomicity ensures that a transaction is all-or-nothing.


Question 5

Transactional workloads typically use which type of schema design?

A. Denormalized
B. Star schema
C. Snowflake schema
D. Normalized

Answer: D

Explanation:
Transactional systems usually use normalized schemas to reduce redundancy and enforce integrity.


Question 6

Which Azure service is MOST appropriate for a traditional OLTP application?

A. Azure Synapse Analytics
B. Azure SQL Database
C. Azure Data Lake Storage
D. Azure Blob Storage

Answer: B

Explanation:
Azure SQL Database is optimized for transactional (OLTP) workloads with ACID support.


Question 7

Which requirement is most critical for transactional workloads?

A. High throughput for batch queries
B. Schema flexibility
C. Low latency and strong consistency
D. Historical data retention

Answer: C

Explanation:
Transactional workloads prioritize fast response times and data consistency.


Question 8

Which workload is LEAST likely to be transactional?

A. Updating inventory levels
B. Processing credit card payments
C. Inserting new customer records
D. Running yearly financial summaries

Answer: D

Explanation:
Yearly summaries are analytical, not transactional.


Question 9

Which statement about transactional workloads is TRUE?

A. They primarily analyze historical data
B. They usually involve complex joins across millions of rows
C. They support operational business processes
D. They are optimized for reporting

Answer: C

Explanation:
Transactional workloads support daily operations such as orders, payments, and updates.


Question 10

An e-commerce application must confirm orders instantly and ensure inventory counts are always correct.

Which workload type does this describe?

A. Analytical
B. Batch
C. Streaming
D. Transactional

Answer: D

Explanation:
Real-time order processing with consistency requirements is transactional.


✅ Exam Tips for Transactional Workloads

For DP-900, remember:

✔ Focus on real-time operational processing
✔ Think OLTP
✔ Many small reads/writes
✔ ACID compliance
✔ Low latency + strong consistency
✔ Typically normalized schemas
✔ Azure SQL Database is the classic example


Go to the DP-900 Exam Prep Hub main page.

Describe Types of Databases (DP-900 Exam Prep)

This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. 
This topic falls under these sections:
Describe core data concepts (25–30%)
--> Identify options for data storage
--> Describe types of databases


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Databases are systems that store and manage data so applications can retrieve, update, and organize it efficiently. For DP-900, you should be familiar with the major types of databases, how they differ, and common use cases — especially in relation to Azure services.


What Is a Database?

A database is an organized collection of data that enables efficient access, management, and update of information. Databases may differ in how they model, structure, and query data depending on the data type, scale, and workload requirements.


Primary Types of Databases

At a high level, databases fall into two broad categories:

  1. Relational Databases
  2. Non-Relational Databases (NoSQL)

1. Relational Databases

Relational databases (RDBMS) store data in tables with rows and columns.

Key Features

  • Structured schema: Tables have defined columns with data types.
  • Relationships: Tables can be linked using keys (e.g., primary and foreign keys).
  • SQL Queries: Use Structured Query Language (SQL) to retrieve and manipulate data.
  • ACID transactions: Support atomicity, consistency, isolation, and durability for reliable data operations.

When to Use

  • Applications requiring strong data integrity
  • Banking, accounting, inventory systems
  • Workloads where relationships among data matter

Examples

  • Azure SQL Database
  • Azure Database for PostgreSQL
  • Azure Database for MySQL
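The key-based relationships and integrity enforcement described above can be sketched with Python's built-in sqlite3; the Customer and CustomerOrder tables are hypothetical.

```python
import sqlite3

# Illustrative relational schema: customers and orders linked by a foreign key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE Customer (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute(
    """CREATE TABLE CustomerOrder (
           OrderID INTEGER PRIMARY KEY,
           CustomerID INTEGER REFERENCES Customer(CustomerID),
           Amount REAL)"""
)
conn.execute("INSERT INTO Customer VALUES (1, 'John')")
conn.execute("INSERT INTO CustomerOrder VALUES (100, 1, 50.0)")

# An order pointing at a nonexistent customer violates referential integrity,
# so the database rejects it rather than storing inconsistent data.
try:
    conn.execute("INSERT INTO CustomerOrder VALUES (101, 99, 25.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```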

2. Non-Relational Databases (NoSQL)

Non-relational databases, often called NoSQL databases, store data in ways that differ from traditional tables. They are generally schema-less and more flexible, which helps with scalability and handling varied data types.

Key Characteristics

  • No fixed schema
  • Designed for horizontal scaling and large data volumes
  • Support for semi-structured and unstructured data
  • Often optimized for specific access patterns

The most common NoSQL models include:


a. Key-Value Databases

Key-value stores are the simplest type of NoSQL database.

  • Data stored as pairs: key (identifier) and value (data).
  • Efficient for simple lookups when the key is known.

Use cases: Session state, caching, user preferences.
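The key-value pattern can be sketched with a plain Python dict standing in for the store; the keys and values below are illustrative.

```python
import json

store = {}  # stand-in for a key-value database or cache

def put(key, value):
    # The value is stored as an opaque blob; the store never inspects it.
    store[key] = json.dumps(value)

def get(key):
    raw = store.get(key)
    return json.loads(raw) if raw is not None else None

# One fast lookup by a known key — the access pattern key-value stores optimize.
put("session:42", {"user": "maria", "cart_items": 3})
print(get("session:42"))
```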


b. Document Databases

Document databases store data as documents, typically in JSON format.

  • Each document is a self-describing object with a unique ID.
  • Supports nested fields and flexible attributes.

Use cases: Content management, user profiles, web apps.


c. Column-Family (Wide-Column) Databases

Column-family databases use tables with column families — groups of related columns that can vary by row.

  • Designed for wide tables where columns are sparse.
  • Good for distributed data and analytical workloads.

Use cases: Time-series data, analytics, event logging.


d. Graph Databases

Graph databases focus on relationships between data elements.

  • Use nodes (entities) and edges (relationships).
  • Optimized for queries involving deep connections (e.g., social networks).

Use cases: Recommendation engines, fraud detection, network analysis.
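A nodes-and-edges sketch in pure Python, using an adjacency dict and a breadth-first traversal; the node names are illustrative, not from any graph database API.

```python
from collections import deque

# Nodes are people/products; edges are "bought" or "knows" relationships.
edges = {
    "alice": ["bob", "laptop"],
    "bob": ["headphones"],
    "laptop": [],
    "headphones": [],
}

def reachable(start):
    """Breadth-first traversal: all nodes connected to `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# A recommendation-style query: everything linked to alice, directly or not.
print(sorted(reachable("alice") - {"alice"}))
```

Graph databases make exactly this kind of multi-hop relationship query cheap, where a relational system would need repeated self-joins.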


Relational vs Non-Relational: A Quick Comparison

Feature | Relational | Non-Relational (NoSQL)
Schema | Fixed | Flexible / Schema-less
Data Model | Tables | Varies (documents, keys, graphs)
Query Language | SQL | Varies by database
Scalability | Vertical scaling | Horizontal scaling
Typical Use | Strong consistency & relationships | Large, evolving, semi/unstructured data

How Azure Supports These Databases

Relational Database Services

Azure provides managed relational services:

  • Azure SQL Database: Managed SQL service
  • Azure Database for MySQL and PostgreSQL: Managed open-source options

These are ideal for structured data and transactional workloads.


Non-Relational Database Services

Azure supports NoSQL and other flexible databases:

  • Azure Cosmos DB: A globally distributed, multi-model NoSQL database service that supports document, key-value, column-family, and graph models.

This makes Cosmos DB unique in supporting multiple non-relational data models from a single service.


Why Understanding Types of Databases Matters for DP-900

On the DP-900 exam, you may be asked to:

  • Classify a database type based on a description of its structure.
  • Choose the best database model for a given business scenario.
  • Identify Azure services that match a database type.

Knowing relational vs non-relational databases, and the sub-types of NoSQL models, will help you answer these questions with confidence.


Summary — Exam-Relevant Takeaways

Relational databases store structured data using tables, enforce schemas, and use SQL.
NoSQL databases store non-relational data and include key-value, document, column-family, and graph types.
Azure SQL Database and open-source relational offerings support structured workloads.
Azure Cosmos DB supports multiple non-relational models for schema-flexible data.


Go to the Practice Exam Questions for this topic.

Go to the DP-900 Exam Prep Hub main page.

Practice Questions: Describe Types of Databases (DP-900 Exam Prep)

Practice Questions


Question 1

You need to store customer orders in tables with fixed columns and enforce relationships between customers and orders.

Which type of database should you use?

A. Graph
B. Document
C. Relational
D. Key-value

Answer: C

Explanation:
Relational databases store structured data in tables with defined schemas and support relationships via keys.


Question 2

Which characteristic best describes a relational database?

A. Schema-less storage
B. Data stored as JSON documents
C. Tables with rows and columns
D. Nodes and edges

Answer: C

Explanation:
Relational databases organize data into tables (rows and columns) and use SQL for querying.


Question 3

An application must store user profiles in flexible JSON documents where each user may have different attributes.

Which database type is most appropriate?

A. Column-family
B. Document
C. Relational
D. Graph

Answer: B

Explanation:
Document databases store data as JSON-like documents and allow flexible schemas — ideal for user profiles.


Question 4

Which Azure service supports multiple NoSQL data models such as Core (SQL) API, Table API, Cassandra API, and Gremlin API?

A. Azure SQL Database
B. Azure Table Storage
C. Azure Cosmos DB
D. Azure Database for PostgreSQL

Answer: C

Explanation:
Azure Cosmos DB is a globally distributed, multi-model NoSQL database service.


Question 5

You are designing a recommendation engine that analyzes relationships between users and products.

Which database type is best suited?

A. Relational
B. Key-value
C. Graph
D. Column-family

Answer: C

Explanation:
Graph databases specialize in relationship-heavy data using nodes and edges.


Question 6

Which statement about NoSQL databases is TRUE?

A. They always require fixed schemas
B. They primarily use SQL
C. They are optimized for horizontal scaling
D. They cannot store structured data

Answer: C

Explanation:
NoSQL databases are designed for horizontal scaling and flexible schemas.


Question 7

You need extremely fast lookups using a unique identifier, and the data structure is simple.

Which NoSQL model should you choose?

A. Document
B. Graph
C. Column-family
D. Key-value

Answer: D

Explanation:
Key-value databases store data as key/value pairs and provide very fast retrieval.


Question 8

Which Azure service is best suited for structured transactional workloads using SQL?

A. Azure Blob Storage
B. Azure Cosmos DB
C. Azure SQL Database
D. Azure Data Lake Storage

Answer: C

Explanation:
Azure SQL Database is a managed relational database service optimized for structured transactional data.


Question 9

Which feature is typically associated with relational databases but not guaranteed in NoSQL systems?

A. Global distribution
B. Flexible schemas
C. ACID transactions
D. Horizontal scaling

Answer: C

Explanation:
Relational databases traditionally provide full ACID transaction support.


Question 10

A company collects massive volumes of time-series telemetry data where columns may vary across rows.

Which database type fits this scenario best?

A. Relational
B. Document
C. Column-family
D. Graph

Answer: C

Explanation:
Column-family (wide-column) databases are well suited for large, sparse datasets such as time-series data.


✅ Key Exam Reminders

For DP-900, make sure you can confidently:

  • Distinguish relational vs non-relational
  • Recognize NoSQL models (key-value, document, column-family, graph)
  • Match Azure services to database types (especially Azure SQL vs Azure Cosmos DB)
  • Choose the right database type for a scenario

Go to the DP-900 Exam Prep Hub main page.

Describe Common Formats for Data Files (DP-900 Exam Prep)

This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. 
This topic falls under these sections:
Describe core data concepts (25–30%)
--> Identify options for data storage
--> Describe common formats for data files


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

In DP-900, Microsoft expects you to understand common data file formats, what type of data they typically store (structured, semi-structured, or unstructured), and why certain formats are used in analytics and Azure storage scenarios.

This topic connects directly to Azure Blob Storage, Azure Data Lake Storage, and analytics pipelines.


Why Data File Formats Matter

Data file formats define:

  • How data is organized inside a file
  • Whether the data is human-readable or binary
  • How efficiently it can be stored and queried
  • Which tools and services can process it

Choosing the right format impacts:

  • Performance
  • Storage cost
  • Analytics capabilities
  • Interoperability between systems

For DP-900, focus on understanding what each format is used for, not deep implementation details.


Common Data File Formats You Should Know

1. CSV (Comma-Separated Values)

CSV is one of the simplest and most widely used formats for structured data.

Key Characteristics

  • Plain text
  • Each row represents a record
  • Columns separated by commas (or other delimiters)
  • No embedded schema
  • Human readable

Example:

CustomerID,Name,City
1,John,Seattle
2,Maria,Austin

Typical Use Cases

  • Data exports and imports
  • Simple datasets
  • Spreadsheet interoperability

Exam Notes

  • Represents structured data
  • Lightweight and easy to move between systems
  • No support for nested structures or data types
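Parsing the sample above with Python's standard csv module shows both the simplicity and the "no data types" limitation — every value arrives as a string.

```python
import csv
import io

# The sample CSV from this section, as a string instead of a file.
text = "CustomerID,Name,City\n1,John,Seattle\n2,Maria,Austin\n"

# DictReader uses the header row as keys; all values come back as strings,
# because CSV carries no schema or type information.
rows = list(csv.DictReader(io.StringIO(text)))
print(rows[0])  # {'CustomerID': '1', 'Name': 'John', 'City': 'Seattle'}
```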

2. JSON (JavaScript Object Notation)

JSON is the most common format for semi-structured data, especially in modern applications and APIs.

Key Characteristics

  • Key–value pairs
  • Supports nested objects and arrays
  • Self-describing
  • Human readable
  • Schema-on-read

Example:

{
  "CustomerID": 1,
  "Name": "John",
  "Orders": [
    { "OrderID": 100, "Amount": 50 }
  ]
}

Typical Use Cases

  • Web APIs
  • Application data
  • Azure Cosmos DB documents
  • Logs and telemetry

Exam Notes

  • Represents semi-structured data
  • Flexible schema
  • Commonly used with Azure Cosmos DB and Azure Data Lake
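Parsing the document above with Python's standard json module shows the self-describing, nested structure in action: field names travel with the values, and the nested array needs no extra tables.

```python
import json

doc = json.loads("""
{
  "CustomerID": 1,
  "Name": "John",
  "Orders": [
    { "OrderID": 100, "Amount": 50 }
  ]
}
""")

print(doc["Name"])                 # top-level field
print(doc["Orders"][0]["Amount"])  # nested value inside the array
```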

3. XML (Extensible Markup Language)

XML is another semi-structured format that uses tags to describe data.

Key Characteristics

  • Tag-based hierarchy
  • Supports nested structures
  • Human readable but verbose
  • Self-describing

Example:

<Customer>
  <CustomerID>1</CustomerID>
  <Name>John</Name>
</Customer>

Typical Use Cases

  • Legacy systems
  • Configuration files
  • Enterprise data exchange

Exam Notes

  • Semi-structured
  • Less common than JSON in modern Azure solutions
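The sample above can be parsed with Python's standard xml.etree.ElementTree module, showing the tag-based hierarchy:

```python
import xml.etree.ElementTree as ET

xml_text = """
<Customer>
  <CustomerID>1</CustomerID>
  <Name>John</Name>
</Customer>
"""

# Parse the document; tags describe the data, like keys in JSON.
root = ET.fromstring(xml_text)
print(root.tag)               # Customer
print(root.findtext("Name"))  # John
```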

4. Parquet

Parquet is a columnar, binary file format optimized for analytics workloads.

Key Characteristics

  • Column-based storage
  • Highly compressed
  • Not human readable
  • Very fast for analytical queries

Typical Use Cases

  • Big data analytics
  • Azure Synapse Analytics
  • Azure Data Lake Storage

Exam Notes

  • Used for large analytical datasets
  • Optimized for performance and storage efficiency
  • Common in modern data engineering pipelines

5. Avro

Avro is a binary format designed for data serialization and streaming.

Key Characteristics

  • Compact binary format
  • Includes schema with the data
  • Efficient for data movement
  • Not human readable

Typical Use Cases

  • Data pipelines
  • Event streaming
  • Big data ingestion

Exam Notes

  • Often used behind the scenes in analytics platforms
  • Supports schema evolution

6. Plain Text Files

Simple text files may also be used to store unstructured or loosely structured data.

Examples

  • Log files
  • Notes
  • Raw exports

Exam Notes

  • Usually treated as unstructured data
  • Stored in Azure Blob Storage or Data Lake

How These Formats Map to Data Types

This mapping is important for DP-900 questions:

Format | Data Type
CSV | Structured
JSON | Semi-structured
XML | Semi-structured
Parquet | Structured / Analytics
Avro | Semi-structured
TXT | Unstructured

Where These Formats Are Stored in Azure

You’ll commonly see these formats stored in:

Azure Blob Storage

  • Primary storage for files
  • Supports all formats (CSV, JSON, Parquet, images, etc.)
  • Used for unstructured and semi-structured data

Azure Data Lake Storage Gen2

  • Built on Blob Storage
  • Optimized for analytics
  • Common for Parquet and Avro files
  • Used with Azure Synapse and Azure Data Factory

Why This Matters for DP-900

On the exam, file formats typically appear in scenarios like:

  • Choosing storage for CSV or JSON files
  • Identifying formats used in analytics pipelines
  • Recognizing Parquet in big data workloads
  • Distinguishing structured vs semi-structured file types

You’re expected to understand purpose and characteristics, not internal file mechanics.


Summary — Exam-Relevant Takeaways

For DP-900, remember:

✔ CSV → structured, simple, text-based
✔ JSON / XML → semi-structured, flexible, self-describing
✔ Parquet → columnar, compressed, analytics-optimized
✔ Avro → binary, schema included, streaming-friendly
✔ TXT → unstructured

And:

  • These formats are commonly stored in Azure Blob Storage or Azure Data Lake Storage
  • Analytics formats (Parquet/Avro) are used with Azure Synapse and big data workloads

Go to the Practice Exam Questions for this topic.

Go to the DP-900 Exam Prep Hub main page.

Describe Features of Semi-Structured Data (DP-900 Exam Prep)

This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. 
This topic falls under these sections:
Describe core data concepts (25–30%)
--> Describe ways to represent data
--> Describe features of semi-structured data


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.

Introduction

For the DP-900 exam, semi-structured data sits between structured and unstructured data. You’re expected to understand what it is, how it’s organized, and why Azure provides specialized services to store and query it.


What Is Semi-Structured Data?

Semi-structured data is data that does not follow a rigid, tabular schema like relational data, but still contains organizational markers or tags that make it partially structured and machine readable.

Unlike structured data (rows and columns), semi-structured data:

  • Does not require a predefined schema
  • Can vary in shape from record to record
  • Still contains self-describing elements such as key–value pairs or hierarchical structures

In other words, semi-structured data has some structure — just not fixed tables.

Common examples include:

  • JSON documents
  • XML files
  • YAML
  • Avro / Parquet (used in analytics pipelines)

Key Features of Semi-Structured Data

1. Schema-on-Read (Not Schema-on-Write)

One of the most important characteristics of semi-structured data is schema-on-read.

This means:

  • Data is stored without enforcing a strict schema
  • Structure is interpreted when the data is queried or analyzed

This contrasts with structured data, which uses schema-on-write, where structure must be defined before data is inserted.

For DP-900, remember:

Semi-structured data is flexible at ingestion time and structured at query time.
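A schema-on-read sketch in Python: records are ingested as raw JSON strings with no schema enforced, and structure is applied only at query time. The field names are hypothetical.

```python
import json

# Ingestion: raw lines are stored as-is, even though their shapes differ.
raw_lines = [
    '{"device": "sensor-1", "temp": 72}',
    '{"device": "sensor-2", "temp": 68, "humidity": 40}',  # extra field is fine
]

# Read time: the "schema" lives in the query, not in the storage layer.
readings = [json.loads(line) for line in raw_lines]
temps = [r["temp"] for r in readings if "temp" in r]
print(temps)  # [72, 68]
```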


2. Flexible and Evolving Structure

Each record in a semi-structured dataset can contain:

  • Different fields
  • Nested objects
  • Optional attributes

Example (JSON):

{
  "CustomerID": 123,
  "Name": "Sarah",
  "Orders": [
    { "OrderID": 1, "Amount": 50 },
    { "OrderID": 2, "Amount": 75 }
  ]
}

Another record in the same dataset might include extra fields like Email or omit Orders entirely.

This flexibility makes semi-structured data ideal for:

  • Application telemetry
  • IoT data
  • User activity logs
  • Rapidly changing systems
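Handling these varying shapes in code is straightforward with defaults for optional fields; the records below follow the example above, with the second one adding Email and omitting Orders.

```python
# Two records in the same dataset with different shapes.
customers = [
    {"CustomerID": 123, "Name": "Sarah",
     "Orders": [{"OrderID": 1, "Amount": 50}, {"OrderID": 2, "Amount": 75}]},
    {"CustomerID": 124, "Name": "Lee", "Email": "lee@example.com"},  # no Orders
]

# dict.get with a default tolerates the optional Orders field.
order_counts = {c["Name"]: len(c.get("Orders", [])) for c in customers}
print(order_counts)  # {'Sarah': 2, 'Lee': 0}
```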

3. Hierarchical or Nested Organization

Semi-structured data often uses hierarchies rather than flat tables.

For example:

  • JSON objects inside objects
  • XML elements within elements

This nested design allows complex relationships to exist inside a single document — something that would require multiple tables in relational systems.


4. Self-Describing Format

Semi-structured data embeds its own metadata using:

  • Keys
  • Tags
  • Field names

This makes the data self-describing, meaning applications can understand what each value represents without relying on an external schema definition.

Example:

"Temperature": 72

The key itself describes the value.


5. Easily Transported Across Systems

Semi-structured formats such as JSON and XML are:

  • Human readable
  • Platform independent
  • Widely supported across APIs and applications

This is why most modern web services exchange data using JSON.


Common Formats of Semi-Structured Data

You should recognize these for DP-900:

Format | Description
JSON | Most common format for APIs and applications
XML | Tag-based hierarchical format
YAML | Human-friendly configuration format
Avro / Parquet | Binary formats used in analytics pipelines

Where Semi-Structured Data Is Used in Azure

Microsoft Azure provides specialized services designed to handle semi-structured data:

Azure Cosmos DB

  • Stores JSON documents
  • Supports schema-less designs
  • Designed for globally distributed applications
  • Optimized for flexible data models

Azure Data Lake Storage

  • Stores large volumes of semi-structured files
  • Used in analytics pipelines
  • Often paired with Azure Synapse or Azure Data Factory

These services are built specifically for workloads where structure changes frequently or cannot be fully defined in advance.


Why Semi-Structured Data Matters for DP-900

Understanding semi-structured data helps you:

  • Distinguish it from relational (structured) data
  • Identify appropriate Azure services (especially Cosmos DB)
  • Understand modern application and analytics architectures

On the exam, you’ll typically see semi-structured data appear in scenarios involving:

  • JSON documents
  • Application telemetry
  • IoT data
  • Log files

Structured vs Semi-Structured (Quick Comparison)

  Structured            Semi-Structured
  Fixed schema          Flexible schema
  Rows and columns      Documents / nested objects
  Schema-on-write       Schema-on-read
  SQL databases         Document databases
  Highly consistent     Shape varies by record

Summary — Exam-Relevant Takeaways

For DP-900, remember:

✔ Semi-structured data has no fixed schema
✔ Uses schema-on-read
✔ Supports nested and hierarchical structures
✔ Common formats: JSON, XML
✔ Often stored in Azure Cosmos DB or Data Lake
✔ Ideal for rapidly changing or document-based data


Go to the Practice Exam Questions for this topic.

Go to the DP-900 Exam Prep Hub main page.

Practice Questions: Describe Common Formats for Data Files (DP-900 Exam Prep)

Practice Questions


Question 1

Which file format is most commonly used to store simple structured data in a plain-text, tabular form?

A. JSON
B. Parquet
C. CSV
D. Avro

Answer: C

Explanation:
CSV (Comma-Separated Values) stores structured data as rows and columns in plain text and is widely used for data exchange.


Question 2

Which format is most associated with semi-structured data and commonly used by web APIs?

A. CSV
B. JSON
C. TXT
D. JPEG

Answer: B

Explanation:
JSON uses key–value pairs and nested objects, making it ideal for semi-structured application data and APIs.


Question 3

A data engineering team needs a highly compressed, column-based file format optimized for analytics queries in Azure Synapse. Which format should they use?

A. XML
B. CSV
C. Parquet
D. TXT

Answer: C

Explanation:
Parquet is a columnar, binary format designed for high-performance analytics and efficient storage.


Question 4

Which file format is tag-based, verbose, and commonly seen in legacy systems?

A. JSON
B. XML
C. Avro
D. CSV

Answer: B

Explanation:
XML is a semi-structured, tag-based format often used in older enterprise systems and integrations.


Question 5

Which format is binary, includes schema information, and is commonly used in streaming or ingestion pipelines?

A. CSV
B. JSON
C. Avro
D. TXT

Answer: C

Explanation:
Avro is a compact binary format that embeds schema and supports schema evolution, making it suitable for pipelines and streaming.


Question 6

A company stores application logs as JSON files in Azure Data Lake Storage. What type of data is this?

A. Structured
B. Semi-structured
C. Unstructured
D. Relational

Answer: B

Explanation:
JSON represents semi-structured data because it uses keys and nested structures but does not enforce a fixed schema.


Question 7

Which format is most appropriate for exchanging small datasets between systems and opening directly in Excel?

A. Parquet
B. Avro
C. CSV
D. XML

Answer: C

Explanation:
CSV is lightweight, human readable, and easily opened in spreadsheet tools like Excel.


Question 8

Which Azure service is most commonly used to store files such as CSV, JSON, Parquet, images, and videos?

A. Azure SQL Database
B. Azure Cosmos DB
C. Azure Blob Storage
D. Azure Table Storage

Answer: C

Explanation:
Azure Blob Storage is Azure’s primary service for storing files of all formats, including structured, semi-structured, and unstructured data.


Question 9

Which format is not human readable and primarily optimized for analytics workloads?

A. CSV
B. JSON
C. Parquet
D. XML

Answer: C

Explanation:
Parquet is a binary format optimized for performance and compression, not human readability.


Question 10

Match the format to the most appropriate data type:

Which pairing is correct?

A. CSV → Unstructured
B. JSON → Structured
C. TXT → Semi-structured
D. Parquet → Structured / Analytics

Answer: D

Explanation:
Parquet is commonly used for structured analytical datasets in big data and Azure analytics workloads.


✅ Quick Exam Takeaways

For DP-900, remember:

  • CSV → Structured, plain text
  • JSON / XML → Semi-structured
  • Parquet → Columnar, analytics-optimized
  • Avro → Binary, schema included, pipeline-friendly
  • TXT → Usually unstructured

And:

  • These formats typically live in Azure Blob Storage or Azure Data Lake Storage
  • Parquet and Avro are common in analytics and data engineering pipelines
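To make the CSV-vs-JSON contrast concrete, here is a small sketch (hypothetical order data) using only Python's standard csv and json modules:

```python
import csv
import io
import json

# One hypothetical order record.
order = {"OrderId": 1001, "Customer": "Avery", "Total": 25.48}

# CSV: flat, plain-text rows and columns -- structured, but no nesting.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["OrderId", "Customer", "Total"])
writer.writeheader()
writer.writerow(order)
csv_text = buffer.getvalue()

# JSON: keys travel with the values, and nested objects are allowed.
json_text = json.dumps({**order, "Items": [{"Sku": "A1", "Qty": 2}]})

print(csv_text.splitlines()[0])  # OrderId,Customer,Total
```

The CSV version could be opened directly in Excel; the JSON version can carry the nested Items list that CSV cannot express in a single row.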

Go to the DP-900 Exam Prep Hub main page.

Describe Features of Unstructured Data (DP-900 Exam Prep)

This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. 
This topic falls under these sections:
Describe core data concepts (25–30%)
--> Describe ways to represent data
--> Describe features of unstructured data


Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Two practice tests with 60 questions each are also available on the hub, below the exam topics section.

Introduction

For the DP-900 exam, unstructured data represents the opposite end of the data spectrum from structured data. You’re expected to understand what unstructured data is, its defining characteristics, and how Azure typically stores and works with it.


What Is Unstructured Data?

Unstructured data is data that does not follow a predefined data model or schema and does not naturally fit into rows and columns.

Unlike structured or semi-structured data:

  • There is no inherent schema
  • There are no consistent fields or attributes
  • The meaning of the data is not directly machine-readable without additional processing

Common examples include:

  • Text documents (Word, PDF, emails)
  • Images
  • Audio files
  • Video files
  • Social media posts
  • Free-form text

In short:

Unstructured data is raw content without built-in organization.


Key Features of Unstructured Data

1. No Predefined Schema

Unstructured data has no fixed structure at all.

There are:

  • No columns
  • No rows
  • No data types
  • No enforced fields

Each file or object stands alone, and systems do not inherently understand its internal meaning.

For DP-900, remember:

Unstructured data uses neither schema-on-write nor schema-on-read by default.

Any structure must be created later using analytics or AI tools.


2. Human-Readable, Not Machine-Optimized

Unstructured data is usually created for human consumption, not database processing.

Examples:

  • A photo is meant to be viewed
  • A video is meant to be watched
  • A document is meant to be read

Computers cannot easily extract meaning from this data without:

  • AI
  • Machine learning
  • Text analytics
  • Computer vision

3. Stored as Files or Binary Objects

Unstructured data is typically stored as files or blobs, rather than database records.

Each item is treated as a complete object, such as:

  • image.jpg
  • recording.mp3
  • report.pdf

There is no inherent relationship between files unless you explicitly create one.
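A quick sketch of that opacity: to the storage layer, a "document" is just a named byte stream (a local file stands in for a blob here):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as folder:
    # The filename hints at a PDF, but storage neither knows nor cares.
    blob = Path(folder) / "report.pdf"
    blob.write_bytes(b"%PDF-1.7 raw binary content")

    raw = blob.read_bytes()

# No rows, no columns, no fields -- just bytes until some tool interprets them.
print(type(raw).__name__, len(raw))  # bytes 27
```

Any meaning (title, author, page text) must be extracted later by a tool that understands the file format, which is exactly the "specialized processing" described in the next section.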


4. Requires Specialized Processing

To analyze unstructured data, you generally need advanced tools such as:

  • Natural language processing (for text)
  • Image recognition
  • Speech-to-text
  • AI models

This is very different from structured data, where SQL alone is often sufficient.


5. Extremely Large Volume

Unstructured data typically represents the majority of enterprise data.

Examples include:

  • Video archives
  • Image repositories
  • Document libraries
  • Application-generated media

This makes scalability and low-cost storage especially important.


Where Unstructured Data Is Stored in Azure

Azure provides services specifically designed for unstructured data:

Azure Blob Storage

  • Primary Azure service for unstructured data
  • Stores images, videos, documents, backups, etc.
  • Highly scalable and cost-effective
  • Treats data as binary large objects (blobs)

Azure Data Lake Storage Gen2

  • Built on Blob Storage
  • Optimized for analytics workloads
  • Commonly used when unstructured data feeds big data or AI pipelines

For DP-900 purposes:

  • Azure Blob Storage = core unstructured storage
  • Azure Data Lake Storage = analytics-oriented unstructured storage

Common Use Cases for Unstructured Data

You’ll typically see unstructured data in scenarios involving:

  • Media content (photos, videos)
  • Document management systems
  • Call recordings
  • Social media data
  • Machine learning datasets

These workloads focus on storage and later interpretation, rather than immediate querying.


How Unstructured Differs from Semi-Structured

It’s important not to confuse these two:

  Semi-Structured                Unstructured
  Has tags or keys (JSON/XML)    No internal structure
  Schema-on-read                 No schema
  Machine readable               Human readable
  Cosmos DB / Data Lake          Blob Storage / Data Lake
  Nested fields                  Raw files

JSON logs = semi-structured
PDF documents = unstructured

This distinction shows up frequently in DP-900 questions.
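A tiny sketch of the difference, using hypothetical data and only Python's standard library: a JSON log line parses into keyed fields, while raw document bytes do not.

```python
import json

# Semi-structured: the log line carries its own keys.
log_line = '{"level": "ERROR", "service": "checkout", "message": "timeout"}'
parsed = json.loads(log_line)
print(parsed["level"])  # ERROR

# Unstructured: raw document bytes expose no keys or tags to parse.
document_bytes = b"%PDF-1.7 raw binary document content"
try:
    json.loads(document_bytes)
except ValueError:
    print("no internal structure to read")
```

The log line is immediately machine readable; the document bytes need a format-aware tool (or AI service) before any field-level analysis is possible.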


Why Unstructured Data Matters for DP-900

Understanding unstructured data helps you:

  • Identify appropriate Azure storage services
  • Recognize when SQL is not suitable
  • Understand modern data pipelines involving AI and analytics

On the exam, unstructured data usually appears in questions involving:

  • Images
  • Videos
  • Documents
  • Blob Storage

Summary — Exam-Relevant Takeaways

For DP-900, remember:

✔ Unstructured data has no predefined schema
✔ Stored as files or blobs, not tables
✔ Not directly queryable with SQL
✔ Requires AI or analytics tools for insight
✔ Common Azure services: Azure Blob Storage, Azure Data Lake Storage
✔ Examples: images, videos, PDFs, audio, free-form text


Go to the Practice Exam Questions for this topic.

Go to the DP-900 Exam Prep Hub main page.

Practice Questions: Describe Features of Unstructured Data (DP-900 Exam Prep)

Practice Questions


Question 1

Which statement best describes unstructured data?

A. Data organized in rows and columns
B. Data with flexible key–value pairs
C. Data without a predefined schema or consistent structure
D. Data stored only in relational databases

Answer: C

Explanation:
Unstructured data has no predefined schema and does not naturally fit into tables.


Question 2

Which of the following is an example of unstructured data?

A. A customer table in Azure SQL Database
B. A JSON document
C. A PDF document
D. A CSV file

Answer: C

Explanation:
PDF documents are classic unstructured data. JSON is semi-structured, and CSV is structured.


Question 3

Which Azure service is primarily used to store unstructured data such as images and videos?

A. Azure SQL Database
B. Azure Cosmos DB
C. Azure Blob Storage
D. Azure Table Storage

Answer: C

Explanation:
Azure Blob Storage is Azure’s primary service for storing unstructured data like media files and documents.


Question 4

Why can’t unstructured data typically be queried directly using SQL?

A. SQL is deprecated
B. Unstructured data lacks a schema
C. SQL only works on cloud platforms
D. Unstructured data is encrypted

Answer: B

Explanation:
SQL relies on schemas and tables. Unstructured data has no inherent structure, so it requires additional processing before analysis.


Question 5

Which workload most commonly generates unstructured data?

A. Financial transaction systems
B. Inventory databases
C. Media content platforms
D. Payroll systems

Answer: C

Explanation:
Media platforms generate images, videos, and audio — all unstructured data.


Question 6

How is unstructured data typically stored?

A. As relational records
B. As nested documents
C. As files or binary objects
D. As key–value pairs

Answer: C

Explanation:
Unstructured data is stored as files or blobs, not rows or documents.


Question 7

Which capability is commonly required to extract meaning from unstructured text data?

A. SQL joins
B. Index clustering
C. Natural language processing
D. Primary keys

Answer: C

Explanation:
Unstructured text requires NLP or AI techniques to derive insights.


Question 8

Which statement correctly compares unstructured and semi-structured data?

A. Both require fixed schemas
B. Semi-structured data has no internal organization
C. Unstructured data contains embedded keys
D. Semi-structured data is machine readable, while unstructured data typically is not

Answer: D

Explanation:
Semi-structured data (like JSON) contains keys/tags, while unstructured data does not.


Question 9

A company stores call recordings and scanned documents for compliance. What type of data is this?

A. Structured
B. Semi-structured
C. Unstructured
D. Relational

Answer: C

Explanation:
Audio files and scanned documents are unstructured data.


Question 10

Which is a key characteristic of unstructured data?

A. Strong data typing
B. Fixed schema
C. Hierarchical documents
D. Requires AI or analytics tools for interpretation

Answer: D

Explanation:
Unstructured data typically needs AI, machine learning, or analytics tools (such as computer vision or text analytics) to extract meaning.


✅ Quick Exam Takeaways

For DP-900, remember:

  • Unstructured data has no schema
  • Stored as files/blobs
  • Not directly queryable with SQL
  • Requires AI or analytics for insight
  • Common Azure service: Azure Blob Storage
  • Examples: images, videos, PDFs, audio, free-form text

Go to the DP-900 Exam Prep Hub main page.