This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. This topic falls under these sections: Identify considerations for relational data on Azure (20–25%) --> Describe relational concepts --> Describe normalization and why it is used
Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.
Normalization is a foundational concept in relational database design. For the DP-900 exam, you are expected to understand what normalization is, why it is important, and how it improves data quality and efficiency.
What Is Normalization?
Normalization is the process of organizing data in a relational database to:
Reduce data redundancy (duplicate data)
Improve data integrity
Ensure logical data relationships
This is done by splitting data into multiple related tables and defining relationships between them using keys.
Why Normalization Is Used
Normalization is used to solve common data problems in poorly designed tables.
1. Reduce Data Redundancy
Without normalization, the same data may be repeated across multiple rows.
Example (Unnormalized Table):
| OrderID | CustomerName | CustomerCity | Product |
| --- | --- | --- | --- |
| 1 | John | Seattle | Laptop |
| 2 | John | Seattle | Mouse |
Here, customer information is duplicated.
After Normalization:
Customers Table
| CustomerID | Name | City |
| --- | --- | --- |
| 1 | John | Seattle |
Orders Table
| OrderID | CustomerID | Product |
| --- | --- | --- |
| 1 | 1 | Laptop |
| 2 | 1 | Mouse |
Now, customer data is stored once and referenced using a key.
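The normalized design can be expressed in SQL. The following is a minimal sketch; the table and column names follow the example above, while the data types are illustrative assumptions:

CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,            -- uniquely identifies each customer
    Name       VARCHAR(100) NOT NULL,
    City       VARCHAR(100)
);

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL
        REFERENCES Customers(CustomerID),  -- foreign key: each order points to one customer
    Product    VARCHAR(100) NOT NULL
);

A join then reassembles the original view when needed:

SELECT o.OrderID, c.Name, c.City, o.Product
FROM Orders AS o
JOIN Customers AS c ON o.CustomerID = c.CustomerID;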
2. Improve Data Integrity
Normalization ensures that data remains accurate and consistent.
Without normalization:
Updating a customer’s city requires changing multiple rows
Missing one update leads to inconsistent data
With normalization:
Data is updated in one place only
Consistency is maintained automatically
3. Prevent Data Anomalies
Normalization helps prevent common issues:
Insert anomaly: new data cannot be added without also supplying unrelated data
Update anomaly: the same fact must be changed in many rows, so updates can become inconsistent
Delete anomaly: deleting one record unintentionally removes other important data
Normalized designs eliminate these problems.
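To make the anomalies concrete, suppose the unnormalized example above is a single table called OrdersFlat (a hypothetical name). A sketch of how each anomaly arises:

-- Update anomaly: John's city is repeated in every one of his order rows;
-- an UPDATE that misses a row leaves the data inconsistent.
UPDATE OrdersFlat SET CustomerCity = 'Portland' WHERE CustomerName = 'John';

-- Delete anomaly: deleting John's last remaining order also erases his name and city.
DELETE FROM OrdersFlat WHERE OrderID = 2;

-- Insert anomaly: there is no way to record a new customer until they place an order,
-- because customer details exist only inside order rows.

In the normalized design, each of these operations touches exactly one place.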
4. Improve Data Organization
Normalized databases:
Clearly separate different entities (customers, orders, products)
Use relationships to connect data logically
Make databases easier to maintain and scale
Understanding Normal Forms (Simplified for DP-900)
Normalization is often described in stages called normal forms. For DP-900, you only need a basic understanding:
First Normal Form (1NF)
No repeating groups or multi-valued fields
Each column contains atomic (single) values
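As a quick illustration of 1NF, consider a hypothetical Phones column holding several values in one field ('555-0100, 555-0199'). Moving those values into their own table makes each field atomic:

-- 1NF-compliant: one atomic phone value per row
CREATE TABLE CustomerPhones (
    CustomerID INT NOT NULL,
    Phone      VARCHAR(20) NOT NULL,
    PRIMARY KEY (CustomerID, Phone)   -- a customer can have many phones, each stored once
);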
Second Normal Form (2NF)
Meets 1NF
All non-key columns depend on the entire primary key
Third Normal Form (3NF)
Meets 2NF
No dependency between non-key columns
💡 DP-900 Tip: You do NOT need to memorize formal definitions — just understand that normalization reduces redundancy and improves integrity.
Trade-Offs of Normalization
While normalization has many benefits, there are trade-offs:
Advantages
✔ Reduces duplicate data
✔ Improves consistency
✔ Simplifies updates
✔ Enhances data integrity
Disadvantages
✖ Requires more tables
✖ Queries may require joins
✖ Can slightly impact performance for complex queries
Normalization vs Denormalization
Understanding this comparison is important for the exam:
| Feature | Normalization | Denormalization |
| --- | --- | --- |
| Data Redundancy | Reduced | Increased |
| Data Integrity | High | Lower |
| Query Complexity | Higher (joins required) | Lower |
| Performance | Slower for reads | Faster for analytics |
| Use Case | Transactional systems (OLTP) | Analytical systems (OLAP) |
Where Normalization Is Used in Azure
Normalization is commonly applied in relational database services such as:
Azure SQL Database
Azure Database for PostgreSQL
Azure Database for MySQL
These services are typically used for transactional workloads, where data integrity is critical.
Why This Matters for DP-900
On the exam, you may be asked to:
Identify why normalization is used
Recognize normalized vs unnormalized structures
Understand how normalization affects data integrity
Distinguish normalization from denormalization
Summary — Exam-Relevant Takeaways
✔ Normalization organizes data into multiple related tables
✔ It reduces data redundancy
✔ It improves data integrity and consistency
✔ It prevents insert, update, and delete anomalies
✔ It is commonly used in transactional (OLTP) systems
✔ It may require joins when querying data
Question 1
What is the primary purpose of normalization in a relational database?
A. Increase data duplication
B. Improve query speed for analytics
C. Reduce data redundancy and improve integrity
D. Store unstructured data
✅ Answer: C
Explanation: Normalization reduces duplicate data and improves consistency and integrity.
Question 2
Which issue is MOST likely to occur in a non-normalized table?
A. Faster query performance
B. Data redundancy
C. Reduced storage requirements
D. Simpler queries
✅ Answer: B
Explanation: Without normalization, the same data is often stored multiple times.
Question 3
A database stores customer details repeatedly in every order record. What problem does normalization solve in this scenario?
A. Improves indexing
B. Reduces redundancy
C. Enhances encryption
D. Increases data size
✅ Answer: B
Explanation: Normalization separates customer data into its own table to avoid duplication.
Question 4
Which anomaly occurs when updating the same data in multiple rows leads to inconsistent values?
A. Insert anomaly
B. Delete anomaly
C. Update anomaly
D. Query anomaly
✅ Answer: C
Explanation: Update anomalies happen when duplicate data is not consistently updated.
Question 5
Which approach is commonly used in normalized database design?
A. Storing all data in a single table
B. Splitting data into related tables
C. Removing all relationships between data
D. Using only unstructured formats
✅ Answer: B
Explanation: Normalization divides data into multiple related tables.
Question 6
Which statement best describes First Normal Form (1NF)?
A. Data must be encrypted
B. Tables must include foreign keys
C. Each column contains atomic (single) values
D. Tables must be denormalized
✅ Answer: C
Explanation: 1NF requires that each field contains indivisible values and no repeating groups.
Question 7
What is a trade-off of normalization?
A. Increased data redundancy
B. Simpler queries with no joins
C. More complex queries due to joins
D. Reduced data integrity
✅ Answer: C
Explanation: Normalized databases often require joins, making queries more complex.
Question 8
Which type of workload benefits MOST from normalized data?
A. Analytical workloads
B. Transactional workloads
C. Streaming workloads
D. Machine learning workloads
✅ Answer: B
Explanation: Transactional systems prioritize data integrity, which normalization supports.
Question 9
Which action is an example of normalization?
A. Combining multiple tables into one
B. Storing duplicate data for faster reads
C. Separating customer and order data into different tables
D. Removing relationships between tables
✅ Answer: C
Explanation: Normalization separates data into logical entities and links them via keys.
Question 10
Why might a system choose denormalization instead of normalization?
A. To improve data integrity
B. To reduce redundancy
C. To improve read performance for analytics
D. To enforce strict schemas
✅ Answer: C
Explanation: Denormalization is often used in analytical systems to improve query performance.
✅ Quick Exam Takeaways
For DP-900, remember:
✔ Normalization = reduce redundancy + improve integrity
✔ Involves splitting data into related tables
✔ Prevents:
Insert anomalies
Update anomalies
Delete anomalies
✔ Used primarily in transactional (OLTP) systems
✔ Trade-off = more joins / more complex queries
✔ Opposite concept = denormalization (used in analytics)
This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. This topic falls under these sections: Identify considerations for relational data on Azure (20–25%) --> Describe relational concepts --> Identify features of relational data
Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.
Relational data is one of the most fundamental concepts in data management and a core focus area for the DP-900 exam. Understanding how relational data is structured, stored, and accessed will help you confidently answer questions related to databases, querying, and Azure data services.
What Is Relational Data?
Relational data is data that is organized into tables (relations) consisting of:
Rows (records)
Columns (attributes or fields)
Each table represents a specific entity, such as customers, orders, or products. Relationships between tables are defined using keys.
Core Features of Relational Data
1. Tabular Structure (Rows and Columns)
Relational data is stored in a structured, tabular format:
Each row represents a single record
Each column represents a specific attribute
Example:
| CustomerID | Name | City |
| --- | --- | --- |
| 1 | John | Seattle |
| 2 | Maria | Austin |
This structure makes relational data easy to query and understand.
2. Predefined Schema
Relational databases enforce a fixed schema, which defines:
Table structure
Column names
Data types (e.g., INT, VARCHAR, DATE)
This ensures:
Data consistency
Data validation
Predictable structure
3. Use of Keys
Keys are essential for uniquely identifying records and linking tables.
Primary Key
Uniquely identifies each row in a table
Cannot contain duplicate or null values
Example: CustomerID
Foreign Key
Links one table to another
Establishes relationships between tables
Example: Order.CustomerID → Customer.CustomerID
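A brief T-SQL sketch of both key types, using the Customer/Order example (the column types are assumptions):

CREATE TABLE Customer (
    CustomerID INT PRIMARY KEY          -- primary key: unique, never null
);

CREATE TABLE [Order] (                  -- Order is a reserved word, hence the brackets
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL,
    CONSTRAINT FK_Order_Customer
        FOREIGN KEY (CustomerID) REFERENCES Customer(CustomerID)  -- links Order to Customer
);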
4. Relationships Between Tables
Relational data supports relationships such as:
One-to-One
One-to-Many
Many-to-Many
Example:
One customer can have many orders (one-to-many)
These relationships allow complex data models to be built efficiently.
5. Structured Query Language (SQL)
Relational data is accessed and manipulated using Structured Query Language (SQL).
SQL is used to:
Query data (SELECT)
Insert data (INSERT)
Update data (UPDATE)
Delete data (DELETE)
Example:
SELECT Name FROM Customers WHERE City = 'Seattle';
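For completeness, here are sketches of the other three statement types against the same Customers table (the values are made up for illustration):

INSERT INTO Customers (CustomerID, Name, City) VALUES (3, 'Ana', 'Denver');
UPDATE Customers SET City = 'Portland' WHERE CustomerID = 3;
DELETE FROM Customers WHERE CustomerID = 3;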
6. Data Integrity and Constraints
Relational databases enforce data integrity through constraints such as:
PRIMARY KEY
FOREIGN KEY
NOT NULL
UNIQUE
CHECK
These rules ensure that:
Data is accurate
Relationships remain valid
Invalid data is prevented
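A short sketch showing several of these constraints in one table definition (names and types are illustrative assumptions):

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,                    -- PRIMARY KEY: unique row identifier
    CustomerID INT NOT NULL                        -- NOT NULL: value is required
        REFERENCES Customers(CustomerID),          -- FOREIGN KEY: must match a customer
    OrderRef   VARCHAR(20) UNIQUE,                 -- UNIQUE: no duplicate references
    Quantity   INT CHECK (Quantity > 0)            -- CHECK: rejects invalid quantities
);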
7. Normalization
Relational data is often normalized to reduce redundancy and improve consistency.
Normalization involves:
Splitting data into multiple related tables
Eliminating duplicate data
Ensuring dependencies are logical
Example:
Instead of storing customer details in every order row, store them in a separate Customers table.
8. ACID Transactions
Relational databases support ACID properties, ensuring reliable transactions:
Atomicity → All or nothing
Consistency → Valid state maintained
Isolation → Transactions don’t interfere
Durability → Changes persist
This is especially important for transactional workloads.
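A minimal transaction sketch in T-SQL; the Inventory table is a hypothetical addition to the earlier examples:

SET XACT_ABORT ON;  -- roll back the whole transaction if any statement fails
BEGIN TRANSACTION;
    INSERT INTO Orders (OrderID, CustomerID, Product) VALUES (3, 1, 'Keyboard');
    UPDATE Inventory SET Stock = Stock - 1 WHERE Product = 'Keyboard';
COMMIT TRANSACTION; -- atomic: the order and the stock change succeed or fail together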
Relational Data in Azure
Azure provides several services for working with relational data:
Azure SQL Database
Fully managed relational database
Supports SQL queries
High availability and scalability
Ideal for OLTP applications
Azure Database for PostgreSQL
Managed open-source relational database
Supports PostgreSQL features and extensions
Azure Database for MySQL
Managed MySQL database service
Suitable for web and application workloads
These services support structured data, relationships, and SQL-based querying.
Why This Matters for DP-900
On the exam, you may be asked to:
Identify characteristics of relational data
Recognize table-based structures
Understand keys and relationships
Distinguish relational data from non-relational data
Match relational workloads to Azure services
Summary — Exam-Relevant Takeaways
✔ Relational data is stored in tables (rows and columns)
✔ It uses a fixed schema with defined data types
✔ Primary and foreign keys define relationships
✔ Data is accessed using SQL
✔ Supports data integrity constraints
✔ Often normalized to reduce redundancy
✔ Ensures reliability with ACID transactions
This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. This topic falls under these sections: Describe core data concepts (25–30%) --> Identify roles and responsibilities for data workloads --> Describe responsibilities for database administrators
Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.
Database administrators (DBAs) play a critical role in ensuring that data systems are secure, available, performant, and reliable. For the DP-900 exam, you should understand what DBAs are responsible for and how their role fits into the broader data ecosystem.
What Is a Database Administrator (DBA)?
A Database Administrator (DBA) is responsible for managing and maintaining databases to ensure they operate efficiently, securely, and reliably.
DBAs work closely with:
Developers
Data engineers
Security teams
Business stakeholders
Their role focuses on the operational health of databases, rather than building analytics or models.
Core Responsibilities of a DBA
1. Database Installation and Configuration
DBAs are responsible for:
Installing database management systems (DBMS)
Configuring database settings
Setting up environments (development, testing, production)
In cloud environments like Azure, much of this is automated, but DBAs still configure:
Compute tiers
Storage options
Networking and access settings
2. Performance Monitoring and Optimization
Ensuring that databases run efficiently is a key DBA responsibility.
This includes:
Monitoring query performance
Identifying slow or inefficient queries
Creating and managing indexes
Optimizing database configurations
Goal: Maintain fast query response times and efficient resource usage.
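For instance, adding an index on a frequently filtered column is a common tuning step. A sketch with assumed table and column names:

CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID);

-- Speeds up lookups such as:
SELECT OrderID, Product FROM Orders WHERE CustomerID = 1;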
3. Backup and Recovery
DBAs ensure data can be recovered in case of failure.
Responsibilities include:
Configuring automated backups
Managing backup schedules
Testing restore processes
Planning for disaster recovery
In Azure, services like Azure SQL Database provide automated backups, but DBAs are still responsible for:
Retention policies
Recovery strategies
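Azure SQL Database takes backups automatically, so there is no T-SQL backup command to run there; on a self-managed SQL Server, the equivalent operations would look like this sketch (the path and database name are made up):

BACKUP DATABASE Sales TO DISK = 'D:\Backups\Sales.bak';
-- Testing the restore path is as important as taking the backup:
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales.bak' WITH REPLACE;  -- overwrite the existing copy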
4. Security Management
DBAs are responsible for protecting data from unauthorized access.
This includes:
Managing user accounts and roles
Assigning permissions (read, write, admin access)
Implementing authentication and authorization
Enabling encryption (at rest and in transit)
Security is a major focus area in DP-900.
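A sketch of basic permission management in T-SQL (the principal and table names are made up):

CREATE USER report_reader WITHOUT LOGIN;   -- in practice, often mapped to a login or Azure AD identity
GRANT SELECT ON Orders TO report_reader;   -- read-only access to the table
DENY DELETE ON Orders TO report_reader;    -- explicitly block destructive changes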
5. High Availability and Disaster Recovery (HA/DR)
DBAs ensure databases remain available even during failures.
Tasks include:
Configuring replication
Setting up failover mechanisms
Monitoring system uptime
Planning redundancy strategies
In Azure, many HA features are built-in, but DBAs still configure and monitor them.
6. Data Integrity and Consistency
DBAs enforce rules to ensure data remains accurate and reliable.
This includes:
Defining constraints (PRIMARY KEY, FOREIGN KEY, NOT NULL)
Ensuring ACID compliance
Preventing data corruption
This is especially important in transactional systems.
7. Monitoring and Troubleshooting
DBAs continuously monitor database health and resolve issues such as:
Performance bottlenecks
Failed queries
Resource constraints
Connectivity issues
They use monitoring tools, logs, and alerts to maintain system stability.
8. Automation and Maintenance
DBAs automate routine tasks to improve efficiency:
Index maintenance
Statistics updates
Scheduled jobs
Patch management
In cloud environments, some tasks are automated, but DBAs still oversee and validate them.
DBA Responsibilities in Azure
In Azure, the DBA role shifts slightly due to Platform as a Service (PaaS) offerings:
What Azure Manages
Hardware
OS updates
Basic backups
High availability infrastructure
What DBAs Still Manage
Security and access control
Query performance
Data modeling support
Backup policies and restore strategies
Monitoring and tuning
This is often called the shared responsibility model.
DBA vs Other Data Roles
Understanding how DBAs differ from other roles is important for DP-900:
| Role | Focus |
| --- | --- |
| DBA | Database management, performance, security |
| Data Engineer | Data pipelines, data movement |
| Data Analyst | Reporting and visualization |
| Data Scientist | Machine learning and advanced analytics |
Why DBA Responsibilities Matter for DP-900
On the exam, you may be asked to:
Identify which tasks belong to a DBA
Distinguish DBA responsibilities from other roles
Understand how Azure simplifies some DBA tasks
Recognize responsibilities related to security, performance, and availability
Summary — Exam-Relevant Takeaways
✔ DBAs manage the health, security, and performance of databases
✔ Key responsibilities include:
Performance tuning
Backup and recovery
Security management
Monitoring and troubleshooting
High availability and disaster recovery
✔ In Azure, many tasks are automated, but DBAs still handle:
Configuration
Optimization
Access control
✔ DBAs focus on operational database management, not analytics or modeling
This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. This topic falls under these sections: Describe core data concepts (25–30%) --> Identify roles and responsibilities for data workloads --> Describe responsibilities for data engineers
Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.
Data engineers play a foundational role in modern data ecosystems. They are responsible for designing, building, and maintaining data systems and pipelines that enable organizations to collect, store, and process data for analysis.
For the DP-900 exam, you should understand what data engineers do, how they differ from other roles, and how their work supports analytics and business intelligence.
What Is a Data Engineer?
A data engineer is responsible for:
Designing and building data pipelines
Integrating data from multiple sources
Transforming raw data into usable formats
Ensuring data is available, reliable, and scalable
They act as the bridge between raw data sources and analytics systems.
Core Responsibilities of a Data Engineer
1. Data Ingestion
Data engineers collect data from various sources, such as:
Transactional databases
Application logs
IoT devices
External APIs
They design processes to ingest data into storage systems like data lakes or data warehouses.
This can be:
Batch ingestion (scheduled loads)
Streaming ingestion (real-time data flow)
2. Data Transformation and Processing
Raw data is often messy and inconsistent. Data engineers:
Clean and validate data
Transform it into structured formats
Aggregate and enrich datasets
This process is often referred to as ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform).
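As a small ELT-style illustration, here is a SQL transformation that cleans raw data after it has been loaded into a staging table (StagingSales and CleanSales are hypothetical names):

INSERT INTO CleanSales (SaleDate, Region, Amount)
SELECT CAST(SaleDate AS DATE),          -- normalize the date representation
       UPPER(LTRIM(RTRIM(Region))),     -- trim whitespace and standardize region codes
       Amount
FROM StagingSales
WHERE Amount IS NOT NULL;               -- apply a basic validation rule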
3. Building Data Pipelines
Data engineers design and maintain data pipelines, which automate the movement and transformation of data.
Pipelines typically include:
Data ingestion
Data transformation
Data storage
Data delivery to analytics tools
Pipelines must be:
Reliable
Scalable
Efficient
4. Managing Data Storage Solutions
Data engineers choose and manage appropriate storage systems based on use cases:
Data lakes for raw and large-scale data
Data warehouses for structured analytical data
Databases for operational data
They ensure data is stored in formats optimized for processing (e.g., Parquet).
5. Ensuring Data Quality
Data engineers are responsible for maintaining high-quality data by:
Validating data accuracy
Handling missing or inconsistent data
Implementing data validation rules
High-quality data is essential for reliable analytics.
6. Optimizing Data Performance
To keep data processing efficient, data engineers optimize how data is stored, moved, and transformed. In Azure, this commonly includes:
Choosing efficient storage formats and partitioning strategies
Orchestrating pipelines with Azure Data Factory or Synapse pipelines
Running batch and streaming transformations at scale
Data Engineer vs Other Roles
Understanding role distinctions is important for DP-900:
| Role | Primary Focus |
| --- | --- |
| Data Engineer | Build pipelines, manage data flow |
| DBA | Manage database performance and security |
| Data Analyst | Analyze data and create reports |
| Data Scientist | Build predictive models and ML solutions |
Why This Matters for DP-900
On the exam, you may be asked to:
Identify tasks performed by data engineers
Distinguish data engineers from DBAs or analysts
Recognize tools and services used in data engineering
Understand how data pipelines support analytics
Summary — Exam-Relevant Takeaways
✔ Data engineers build and manage data pipelines
✔ They handle data ingestion, transformation, and storage
✔ They ensure data quality, reliability, and scalability
✔ They support analytical workloads by preparing clean datasets
✔ In Azure, they commonly use:
Azure Data Factory
Azure Data Lake Storage
Azure Databricks
Azure Synapse Analytics
✔ They act as the bridge between raw data and insights
This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. This topic falls under these sections: Describe core data concepts (25–30%) --> Identify roles and responsibilities for data workloads --> Describe responsibilities for data analysts
Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.
Data analysts play a key role in turning data into insights that drive business decisions. While data engineers prepare and organize data, and DBAs manage databases, data analysts focus on exploring, analyzing, and presenting data in meaningful ways.
For the DP-900 exam, you should understand what data analysts do, how their responsibilities differ from other roles, and how they use tools (especially in Azure) to deliver insights.
What Is a Data Analyst?
A data analyst is responsible for:
Exploring and interpreting data
Identifying trends and patterns
Creating reports and visualizations
Communicating insights to stakeholders
Their primary goal is to help organizations make data-driven decisions.
Core Responsibilities of a Data Analyst
1. Data Exploration and Analysis
Data analysts examine datasets to:
Identify trends and patterns
Detect anomalies or outliers
Answer business questions
They often use:
SQL queries
Data exploration tools
Statistical techniques (basic level for DP-900)
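For example, a typical exploratory query an analyst might run (the Sales table and its columns are hypothetical):

SELECT Region, SUM(Amount) AS TotalSales
FROM Sales
WHERE SaleDate >= '2024-01-01'
GROUP BY Region
ORDER BY TotalSales DESC;  -- surfaces which regions drive the most revenue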
2. Data Visualization
A major responsibility is presenting data visually in a clear and meaningful way.
Question 1
Which task is a primary responsibility of a data analyst?
A. Building data pipelines
B. Managing database security
C. Creating dashboards and reports
D. Configuring storage systems
✅ Answer: C
Explanation: Data analysts focus on visualizing data and creating reports/dashboards.
Question 2
A company wants to understand sales trends over the past year using visual reports.
Which role is MOST appropriate?
A. Data Engineer
B. Database Administrator
C. Data Analyst
D. Network Engineer
✅ Answer: C
Explanation: Data analysts analyze historical data and create insights through reports and dashboards.
Question 3
Which tool is most commonly used by data analysts in Azure environments?
A. Azure Data Factory
B. Azure DevOps
C. Power BI
D. Azure Kubernetes Service
✅ Answer: C
Explanation: Power BI is the primary tool for data visualization and reporting.
Question 4
Which activity is MOST associated with a data analyst?
A. Designing ETL pipelines
B. Writing SQL queries to explore data
C. Managing server infrastructure
D. Encrypting databases
✅ Answer: B
Explanation: Data analysts commonly use SQL to query and analyze data.
Question 5
What is the main goal of a data analyst?
A. Store large volumes of raw data
B. Build machine learning models
C. Turn data into actionable insights
D. Manage database performance
✅ Answer: C
Explanation: Data analysts focus on interpreting data and generating insights for decision-making.
Question 6
Which task is LEAST likely to be performed by a data analyst?
A. Creating a sales dashboard
B. Identifying trends in data
C. Building data ingestion pipelines
D. Summarizing business performance
✅ Answer: C
Explanation: Building pipelines is a data engineer responsibility, not an analyst task.
Question 7
A data analyst receives cleaned and structured data from a data warehouse. What is their PRIMARY focus?
A. Data ingestion
B. Data transformation
C. Data visualization and analysis
D. Database administration
✅ Answer: C
Explanation: Analysts work with prepared data to analyze and visualize insights.
Question 8
Which statement best describes the role of a data analyst?
A. They design physical database servers
B. They create and maintain ETL pipelines
C. They analyze data to support business decisions
D. They manage user permissions in databases
✅ Answer: C
Explanation: Data analysts focus on interpreting data and supporting decision-making.
Question 9
Which Azure service is MOST directly associated with data analyst reporting?
A. Azure Data Lake Storage
B. Azure Synapse Analytics (SQL querying)
C. Azure Virtual Machines
D. Azure Key Vault
✅ Answer: B
Explanation: Data analysts often query and analyze data using Azure Synapse Analytics.
Question 10
Which activity involves communicating insights from data to business stakeholders?
A. Data encryption
B. Data visualization and reporting
C. Database replication
D. Network configuration
✅ Answer: B
Explanation: Data analysts communicate findings through visualizations, dashboards, and reports.
✅ Key Exam Takeaways
For DP-900, remember:
✔ Data analysts focus on analysis, visualization, and reporting
✔ They work with cleaned, structured data
✔ They commonly use Power BI and SQL
✔ Their goal is to support business decision-making
✔ They do NOT typically build pipelines or manage databases
This post is a part of the DP-900: Microsoft Azure Data Fundamentals Exam Prep Hub. This topic falls under these sections: Describe core data concepts (25–30%) --> Describe common data workloads --> Describe features of transactional workloads
Note that there are 10 practice questions (with answers and explanations) for each section to help you solidify your knowledge of the material. Also, there are 2 practice tests with 60 questions each available on the hub below the exam topics section.
In the DP-900 exam, transactional workloads are one of the foundational types of data workloads you should understand. These workloads are central to many operational systems in enterprises and are fundamentally different from analytical or big data workloads. This article explains what transactional workloads are, their key features, and how they map to Azure services.
What Is a Transactional Workload?
A transactional workload refers to data processing that focuses on day-to-day operations — capturing and managing business events as they occur. These workloads are typically found in systems that require frequent inserts, updates, and deletes with strong guarantees of consistency and isolation.
Examples:
Online purchasing systems
Banking and financial transactions
Inventory management
Customer relationship management (CRM)
Transactional workloads form the core of OLTP (Online Transaction Processing) systems.
Key Features of Transactional Workloads
Transactional workloads have several defining characteristics that distinguish them from analytical workloads:
1. ACID Properties
Transactional systems must maintain ACID properties to ensure reliable data processing:
| Property | What It Means |
| --- | --- |
| Atomicity | All actions of a transaction succeed or fail as one unit |
| Consistency | Data remains valid and consistent before & after a transaction |
| Isolation | Concurrent transactions do not interfere with each other |
| Durability | Once a transaction completes, changes are permanent |
ACID guarantees are critical for business correctness in financial and operational systems.
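For example, a bank transfer must debit one account and credit another as a single unit. A hedged T-SQL sketch (the Accounts table is hypothetical):

BEGIN TRANSACTION;
BEGIN TRY
    UPDATE Accounts SET Balance = Balance - 500 WHERE AccountID = 'A';
    UPDATE Accounts SET Balance = Balance + 500 WHERE AccountID = 'B';
    COMMIT TRANSACTION;   -- durability: once committed, the change survives failures
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- atomicity: on any error, neither update is applied
END CATCH;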
2. High Volume of Small Operations
Transactional workloads are made up of many small, frequent operations, such as:
Adding a new customer
Updating an order status
Recording a payment
These operations are typically short-lived and affect a small number of rows or records at a time.
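Each of these typically maps to a small, targeted statement, for example (values and columns are hypothetical):

INSERT INTO Customers (CustomerID, Name) VALUES (101, 'Maria');   -- add a new customer
UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 42;          -- touch a single row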
3. Real-Time or Near Real-Time Requirements
Transactional systems often have low latency requirements — applications and users expect fast responses:
Web applications must serve users in milliseconds
Background services must process requests quickly
This is crucial for user experience (e.g., online ordering, banking apps).
4. Strong Consistency and Integrity
Transactional workloads require strict guarantees that data remains reliable at all times:
No partial updates
Referential integrity between related tables
Synchronous processing of business events
These guarantees prevent data corruption and enforce business rules.
5. Normalized Data Structures
Transactional systems usually use normalized schemas — breaking data into related tables to reduce redundancy and maintain integrity.
For example:
Separate tables for Customers, Orders, OrderItems
Use of primary keys and foreign keys to enforce relationships
Normalized designs are efficient for transactional updates.
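A compact sketch of such a normalized schema (the column choices are illustrative):

CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL REFERENCES Customers(CustomerID)
);

CREATE TABLE OrderItems (
    OrderID  INT NOT NULL REFERENCES Orders(OrderID),
    Product  VARCHAR(100) NOT NULL,
    Quantity INT CHECK (Quantity > 0),
    PRIMARY KEY (OrderID, Product)    -- each product appears once per order
);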
Transactional vs. Analytical Workloads
Understanding the difference between transactional and analytical workloads is important for the DP-900 exam.
| Feature | Transactional Workload | Analytical Workload |
| --- | --- | --- |
| Primary Use | Operational processing | Reporting & insights |
| Data Structure | Normalized | Denormalized |
| Query Types | Simple & frequent | Complex & batch |
| Response Time | Low latency | High throughput |
| Examples | Point of sale, CRM | Dashboards, BI |
Transactional systems support day-to-day operational needs, while analytical systems support strategic decision-making based on historical data.
How Transactional Workloads Map to Azure Services
Azure provides managed services that support transactional workloads with strong consistency and fast response times.
Azure SQL Database
A fully managed relational database service that supports:
ACID transactions
High availability
Automatic updates and backups
It’s suitable for most OLTP systems such as:
E-commerce platforms
Customer management systems
Azure Database for PostgreSQL and MySQL
Managed open-source relational databases that provide:
ACID compliance
Compatibility with existing tools
Horizontal scaling options
These are good choices when applications already use PostgreSQL or MySQL.
Azure Cosmos DB (Transactional Core)
Although Cosmos DB is a NoSQL service, it can support transactional workloads using:
Transactional batch operations within a partition
Strong consistency models
Cosmos DB is especially useful when you need globally distributed transactional systems.
Exam-Relevant Takeaways
When preparing for DP-900, focus on these points:
✔ Transactional workloads are all about frequent, small, consistent operations.
✔ They must satisfy ACID properties to ensure correctness.
✔ They prioritize real-time or near-real-time responsiveness.
✔ They typically use normalized data models.
✔ Azure services such as Azure SQL Database, Azure Database for PostgreSQL/MySQL, and Azure Cosmos DB support transactional use cases.
Summary — What You Need to Know for the Exam
A transactional workload handles operational tasks (e.g., orders, banking operations).
These workloads require ACID properties and fast, dependable performance.
Normalized schemas help maintain consistency and integrity.
Azure supports transactional workloads with services designed for reliable transaction processing.