Generative AI is a class of Artificial Intelligence (AI) workloads that create new content rather than only analyzing or classifying existing data. On the AI-900: Microsoft Azure AI Fundamentals exam, you are expected to understand what generative AI is, what kinds of problems it solves, and how it differs from other AI workloads—not how to train large models or write code.
This topic appears under:
Describe Artificial Intelligence workloads and considerations (15–20%)
Identify features of common AI workloads
Expect conceptual and scenario-based questions that test whether you can recognize when generative AI is the appropriate approach.
What Is a Generative AI Workload?
A generative AI workload uses models that can generate new, original content based on patterns learned from large datasets.
Generative AI systems can produce:
Text (responses, summaries, stories, code)
Images (artwork, illustrations, designs)
Audio (music, speech)
Video (short clips or animations)
Key defining feature: Unlike traditional AI that predicts or classifies, generative AI creates.
Common Generative AI Use Cases
On the AI-900 exam, generative AI is typically presented through productivity, creativity, or assistance scenarios.
Text Generation
What it does: Generates human-like text based on a prompt.
Example scenarios:
Drafting emails or reports
Writing marketing copy
Generating code snippets
Creating conversational responses
Key idea: The model produces new text rather than selecting from predefined responses.
Summarization
What it does: Creates concise summaries of longer text.
Example scenarios:
Summarizing documents or meeting notes
Condensing long articles
Exam note: Summarization may appear in both NLP and generative AI contexts. When the output is newly generated text, it is generative AI.
Question Answering and Chat Experiences
What it does: Generates natural language answers to user questions.
Example scenarios:
AI chat assistants
Knowledge-based Q&A systems
Key idea: Responses are generated dynamically rather than retrieved verbatim.
Image Generation
What it does: Creates images from text descriptions.
Example scenarios:
Generating illustrations or artwork
Creating marketing visuals
Key idea: The system produces entirely new images rather than analyzing existing ones.
Code Generation
What it does: Generates programming code from natural language prompts.
Example scenarios:
Creating sample scripts
Explaining or completing code
Azure Services Associated with Generative AI
For AI-900, service knowledge is high-level and conceptual.
Azure OpenAI Service
Supports:
Text generation
Chat-based experiences
Image generation
Code generation
This is the primary Azure service associated with generative AI workloads on the exam.
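The exam itself does not require any code, but if you want to see what a generative AI call looks like in practice, the minimal sketch below uses the openai Python SDK against an Azure OpenAI deployment. The endpoint, key, API version, and deployment name are placeholders for values from your own Azure resource, not anything specified by the exam.

```python
# Minimal sketch: generating text with Azure OpenAI via the openai Python SDK.
# The endpoint, API key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-06-01",                                   # example version
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the name you gave your model deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Draft a short thank-you email to a customer."},
    ],
)

print(response.choices[0].message.content)  # the newly generated text
```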
How Generative AI Differs from Other AI Workloads
Recognizing these differences is critical for AI-900.
| AI Workload Type | Primary Output |
| --- | --- |
| Generative AI | Newly created content |
| Natural Language Processing | Analysis of text |
| Computer Vision | Analysis of images and video |
| Document Processing | Structured data extraction |
| Speech AI | Transcription or audio generation |
Exam tip: If the system is creating something new (text, image, code), think generative AI.
Prompt Engineering (Conceptual Awareness)
AI-900 includes basic awareness of prompting.
Prompt engineering refers to crafting inputs that guide a generative model toward better outputs.
Examples:
Providing context
Specifying tone or format
Giving examples in the prompt
No technical depth is required, but you should understand that outputs depend on prompts.
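To make those three techniques concrete, here is a small, hypothetical Python sketch that assembles a prompt from context, a tone/format instruction, and an in-prompt example. The Contoso scenario and wording are invented purely for illustration.

```python
# A hypothetical prompt combining the three techniques listed above.
context = "You are a support agent for Contoso, a fictional online bookstore."       # context
style = "Reply in a friendly tone and keep the answer to three sentences or fewer."  # tone/format
example = (
    "Customer: My order arrived damaged.\n"
    "Agent: I'm sorry to hear that! A free replacement is already on its way."       # example in the prompt
)
question = "Customer: I was charged twice for the same order.\nAgent:"

prompt = "\n\n".join([context, style, example, question])
print(prompt)  # this assembled prompt would be sent to a generative model
```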
Responsible AI Considerations
Generative AI introduces unique risks.
Key considerations include:
Hallucinations (incorrect or fabricated outputs)
Bias in generated content
Harmful or inappropriate responses
Transparency (disclosing that content is AI-generated)
AI-900 tests awareness, not mitigation techniques.
Exam Tips for Identifying Generative AI Workloads
Look for verbs like generate, create, draft, write, summarize
Focus on whether the output is new content
Ignore implementation details and model names
Choose generative AI when static rules or classification are insufficient
Summary
For the AI-900 exam, you should be able to:
Recognize scenarios that require generative AI
Identify common generative AI use cases
Associate generative AI with Azure OpenAI Service
Distinguish generative AI from analytical AI workloads
Understand high-level responsible AI considerations
Artificial intelligence is no longer a niche skill reserved for researchers and engineers—it has become a core capability across nearly every industry. From data analytics and software development to marketing, design, and everyday productivity, AI tools are reshaping how work gets done. As we move into 2026, the pace of innovation continues to accelerate, making it essential to understand not just what AI can do, but which tools are worth learning and why.
This article highlights 20 of the most important AI tools to learn for 2026, spanning general-purpose AI assistants, developer frameworks, creative platforms, automation tools, and autonomous agents. For each tool, you’ll find a clear description, common use cases, reasons it matters, cost considerations, learning paths, and an estimated difficulty level, helping you decide where to invest your time and energy in the rapidly evolving AI landscape. Even if none of these particular tools makes your list, set aside time this year to learn at least one AI tool of your own choosing.
1. ChatGPT (OpenAI)
Description: A versatile large language model (LLM) that can write, research, code, summarize, and more. Often used for general assistance, content creation, dialogue systems, and prototypes. Why It Matters: It’s the Swiss Army knife of AI: foundational in productivity, automation, and AI literacy. Cost: Free tier; Plus/Pro tiers ~$20+/month with faster models and priority access. How to Learn: Start with the official tutorials and prompt engineering guides, then build integrations via the OpenAI API. Difficulty: Beginner
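If you want to try the API route mentioned above, a minimal Python sketch might look like the following; the model name is only an example, and an OPENAI_API_KEY environment variable plus the openai package are assumed.

```python
# Minimal sketch of calling the OpenAI API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whichever model you use
    messages=[{"role": "user", "content": "Summarize why prompt quality matters, in one sentence."}],
)
print(response.choices[0].message.content)
```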
2. Google Gemini / Gemini 3
Description: A multimodal AI from Google that handles text, image, and audio queries, and integrates deeply with Google Workspace. The latest versions push stronger reasoning and creative capabilities. Why It Matters: Multimodal capabilities are becoming standard; integration across tools makes it essential for workflows. Cost: Free tier with paid Pro/Ultra levels for advanced models. How to Learn: Use Google AI Studio, experiment with prompts, and explore the API. Difficulty: Beginner–Intermediate
3. Claude (Anthropic)
Description: A conversational AI with long-context handling and enhanced safety features. Excellent for deep reasoning, document analysis, and coding. Why It Matters: It’s optimized for enterprise and technical tasks where accuracy matters more than verbosity. Cost: Free and subscription tiers (varies by use case). How to Learn: Tutorials via Anthropic’s docs, hands-on practice in the Claude UI/API, and real projects like contract analysis. Difficulty: Intermediate
4. Microsoft Copilot (365 + Dev)
Description: AI assistant built into Microsoft 365 apps and developer tools, helping automate reports, summaries, and code generation. Why It Matters: It brings AI directly into everyday productivity tools at enterprise scale. Cost: Included with M365 and GitHub subscriptions; Copilot versions vary by plan. How to Learn: Microsoft Learn modules and real workflows inside Office apps. Difficulty: Beginner
5. Adobe Firefly
Description: A generative AI suite focused on creative tasks, from text-to-image/video to editing workflows across Adobe products. Why It Matters: Creative AI is now essential for design and branding work at scale. Cost: Included in Adobe Creative Cloud subscriptions (varies). How to Learn: Adobe tutorials plus hands-on work in Firefly Web and apps. Difficulty: Beginner–Intermediate
6. TensorFlow
Description: Open-source deep learning framework from Google used to build and deploy neural networks. Why It Matters: Core tool for anyone building machine learning models and production systems. Cost: Free/open source. How to Learn: TensorFlow courses, hands-on projects, and official tutorials. Difficulty: Intermediate
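For a feel of what "building a model" means in TensorFlow, here is a minimal Keras sketch; the layer sizes are arbitrary and the model is left untrained.

```python
# A tiny Keras model, just to show the shape of a typical TensorFlow workflow.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                        # 4 input features (arbitrary)
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),    # 3 output classes (arbitrary)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # training would follow with model.fit(features, labels)
```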
7. PyTorch
Description: Another dominant open-source deep learning framework, favored for research and flexibility. Why It Matters: Central for prototyping new models and customizing architectures. Cost: Free. How to Learn: Official tutorials, MOOCs, and community notebooks (e.g., Fast.ai). Difficulty: Intermediate
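A comparable minimal sketch in PyTorch, using random data and arbitrary layer sizes, shows the define-by-run style that makes it popular for research.

```python
# A minimal PyTorch module and one training step on random data.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

features = torch.randn(8, 4)        # a batch of 8 random examples
labels = torch.randint(0, 3, (8,))  # random class labels

optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()                     # autograd computes gradients
optimizer.step()                    # update the weights
print(loss.item())
```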
8. Hugging Face Transformers
Description: A library of pre-trained models for language and multimodal tasks. Why It Matters: Makes state-of-the-art models accessible with minimal coding. Cost: Free; paid tiers for hosted inference. How to Learn: Hugging Face courses and hands-on fine-tuning tasks. Difficulty: Intermediate
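The pipeline API is the usual entry point; the minimal sketch below downloads a default pre-trained summarization model on first use.

```python
# Summarization with the transformers pipeline API.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default model on first use
text = (
    "Hugging Face Transformers provides thousands of pre-trained models for "
    "text, vision, and audio tasks, so you can apply state-of-the-art models "
    "with only a few lines of code."
)
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```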
9. LangChain
Description: Framework to build chain-based, context-aware LLM applications and agents. Why It Matters: Foundation for building smart workflows and agent applications. Cost: Free (open source). How to Learn: LangChain docs and project tutorials. Difficulty: Intermediate–Advanced
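As a rough idea of what a "chain" is, here is a small sketch using LangChain's expression syntax (prompt, then model, then string output). LangChain's packages and imports change frequently, so treat the exact module names and model name here as assumptions and check the current docs; an OpenAI API key is also assumed.

```python
# A hedged sketch of a simple LangChain chain: prompt -> model -> string output.
# Assumes the langchain-openai and langchain-core packages and an OPENAI_API_KEY.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"topic": "retrieval-augmented generation"}))
```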
10. Google Antigravity IDE
Description: AI-first coding environment where AI agents assist development workflows. Why It Matters: Represents the next step in how developers interact with code, with AI as a partner. Cost: Free preview; may move to paid models. How to Learn: Experiment with projects and follow Google's documentation. Difficulty: Intermediate
11. Perplexity AI
Description: AI research assistant combining conversational AI with real-time web citations. Why It Matters: A trusted research tool that reduces hallucination risk by citing its sources. Cost: Free; Pro versions exist. How to Learn: Use it for research queries and explore research workflows. Difficulty: Beginner
12. Notion AI
Description: AI features embedded inside the Notion workspace for notes, automation, and content. Why It Matters: Enhances organization and productivity in individual and team contexts. Cost: Notion plans with AI add-ons. How to Learn: In-app experimentation and productivity courses. Difficulty: Beginner
13. Runway ML
Description: AI video and image creation/editing platform. Why It Matters: Brings generative visuals to creators without deep technical skills. Cost: Free tier with paid access to advanced models. How to Learn: Runway tutorials and creative projects. Difficulty: Beginner–Intermediate
14. Synthesia
Description: AI video generation with realistic avatars and multi-language support. Why It Matters: Revolutionizes training and marketing video creation at low cost. Cost: Subscription. How to Learn: Platform tutorials and storytelling use cases. Difficulty: Beginner
15. Otter.ai
Description: AI meeting transcription, summarization, and collaborative notes. Why It Matters: Boosts productivity and meeting intelligence in remote/hybrid work. Cost: Free + Pro tiers. How to Learn: Use it in real meetings and explore integrations. Difficulty: Beginner
16. ElevenLabs
Description: High-quality voice synthesis and cloning for narration and media. Why It Matters: Audio content creation is growing; podcasts, games, accessibility, and voice UX all require this skill. Cost: Free + paid credits. How to Learn: Experiment with voice models and APIs. Difficulty: Beginner
17. Zapier / Make (Automation)
Description: Tools to connect apps and automate workflows with AI triggers. Why It Matters: Saves time by automating repetitive tasks without code. Cost: Free + paid plans. How to Learn: Zapier/Make learning paths and real automation projects. Difficulty: Beginner
18. MLflow
Description: Open-source ML lifecycle tool for tracking experiments and deploying models. Why It Matters: Essential for managing AI workflows in real projects. Cost: Free. How to Learn: Hands-on with ML projects and tutorials. Difficulty: Intermediate
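A minimal tracking sketch gives a sense of the workflow; it assumes a local `pip install mlflow` and logs to the default ./mlruns directory, and the parameter and metric values are made up for illustration.

```python
# Log one parameter and one metric for a single MLflow run.
import mlflow

with mlflow.start_run(run_name="example-run"):
    mlflow.log_param("learning_rate", 0.01)  # illustrative value
    mlflow.log_metric("accuracy", 0.93)      # illustrative value
# Afterwards, browse the run locally with: mlflow ui
```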
19. NotebookLM
Description: Research assistant for long-form documents and knowledge work. Why It Matters: Ideal for digesting research papers, books, and technical documents. Cost: Varies. How to Learn: Use cases in academic and professional workflows. Difficulty: Beginner
20. Manus (Autonomous Agent)
Description: A next-gen autonomous AI agent designed to reason, plan, and execute complex tasks independently. Why It Matters: Represents the frontier of agentic AI, where models act with autonomy rather than just respond. Cost: Web-based plans. How to Learn: Experiment with agent workflows and task design. Difficulty: Advanced
🧠 How to Get Started With Learning
1. Foundational Concepts: Start with the basics of prompt engineering, AI ethics, and data fundamentals.
2. Hands-On Practice: Explore tool documentation, build mini projects, and integrate APIs.
3. Structured Courses: Platforms like Coursera, Udemy, and official provider academies offer guided paths.
4. Community & Projects: Join GitHub projects, forums, and Discord groups focused on AI toolchains.
📊 Difficulty Levels (General)
| Level | What It Means |
| --- | --- |
| Beginner | No coding needed; great for general productivity/creators |
| Intermediate | Some programming or technical concepts required |
| Advanced | Deep technical skills: frameworks, models, agents |
Summary: 2026 will see AI tools become even more integrated into creativity, productivity, research, and automated workflows. Mastery of a mix of general-purpose assistants, developer frameworks, automation platforms, and creative AI gives you both breadth and depth in the evolving AI landscape. It’s going to be another exciting year. Good luck on your data journey in 2026!