Microsoft AI-900 Microsoft Azure AI Fundamentals Exam Practice Test
Microsoft Azure AI Fundamentals Questions and Answers
You need to create a clustering model and evaluate the model by using Azure Machine Learning designer. What should you do?
Options:
Split the original dataset into a dataset for features and a dataset for labels. Use the features dataset for evaluation.
Split the original dataset into a dataset for training and a dataset for testing. Use the training dataset for evaluation.
Split the original dataset into a dataset for training and a dataset for testing. Use the testing dataset for evaluation.
Use the original dataset for training and evaluation.
Answer:
C

Explanation:
According to the Microsoft Learn module “Explore fundamental principles of machine learning” and the AI-900 Official Study Guide, when building and evaluating a model (such as a clustering model) in Azure Machine Learning designer, data must be divided into two subsets:
Training dataset: Used to train the model so it can learn patterns and relationships in the data.
Testing dataset: Used to evaluate the model’s performance on unseen data, ensuring that it generalizes well and does not overfit.
In Azure ML designer, this is typically done using the Split Data module, which separates the dataset into training and testing portions (for example, 70% training and 30% testing). After training, you connect the testing dataset to an Evaluate Model module to assess performance: for classification this means metrics such as accuracy and precision, while for clustering it means metrics such as average distance to cluster center.
Other options are incorrect:
A. Split into features and labels: Clustering is an unsupervised learning technique, so it doesn’t use labeled data.
B. Use training dataset for evaluation: This would cause overfitting, as the model is being tested on the same data it learned from.
D. Use the original dataset for training and evaluation: Also causes overfitting, offering no measure of generalization.
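The split-then-evaluate pattern that the Split Data module performs can be sketched in plain Python. This is a toy illustration with made-up data; `split_data` is a hypothetical stand-in for the designer module, not designer code:

```python
import random

def split_data(rows, test_fraction=0.3, seed=42):
    """Shuffle rows and split into training and testing subsets,
    mirroring what the Split Data module does in the designer."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = [[x, x * 2.0] for x in range(100)]   # 100 toy records
train, test = split_data(data)              # 70% train, 30% test

print(len(train), len(test))                # 70 30
```

The model is fitted on `train` only, and `test` is reserved for evaluation, so the score reflects performance on data the model has not seen.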
Which Azure AI Language feature can be used to retrieve data, such as dates and people's names, from social media posts?
Options:
language detection
speech recognition
key phrase extraction
entity recognition
Answer:
D

Explanation:
The Azure AI Language service provides several NLP features, including language detection, key phrase extraction, sentiment analysis, and named entity recognition (NER).
When you need to extract specific data points such as dates, names, organizations, or locations from unstructured text (for example, social media posts), the correct feature is Entity Recognition.
Entity Recognition identifies and classifies information in text into predefined categories like:
Person names (e.g., “John Smith”)
Organizations (e.g., “Contoso Ltd.”)
Dates and times (e.g., “October 22, 2025”)
Locations, events, and quantities
This capability helps transform unstructured textual data into structured data that can be analyzed or stored.
Option analysis:
A (Language detection): Determines the language of a text (e.g., English, French).
B (Speech recognition): Converts spoken audio to text; not applicable here.
C (Key phrase extraction): Identifies important phrases or topics but not specific entities like names or dates.
D (Entity recognition): Correctly extracts names, dates, and other specific data from text.
Hence, the accurate feature for this scenario is D. Entity Recognition.
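As a rough illustration of the idea, here is a toy regex-based extractor. This is not the Azure AI Language service, which uses trained NER models; the patterns and sample text are made up:

```python
import re

def extract_entities(text):
    """Toy entity extractor: finds date-like strings and two-word
    capitalized sequences (candidate names). A real NER model would
    also distinguish person names from organization names."""
    dates = re.findall(
        r"\b(?:January|February|March|April|May|June|July|"
        r"August|September|October|November|December)\s+\d{1,2},\s+\d{4}\b",
        text)
    names = re.findall(r"\b[A-Z][a-z]+\s+[A-Z][a-z]+\b", text)
    return {"dates": dates, "persons": names}

post = "John Smith visited Contoso Ltd. on October 22, 2025."
print(extract_entities(post))
```

Even this crude sketch shows the transformation the service performs: unstructured text in, structured fields (dates, names) out.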
You plan to build a conversational AI solution that can be surfaced in Microsoft Teams, Microsoft Cortana, and Amazon Alexa. Which service should you use?
Options:
Azure Bot Service
Azure Cognitive Search
Language service
Speech
Answer:
A

Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of conversational AI workloads on Azure,” the Azure Bot Service is the dedicated Azure service for building, connecting, deploying, and managing conversational AI experiences across multiple channels — such as Microsoft Teams, Cortana, and Amazon Alexa.
The Azure Bot Service integrates with the Bot Framework SDK to design intelligent chatbots that can communicate with users in natural language. It also connects seamlessly with other Azure Cognitive Services, such as Language Understanding (LUIS) for intent recognition and the Speech service for voice input/output.
The question specifies that the conversational AI must be accessible through multiple platforms, including Microsoft Teams, Cortana, and Alexa. Azure Bot Service supports this multi-channel communication model out of the box, allowing developers to configure a single bot that interacts through many endpoints simultaneously.
Other options:
B. Azure Cognitive Search: Used for information retrieval and knowledge mining, not conversational AI.
C. Language Service: Provides natural language understanding, key phrase extraction, sentiment analysis, etc., but doesn’t handle multi-channel communication.
D. Speech: Provides speech-to-text and text-to-speech conversion but is not a chatbot platform.
Therefore, the best solution for building and deploying a multi-channel conversational AI system is Azure Bot Service, as clearly defined in Microsoft’s AI-900 learning content.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
- Providing an explanation of the outcome of a credit loan application is an example of the Microsoft transparency principle for responsible AI. → Yes
- A triage bot that prioritizes insurance claims based on injuries is an example of the Microsoft reliability and safety principle for responsible AI. → Yes
- An AI solution that is offered at different prices for different sales territories is an example of the Microsoft inclusiveness principle for responsible AI. → No
This question is based on the Responsible AI principles defined by Microsoft, a major topic in the AI-900: Microsoft Azure AI Fundamentals certification. The goal of Responsible AI is to ensure that artificial intelligence is developed and used ethically, safely, and transparently to benefit people and society. Microsoft’s framework defines six core principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability.
Transparency Principle – Yes
Providing an explanation for a loan application decision clearly reflects transparency. According to Microsoft’s Responsible AI guidelines, transparency involves ensuring that users and stakeholders understand how AI systems make decisions. When a financial AI model explains why a loan was approved or denied, it promotes user trust and confidence in automated decision-making. Transparency helps individuals understand influencing factors (like income or credit score), thereby fostering ethical AI deployment.
Reliability and Safety Principle – Yes
A triage bot that prioritizes insurance claims based on injury severity demonstrates reliability and safety. This principle ensures that AI systems consistently operate as intended, handle data accurately, and do not cause unintended harm. For a triage bot, safety means it must correctly interpret medical or claim information and consistently provide appropriate prioritization. Microsoft emphasizes that reliable AI systems must be tested rigorously, function correctly in various scenarios, and maintain user safety at all times.
Inclusiveness Principle – No
An AI solution priced differently for various sales territories is unrelated to inclusiveness. Inclusiveness focuses on designing AI systems that are accessible and fair to all users, including those with disabilities or from different demographic backgrounds. Price variation across territories is a business strategy, not an ethical AI inclusion concern. Hence, this statement does not align with any Responsible AI principle.
A company employs a team of customer service agents to provide telephone and email support to customers.
The company develops a webchat bot to provide automated answers to common customer queries.
Which business benefit should the company expect as a result of creating the webchat bot solution?
Options:
increased sales
a reduced workload for the customer service agents
improved product reliability
Answer:
B

Explanation:
The correct answer is B. a reduced workload for the customer service agents.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of common AI workloads”, conversational AI solutions such as chatbots are primarily designed to automate repetitive and routine customer interactions. The key business value emphasized in these materials is operational efficiency—chatbots allow organizations to respond to a high volume of customer queries without relying solely on human agents. This results in reduced workload, lower operational costs, and faster response times.
Microsoft’s AI-900 learning objectives highlight that AI can be applied to automate tasks that previously required human interaction. In the context of customer support, a webchat bot powered by Azure AI services (such as Azure Bot Service or Azure Cognitive Services for Language) can handle frequently asked questions like order status, password resets, or basic troubleshooting. This allows human agents to focus their time and skills on more complex issues that require empathy, reasoning, or decision-making—tasks that AI cannot yet handle as effectively.
Additionally, the AI-900 course materials explain that one of the measurable business benefits of deploying AI-driven chatbots is improved efficiency and scalability. Chatbots can handle thousands of simultaneous interactions, something that human teams cannot easily do. As a result, the organization experiences reduced operational pressure on support staff, improved customer satisfaction due to quicker responses, and optimized resource utilization.
Options A and C are incorrect because chatbots do not directly influence sales growth or product reliability. While increased customer satisfaction might indirectly support sales, it is not the primary or guaranteed outcome of implementing a chatbot. Similarly, product reliability is tied to engineering quality, not customer service automation.
Therefore, based on the official AI-900 study materials and Microsoft Learn concepts, the best and verified answer is B. a reduced workload for the customer service agents.
You have insurance claim reports that are stored as text.
You need to extract key terms from the reports to generate summaries.
Which type of AI workload should you use?
Options:
conversational AI
anomaly detection
natural language processing
computer vision
Answer:
C

Explanation:
According to the AI-900 study guide and Microsoft Learn module “Identify features of natural language processing workloads”, Natural Language Processing (NLP) enables computers to understand, interpret, and generate human language. Tasks such as extracting key terms, summarizing documents, identifying topics, or determining sentiment fall under NLP workloads.
In this question, you have insurance claim reports stored as text, and you need to extract key terms to generate summaries. This matches the Text Analytics service in Azure Cognitive Services, which uses NLP techniques such as key phrase extraction to identify important concepts within textual data.
The other options are incorrect because:
A. Conversational AI focuses on chatbots or dialogue systems.
B. Anomaly detection identifies unusual data patterns, not textual meaning.
D. Computer vision processes image or video content, not text.
Therefore, extracting key terms from documents is a clear example of Natural Language Processing.
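A naive frequency-based sketch conveys the idea of key term extraction. This is illustrative only; the Azure service uses trained language models, and the stopword list and sample report here are made up:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "was", "on", "in", "for", "is"}

def key_terms(text, top_n=3):
    """Naive key-term extraction: the most frequent non-stopword tokens.
    Real NLP services use trained models, but the goal is the same:
    surface the terms that best summarize the text."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

report = ("The claim for water damage was filed on Monday. "
          "The water damage affected the kitchen. The claim is pending.")
print(key_terms(report))                     # ['claim', 'water', 'damage']
```

The extracted terms could then seed a summary or group similar reports together.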
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question assesses knowledge of the Azure Cognitive Services Speech and Text Analytics capabilities, as described in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules “Explore natural language processing” and “Explore speech capabilities.” These services are part of Azure Cognitive Services, which provide prebuilt AI capabilities for speech, language, and text understanding.
You can use the Speech service to transcribe a call to text → Yes
The Speech-to-Text feature in the Azure Speech service automatically converts spoken words into written text. Microsoft Learn explains: “The Speech-to-Text capability enables applications to transcribe spoken audio to text in real time or from recorded files.” This makes it ideal for call transcription, voice assistants, and meeting captioning.
You can use the Text Analytics service to extract key entities from a call transcript → Yes
Once a call has been transcribed into text, the Text Analytics service (part of Azure Cognitive Services for Language) can process that text to extract key entities, key phrases, and sentiment. For example, it can identify names, organizations, locations, and product mentions. Microsoft Learn notes: “Text Analytics can extract key phrases and named entities from text to derive insights and structure from unstructured data.”
You can use the Speech service to translate the audio of a call to a different language → Yes
The Azure Speech service also includes Speech Translation, which can translate spoken language in real time. It converts audio input from one language into translated text or speech output in another language. Microsoft Learn describes this as: “Speech Translation combines speech recognition and translation to translate spoken audio to another language.”
You are building a tool that will process images from retail stores and identify the products of competitors.
The solution must be trained on images provided by your company.
Which Azure AI service should you use?
Options:
Azure AI Custom Vision
Azure AI Computer Vision
Face
Azure AI Document Intelligence
Answer:
A

Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn documentation, Azure AI Custom Vision is specifically designed for training custom image classification and object detection models using images that a company provides. In this scenario, the company wants to identify competitor products from images captured in retail stores — a classic use case for custom image classification or object detection, depending on whether you are labeling entire images or identifying multiple items within an image.
Azure AI Custom Vision allows users to:
Upload their own labeled training images.
Train a model that learns to recognize specific objects (in this case, competitor products).
Evaluate, iterate, and deploy the model as an API endpoint for real-time inference.
This fits perfectly with the requirement that the solution “must be trained on images provided by your company.” The key phrase here indicates the need for a custom-trained model rather than a prebuilt one.
The other options are not suitable for this scenario:
B. Azure AI Computer Vision provides prebuilt models for general-purpose image understanding (e.g., detecting common objects, reading text, describing scenes). It is not intended for training on custom datasets.
C. Face service is limited to detecting and recognizing human faces; it cannot be trained to identify products.
D. Azure AI Document Intelligence (formerly Form Recognizer) is focused on extracting structured data from documents and forms, not analyzing retail images.
Therefore, per Microsoft’s official AI-900 training content, when a solution must be trained on custom company images to recognize specific products, the appropriate service is Azure AI Custom Vision.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:
The correct answer is “features.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe fundamental principles of machine learning on Azure,” in a machine learning model, the data used as inputs are known as features, while the data that represents the output or target prediction is known as the label.
Features are measurable attributes or properties of the data used by a model to learn patterns and make predictions. They are also referred to as independent variables because they influence the result that the model tries to predict. For example, in a machine learning model that predicts house prices:
Features might include square footage, location, and number of bedrooms, while
The label would be the house price (the value being predicted).
In the context of Azure Machine Learning, during model training, features are passed into the algorithm as input variables (X-values), and the label is the corresponding output (Y-value). The model then learns the relationship between the features and the label.
Let’s review the incorrect options:
Functions: These are mathematical operations or relationships used inside algorithms, not the input data itself.
Labels: These are the outputs or results that the model predicts, not the inputs.
Instances: These refer to individual data records or rows in the dataset, not the input fields themselves.
Hence, in any supervised or unsupervised learning process, the input data (independent variables) are called features, and the model uses them to predict labels (dependent variables).
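The feature/label split from the house-price example above can be shown directly. The records here are hypothetical:

```python
# Each record pairs features X (inputs) with a label y (value to predict).
houses = [
    # (square_feet, bedrooms) -> price
    ((1000, 2), 200_000),
    ((1500, 3), 300_000),
    ((2000, 4), 400_000),
]

X = [features for features, _ in houses]   # independent variables (inputs)
y = [label for _, label in houses]         # dependent variable (the label)

print(X)   # [(1000, 2), (1500, 3), (2000, 4)]
print(y)   # [200000, 300000, 400000]
```

During training, an algorithm receives `X` and `y` together and learns the mapping between them; at prediction time it receives only new feature vectors.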
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

In Azure OpenAI Service, the temperature parameter directly controls the creativity and determinism of responses generated by models such as GPT-3.5. According to the Microsoft Learn documentation for Azure OpenAI models, temperature is a numeric value (typically between 0.0 and 2.0) that determines how “random” or “deterministic” the output should be.
A lower temperature value (for example, 0 or 0.2) makes the model’s responses more deterministic, meaning the same prompt consistently produces nearly identical outputs.
A higher temperature value (for example, 0.8 or 1.0) encourages creativity and variety, causing the model to generate different phrasing or interpretations each time it responds.
When a question specifies the need for more deterministic responses, Microsoft’s guidance is to decrease the temperature parameter. This adjustment makes the model focus on the most probable tokens (words) rather than exploring less likely options, improving reliability and consistency—ideal for business or technical applications where consistent answers are essential.
The other parameters serve different purposes:
Frequency penalty reduces repetition of the same phrases but does not control randomness.
Max response (max tokens) limits the maximum length of the generated output.
Stop sequence defines specific tokens that tell the model when to stop generating text.
Thus, the correct and Microsoft-verified completion is:
“You can modify the Temperature parameter to produce more deterministic responses from a chat solution that uses the Azure OpenAI GPT-3.5 model.”
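The mechanism behind the temperature parameter is temperature-scaled softmax over token scores. The following is a minimal sketch of that mechanism with made-up scores, not Azure OpenAI code:

```python
import math

def temperature_softmax(logits, temperature):
    """Convert raw token scores into probabilities, scaled by temperature.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output)."""
    t = max(temperature, 1e-6)                 # guard against division by zero
    scaled = [l / t for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # made-up scores for three tokens
print(temperature_softmax(logits, 0.2))        # sharply peaked on the first token
print(temperature_softmax(logits, 1.5))        # much flatter distribution
```

At low temperature nearly all probability mass sits on the most likely token, which is why the same prompt keeps producing the same answer.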
You are designing an AI system that empowers everyone, including people who have hearing, visual, and other impairments.
This is an example of which Microsoft guiding principle for responsible AI?
Options:
fairness
inclusiveness
reliability and safety
accountability
Answer:
B

Explanation:
Inclusiveness: At Microsoft, we firmly believe everyone should benefit from intelligent technology, meaning it must incorporate and address a broad range of human needs and experiences. For the 1 billion people with disabilities around the world, AI technologies can be a game-changer.
Which machine learning technique can be used for anomaly detection?
Options:
A machine learning technique that understands written and spoken language.
A machine learning technique that classifies objects based on user supplied images.
A machine learning technique that analyzes data over time and identifies unusual changes.
A machine learning technique that classifies images based on their contents.
Answer:
C

Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore fundamental principles of machine learning,” anomaly detection is a specialized machine learning technique used to identify data points, patterns, or events that deviate significantly from normal behavior.
Anomaly detection is widely used for monitoring time-series data and detecting unexpected or rare occurrences that may indicate problems, opportunities, or fraud. For example:
Detecting fraudulent transactions in banking systems.
Identifying equipment malfunctions in industrial IoT applications.
Monitoring network intrusions in cybersecurity.
Detecting unexpected spikes or drops in web traffic or sales.
In Azure, this workload is supported by the Azure AI Anomaly Detector service, which uses statistical and machine learning algorithms to learn from historical data and establish a baseline of normal behavior. When the system detects data points that fall outside expected patterns, it flags them as anomalies.
Let’s evaluate the incorrect options:
A. A machine learning technique that understands written and spoken language → This describes Natural Language Processing (NLP), not anomaly detection.
B. A machine learning technique that classifies objects based on user-supplied images → This refers to image classification, typically using computer vision.
D. A machine learning technique that classifies images based on their contents → Also describes computer vision, not anomaly detection.
Therefore, the correct answer is C, since anomaly detection specifically refers to analyzing data over time and identifying unusual or abnormal patterns that differ from the expected trend.
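A minimal statistical stand-in for this idea flags points whose z-score exceeds a threshold. The traffic numbers are hypothetical, and real services such as Azure AI Anomaly Detector use far more sophisticated models:

```python
import statistics

def find_anomalies(series, threshold=2.5):
    """Return indices of points whose z-score exceeds the threshold,
    i.e. values that deviate sharply from the series' normal behavior."""
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    return [i for i, v in enumerate(series)
            if stdev and abs(v - mean) / stdev > threshold]

# Hypothetical hourly web traffic with one obvious spike
traffic = [100, 102, 98, 101, 99, 100, 500, 97, 103, 100]
print(find_anomalies(traffic))   # [6]
```

The spike at index 6 is the only point flagged; the small fluctuations around 100 fall inside the expected range.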
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
- Providing an explanation of the outcome of a credit loan application is an example of the Microsoft transparency principle for responsible AI. → Yes
- A triage bot that prioritizes insurance claims based on injuries is an example of the Microsoft reliability and safety principle for responsible AI. → Yes
- An AI solution that is offered at different prices for different sales territories is an example of the Microsoft inclusiveness principle for responsible AI. → No
This question is based on the Responsible AI principles defined by Microsoft, which are part of the AI-900 Microsoft Azure AI Fundamentals curriculum. Microsoft’s Responsible AI framework consists of six key principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Each principle ensures that AI systems are developed and used in a way that benefits people and society responsibly.
Transparency Principle – Yes
Providing an explanation for a loan decision aligns with the Transparency principle. Microsoft defines transparency as helping users and stakeholders understand how AI systems make decisions. For example, when a credit scoring AI model approves or denies a loan, explaining the factors that influenced that outcome (such as credit history or income level) ensures that customers understand the reasoning process. This builds trust and supports responsible deployment.
Reliability and Safety Principle – Yes
A triage bot that prioritizes insurance claims based on injury severity relates directly to Reliability and Safety. This principle ensures AI systems operate consistently, perform accurately, and produce dependable outcomes. In the case of the triage bot, it must reliably assess the input data (injury descriptions) and rank claims appropriately to avoid harm or misjudgment, aligning with Microsoft’s emphasis on designing AI systems that are safe and robust.
Inclusiveness Principle – No
An AI solution priced differently across sales territories is not related to Inclusiveness. Inclusiveness focuses on ensuring accessibility and eliminating bias or exclusion for all users, especially those with disabilities or underrepresented groups. Pricing strategy is a business decision, not an inclusiveness issue. Therefore, this statement is No.
In summary, based on the AI-900 Responsible AI principles, the correct selections are Yes, Yes, and No.
You have a dataset that contains the columns shown in the following table.

You have a machine learning model that predicts the value of ColumnE based on the other numeric columns.
Which type of model is this?
Options:
regression
analysis
clustering
Answer:
A

Explanation:
The dataset described contains numeric columns (ColumnA through ColumnE). The model’s task is to predict the value of ColumnE based on the other numeric columns (A–D). This is a classic regression problem.
According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn module “Identify common types of machine learning,” a regression model is used when the target variable (the value to predict) is continuous and numeric, such as price, temperature, or—in this case—a numerical value in ColumnE.
Regression models analyze relationships between independent variables (inputs: Columns A–D) and a dependent variable (output: ColumnE) to predict a continuous outcome. Common regression algorithms include linear regression, decision tree regression, and neural network regression.
Option analysis:
A. Regression: Correct. Used for predicting numerical, continuous values.
B. Analysis: Incorrect. “Analysis” is a general term, not a machine learning model type.
C. Clustering: Incorrect. Clustering is unsupervised learning, grouping similar data points, not predicting values.
Therefore, the type of machine learning model used to predict ColumnE (a numeric value) from other numeric columns is Regression, which fits perfectly within Azure’s supervised learning models.
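Regression can be illustrated with a one-feature ordinary least squares fit. The values for ColumnA and ColumnE below are hypothetical, chosen to follow an exact linear pattern:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one feature: y ≈ a*x + b.
    A minimal illustration of regression: predicting a continuous
    numeric label from a numeric input column."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical rows: ColumnA as the input, ColumnE as the numeric label
column_a = [1.0, 2.0, 3.0, 4.0]
column_e = [3.0, 5.0, 7.0, 9.0]       # follows 2*x + 1 exactly

a, b = fit_linear(column_a, column_e)
print(a, b)                            # 2.0 1.0
print(a * 5.0 + b)                     # predicted ColumnE for A=5 -> 11.0
```

A production model would use several input columns and a library algorithm, but the principle is identical: learn a function from numeric inputs to a continuous output.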
In which scenario should you use key phrase extraction?
Options:
translating a set of documents from English to German
generating captions for a video based on the audio track
identifying whether reviews of a restaurant are positive or negative
identifying which documents provide information about the same topics
Answer:
D

Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Extract insights from text with the Text Analytics service”, key phrase extraction is a feature of the Text Analytics service that identifies the most important words or phrases in a given document. It helps summarize the main ideas by isolating significant concepts or terms that describe what the text is about.
In this scenario, the goal is to determine which documents share similar topics or themes. By extracting key phrases from each document (for example, “policy renewal,” “coverage limits,” “claim process”), you can compare and categorize documents based on overlapping keywords. This is exactly how key phrase extraction is used—to summarize and group text content by topic relevance.
The other options do not fit this use case:
A. Translation uses the Translator service, not key phrase extraction.
B. Generating video captions involves speech recognition and computer vision.
C. Identifying sentiment relates to sentiment analysis, not key phrase extraction.
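How extracted key phrases support topic grouping can be sketched with Jaccard similarity over phrase sets. The phrase lists below are hypothetical:

```python
def phrase_overlap(phrases_a, phrases_b):
    """Jaccard similarity between two sets of extracted key phrases:
    documents that share many phrases likely cover the same topic."""
    a, b = set(phrases_a), set(phrases_b)
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = ["policy renewal", "coverage limits", "claim process"]
doc2 = ["claim process", "coverage limits", "premium payment"]
doc3 = ["quarterly earnings", "stock price"]

print(phrase_overlap(doc1, doc2))   # 0.5 -> likely the same topic
print(phrase_overlap(doc1, doc3))   # 0.0 -> unrelated
```

Documents whose phrase sets overlap above some threshold can then be placed in the same topic group.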
What is the maximum image size that can be processed by using the prebuilt receipt model in Azure AI Document Intelligence?
Options:
5 MB
10MB
10 MB
100 MB
Answer:
B

Explanation:
According to the Microsoft Learn documentation for Azure AI Document Intelligence (formerly Form Recognizer) and the AI-900 official study materials, the prebuilt receipt model in Azure AI Document Intelligence supports analyzing image and PDF files up to a maximum size of 10 MB per document.
Azure AI Document Intelligence is a cloud-based service that applies advanced optical character recognition (OCR) and machine learning to extract structured information from documents such as receipts, invoices, identity cards, and business forms. The prebuilt receipt model is specifically designed to extract key data fields from retail receipts—such as merchant name, transaction date, items purchased, subtotal, tax, and total—without requiring users to build or train a custom model.
As per Microsoft’s service limits, the input file for the prebuilt models (including receipts, invoices, business cards, and identity documents) must:
Not exceed 10 MB in file size.
Not exceed 17 x 17 inches (43 x 43 cm) in physical dimensions.
Be in a supported image or document format such as JPG, PNG, BMP, TIFF, or PDF.
Let’s examine why other options are incorrect:
A. 5 MB → Too small; the service allows up to 10 MB.
C. 50 MB and D. 100 MB → Exceed the official maximum file size supported by Azure AI Document Intelligence.
Therefore, when using the prebuilt receipt model, you must ensure that the input file is 10 MB or smaller to be successfully processed by the service.
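A client-side pre-flight check against that limit might look like the sketch below. The 10 MB figure comes from the text above; the helper name is made up, and a real client would still handle the service's own error responses:

```python
import os
import tempfile

MAX_BYTES = 10 * 1024 * 1024   # 10 MB cap for the prebuilt models

def within_receipt_limit(path):
    """Check a file's size before submitting it to the prebuilt
    receipt model, rejecting anything over the documented cap."""
    return os.path.getsize(path) <= MAX_BYTES

# Demo with a small temporary stand-in for a receipt image
with tempfile.NamedTemporaryFile(delete=False, suffix=".jpg") as f:
    f.write(b"\x00" * 1024)          # 1 KB placeholder
    path = f.name

print(within_receipt_limit(path))    # True
os.remove(path)
```

Validating size locally avoids a round trip that the service would reject anyway.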
You are building a tool that will process images from retail stores and identify the products of competitors.
The solution will use a custom model.
Which Azure Cognitive Services service should you use?
Options:
Custom Vision
Form Recognizer
Face
Computer Vision
Answer:
A

Explanation:
The Custom Vision service under Azure Cognitive Services is specifically designed for image classification and object detection tasks that require a custom-trained model. According to the AI-900 official study materials, Custom Vision enables developers to “build, deploy, and improve image classifiers that recognize specific objects in images based on custom data.”
In this question, the goal is to build a system that processes images from retail stores and identifies products of competitors. Since these are unique products that may not be part of Microsoft’s pre-trained models, a custom model must be created. The Custom Vision service allows you to upload your own labeled images (e.g., product pictures), train a model to recognize those products, and then deploy it as an API for image recognition tasks.
Other options explained:
B. Form Recognizer is used to extract text, key-value pairs, and tables from structured or semi-structured documents like invoices or receipts. It is not suitable for object identification.
C. Face service detects and analyzes human faces, providing attributes like age, emotion, and facial landmarks, but cannot recognize general objects like products.
D. Computer Vision is a general-purpose image analysis service used for tagging, OCR, and scene recognition, but it uses pre-trained models. It doesn’t allow for custom product identification.
Thus, based on Microsoft’s guidance, the best fit for recognizing competitor products from images using a custom-trained model is A. Custom Vision.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” computer vision is a field of artificial intelligence that enables computers to interpret and understand visual information from the world — such as images or videos.
In this scenario, the task is to count the number of animals in an area based on a video feed. This requires the system to:
Detect the presence of animals in each frame of the video (object detection).
Track and count them across multiple frames as they move.
These are classic computer vision tasks, as they involve analyzing visual inputs (video or image data) and identifying objects (in this case, animals). Azure provides services such as Azure Computer Vision, Custom Vision, and Video Indexer, which can perform object detection, counting, and activity recognition using AI models trained on visual datasets.
Why the other options are incorrect:
Forecasting: Involves predicting future values based on historical data (e.g., predicting sales or weather), not analyzing video feeds.
Knowledge mining: Focuses on extracting insights from large text-based document repositories, not images or videos.
Anomaly detection: Identifies unusual patterns in numeric or time-series data, not visual objects.
Therefore, identifying and counting animals in video footage falls under computer vision, since it uses AI to visually detect, classify, and quantify objects in real-time or recorded feeds.
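The counting logic described above can be sketched in a few lines. This is an illustration only: `detect_animals` is a hypothetical stand-in for a real object-detection model (for example, one trained with Azure Custom Vision), so here it simply returns precomputed bounding boxes while the per-frame counting stays runnable.

```python
# Frame-by-frame counting with a hypothetical detector.
def detect_animals(frame):
    # Each detection: (label, confidence, bounding_box)
    return frame["detections"]

def count_animals(video_frames, min_confidence=0.5):
    """Return per-frame animal counts above a confidence threshold."""
    counts = []
    for frame in video_frames:
        detections = [d for d in detect_animals(frame) if d[1] >= min_confidence]
        counts.append(len(detections))
    return counts

frames = [
    {"detections": [("deer", 0.92, (10, 20, 50, 60)), ("deer", 0.81, (70, 20, 110, 60))]},
    {"detections": [("deer", 0.88, (12, 22, 52, 62)), ("bird", 0.40, (5, 5, 15, 15))]},
]
print(count_animals(frames))  # [2, 1]
```

A production system would also track objects across frames so the same animal is not double-counted; that step is omitted here for brevity.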
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question evaluates understanding of clustering—an unsupervised learning technique explained in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore fundamental principles of machine learning.” Clustering involves finding natural groupings within data without prior knowledge of output labels. The algorithm identifies similarities among data points and groups them accordingly, with each group (or cluster) containing items that are more similar to each other than to those in other groups.
Organizing documents into groups based on similarities of the text contained in the documents → Yes. This is a classic clustering application. In text analytics or natural language processing (NLP), clustering algorithms such as K-means or hierarchical clustering are used to group documents with similar content or topics. According to Microsoft Learn, “clustering identifies relationships in data and groups items that share common characteristics.” Therefore, organizing text documents based on content similarity is a textbook example of clustering.
Grouping similar patients based on symptoms and diagnostic test results → Yes. This is another example of clustering. In healthcare analytics, clustering can be used to segment patients with similar health patterns or risks. The study guide emphasizes that clustering can “discover natural groupings in data such as customers with similar buying patterns or patients with similar clinical results.” Thus, this task correctly describes unsupervised clustering because it does not involve predicting a known outcome but grouping based on similarity.
Predicting whether a person will develop mild, moderate, or severe allergy symptoms based on pollen count → No. This is a classification problem, not clustering. Classification is a supervised learning technique where the model is trained with labeled data to predict predefined categories (in this case, mild, moderate, or severe). Microsoft Learn clearly distinguishes between clustering (discovering hidden patterns) and classification (predicting predefined categories).
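The grouping idea behind both “Yes” examples can be shown with a minimal K-means sketch, written in pure Python under the assumption of 2-D points and hand-picked starting centroids. No labels are supplied, which is exactly what makes it unsupervised.

```python
# Minimal K-means sketch: alternate assignment and update steps.
def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious natural groupings; the algorithm finds them unaided.
points = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print([len(c) for c in clusters])  # [3, 3]
```

Document clustering works the same way, except each “point” is a numeric vector derived from the text (for example, TF-IDF features).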
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of regression machine learning”, regression is a type of supervised machine learning used when the target variable (the value you want to predict) is a continuous numeric value.
In this scenario, the task is to predict how many hours of overtime a delivery person will work based on the number of orders received. Both the input (number of orders) and the output (hours of overtime) are numeric variables. Since the goal is to estimate a quantitative value rather than categorize or group data, this is a classic example of a regression problem.
Regression models analyze the relationship between variables to make numerical predictions. For example, the model might learn that each additional 20 orders increases overtime by about two hours. Common algorithms used for regression include linear regression, decision tree regression, and boosted regression models. These models produce outputs such as “expected overtime = 5.6 hours,” which are continuous numeric results.
To contrast with the other options:
Classification is used for predicting categories or labels, such as “overtime required” vs. “no overtime,” or “high-risk” vs. “low-risk.” It deals with discrete outputs rather than continuous numbers.
Clustering is an unsupervised learning approach used to group similar data points based on shared characteristics, such as grouping delivery staff by performance patterns or customer types.
As emphasized in Microsoft’s Responsible AI and Machine Learning Fundamentals learning paths, regression models are ideal for numeric forecasting problems such as predicting sales, revenue, demand, or working hours.
Therefore, the correct answer is: Regression.
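The overtime scenario above can be sketched with a closed-form least-squares fit. The order counts and overtime hours here are invented for illustration and follow hours ≈ orders / 10, so the fitted line is easy to verify.

```python
# Least-squares line fit: predict a continuous value (overtime hours)
# from a numeric input (number of orders).
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

orders = [20, 40, 60, 80]
overtime = [2.0, 4.0, 6.0, 8.0]
slope, intercept = fit_line(orders, overtime)
# Predict overtime for 100 orders: a continuous numeric output.
print(round(slope * 100 + intercept, 1))  # 10.0
```

The output is a number on a continuous scale, not a category label, which is the defining trait of a regression model.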
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The Azure Text Analytics service, a component of Azure Cognitive Services, provides natural language processing (NLP) capabilities to analyze and understand text-based data. According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features and uses for Natural Language Processing (NLP)”, the Text Analytics service supports multiple text understanding tasks, such as language detection, key phrase extraction, sentiment analysis, and entity recognition.
Language Identification – Yes: Text Analytics can automatically detect the language in which text is written. This feature analyzes linguistic patterns and assigns a language code (for example, “en” for English, “es” for Spanish). It is one of the primary features described in Microsoft Learn as part of the service’s Language Detection API.
Detect Handwritten Signatures – No: Detecting handwritten signatures is not a text-based NLP task. Instead, it belongs to the computer vision domain, specifically Optical Character Recognition (OCR). The Text Analytics service only processes digital text, not handwritten or image-based data. To detect handwriting or signatures, you would use the Computer Vision OCR API, not Text Analytics.
Entity Recognition – Yes: The Text Analytics service can identify named entities—such as people, locations, organizations, dates, and quantities—within documents. This is known as Named Entity Recognition (NER), which helps extract structured information from unstructured text.
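To make the language-identification idea concrete, here is a toy heuristic that scores text by overlap with per-language stopword lists. This is only an illustration of the concept: the real Azure language-detection feature uses trained models, not word lists, and the two tiny stopword sets below are assumptions chosen for the example.

```python
# Toy language identification by stopword overlap (illustrative only).
STOPWORDS = {
    "en": {"the", "is", "and", "of", "to"},
    "es": {"el", "la", "es", "y", "de"},
}

def detect_language(text):
    """Return the language whose stopword list overlaps the text most."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(detect_language("the service is part of Azure"))   # en
print(detect_language("el servicio es parte de Azure"))  # es
```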
What are two tasks that can be performed by using computer vision? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
Predict stock prices.
Detect brands in an image.
Detect the color scheme in an image.
Translate text between languages.
Extract key phrases.
Answer:
B, C
Explanation:
According to the Microsoft Azure AI Fundamentals study guide and Microsoft Learn module “Identify features of computer vision workloads”, computer vision is an AI workload that allows systems to interpret and understand visual information from the world, such as images and videos.
Computer vision tasks typically include:
Object detection and image classification (e.g., detecting brands, logos, or items in images)
Image analysis (e.g., identifying colors, patterns, or visual features)
Face detection and recognition
Optical Character Recognition (OCR) for reading text in images
Therefore, both detecting brands and detecting color schemes in an image are clear examples of computer vision tasks because they involve analyzing visual content.
In contrast:
A. Predict stock prices → Regression task, not vision-based.
D. Translate text between languages → Natural language processing (NLP).
E. Extract key phrases → NLP as well.
Thus, the correct computer vision tasks are B and C.
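Detecting a color scheme is straightforward to sketch: tally which named color bucket each pixel falls into and report the dominant one. A tiny list of RGB tuples stands in for a decoded image, and the three-color palette is an assumption for the example; real services analyze full images with far richer palettes.

```python
from collections import Counter

def nearest_color(rgb, palette):
    """Return the palette name whose RGB value is closest to the pixel."""
    return min(palette,
               key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, palette[name])))

def dominant_color(pixels, palette):
    """Bucket every pixel and return the most common color name."""
    counts = Counter(nearest_color(p, palette) for p in pixels)
    return counts.most_common(1)[0][0]

palette = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
pixels = [(250, 10, 10), (245, 5, 20), (10, 240, 10), (255, 0, 0)]
print(dominant_color(pixels, palette))  # red
```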
You plan to deploy an Azure Machine Learning model as a service that will be used by client applications.
Which three processes should you perform in sequence before you deploy the model? To answer, move the appropriate processes from the list of processes to the answer area and arrange them in the correct order.

Options:
Answer:

Explanation:

The correct order of processes before deploying a model as a service is:
(1) Data preparation → (2) Model training → (3) Model evaluation.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore the machine learning process”, machine learning follows a structured lifecycle that involves several sequential stages. Before a model can be deployed, the data must be properly prepared, the model must be trained, and then its performance must be evaluated to ensure accuracy and reliability.
Data Preparation:The first stage involves collecting, cleaning, and transforming raw data into a usable format. Azure Machine Learning provides tools like Data Wrangler, Data Labeling, and Data Transformation pipelines to ensure the dataset is accurate and consistent. As per Microsoft Learn, “data preparation is essential to remove noise, handle missing values, and split the dataset into training and testing sets.” This step ensures the model learns from quality input.
Model Training:In this step, algorithms are applied to the prepared training data to create a predictive model. The system learns patterns and relationships from the data. Azure Machine Learning allows model training using AutoML, custom code, or designer pipelines. The training process produces a model that can make predictions, but it still needs to be tested before deployment.
Model Evaluation:Once trained, the model’s performance is tested against unseen (test) data. Evaluation metrics like accuracy, precision, recall, and F1-score are analyzed to verify if the model meets business and performance requirements. Microsoft Learn defines this stage as “assessing the model’s performance to determine its readiness for deployment.”
After these three processes, the model can then be deployed as a web service using Azure Machine Learning endpoints. Model retraining happens later when new data becomes available, and data encryption is a security process, not part of model development steps.
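The three stages above can be sketched end to end. The “model” here is a deliberately simple threshold rule on made-up, cleanly separable data, so each stage stays visible without any ML library; real pipelines would use Azure ML components for the same steps.

```python
# (1) Data preparation: assemble labelled data and split train/test.
data = [(x, 1 if x >= 50 else 0) for x in range(100)]
train, test = data[::2], data[1::2]  # even rows train, odd rows test

# (2) Model training: learn a decision threshold from training data only.
positives = [x for x, y in train if y == 1]
negatives = [x for x, y in train if y == 0]
threshold = (min(positives) + max(negatives)) / 2  # 49.0

# (3) Model evaluation: accuracy on held-out test data the model never saw.
correct = sum((x > threshold) == bool(y) for x, y in test)
print(correct / len(test))  # 1.0 on this cleanly separable toy data
```

Only after the evaluation metric meets requirements would the model move on to deployment as a service.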
You are developing a chatbot solution in Azure.
Which service should you use to determine a user’s intent?
Options:
Translator
Azure Cognitive Search
Speech
Language
Answer:
D
Explanation:
In Azure, the Language service unifies several natural language capabilities, including LUIS, QnA Maker, and Text Analytics, into one comprehensive service. To determine a user’s intent in a chatbot, you use the Conversational Language Understanding (CLU) feature of the Language service, which is the evolution of LUIS.
CLU helps chatbots and applications comprehend natural language input by identifying the intent (the purpose of the user’s statement) and extracting entities (important details). For example, when a user types “Book a meeting for tomorrow,” the model recognizes the intent (BookMeeting) and the entity (tomorrow).
The other options do not determine intent:
Translator (A) is used for language translation.
Azure Cognitive Search (B) retrieves documents based on search queries.
Speech (C) converts audio to text but doesn’t analyze meaning.
Thus, to determine a user’s intent in a chatbot scenario, the correct service is D. Language.
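The intent/entity split described above can be mimicked with a toy keyword matcher. This is purely illustrative: real CLU models are trained on labelled utterances rather than keyword rules, and the intent names and entity list below are assumptions for the example.

```python
# Toy intent recognition: pick the intent with the most keyword overlap,
# then pull out any known entities from the utterance.
INTENT_KEYWORDS = {
    "BookMeeting": {"book", "schedule", "meeting"},
    "CheckWeather": {"weather", "forecast", "rain"},
}
KNOWN_ENTITIES = {"today", "tomorrow", "monday"}

def understand(utterance):
    words = set(utterance.lower().replace(".", "").split())
    intent = max(INTENT_KEYWORDS, key=lambda i: len(words & INTENT_KEYWORDS[i]))
    entities = sorted(words & KNOWN_ENTITIES)
    return intent, entities

print(understand("Book a meeting for tomorrow"))  # ('BookMeeting', ['tomorrow'])
```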
Stating the source of the data used to train a model is an example of which responsible AI principle?
Options:
fairness
transparency
reliability and safety
privacy and security
Answer:
B
Explanation:
According to Microsoft’s Responsible AI Principles, Transparency means that AI systems should clearly communicate how they operate, including data sources, limitations, and decision-making processes. Stating the source of data used to train a model helps users understand where the model’s knowledge comes from, enabling informed trust and accountability.
Transparency ensures that organizations disclose relevant details about data collection and model design, especially for compliance, fairness, and reproducibility.
Other options are incorrect:
A. Fairness: Focuses on avoiding bias and ensuring equitable outcomes.
C. Reliability and safety: Ensures AI performs consistently and safely.
D. Privacy and security: Protects user data and maintains confidentiality.
Thus, the principle illustrated by disclosing training data sources is Transparency.
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

The correct answer is object detection. According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn module “Explore computer vision”, object detection is the process of identifying and locating objects within an image or video. The primary characteristic of object detection, as emphasized in the study guide, is its ability to return a bounding box around each detected object along with a corresponding label or class.
In this question, the task involves returning a bounding box that indicates the location of a vehicle in an image. This is the exact definition of object detection — identifying that the object exists (a vehicle) and determining its position within the frame. Microsoft Learn clearly differentiates this from other computer vision tasks. Image classification, for example, only determines what an image contains as a whole (for instance, “this image contains a vehicle”), but it does not indicate where in the image the object is located. Optical character recognition (OCR) is specifically used for extracting printed or handwritten text from images, and semantic segmentation involves classifying every pixel in an image to understand boundaries in greater detail, often used in autonomous driving or medical imaging.
The official AI-900 guide highlights object detection as one of the key computer vision workloads supported by Azure Computer Vision, Custom Vision, and Azure Cognitive Services. These services are designed to detect multiple instances of various object types in a single image, outputting bounding boxes and confidence scores for each.
Therefore, based on the AI-900 official curriculum and Microsoft Learn concepts, returning a bounding box that shows the location of a vehicle is a textbook example of object detection, as it involves both recognition and localization of the object within the image frame.
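A detection result is essentially a label plus a bounding box, and a common way to check how well a predicted box matches the truth is intersection-over-union (IoU). The sketch below assumes boxes in `(x1, y1, x2, y2)` corner format; the sample coordinates are invented for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# A detection as an object-detection service might return it.
detection = {"label": "vehicle", "confidence": 0.91, "box": (40, 30, 120, 90)}
ground_truth = (50, 30, 120, 90)
print(round(iou(detection["box"], ground_truth), 2))  # 0.88
```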
Extracting relationships between data from large volumes of unstructured data is an example of which type of AI workload?
Options:
computer vision
knowledge mining
natural language processing (NLP)
anomaly detection
Answer:
B
Explanation:
Extracting relationships and insights from large volumes of unstructured data (such as documents, text files, or images) aligns with the Knowledge Mining workload in Microsoft Azure AI. According to the Microsoft AI Fundamentals (AI-900) study guide and Microsoft Learn module “Describe features of common AI workloads,” knowledge mining involves using AI to search, extract, and structure information from vast amounts of unstructured or semi-structured content.
In a typical knowledge mining solution, tools like Azure AI Search and Azure AI Document Intelligence work together to index data, apply cognitive skills (such as OCR, key phrase extraction, and entity recognition), and then enable users to discover relationships and patterns through intelligent search. The process transforms raw content into searchable knowledge.
The key characteristics of knowledge mining include:
Using AI to extract entities and relationships between data points.
Applying cognitive skills to text, images, and documents.
Creating searchable knowledge stores from unstructured data.
Hence, B. Knowledge Mining is correct.
The other options—computer vision, NLP, and anomaly detection—deal with image recognition, language understanding, and data irregularities, respectively, not large-scale information extraction.
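The index-then-search pattern at the heart of knowledge mining can be sketched in miniature. Here, lowercased tokens stand in for the cognitive-skill enrichment step (key phrase or entity extraction), and a simple inverted index stands in for the search index; Azure AI Search does the same thing at scale with real skills.

```python
from collections import defaultdict

documents = {
    1: "Invoice from Contoso for cloud services",
    2: "Contract between Contoso and Fabrikam",
    3: "Receipt for office supplies",
}

# "Enrichment": tokenize each document and build an inverted index
# mapping term -> set of document IDs that contain it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(term):
    """Return sorted IDs of documents containing the term."""
    return sorted(index.get(term.lower(), set()))

print(search("Contoso"))  # [1, 2]
```

The unstructured text has become a searchable knowledge store, which is the essence of the workload.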
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:
Reliability & Safety
https://en.wikipedia.org/wiki/Tay_(bot)
“To build trust, it's critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. It's also important to be able to verify that these systems are behaving as intended under actual operating conditions. How they behave and the variety of conditions they can handle reliably and safely largely reflects the range of situations and circumstances that developers anticipate during design and testing. We believe that rigorous testing is essential during system development and deployment to ensure AI systems can respond safely in unanticipated situations and edge cases, don't have unexpected performance failures, and don't evolve in ways that are inconsistent with original expectations”
You have the following apps:
• App1: Understands the public perception of a brand or topic
• App2: Applies profanity filters to speech-to-text
What does each app use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

App1: “Understands the public perception of a brand or topic” → Sentiment analysis
According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn’s Natural Language Processing (NLP) documentation, Sentiment analysis is a feature of the Azure AI Language Service that determines the emotional tone or attitude expressed in text. It classifies text as positive, negative, neutral, or mixed, which makes it ideal for analyzing customer opinions, brand perception, or product feedback.
For example, an organization can use sentiment analysis to process customer reviews or social media posts to determine how people feel about a particular brand or topic. This insight helps companies assess customer satisfaction, public perception, and marketing impact.
App2: “Applies profanity filters to speech-to-text” → Language detection
The task of applying profanity filters occurs during or after speech-to-text transcription, which involves identifying the language used so that the correct filter can be applied. Language detection is an NLP feature that determines which language is being spoken or written. Once the language is detected, appropriate profanity filtering rules are automatically applied to remove or mask offensive words from transcribed text.
Other options such as Captioning or Named Entity Recognition (NER) are not relevant:
Captioning describes images or videos, not speech filtering.
NER identifies people, locations, or organizations but does not handle profanity or language detection.
Therefore, based on Azure AI NLP features:
App1 uses Sentiment analysis
App2 uses Language detection
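App1's task can be illustrated with a toy lexicon-based sentiment scorer. This is a sketch of the idea only: the Azure AI Language service uses trained models rather than word lists, and the two small word sets below are assumptions for the example.

```python
# Toy sentiment analysis: count positive vs. negative words.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text):
    """Classify text as positive, negative, or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this brand, the service is excellent"))  # positive
print(sentiment("terrible support and poor quality"))            # negative
```

Run over a stream of social posts mentioning a brand, even a classifier like this yields a crude perception signal; the real service returns per-sentence scores and a mixed category as well.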
You have an Azure Machine Learning model that predicts product quality. The model has a training dataset that contains 50,000 records. A sample of the data is shown in the following table.

For each of the following Statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question tests the understanding of features and labels in machine learning, a core concept covered in the Microsoft Azure AI Fundamentals (AI-900) syllabus under “Describe fundamental principles of machine learning on Azure.”
In supervised machine learning, data is divided into features (inputs) and labels (outputs).
Features are the independent variables — measurable properties or characteristics used by the model to make predictions.
Labels are the dependent variables — the target outcome the model is trained to predict.
From the provided dataset, the goal of the Azure Machine Learning model is to predict product quality (Pass or Fail). Therefore:
Mass (kg) is a feature – Yes. “Mass (kg)” represents an input variable used by the model to learn patterns that influence product quality. It helps the algorithm understand how variations in mass might correlate with passing or failing the quality test. Thus, it is correctly classified as a feature.
Quality Test is a label – Yes. The “Quality Test” column indicates the outcome of the manufacturing process, marked as either Pass or Fail. This is the target the model tries to predict during training. In Azure ML terminology, this column is the label, as it represents the dependent variable.
Temperature (C) is a label – No. “Temperature (C)” is an input that helps the model determine quality outcomes, not the outcome itself. It influences the quality result but is not the value being predicted. Therefore, temperature is another feature, not a label.
In conclusion, per Microsoft Learn and AI-900 study materials, features are measurable inputs (like mass and temperature), while the label is the target output (like the quality test result).
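The feature/label split can be shown directly on rows shaped like the quality dataset. The column names match the question; the numeric values are invented for illustration.

```python
# Rows shaped like the product-quality dataset.
records = [
    {"Mass (kg)": 1.2, "Temperature (C)": 75, "Quality Test": "Pass"},
    {"Mass (kg)": 1.9, "Temperature (C)": 92, "Quality Test": "Fail"},
]

LABEL = "Quality Test"  # the target column the model must predict

# Everything except the label column is a feature (model input).
features = [{k: v for k, v in r.items() if k != LABEL} for r in records]
labels = [r[LABEL] for r in records]

print(labels)               # ['Pass', 'Fail']
print(sorted(features[0]))  # ['Mass (kg)', 'Temperature (C)']
```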
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

In the Microsoft Azure AI Fundamentals (AI-900) curriculum, computer vision capabilities refer to artificial intelligence systems that can analyze and interpret visual content such as images and videos. The Azure AI Vision and Face API services provide pretrained models for detecting, recognizing, and analyzing visual information, enabling developers to build intelligent applications that understand what they "see."
When asked how computer vision capabilities can be deployed, the correct answer is to integrate a face detection feature into an app. This aligns with Microsoft Learn’s module “Describe features of computer vision workloads,” which explains that computer vision can identify objects, classify images, detect faces, and extract text (OCR). The Face API, a part of Azure AI Vision, specifically provides face detection, verification, and emotion recognition capabilities.
Integrating these services into an application allows it to perform actions such as:
Detecting human faces in photos or video streams.
Recognizing facial attributes like age, emotion, or head pose.
Enabling secure authentication based on face recognition.
The other options are incorrect because they relate to different AI workloads:
Develop a text-based chatbot for a website: This falls under Conversational AI, implemented with Azure Bot Service or Conversational Language Understanding (CLU).
Identify anomalous customer behavior on an online store: This task relates to machine learning and anomaly detection models, not computer vision.
Suggest automated responses to incoming email: This uses Natural Language Processing (NLP) capabilities, not visual analysis.
Therefore, the correct and Microsoft-verified completion of the statement is:
“Computer vision capabilities can be deployed to integrate a face detection feature into an app.”
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of common AI workloads”, OCR (Optical Character Recognition) is a Computer Vision technology that detects and extracts printed or handwritten text from images and scanned documents. OCR allows organizations and individuals to convert physical or image-based text into machine-readable, editable, and searchable digital text.
In the context of this question, a historian working with old newspaper articles or archival documents would use OCR to digitize printed content. For instance, the historian can scan or photograph old newspaper pages, and then use an OCR tool—such as Azure Computer Vision’s OCR API—to automatically recognize and extract the textual content from those images. This process enables the historian to store, edit, and analyze the content digitally without manually typing everything.
OCR works by using deep learning algorithms trained on thousands of text samples. The system analyzes patterns, shapes, and spatial relationships of characters to identify text accurately, even from low-quality or aged paper documents. Once extracted, the digital text can be indexed, translated, or processed further using Natural Language Processing (NLP) tools for content analysis.
Now, addressing the other options:
Facial analysis is used to detect emotions, age, or gender from human faces—irrelevant to text digitization.
Image classification identifies entire images by categories (e.g., cat, car, flower).
Object detection identifies and locates multiple objects within an image but doesn’t extract text.
Therefore, per the AI-900 learning objectives under the Computer Vision workload, the correct and verified completion is: OCR (optical character recognition).
You plan to apply Text Analytics API features to a technical support ticketing system.
Match the Text Analytics API features to the appropriate natural language processing scenarios.
To answer, drag the appropriate feature from the column on the left to its scenario on the right. Each feature may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

Box1: Sentiment analysis
Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral.
Box 2: Broad entity extraction
Broad entity extraction: Identify important concepts in text, including key phrases and named entities such as people, places, and organizations.
Box 3: Entity Recognition
Named Entity Recognition: Identify and categorize entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. Well-known entities are also recognized and linked to more information on the web.
Match the Azure Cognitive Services to the appropriate Al workloads.
To answer, drag the appropriate service from the column on the left to its workload on the right. Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.

Options:
Answer:

Explanation:

The correct matches are Custom Vision, Form Recognizer, and Face — each corresponding to a distinct capability under Azure Cognitive Services as described in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules on Computer Vision workloads.
Custom Vision → Identify objects in an imageThe Custom Vision service is part of the Azure Cognitive Services suite that enables developers to train custom image classification and object detection models. Unlike the prebuilt Computer Vision API, Custom Vision allows users to upload their own labeled images and teach the model to recognize specific objects relevant to their business context. The AI-900 syllabus explains that Custom Vision is ideal for tasks such as identifying products on a shelf, categorizing images, or detecting defects in manufacturing.
Form Recognizer → Automatically import data from an invoice to a databaseForm Recognizer is a document processing AI service that extracts structured data from forms, receipts, and invoices. It uses optical character recognition (OCR) combined with layout and key-value pair detection to automatically capture information such as invoice numbers, amounts, and vendor names. The AI-900 study materials highlight this service under the Document Intelligence category, emphasizing its ability to streamline data entry and business automation workflows by importing extracted data directly into databases or applications.
Face → Identify people in an imageThe Face service provides advanced facial detection and recognition capabilities. It can locate faces in images, compare similarities between faces, identify known individuals, and even detect facial attributes such as age or emotion. The AI-900 course classifies this under Computer Vision services for person identification and security-related use cases such as access control or identity verification.
Thus, each mapping aligns precisely with the AI-900 official learning outcomes on Cognitive Services capabilities:
Custom Vision → Object recognition
Form Recognizer → Data extraction from forms
Face → People identification
✅ Final verified configuration:
Custom Vision → Identify objects in an image
Form Recognizer → Automatically import data from an invoice to a database
Face → Identify people in an image
Match the machine learning tasks to the appropriate scenarios.
To answer, drag the appropriate task from the column on the left to its scenario on the right. Each task may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

This question tests your understanding of machine learning workflow tasks as described in the Microsoft Azure AI Fundamentals (AI-900) study guide and the Microsoft Learn module “Explore the machine learning process.” The AI-900 curriculum divides the machine learning lifecycle into key phases: data preparation, feature engineering and selection, model training, model evaluation, and model deployment. Each phase has specific tasks designed to prepare, build, and assess predictive models before deployment.
Examining the values of a confusion matrix → Model evaluation. In Azure Machine Learning, evaluating a model involves checking its performance using metrics such as accuracy, precision, recall, and F1-score. The confusion matrix is one of the most common tools for this purpose. According to Microsoft Learn, “model evaluation is the process of assessing a trained model’s performance against test data to ensure reliability before deployment.” Analyzing the confusion matrix helps determine whether predictions align with actual outcomes, making this task part of model evaluation.
Splitting a date into month, day, and year fields → Feature engineering. Feature engineering refers to transforming raw data into features that better represent the underlying patterns to improve model performance. The study guide describes it as “the process of creating new input features from existing data.” Splitting a date field into separate numeric fields (month, day, year) is a classic example of feature engineering because it enables the model to learn from temporal patterns that might otherwise remain hidden.
Picking temperature and pressure to train a weather model → Feature selection. Feature selection involves identifying the most relevant variables that have predictive power for the model. As defined in Microsoft Learn, “feature selection is the process of choosing the most useful subset of input features for training.” In this scenario, selecting temperature and pressure variables as inputs for a weather prediction model fits perfectly within the feature selection stage.
Therefore, the correct matches are:
✅ Examining confusion matrix → Model evaluation
✅ Splitting date field → Feature engineering
✅ Picking temperature & pressure → Feature selection
You have an Azure Machine Learning pipeline that contains a Split Data module. The Split Data module outputs to a Train Model module and a Score Model module. What is the function of the Split Data module?
Options:
selecting columns that must be included in the model
creating training and validation datasets
diverting records that have missing data
scaling numeric variables so that they are within a consistent numeric range
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of Azure Machine Learning”, the Split Data module in an Azure Machine Learning pipeline is used to divide a dataset into two or more subsets—typically a training dataset and a testing (or validation) dataset. This is a fundamental step in the supervised machine learning workflow because it allows for accurate evaluation of the model’s performance on data it has not seen during training.
In a typical workflow, the data flows as follows:
The dataset is first preprocessed (cleaned, normalized, or transformed).
The Split Data module divides this dataset into two parts — one for training the model and another for testing or scoring the model’s accuracy.
The Train Model module uses the training data output from the Split Data module to learn patterns and build a predictive model.
The Score Model module then takes the trained model and applies it to the test data output to measure how well the model performs on unseen data.
The Split Data module typically uses a defined ratio (such as 0.7:0.3 or 70% for training and 30% for testing). This ensures that the trained model can generalize well to new, real-world data rather than simply memorizing the training examples.
Now, addressing the incorrect options:
A. Selecting columns that must be included in the model is done by the Select Columns in Dataset module.
C. Diverting records that have missing data is handled by the Clean Missing Data module.
D. Scaling numeric variables is done using the Normalize Data or Edit Metadata modules.
Therefore, based on the official AI-900 learning objectives, the verified and most accurate answer is B. creating training and validation datasets.
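Outside the designer, the same 70/30 row split can be reproduced with scikit-learn (a sketch with toy data; `train_test_split` shuffles rows by default):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy dataset: 10 rows of features and 10 matching labels
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Split the rows: 70% for training, 30% for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

print(len(X_train), len(X_test))  # 7 training rows, 3 testing rows
```

As in the designer pipeline, the training rows feed model fitting and the held-out rows feed scoring and evaluation.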
What are two common use cases for generative AI solutions? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
generating draft responses for customer service agents
creating original artwork from textual descriptions
predicting sales revenue based on historical data
classifying email messages as spam or non-spam
Answer:
A, B
Explanation:
Generative AI focuses on creating new content rather than just analyzing existing data. As per Microsoft’s AI-900 curriculum and Azure OpenAI documentation, typical use cases include generating text, images, code, or other creative outputs based on input prompts.
A. Generating draft responses for customer service agents — ✅ Correct. GPT-based models can automatically generate draft replies to customer queries, enabling agents to refine responses and increase efficiency.
B. Creating original artwork from textual descriptions — ✅ Correct. DALL-E, available through Azure OpenAI, can produce unique images based on natural language prompts.
Options C and D are incorrect because they involve predictive or classification models, not generative ones:
C. Predicting sales revenue → Regression (machine learning).
D. Classifying email messages → Classification (machine learning).
Correct answers: A and B.
You need to identify groups of rows with similar numeric values in a dataset. Which type of machine learning should you use?
Options:
clustering
regression
classification
Answer:
A
Explanation:
When you need to identify groups of rows with similar numeric values in a dataset, the correct machine learning approach is clustering. This method belongs to unsupervised learning, where the model groups data points based on similarity without using pre-labeled training data.
In Azure AI-900 study modules, clustering is introduced as a technique for discovering natural groupings in data. For instance, clustering could be used to group customers with similar purchase histories or to find products with similar features. The algorithm—such as K-means or hierarchical clustering—calculates distances between data points and organizes them into clusters based on how close they are numerically or statistically.
The other options are incorrect:
B. Regression predicts continuous numeric values (e.g., predicting sales or prices).
C. Classification assigns data to predefined categories (e.g., spam or not spam).
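As a sketch of the K-means approach mentioned above, scikit-learn can group rows with similar numeric values (toy data; the integer cluster labels themselves are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious numeric groups: rows near (0, 0) and rows near (10, 10)
rows = np.array([[0.1, 0.2], [0.0, 0.3], [10.1, 9.9], [9.8, 10.2]])

# Fit K-means with two clusters; no labels are supplied (unsupervised)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rows)

# Rows with similar values receive the same cluster label
print(kmeans.labels_)
```

Note that unlike regression or classification, nothing here tells the algorithm what the "right" groups are; it discovers them from the numeric distances alone.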
You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with predefined answers.
Which two AI services should you use to achieve the goal? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options:
Azure Bot Service
Azure Machine Learning
Translator
Language Service
Answer:
A, D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure,” to create a chatbot that can automatically answer simple, predefined user questions, you need two main Azure AI components — one to handle the conversation interface and another to manage the knowledge and language understanding aspect.
Azure Bot Service (A): This service is used to create, manage, and deploy chatbots that interact with users through text or voice. The Bot Service provides the framework for conversation management, user interaction, and channel integration (e.g., webchat, Microsoft Teams, Skype). It serves as the backbone of conversational AI applications and supports integration with other cognitive services like the Language Service.
Language Service (D): The Azure AI Language Service (which now includes Question Answering, formerly QnA Maker) is used to build and manage the knowledge base of predefined questions and answers. This service enables the chatbot to understand user queries and return appropriate responses automatically. The QnA capability allows you to import documents, FAQs, or structured data to create a searchable database of responses for the bot.
Why the other options are incorrect:
B. Azure Machine Learning: This service is used for building, training, and deploying custom machine learning models, not for chatbot Q & A automation.
C. Translator: This service performs language translation, which is not required for answering predefined questions unless multilingual support is specifically needed.
Therefore, to implement a chatbot that can answer simple, repetitive user questions and reduce the load on human operators, you combine Azure Bot Service (for interaction) with the Language Service (for question-answering intelligence).
You need to convert handwritten notes into digital text.
Which type of computer vision should you use?
Options:
optical character recognition (OCR)
object detection
image classification
facial detection
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation on Azure AI Vision, OCR is a computer vision technology that detects and extracts printed or handwritten text from images, scanned documents, or photographs. The OCR feature in Azure AI Vision can analyze images containing handwritten notes, recognize the characters, and convert them into machine-readable digital text.
This process is ideal for digitizing handwritten meeting notes, forms, or classroom materials. OCR works by identifying text regions in an image, segmenting characters or words, and then applying language models to interpret them correctly. Azure’s OCR capabilities support multiple languages and can handle varied handwriting styles.
Other options are incorrect because:
B. Object detection identifies and locates objects (like cars, animals, or furniture) within an image, not text.
C. Image classification assigns an image to a predefined category (e.g., “dog” or “cat”) rather than extracting text.
D. Facial detection detects or recognizes human faces, not written text.
Therefore, to convert handwritten notes into digital text, the correct computer vision technique is Optical Character Recognition (OCR).
You need to identify street names based on street signs in photographs.
Which type of computer vision should you use?
Options:
object detection
optical character recognition (OCR)
image classification
facial recognition
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of computer vision workloads on Azure”, Optical Character Recognition (OCR) is a core computer vision workload that enables AI systems to detect and extract text from images or scanned documents.
In this scenario, the goal is to identify street names from street signs in photographs. Since the text is embedded within images, OCR is the correct technology to use. OCR works by analyzing the visual patterns of letters, numbers, and symbols, then converting them into machine-readable text. Azure’s Computer Vision API and Azure AI Vision Service provide OCR capabilities that can extract printed or handwritten text from pictures, documents, and even real-time camera feeds.
Let’s analyze the other options:
A. Object detection: Identifies and locates objects (like cars, people, or street signs) but not the text written on them.
C. Image classification: Classifies an entire image into categories (e.g., “street scene” or “traffic sign”) but doesn’t extract text content.
D. Facial recognition: Identifies or verifies people by analyzing facial features, unrelated to text extraction.
Therefore, identifying street names on street signs is a text extraction problem, making Optical Character Recognition (OCR) the most accurate and verified answer per Microsoft Learn content.
You use natural language processing to process text from a Microsoft news story.
You receive the output shown in the following exhibit.

Which type of natural language processing was performed?
Options:
entity recognition
key phrase extraction
sentiment analysis
translation
Answer:
A
Explanation:
https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/overview
You can provide the Text Analytics service with unstructured text and it will return a list of entities in the text that it recognizes. The service can also provide links to more information about that entity on the web. An entity is essentially an item of a particular type or category, and in some cases a subtype, such as a person, location, or organization.
https://docs.microsoft.com/en-us/learn/modules/analyze-text-with-text-analytics-service/2-get-started-azure
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

During model training, a portion of the dataset (commonly 70–80%) is used to teach the machine learning algorithm to identify patterns and relationships between input features and the output label. The remaining data (usually 20–30%) is held back to evaluate the model’s performance and verify its accuracy on unseen data. This ensures the model is not overfitted (too tightly fitted to training data) and can generalize well to new inputs.
Key steps highlighted in Microsoft Learn materials:
Model Training: Use the training data to fit the model — the algorithm learns relationships between input features and labels.
Model Evaluation: Use the test or validation data to assess the accuracy, precision, recall, or other metrics of the trained model.
Model Deployment: Once validated, the model is deployed to make real-world predictions.
Other options explained:
Feature engineering: Involves preparing and transforming input data, not splitting datasets for training and testing.
Time constraints: Not a machine learning process step.
Feature stripping: Not a recognized ML concept.
MLflow models: Refers to an open-source tool for tracking and managing models, not dataset splitting or training.
Thus, when you use a portion of the dataset to prepare and train a machine learning model, and retain the rest to verify results, the process is known as model training.
You are building a Language Understanding model for an e-commerce business.
You need to ensure that the model detects when utterances are outside the intended scope of the model.
What should you do?
Options:
Test the model by using new utterances
Add utterances to the None intent
Create a prebuilt task entity
Create a new model
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of conversational AI workloads on Azure”, a Language Understanding (LUIS) model is designed to interpret natural language input by identifying intents (the purpose of an utterance) and entities (specific data items in the utterance).
Every LUIS model automatically includes a special intent called “None.” This intent is used to handle utterances that do not fall into any of the model’s defined intents. Adding examples of irrelevant or out-of-scope utterances to the None intent helps the model learn to recognize when a user’s input does not match any existing categories.
For example, if your e-commerce chatbot handles intents such as “TrackOrder” and “CancelOrder,” but a user says “What’s your favorite color?”, that input should be mapped to the None intent so the bot can respond appropriately, such as “I’m not sure how to answer that.”
The AI-900 curriculum emphasizes that including diverse None intent examples improves model robustness and prevents false matches, thereby enhancing user experience.
Other options are incorrect:
A. Test the model by using new utterances: Testing is important but does not define how to detect out-of-scope inputs.
C. Create a prebuilt task entity: Entities extract specific data but are unrelated to intent classification.
D. Create a new model: Unnecessary; handling out-of-scope utterances is done within the same model via the None intent.
✅ Final Answer: B. Add utterances to the None intent
You need to analyze images of vehicles on a highway and measure the distance between the vehicles. Which type of computer vision model should you use?
Options:
object detection
image classification
facial recognition
optical character recognition (OCR)
Answer:
A
Explanation:
In this scenario, analyzing vehicle images and measuring the distance between them requires first detecting each vehicle’s position in the image. Object detection models can locate and identify multiple objects (such as cars, trucks, or motorcycles) by assigning bounding boxes. Once detected, their coordinates can be used to calculate distances or spacing.
Image classification only assigns a single label per image, not per object. Facial recognition is human-focused, and OCR deals with text extraction. Thus, object detection is the correct model type for this task.
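Once an object detection model returns bounding boxes, the spacing between vehicles can be estimated from the box centers (a minimal sketch; the pixel coordinates are made up, and a real system must also convert pixel distance into physical distance):

```python
import math

def box_center(box):
    """Center (x, y) of a bounding box given as (left, top, width, height)."""
    left, top, width, height = box
    return (left + width / 2, top + height / 2)

def center_distance(box_a, box_b):
    """Euclidean distance in pixels between two bounding-box centers."""
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    return math.hypot(ax - bx, ay - by)

# Hypothetical detections for two vehicles in the same lane
car_1 = (100, 200, 50, 30)   # center (125, 215)
car_2 = (400, 200, 50, 30)   # center (425, 215)
print(center_distance(car_1, car_2))  # 300.0
```

This is why detection (which localizes each object) is required here, while classification (one label per image) would give no positions to measure between.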
You are creating an app to help employees write emails and reports based on user prompts. What should you use?
Options:
Azure AI Speech
Azure OpenAI in Foundry Models
Azure AI Vision
Azure Machine Learning studio
Answer:
B
Explanation:
For an app that helps employees write emails and reports based on user prompts, you need a text generation model capable of understanding natural language instructions and producing coherent, contextually appropriate output. Azure OpenAI GPT models—available through Azure AI Foundry (formerly Azure OpenAI Studio)—are specifically designed for such generative tasks.
By integrating GPT-3.5 or GPT-4, the app can analyze prompts like “Write a professional email to a client about project updates” and automatically generate polished text in seconds.
The other options do not fit:
A. Azure AI Speech: Converts spoken language to text or text to speech; not suitable for generating written content.
C. Azure AI Vision: Processes and analyzes images or video content.
D. Azure Machine Learning Studio: Used for training, testing, and deploying custom ML models, not directly for content generation.
Therefore, to create a writing-assistance app for emails and reports, the correct solution is B. Azure OpenAI in Foundry Models using GPT-based language generation.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:
Safety system.
According to the Microsoft Learn documentation and the AI-900: Microsoft Azure AI Fundamentals official study guide, the safety system layer in generative AI architecture plays a crucial role in monitoring, filtering, and mitigating harmful or unsafe model outputs. This layer works alongside the model and user experience layers to ensure that generative AI systems—such as those powered by Azure OpenAI—produce responses that are safe, aligned, and responsible.
The safety system layer uses various techniques including content filtering, prompt moderation, and policy enforcement to prevent outputs that could be harmful, biased, misleading, or inappropriate. It evaluates both user inputs (prompts) and model-generated outputs to identify and block unsafe or unethical content. The system might use predefined rules, classifiers, or human feedback signals to decide whether to allow, modify, or stop a response.
In contrast, the other layers serve different purposes:
The model layer contains the core large language or generative model (e.g., GPT or DALL-E) that processes inputs and produces outputs.
The metaprompt and grounding layer ensures the model’s responses are contextually relevant and factually supported, often linking to organizational data sources or system prompts.
The user experience layer defines how users interact with the AI system, including the interface and conversational flow, but does not manage safety enforcement.
Therefore, the layer that uses system inputs and context to mitigate harmful outputs from a generative AI model is the Safety system layer.
This aligns with Microsoft’s responsible AI principles—Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability—ensuring generative AI operates ethically and safely.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
✅ Yes – Extract key phrases
❌ No – Generate press releases
✅ Yes – Detect sentiment
The Azure AI Language service is a powerful set of natural language processing (NLP) tools within Azure Cognitive Services, designed to analyze, understand, and interpret human language in text form. According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation, this service includes several capabilities such as key phrase extraction, sentiment analysis, language detection, named entity recognition (NER), and question answering.
Extract key phrases from documents → Yes. The Key Phrase Extraction feature identifies the most relevant words or short phrases within a document, helping summarize important topics. This is useful for indexing, summarizing, or organizing content. For instance, from “Azure AI Language helps analyze customer feedback,” it may extract “Azure AI Language” and “customer feedback” as key phrases.
Generate press releases based on user prompts → No. This functionality falls under generative AI, specifically within Azure OpenAI Service, which uses models such as GPT-4 for text creation. The Azure AI Language service focuses on analyzing and understanding existing text, not generating new content like press releases or articles.
Build a social media feed analyzer to detect sentiment → Yes. The Sentiment Analysis capability determines the emotional tone (positive, neutral, negative, or mixed) of text data, making it ideal for analyzing social media posts, reviews, or feedback. Businesses often use this to gauge customer satisfaction or brand reputation.
In summary, the Azure AI Language service analyzes text to extract insights and detect sentiment but does not generate new textual content.
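The key-phrase idea can be illustrated with a naive frequency-based extractor (a toy stand-in, not the Azure AI Language algorithm; the stopword list here is deliberately abbreviated):

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "and", "helps", "analyze"}

def naive_key_phrases(text, top_n=2):
    """Return the most frequent non-stopword tokens as rough key phrases."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

text = "Azure AI Language helps analyze customer feedback. Customer feedback matters."
print(naive_key_phrases(text))  # → ['customer', 'feedback']
```

The real service uses trained language models rather than raw counts, but the goal is the same: surface the terms that best summarize the document.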
You need to predict the income range of a given customer by using the following dataset.

Which two fields should you use as features? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
Education Level
Last Name
Age
Income Range
First Name
Answer:
A, C
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe core concepts of machine learning on Azure”, when building a predictive machine learning model, features are the input variables used by the algorithm to predict the target label. The target label is the output or value the model is trained to predict.
In this dataset, the target variable is clearly the Income Range, since the goal is to predict a customer’s income bracket. Therefore, Income Range (D) is the label, not a feature. Features must be other attributes that help the model make this prediction.
The fields Education Level (A) and Age (C) are the most relevant features because both can logically and statistically influence income level.
Education Level is a categorical variable that often correlates strongly with income. Individuals with higher education levels tend to earn more on average, making this an important predictor.
Age is a numerical variable that typically affects income level due to factors such as experience and career progression.
By contrast:
First Name (E) and Last Name (B) are irrelevant as features because they are identifiers, not meaningful predictors of income. Including them could lead to bias or model overfitting without contributing to accurate predictions.
Hence, according to AI-900 principles, the features used to train a model predicting income range would be Education Level and Age.
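In code, separating the features from the label is simply column selection (a sketch using a hypothetical DataFrame mirroring the dataset above):

```python
import pandas as pd

# Hypothetical customer records matching the fields in the question
customers = pd.DataFrame({
    "First Name": ["Ana", "Ben"],
    "Last Name": ["Silva", "Ng"],
    "Age": [34, 52],
    "Education Level": ["Bachelor", "Master"],
    "Income Range": ["50-75k", "75-100k"],
})

# Features: attributes with predictive value for the target
X = customers[["Education Level", "Age"]]
# Label: the value the model is trained to predict
y = customers["Income Range"]

print(list(X.columns), y.name)
```

Name columns are deliberately excluded; an identifier that is unique per person carries no generalizable signal and can only encourage overfitting.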
You have a website that includes customer reviews.
You need to store the reviews in English and present the reviews to users in their respective language by recognizing each user’s geographical location.
Which type of natural language processing workload should you use?
Options:
translation
language modeling
key phrase extraction
speech recognition
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) syllabus and Microsoft Learn module “Describe features of natural language processing (NLP) workloads on Azure,” translation is a core NLP workload that converts text from one language into another while maintaining meaning and context.
In this scenario, the website stores reviews in English and must present them in the user’s native language based on geographical location. This directly requires a translation workload, which uses Azure Cognitive Services — specifically, the Translator service — to automatically translate content dynamically for each user.
Other options explained:
B. Language modeling involves predicting the next word in a sentence or understanding linguistic patterns; it’s used in model training, not translation.
C. Key phrase extraction identifies main ideas in text, not language conversion.
D. Speech recognition converts spoken words into written text but does not perform translation or handle geographic adaptation.
Microsoft’s Translator service supports real-time text translation, multi-language detection, and context preservation, making it ideal for global websites. The AI-900 study guide emphasizes translation as one of the most common NLP workloads, enabling applications to break language barriers and enhance accessibility for diverse audiences.
Therefore, based on official Microsoft Learn material, the correct answer is:
✅ A. translation.
You are building a knowledge base by using QnA Maker. Which file format can you use to populate the knowledge base?
Options:
PDF
PPTX
XML
ZIP
Answer:
A
Explanation:
QnA Maker supports automatic extraction of question-and-answer pairs from structured files such as PDF, Microsoft Word, or Excel documents, as well as from public webpages. This makes PDF the correct file format for populating a knowledge base.
Other options are invalid:
B. PPTX – Not supported.
C. XML – Not a recognized input for QnA Maker.
D. ZIP – Used for packaging, not Q & A content.
You are developing a natural language processing solution in Azure. The solution will analyze customer reviews and determine how positive or negative each review is.
This is an example of which type of natural language processing workload?
Options:
language detection
sentiment analysis
key phrase extraction
entity recognition
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore natural language processing (NLP) in Azure,” sentiment analysis is a core natural language processing (NLP) workload used to determine the emotional tone or attitude expressed in a piece of text. It helps identify whether a statement, review, or comment conveys a positive, negative, neutral, or mixed sentiment.
In this question, the scenario involves analyzing customer reviews and determining how positive or negative each review is. This directly aligns with sentiment analysis, which evaluates subjective text and quantifies the expressed opinion. In Azure, this workload is implemented through the Azure AI Language service (formerly Text Analytics API), where the Sentiment Analysis feature assigns a sentiment score to text inputs and classifies them accordingly.
For example:
“I love this product!” → Positive sentiment
“It’s okay, but could be better.” → Neutral or mixed sentiment
“I’m disappointed with the service.” → Negative sentiment
Let’s analyze why the other options are incorrect:
A. Language detection: Identifies which language (e.g., English, Spanish, French) the text is written in. It doesn’t measure positivity or negativity.
C. Key phrase extraction: Identifies the main topics or keywords in text (e.g., “battery life,” “customer support”), not the emotion.
D. Entity recognition: Detects and categorizes specific entities such as people, locations, organizations, or dates within the text.
Therefore, based on Microsoft’s AI-900 syllabus and Azure AI Language documentation, the workload that analyzes text to determine positive or negative opinions is Sentiment Analysis (Option B). This capability is widely used in customer feedback analysis, brand monitoring, and social media analytics to understand public perception and improve business decisions.
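A toy lexicon-based scorer illustrates the idea behind the examples above (this is a simplified stand-in, not the Azure AI Language model, which uses trained classifiers rather than word lists):

```python
# Tiny hand-built sentiment lexicons (illustrative only)
POSITIVE = {"love", "great", "excellent"}
NEGATIVE = {"disappointed", "bad", "terrible"}

def toy_sentiment(text):
    """Classify text as positive/negative/neutral by counting lexicon hits."""
    words = {w.strip(".,!?'").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(toy_sentiment("I love this product!"))                # positive
print(toy_sentiment("I'm disappointed with the service."))  # negative
```

The production service additionally returns confidence scores per sentence and handles negation and context, which a bag-of-words lookup like this cannot.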
What is an example of a Microsoft responsible AI principle?
Options:
AI systems should protect the interests of developers.
AI systems should be in the public domain.
AI systems should be secure and respect privacy.
AI systems should make personal details accessible.
Answer:
C
Explanation:
Microsoft’s Responsible AI principles are central to the AI-900 curriculum and consist of six key tenets:
Fairness – AI systems should treat all people fairly.
Reliability and safety – AI systems should perform reliably and safely.
Privacy and security – AI systems should be secure and respect user privacy.
Inclusiveness – AI systems should empower everyone.
Transparency – AI systems should be understandable.
Accountability – People should be accountable for AI outcomes.
The statement “AI systems should be secure and respect privacy” reflects the Privacy and Security principle, which ensures AI solutions protect personal data and operate within compliance frameworks. Microsoft’s responsible AI framework emphasizes building trust by safeguarding sensitive data used in AI applications.
The other options do not align with official responsible AI principles; for example, AI systems need not “be in the public domain,” nor are they meant to prioritize developers’ interests or expose personal details. Hence, the correct and Microsoft-verified answer is C. AI systems should be secure and respect privacy.
Which parameter should you configure to produce a more diverse range of tokens in the responses from a chat solution that uses the Azure OpenAI GPT-3.5 model?
Options:
Max response
Past messages included
Presence penalty
Stop sequence
Answer:
C
Explanation:
In Azure OpenAI Service, model behavior during text or chat generation is controlled by several parameters, such as temperature, max tokens, top_p, presence penalty, and frequency penalty. According to Microsoft Learn’s documentation for Azure OpenAI GPT models, the presence penalty influences how likely the model is to introduce new or diverse tokens in its responses.
Specifically, the presence penalty discourages the model from repeating previously used tokens, encouraging it to explore new topics or ideas instead of sticking to existing ones. Increasing the presence penalty value typically results in more diverse and creative outputs, while lowering it makes responses more repetitive or focused.
Option analysis:
A. Max response (Max tokens): Controls the maximum length of the generated response, not its diversity.
B. Past messages included: Defines how much chat history the model considers for context; it doesn’t affect diversity directly.
C. Presence penalty: Encourages novelty and introduces new tokens—this is correct for increasing response variety.
D. Stop sequence: Specifies a sequence of characters or tokens where the model should stop generating output.
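Conceptually, a presence penalty subtracts a fixed amount from the score of every token that has already appeared in the output, which is roughly how it nudges the model toward new tokens (a simplified sketch, not the actual Azure OpenAI implementation; the token scores are made up):

```python
def apply_presence_penalty(logits, generated_tokens, penalty):
    """Lower the score of any token already present in the output so far."""
    seen = set(generated_tokens)
    return {
        token: score - penalty if token in seen else score
        for token, score in logits.items()
    }

# Hypothetical next-token scores after the model has already emitted "cat"
logits = {"cat": 2.0, "dog": 1.8, "bird": 1.5}
adjusted = apply_presence_penalty(logits, ["cat"], penalty=0.5)
print(adjusted)  # "cat" drops to 1.5, so "dog" now outranks it
```

This also shows the contrast with the frequency penalty, which scales with how many times a token has appeared rather than applying a flat deduction for mere presence.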
Match the AI workload to the appropriate task.
To answer, drag the appropriate AI workload from the column on the left to its task on the right. Each workload may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.

Options:
Answer:

An app that analyzes social media posts to identify their tone is an example of which type of natural language processing (NLP) workload?
Options:
sentiment analysis
key phrase extraction
entity recognition
speech recognition
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of natural language processing (NLP) workloads on Azure,” sentiment analysis is an NLP workload that determines the emotional tone or opinion expressed in a piece of text. This could be positive, negative, or neutral sentiment.
When an app analyzes social media posts to identify their tone, it is performing sentiment analysis, since it aims to understand the emotional context behind user-generated text such as tweets, reviews, or comments. Azure provides this functionality through the Azure Cognitive Services – Text Analytics API, which evaluates text and returns sentiment scores.
Other options are not suitable:
Key phrase extraction identifies main ideas in text but not tone.
Entity recognition identifies names of people, organizations, or locations.
Speech recognition converts spoken words into text, not emotional analysis.
Therefore, analyzing social media tone is an example of sentiment analysis, a key NLP workload in Microsoft’s AI-900 syllabus.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Statements and answers:
A webchat bot can interact with users visiting a website. → Yes
Automatically generating captions for pre-recorded videos is an example of natural language processing. → No
A smart device in the home that responds to questions such as “What will the weather be like today?” is an example of natural language processing. → Yes
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn modules on AI workloads, each of these statements maps to a distinct area of artificial intelligence — namely Conversational AI, Speech AI, and Natural Language Processing (NLP).
“A webchat bot can interact with users visiting a website.” – Yes. This is true. A webchat bot represents an example of Conversational AI. It leverages natural language understanding (NLU) to interpret user input and generate appropriate responses. These bots can be created using Azure services such as Azure AI Bot Service and Language Understanding (LUIS). They enable automated interactions with users through text-based communication on websites, applications, or messaging platforms.
“Automatically generating captions for pre-recorded videos is an example of natural language processing.” – No. This is false. Generating captions from audio involves speech recognition, not NLP. Specifically, it uses speech-to-text technology to transcribe spoken words into written text. This function is typically performed by Azure’s Speech service, which is part of the Speech AI workload, not the language-processing workload.
“A smart device in the home that responds to questions such as ‘What will the weather be like today?’ is an example of natural language processing.” – Yes. This is true. Smart assistants like Alexa or Cortana use NLP to interpret spoken queries, extract meaning, and generate appropriate responses. NLP allows these devices to understand human language, retrieve relevant information, and respond conversationally.
You build a QnA Maker bot by using a frequently asked questions (FAQ) page.
You need to add professional greetings and other responses to make the bot more user friendly.
What should you do?
Options:
Increase the confidence threshold of responses
Enable active learning
Create multi-turn questions
Add chit-chat
Answer:
D
Explanation:
According to the Microsoft Learn module “Build a QnA Maker knowledge base”, QnA Maker allows developers to create bots that answer user queries based on documents like FAQs or manuals. To make a bot more natural and conversational, Microsoft provides a “chit-chat” feature — a prebuilt, professionally written set of responses to common conversational phrases such as greetings (“Hello”), small talk (“How are you?”), and polite phrases (“Thank you”).
Adding chit-chat improves the user experience by making the bot sound friendlier and more human-like. It doesn’t alter the main Q&A logic but enhances the bot’s tone and responsiveness.
The other options are not correct:
A. Increase the confidence threshold makes the bot more selective in responses but doesn’t add new conversational features.
B. Enable active learning improves knowledge base accuracy over time through user feedback.
C. Create multi-turn questions adds conversational flow for related topics but doesn’t add greetings or casual dialogue.
Thus, to make the bot more personable, the correct action is to Add chit-chat.
For a machine learning process, how should you split data for training and evaluation?
Options:
Use features for training and labels for evaluation.
Randomly split the data into rows for training and rows for evaluation.
Use labels for training and features for evaluation.
Randomly split the data into columns for training and columns for evaluation.
Answer:
B
Explanation:
https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/split-data
The correct answer is B. Randomly split the data into rows for training and rows for evaluation.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe fundamental principles of machine learning on Azure”, the process of developing a machine learning model involves dividing the available dataset into two or more parts—commonly training data and evaluation (or testing) data. The goal is to ensure that the model can learn patterns from one subset of the data (training set) and then be objectively tested on unseen data (evaluation set) to measure how well it generalizes to new situations.
The training dataset contains both features (the measurable inputs) and labels (the target outputs). The model learns from the patterns and relationships between these features and labels. The evaluation dataset also contains features and labels, but it is kept separate during the training phase. Once the model has been trained, it is tested on this unseen evaluation data to calculate metrics like accuracy, precision, recall, or F1 score.
Microsoft emphasizes that the data split should be random and based on rows, not columns. Each row represents a complete observation (for example, one customer record, one transaction, or one image). Randomly splitting ensures that both subsets represent the same distribution of data, avoiding bias. Splitting by columns would separate features themselves, which would make the model training invalid.
The AI-900 materials often illustrate this using Azure Machine Learning’s data preparation workflow, where data is randomly divided (commonly 70% for training and 30% for testing). This ensures the model learns from diverse examples and is fairly evaluated.
Therefore, the verified and correct approach, as per Microsoft’s official guidance, is B. Randomly split the data into rows for training and rows for evaluation.
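As an illustration of the row-wise split described above, here is a minimal Python sketch (the function name and toy data are hypothetical, not part of Azure ML): each element is a complete row (features plus label), so the random split keeps every column of every observation together.

```python
import random

def split_rows(rows, train_fraction=0.7, seed=42):
    """Randomly split a dataset into training and evaluation rows.

    Each row is a complete observation (features + label), so the split
    is by rows, never by columns.
    """
    shuffled = rows[:]                      # copy so the original order is kept
    random.Random(seed).shuffle(shuffled)   # reproducible random order
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]   # (training rows, evaluation rows)

# Ten toy observations: (feature, label)
data = [(i, i % 2) for i in range(10)]
train, test = split_rows(data)

print(len(train), len(test))  # 7 3
```

A 70/30 split like this mirrors the common default mentioned in the AI-900 materials; in Azure ML Designer the same step is performed visually by the Split Data module.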
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The Azure AI Language service (part of Azure Cognitive Services) provides a set of natural language processing (NLP) capabilities designed to analyze and interpret text data. Its core features include language detection, key phrase extraction, sentiment analysis, and named entity recognition (NER).
Language Identification – YES. According to the Microsoft Learn module “Analyze text with Azure AI Language,” one of the service’s built-in capabilities is language detection, which determines the language of a given text string (e.g., English, Spanish, or French). This allows applications to automatically adapt to multilingual input.
Handwritten Signature Detection – NO. The Azure AI Language service only processes text-based data; it does not analyze images or handwriting. Detecting handwritten signatures requires computer vision capabilities, specifically Azure AI Vision or Azure AI Document Intelligence, which can extract and interpret visual content from scanned documents or images.
Identifying Companies and Organizations – YES. The Named Entity Recognition (NER) feature within Azure AI Language can identify entities such as people, locations, dates, organizations, and companies mentioned in text. It tags these entities with categories, enabling structured analysis of unstructured data.
✅ Summary:
Language detection → Yes (supported by AI Language).
Handwritten signatures → No (requires Computer Vision).
Entity recognition for companies/organizations → Yes (supported by AI Language NER).
What can be used to analyze scanned invoices and extract data, such as billing addresses and the total amount due?
Options:
Azure AI Search
Azure AI Document Intelligence
Azure AI Custom Vision
Azure OpenAI
Answer:
B
Explanation:
The correct answer is B. Azure AI Document Intelligence (formerly Form Recognizer).
This Azure service uses AI and OCR technologies to analyze and extract structured data from documents such as invoices, receipts, and purchase orders. It identifies key fields like billing address, invoice number, total amount due, and line items. The service supports prebuilt models for common document types and custom models for specialized layouts.
Option review:
A. Azure AI Search: Used for knowledge mining and semantic search, not document data extraction.
B. Azure AI Document Intelligence — ✅ Correct. Designed for form and invoice extraction.
C. Azure AI Custom Vision: Used for image classification and object detection, not text extraction.
D. Azure OpenAI: Generates or processes language but not structured document data.
Therefore, Azure AI Document Intelligence is the right service to extract data from scanned invoices.
Match the types of machine learning to the appropriate scenarios.
To answer, drag the appropriate machine learning type from the column on the left to its scenario on the right. Each machine learning type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of common AI workloads”, there are three primary machine learning types: Regression and Classification (supervised learning) and Clustering (unsupervised learning). Each type addresses a different kind of problem depending on the data and the desired prediction output.
Regression – Regression models are used to predict numeric, continuous values. The study guide specifies that “regression predicts a number.” In the scenario “Predict how many minutes late a flight will arrive based on the amount of snowfall,” the output (minutes late) is a continuous numeric value. Therefore, this is a regression problem. Regression algorithms like linear regression or decision tree regression estimate relationships between variables and predict measurable quantities.
Clustering – Clustering falls under unsupervised learning, where the model identifies natural groupings or patterns in unlabeled data. The official AI-900 training material states that “clustering is used to find groups or segments of data that share similar characteristics.” The scenario “Segment customers into different groups to support a marketing department” fits this description because the goal is to group customers based on behavior or demographics without predefined labels. Thus, it is a clustering problem.
Classification – Classification is a supervised learning method used to predict discrete categories or labels. The AI-900 content defines classification as “predicting which category an item belongs to.” The scenario “Predict whether a student will complete a university course” requires a yes/no (binary) outcome, which is a classic classification problem. Examples include logistic regression, decision trees, or neural networks trained for categorical prediction.
In summary:
Regression → Predicts continuous numeric outcomes.
Clustering → Groups data by similarities without predefined labels.
Classification → Predicts discrete or categorical outcomes.
Hence, the correct and verified mappings based on the official AI-900 study material are:
Regression → Flight delay prediction
Clustering → Customer segmentation
Classification → Course completion prediction
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of Computer Vision workloads on Azure”, the Custom Vision service is a part of Azure Cognitive Services that allows users to build, train, and deploy custom image classification and object detection models. It is primarily designed for still-image analysis, not video processing.
“The Custom Vision service can be used to detect objects in an image.” – Yes. This is correct. The Custom Vision service supports two major model types: classification (categorizing entire images) and object detection (identifying and locating multiple objects within a single image). In object detection mode, the model outputs both the object’s category and its position in the image using bounding boxes. This capability is emphasized in the AI-900 curriculum as an example of applying computer vision to real-world scenarios, such as identifying products on shelves or detecting equipment parts in manufacturing.
“The Custom Vision service requires that you provide your own data to train the model.” – Yes. This statement is also true. Unlike prebuilt computer vision models, Custom Vision is a trainable model that requires users to upload their own labeled images to create a domain-specific AI model. The model’s accuracy depends on the quality and quantity of this user-provided data. The AI-900 study materials explain that Custom Vision is used when prebuilt models do not meet specific needs, enabling businesses to train models tailored to unique image sets.
“The Custom Vision service can be used to analyze video files.” – No. This is incorrect. Custom Vision is limited to image-based analysis. To analyze video content (detecting objects or motion in moving frames), Azure provides Video Indexer, which is a separate service designed for extracting insights from video files, including speech, objects, faces, and emotions.
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:
Classification
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of common AI workloads”, classification is a supervised machine learning technique used when the goal is to predict which category or class an item belongs to. In supervised learning, the model is trained with labeled data—data that already contains known outcomes. The system learns patterns and relationships between input features and their corresponding labels so it can predict future classifications accurately.
In the scenario provided — “A banking system that predicts whether a loan will be repaid” — the model’s output is a binary decision, meaning there are two possible outcomes:
The loan will be repaid (positive class)
The loan will not be repaid (negative class)
This kind of problem involves predicting a discrete value (a label or category), not a continuous numeric output. Therefore, it perfectly fits the classification type of machine learning.
The AI-900 learning materials describe classification as being used in many real-world examples, including:
Determining whether an email is spam or not spam.
Predicting whether a customer will churn (leave) or stay.
Detecting fraudulent transactions.
Assessing medical test results as positive or negative.
By contrast:
Regression predicts continuous numeric values, such as predicting house prices, temperatures, or sales revenue. It would not apply here because repayment prediction is not a numeric value but a categorical decision.
Clustering is an unsupervised learning method that groups similar data points without predefined categories, such as segmenting customers by purchasing behavior.
Thus, based on Microsoft’s Responsible AI and AI-900 study guide concepts, a banking system that predicts whether a loan will be repaid uses the Classification type of machine learning.
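To make the idea of a binary classifier concrete, here is a deliberately tiny, hypothetical sketch: a one-feature decision stump trained on toy loan records. The feature, threshold search, and data are illustrative only; a real credit model would use many features and a proper algorithm such as logistic regression.

```python
def train_stump(examples):
    """Fit a one-feature threshold classifier: predict 1 (repaid) when
    the feature value is >= the best threshold found on the training data."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in examples}):
        # Accuracy of "predict repaid iff feature >= t" on the training rows
        acc = sum((x >= t) == bool(y) for x, y in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy labeled data: (credit_score, repaid?)  -- entirely made up
train = [(500, 0), (550, 0), (600, 0), (650, 1), (700, 1), (750, 1)]
threshold = train_stump(train)

def predict(score):
    return int(score >= threshold)  # 1 = will repay, 0 = will not

print(threshold, predict(720), predict(520))  # 650 1 0
```

The key point the sketch demonstrates is that the output is a discrete category (repaid / not repaid), never a continuous number, which is exactly what distinguishes classification from regression.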
You need to develop a web-based AI solution for a customer support system. Users must be able to interact with a web app that will guide them to the best resource or answer.
Which service should you integrate with the web app to meet the goal?
Options:
Azure AI Language Service
Face
Azure AI Translator
Azure AI Custom Vision
Answer:
A
Explanation:
The question answering capability (formerly QnA Maker, now part of the Azure AI Language service) is a cloud-based service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. It answers users’ questions with the best answers from the QnAs in your knowledge base automatically, and the knowledge base gets smarter over time as it continually learns from user behavior.
Your company manufactures widgets.
You have 1,000 digital photos of the widgets.
You need to identify the location of the widgets within the photos.
What should you use?
Options:
Computer Vision Spatial Analysis
Custom Vision object detection
Custom Vision classification
Computer Vision Image Analysis
Answer:
B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” object detection is a computer vision technique used to locate and identify objects within an image. It not only determines what objects are present but also where they appear in the image by returning bounding box coordinates around each detected item.
In this scenario, the goal is to identify the location of widgets within digital photos. This requires both recognition (knowing that the object is a widget) and localization (determining its position). The Custom Vision service in Azure allows you to train a model specifically for your own images, making it ideal for recognizing company-specific products such as widgets. By selecting the Object Detection domain in Custom Vision, you can label regions of interest in your training images. The model then learns to detect and locate those objects in new photos.
Let’s examine the other options:
A. Computer Vision Spatial Analysis: Used for people tracking, movement detection, and occupancy analytics in video streams — not for locating products in still images.
C. Custom Vision classification: This model categorizes an image as a whole (e.g., “contains a widget” or “does not contain a widget”) but does not locate objects within the image.
D. Computer Vision Image Analysis: Provides general image tagging, description, and OCR capabilities but does not pinpoint object locations.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Describe features of common AI workloads”, conversational AI solutions like chatbots can be created using various methods—not only through custom code. Azure provides both no-code/low-code and developer-focused approaches. For instance, users can design chatbots using Power Virtual Agents, which requires no programming knowledge, or they can use Azure Bot Service with the Bot Framework SDK for fully customized scenarios. Hence, the statement “Chatbots can only be built by using custom code” is False (No) because Azure supports multiple levels of technical involvement for building bots.
The second statement is True (Yes) because the Azure Bot Service is designed specifically to host, manage, and connect conversational bots to users across different channels. Microsoft Learn explicitly explains that the service provides integrated hosting, connection management, and telemetry for bots built using the Bot Framework or Power Virtual Agents. It acts as the foundation for deploying, scaling, and managing chatbot workloads in Azure.
The third statement is also True (Yes) because Azure Bot Service supports integration with Microsoft Teams, among many other channels such as Skype, Facebook Messenger, Slack, and web chat. Microsoft documentation states that Azure-hosted bots can communicate directly with Teams users through the Teams channel, enabling intelligent virtual assistants within the Teams environment.
You have the following apps:
• App1: Uses a set of images and photos to extract brand names
• App2: Enables touchless access control for buildings
Which Azure AI Vision service does each app use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn documentation on Azure AI Vision services, different Azure AI Vision capabilities are suited for different use cases such as object detection, brand recognition, facial recognition, and spatial analysis.
App1: Uses a set of images and photos to extract brand names → Image Analysis
The Azure AI Vision – Image Analysis service (formerly part of Computer Vision) can detect and extract brands, objects, text, and other visual features from images. It uses advanced image classification and object detection models to recognize logos and identify brand names (for example, “Microsoft” or “Coca-Cola”) in photos. The Image Analysis API can also return descriptive tags, scene descriptions, and confidence scores. Therefore, since App1 analyzes static images to extract brand names, it specifically relies on the Image Analysis feature of Azure AI Vision.
App2: Enables touchless access control for buildings → Face
The Azure AI Face service is designed for facial detection, verification, and identification. It can recognize and match faces in real time, making it ideal for access control, identity verification, and attendance tracking systems. A “touchless access control” system uses a camera to detect a person’s face and verify identity against a stored profile, allowing or denying entry without physical interaction.
The other options are not suitable:
Optical Character Recognition (OCR) extracts text, not brand logos.
Spatial Analysis is for detecting movement or presence in video feeds.
Video Analysis is for analyzing dynamic video content rather than still images.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

“Generative AI enables software applications to generate new content, such as language dialogs and images.” — YES
This statement is true. According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation, Generative AI refers to systems capable of creating new content such as text, audio, images, video, and code. Models like GPT, DALL·E, and Codex use deep learning to generate human-like responses, natural conversations, or creative media. This is a key differentiator between generative and discriminative AI — generative AI produces new data, while discriminative AI categorizes or analyzes existing data.
“The difference between a large language model (LLM) and a small language model (SLM) is the number of variables in the model.” — YES
This statement is true. The primary distinction between an LLM and an SLM lies in the scale of parameters (variables) within the neural network. LLMs contain billions or even trillions of parameters, which enable them to capture complex linguistic patterns and perform broader tasks. SLMs have fewer parameters, making them faster but less capable of handling complex, context-rich tasks.
“Generative AI is a type of supervised learning.” — NO
This statement is false. Generative AI models are typically trained using unsupervised or self-supervised learning methods. They learn by predicting missing or next elements in large text or image datasets rather than relying on labeled input-output pairs, which are used in supervised learning.
What is a form of unsupervised machine learning?
Options:
multiclass classification
clustering
binary classification
regression
Answer:
B
Explanation:
As outlined in the AI-900 study guide and Microsoft Learn’s “Explore fundamental principles of machine learning” module, clustering is a core example of unsupervised machine learning.
In unsupervised learning, the model is trained on data without labeled outcomes. The goal is to discover patterns or groupings naturally present in the data. Clustering algorithms, such as K-means, DBSCAN, or Hierarchical clustering, analyze similarities among data points and group them into clusters. For example, clustering can group customers by purchasing behavior or segment products by shared characteristics — all without predefined labels.
Supervised learning, by contrast, uses labeled data (input-output pairs) to train a model that predicts outcomes. This includes:
A. Multiclass classification – Predicts more than two categories (e.g., classifying images as dog, cat, or bird).
C. Binary classification – Predicts two categories (e.g., spam vs. not spam).
D. Regression – Predicts continuous numeric values (e.g., price prediction).
Therefore, the only option representing unsupervised learning is clustering, which enables data discovery without predefined labels.
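To illustrate the idea behind clustering (not Azure’s implementation), here is a minimal 1-D k-means sketch in pure Python. The `spend` values and function name are hypothetical toy data standing in for, say, customer purchasing behavior.

```python
def kmeans_1d(points, k=2, iters=10):
    """Minimal 1-D k-means: group unlabeled values into k clusters."""
    # Spread the initial centers across the sorted data
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Unlabeled "customer spend" values that naturally form two groups
spend = [10, 12, 11, 90, 95, 93]
groups = kmeans_1d(spend, k=2)
print(sorted(map(sorted, groups)))  # [[10, 11, 12], [90, 93, 95]]
```

Notice that no labels were supplied anywhere: the algorithm discovered the two segments purely from the similarity of the values, which is the defining trait of unsupervised learning.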
You need to build an app that will read recipe instructions aloud to support users who have reduced vision.
Which service should you use?
Options:
Text Analytics
Translator Text
Speech
Language Understanding (LUIS)
Answer:
CExplanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of speech capabilities in Azure Cognitive Services”, the Azure Speech service provides functionality for converting text to spoken words (speech synthesis) and speech to text (speech recognition).
In this scenario, the app must read recipe instructions aloud to assist users with visual impairments. This task is achieved through speech synthesis, also known as text-to-speech (TTS). The Azure Speech service uses advanced neural network models to generate natural-sounding voices in many languages and accents, making it ideal for accessibility scenarios such as screen readers, virtual assistants, and educational tools.
Microsoft Learn defines Speech service as a unified offering that includes:
Speech-to-text (speech recognition): Converts spoken words into text.
Text-to-speech (speech synthesis): Converts written text into natural-sounding audio output.
Speech translation: Translates spoken language into another language in real time.
Speaker recognition: Identifies or verifies a person based on their voice.
The other options do not fit the requirements:
A. Text Analytics – Performs text-based natural language analysis such as sentiment, key phrase extraction, and entity recognition, but it cannot produce audio output.
B. Translator Text – Translates text between languages but does not generate speech output.
D. Language Understanding (LUIS) – Interprets user intent from text or speech for conversational bots but does not read text aloud.
Therefore, based on the AI-900 curriculum and Microsoft Learn documentation, the correct service for converting recipe text to spoken audio is the Azure Speech service.
✅ Final Answer: C. Speech
In which two scenarios can you use speech recognition? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
an in-car system that reads text messages aloud
providing closed captions for recorded or live videos
creating an automated public address system for a train station
creating a transcript of a telephone call or meeting
Answer:
B, D
Explanation:
The correct answers are B and D.
Speech recognition, part of Azure’s Speech service, converts spoken audio into written text. It is a core feature of Azure Cognitive Services for speech-to-text scenarios.
Providing closed captions for recorded or live videos (B) – This is a typical application of speech recognition. The AI system listens to audio content from a video and generates real-time or post-event captions. Azure’s Speech-to-Text API is frequently used in broadcasting and video platforms to improve accessibility and searchability.
Creating a transcript of a telephone call or meeting (D) – Another common use case is automated transcription. The Speech service can process real-time audio streams (such as meetings or calls) and produce accurate text transcripts. This is widely used in customer service, call analytics, and meeting documentation.
The incorrect options are:
A. an in-car system that reads text messages aloud – This uses Text-to-Speech, not speech recognition.
C. creating an automated public address system for a train station – This also uses Text-to-Speech, since it generates spoken output from text.
Therefore, scenarios that convert spoken words into text correctly represent speech recognition, making B and D the right answers.
You plan to use Azure Machine Learning Studio and automated machine learning (automated ML) to build and train a model. What should you create first?
Options:
a Jupyter notebook
a Machine Learning workspace
a registered dataset
a Machine Learning designer pipeline
Answer:
B
Explanation:
Before building or training any model in Azure Machine Learning Studio—including when using Automated ML (AutoML)—you must first create a Machine Learning workspace.
A workspace serves as the central environment for all machine learning assets such as datasets, compute targets, models, pipelines, and experiments. According to the AI-900 study guide and Microsoft Learn module “Describe features and tools for machine learning in Azure,” a workspace is the foundational setup required to organize and manage all ML-related resources.
The sequence typically follows these steps:
Create a Machine Learning workspace.
Configure compute resources (e.g., compute instance or cluster).
Upload or register datasets.
Use Automated ML or Designer to train models.
Deploy and manage the trained models.
Option A (Jupyter notebook) is an optional tool for coding experiments.
Option C (Registered dataset) is created after the workspace exists.
Option D (Designer pipeline) is a visual tool used within the workspace.
Hence, B. a Machine Learning workspace is the correct answer because it is the first and mandatory step before using Automated ML or any training component in Azure Machine Learning Studio.
You need to predict the animal population of an area.
Which Azure Machine Learning type should you use?
Options:
clustering
classification
regression
Answer:
C
Explanation:
According to the AI-900 official study materials, regression is a type of supervised machine learning used to predict continuous numeric values. Predicting the animal population of an area involves estimating a numeric quantity, which makes regression the appropriate model type.
Microsoft Learn defines regression workloads as predicting real-valued outputs, such as:
Forecasting sales or demand.
Predicting housing prices.
Estimating resource usage or population sizes.
In contrast:
Classification predicts discrete categories (e.g., “cat” or “dog”).
Clustering groups data into similar clusters but doesn’t produce numeric predictions.
Therefore, because the task requires predicting a numerical population size, the verified answer is C. Regression, as per Microsoft’s AI-900 official guidelines.
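As a toy illustration of regression (not a real ecological model), the sketch below fits a least-squares line to hypothetical habitat-area/animal-count pairs and then predicts a numeric count — exactly the kind of continuous output that distinguishes regression from classification.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: habitat area (km^2) vs. observed animal count
area  = [1, 2, 3, 4, 5]
count = [12, 22, 31, 42, 52]
a, b = fit_line(area, count)

print(round(a * 6 + b))  # predicted count for an unseen 6 km^2 area
```

The model outputs a number on a continuous scale rather than a category, which is why population prediction is a regression workload in the AI-900 taxonomy.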
You are authoring a Language Understanding (LUIS) application to support a music festival.
You want users to be able to ask questions about scheduled shows, such as: “Which act is playing on the main stage?”
The question “Which act is playing on the main stage?” is an example of which type of element?
Options:
an intent
an utterance
a domain
an entity
Answer:
B
Explanation:
In a Language Understanding (LUIS) application, an utterance represents an example of what a user might say to the bot. According to Microsoft Learn – “Build a Language Understanding app”, an utterance is a sample phrase that helps train the LUIS model to recognize user intent.
In the given example — “Which act is playing on the main stage?” — the statement is an utterance that a user might say to find out about show schedules. LUIS uses utterances like this to identify the intent (the user’s goal, e.g., GetShowInfo) and to extract any entities (e.g., main stage) that provide additional details for fulfilling the request.
To clarify the other elements:
Intent: The overall purpose or action (e.g., “FindShowDetails”).
Entity: Specific information in the utterance (e.g., “main stage”).
Domain: A general subject area (e.g., entertainment, events).
Thus, “Which act is playing on the main stage?” is an utterance used to train the LUIS model to understand natural language input.
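To make the terminology concrete, the toy sketch below mimics, at a much smaller scale, what a trained LUIS model does: example utterances are grouped under intents, and a naive word-overlap score picks the intent for a new utterance. All intent names, utterances, and the matching logic are hypothetical; real LUIS uses trained machine learning models, not keyword overlap.

```python
# Hypothetical training data: each intent maps to example utterances.
intents = {
    "GetShowInfo": ["which act is playing on the main stage",
                    "who performs tonight"],
    "BuyTicket":   ["how much is a ticket",
                    "buy two tickets for saturday"],
}

def match_intent(utterance):
    """Pick the intent whose example utterances share the most words
    with the user's utterance (a crude stand-in for a trained model)."""
    words = set(utterance.lower().split())
    def score(name):
        return max(len(words & set(e.split())) for e in intents[name])
    return max(intents, key=score)

print(match_intent("Which act is playing on the main stage?"))  # GetShowInfo
```

In LUIS terms, the user's sentence is the utterance, the dictionary key it resolves to is the intent, and a phrase like "main stage" extracted from the sentence would be an entity.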
You plan to develop a bot that will enable users to query a knowledge base by using natural language processing.
Which two services should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options:
Language Service
Azure Bot Service
Form Recognizer
Anomaly Detector
Answer:
A, B
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure,” conversational bots are AI applications that can understand and respond to natural language inputs through text or speech. Building such a bot typically involves two key Azure services:
Azure Bot Service (Option B): This service provides the framework and infrastructure needed to create, test, and deploy intelligent chatbots that interact with users across multiple channels (webchat, Teams, email, etc.). It handles conversation flow, integration, and user message management.
Azure Language Service (Option A): This service powers the natural language understanding (NLU) capability of the bot. It enables the bot to interpret user input, extract intent, and query a knowledge base using Question Answering (formerly QnA Maker). This allows the bot to respond intelligently to user questions by finding the most relevant answers.
The other options are incorrect:
C. Form Recognizer is used for extracting structured data from documents like invoices or forms.
D. Anomaly Detector is used for identifying unusual patterns in time-series data.
Hence, to build a bot that understands and answers user questions in natural language, the solution must combine Azure Bot Service for conversation management and Azure Language Service for knowledge-based question answering and natural language understanding.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:
Yes, Yes, and No.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules under the topic “Describe features of common AI workloads”, conversational AI solutions like chatbots are used to automate and enhance customer interactions. A chatbot is an AI service capable of understanding user inputs (text or voice) and providing appropriate responses, often integrated into websites, mobile apps, or messaging platforms.
A restaurant can use a chatbot to empower customers to make reservations using a website or an app – Yes. This statement is true because conversational AI is designed to handle structured tasks such as booking, scheduling, and information retrieval. Chatbots built with Azure Bot Service can connect to backend systems (like a reservation database) to let customers make or modify reservations through a chat interface. The AI-900 study guide explicitly notes that chatbots can help businesses “automate processes such as booking or reservations” to improve efficiency and customer experience.
A restaurant can use a chatbot to answer inquiries about business hours from a webpage – Yes. This is also true. Chatbots can be trained using QnA Maker (now integrated into Azure AI Language) or Azure Cognitive Services for Language to answer common customer questions. FAQs such as opening hours, menu details, and directions are ideal for chatbot automation, as outlined in the AI-900 modules discussing customer support automation.
A restaurant can use a chatbot to automate responses to customer reviews on an external website – No. This is not a typical chatbot use case taught in AI-900. Chatbots are meant for direct interactions within controlled channels, such as a company’s own website or messaging app. Managing and posting responses to reviews on external platforms (like Yelp or Google Reviews) would involve policy restrictions, authentication issues, and reputational risk. The AI-900 course specifies that responsible AI usage requires maintaining human oversight in public-facing communications that influence brand image.
You have a chatbot that answers technical questions by using the Azure OpenAI GPT-3.5 large language model (LLM). Which two statements accurately describe the chatbot? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
Options:
Grounding data can be used to constrain the output of the chatbot.
The chatbot will always provide accurate data.
The chatbot might respond with inaccurate data.
The chatbot is suitable for performing medical diagnosis.
Answer:
A, C
Explanation:
The correct answers are A. Grounding data can be used to constrain the output of the chatbot and C. The chatbot might respond with inaccurate data.
According to the Microsoft Azure AI Fundamentals (AI-900) study material and Microsoft Learn modules on Azure OpenAI, a chatbot built with Azure OpenAI GPT-3.5 is a large language model (LLM) capable of generating natural language responses. However, these models operate based on statistical patterns learned from massive text datasets—they do not inherently guarantee factual accuracy. Hence, while GPT-based models can produce highly coherent text, they may sometimes generate inaccurate, outdated, or fabricated information (commonly referred to as “hallucinations”). This makes C correct.
Grounding data, as described in Microsoft’s Responsible AI and Azure OpenAI grounding documentation, refers to integrating trusted external data sources—such as company documents, databases, or knowledge bases—into the prompt context. This helps the model stay aligned with factual or domain-specific content, effectively constraining its output to be relevant and verifiable. Therefore, A is also correct.
Options B and D are incorrect because GPT models do not always provide accurate information, and they are not approved for critical use cases such as medical diagnosis. Microsoft’s Responsible AI principles explicitly prohibit unverified use in healthcare or other high-risk domains.
Thus, the verified answers are A and C.
Which two scenarios are examples of a natural language processing workload? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
assembly line machinery that autonomously inserts headlamps into cars
a smart device in the home that responds to questions such as, "What will the weather be like today?"
monitoring the temperature of machinery to turn on a fan when the temperature reaches a specific threshold
a website that uses a knowledge base to interactively respond to users' questions
Answer:
B, D
Explanation:
The correct answers are B. a smart device in the home that responds to questions such as, "What will the weather be like today?" and D. a website that uses a knowledge base to interactively respond to users' questions.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of Natural Language Processing (NLP) workloads on Azure”, Natural Language Processing (NLP) is a branch of artificial intelligence that enables computers to understand, interpret, and generate human language in a meaningful way. NLP bridges the gap between human communication and machine understanding, allowing systems to process both spoken and written language.
Option B – A smart device in the home that responds to questions such as “What will the weather be like today?” This is an example of an NLP workload because the device must process spoken language (speech-to-text), interpret the user’s intent (language understanding), and generate a relevant spoken response (text-to-speech). This workflow involves several Azure Cognitive Services, such as Speech Service for recognizing and synthesizing speech, and Language Understanding (LUIS) for interpreting intent. This aligns with conversational AI and NLP tasks in the AI-900 syllabus.
Option D – A website that uses a knowledge base to interactively respond to users’ questions. This is also an NLP workload because the system interprets text input from users and retrieves appropriate answers from a knowledge base. Microsoft’s QnA Maker (now part of the Azure AI Language service) and Azure Bot Service enable such behavior. The model uses NLP to understand the user’s question, find the most relevant response, and generate an appropriate reply — key characteristics of natural language processing.
Incorrect options:
A (assembly line machinery) represents automation or robotics, not NLP.
C (monitoring temperature to activate a fan) is an example of an IoT (Internet of Things) or rule-based system, not related to language processing.
Match the tasks to the appropriate machine learning models.
To answer, drag the appropriate model from the column on the left to its scenario on the right. Each model may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

According to the Microsoft Azure AI Fundamentals (AI-900) study guide, the three main types of supervised and unsupervised machine learning models—classification, clustering, and regression—are used for distinct problem types depending on the structure of the data and the prediction goal.
Clustering is an unsupervised learning technique used when the goal is to group items with similar characteristics without predefined labels. In this scenario, “Assign categories to passengers based on demographic data” implies automatically grouping passengers based on patterns such as age, income, or travel frequency, without any prior labeling. This directly maps to clustering, which discovers hidden groupings (for example, segmenting customers into categories like business travelers or vacationers).
Regression is a supervised learning method used to predict continuous numerical values. The scenario “Predict the amount of consumed fuel based on flight distance” is a classic regression problem because the output (fuel consumption) is a continuous variable dependent on another continuous variable (distance). Regression models, such as linear regression, are trained to estimate numeric outputs.
Classification is also a supervised learning approach, but it predicts discrete categories or outcomes. The scenario “Predict whether a passenger will miss their flight based on demographic data” involves a binary decision (missed or not missed), which is typical of classification tasks. These models learn from labeled examples to assign new instances to specific categories.
In summary, Clustering groups similar passengers, Regression predicts continuous numerical outcomes, and Classification determines categorical outcomes. This alignment precisely matches the definitions in Microsoft’s AI-900 learning objectives under “Describe common machine learning types and scenarios.”
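The three mappings above can be sketched with scikit-learn; all of the numbers below are invented purely for illustration, not taken from any real airline dataset:

```python
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Regression: predict a continuous value (fuel) from another (distance).
distances = [[500], [1000], [1500], [2000]]
fuel = [2500, 5000, 7500, 10000]
reg = LinearRegression().fit(distances, fuel)
print(reg.predict([[1200]])[0])          # a continuous numeric estimate

# Classification: predict a discrete category (will miss flight: 0/1).
passengers = [[25, 1], [60, 0], [35, 1], [50, 0]]  # [age, frequent_flyer]
missed = [0, 1, 0, 1]
clf = DecisionTreeClassifier().fit(passengers, missed)
print(clf.predict([[55, 0]])[0])         # a category label

# Clustering: group passengers by similarity, with no labels at all.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(passengers)
print(km.labels_)                        # discovered, unlabeled groups
```

Note the structural difference: the regressor and classifier are both given target values (`fuel`, `missed`) to learn from, while the clustering model receives only the feature matrix.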
What should you do to ensure that an Azure OpenAI model generates accurate responses that include recent events?
Options:
Modify the system message.
Add grounding data.
Add few-shot learning.
Add training data.
Answer:
B
Explanation:
In Azure OpenAI, grounding refers to the process of connecting the model to external data sources (for example, a database, search index, or API) so that it can retrieve accurate and up-to-date information before generating a response. This is particularly important for scenarios requiring current facts or events, since OpenAI models like GPT-3.5 and GPT-4 are trained on data available only up to a certain cutoff date.
By adding grounding data, the model’s responses are “anchored” to factual sources retrieved at runtime, improving reliability and factual accuracy. Grounding is commonly implemented in Azure OpenAI + Azure Cognitive Search solutions (Retrieval-Augmented Generation or RAG).
Option review:
A. Modify the system message: Changes model tone or behavior but doesn’t supply real-time data.
B. Add grounding data: ✅ Correct — allows access to recent and domain-specific information.
C. Add few-shot learning: Provides examples in the prompt to improve context understanding but not factual accuracy.
D. Add training data: Refers to fine-tuning; this requires retraining and doesn’t update the model’s awareness of current events.
Hence, the best method to ensure accurate and current responses from an Azure OpenAI model is to add grounding data, enabling the model to reference real, updated sources dynamically.
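A minimal sketch of the pattern in plain Python: the document store and keyword lookup below are simplified stand-ins for a real search index (such as Azure Cognitive Search), but they show how retrieved text is injected into the prompt at runtime:

```python
# Simplified in-memory "index"; a real RAG system would query a search
# service here instead.
documents = {
    "shipping": "Orders placed after 3 PM ship the next business day.",
    "returns": "Items can be returned within 30 days of delivery.",
}

def retrieve(query):
    """Naive keyword retrieval, standing in for a search index lookup."""
    return [text for key, text in documents.items() if key in query.lower()]

def build_grounded_prompt(query):
    """Assemble a prompt that anchors the model to the retrieved sources."""
    context = "\n".join(retrieve(query))
    return (
        "Answer ONLY using the sources below. "
        "If the answer is not in the sources, say you don't know.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt("What is your returns policy?")
print(prompt)
```

The assembled prompt, not the model's training data, then carries the current facts, which is why grounding works even for events after the model's training cutoff.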
You need to provide content for a business chatbot that will help answer simple user queries.
What are three ways to create question and answer text by using QnA Maker? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Options:
Generate the questions and answers from an existing webpage.
Use automated machine learning to train a model based on a file that contains the questions.
Manually enter the questions and answers.
Connect the bot to the Cortana channel and ask questions by using Cortana.
Import chit-chat content from a predefined data source.
Answer:
A, C, E
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure,” the QnA Maker (now integrated into the Azure AI Language Service as Custom Question Answering) is used to create, train, and publish a knowledge base of question-and-answer pairs that can power a chatbot.
There are three primary methods to create Q & A content:
Generate questions and answers from an existing webpage (Option A): QnA Maker can automatically extract question–answer pairs from structured or semi-structured data sources like FAQs, product manuals, or support webpages.
Manually enter questions and answers (Option C): Users can create Q&A pairs directly in the QnA Maker portal or Azure Language Studio, enabling custom answers to be crafted manually.
Import chit-chat content from a predefined data source (Option E): QnA Maker provides predefined “chit-chat” datasets that let a bot handle casual conversation (e.g., greetings or small talk) naturally.
The other options are incorrect:
B. Use automated machine learning – AutoML is for predictive modeling, not knowledge extraction.
D. Connect the bot to Cortana – This is a channel integration, not a method of content creation.
During the process of Machine Learning, when should you review evaluation metrics?
Options:
After you clean the data.
Before you train a model.
Before you choose the type of model.
After you test a model on the validation data.
Answer:
D
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Identify features of common machine learning types,” the evaluation phase occurs after training and testing a machine learning model. Evaluation metrics are used to measure how well the model performs when applied to data it has not seen before (the validation data).
The machine learning workflow includes the following key steps:
Data Preparation – Importing, cleaning, and transforming data.
Splitting the Data – Dividing it into training and validation (or test) sets.
Model Training – Using the training data to teach the model patterns or relationships.
Model Evaluation – Assessing the trained model using the validation data and evaluation metrics such as accuracy, precision, recall, F1 score, and root mean square error (RMSE).
As stated in the AI-900 content, evaluation metrics are crucial after testing, as they help determine if the model is accurate enough or if it requires retraining with different parameters or algorithms.
A. After you clean the data → incorrect, as metrics cannot be reviewed before training.
B. Before you train a model → incorrect, since the model has not yet learned patterns.
C. Before you choose the type of model → incorrect, as metrics depend on the model’s output.
Therefore, the verified answer is D. After you test a model on the validation data, which is when you review evaluation metrics to determine model performance and readiness for deployment.
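The workflow can be sketched with scikit-learn and its bundled iris dataset; the point is that the metric is computed only after the trained model is applied to held-out data:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Split the data into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train on the training portion only.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Review evaluation metrics on data the model has never seen.
accuracy = accuracy_score(y_val, model.predict(X_val))
print(f"validation accuracy: {accuracy:.2f}")
```

Scoring the model on `X_train` instead would reproduce the mistake described in option B: an optimistic number that says nothing about generalization.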
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:
When building a K-means clustering model, all features (variables) used in the model must be numeric in nature. According to the Microsoft Azure AI Fundamentals (AI-900) study materials and standard machine learning theory, K-means clustering is an unsupervised learning algorithm that groups data points into clusters based on their similarity — specifically by minimizing the Euclidean distance between data points and their assigned cluster centroids.
Because the K-means algorithm depends on distance calculations, it requires numeric data types. The Euclidean distance (or similar measures) can only be computed between numerical values. Therefore, all categorical or text data must first be converted into numeric form through feature engineering techniques such as one-hot encoding, label encoding, or embedding vectors, depending on the nature of the data.
Here’s how K-means works in summary:
The algorithm initializes a predefined number of centroids (K).
Each data point is assigned to the nearest centroid based on numeric distance.
The centroids are recalculated as the mean of the points in each cluster.
The process repeats until convergence.
If non-numeric data (e.g., text or Boolean) were provided, the model would not be able to calculate distances accurately, leading to computational errors.
Other options are incorrect:
Boolean and integer types can represent numeric values but are considered special cases; the algorithm requires general numeric representation (e.g., continuous values).
Text cannot be processed directly without conversion.
Thus, according to Azure Machine Learning and AI-900 official concepts, all features in a K-means clustering model must be numeric to ensure valid mathematical operations and clustering accuracy.
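A short scikit-learn sketch of the conversion step, on invented passenger data: the text column is one-hot encoded into numeric indicator columns before K-means computes any distances:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

ages = np.array([[22], [25], [61], [64]])                       # already numeric
cabin = [["economy"], ["economy"], ["business"], ["business"]]  # text category

# Text categories cannot enter a Euclidean distance calculation directly,
# so they are converted into numeric 0/1 indicator columns first.
cabin_numeric = OneHotEncoder().fit_transform(cabin).toarray()

features = np.hstack([ages, cabin_numeric])  # all-numeric feature matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```

Passing the raw `cabin` strings to `KMeans` would fail, since no distance between "economy" and "business" is defined until they become numbers.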

Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

In Azure Machine Learning Designer, the Dataset output visualization feature is specifically used to explore and understand the distribution of values in potential feature columns before model training begins. This capability is critical for data exploration and preprocessing, two essential stages of the machine learning pipeline described in the Microsoft Azure AI Fundamentals (AI-900) and Azure Machine Learning learning paths.
When a dataset is imported into Azure Machine Learning Designer, users can right-click on the dataset output port and select “Visualize”. This launches the dataset visualization pane, which provides detailed statistical summaries for each column, including:
Data type (numeric, categorical, string, Boolean)
Minimum, maximum, mean, and standard deviation values for numeric columns
Frequency counts and distinct values for categorical columns
Missing value counts
This visual inspection helps determine which columns should be used as features, which might need normalization or encoding, and which contain missing or irrelevant data. It is a vital step in ensuring the dataset is clean and ready for model training.
Let’s examine why other options are incorrect:
Normalize Data module is used to scale numeric data, not to visualize distributions.
Select Columns in Dataset module is used to include or exclude columns, not to analyze them.
Evaluation results visualization feature is used after model training to interpret performance metrics like accuracy or recall, not data distributions.
Therefore, based on official Microsoft documentation and AI-900 study materials, to explore the distribution of values in potential feature columns, you use the Dataset output visualization feature in Azure Machine Learning Designer.
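Outside the designer, the same kind of per-column summary can be reproduced with pandas; the small frame below is illustrative only:

```python
import pandas as pd

df = pd.DataFrame({
    "age": [34, 28, None, 45, 39],
    "city": ["Seattle", "Austin", "Seattle", None, "Austin"],
})

print(df.dtypes)                  # data type of each column
print(df["age"].describe())      # min, max, mean, std for a numeric column
print(df["city"].value_counts()) # frequency counts for a categorical column
print(df.isna().sum())           # missing value counts per column
```

These four views correspond to the statistics listed above and are typically enough to decide which columns need normalization, encoding, or missing-value handling.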
You need to provide customers with the ability to query the status of orders by using phones, social media, or digital assistants.
What should you use?
Options:
Azure AI Bot Service
the Azure AI Translator service
an Azure AI Document Intelligence model
an Azure Machine Learning model
Answer:
A
Explanation:
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify Azure services for conversational AI,” the Azure AI Bot Service is specifically designed to create intelligent conversational agents (chatbots) that can interact with users across multiple communication channels, such as web chat, social media, phone calls, Microsoft Teams, and digital assistants.
In this scenario, customers need the ability to query the status of their orders through various interfaces — including voice and text platforms. Azure AI Bot Service enables this by integrating with Azure AI Language (for understanding natural language), Azure Speech (for speech-to-text and text-to-speech capabilities), and Azure Communication Services (for telephony or chat integration).
The bot can interpret user input like “Where is my order?” or “Check my delivery status,” call backend systems (such as an order database or API), and then respond appropriately to the user through the same communication channel.
Let’s analyze the incorrect options:
B. Azure AI Translator Service: Used for real-time text translation between languages; it doesn’t handle conversation logic or database queries.
C. Azure AI Document Intelligence model: Extracts data from structured and semi-structured documents (e.g., invoices, receipts), not user queries.
D. Azure Machine Learning model: Builds and deploys predictive models, but doesn’t provide conversational or multi-channel interaction capabilities.
Thus, for enabling multi-channel conversational experiences where customers can inquire about order statuses using voice, chat, or digital assistants, the most appropriate solution is Azure AI Bot Service, as outlined in Azure’s AI conversational workload documentation.
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

The correct answer is Azure AI Language, which includes the Question Answering capability (previously known as QnA Maker). According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation, the Azure AI Language service can be used to create a knowledge base from frequently asked questions (FAQ) and other structured or semi-structured text sources.
This service allows developers to build intelligent applications that can understand and respond to user questions in natural language by referencing prebuilt or custom knowledge bases. The Question Answering feature extracts pairs of questions and answers from documents, websites, or manually entered data and uses them to construct a searchable knowledge base. This knowledge base can then be integrated with Azure Bot Service or other conversational platforms to create interactive, self-service chatbots.
Here’s how it works:
Developers upload FAQ documents, URLs, or structured content.
Azure AI Language processes the content and identifies logical question-answer pairs.
The model stores these pairs in a knowledge base that can be queried by user input.
When users ask questions, the model finds the best matching answer using natural language understanding techniques.
In contrast:
Azure AI Document Intelligence (Form Recognizer) is used to extract structured data from forms and documents, not to create FAQ knowledge bases.
Azure AI Bot Service is for managing and deploying conversational bots but does not generate knowledge bases.
Microsoft Bot Framework SDK provides tools for building conversational logic but still requires a knowledge source like Question Answering from Azure AI Language.
Therefore, the service that can create a knowledge base from FAQ content is Azure AI Language.
Match the services to the appropriate descriptions.
To answer, drag the appropriate service from the column on the left to its description on the right. Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point

Options:
Answer:

Explanation:
Description → Correct Service
Enables the use of natural language to query a knowledge base. → QnA Maker
Enables the real-time transcription of speech-to-text. → Speech
This question tests understanding of Azure Cognitive Services and their use cases as outlined in the Microsoft Azure AI Fundamentals (AI-900) study guide.
“Enables the use of natural language to query a knowledge base.” → QnA Maker. According to Microsoft Learn’s AI-900 module “Identify features of Natural Language Processing (NLP) workloads and services,” QnA Maker is a cloud-based service that allows developers to build a question-and-answer layer over structured or unstructured content. It enables users to ask questions in natural language, and the service retrieves the most relevant answer from a knowledge base (such as FAQs, manuals, or documents). QnA Maker uses Natural Language Processing (NLP) techniques to interpret user intent and return an appropriate response. It is often integrated into chatbots built with Azure Bot Service to make them capable of conversational question-answering. In the newer Azure Cognitive Services lineup, QnA Maker capabilities are merged into Azure Cognitive Service for Language (Question Answering).
“Enables the real-time transcription of speech-to-text.” → Speech. The Azure Speech service (part of Azure Cognitive Services) provides the ability to convert spoken language into written text in real time. This feature, called Speech-to-Text, uses deep neural network models to recognize and transcribe human speech with high accuracy. Microsoft’s AI-900 documentation specifies that Speech service capabilities also include text-to-speech, speech translation, and speaker recognition. Real-time transcription is widely used in applications such as voice assistants, captioning systems, call analytics, and accessibility tools.
Other listed services such as Azure Storage and Language Understanding (LUIS) serve different purposes:
Azure Storage handles data storage, not AI workloads.
LUIS identifies user intent from natural language but does not query knowledge bases directly.
Which parameter should you configure to produce more verbose responses from a chat solution that uses the Azure OpenAI GPT-3.5 model?
Options:
Presence penalty
Temperature
Stop sequence
Max response
Answer:
B
Explanation:
In a chat solution using the Azure OpenAI GPT-3.5 model, the temperature parameter controls the creativity and variability of generated responses. According to the Microsoft Learn documentation for Azure OpenAI Service, temperature is a float value typically between 0 and 2, determining how deterministic or random the model’s output is. A lower temperature (e.g., 0–0.3) makes responses more focused and deterministic, while a higher temperature (e.g., 0.8–1.2) produces more verbose, creative, and diverse responses.
When you want the chat model to generate more detailed or expressive output, increasing the temperature encourages the model to explore a broader range of possible continuations, leading to longer and more varied text. This parameter directly affects how “verbose” or elaborate the model’s responses can be, which is why it is the correct answer.
The other options are not appropriate for this scenario:
A. Presence penalty reduces repetition by discouraging reuse of the same phrases but does not control verbosity.
C. Stop sequence defines tokens where generation should stop, limiting rather than extending response length.
D. Max response (max tokens) controls the maximum length of the response but does not inherently make answers more verbose or expressive.
Thus, to encourage more elaborate and detailed output from the Azure OpenAI GPT-3.5 model, the correct configuration parameter to adjust is Temperature (B).
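The effect can be illustrated with the underlying softmax math: candidate-token logits are divided by the temperature before normalization, so higher values flatten the distribution and give lower-probability (more varied) tokens a realistic chance of being sampled. The logits below are invented for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize to probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [3.0, 1.0, 0.5]  # made-up scores for three candidate tokens

low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 1.5)  # flatter, more varied

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At the low temperature nearly all probability mass sits on the top token; at the high temperature the alternatives gain weight, which is what produces the more varied, expansive responses described above.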
Which action can be performed by using the Azure AI Vision service?
Options:
identifying breeds of animals in live video streams
extracting key phrases from documents
extracting data from handwritten letters
creating thumbnails for training videos
Answer:
A
Explanation:
The Azure AI Vision service (formerly Computer Vision) is designed to analyze visual content in images and videos. According to Microsoft Learn’s “Describe features of computer vision workloads,” Azure AI Vision can identify objects, people, text, and scenes, and even classify images or detect objects in real time.
Identifying breeds of animals in live video streams is an example of image classification or object detection—core capabilities of Azure AI Vision. The Vision service can analyze each frame in a video, recognize animals, and classify them according to known categories, making this the correct answer.
The other options are incorrect:
B. Extracting key phrases from documents → Done by Azure AI Language (Text Analytics).
C. Extracting data from handwritten letters → Done by Azure AI Document Intelligence (Form Recognizer) using OCR.
D. Creating thumbnails for training videos → While possible in Azure Media Services, it’s not a primary Azure AI Vision function.
Thus, the best answer is A. Identifying breeds of animals in live video streams.
You plan to create an AI application by using Azure AI Foundry. The solution will be deployed to dedicated virtual machines. Which deployment option should you use?
Options:
serverless API
Azure Kubernetes Service (AKS) cluster
Azure virtual machines
managed compute
Answer:
D
Explanation:
In Azure AI Foundry, deploying a model to dedicated virtual machines corresponds to the managed compute deployment option, in which the model runs on provisioned VM infrastructure that you control. A serverless API deployment, by contrast, is hosted on shared, Microsoft-managed infrastructure and billed per use, so it does not involve dedicated virtual machines.
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

The correct answer is “adding and connecting modules on a visual canvas.”
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore automated machine learning in Azure Machine Learning,” the Azure Machine Learning designer is a drag-and-drop, no-code environment that allows users to create, train, and deploy machine learning models visually. It is specifically designed for users who prefer an intuitive graphical interface rather than writing extensive code.
Microsoft Learn defines Azure Machine Learning designer as a tool that allows you to “build, test, and deploy machine learning models by dragging and connecting pre-built modules on a visual canvas.” These modules can represent data inputs, transformations, training algorithms, and evaluation processes. By linking them together, users can create an end-to-end machine learning pipeline.
The designer simplifies the machine learning workflow by allowing data scientists, analysts, and even non-developers to:
Import and prepare datasets visually.
Choose and connect algorithm modules (e.g., classification, regression, clustering).
Train and evaluate models interactively.
Publish inference pipelines as web services for prediction.
Let’s analyze the other options:
Automatically performing common data preparation tasks – This describes Automated ML (AutoML), not the Designer.
Automatically selecting an algorithm to build the most accurate model – Also a characteristic of AutoML, where the system tests multiple algorithms automatically.
Using a code-first notebook experience – This describes the Azure Machine Learning notebooks environment, which uses Python and SDKs, not the Designer interface.
Therefore, based on the official AI-900 learning objectives and Microsoft Learn documentation, the Azure Machine Learning designer allows you to create models by adding and connecting modules on a visual canvas, providing a no-code, interactive experience ideal for users building custom machine learning workflows visually.
Match the facial recognition tasks to the appropriate questions.
To answer, drag the appropriate task from the column on the left to its question on the right. Each task may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

The correct matches are based on the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure.” These materials explain that facial recognition tasks can be categorized into four major operations: verification, identification, similarity, and grouping. Each task serves a distinct purpose in facial recognition scenarios.
Verification – “Do two images of a face belong to the same person?” The verification task determines whether two facial images represent the same individual. Azure Face API compares the facial features and returns a confidence score indicating the likelihood that the two faces belong to the same person.
Similarity – “Does this person look like other people?” The similarity task compares a face against a collection of faces to find visually similar individuals. It does not confirm identity but measures how closely two or more faces resemble each other.
Grouping – “Do all the faces belong together?” Grouping organizes a set of unknown faces into clusters based on similar facial features. This is used when identities are not known beforehand, helping discover potential duplicates or visually similar clusters within an image dataset.
Identification – “Who is this person in this group of people?” The identification task is used when the system tries to determine who a specific person is by comparing their face against a known collection (face database or gallery). It returns the identity that best matches the input face.
According to Microsoft’s AI-900 training, these tasks form the basis of Azure Face API’s capabilities. Each helps solve a different type of facial recognition problem—from matching pairs to discovering unknown identities—making them essential components of responsible AI-based vision systems.
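The idea underlying the verification task can be sketched with cosine similarity over face embeddings. The vectors and the 0.9 threshold below are invented for illustration; they are not values or formats the Face API actually returns:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

face_a = [0.91, 0.10, 0.40]  # embedding of image 1 (made up)
face_b = [0.89, 0.12, 0.38]  # embedding of image 2 (made up)
face_c = [0.05, 0.95, 0.20]  # embedding of a different person (made up)

THRESHOLD = 0.9  # hypothetical confidence cutoff

print(cosine_similarity(face_a, face_b) > THRESHOLD)  # same-person verdict
print(cosine_similarity(face_a, face_c) > THRESHOLD)  # different-person verdict
```

Verification compares one pair of vectors; identification repeats the same comparison against every face in a known gallery and returns the best match, which is why it is the costlier operation.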
To complete the sentence, select the appropriate option in the answer area.

Options:
Answer:

Explanation:

According to Microsoft’s Responsible AI principles, one of the key guiding values is Reliability and Safety, which ensures that AI systems operate consistently, accurately, and safely under all intended conditions. The AI-900 study materials and Microsoft Learn modules explain that an AI system must be trustworthy and dependable, meaning it should not produce results when the input data is incomplete, corrupted, or significantly outside the expected range.
In the given scenario, the AI system avoids providing predictions when important fields contain unusual or missing values. This behavior demonstrates reliability and safety because it prevents the system from making unreliable or potentially harmful decisions based on bad or insufficient data. Microsoft emphasizes that AI systems must undergo extensive validation, testing, and monitoring to ensure stable performance and predictable outcomes, even when data conditions vary.
The other options do not fit this scenario:
Inclusiveness ensures that AI systems are accessible to and usable by all people, regardless of abilities or backgrounds.
Privacy and Security focuses on protecting user data and ensuring it is used responsibly.
Transparency involves making AI decisions explainable and understandable to humans.
Only Reliability and Safety directly address the concept of an AI system refusing to act or returning an error when it cannot make a trustworthy prediction. This principle helps prevent inaccurate or unsafe outputs, maintaining confidence in the system’s integrity.
Therefore, ensuring an AI system does not produce predictions when input data is incomplete or unusual aligns directly with Microsoft’s Reliability and Safety principle for responsible AI.
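The "decline to predict" behavior described above can be sketched as a guard around the model. The field names, expected ranges, and the placeholder model below are invented for the demo; a real system would derive expected ranges from its training data.

```python
# Expected ranges per required field (invented for illustration).
EXPECTED_RANGES = {
    "age": (0, 120),
    "income": (0, 10_000_000),
}

def safe_predict(record, model):
    """Return a prediction only when every required field is present
    and within its expected range; otherwise decline to answer."""
    for field, (lo, hi) in EXPECTED_RANGES.items():
        value = record.get(field)
        if value is None:
            return {"prediction": None, "reason": f"missing field: {field}"}
        if not (lo <= value <= hi):
            return {"prediction": None, "reason": f"out-of-range value: {field}"}
    return {"prediction": model(record), "reason": None}

# A placeholder "model" standing in for a trained classifier.
toy_model = lambda r: "approve" if r["income"] > 50_000 else "review"

print(safe_predict({"age": 35, "income": 80_000}, toy_model))
print(safe_predict({"age": 35}, toy_model))                      # missing income
print(safe_predict({"age": 300, "income": 80_000}, toy_model))   # unusual age
```

Refusing with an explicit reason, rather than guessing, is exactly the reliability-and-safety behavior the question describes.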
Select the answer that correctly completes the sentence.

Options:
Answer:

Explanation:

The correct answer is Document Intelligence.
According to the Microsoft Azure AI Fundamentals (AI-900) study materials and Microsoft Learn documentation, the Azure AI Document Intelligence service (formerly known as Form Recognizer) is specifically designed to extract structured data from documents, including scanned invoices, receipts, forms, and business cards.
This service combines optical character recognition (OCR) with machine learning to analyze both the layout and semantic meaning of document content. When processing scanned invoices, Document Intelligence identifies and extracts fields such as invoice numbers, dates, totals, taxes, vendor names, and line-item details. The extracted information can then be automatically imported into business systems like accounting software or databases, eliminating manual data entry and improving operational efficiency.
Here’s why the other options are incorrect:
Generative AI: Focuses on creating new content such as text, images, or code (for example, using GPT-4 or DALL·E). It is not used for structured data extraction.
Natural Language Processing (NLP): Deals with understanding and generating human language from text-based input, not document scanning or layout interpretation.
The Document Intelligence workload excels at handling semi-structured documents where the location and format of data vary between samples. Microsoft’s prebuilt models—like Invoice, Receipt, Identity Document, and Contract—simplify extraction without requiring custom training.
In summary, if the task involves extracting data from scanned invoices, the appropriate Azure AI service is Azure AI Document Intelligence, which uses AI-powered document understanding to convert unstructured document images into structured, usable data.
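Document Intelligence handles scanned images with OCR plus trained layout models; the snippet below is only a plain-text stand-in showing the *kind* of structured output it produces from an invoice. The sample text and field patterns are invented for this demo.

```python
import re

INVOICE_TEXT = """
Contoso Ltd.
Invoice Number: INV-1042
Invoice Date: 2024-03-15
Total Due: $1,250.00
"""

def extract_invoice_fields(text):
    """Pull a few common invoice fields out of plain text."""
    patterns = {
        "invoice_number": r"Invoice Number:\s*(\S+)",
        "invoice_date": r"Invoice Date:\s*([\d-]+)",
        "total": r"Total Due:\s*\$([\d,.]+)",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        fields[name] = match.group(1) if match else None
    return fields

print(extract_invoice_fields(INVOICE_TEXT))
# {'invoice_number': 'INV-1042', 'invoice_date': '2024-03-15', 'total': '1,250.00'}
```

The real service does this without hand-written patterns: its prebuilt Invoice model locates fields by learned layout and semantics, which is why it copes with invoices whose formats vary between vendors.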
You have an Azure subscription that uses Azure OpenAI.
You need to create an original image of a rural scene to use on a website.
What should you do?
Options:
From Azure AI Foundry, deploy a GPT-3.5 Turbo model and provide instructions to create the image of the rural scene.
From Microsoft Bing, search the term "rural scene" and download the results.
From GitHub Copilot, provide instructions to create the image of the rural scene.
From Azure AI Foundry, deploy a DALL-E model, and provide instructions to create the rural scene.
Answer:
D

Explanation:
The Azure OpenAI DALL-E model is specifically designed for generating original images from natural language prompts. If you want to create an image of a “rural scene,” you can use Azure AI Foundry (formerly Azure OpenAI Studio) to deploy the DALL-E model and provide descriptive instructions such as “create an image of a peaceful rural village with trees and a sunset.”
A. GPT-3.5 Turbo → Handles text generation, not image creation.
B. Bing search → Finds existing images rather than generating original ones.
C. GitHub Copilot → Assists with writing code, not generating images.
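Option D in practice means sending a descriptive prompt to the deployed DALL-E model. As a minimal sketch, the function below only assembles the JSON body such an image-generation request carries (`prompt`, `n`, `size` follow the documented parameter names); the endpoint and deployment name are placeholders, and no real call is made here.

```python
import json

def build_image_request(prompt, n=1, size="1024x1024"):
    """Assemble the JSON body for an image-generation request.

    A real call would POST this body to the DALL-E deployment's
    image-generation endpoint with an API key; omitted in this demo.
    """
    return {"prompt": prompt, "n": n, "size": size}

body = build_image_request(
    "A peaceful rural village with trees and a sunset, digital painting"
)
print(json.dumps(body, indent=2))
```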
You have the following dataset.

You plan to use the dataset to train a model that will predict the house price categories of houses.
What are Household Income and House Price Category? To answer, select the appropriate option in the answer area.
NOTE: Each correct selection is worth one point.

Options:
Answer:

Explanation:

In machine learning, especially within the Microsoft Azure AI Fundamentals (AI-900) framework, datasets used for supervised learning are composed of features (inputs) and labels (outputs). According to the Microsoft Learn module “Explore the machine learning process”, a feature is any measurable property or attribute used by the model to make predictions, whereas a label is the actual value or category the model is trying to predict.
Household Income → Feature. A feature (also known as an independent variable) represents the input data that the machine learning algorithm uses to detect patterns or correlations. In this dataset, Household Income is a numeric value that influences the prediction of house price categories. During training, the model learns how variations in household income correlate with changes in the house price category. Microsoft Learn defines features as “the attributes or measurable inputs that are used to train the model.” Thus, Household Income serves as a predictive input or feature.
House Price Category → Label. The label (or dependent variable) represents the output the model aims to predict. It is the known result during training that helps the algorithm learn correct mappings between features and outcomes. In this scenario, House Price Category—which can take values such as “Low,” “Middle,” or “High”—is the classification outcome that the model will predict based on household income (and possibly other variables). According to Microsoft Learn, “the label is the variable that contains the known values that the model is trained to predict.”
In summary, the dataset defines a supervised learning classification problem, where Household Income is the feature (input) and House Price Category is the label (output) that the model will learn to predict.
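The feature/label split above can be shown with a tiny worked example. The incomes, categories, and the one-feature threshold rule are invented purely to illustrate supervised classification, not taken from the question's dataset.

```python
# A toy labeled dataset: one feature column and one label column.
dataset = [
    {"household_income": 25_000, "house_price_category": "Low"},
    {"household_income": 60_000, "house_price_category": "Middle"},
    {"household_income": 150_000, "house_price_category": "High"},
]

# Features (inputs) and labels (known outputs) are separated before training.
features = [row["household_income"] for row in dataset]
labels = [row["house_price_category"] for row in dataset]

def predict(income, low_cut=40_000, high_cut=100_000):
    """A hand-made 'model': map income to a price category by thresholds."""
    if income < low_cut:
        return "Low"
    if income < high_cut:
        return "Middle"
    return "High"

predictions = [predict(x) for x in features]
print(predictions)            # ['Low', 'Middle', 'High']
print(predictions == labels)  # True on this toy data
```

A real classifier would learn the cut points from many (feature, label) pairs instead of having them hard-coded, but the division of the dataset into inputs and a predicted output is the same.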
Unlock AI-900 Features
- AI-900 All Real Exam Questions
- AI-900 Exam easy to use and print PDF format
- Download Free AI-900 Demo (Try before Buy)
- Free Frequent Updates
- 100% Passing Guarantee by Activedumpsnet