You are developing the smart e-commerce project.
You need to implement autocompletion as part of the Cognitive Search solution.
Which three actions should you perform? Each correct answer presents part of the solution. (Choose three.)
NOTE: Each correct selection is worth one point.
Make API queries to the autocomplete endpoint and include suggesterName in the body.
Add a suggester that has the three product name fields as source fields.
Make API queries to the search endpoint and include the product name fields in the searchFields query parameter.
Add a suggester for each of the three product name fields.
Set the searchAnalyzer property for the three product name variants.
Set the analyzer property for the three product name variants.
Scenario: Support autocompletion and autosuggestion based on all product name variants.
A: Call a suggester-enabled query, in the form of a Suggestion request or Autocomplete request, using an API. API usage is illustrated in the following call to the Autocomplete REST API.
POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2020-06-30
{
"search": "minecraf",
"suggesterName": "sg"
}
B: In Azure Cognitive Search, typeahead or "search-as-you-type" is enabled through a suggester. A suggester provides a list of fields that undergo additional tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a City field with a value for "Seattle" will have prefix combinations of "sea", "seat", "seatt", and "seattl" to support typeahead.
F. Use the default standard Lucene analyzer ("analyzer": null) or a language analyzer (for example, "analyzer": "en.Microsoft") on the field.
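For context, a minimal C# sketch of the same idea using the Azure.Search.Documents SDK follows. The service URL, keys, index name, and the three product name field names are hypothetical placeholders, not values from the case study.

using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Define an index with one suggester whose source fields are the three product name variants (hypothetical field names).
var index = new SearchIndex("products")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("productNameEn"),
        new SearchableField("productNameEs"),
        new SearchableField("productNamePt")
    },
    Suggesters =
    {
        new SearchSuggester("sg", "productNameEn", "productNameEs", "productNamePt")
    }
};

var indexClient = new SearchIndexClient(new Uri("https://<service>.search.windows.net"), new AzureKeyCredential("<admin-key>"));
await indexClient.CreateOrUpdateIndexAsync(index);

// Query the autocomplete endpoint and pass the suggester name (the SDK equivalent of the REST call shown above).
var searchClient = new SearchClient(new Uri("https://<service>.search.windows.net"), "products", new AzureKeyCredential("<query-key>"));
AutocompleteResults results = await searchClient.AutocompleteAsync("minecraf", "sg");
foreach (var item in results.Results)
{
    Console.WriteLine(item.Text);
}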
QUESTION NO: 2 DRAG DROP
You are planning the product creation project.
You need to recommend a process for analyzing videos.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Choose four.)



Scenario: All videos must have transcripts that are associated to the video and included in product descriptions.
Product descriptions, transcripts, and all text must be available in English, Spanish, and Portuguese.
Step 1: Upload the video to blob storage
Given a video or audio file, the file is first dropped into Blob Storage.
Step 2: Index the video by using the Video Indexer API.
When a video is indexed, Video Indexer produces the JSON content that contains details of the specified video insights. The insights include: transcripts, OCRs, faces, topics, blocks, etc.
Step 3: Extract the transcript from the Video Indexer API.
Step 4: Translate the transcript by using the Translator API.
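As a rough illustration of steps 2 and 3, the sketch below calls the Video Indexer REST API with HttpClient to read the index of an already-uploaded video and print its transcript. The location, account ID, video ID, and access token are placeholders, and the JSON path used for the transcript is a simplified assumption about the insights payload.

using System;
using System.Net.Http;
using System.Text.Json;

// Placeholders - replace with your own values.
string location = "trial";
string accountId = "<account-id>";
string videoId = "<video-id>";
string accessToken = "<access-token>";

using var http = new HttpClient();

// Get the insights (index) for a video that has already been uploaded and indexed.
string indexUrl =
    $"https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/Index?accessToken={accessToken}";
string json = await http.GetStringAsync(indexUrl);

// The insights JSON includes a transcript; the exact property path below is an assumption for illustration.
using JsonDocument doc = JsonDocument.Parse(json);
JsonElement transcript = doc.RootElement
    .GetProperty("videos")[0]
    .GetProperty("insights")
    .GetProperty("transcript");

foreach (JsonElement line in transcript.EnumerateArray())
{
    Console.WriteLine(line.GetProperty("text").GetString());
}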
You are developing the shopping on-the-go project.
You need to build the Adaptive Card for the chatbot.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.




Comprehensive Detailed Explanation
The requirement is to build an Adaptive Card for the chatbot that supports multilingual product display and meets accessibility requirements.
Product Name Display
The product JSON stores localized names like "name": { "en": "...", "es": "...", "pt": "..." }.
To dynamically render based on the current language, you must use name[language].
This ensures the product name shown in the card adapts automatically to the user’s preferred language.
Stock Level Warning
Business rules require warnings when stock is low or out of stock.
This means you display the stock warning only when stockLevel is not 'OK'.
Therefore, the correct condition is: "$when": "${stockLevel != 'OK'}"
Image Alt Text for Accessibility
Accessibility requirements specify that all images must have alt text in English, Spanish, and Portuguese.
In the product JSON, "altText" is also localized:
"altText": { "en": "Bicycle", "es": "Bicicleta", "pt": "Bicicleta" }
To render the correct localized alt text dynamically, use image.altText[language].
Correct Selections:
First blank: name[language]
Second blank: "$when": "${stockLevel != 'OK'}"
Third blank: image.altText[language]
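As a minimal sketch of how these bindings behave, the following C# snippet uses the AdaptiveCards.Templating package to expand a template containing the three expressions above against sample data. The field names mirror the product JSON described in the scenario; the concrete values are illustrative assumptions.

using System;
using AdaptiveCards.Templating;

// Template containing the three bindings discussed above.
string templateJson = @"{
  ""type"": ""AdaptiveCard"",
  ""version"": ""1.3"",
  ""body"": [
    { ""type"": ""TextBlock"", ""text"": ""${name[language]}"" },
    { ""type"": ""TextBlock"", ""text"": ""Low stock"", ""$when"": ""${stockLevel != 'OK'}"" },
    { ""type"": ""Image"", ""url"": ""${image.url}"", ""altText"": ""${image.altText[language]}"" }
  ]
}";

// Sample data shaped like the product JSON in the scenario (illustrative values only).
var data = new
{
    language = "es",
    stockLevel = "Low",
    name = new { en = "Bicycle", es = "Bicicleta", pt = "Bicicleta" },
    image = new
    {
        url = "https://example.com/bicycle.png",
        altText = new { en = "Bicycle", es = "Bicicleta", pt = "Bicicleta" }
    }
};

// Expanding the template produces the final card JSON with the Spanish name, the Spanish alt text, and the stock warning included.
var template = new AdaptiveCardTemplate(templateJson);
string expandedCard = template.Expand(data);
Console.WriteLine(expandedCard);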
Adaptive Cards Templating
Adaptive Card $when property
Multilingual data in Azure Cognitive Search and Adaptive Cards
Microsoft References
You are developing the shopping on-the-go project.
You are configuring access to the QnA Maker resources.
Which role should you assign to AllUsers and LeadershipTeam? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.



Box 1: QnA Maker Editor
Scenario: Provide all employees with the ability to edit Q&As.
The QnA Maker Editor (read/write) has the following permissions:
Create KB API
Update KB API
Replace KB API
Replace Alterations
"Train API" [in
new service model v5]
Box 2: Contributor
Scenario: Only senior managers must be able to publish updates.
Contributor permission: All except ability to add new members to roles
You are developing the smart e-commerce project.
You need to design the skillset to include the contents of PDFs in searches.
How should you complete the skillset design diagram? To answer, drag the appropriate services to the correct stages. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.



Box 1: Azure Blob storage
At the start of the pipeline, you have unstructured text or non-text content (such as images, scanned documents, or JPEG files). Data must exist in an Azure data storage service that can be accessed by an indexer.
Box 2: Computer Vision API
Scenario: Provide users with the ability to search insight gained from the images, manuals, and videos associated with the products.
The Computer Vision Read API is Azure's latest OCR technology (learn what's new) that extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents.
Box 3: Translator API
Scenario: Product descriptions, transcripts, and all text must be available in English, Spanish, and Portuguese.
Box 4: Azure Files
Scenario: Store all raw insight data that was generated, so the data can be processed later.
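A skeleton of such an enrichment pipeline in C# (Azure.Search.Documents SDK) might look like the sketch below. The skill contexts, field paths, and skillset name are illustrative assumptions rather than the case study's exact definitions.

using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// OCR skill: extract printed/handwritten text from the images embedded in the PDFs.
var ocrSkill = new OcrSkill(
    new[] { new InputFieldMappingEntry("image") { Source = "/document/normalized_images/*" } },
    new[] { new OutputFieldMappingEntry("text") { TargetName = "extractedText" } })
{
    Context = "/document/normalized_images/*"
};

// Translation skill: make the extracted text available in Spanish (a similar skill would target Portuguese).
var translateSkill = new TextTranslationSkill(
    new[] { new InputFieldMappingEntry("text") { Source = "/document/normalized_images/*/extractedText" } },
    new[] { new OutputFieldMappingEntry("translatedText") { TargetName = "textEs" } },
    TextTranslationSkillLanguage.Es)
{
    Context = "/document/normalized_images/*"
};

var skillset = new SearchIndexerSkillset("product-pdf-skillset", new SearchIndexerSkill[] { ocrSkill, translateSkill })
{
    Description = "OCR plus translation over PDFs stored in Azure Blob storage."
};

var indexerClient = new SearchIndexerClient(new Uri("https://<service>.search.windows.net"), new AzureKeyCredential("<admin-key>"));
await indexerClient.CreateOrUpdateSkillsetAsync(skillset);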
You need to develop code to upload images for the product creation project. The solution must meet the accessibility requirements.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.


Parameter type: stream
Visual features list: VisualFeatureTypes.Description
Result to use: results.Description.Captions[0]
To generate accessible alt text, you should use the caption produced by Azure Computer Vision’s Description feature (it produces a human-readable sentence with a confidence score). Therefore:
The image input should be a stream, because you’re uploading images (not just passing URLs) during product creation and AnalyzeImage…Async supports image streams.
Request the Description feature in VisualFeatureTypes so the service returns results.Description.Captions.
Use results.Description.Captions[0] and return the caption text if its confidence is high enough (e.g., > 0.5) to meet the accessibility requirement that all images must have relevant alt text.
Other features (Tags, Objects, Brands) are useful for enrichment but do not directly return natural-language captions suitable for alt text.
Microsoft Azure AI Solution References
Computer Vision (Image Analysis) – Description/Captions and features: Microsoft Docs, Image Analysis – VisualFeatureTypes.Description returns description.captions. https://learn.microsoft.com/azure/ai-services/computer-vision/concept-image-analysis
SDK usage (analyze image from a stream): Microsoft Docs, Analyze an image by using the Computer Vision client library. https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/call-analyze-image?tabs=version-3-2#analyze-an-image-from-a-stream
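A hedged C# sketch of this flow, using the Microsoft.Azure.CognitiveServices.Vision.ComputerVision client library (v3.2), is shown below. The 0.5 confidence threshold and the fallback text are illustrative choices, not values taken from the case study.

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

static async Task<string> GenerateAltTextAsync(Stream imageStream, string endpoint, string key)
{
    var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };

    // Request only the Description feature so the response contains natural-language captions.
    var features = new List<VisualFeatureTypes?> { VisualFeatureTypes.Description };
    ImageAnalysis results = await client.AnalyzeImageInStreamAsync(imageStream, visualFeatures: features);

    // Use the top caption when the service is confident enough; otherwise flag the image for human review.
    ImageCaption caption = results.Description.Captions[0];
    return caption.Confidence > 0.5 ? caption.Text : "Needs manual alt text";
}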
You are planning the product creation project.
You need to build the REST endpoint to create the multilingual product descriptions.
How should you complete the URI? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.


Box 1: api-nam.cognitive.microsofttranslator.com
https://docs.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-reference
Box 2: /translate
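The following hedged C# example shows one way to call that URI with HttpClient. The subscription key, region, and text values are placeholders, and the query string simply illustrates the from/to parameters of the Translator v3 /translate route.

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Placeholders - replace with your Translator resource key and region.
string key = "<translator-key>";
string region = "<resource-region>";

// US geography endpoint plus the /translate route, requesting Spanish and Portuguese output.
string uri = "https://api-nam.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=es&to=pt";

using var http = new HttpClient();
using var request = new HttpRequestMessage(HttpMethod.Post, uri);
request.Headers.Add("Ocp-Apim-Subscription-Key", key);
request.Headers.Add("Ocp-Apim-Subscription-Region", region);

// The request body is a JSON array of objects with a Text property.
string body = JsonSerializer.Serialize(new[] { new { Text = "A lightweight mountain bicycle." } });
request.Content = new StringContent(body, Encoding.UTF8, "application/json");

HttpResponseMessage response = await http.SendAsync(request);
Console.WriteLine(await response.Content.ReadAsStringAsync());  // JSON array of translations per target language.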
You have an Azure subscription that contains an Azure OpenAI resource.
You plan to build an agent by using the Azure AI Agent Service. The agent will perform the following actions:
• Interpret written and spoken questions from users.
• Generate answers to the questions.
• Output the answers as speech.
You need to create the project for the agent.
What should you use?
Language Studio
Azure AI Foundry
Speech Studio
the Azure portal
Azure AI Agent Service projects are created and managed in Azure AI Foundry (ai.azure.com). Foundry provides the “project” workspace where you configure agents, connect tools (like speech and search), and use your Azure OpenAI deployments. Language Studio and Speech Studio are for the classic Language and Speech services respectively, and the Azure portal is used to create resources—not the agent project itself. The Microsoft docs explicitly state you start by creating an Azure AI Foundry project to build agents.
References
What is Azure AI Foundry Agent Service? – “To get started… create an Azure AI Foundry project.” Microsoft Learn
Create a project in Azure AI Foundry (projects organize agents, files, evaluations). Microsoft Learn
Quickstart: Create an agent in Azure AI Foundry Agent Service. Microsoft Learn
You plan to build an agent that will combine and process multiple files uploaded by users.
You are evaluating whether to use the Azure AI Agent Service to develop the agent.
What is the maximum size of all the files that can be uploaded to the service?
1 GB
10 GB
100 GB
1 TB
For Azure AI Agent Service, the documented quota for the maximum total size of all files uploaded for agents has been listed as 100 GB. Individual files are limited to 512 MB each. These limits are relevant when you plan an agent that must combine and process multiple user-uploaded files.
(Note: Quotas can evolve; check the current “Quotas and limits” page for your region.)
References (Microsoft Docs / Q&A):
Microsoft Q&A (Agent Service limits: “Max size for all uploaded files for agents 100 GB”; per-file 512 MB). Microsoft Learn
File Search tool how-to (per-file 512 MB guidance). Microsoft Learn
You are building an agent by using the Azure AI Agent Service.
You need to ensure that the agent can access publicly accessible data that was released during the past 90 days.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.



To let an Azure AI Agent access publicly available, recent web content, you add the Bing grounding tool. In the SDK, you first define a ToolConnectionList with the Bing connection ID (here, "bingConnectionId"), then construct a BingGroundingToolDefinition(connectionList) so the agent can call Bing Search with recency filters (e.g., last 90 days) when answering questions.
When creating the agent, tool definitions are passed via the tools: parameter of CreateAgentAsync(...) as a List<ToolDefinition>.
Therefore:
Use BingGroundingToolDefinition to ground the agent on current public web data.
Pass that tool to the tools: parameter as a new List<ToolDefinition> that contains the BingGroundingToolDefinition.
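A rough C# sketch of that wiring, following the type and parameter names mentioned above, could look like the following. The project connection string, Bing connection ID, and model name are placeholders, and the exact SDK surface (Azure.AI.Projects preview) may differ between package versions, so treat this as an assumption-laden outline rather than a definitive implementation.

using System.Collections.Generic;
using Azure.AI.Projects;
using Azure.Identity;

// Placeholder project connection string and Bing connection ID.
var projectClient = new AIProjectClient("<project-connection-string>", new DefaultAzureCredential());
AgentsClient agentsClient = projectClient.GetAgentsClient();
string bingConnectionId = "<bing-connection-id>";

// Wrap the Bing connection in a ToolConnectionList and build the grounding tool definition.
var connectionList = new ToolConnectionList
{
    ConnectionList = { new ToolConnection(bingConnectionId) }
};
var bingGroundingTool = new BingGroundingToolDefinition(connectionList);

// Pass the tool definition through the tools parameter when creating the agent.
Agent agent = await agentsClient.CreateAgentAsync(
    model: "gpt-4o",
    name: "recent-news-agent",
    instructions: "Answer questions using publicly available information from the past 90 days.",
    tools: new List<ToolDefinition> { bingGroundingTool });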
Microsoft Azure AI References (titles only)
Azure AI Agent Service – Bing web grounding tool (tool definitions and connections)
Azure AI Agent Service SDK – Agent creation (CreateAgentAsync parameters: tools vs toolResources)
Azure AI Agent Service – Tool connections and ToolConnectionList usage
You have a Language Understanding solution that runs in a Docker container.
You download the Language Understanding container image from the Microsoft Container Registry (MCR).
You need to deploy the container image to a host computer.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Explanation:

You are deploying a Language Understanding (LUIS) container. When running LUIS in Docker, the model must first be exported from the Language Understanding portal, and then provided to the container at runtime.
Step-by-step reasoning:
From the Language Understanding portal, export the solution as a package file.
The trained LUIS model must be exported from the portal as a package file (by using the Export for containers (GZIP) option).
This is required because the container cannot access the hosted service directly.
From the host computer, move the package file to the Docker input directory.
Containers expect the model to be available locally.
The package file is placed into the input directory that the container maps for models.
From the host computer, run the container and specify the input directory.
When starting the container, you specify a mount option (for example, --mount type=bind,src=<host input folder>,target=/input) so that the host input directory is mapped into the container.
This makes the exported model available inside the container for processing.
Why not the other options?
Retain the model in the portal is not sufficient; the container cannot pull directly from the cloud.
Build the container and specify the output directory is not required; the container image is already available from MCR and is not custom-built for this step.
Correct Answer Order:
Export the solution as a package file.
Move the package file to the Docker input directory.
Run the container and specify the input directory.
Run LUIS containers
Use containers with Azure AI services
Microsoft References
You are building an app by using the Speech SDK. The app will translate speech from French to German by using natural language processing.
You need to define the source language and the output language.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.


SpeechRecognitionLanguage must be set to "fr" (French).
AddTargetLanguage("de") must be used to specify German as the translation output.
Correct Code Completion
var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);
speechTranslationConfig.SpeechRecognitionLanguage = "fr";
speechTranslationConfig.AddTargetLanguage("de");
SpeechRecognitionLanguage → sets the source spoken language that the Speech service listens to (here, French).
AddTargetLanguage("de") → adds German as the target translation language.
SpeechSynthesisLanguage would be used if you want to speak out the translation, but since the requirement is only about defining input and output languages, we use AddTargetLanguage.
Explanation
Speech translation with Speech SDK
Microsoft Reference
Final Answer:
First blank → SpeechRecognitionLanguage
Second blank → AddTargetLanguage
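For context, a minimal end-to-end sketch using the Speech SDK (Microsoft.CognitiveServices.Speech) follows; the key and region values are placeholders, and the single-utterance recognition is just one possible usage pattern.

using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Translation;

string speechKey = "<speech-key>";
string speechRegion = "<speech-region>";

var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);
speechTranslationConfig.SpeechRecognitionLanguage = "fr";   // source: spoken French
speechTranslationConfig.AddTargetLanguage("de");            // output: German text

// Recognize one utterance from the default microphone and print the German translation.
using var recognizer = new TranslationRecognizer(speechTranslationConfig);
TranslationRecognitionResult result = await recognizer.RecognizeOnceAsync();

if (result.Reason == ResultReason.TranslatedSpeech)
{
    Console.WriteLine($"Recognized (fr): {result.Text}");
    Console.WriteLine($"Translated (de): {result.Translations["de"]}");
}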
QUESTION NO: 124
You have data saved in the following format.

Which format was used?
CSV
JSON
HTML
YAML
The given data format is:
FirstName,LastName,Age,LeisureHobby,SportsHobby
John,Smith,23,Reading,Basketball
Ben,Smith,21,Guitar,Curling
This format uses commas to separate fields, with the first row as column headers and subsequent rows as records.
This is the definition of CSV (Comma-Separated Values) format.
Now let’s eliminate the other options:
JSON: Uses { } braces and key-value pairs like "FirstName": "John". Not shown here.
HTML: Uses markup tags rather than comma-separated values. Not shown here.
YAML: Uses indentation and key-value pairs (e.g., FirstName: John). Not shown here.
Therefore, the correct format is CSV.
Correct Answer: A. CSV
Common data formats: CSV, JSON, Avro, and Parquet
Microsoft Reference
QUESTION NO: 217
You are building a chatbot by using the Microsoft Bot Framework SDK. The bot will be used to accept food orders from customers and allow the customers to customize each food item.
You need to configure the bot to ask the user for additional input based on the type of item ordered. The solution must minimize development effort.
Which two types of dialogs should you use? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
adaptive
action
waterfall
prompt
input
In the Bot Framework SDK, if you want to collect structured input step by step, you typically use a waterfall dialog. Each step in the waterfall represents one piece of logic, such as asking for the size, toppings, or type of food item. Prompts are used inside waterfall steps to actually ask the user for input (e.g., text prompt, choice prompt, date-time prompt). Adaptive, action, and input are concepts more relevant to adaptive dialogs (Composer) or Power Virtual Agents, not the Bot Framework SDK context of this question. Since the question specifically mentions the Bot Framework SDK, the correct combination is a waterfall dialog plus prompts.
Correct Answer: C, D
You build a bot by using the Microsoft Bot Framework SDK and the Azure Bot Service. You plan to deploy the bot to Azure. You register the bot by using the Bot Channels Registration service.
Which two values are required to complete the deployment? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
botId
tenantId
appId
objectId
appSecret
Deploying a registered bot requires the Microsoft app ID and the app secret (password) generated during registration, so the correct values are appId and appSecret (C, E).
You are building an Azure web app named App1 that will translate text from English to Spanish. You need to use the Text Translation REST API to perform the translation. The solution must ensure that you have data sovereignty in the United States.
How should you complete the URI? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
1. api-nam.cognitive.microsofttranslator.com
2. translate
https://learn.microsoft.com/en-us/azure/cognitive-services/Translator/reference/v3-0-reference#base-urls
Requests to Translator are, in most cases, handled by the datacenter that is closest to where the request originated. If there's a datacenter failure when using the global endpoint, the request may be routed outside of the geography. To force the request to be handled within a specific geography, use the desired geographical endpoint. All requests are processed among the datacenters within the geography.
- United States: api-nam.cognitive.microsofttranslator.com
https://learn.microsoft.com/en-us/azure/cognitive-services/translator/reference/rest-api-guide
- translate: Translate specified source language text into the target language text.
Select the answer that correctly completes the sentence.
Semi-structured data
Let's break down the options:
Graph data: Used in graph databases (e.g., Azure Cosmos DB Gremlin API, Neo4j). Represents nodes and edges with relationships. JSON is not graph data.
Relational data: Stored in relational databases with tables, rows, and columns. JSON does not follow a fixed schema with rows and columns.
Semi-structured data: Correct. JSON, XML, and Avro are common examples of semi-structured data.
They don't require a rigid schema like relational data, but still have organizational elements (tags, key-value pairs). JSON is widely used in NoSQL/document databases like Azure Cosmos DB.
Unstructured data: Examples are images, videos, free text, and audio. JSON does not fit here because it has a defined structure with keys and values.
Correct Answer: Semi-structured data
Structured, semi-structured, and unstructured data
What is semi-structured data?
Azure Cosmos DB and JSON data model
Microsoft References
You are building an app that will answer customer calls about the status of an order. The app will query a database for the order details and provide the customers with a spoken response.
You need to identify which Azure AI service APIs to use. The solution must minimize development effort.
Which object should you use for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
For a phone-style bot that listens to customers and replies with spoken output, you use two core Azure AI Speech SDK objects:
SpeechRecognizer performs speech-to-text (STT), turning the caller's audio into text that your app can use to query the orders database. This minimizes effort because it handles audio capture/streaming and language models, and returns recognized text events/results directly.
SpeechSynthesizer performs text-to-speech (TTS), converting the retrieved order status text into natural-sounding audio to play back to the customer. It supports neural voices and simple calls like SpeakTextAsync().
Other options are not appropriate here:
TranslationRecognizer is for real-time speech translation, not needed for simple STT.
VoiceProfileClient is for speaker verification/identification, not for recognition or synthesis of the conversation content.
Microsoft References
Azure AI Speech SDK – Speech to text (SpeechRecognizer) overview and quickstarts.
Azure AI Speech SDK – Text to speech (SpeechSynthesizer) overview and quickstarts.
Azure AI Speech SDK concepts – recognition vs. synthesis pipelines.
You are developing an internet-based training solution for remote learners. Your company identifies that during the training, some learners leave their desk for long periods or become distracted.
You need to use a video and audio feed from each learner's computer to detect whether the learner is present and paying attention. The solution must minimize development effort and identify each learner.
Which Azure Cognitive Services service should you use for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Scenario Recap
You are building a remote training monitoring solution.
Requirement: Use video and audio feeds to detect if a learner is present, paying attention, and talking.
Services available: Face, Speech, Text Analytics.
From a learner's video feed, verify whether the learner is present. The Face API can detect and identify faces in a video feed. It can tell whether a person is present and recognized, fulfilling the requirement.
From a learner's facial expression in the video feed, verify whether the learner is paying attention. Again, the Face API provides facial expression and emotion recognition (happiness, anger, neutral, etc.). This can be mapped to "paying attention vs. distracted."
From a learner's audio feed, detect whether the learner is talking. The Speech service detects spoken input and can determine whether speech is present.
Text Analytics works on text (not raw audio) and is therefore not appropriate here.
Analysis
From a learner's video feed, verify whether the learner is present: Face
From a learner's facial expression in the video feed, verify whether the learner is paying attention: Face
From a learner's audio feed, detect whether the learner is talking: Speech
Final Answer (Answer Area Selections)
Face API – Face detection & identification
Face API – Emotion recognition
Azure Speech service
Microsoft References
QUESTION NO: 33 HOTSPOT
You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands.
You have the following code segment.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
The Yes/No statements are not reproduced here, so the explanation below analyzes the code segment itself.
Code Analysis
The code segment is written in C# and iterates through a collection named brands.
foreach (var brand in brands)
{
    if (brand.Confidence >= .75)
        Console.WriteLine($"Logo of {brand.Name} between {brand.Rectangle.X}, {brand.Rectangle.Y} and {brand.Rectangle.W}, {brand.Rectangle.H}");
}
Iteration: The foreach (var brand in brands) loop processes each item (presumably a detected brand or logo) in the brands collection.
Filtering/Condition: The if (brand.Confidence >= .75) statement acts as a filter. It only processes a brand if its associated Confidence value (a floating-point number between 0.0 and 1.0) is greater than or equal to 0.75.
Output: For brands that pass the confidence threshold, a line of text is printed to the console using string interpolation ($""). It reports the Name of the brand and the bounding box coordinates, presumably X (horizontal position), Y (vertical position), W (width), and H (height), which are properties of the nested brand.Rectangle object.
Select the answer that correctly completes the sentence.
account
The statement is: "When provisioning an Azure Cosmos DB ______, you need to specify which type of API you will use."
Options:
account → Correct. When you create a Cosmos DB account, you must choose the API type (e.g., SQL API, MongoDB API, Cassandra API, Table API, Gremlin API). This decision defines the data model and query language.
container → A container is created inside a database (tables, collections, graphs) but does not determine the API type.
database → Exists inside an account and inherits the API type from the account.
item → Represents individual records/documents; not where the API is chosen.
Thus, the API choice is made at the account level.
Correct Answer: account
Azure Cosmos DB account overview
Choose the right API for Cosmos DB
Microsoft References
You use the Custom Vision service to build a classifier. After training is complete, you need to evaluate the classifier.
Which two metrics are available for review? Each correct answer presents a complete solution. (Choose two.)
NOTE: Each correct selection is worth one point.
recall
F-score
weighted accuracy
precision
area under the curve (AUC)
The question is about evaluating a Custom Vision classifier after training.
When you train an image classifier in Azure Custom Vision, the service automatically calculates performance metrics to help you evaluate the quality of the model.
Precision: The percentage of correct positive predictions out of all positive predictions made. Formula: Precision = TP / (TP + FP). Helps determine how reliable positive predictions are.
Recall: The percentage of actual positives that were correctly predicted. Formula: Recall = TP / (TP + FN). Helps determine how many of the true positives were captured.
Metrics Provided by Custom Vision: These two metrics are explicitly shown in the Custom Vision portal and via the API.
Option Analysis
A. Recall: Yes, Custom Vision reports recall. Correct
B. F-score: Not directly reported in the Custom Vision portal. Although it can be derived from precision and recall, it is not provided as a direct metric. Incorrect
C. Weighted accuracy: Not reported by Custom Vision. Incorrect
D. Precision: Yes, Custom Vision reports precision. Correct
E. Area under the curve (AUC): Not reported by Custom Vision. More common in ROC curve analysis, not part of Custom Vision output. Incorrect
Correct Answer: A. recall, D. precision
Evaluate the prediction performance of your classifier in Custom Vision
Custom Vision training and evaluation
You are developing a webpage that will use the Video Indexer service to display videos of internal company meetings. You embed the Player widget and the Cognitive Insights widget into the page.
You need to configure the widgets to meet the following requirements:
Ensure that users can search for keywords.
Display the names and faces of people in the video.
Show captions in the video in English (United States).
How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Cognitive Insights Widget URL:
https://www.videoindexer.ai/embed/insights/{accountId}/{videoId}/?widgets=people,keywords&controls=search
Player Widget URL:
https://www.videoindexer.ai/embed/player/{accountId}/{videoId}/?showcaptions=true&captions=en-US
Comprehensive Detailed Explanation
The requirements are:
Ensure that users can search for keywords. In the Cognitive Insights Widget, the controls parameter can be set to search to allow keyword search.
Display the names and faces of people in the video. The widgets parameter in the Cognitive Insights Widget determines which insights are shown. To show people and keywords, use people,keywords.
Show captions in the video in English (United States). For the Player Widget, captions are controlled by two parameters: showcaptions=true enables captions, and captions=en-US specifies the language.
Final Placement of Values
Cognitive Insights Widget: widgets=people,keywords and controls=search
Player Widget: showcaptions=true and captions=en-US
Embed Video Indexer widgets
Video Indexer parameters and customization
Microsoft References
QUESTION NO: 177 DRAG DROP
You build a bot by using the Microsoft Bot Framework SDK.
You need to test the bot interactively on a local machine.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Comprehensive Detailed Explanation
The question asks about testing a bot interactively on a local machine when using the Microsoft Bot Framework SDK.
Steps to test locally:
Build and run the bot. Compile the bot application code and run it locally (e.g., using Visual Studio or Node.js). This exposes the bot's endpoint on localhost (commonly http://localhost:3978/api/messages).
Open the Bot Framework Emulator. The Bot Framework Emulator is a desktop app for testing and debugging bots built with the Bot Framework SDK. It allows you to send messages to the bot and see the responses.
Connect to the bot endpoint. Use the Emulator to connect to the locally running bot by entering its endpoint URL. Once connected, you can interact with the bot as if you were a real user.
Actions Not Required in This Case
Register the bot with the Azure Bot Service → Needed for cloud deployment, not local testing.
Open the Bot Framework Composer → Only needed if designing the bot with Composer, but this scenario uses the SDK directly.
Correct Sequence (multiple valid orders are accepted):
Build and run the bot → Open the Bot Framework Emulator → Connect to the bot endpoint
Test a bot locally with Bot Framework Emulator
Bot Framework SDK overview
Microsoft References
✅ Final Answer Sequence:
Build and run the bot
Open the Bot Framework Emulator
Connect to the bot endpoint
You have an Azure Cognitive Search solution and a collection of blog posts that include a category field.
You need to index the posts. The solution must meet the following requirements:
• Include the category field in the search results.
• Ensure that users can search for words in the category field.
• Ensure that users can perform drill down filtering based on category.
Which index attributes should you configure for the category field?
searchable, facetable, and retrievable
retrievable, filterable, and sortable
retrievable, facetable, and key
searchable, sortable, and retrievable
For the category field in Azure Cognitive Search:
searchable → allows users to search by words in the category field.
facetable → enables drill-down filtering (facets).
retrievable → ensures the field appears in search results.
filterable, sortable, and key are not required here based on the scenario.
Correct Answer: A
You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.
You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.
You need to optimize the content filter configurations by running tests on sample questions.
Solution: From Content Safety Studio, you use the Protected material detection feature to run the tests.
Does this meet the requirement?
Yes
No
"Protected material detection" in Content Safety is designed to flag model outputs that match copyrighted text/code (e.g., lyrics, articles, recipes, GitHub code). It is not used to tune or test content filter configurations for safety categories like hate/sexual/violence/self-harm. Therefore, it does not meet the requirement to optimize content filters by running tests on sample questions.
You successfully run the following HTTP request.
POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contosol/regenerateKey?api-version=2017-04-18
Body: {"keyName": "Key2"}
What is the result of the request?
A key for Azure Cognitive Services was generated in Azure Key Vault.
A new query key was generated.
The primary subscription key and the secondary subscription key were rotated.
The secondary subscription key was reset.
The request is made against the Azure Cognitive Services Management API (Microsoft.CognitiveServices/accounts/.../regenerateKey). The regenerateKey operation is specifically designed to regenerate one of the two subscription keys used to authenticate Cognitive Services API calls.
Cognitive Services resources always have two keys: Key1 (primary) and Key2 (secondary). This design allows key rotation without downtime: you can regenerate one key while using the other in production.
Key Observations
Body payload: {"keyName": "Key2"} explicitly tells Azure to regenerate Key2 (the secondary subscription key). After the call, the secondary subscription key value changes, while Key1 (primary) remains unaffected.
Option Analysis
A. A key for Azure Cognitive Services was generated in Azure Key Vault: Incorrect. The operation is not integrated with Key Vault; it only regenerates Cognitive Services subscription keys.
B. A new query key was generated: Incorrect. Query keys are related to Azure Cognitive Search (not general Cognitive Services). The request clearly targets CognitiveServices/accounts.
C. The primary subscription key and the secondary subscription key were rotated: Incorrect. The request regenerates only the specified key, not both.
D. The secondary subscription key was reset: Correct. The payload specifies "Key2", so only the secondary subscription key is regenerated.
Correct Answer: D. The secondary subscription key was reset
Azure REST API – Cognitive Services regenerate key
Authenticate requests to Azure AI services with keys and endpoint
Manage Cognitive Services keys (key1/key2)
Microsoft References
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled.
You need to reduce the likelihood that search query requests are throttled.
Solution: You migrate to a Cognitive Search service that uses a higher tier.
Does this meet the goal?
Yes
No
Comprehensive Detailed Explanation
You have an Azure Cognitive Search service. Query volume has increased, and some search requests are now being throttled. This means your current tier is no longer sufficient to handle the query traffic.
Why does migrating to a higher tier work?
Each Azure Cognitive Search pricing tier (Basic, Standard S1/S2/S3, Storage Optimized, etc.) provides different resource limits for queries per second (QPS), indexing throughput, and storage. Throttling occurs when query traffic exceeds the capacity limits of the current tier. Moving to a higher tier increases the allowed query units (QUs), the maximum queries per second (QPS), and the available compute resources.
Therefore, migrating to a higher tier reduces the likelihood of throttling and supports the increased query volume.
Why not other solutions?
Simply adding replicas can help scale out query workloads, but the question specifically asks whether moving to a higher tier meets the goal, and it does. Using indexer scaling or adjusting query patterns might help, but they are not direct answers to the throttling caused by insufficient service tier capacity.
Correct Answer: A. Yes
Azure Cognitive Search service limits by tier
Scale resources in Azure Cognitive Search
Azure Cognitive Search pricing tiers
Microsoft References
You are developing a text processing solution.
You have the following function.
You call the function and use the following string as the second argument:
Our tour of London included a visit to Buckingham Palace
What will be the output of the function?
Our tour of London included a visit to Buckingham Palace
London and Tour only
Tour and visit only
London and Buckingham Palace only
You are examining the Text Analytics output of an application. The text analyzed is: "Our tour guide took us up the Space Needle during our trip to Seattle last week." The response contains the data shown in the following table.
Which Text Analytics API is used to analyze the text?
Sentiment Analysis
Named Entity Recognition
Entity Linking
Key Phrase Extraction
https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/named-entity-recognition/overview
Named Entity Recognition (NER) is one of the features offered by Azure Cognitive Service for Language, a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The NER feature can identify and categorize entities in unstructured text. For example: people, places, organizations, and quantities.
You have an Azure OpenAI resource named AI1 that hosts three deployments of the GPT-3.5 model. Each deployment is optimized for a unique workload.
You plan to deploy three apps. Each app will access AI1 by using the REST API and will use the deployment that was optimized for the app's intended workload.
You need to provide each app with access to AI1 and the appropriate deployment. The solution must ensure that only the apps can access AI1.
What should you use to provide access to AI1, and what should each app use to connect to its appropriate deployment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Select the answer that correctly completes the sentence.
a tabular form of rows and columns
Rationale
Relational data is, by definition, data that adheres to the relational model. In the relational model, data is organized into tables, which consist of rows (records or tuples) and columns (fields or attributes). This is known as the tabular form.
The other options are incorrect:
Unstructured data is the opposite of relational data.
A hierarchical folder structure is used for file organization, not the logical structure of relational data.
Comma-separated value (CSV) files can contain relational data, but the core definition of relational data is the tabular structure, not the file format itself.
What is a characteristic of a non-relational database?
full support for Transact-SQL
a fixed schema
self-describing entities
Non-relational databases (NoSQL) typically store semi-structured or unstructured data and have the following characteristics:
Entities are often self-describing, meaning they store their own schema within the data (e.g., JSON documents).
Schema is flexible, allowing for changes without restructuring the entire database.
Other options:
Full support for Transact-SQL → This is a characteristic of relational (SQL) databases, not NoSQL.
A fixed schema → Also a relational database characteristic.
Correct Answer: self-describing entities
You plan to deploy an Azure OpenAI resource by using an Azure Resource Manager (ARM) template.
You need to ensure that the resource can respond to 600 requests per minute.
How should you complete the template? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
When deploying an Azure OpenAI model using the Microsoft.CognitiveServices/accounts/deployments resource in an ARM template, the throughput for a Standard deployment is set on the SKU object. Specifically:
Use the capacity field of the SKU to specify the per-deployment rate.
The capacity value is expressed in units of 1,000 tokens per minute (TPM), and each unit also grants approximately 6 requests per minute (RPM), so 600 RPM corresponds to a capacity of 100.
Therefore, to meet the requirement of 600 requests per minute, complete the template as:
"sku": {
  "name": "Standard",
  "capacity": 100
}
This uses a single deployment SKU with a defined capacity. The ARM schema for accounts/deployments explicitly includes sku.capacity for resources that support scale (Azure OpenAI deployments do), and Azure OpenAI quota/rate-limit guidance confirms that rate limits are assigned per deployment.
Microsoft Azure AI Solution References
ARM template—Microsoft.CognitiveServices/accounts/deployments (2023-05-01): property reference showing sku.capacity. Microsoft Learn
Resource type reference—latest accounts/deployments: confirms SKU usage on deployments. Microsoft Learn
Azure OpenAI quota and rate limits: explains assigning deployment-level limits (RPM/TPM). Microsoft Learn
You have an Azure subscription.
You need to deploy an Azure AI Document Intelligence resource.
How should you complete the Azure Resource Manager (ARM) template? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Azure AI Document Intelligence (formerly Form Recognizer) is provisioned as a Cognitive Services account with the kind set to FormRecognizer. In an ARM template, Document Intelligence is created under the resource provider Microsoft.CognitiveServices with the resource type accounts. Therefore:
type must be "Microsoft.CognitiveServices/accounts"
kind must be "FormRecognizer"
This aligns with Microsoft's ARM schema for Cognitive Services accounts and the documented way to deploy Document Intelligence.
Key references from Microsoft documentation:
Azure Resource Manager template reference for Cognitive Services accounts — shows type: Microsoft.CognitiveServices/accounts and supported kind values including FormRecognizer. https://learn.microsoft.com/azure/templates/microsoft.cognitiveservices/accounts
Azure AI Document Intelligence (Form Recognizer) resource creation guidance — indicates the service is deployed as a Cognitive Services account with kind FormRecognizer. https://learn.microsoft.com/azure/ai-services/document-intelligence/overview
Create Document Intelligence resources (portal/ARM/Bicep) — reiterates a Cognitive Services account with the FormRecognizer kind. https://learn.microsoft.com/azure/ai-services/document-intelligence/create-resources
You are building a customer support chatbot.
You need to configure the bot to identify the following:
• Code names for internal product development
• Messages that include credit card numbers
The solution must minimize development effort.
Which Azure Cognitive Service for Language feature should you use for each requirement? To answer, drag the appropriate features to the correct requirements. Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
The chatbot needs to recognize two different kinds of information:
Code names for internal product development: These are not standard entities like "locations" or "organizations" that prebuilt NER can detect. You need to train a custom model to recognize internal terms or codenames (for example, "Project Falcon"). The correct Azure Cognitive Service for Language feature is Custom Named Entity Recognition (Custom NER), which allows defining and training entity categories specific to your business.
Messages that include credit card numbers: Credit card numbers are sensitive data falling under Personally Identifiable Information (PII). Azure Cognitive Service for Language provides a PII detection feature that automatically identifies and masks sensitive information such as credit card numbers, SSNs, and phone numbers. This minimizes development effort since it is prebuilt and ready to use.
Correct Answer Mapping:
Identify code names for internal product development → Custom named entity recognition (NER)
Identify messages that include credit card numbers → Personally Identifiable Information (PII) detection
Custom Named Entity Recognition in Azure AI Language
PII detection in Azure AI Language
Microsoft References
You are building a chatbot.
You need to ensure that the bot will recognize the names of your company's products and codenames. The solution must minimize development effort.
Which Azure Cognitive Service for Language service should you include in the solution?
custom text classification
entity linking
custom Named Entity Recognition (NER)
key phrase extraction
Correct Answer: custom Named Entity Recognition (NER). Custom NER lets you train the service to extract your own entity categories, such as product names and codenames, with minimal development effort.
Which database transaction property ensures that transactional changes to a database are preserved during unexpected operating system restarts?
durability
atomicity
consistency
isolation
The D in ACID stands for Durability. Durability guarantees that once a transaction is committed, the changes are permanent, even in the case of power loss, crashes, or OS restarts.
Atomicity = all-or-nothing.
Consistency = valid state transitions.
Isolation = transactions run independently without interference.
Correct Answer: A. durability
You are developing the knowledgebase by using Azure Cognitive Search.
You need to build a skill that will be used by indexers.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Box 1: "categories": ["Locations", "Persons", "Organizations"],
Locations, Persons, and Organizations are in the outputs.
Scenario: Contoso plans to develop a searchable knowledgebase of all the intellectual property.
Note: The categories parameter is an array of categories that should be extracted. Possible category types: "Person", "Location", "Organization", "Quantity", "Datetime", "URL", "Email". If no category is provided, all types are returned.
Box 2: {"name": "entities"}
The outputs include wiki information, so entities should be included in the outputs.
Note: entities is an array of complex types that contains rich information about the entities extracted from text, with the following fields: name (the actual entity name; this represents a "normalized" form), wikipediaId, wikipediaLanguage, wikipediaUrl (a link to the Wikipedia page for the entity), etc.
You need to develop an extract solution for the receipt images. The solution must meet the document processing requirements and the technical requirements.
You upload the receipt images to the Form Recognizer API for analysis, and the API returns the following JSON.
Which expression should you use to trigger a manual review of the extracted information by a member of the Consultant-Bookkeeper group?
documentResults.docType == "prebuilt:receipt"
documentResults.fields.*.confidence < 0.7
documentResults.fields.ReceiptType.confidence > 0.7
documentResults.fields.MerchantName.confidence < 0.7
The requirements state:
All AI solution responses must have a confidence score ≥ 70%.
If the response confidence score is < 70%, the response must be improved by human input.
Members of the Consultant-Bookkeeper group must be able to process financial documents, which includes performing manual review when the AI confidence is below the threshold.
Looking at the provided JSON:
"fields": {
  "ReceiptType": { "type": "string", "valueString": "Itemized", "confidence": 0.672 },
  "MerchantName": { "type": "string", "valueString": "Tailwind", "confidence": 0.913 }
}
ReceiptType.confidence = 0.672 → below 0.7.
MerchantName.confidence = 0.913 → above 0.7.
Therefore, the correct condition for triggering manual review is one that checks the confidence of every field.
Option Analysis
A. documentResults.docType == "prebuilt:receipt": Always true for receipts, does not check confidence. Not correct.
B. documentResults.fields.*.confidence < 0.7: This is the correct general expression: trigger manual review whenever any field confidence is below 0.7.
C. documentResults.fields.ReceiptType.confidence > 0.7: This would bypass manual review when ReceiptType has high confidence. The requirement is to trigger review when confidence < 0.7, so this is the opposite.
D. documentResults.fields.MerchantName.confidence < 0.7: This only checks one field (MerchantName). In the JSON, MerchantName has confidence 0.913 (> 0.7), so this condition would not trigger, but ReceiptType clearly needs review. Too narrow.
Correct Answer: B. documentResults.fields.*.confidence < 0.7
Azure AI Document Intelligence – Confidence scores
Human-in-the-loop for Document Intelligence
Microsoft References
You are developing a solution for the Management-Bookkeepers group to meet the document processing requirements.
The solution must contain the following components:
A Form Recognizer resource
An Azure web app that hosts the Form Recognizer sample labeling tool
The Management-Bookkeepers group needs to create a custom table extractor by using the sample labeling tool.
Which three actions should the Management-Bookkeepers group perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Step 1: Create a new project and load sample documents. Projects store your configurations and settings.
Step 2: Label the sample documents. When you create or open a project, the main tag editor window opens.
Step 3: Train a custom model.
QUESTION NO: 15
You are developing the knowledgebase.
You use Azure Video Analyzer for Media (previously Video Indexer) to obtain transcripts of webinars.
You need to ensure that the solution meets the knowledgebase requirements.
What should you do?
Create a custom language model
Configure audio indexing for videos only
Enable multi-language detection for videos
Build a custom Person model for webinar presenters
Scenario: Can search content in different formats, including video.
Audio and video insights (multi-channels). When indexing by one channel, partial results for those models will be available.
Keywords extraction: Extracts keywords from speech and visual text.
Named entities extraction: Extracts brands, locations, and people from speech and visual text via natural language processing (NLP).
Topic inference: Makes inference of main topics from transcripts. The 2nd-level IPTC taxonomy is included.
Artifacts: Extracts a rich set of "next level of details" artifacts for each of the models.
Sentiment analysis: Identifies positive, negative, and neutral sentiments from speech and visual text.
QUESTION NO: 16
You are developing the knowledgebase by using Azure Cognitive Search.
You need to process wiki content to meet the technical requirements.
What should you include in the solution?
an indexer for Azure Blob storage attached to a skillset that contains the language detection skill and the text translation skill
an indexer for Azure Blob storage attached to a skillset that contains the language detection skill
an indexer for Azure Cosmos DB attached to a skillset that contains the document extraction skill and the text translation skill
an indexer for Azure Cosmos DB attached to a skillset that contains the language detection skill and the text translation skill
The wiki contains text in English, French, and Portuguese.
Scenario: All planned projects must support English, French, and Portuguese.
The Document Extraction skill extracts content from a file within the enrichment pipeline. This allows you to take advantage of the document extraction step that normally happens before skillset execution with files that may be generated by other skills.
Note: The Translator Text API will be used to determine the from language. The language detection skill is not required.
You build a QnA Maker resource to meet the chatbot requirements.
Which RBAC role should you assign to each group? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Box 1: Cognitive Service User
Ensure that the members of a group named Management-Accountants can approve the FAQs. Approve = publish.
Cognitive Service User (read/write/publish) API permissions: All access to the Cognitive Services resource except for the ability to: 1. Add new members to roles. 2.
Create new resources.
Box 2: Cognitive Services QnA Maker Editor
Ensure that the members of a group named Consultant-Accountants can create and amend the FAQs.
QnA Maker Editor API permissions:
1. Create KB API
2. Update KB API
3. Replace KB API
4. Replace Alterations
5. "Train API" [in new service model v5]
Box 3: Cognitive Services QnA Maker Read
Ensure that the members of a group named Agent-CustomerServices can browse the FAQs.
QnA Maker Read API permissions:
1. Download KB API
2. List KBs for user API
3. Get Knowledge base details
4. Download Alterations
5. Generate Answer
You are developing the document processing workflow.
You need to identify which API endpoints to use to extract text from the financial documents. The solution must meet the document processing requirements.
Which two API endpoints should you identify? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
/vision/v3.2/read/analyzeResults
/formrecognizer/v2.0/prebuilt/receipt/analyze
/vision/v3.2/read/analyze
/vision/v3.2/describe
/formrecognizer/v2.0/custom/models/{modelId}/analyze
B: Analyze Receipt - Get Analyze Receipt Result. Query the status and retrieve the result of an Analyze Receipt operation. Request URL: https://{endpoint}/formrecognizer/v2.0-preview/prebuilt/receipt/analyzeResults/{resultId}
C: POST {Endpoint}/vision/v3.2/read/analyze. Use this interface to get the result of a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.
Scenario: Contoso plans to develop a document processing workflow to extract information automatically from PDFs and images of financial documents.
The document processing solution must be able to process standardized financial documents that have the following characteristics:
- Contain fewer than 20 pages.
- Be formatted as PDF or JPEG files.
- Have a distinct standard for each office.
The document processing solution must be able to extract tables and text from the financial documents. The document processing solution must be able to extract information from receipt images.
You are developing the chatbot.
You create the following components:
* A QnA Maker resource
* A chatbot by using the Azure Bot Framework SDK
You need to integrate the components to meet the chatbot requirements.
Which property should you use?
QnADialogResponseOptions.CardNoMatchText
QnaMakerOptions.ScoreThreshold
QnaMakerOptions.StrictFilters
QnaMakerOptions.RankerType
Explanation:
Scenario: When the response confidence score is low, ensure that the chatbot can provide other response options to the customers.
When no good match is found by the ranker, a confidence score of 0.0 or "None" is returned and the default response is "No good match found in the KB". You can override this default response in the bot or application code calling the endpoint. Alternately, you can also set the override response in Azure, and this changes the default for all knowledge bases deployed in a particular QnA Maker service.
Choosing Ranker type: By default, QnA Maker searches through questions and answers. If you want to search through questions only to generate an answer, use RankerType=QuestionOnly in the POST body of the GenerateAnswer request.
You are developing the chatbot.
You create the following components:
• A QnA Maker resource
• A chatbot by using the Azure Bot Framework SDK
You need to add an additional component to meet the technical requirements and the chatbot requirements.
What should you add?
Dispatch
chatdown
Language Understanding
Microsoft Translator
Scenario: All planned projects must support English, French, and Portuguese.
If a bot uses multiple LUIS models and QnA Maker knowledge bases, you can use the Dispatch tool to determine which LUIS model or QnA Maker knowledge base best matches the user input. The Dispatch tool does this by creating a single LUIS app to route user input to the correct model.
You are developing the knowledgebase by using Azure Cognitive Search.
You need to meet the knowledgebase requirements for searching equivalent terms.
What should you include in the solution?
a synonym map
a suggester
a custom analyzer
a built-in key phrase extraction skill
Within a search service, synonym maps are a global resource that associates equivalent terms, expanding the scope of a query without the user having to actually provide the term. For example, assuming "dog", "canine", and "puppy" are mapped synonyms, a query on "canine" will match a document containing "dog".
Create synonyms: A synonym map is an asset that can be created once and used by many indexes.
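A small C# sketch (Azure.Search.Documents SDK) of creating such a synonym map and attaching it to a field follows; the service URL, key, index name, field name, and synonym rules are illustrative placeholders.

using System;
using System.Linq;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexClient = new SearchIndexClient(new Uri("https://<service>.search.windows.net"), new AzureKeyCredential("<admin-key>"));

// Create the global synonym map (Solr format: each line lists comma-separated equivalent terms).
var synonymMap = new SynonymMap("product-synonyms", "dog, canine, puppy\nusb drive, flash drive, memory stick");
await indexClient.CreateOrUpdateSynonymMapAsync(synonymMap);

// Attach the synonym map to a searchable field so a query on any equivalent term matches documents containing the others.
SearchIndex index = await indexClient.GetIndexAsync("products");
SearchField description = index.Fields.First(f => f.Name == "description");
description.SynonymMapNames.Add("product-synonyms");
await indexClient.CreateOrUpdateIndexAsync(index);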