AI-100 Practice Exam Free – 50 Questions to Simulate the Real Exam
Are you getting ready for the AI-100 certification? Take your preparation to the next level with our AI-100 Practice Exam Free – a carefully designed set of 50 realistic exam-style questions to help you evaluate your knowledge and boost your confidence.
Using a free AI-100 practice exam is one of the best ways to:
Experience the format and difficulty of the real exam
Identify your strengths and focus on weak areas
Improve your test-taking speed and accuracy
Below, you will find 50 realistic free AI-100 practice questions covering key exam topics. Each question reflects the structure and challenge of the actual exam.
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have Azure IoT Edge devices that generate streaming data.
On the devices, you need to detect anomalies in the data by using Azure Machine Learning models. Once an anomaly is detected, the devices must add information about the anomaly to the Azure IoT Hub stream.
Solution: You deploy an Azure Machine Learning model as an IoT Edge module.
Does this meet the goal?
A. Yes
B. No
Suggested Answer: A
You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. For example, you can deploy an Azure
Machine Learning module that predicts when a device fails based on simulated machine temperature data.
References: https://docs.microsoft.com/bs-latn-ba/azure/iot-edge/tutorial-deploy-machine-learning
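As a sketch of what deploying a model as an IoT Edge module involves, the custom-modules portion of a deployment manifest declares the Machine Learning module and its container image. The module and image names below are hypothetical, and real manifests carry many more settings:

```python
import json

def build_modules_section(ml_image):
    """Build a simplified custom-modules fragment of an IoT Edge
    deployment manifest. The image name is a placeholder."""
    return {
        "machinelearningmodule": {
            "version": "1.0",
            "type": "docker",
            "status": "running",        # start the module on deployment
            "restartPolicy": "always",
            "settings": {"image": ml_image},
        }
    }

fragment = build_modules_section("myregistry.azurecr.io/mlmodule:latest")
print(json.dumps(fragment, indent=2))
```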
You have deployed several Azure IoT Edge devices for an AI solution. The Azure IoT Edge devices generate measurement data from temperature sensors.
You need a solution to process the sensor data. Your solution must be able to write configuration changes back to the devices.
You make use of Microsoft Azure IoT Hub.
Does this action accomplish your objective?
You are developing a Microsoft Bot Framework application. The application consumes structured NoSQL data that must be stored in the cloud.
You implement Azure Blob storage for the application. You want access to the blob store to be controlled by using a role.
You implement Shared Key authorization on the storage account.
Does this action accomplish your objective?
You have deployed several Azure IoT Edge devices for an AI solution. The Azure IoT Edge devices generate measurement data from temperature sensors.
You need a solution to process the sensor data. Your solution must be able to write configuration changes back to the devices.
You make use of Azure Notification Hub.
Does this action accomplish your objective?
Q48 DRAG DROP –
You have developed an AI application for your company.
You want to prepare the application for deployment to Kubernetes.
Which three of the following actions should you perform? To answer, move the selected actions from the list of actions to the answer area and rearrange them in the right order.
NOTE: Each correct selection is worth one point.
<img src="https://www.examtopics.com/assets/media/exam-media/03857/0023800001.png" alt="Reference Image" />
Reference: https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app
You are designing an AI solution that will provide feedback to teachers who train students over the Internet. The students will be in classrooms located in remote areas. The solution will capture video and audio data of the students in the classrooms.
You need to recommend Azure Cognitive Services for the AI solution to meet the following requirements:
✑ Alert teachers if a student seems angry or distracted.
✑ Identify each student in the classrooms for attendance purposes.
✑ Allow the teachers to log the text of conversations between themselves and the students.
Which Cognitive Services should you recommend?
A. Computer Vision, Text Analytics, and Face API
B. Video Indexer, Face API, and Text Analytics
C. Computer Vision, Speech to Text, and Text Analytics
D. Text Analytics, QnA Maker, and Computer Vision
E. Video Indexer, Speech to Text, and Face API
Suggested Answer: E
Azure Video Indexer is a cloud application built on Azure Media Analytics, Azure Search, Cognitive Services (such as the Face API, Microsoft Translator, the
Computer Vision API, and Custom Speech Service). It enables you to extract insights from your videos by using Video Indexer video and audio models.
Face API enables you to search, identify, and match faces in your private repository of up to 1 million people.
The Face API now integrates emotion recognition, returning the confidence across a set of emotions for each face in the image such as anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. These emotions are understood to be cross-culturally and universally communicated with particular facial expressions.
Speech-to-text from Azure Speech Services enables real-time transcription of audio streams into text that your applications, tools, or devices can consume, display, and act on as command input. The service is powered by the same recognition technology that Microsoft uses for
Cortana and Office products, and works seamlessly with the translation and text-to-speech services.
Incorrect Answers:
Neither Computer Vision nor QnA Maker is required.
References: https://docs.microsoft.com/en-us/azure/media-services/video-indexer/video-indexer-overview https://azure.microsoft.com/en-us/services/cognitive-services/face/ https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text
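As an illustration of how the Face API's emotion scores would drive the "alert teachers" requirement, the snippet below processes a simplified, hypothetical detection result (the real wire format has more fields) and picks the dominant emotion:

```python
# Hypothetical, simplified shape of one face in a Face API detection result.
face = {
    "faceAttributes": {
        "emotion": {
            "anger": 0.72, "contempt": 0.01, "disgust": 0.02, "fear": 0.03,
            "happiness": 0.05, "neutral": 0.10, "sadness": 0.04, "surprise": 0.03,
        }
    }
}

def dominant_emotion(face_result):
    """Return the emotion with the highest confidence score."""
    emotions = face_result["faceAttributes"]["emotion"]
    return max(emotions, key=emotions.get)

def should_alert(face_result, threshold=0.5):
    """Alert the teacher when the anger confidence crosses a threshold."""
    return face_result["faceAttributes"]["emotion"]["anger"] >= threshold

print(dominant_emotion(face))  # anger
print(should_alert(face))      # True
```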
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are deploying an Azure Machine Learning model to an Azure Kubernetes Service (AKS) container.
You need to monitor the scoring accuracy of each run of the model.
Solution: You modify the Config.json file.
Does this meet the goal?
Your company plans to create a mobile app that will be used by employees to query the employee handbook.
You need to ensure that the employees can query the handbook by typing or by using speech.
Which core component should you use for the app?
A. Language Understanding (LUIS)
B. QnA Maker
C. Text Analytics
D. Azure Search
Suggested Answer: D
Azure Cognitive Search (formerly known as “Azure Search”) is a search-as-a-service cloud solution that gives developers APIs and tools for adding a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. Your code or a tool invokes data ingestion (indexing) to create and load an index. Optionally, you can add cognitive skills to apply AI processes during indexing. Doing so can add new information and structures useful for search and other scenarios.
Incorrect Answers:
B: QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your knowledge base, automatically.
References: https://docs.microsoft.com/en-us/azure/search/search-what-is-azure-search
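To make the indexing step concrete, here is a minimal sketch of an Azure Cognitive Search index definition built as a plain dict; the field names are a hypothetical schema for handbook sections, not the service's required layout:

```python
import json

def handbook_index_definition(name="employee-handbook"):
    """Sketch of a search index definition for handbook content.
    The field list is illustrative, not prescriptive."""
    return {
        "name": name,
        "fields": [
            {"name": "id", "type": "Edm.String", "key": True},
            {"name": "title", "type": "Edm.String", "searchable": True},
            {"name": "content", "type": "Edm.String", "searchable": True},
        ],
    }

print(json.dumps(handbook_index_definition(), indent=2))
```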
Your company has factories in 10 countries. Each factory contains several thousand IoT devices.
The devices present status and trending data on a dashboard.
You need to ingest the data from the IoT devices into a data warehouse.
Which two Microsoft Azure technologies should you use? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
You are developing a mobile application that will perform optical character recognition (OCR) from photos.
The application will annotate the photos by using metadata, store the photos in Azure Blob storage, and then score the photos by using an Azure Machine
Learning model.
What should you use to process the data?
You have thousands of images that contain text.
You need to process the text from the images to a machine-readable character stream.
Which Azure Cognitive Services service should you use?
You have developed an AI app that uses a Cognitive Services API to identify images.
You want all the processed images to be stored in an on-premises datacenter.
What should you do?
A. Use a Docker container to host the Cognitive Services API.
B. Use an Azure Stack Edge Pro to host the Cognitive Services API.
C. Use an Azure Private Endpoint to host the Cognitive Services API.
D. Use an Azure Data Box to host the Cognitive Services API.
Suggested Answer: B
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Reference: https://www.docker.com/resources/what-container
Q27)
You are developing a bot for an ecommerce application. The bot will support five languages.
The bot will use Language Understanding (LUIS) to detect the language of the customer, and QnA Maker to answer common customer questions. LUIS supports all the languages.
You need to determine the minimum number of Azure resources that you must create for the bot.
You create one instance of QnA Maker and one instance of Language Understanding (LUIS).
Does this action accomplish your objective?
A. Yes, it does –
B. No, it does not –
Suggested Answer: B
You need a separate QnA Maker resource for each language.
If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID, and endpoint log. If you need to provide language understanding for a language LUIS does not support, you can use Microsoft Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive the resulting scores.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support
Your company uses several bots. The bots use Azure Bot Service.
Several users report that some of the bots fail to return the expected results.
You plan to view the service health of the bot service.
You need to request the appropriate role to access the service health of the bot service. The solution must use the principle of least privilege.
Which role should you request?
A. The Contributor role on the Azure subscription
B. The Reader role on the bot service
C. The Owner role on the bot service
D. The Reader role on the Azure subscription
Suggested Answer: B
Use the Reader role on the bot service to limit both access and scope.
Note: Access management for cloud resources is a critical function for any organization that is using the cloud. Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
Azure includes several built-in roles that you can use. The Reader Role can view existing Azure resources.
Scope is the set of resources that the access applies to. When you assign a role, you can further limit the actions allowed by defining a scope. In Azure, you can specify a scope at multiple levels: management group, subscription, resource group, or resource.
Reference: https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
Your company plans to implement an AI solution that will analyze data from IoT devices.
Data from the devices will be analyzed in real time. The results of the analysis will be stored in a SQL database.
You need to recommend a data processing solution that uses the Transact-SQL language.
Which data processing solution should you recommend?
You are designing an Azure Batch AI solution that will perform image recognition. The solution will be used to train several Azure Machine Learning models.
You need to enable versioning for Azure Machine Learning models.
What should you do?
You are developing an app for a conference provider. The app will use speech-to-text to provide transcription at a conference in English. It will also use the
Translator Text API to translate the transcripts to the language preferred by the conference attendees.
You test the translation features on the app and discover that the translations are fairly poor.
You want to improve the quality of the translations.
Which of the following actions should you take?
A. Use Text Analytics to perform the translations.
B. Use the Language Understanding (LUIS) API to perform the translations.
C. Perform the translations by training a custom model using Custom Translator.
D. Use the Computer Vision API to perform the translations.
Suggested Answer: C
Custom Translator is a feature of the Microsoft Translator service. With Custom Translator, enterprises, app developers, and language service providers can build neural translation systems that understand the terminology used in their own business and industry. The customized translation system will then seamlessly integrate into existing applications, workflows and websites.
Custom Translator allows users to customize Microsoft Translator’s advanced neural machine translation for Translator’s supported neural translation languages.
Custom Translator can be used for customizing text when using the Microsoft Translator Text API, and speech translation using the Microsoft Speech services.
Reference: https://www.microsoft.com/en-us/translator/business/customization/
HOTSPOT -
You plan to deploy an Azure Data Factory pipeline that will perform the following:
✑ Move data from on-premises to the cloud.
✑ Consume Azure Cognitive Services APIs.
You need to recommend which technologies the pipeline should use. The solution must minimize custom code.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Suggested Answer:
Box 1: Self-hosted Integration Runtime
A self-hosted IR is capable of running copy activities between a cloud data store and a data store in a private network.
Not Azure-SSIS Integration Runtime, as you would need to write custom code.
Box 2: Azure Logic Apps –
Azure Logic Apps helps you orchestrate and integrate different services by providing 100+ ready-to-use connectors, ranging from on-premises SQL Server or SAP to Microsoft Cognitive Services.
Incorrect:
Not Azure API Management: Use Azure API Management as a turnkey solution for publishing APIs to external and internal customers.
References: https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-examples-and-scenarios
You need to evaluate trends in fuel prices during a period of 10 years. The solution must identify unusual fluctuations in prices and produce visual representations.
Which Azure Cognitive Services API should you use?
A. Anomaly Detector
B. Computer Vision
C. Text Analytics
D. Bing Autosuggest
Suggested Answer: A
The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning. The Anomaly Detector API adapts by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
References: https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview
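The API accepts a time series as JSON. Below is a sketch of the request body for batch (entire-series) detection, using made-up monthly fuel prices; the sensitivity value is an illustrative default, not a recommendation:

```python
import json

def detect_request_body(points, granularity="monthly", sensitivity=95):
    """Build a JSON body for Anomaly Detector batch detection.
    `points` is a list of (timestamp, value) pairs."""
    return {
        "series": [{"timestamp": ts, "value": v} for ts, v in points],
        "granularity": granularity,
        "sensitivity": sensitivity,
    }

prices = [
    ("2020-01-01T00:00:00Z", 1.42),
    ("2020-02-01T00:00:00Z", 1.45),
    ("2020-03-01T00:00:00Z", 2.90),  # a made-up spike to flag
]
body = detect_request_body(prices)
print(json.dumps(body, indent=2))
```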
You deploy an Azure bot.
You need to collect Key Performance Indicator (KPI) data from the bot. The type of data includes:
✑ The number of users interacting with the bot
✑ The number of messages interacting with the bot
✑ The number of messages on different channels received by the bot
✑ The number of users and messages continuously interacting with the bot
What should you configure?
You are developing an AI application for your company. The application uses batch processing to analyze data in JSON and PDF documents.
You want to store the JSON and PDF documents in Azure. You want to ensure data persistence while keeping costs at a minimum.
Which of the following actions should you take?
A. Make use of Azure Blob storage
B. Make use of Azure Cosmos DB
C. Make use of Azure Databricks
D. Make use of Azure Table storage
Suggested Answer: A
The following technologies are recommended choices for batch processing solutions in Azure.
Azure Storage Blob Containers. Many existing Azure business processes already use Azure blob storage, making this a good choice for a big data store.
Azure Data Lake Store. Azure Data Lake Store offers virtually unlimited storage for any size of file, and extensive security options, making it a good choice for extremely large-scale big data solutions that require a centralized store for data in heterogeneous formats.
Reference: https://docs.microsoft.com/en-us/azure/architecture/data-guide/big-data/batch-processing https://docs.microsoft.com/bs-latn-ba/azure/storage/blobs/storage-blobs-introduction
Your company has a data team of Scala and R experts.
You plan to ingest data from multiple Apache Kafka streams.
You need to recommend a processing technology to broker messages at scale from Kafka streams to Azure Storage.
What should you recommend?
A. Azure Databricks
B. Azure Functions
C. Azure HDInsight with Apache Storm
D. Azure HDInsight with Microsoft Machine Learning Server
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing an application that uses an Azure Kubernetes Service (AKS) cluster.
You are troubleshooting a node issue.
You need to connect to an AKS node by using SSH.
Solution: You create a managed identity for AKS, and then you create an SSH connection.
Does this meet the goal?
You are developing a Computer Vision application.
You plan to use a workflow that will load data from an on-premises database to Azure Blob storage, and then connect to an Azure Machine Learning service.
What should you use to orchestrate the workflow?
A. Azure Kubernetes Service (AKS)
B. Azure Pipelines
C. Azure Data Factory
D. Azure Container Instances
Suggested Answer: C
With Azure Data Factory you can use workflows to orchestrate data integration and data transformation processes at scale.
Build data integration, and easily transform and integrate big data processing and machine learning with the visual interface.
References: https://azure.microsoft.com/en-us/services/data-factory/
Your company has an Azure subscription that contains an Azure Active Directory (Azure AD) tenant.
Azure AD contains 500 user accounts for your company's employees. Some temporary employees do NOT have user accounts in Azure AD.
You are designing a storage solution for video files and metadata files.
You plan to deploy an application to perform analysis of the metadata files.
You need to recommend an authentication solution to provide links to the video files. The solution must provide access to each file for only five minutes.
What should you include in the recommendation?
Your company develops an API application that is orchestrated by using Kubernetes.
You need to deploy the application.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
HOTSPOT -
You plan to create a bot that will support five languages. The bot will be used by users located in three different countries. The bot will answer common customer questions. The bot will use Language Understanding (LUIS) to identify which skill to use and to detect the language of the customer.
You need to identify the minimum number of Azure resources that must be created for the planned bot.
How many QnA Maker, LUIS and Text Analytics instances should you create? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Suggested Answer:
QnA Maker: 5 –
If the user plans to support multiple languages, they need to have a new QnA Maker resource for each language.
LUIS: 5 –
If you need a multi-language LUIS client application such as a chatbot, you have a few options. If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID, and endpoint log. If you need to provide language understanding for a language LUIS does not support, you can use Microsoft Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive the resulting scores.
Language detection: 1 –
The Language Detection feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.
This capability is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score that reflects the confidence of the model. The score value is between 0 and 1.
The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional or cultural languages. The exact list of languages for this feature isn’t published.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/language-support https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection
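Tying language detection back to the per-language resources, a bot could parse the detection response and route each document to the matching knowledge base. The response shape below is simplified and hypothetical (the real wire format carries more fields):

```python
# Simplified shape of a Text Analytics language detection response.
response = {
    "documents": [
        {"id": "1", "detectedLanguage": {"name": "French", "iso6391Name": "fr",
                                         "confidenceScore": 0.99}},
        {"id": "2", "detectedLanguage": {"name": "English", "iso6391Name": "en",
                                         "confidenceScore": 0.97}},
    ]
}

def languages_by_document(resp, min_confidence=0.5):
    """Map document id -> ISO language code, skipping low-confidence results."""
    return {
        doc["id"]: doc["detectedLanguage"]["iso6391Name"]
        for doc in resp["documents"]
        if doc["detectedLanguage"]["confidenceScore"] >= min_confidence
    }

print(languages_by_document(response))  # {'1': 'fr', '2': 'en'}
```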
You are developing a Microsoft Bot Framework application. The application consumes structured NoSQL data that must be stored in the cloud.
You implement Azure Blob storage for the application. You want access to the blob store to be controlled by using a role.
You implement Azure Active Directory (Azure AD) integration on the storage account.
Does this action accomplish your objective?
A. Yes, it does
B. No, it does not
Suggested Answer: A
Azure Active Directory (Azure AD) integration for blobs and queues provides Azure role-based access control (Azure RBAC) for control over a client’s access to resources in a storage account.
Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-auth
HOTSPOT -
You are designing a solution that will ingest data from an Azure IoT Edge device, preprocess the data in Azure Machine Learning, and then move the data to
Azure HDInsight for further processing.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Suggested Answer:
Box 1: Export Data –
Use the Export data to Hive option of the Export Data module in Azure Machine Learning Studio. This option is useful when you are working with very large datasets and want to save your machine learning experiment data to a Hadoop cluster or HDInsight distributed storage.
Box 2: Apache Hive –
Apache Hive is a data warehouse system for Apache Hadoop. Hive enables data summarization, querying, and analysis of data. Hive queries are written in
HiveQL, which is a query language similar to SQL.
Box 3: Azure Data Lake –
Default storage for the HDFS file system of HDInsight clusters can be associated with either an Azure Storage account or an Azure Data Lake Storage.
References: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/export-to-hive-query https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/hdinsight-use-hive
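HiveQL reads much like SQL. Composing a summarization query over the exported sensor data (the table and column names here are hypothetical) could be sketched as:

```python
def summarize_query(table, value_col, group_col):
    """Compose a HiveQL summarization query as a string. Standard
    SQL aggregate syntax applies in HiveQL."""
    return (
        f"SELECT {group_col}, AVG({value_col}) AS avg_value "
        f"FROM {table} GROUP BY {group_col}"
    )

print(summarize_query("sensor_readings", "temperature", "device_id"))
```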
HOTSPOT -
You plan to build an app that will provide users with the ability to dictate messages and convert the messages into text.
You need to recommend a solution to meet the following requirements for the app:
✑ Must be able to transcribe streaming dictated messages that are longer than 15 seconds.
✑ Must be able to upload existing recordings to Azure Blob storage to be transcribed later.
Which solution should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Suggested Answer:
Box 1: The Speech SDK –
The Speech SDK is not limited to 15 seconds.
Box 2: Batch Transcription API –
Batch transcription is a set of REST API operations that enables you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. With the new v3.0 API, you have the choice of transcribing one or more audio files, or processing a whole storage container.
Asynchronous speech-to-text transcription is just one of the features.
Reference: https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/13 https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/batch-transcription
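A sketch of a v3.0 batch transcription request body, pointing at recordings in Blob storage via SAS URIs; the URI and display name below are placeholders, and the real API accepts additional properties:

```python
import json

def batch_transcription_body(content_urls, locale="en-US",
                             name="dictation-batch"):
    """Build a simplified request body for Speech service v3.0 batch
    transcription. `content_urls` are SAS URIs to audio blobs."""
    return {
        "contentUrls": list(content_urls),
        "locale": locale,
        "displayName": name,
    }

body = batch_transcription_body(
    ["https://example.blob.core.windows.net/audio/msg1.wav?sv=..."])
print(json.dumps(body, indent=2))
```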
DRAG DROP -
You are designing an Azure Batch AI solution that will be used to train many different Azure Machine Learning models. The solution will perform the following:
✑ Image recognition
✑ Deep learning that uses convolutional neural networks.
You need to select a compute infrastructure for each model. The solution must minimize the processing time.
What should you use for each model? To answer, drag the appropriate compute infrastructures to the correct models. Each compute infrastructure may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
You are designing a business application that will use Azure Cognitive Services to parse images of business forms. You have the following requirements:
✑ Parsed image data must be uploaded to Azure Storage once a week.
✑ The solution must minimize infrastructure costs.
What should you do?
A. Use Azure API Apps to upload the data.
B. Use Azure Bot Service to upload the data.
C. Use Azure Data Factory (ADF) to upload the data.
HOTSPOT -
Your company plans to build an app that will perform the following tasks:
✑ Match a user's picture to a picture of a celebrity.
✑ Tag a scene from a movie, and then search for movie scenes by using the tags.
You need to recommend which Azure Cognitive Services APIs must be used to perform the tasks.
Which Cognitive Services API should you recommend for each task? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Suggested Answer:
Box 1: Computer Vision –
Azure’s Computer Vision service provides developers with access to advanced algorithms that process images and return information.
Computer Vision Detect Faces: Detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates of the face rectangle, along with gender and age, for each detected face.
Computer Vision provides a subset of the Face service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection.
Box 2: Bing Video Search –
Search for videos and get comprehensive results
With Bing Video Search API v7, find videos across the web. Results provide useful metadata including creator, encoding format, video length, view count, improved & simplified paging, and more.
Incorrect Answers:
Video Indexer:
Automatically extract metadata, such as spoken words, written text, faces, speakers, celebrities, emotions, topics, brands, and scenes, from video and audio files.
Custom Vision:
Easily customize your own state-of-the-art computer vision models for your unique use case. Just upload a few labeled images and let Custom Vision Service do the hard work. With just one click, you can export trained models to be run on device or as Docker containers.
References: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home https://azure.microsoft.com/en-us/services/cognitive-services/bing-video-search-api/
DRAG DROP -
You need to build an AI solution that will be shared between several developers and customers.
You plan to write code, host code, and document the runtime all within a single user experience.
You build the environment to host the solution.
Which three actions should you perform in sequence next? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Suggested Answer:
Step 1: Create an Azure Machine Learning Studio workspace
Step 2: Create a notebook –
You can manage notebooks using the UI, the CLI, and by invoking the Workspace API.
To create a notebook –
1. Click the Workspace button Workspace Icon or the Home button Home Icon in the sidebar. Do one of the following:
Next to any folder, click the Menu Dropdown on the right side of the text and select Create > Notebook. Create Notebook
In the Workspace or a user folder, click Down Caret and select Create > Notebook.
2. In the Create Notebook dialog, enter a name and select the notebook’s primary language.
3. If there are running clusters, the Cluster drop-down displays. Select the cluster to attach the notebook to.
4. Click Create.
Step 3: Create a new experiment –
Create a new experiment by clicking +NEW at the bottom of the Machine Learning Studio window. Select EXPERIMENT > Blank Experiment.
References: https://docs.azuredatabricks.net/user-guide/notebooks/notebook-manage.html https://docs.microsoft.com/en-us/azure/machine-learning/service/quickstart-run-cloud-notebook
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to create an IoT solution that performs the following tasks:
✑ Identifies hazards
✑ Provides a real-time online dashboard
✑ Takes images of an area every minute
✑ Counts the number of people in an area every minute
Solution: You configure the IoT devices to send the images to an Azure IoT hub, and then you configure an Azure Automation call to Azure Cognitive Services that sends the results to an Azure event hub. You configure Microsoft Power BI to connect to the event hub by using Azure Stream Analytics.
Does this meet the goal?
Your company has recently deployed 5,000 Internet-connected sensors for a planned AI solution.
You need to recommend a computing solution to perform a real-time analysis of the data generated by the sensors.
Which computing solution should you recommend?
HOTSPOT -
You are designing a solution that will analyze bank transactions in real time. The transactions will be evaluated by using an algorithm and classified into one of five groups. The transaction data will be enriched with information taken from Azure SQL Database before the transactions are sent to the classification process. The enrichment process will require custom code. Data from different banks will require different stored procedures.
You need to develop a pipeline for the solution.
Which components should you use for data ingestion and data preparation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure SQL database, an Azure Data Lake Storage Gen 2 account, and an API developed by using Azure Machine Learning Studio.
You need to ingest data once daily from the database, score each row by using the API, and write the data to the storage account.
Solution: You create an Azure Data Factory pipeline that contains the Machine Learning Batch Execution activity.
Does this meet the goal?
You are developing a bot for an ecommerce application. The bot will support five languages.
The bot will use Language Understanding (LUIS) to detect the language of the customer, and QnA Maker to answer common customer questions. LUIS supports all the languages.
You need to determine the minimum number of Azure resources that you must create for the bot.
You create one instance of QnA Maker and five instances of Language Understanding (LUIS).
Does this action accomplish your objective?
You need to design the Butler chatbot solution to meet the technical requirements.
What is the best channel and pricing tier to use? More than one answer choice may achieve the goal. Select the BEST answer.
A. Standard channels that use the S1 pricing tier
B. Standard channels that use the Free pricing tier
C. Premium channels that use the Free pricing tier
Your company has recently purchased and deployed 25,000 IoT devices.
You need to recommend a data analysis solution for the devices that meets the following requirements:
✑ Each device must use its own credentials for identity.
✑ Each device must be able to route data to multiple endpoints.
✑ The solution must require the minimum amount of customized code.
What should you include in the recommendation?
A. Microsoft Azure Notification Hubs
B. Microsoft Azure Event Hubs
C. Microsoft Azure IoT Hub
D. Microsoft Azure Service Bus
Suggested Answer: C
An IoT hub has a default built-in endpoint. You can create custom endpoints to route messages to by linking other services in your subscription to the hub.
Individual devices connect using credentials stored in the IoT hub’s identity registry.
References: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-security
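To illustrate the routing behavior described above, the sketch below simulates how an IoT Hub routing query such as `temperature > 40` is evaluated against the application properties of a device-to-cloud message. This is a local, illustrative model only, not the IoT Hub SDK; the property names and threshold are hypothetical.

```python
# Illustrative sketch: mimics an IoT Hub routing condition (e.g. "temperature > 40")
# being evaluated against a message's application properties. Property names and
# values below are made up for the example.

def route_matches(app_properties: dict, prop: str, threshold: float) -> bool:
    """Return True if the named message property exceeds the threshold,
    the way a routing condition would select a message for a custom endpoint."""
    value = app_properties.get(prop)
    return value is not None and float(value) > threshold

# A simulated device-to-cloud message with application properties.
message_properties = {"temperature": "47.5", "deviceId": "sensor-0042"}

assert route_matches(message_properties, "temperature", 40)      # routed to the custom endpoint
assert not route_matches(message_properties, "temperature", 50)  # falls through to the default endpoint
```

In the real service, each device would authenticate with its own credentials from the identity registry, and the routing query would be configured on the hub rather than evaluated in application code.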
You plan to design a bot that will be hosted by using Azure Bot Service.
Your company identifies the following compliance requirements for the bot:
✑ Payment Card Industry Data Security Standards (PCI DSS)
✑ General Data Protection Regulation (GDPR)
✑ ISO 27001
You need to identify which compliance requirements are met by hosting the bot in the bot service.
What should you identify?
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses the Face API.
App1 contains several PersonGroup objects.
You discover that a PersonGroup object for an individual named Ben Smith cannot accept additional entries. The PersonGroup object for Ben Smith contains
10,000 entries.
You need to ensure that additional entries can be added to the PersonGroup object for Ben Smith. The solution must ensure that Ben Smith can be identified by all the entries.
Solution: You delete 1,000 entries from the PersonGroup object for Ben Smith.
Does this meet the goal?
You need to create a prototype of a bot to demonstrate a user performing a task. The demonstration will use the Bot Framework Emulator.
Which botbuilder CLI tool should you use to create the prototype?
A. Chatdown
B. QnAMaker
C. Dispatch
D. LuDown
Suggested Answer: A
Use Chatdown to produce prototype mock conversations in markdown and convert the markdown to transcripts you can load and view in the new V4 Bot
Framework Emulator.
Incorrect Answers:
B: QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your knowledge base, automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.
C: Dispatch lets you build language models that allow you to dispatch between disparate components (such as QnA, LUIS and custom code).
D: LuDown builds LUIS language understanding models using markdown files.
References: https://github.com/microsoft/botframework/blob/master/README.md
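Chatdown consumes a .chat markdown file and emits a .transcript file that the Bot Framework Emulator can open. A minimal, hypothetical .chat script might look like this (participant names and dialogue are invented for the example):

```
user=Customer
bot=ShopBot

user: I want to buy a laptop
bot: Sure! What is your budget?
user: Around $800
bot: Here are three laptops under $800...
```

Running `chatdown sample.chat > sample.transcript` (file names assumed) then produces a transcript you can load into the Emulator to demonstrate the mocked conversation.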
You are building an Azure Analysis Services cube for your AI deployment.
The source data for the cube is located in an on-premises network in a Microsoft SQL Server database.
You need to ensure that the Azure Analysis Services service can access the source data.
What should you deploy to your Azure subscription?
A. a site-to-site VPN
B. a data gateway
C. Azure Data Factory
D. a network gateway
Suggested Answer: B
Since April 2017, you can use the on-premises data gateway with Azure Analysis Services. This means you can connect tabular models hosted in Azure Analysis Services to your on-premises data sources through the on-premises data gateway.
References: https://biinsight.com/on-premises-data-gateway-for-azure-analysis-services/
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You create several API models in Azure Machine Learning Studio.
You deploy the models to a production environment.
You need to monitor the compute performance of the models.
Solution: You create environment files.
Does this meet the goal?
Your company has a data team of Transact-SQL experts.
You plan to ingest data from multiple sources into Azure Event Hubs.
You need to recommend which technology the data team should use to move and query data from Event Hubs to Azure Storage. The solution must leverage the data team's existing skills.
What is the best recommendation to achieve the goal? More than one answer choice may achieve the goal.
A. Azure Notification Hubs
B. Azure Event Grid
C. Apache Kafka streams
D. Azure Stream Analytics
Suggested Answer: D
Azure Stream Analytics uses a SQL-based query language, so a team of Transact-SQL experts can move and query data from Event Hubs to Azure Storage with minimal new learning. A Stream Analytics job reads directly from an Event Hubs input and writes to an Azure Blob storage or Azure Data Lake Storage output, all defined declaratively in the query.
Incorrect Answers:
B: Event Grid by itself does not move or query data. Event Hubs Capture can automatically deliver streamed data in Event Hubs to Azure Blob storage or Azure Data Lake Store, and an Event Grid subscription can then trigger an Azure function whenever a new Avro file is captured: for example, data generated by WindTurbineGenerator is streamed into the event hub and automatically captured into Azure Storage as Avro files, after which Event Grid notifies the Azure function with the blob URI and the function migrates the data to a SQL data warehouse. This approach, however, requires custom code rather than Transact-SQL.
References: https://docs.microsoft.com/en-us/azure/event-hubs/store-captured-data-data-warehouse
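Because Azure Stream Analytics (option D) exposes a SQL-like query language, a Transact-SQL team could express the move-and-query step declaratively. The sketch below is hypothetical: the input alias SensorHub (an Event Hubs input) and the output alias ArchiveBlob (a Blob storage output) would be configured on the job, and the field names are assumptions.

```sql
-- Hypothetical job: aggregate per-device readings from an Event Hubs input
-- and archive the results to a Blob storage output every minute.
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp() AS windowEnd
INTO ArchiveBlob
FROM SensorHub TIMESTAMP BY eventEnqueuedUtcTime
GROUP BY deviceId, TumblingWindow(minute, 1)
```

Everything here is familiar T-SQL apart from the streaming extensions (TIMESTAMP BY and the windowing function), which is why this option best leverages the team's existing skills.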
After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You create several AI models in Azure Machine Learning Studio.
You deploy the models to a production environment.
You need to monitor the compute performance of the models.
Solution: You enable Model data collection.
Does this meet the goal?
You are designing an AI system for your company. Your system will consume several Apache Kafka data streams.
You want your system to be able to process the data streams at scale and in real-time.
Which of the following actions should you take?
A. Make use of Azure HDInsight with Apache HBase
B. Make use of Azure HDInsight with Apache Spark
C. Make use of Azure HDInsight with Apache Storm
D. Make use of Azure HDInsight with Microsoft Machine Learning Server
HOTSPOT -
You need to build an interactive website that will accept uploaded images, and then ask a series of predefined questions based on each image.
Which services should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Suggested Answer:
Box 1: Azure Bot Service –
Box 2: Computer Vision –
The Computer Vision Analyze Image feature returns information about visual content found in an image. Use tagging, domain-specific models, and descriptions in four languages to identify content and label it with confidence. Use object detection to get the location of thousands of objects within an image. Apply the adult/racy settings to help detect potential adult content, and identify image types and color schemes in pictures.
References: https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/
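The website described above would call the Analyze Image endpoint and then work with the JSON result. The sketch below parses such a result: the shape follows the documented Computer Vision analyze schema (description.captions, tags, objects), but the sample values are made up for illustration.

```python
# Sketch of handling a Computer Vision "Analyze Image" response. The JSON shape
# follows the documented analyze schema; the sample values are hypothetical.

def summarize_analysis(result: dict) -> dict:
    """Pull the best caption, the confident tags, and a person count
    out of an analyze result."""
    captions = result.get("description", {}).get("captions", [])
    best_caption = max(captions, key=lambda c: c["confidence"])["text"] if captions else None
    tags = [t["name"] for t in result.get("tags", []) if t["confidence"] > 0.8]
    people = sum(1 for o in result.get("objects", []) if o.get("object") == "person")
    return {"caption": best_caption, "tags": tags, "people": people}

# A made-up response for illustration.
sample = {
    "description": {"captions": [{"text": "a person using a laptop", "confidence": 0.92}]},
    "tags": [{"name": "person", "confidence": 0.99}, {"name": "indoor", "confidence": 0.65}],
    "objects": [{"object": "person", "confidence": 0.91}],
}

summary = summarize_analysis(sample)
assert summary == {"caption": "a person using a laptop", "tags": ["person"], "people": 1}
```

A summary like this could drive the predefined follow-up questions the site asks about each uploaded image.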
You build an internal application that uses the Computer Vision API.
You need to ensure that only specific employees can access the application.
What should you include in the solution?
A. a single-service subscription key
B. user principals in Azure Active Directory (Azure AD)
C. service principals in Azure Active Directory (Azure AD)
Looking for additional practice? Click here to access a full set of AI-100 practice exam free questions and continue building your skills across all exam domains.
Our question sets are updated regularly to ensure they stay aligned with the latest exam objectives—so be sure to visit often!