
DP-200 Mock Test Free

DP-200 Mock Test Free – 50 Realistic Questions to Prepare with Confidence.

Getting ready for your DP-200 certification exam? Start your preparation the smart way with our DP-200 Mock Test Free – a carefully crafted set of 50 realistic, exam-style questions to help you practice effectively and boost your confidence.

Using a free DP-200 mock test is one of the best ways to:

  • Familiarize yourself with the actual exam format and question style
  • Identify areas where you need more review
  • Strengthen your time management and test-taking strategy

Below, you will find 50 free questions from our DP-200 Mock Test Free resource. These questions are structured to reflect the real exam’s difficulty and content areas, helping you assess your readiness accurately.

Question 1

DRAG DROP -
You have an ASP.NET web app that uses an Azure SQL database. The database contains a table named Employee. The table contains sensitive employee information, including a column named DateOfBirth.
You need to ensure that the data in the DateOfBirth column is encrypted both in the database and when transmitted between a client and Azure. Only authorized clients must be able to view the data in the column.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted

Question 2

HOTSPOT -
A company is planning to use Microsoft Azure Cosmos DB as the data store for an application. You have the following Azure CLI command: az cosmosdb create --name "cosmosdbdev1" --resource-group "rgdev"
You need to minimize latency and expose the SQL API. How should you complete the command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: Eventual –
With Azure Cosmos DB, developers can choose from five well-defined consistency models on the consistency spectrum. From strongest to more relaxed, the models include strong, bounded staleness, session, consistent prefix, and eventual consistency.
The following image shows the different consistency levels as a spectrum.
Reference Image
Box 2: GlobalDocumentDB –
Select Core(SQL) to create a document database and query by using SQL syntax.
Note: The API determines the type of account to create. Azure Cosmos DB provides five APIs: Core(SQL) and MongoDB for document databases, Gremlin for graph databases, Azure Table, and Cassandra.
References:
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
https://docs.microsoft.com/en-us/azure/cosmos-db/create-sql-api-dotnet

Question 3

Overview -
XYZ is an online training provider.
Current Environment -
The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners and external distributors.
Below is the list of applications, tiers, and their individual requirements:
 Image
Below are the current requirements of the company:
* For Tier 4 and Tier 5 databases, the backup strategy must include the following:
- Transactional log backup every hour
- Differential backup every day
- Full backup every week
* Backup strategies must be in place for all standalone Azure SQL databases using methods available with Azure SQL databases
* Tier 1 database must implement the following data masking logic:
- For Data type XYZ-A – Mask 4 or fewer string data type characters
- For Data type XYZ-B – Expose the first letter and mask the domain
- For Data type XYZ-C – Mask everything except characters at the beginning and the end
* All certificates and keys are internally managed in on-premises data stores
* For Tier 2 databases, if there are any conflicts between the data transfer from on-premises, preference should be given to on-premises data.
* Monitoring must be set up on every database
* Applications with Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to IT data engineers.
* Azure SQL Data warehouse would be used to gather data from multiple internal and external databases.
* The Azure SQL Data warehouse must be optimized to use data from its cache
* The below metrics must be available when it comes to the cache:
- Metric XYZ-A – Low cache hit %, high cache usage %
- Metric XYZ-B – Low cache hit %, low cache usage %
- Metric XYZ-C – High cache hit %, high cache usage %
* The reporting data for external partners must be stored in Azure storage. The data should be made available during regular business hours in connecting regions.
* The reporting for Tier 9 needs to be moved to Event Hubs.
* The reporting for Tier 10 needs to be moved to Azure Blobs.
The following issues have been identified in the setup:
* The external partners have control over the data formats, types, and schemas.
* For external clients, the queries can't be changed or optimized.
* The database development staff are familiar with T-SQL language.
* Because of the size and amount of data, some applications and reporting features are not performing at SLA levels.
The data for the external applications needs to be encrypted at rest. You decide to implement the following steps:
- Use the Always Encrypted Wizard in SQL Server Management Studio
- Select the column that needs to be encrypted
- Set the encryption type to Randomized
- Configure the master key to be used from the Windows Certificate Store
- Confirm the configuration and deploy the solution
Would these steps fulfill the requirement?

A. Yes

B. No

 


Suggested Answer: B

As per the documentation, the encryption type needs to be set to Deterministic when enabling Always Encrypted:
Column Selection –
Click Next on the Introduction page to open the Column Selection page. On this page, you will select which columns you want to encrypt, the type of encryption, and what column encryption key (CEK) to use.
Encrypt SSN and BirthDate information for each patient. The SSN column will use deterministic encryption, which supports equality lookups, joins, and group by.
The BirthDate column will use randomized encryption, which does not support operations.
Set the Encryption Type for the SSN column to Deterministic and the BirthDate column to Randomized. Click Next.
Reference Image
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted

Question 4

You have to deploy resources on Azure HDInsight for a batch processing job. The batch processing must run daily and must scale to minimize costs. You must also be able to monitor cluster performance.
You need to decide on a tool that will monitor the clusters and provide suggestions on how to scale.
You decide to monitor the cluster load by using the Ambari Web UI.
Would this fulfill the requirement?

A. Yes

B. No

 


Suggested Answer: A

Yes, this will give you a good idea of the load on the Azure HDInsight cluster.
The Microsoft documentation mentions the following:
Monitor cluster load –
Hadoop clusters can deliver the most optimal performance when the load on cluster is evenly distributed across all the nodes. This enables the processing tasks to run without being constrained by RAM, CPU, or disk resources on individual nodes.
To get a high-level look at the nodes of your cluster and their loading, sign in to the Ambari Web UI, then select the Hosts tab. Your hosts are listed by their fully qualified domain names. Each host’s operating status is shown by a colored health indicator:
Reference Image
Reference:
https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-key-scenarios-to-monitor

Question 5

You create an Azure Databricks cluster and specify an additional library to install.
When you attempt to load the library to a notebook, the library is not found.
You need to identify the cause of the issue.
What should you review?

A. workspace logs

B. notebook logs

C. global init scripts logs

D. cluster event logs

 


Suggested Answer: C

Cluster-scoped Init Scripts: Init scripts are shell scripts that run during the startup of each cluster node before the Spark driver or worker JVM starts. Databricks customers use init scripts for various purposes such as installing custom libraries, launching background processes, or applying enterprise security policies.
Logs for Cluster-scoped init scripts are now more consistent with Cluster Log Delivery and can be found in the same root folder as driver and executor logs for the cluster.
Reference:
https://databricks.com/blog/2018/08/30/introducing-cluster-scoped-init-scripts.html

Question 6

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to configure data encryption for external applications.
Solution:
1. Access the Always Encrypted Wizard in SQL Server Management Studio
2. Select the column to be encrypted
3. Set the encryption type to Randomized
4. Configure the master key to use the Windows Certificate Store
5. Validate configuration results and deploy the solution
Does the solution meet the goal?

A. Yes

B. No

 


Suggested Answer: B

Use the Azure Key Vault, not the Windows Certificate Store, to store the master key as it must be used by external applications.
Note: The Master Key Configuration page is where you set up your CMK (Column Master Key) and select the key store provider where the CMK will be stored.
Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM).
Reference Image
However, if you use the Windows Certificate Store for external applications to use the key, the external application must run on the same computer where you ran the Always Encrypted wizard, or you must deploy the Always Encrypted certificates to the computer running the external application.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault
https://docs.microsoft.com/en-us/azure/azure-sql/database/always-encrypted-certificate-store-configure
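For reference, the Always Encrypted wizard ultimately emits this configuration as T-SQL key metadata and column definitions. The following is a minimal sketch, assuming a hypothetical table, key names, and key vault path (none of which come from the scenario), of what a column master key stored in Azure Key Vault and a deterministically encrypted column look like; the real ENCRYPTED_VALUE is generated by the tooling.

CREATE COLUMN MASTER KEY CMK_AKV
WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',
    KEY_PATH = 'https://contoso-vault.vault.azure.net/keys/AlwaysEncryptedCMK/1234abcd'
);

CREATE COLUMN ENCRYPTION KEY CEK_AKV
WITH VALUES (
    COLUMN_MASTER_KEY = CMK_AKV,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075 -- placeholder; the wizard generates the real value
);

-- Deterministic encryption supports equality lookups, joins, and GROUP BY from clients.
CREATE TABLE dbo.SensitiveData (
    RecordId INT PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_AKV,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);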

Question 7

You plan to perform batch processing in Azure Databricks once daily.
Which type of Databricks cluster should you use?

A. automated

B. interactive

C. High Concurrency

 


Suggested Answer: A

Azure Databricks has two types of clusters: interactive and automated. You use interactive clusters to analyze data collaboratively with interactive notebooks. You use automated clusters to run fast and robust automated jobs.
Example: Scheduled batch workloads (data engineers running ETL jobs)
This scenario involves running batch job JARs and notebooks on a regular cadence through the Databricks platform.
The suggested best practice is to launch a new cluster for each run of critical jobs. This helps avoid any issues (failures, missing SLA, and so on) due to an existing workload (noisy neighbor) on a shared cluster.
Reference:
https://docs.databricks.com/administration-guide/cloud-configurations/aws/cmbp.html#scenario-3-scheduled-batch-workloads-data-engineers-running-etl-jobs

Question 8

A company has a real-time data analysis solution that is hosted on Microsoft Azure. The solution uses Azure Event Hub to ingest data and an Azure Stream Analytics cloud job to analyze the data. The cloud job is configured to use 120 Streaming Units (SU).
You need to optimize performance for the Azure Stream Analytics job.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Implement event ordering

B. Scale the SU count for the job up

C. Implement Azure Stream Analytics user-defined functions (UDF)

D. Scale the SU count for the job down

E. Implement query parallelization by partitioning the data output

F. Implement query parallelization by partitioning the data input

 


Suggested Answer: BF

Scale out the query by allowing the system to process each input partition separately.
F: A Stream Analytics job definition includes inputs, a query, and output. Inputs are where the job reads the data stream from.
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
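To illustrate option F, the sketch below shows an embarrassingly parallel query; the EventHubInput and BlobOutput aliases and the one-minute window are illustrative, not part of the question. Keeping PARTITION BY aligned with the Event Hubs input partitions lets each partition be processed independently, so the extra Streaming Units from option B can actually be used.

SELECT
    PartitionId,
    COUNT(*) AS EventCount
INTO
    BlobOutput
FROM
    EventHubInput
PARTITION BY PartitionId
GROUP BY
    PartitionId,
    TumblingWindow(minute, 1)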

Question 9

DRAG DROP -
You implement an event processing solution using Microsoft Azure Stream Analytics.
The solution must meet the following requirements:
✑ Ingest data from Blob storage
✑ Analyze data in real time
✑ Store processed data in Azure Cosmos DB
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Configure Blob storage as input; select items with the TIMESTAMP BY clause
The default timestamp of Blob storage events in Stream Analytics is the timestamp that the blob was last modified, which is BlobLastModifiedUtcTime. To process the data as a stream using a timestamp in the event payload, you must use the TIMESTAMP BY keyword.
Example:
The following is a TIMESTAMP BY example which uses the EntryTime column as the application time for events:
SELECT TollId, EntryTime AS VehicleEntryTime, LicensePlate, State, Make, Model, VehicleType, VehicleWeight, Toll, Tag
FROM TollTagEntry TIMESTAMP BY EntryTime
Step 2: Set up cosmos DB as the output
Creating Cosmos DB as an output in Stream Analytics generates a prompt for information as seen below.
Reference Image
Step 3: Create a query statement with the SELECT INTO statement.
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-inputs
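Putting the three steps together, the job query might look like the sketch below; the BlobInput and CosmosDbOutput aliases, the EventTime payload field, and the aggregation are illustrative. TIMESTAMP BY switches the event time from BlobLastModifiedUtcTime to the payload timestamp, and SELECT INTO routes the results to the Cosmos DB output.

SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature,
    System.Timestamp() AS WindowEnd
INTO
    CosmosDbOutput
FROM
    BlobInput TIMESTAMP BY EventTime
GROUP BY
    DeviceId,
    TumblingWindow(minute, 5)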

Question 10

DRAG DROP -
You develop data engineering solutions for a company.
A project requires analysis of real-time Twitter feeds. Posts that contain specific keywords must be stored and processed on Microsoft Azure and then displayed by using Microsoft Power BI. You need to implement the solution.
Which five actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Create an HDInsight cluster with the Spark cluster type
Step 2: Create a Jupyter Notebook
Step 3: Create a table –
The Jupyter Notebook that you created in the previous step includes code to create an hvac table.
Step 4: Run a job that uses the Spark Streaming API to ingest data from Twitter
Step 5: Load the hvac table into Power BI Desktop
You use Power BI to create visualizations, reports, and dashboards from the Spark cluster data.
References:
https://acadgild.com/blog/streaming-twitter-data-using-spark

https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-use-with-data-lake-store

Question 11

HOTSPOT -
You have a new Azure Data Factory environment.
You need to periodically analyze pipeline executions from the last 60 days to identify trends in execution durations. The solution must use Azure Log Analytics to query the data and create charts.
Which diagnostic settings should you configure in Data Factory? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Log type: PipelineRuns –
A pipeline run in Azure Data Factory defines an instance of a pipeline execution.
Storage location: An Azure Storage account
Data Factory stores pipeline-run data for only 45 days. Use Monitor if you want to keep that data for a longer time. With Monitor, you can route diagnostic logs for analysis. You can also keep them in a storage account so that you have factory information for your chosen duration.
Save your diagnostic logs to a storage account for auditing or manual inspection. You can use the diagnostic settings to specify the retention time in days.
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers
https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor

Question 12

A company has an Azure SQL data warehouse. They want to use PolyBase to retrieve data from an Azure Blob storage account and ingest it into the Azure SQL data warehouse. The files are stored in Parquet format. The data needs to be loaded into a table called XYZ_sales.
Which of the following actions need to be performed to implement this requirement? (Choose four.)

A. Create an external file format that would map to the parquet-based files

B. Load the data into a staging table

C. Create an external table called XYZ_sales_details

D. Create an external data source for the Azure Blob storage account

E. Create a master key on the database

F. Configure Polybase to use the Azure Blob storage account

 


Suggested Answer: BCDE

There is an article on GitHub, as part of the Microsoft documentation, that provides details on how to load data into an Azure SQL data warehouse from an Azure Blob storage account. The key steps are:
Creating a master key in the database.
Creating an external data source for the Azure Blob storage account:
3. Create a master key for the MySampleDataWarehouse database. You only need to create a master key once per database.
CREATE MASTER KEY;
4. Run the following CREATE EXTERNAL DATA SOURCE statement to define the location of the Azure blob. This is the location of the external taxi cab data. To run a command that you have appended to the query window, highlight the commands you wish to run and click Execute.
Reference Image
Next you load the data. But it is always beneficial to load the data into a staging table first:
Load the data into your data warehouse.
This section uses the external tables you just defined to load the sample data from Azure Storage Blob to SQL Data Warehouse.
[!NOTE] This tutorial loads the data directly into the final table. In a production environment, you will usually use CREATE TABLE AS SELECT to load into a staging table. While data is in the staging table you can perform any necessary transformations. To append the data in the staging table to a production table, you can use the INSERT…SELECT statement. For more information, see Inserting data into a production table.
Since this is clearly provided in the documentation, all other options are incorrect.
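As a rough illustration of how a PolyBase load from Blob storage is usually written, consider the T-SQL sketch below; the storage account, container, credential secret, column list, and distribution settings are placeholders rather than values from the question.

CREATE MASTER KEY;  -- once per database

CREATE DATABASE SCOPED CREDENTIAL BlobStorageCredential
WITH IDENTITY = 'user', SECRET = '<storage-account-access-key>';

CREATE EXTERNAL DATA SOURCE XYZBlobStorage
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://sales@xyzstorageaccount.blob.core.windows.net',
    CREDENTIAL = BlobStorageCredential
);

CREATE EXTERNAL FILE FORMAT ParquetFileFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE dbo.XYZ_sales_details (
    SaleId      INT,
    ProductName NVARCHAR(100),
    SaleAmount  DECIMAL(18, 2)
)
WITH (
    LOCATION = '/sales/',
    DATA_SOURCE = XYZBlobStorage,
    FILE_FORMAT = ParquetFileFormat
);

-- Load into a staging table first, then move the rows into XYZ_sales.
CREATE TABLE dbo.XYZ_sales_staging
WITH (DISTRIBUTION = ROUND_ROBIN)
AS SELECT * FROM dbo.XYZ_sales_details;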

Question 13

DRAG DROP -
You have an Azure Stream Analytics job that is a Stream Analytics project solution in Microsoft Visual Studio. The job accepts data generated by IoT devices in the JSON format.
You need to modify the job to accept data generated by the IoT devices in the Protobuf format.
Which three actions should you perform from Visual Studio in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Add an Azure Stream Analytics Custom Deserializer Project (.NET) project to the solution.
Create a custom deserializer –
1. Open Visual Studio and select File > New > Project. Search for Stream Analytics and select Azure Stream Analytics Custom Deserializer Project (.NET). Give the project a name, like Protobuf Deserializer.
Reference Image
2. In Solution Explorer, right-click your Protobuf Deserializer project and select Manage NuGet Packages from the menu. Then install the
Microsoft.Azure.StreamAnalytics and Google.Protobuf NuGet packages.
3. Add the MessageBodyProto class and the MessageBodyDeserializer class to your project.
4. Build the Protobuf Deserializer project.
Step 2: Add .NET deserializer code for Protobuf to the custom deserializer project
Azure Stream Analytics has built-in support for three data formats: JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as Protocol Buffer, Bond and other user defined formats for both cloud and edge jobs.
Step 3: Add an Azure Stream Analytics Application project to the solution
Add an Azure Stream Analytics project
1. In Solution Explorer, right-click the Protobuf Deserializer solution and select Add > New Project. Under Azure Stream Analytics > Stream Analytics, choose
Azure Stream Analytics Application. Name it ProtobufCloudDeserializer and select OK.
2. Right-click References under the ProtobufCloudDeserializer Azure Stream Analytics project. Under Projects, add Protobuf Deserializer. It should be automatically populated for you.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/custom-deserializer

Question 14

Note: This question is a part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You develop a data ingestion process that will import data to an enterprise data warehouse in Azure Synapse Analytics. The data to be ingested resides in parquet files stored in an Azure Data Lake Gen 2 storage account.
You need to load the data from the Azure Data Lake Gen 2 storage account into the Data Warehouse.
Solution:
1. Create an external data source pointing to the Azure Data Lake Gen 2 storage account
2. Create an external file format and external table using the external data source
3. Load the data using the CREATE TABLE AS SELECT statement
Does the solution meet the goal?

A. Yes

B. No

 


Suggested Answer: A

You need to create an external file format and external table using the external data source.
You load the data using the CREATE TABLE AS SELECT statement.
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store
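A minimal sketch of this solution follows, assuming a hypothetical Data Lake Gen 2 account accessed through a managed identity and an illustrative column list; only the overall pattern (external data source, external file format and table, then CREATE TABLE AS SELECT) comes from the question.

CREATE DATABASE SCOPED CREDENTIAL AdlsCredential
WITH IDENTITY = 'Managed Service Identity';

CREATE EXTERNAL DATA SOURCE DataLakeGen2
WITH (
    TYPE = HADOOP,
    LOCATION = 'abfss://data@contosodatalake.dfs.core.windows.net',
    CREDENTIAL = AdlsCredential
);

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE ext.SalesRaw (
    SaleId   INT,
    SaleDate DATE,
    Amount   DECIMAL(18, 2)
)
WITH (LOCATION = '/sales/', DATA_SOURCE = DataLakeGen2, FILE_FORMAT = ParquetFormat);

-- CREATE TABLE AS SELECT performs the actual parallel load into the warehouse.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM ext.SalesRaw;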

Question 15

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named Sales in an Azure Cosmos DB database. Sales has 120 GB of data. Each entry in Sales has the following structure.
 Image
The partition key is set to the OrderId attribute.
Users report that when they perform queries that retrieve data by ProductName, the queries take longer than expected to complete.
You need to reduce the amount of time it takes to execute the problematic queries.
Solution: You create a lookup collection that uses ProductName as a partition key.
Does this meet the goal?

A. Yes

B. No

 


Suggested Answer: B

One option is to have a lookup collection "ProductName" for the mapping of "ProductName" to "OrderId".
References:
https://azure.microsoft.com/sv-se/blog/azure-cosmos-db-partitioning-design-patterns-part-1/

Question 16

HOTSPOT -
You need to ensure that Azure Data Factory pipelines can be deployed. How should you configure authentication and authorization for deployments? To answer, select the appropriate options in the answer choices.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

The way you control access to resources using RBAC is to create role assignments. This is a key concept to understand: it's how permissions are enforced. A role assignment consists of three elements: security principal, role definition, and scope.
Scenario:
No credentials or secrets should be used during deployments
Phone-based poll data must only be uploaded by authorized users from authorized devices
Contractors must not have access to any polling data other than their own
Access to polling data must be set on a per-Active Directory user basis
References:
https://docs.microsoft.com/en-us/azure/role-based-access-control/overview

Question 17

DRAG DROP -
You are implementing an Azure Blob storage account for an application that has the following requirements:
✑ Data created during the last 12 months must be readily accessible.
✑ Blobs older than 24 months must use the lowest storage costs. This data will be accessed infrequently.
✑ Data created 12 to 24 months ago will be accessed infrequently but must be readily accessible at the lowest storage costs.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Create a block blob in a Blob storage account
First create the block blob.
Azure Blob storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage accounts.
Step 2: Use an Azure Resource Manager template that has a lifecycle management policy
Step 3: Create a rule that has the rule actions of TierToCool and TierToArchive
Each rule definition includes a filter set and an action set. The filter set limits rule actions to a certain set of objects within a container or to object names.
Note: You can add a Rule through Azure portal:
Sign in to the Azure portal.
1. In the Azure portal, search for and select your storage account.
2. Under Blob service, select Lifecycle Management to view or change your rules.
3. Select the List View tab.
4. Select Add a rule and name your rule on the Details form. You can also set the Rule scope, Blob type, and Blob subtype values.
5. Select Base blobs to set the conditions for your rule. For example, blobs are moved to cool storage if they haven’t been modified for 30 days.
6. Etc.
Incorrect Answers:
✑ Schedule the lifecycle management policy to run:
You don't schedule the lifecycle management policy to run. The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours for some actions to run for the first time.
✑ Create a rule filter:
No need for a rule filter. Rule filters limit rule actions to a subset of blobs within the storage account.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts

Question 18

You implement an enterprise data warehouse in Azure Synapse Analytics.
You have a large fact table that is 10 terabytes (TB) in size.
Incoming queries use the primary key Sale Key column to retrieve data as displayed in the following table:
 Image
You need to distribute the large fact table across multiple nodes to optimize performance of the table.
Which technology should you use?

A. hash distributed table with clustered ColumnStore index

B. hash distributed table with clustered index

C. heap table with distribution replicate

D. round robin distributed table with clustered index

E. round robin distributed table with clustered ColumnStore index

 


Suggested Answer: A

Hash-distributed tables improve query performance on large fact tables.
Columnstore indexes can achieve up to 100x better performance on analytics and data warehousing workloads and up to 10x better data compression than traditional rowstore indexes.
Incorrect Answers:
D, E: Round-robin tables are useful for improving loading speed.
Reference:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-distribute
https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-query-performance
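As an illustration, the fact table could be declared as in the sketch below; only Sale Key comes from the scenario, and the remaining columns are hypothetical.

CREATE TABLE dbo.FactSales (
    SaleKey     BIGINT          NOT NULL,
    ProductKey  INT             NOT NULL,
    CustomerKey INT             NOT NULL,
    SaleAmount  DECIMAL(18, 2)  NOT NULL
)
WITH (
    DISTRIBUTION = HASH(SaleKey),   -- co-locates rows with the same Sale Key on one distribution
    CLUSTERED COLUMNSTORE INDEX     -- columnstore for analytic scans and compression
);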

Question 19

You use Azure Stream Analytics to receive Twitter data from Azure Event Hubs and to output the data to an Azure Blob storage account.
You need to output the count of tweets during the last five minutes every five minutes. Each tweet must only be counted once.
Which windowing function should you use?

A. a five-minute Sliding window

B. a five-minute Session window

C. a five-minute Tumbling window

D. a five-minute Hopping window that has a one-minute hop

 


Suggested Answer: C

Tumbling window functions are used to segment a data stream into distinct time segments and perform a function against them, such as the example below. The key differentiators of a Tumbling window are that they repeat, do not overlap, and an event cannot belong to more than one tumbling window.
Reference Image
Incorrect Answers:
D: Hopping window functions hop forward in time by a fixed period. It may be easy to think of them as Tumbling windows that can overlap, so events can belong to more than one Hopping window result set. To make a Hopping window the same as a Tumbling window, specify the hop size to be the same as the window size.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions
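A minimal sketch of such a query is shown below; the TwitterEventHub input and BlobOutput output aliases are illustrative. Because tumbling windows do not overlap, each tweet lands in exactly one five-minute window and is counted once.

SELECT
    COUNT(*) AS TweetCount,
    System.Timestamp() AS WindowEnd
INTO
    BlobOutput
FROM
    TwitterEventHub
GROUP BY
    TumblingWindow(minute, 5)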

Question 20

You have a data warehouse in Azure Synapse Analytics.
You need to ensure that the data in the data warehouse is encrypted at rest.
What should you enable?

A. Transparent Data Encryption (TDE)

B. Secure transfer required

C. Always Encrypted for all columns

D. Advanced Data Security for this database

 


Suggested Answer: A

Azure SQL Database currently supports encryption at rest for Microsoft-managed service side and client-side encryption scenarios.
✑ Support for server encryption is currently provided through the SQL feature called Transparent Data Encryption.
✑ Client-side encryption of Azure SQL Database data is supported through the Always Encrypted feature.
Reference:
https://docs.microsoft.com/en-us/azure/security/fundamentals/encryption-atrest
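TDE can be enabled from the portal or with T-SQL. A minimal sketch, assuming a hypothetical data warehouse named ContosoDW, follows; the DMV query simply reports the encryption state on a dedicated SQL pool.

ALTER DATABASE [ContosoDW] SET ENCRYPTION ON;

SELECT * FROM sys.dm_pdw_nodes_database_encryption_keys;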

Question 21

DRAG DROP -
You have an Azure subscription that contains an Azure Databricks environment and an Azure Storage account.
You need to implement secure communication between Databricks and the storage account.
You create an Azure key vault.
Which four actions should you perform in sequence? To answer, move the actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Mount the storage account
Step 2: Retrieve an access key from the storage account.
Step 3: Add a secret to the key vault.
Step 4: Add a secret scope to the Databricks environment.
Managing secrets begins with creating a secret scope.
To reference secrets stored in an Azure Key Vault, you can create a secret scope backed by Azure Key Vault.
References:
https://docs.microsoft.com/en-us/azure/azure-databricks/store-secrets-azure-key-vault

Question 22

Note: This question is a part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You develop a data ingestion process that will import data to an enterprise data warehouse in Azure Synapse Analytics. The data to be ingested resides in parquet files stored in an Azure Data Lake Gen 2 storage account.
You need to load the data from the Azure Data Lake Gen 2 storage account into the Data Warehouse.
Solution:
1. Create an external data source pointing to the Azure storage account
2. Create a workload group using the Azure storage account name as the pool name
3. Load the data using the INSERT…SELECT statement
Does the solution meet the goal?

A. Yes

B. No

 


Suggested Answer: B

You need to create an external file format and external table using the external data source.
You then load the data using the CREATE TABLE AS SELECT statement.
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store

Question 23

A company plans to use Azure Storage for file storage purposes. Compliance rules require:
✑ A single storage account to store all operations including reads, writes and deletes
✑ Retention of an on-premises copy of historical operations
You need to configure the storage account.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Configure the storage account to log read, write and delete operations for service type Blob

B. Use the AzCopy tool to download log data from $logs/blob

C. Configure the storage account to log read, write and delete operations for service-type table

D. Use the storage client to download log data from $logs/table

E. Configure the storage account to log read, write and delete operations for service type queue

 


Suggested Answer: AB

Storage Logging logs request data in a set of blobs in a blob container named $logs in your storage account. This container does not show up if you list all the blob containers in your account but you can see its contents if you access it directly.
To view and analyze your log data, you should download the blobs that contain the log data you are interested in to a local machine. Many storage-browsing tools enable you to download blobs from your storage account; you can also use the Azure Storage team provided command-line Azure Copy Tool (AzCopy) to download your log data.
References:
https://docs.microsoft.com/en-us/rest/api/storageservices/enabling-storage-logging-and-accessing-log-data

Question 24

HOTSPOT -
You have an Azure Stream Analytics job named ASA1.
The Diagnostic settings for ASA1 are configured to write errors to Log Analytics.
ASA1 reports an error, and the following message is sent to Log Analytics.
 Image
You need to write a Kusto query language query to identify all instances of the error and return the message field.
How should you complete the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: DataErrorType –
The DataErrorType is InputDeserializerError.InvalidData.
Box 2: Message –
Retrieve the message.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/data-errors

Question 25

SIMULATION -
 Image
Use the following login credentials as needed:
Azure Username: xxxxx -
Azure Password: xxxxx -
The following information is for technical support purposes only:
Lab Instance: 10277521 -
You need to replicate db1 to a new Azure SQL server named REPL10277521 in the Central Canada region.
To complete this task, sign in to the Azure portal.
NOTE: This task might take several minutes to complete. You can perform other tasks while the task completes or end this section of the exam.

 


Suggested Answer: See the explanation below.

1. In the Azure portal, browse to the database that you want to set up for geo-replication.
2. On the SQL database page, select geo-replication, and then select the region to create the secondary database.
Reference Image
3. Select or configure the server for the secondary database.
Region: Central Canada –
Target server: REPL10277521 –
Reference Image
4. Click Create to add the secondary.
5. The secondary database is created and the seeding process begins.
Reference Image
6. When the seeding process is complete, the secondary database displays its status.
Reference Image
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-active-geo-replication-portal

Question 26

DRAG DROP -
You are developing the data platform for a global retail company. The company operates during normal working hours in each region. The analytical database is used once a week for building sales projections.
Each region maintains its own private virtual network.
Building the sales projections is very resource intensive and generates upwards of 20 terabytes (TB) of data.
Microsoft Azure SQL Databases must be provisioned.
✑ Database provisioning must maximize performance and minimize cost
✑ The daily sales for each region must be stored in an Azure SQL Database instance
✑ Once a day, the data for all regions must be loaded in an analytical Azure SQL Database instance
You need to provision Azure SQL database instances.
How should you provision the database instances? To answer, drag the appropriate Azure SQL products to the correct databases. Each Azure SQL product may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: Azure SQL Database elastic pools
SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure
SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
Box 2: Azure SQL Database Hyperscale
A Hyperscale database is an Azure SQL database in the Hyperscale service tier that is backed by the Hyperscale scale-out storage technology. A Hyperscale database supports up to 100 TB of data and provides high throughput and performance, as well as rapid scaling to adapt to the workload requirements. Scaling is transparent to the application: connectivity, query processing, and so on work like any other SQL database.
Incorrect Answers:
Azure SQL Database Managed Instance: The managed instance deployment model is designed for customers looking to migrate a large number of apps from on-premises or IaaS, self-built, or ISV-provided environments to a fully managed PaaS cloud environment, with as low migration effort as possible.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tier-hyperscale-faq

Question 27

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will contain the following three workloads:
✑ A workload for data engineers who will use Python and SQL
✑ A workload for jobs that will run notebooks that use Python, Scala, and SQL
✑ A workload that data scientists will use to perform ad hoc analysis in Scala and R
The enterprise architecture team at your company identifies the following standards for Databricks environments:
✑ The data engineers must share a cluster.
✑ The job cluster will be managed by using a request process whereby data scientists and data engineers provide packaged notebooks for deployment to the cluster.
✑ All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity. Currently, there are three data scientists.
You need to create the Databricks clusters for the workloads.
Solution: You create a Standard cluster for each data scientist, a High Concurrency cluster for the data engineers, and a Standard cluster for the jobs.
Does this meet the goal?

A. Yes

B. No

 


Suggested Answer: B

We would need a High Concurrency cluster for the jobs.
Note:
Standard clusters are recommended for a single user. Standard can run workloads developed in any language: Python, R, Scala, and SQL.
A high concurrency cluster is a managed cloud resource. The key benefits of high concurrency clusters are that they provide Apache Spark-native fine-grained sharing for maximum resource utilization and minimum query latencies.
References:
https://docs.azuredatabricks.net/clusters/configure.html

Question 28

HOTSPOT -
You need to receive an alert when Azure Synapse Analytics consumes the maximum allotted resources.
Which resource type and signal should you use to create the alert in Azure Monitor? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Resource type: SQL data warehouse
DWU limit belongs to the SQL data warehouse resource type.
Signal: DWU limit –
SQL Data Warehouse capacity limits are maximum values allowed for various components of Azure SQL Data Warehouse.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-insights-alerts-portal

Question 29

What should you implement to optimize SQL Database for Race Central to meet the technical requirements?

A. the sp_update_stats stored procedure

B. automatic tuning

C. Query Store

D. the dbcc checkdb command

 


Suggested Answer: A

Scenario: The query performance of Race Central must be stable, and the administrative time it takes to perform optimizations must be minimized. sp_updatestats updates query optimization statistics on a table or indexed view. By default, the query optimizer already updates statistics as necessary to improve the query plan; in some cases you can improve query performance by using UPDATE STATISTICS or the stored procedure sp_updatestats to update statistics more frequently than the default updates.
Incorrect Answers:
D: dbcc checkdb checks the logical and physical integrity of all the objects in the specified database
References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-updatestats-transact-sql?view=sql-server-ver15
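For illustration, both statistics-update forms mentioned above can be run as in the sketch below; the table name is hypothetical.

EXEC sp_updatestats;  -- refresh out-of-date statistics across the database

UPDATE STATISTICS dbo.RaceResults WITH FULLSCAN;  -- target a single table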

Question 30

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to configure data encryption for external applications.
Solution:
1. Access the Always Encrypted Wizard in SQL Server Management Studio
2. Select the column to be encrypted
3. Set the encryption type to Deterministic
4. Configure the master key to use the Windows Certificate Store
5. Validate configuration results and deploy the solution
Does the solution meet the goal?

A. Yes

B. No

 


Suggested Answer: B

Use the Azure Key Vault, not the Windows Certificate Store, to store the master key as it must be used by external applications.
Note: The Master Key Configuration page is where you set up your CMK (Column Master Key) and select the key store provider where the CMK will be stored.
Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM).
Reference Image
However, if you use the Windows Certificate Store for external applications to use the key, the external application must run on the same computer where you ran the Always Encrypted wizard, or you must deploy the Always Encrypted certificates to the computer running the external application.
Reference:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault
https://docs.microsoft.com/en-us/azure/azure-sql/database/always-encrypted-certificate-store-configure

Question 31

You are monitoring the Data Factory pipeline that runs from Cosmos DB to SQL Database for Race Central.
You discover that the job takes 45 minutes to run.
What should you do to improve the performance of the job?

A. Decrease parallelism for the copy activities.

B. Increase the data integration units.

C. Configure the copy activities to use staged copy.

D. Configure the copy activities to perform compression.

 


Suggested Answer: B

Performance tuning tips and optimization features. In some cases, when you run a copy activity in Azure Data Factory, you see a “Performance tuning tips” message on top of the copy activity monitoring, as shown in the following example. The message tells you the bottleneck that was identified for the given copy run.
It also guides you on what to change to boost copy throughput. The performance tuning tips currently provide suggestions like:
✑ Use PolyBase when you copy data into Azure SQL Data Warehouse.
✑ Increase Azure Cosmos DB Request Units or Azure SQL Database DTUs (Database Throughput Units) when the resource on the data store side is the bottleneck.
✑ Remove the unnecessary staged copy.
References:
https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance

Question 32

Your company uses several Azure HDInsight clusters.
The data engineering team reports several errors with some applications using these clusters.
You need to recommend a solution to review the health of the clusters.
What should you include in your recommendation?

A. Azure Automation

B. Log Analytics

C. Application Insights

 


Suggested Answer: B

Azure Monitor logs integration. Azure Monitor logs enables data generated by multiple resources, such as HDInsight clusters, to be collected and aggregated in one place to achieve a unified monitoring experience.
As a prerequisite, you will need a Log Analytics Workspace to store the collected data. If you have not already created one, you can follow the instructions for creating a Log Analytics Workspace.
You can then easily configure an HDInsight cluster to send many workload-specific metrics to Log Analytics.
References:
https://azure.microsoft.com/sv-se/blog/monitoring-on-azure-hdinsight-part-2-cluster-health-and-availability/

Question 33

You need to develop a pipeline for processing data. The pipeline must meet the following requirements:
✑ Scale up and down resources for cost reduction
✑ Use an in-memory data processing engine to speed up ETL and machine learning operations.
✑ Use streaming capabilities
✑ Provide the ability to code in SQL, Python, Scala, and R
✑ Integrate workspace collaboration with Git
 Image
What should you use?

A. HDInsight Spark Cluster

B. Azure Stream Analytics

C. HDInsight Hadoop Cluster

D. Azure SQL Data Warehouse

E. HDInsight Kafka Cluster

F. HDInsight Storm Cluster

 


Suggested Answer: A

Apache Spark is an open-source, parallel-processing framework that supports in-memory processing to boost the performance of big-data analysis applications.
HDInsight is a managed Hadoop service. Use it to deploy and manage Hadoop clusters in Azure. For batch processing, you can use Spark, Hive, Hive LLAP, MapReduce.
Languages: R, Python, Java, Scala, SQL
You can create an HDInsight Spark cluster using an Azure Resource Manager template. The template can be found in GitHub.
References:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/batch-processing

Question 34

DRAG DROP -
A company uses Microsoft Azure SQL Database to store sensitive company data. You encrypt the data and only allow access to specified users from specified locations.
You must monitor data usage, and data copied from the system to prevent data leakage.
You need to configure Azure SQL Database to email a specific user when data leakage occurs.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Enable advanced threat protection
Set up threat detection for your database in the Azure portal
1. Launch the Azure portal at https://portal.azure.com.
2. Navigate to the configuration page of the Azure SQL Database server you want to protect. In the security settings, select Advanced Data Security.
3. On the Advanced Data Security configuration page:
Enable advanced data security on the server.
In Threat Detection Settings, in the Send alerts to text box, provide the list of emails to receive security alerts upon detection of anomalous database activities.
Reference Image
Step 2: Configure the service to send email alerts to
security@contoso.team
Step 3:..of type data exfiltration
The benefits of Advanced Threat Protection for Azure Storage include:
Detection of anomalous access and data exfiltration activities.
Security alerts are triggered when anomalies in activity occur: access from an unusual location, anonymous access, access by an unusual application, data exfiltration, unexpected delete operations, access permission change, and so on.
Admins can view these alerts via Azure Security Center and can also choose to be notified of each of them via email.
References:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-threat-detection
https://www.helpnetsecurity.com/2019/04/04/microsoft-azure-security/

Question 35

HOTSPOT -
You are designing a new Lambda architecture on Microsoft Azure.
The real-time processing layer must meet the following requirements:
Ingestion:
✑ Receive millions of events per second
✑ Act as a fully managed Platform-as-a-Service (PaaS) solution
✑ Integrate with Azure Functions
Stream processing:
✑ Process on a per-job basis
✑ Provide seamless connectivity with Azure services
✑ Use a SQL-based query language
Analytical data store:
✑ Act as a managed service
✑ Use a document store
✑ Provide data encryption at rest
You need to identify the correct technologies to build the Lambda architecture using minimal effort. Which technologies should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: Azure Event Hubs –
This portion of a streaming architecture is often referred to as stream buffering. Options include Azure Event Hubs, Azure IoT Hub, and Kafka.
Incorrect Answers: Not HDInsight Kafka
Azure Functions need a trigger defined in order to run. There is a limited set of supported trigger types, and Kafka is not one of them.
Box 2: Azure Stream Analytics –
Azure Stream Analytics provides a managed stream processing service based on perpetually running SQL queries that operate on unbounded streams.
You can also use open source Apache streaming technologies like Storm and Spark Streaming in an HDInsight cluster.
Box 3: Azure Synapse Analytics –
Azure Synapse Analytics provides a managed service for large-scale, cloud-based data warehousing. HDInsight supports Interactive Hive, HBase, and Spark SQL, which can also be used to serve data for analysis.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/big-data/

Question 36

You have an enterprise data warehouse in Azure Synapse Analytics.
Using PolyBase, you create an external table named [Ext].[Items] to query Parquet files stored in Azure Data Lake Storage Gen2 without importing the data to the data warehouse.
The external table has three columns.
You discover that the Parquet files have a fourth column named ItemID.
Which command should you run to add the ItemID column to the external table?
 Image

A. Option A

B. Option B

C. Option C

D. Option D

 


Suggested Answer: A

Incorrect Answers:
B, D: Only these Data Definition Language (DDL) statements are allowed on external tables:
✑ CREATE TABLE and DROP TABLE
✑ CREATE STATISTICS and DROP STATISTICS
✑ CREATE VIEW and DROP VIEW
Reference:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-table-transact-sql
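Because ALTER is not in that list, adding ItemID in practice means dropping and recreating the external table definition with the fourth column, roughly as in the sketch below; the data source, file format, and other column names are illustrative, not taken from the exhibit.

DROP EXTERNAL TABLE [Ext].[Items];

CREATE EXTERNAL TABLE [Ext].[Items] (
    ItemName     NVARCHAR(100),
    ItemCategory NVARCHAR(50),
    ItemPrice    DECIMAL(18, 2),
    ItemID       INT              -- the newly discovered Parquet column
)
WITH (
    LOCATION = '/items/',
    DATA_SOURCE = DataLakeGen2Source,
    FILE_FORMAT = ParquetFormat
);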

Question 37

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing a solution that will use Azure Stream Analytics. The solution will accept an Azure Blob storage file named Customers. The file will contain both in-store and online customer details. The online customers will provide a mailing address.
You have a file in Blob storage named LocationIncomes that contains median incomes based on location. The file rarely changes.
You need to use an address to look up a median income based on location. You must output the data to Azure SQL Database for immediate use and to Azure
Data Lake Storage Gen2 for long-term retention.
Solution: You implement a Stream Analytics job that has one streaming input, one reference input, two queries, and four outputs.
Does this meet the goal?

A. Yes

B. No

 


Suggested Answer: A

We need one reference data input for LocationIncomes, which rarely changes.
We need two queries, one for in-store customers and one for online customers.
For each query, two outputs are needed.
Note: Stream Analytics also supports an input type known as reference data. Reference data is either completely static or changes slowly.
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs#stream-and-reference-inputs
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-outputs
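A rough sketch of how the single job could be written is shown below, assuming customers is the streaming input, locationincomes is the reference input, and the four output aliases map to the two SQL Database and two Data Lake Storage Gen2 outputs. All names, the PostalCode join key, and the columns are hypothetical.

-- Sketch only: two logical queries defined as named steps, each written to two outputs.
WITH OnlineCustomers AS
(
    SELECT c.CustomerId, c.MailingAddress, li.MedianIncome
    FROM customers c
    JOIN locationincomes li ON c.PostalCode = li.PostalCode
    WHERE c.MailingAddress IS NOT NULL
),
InStoreCustomers AS
(
    SELECT c.CustomerId, c.StoreId
    FROM customers c
    WHERE c.MailingAddress IS NULL
)
SELECT * INTO sqloutputonline   FROM OnlineCustomers
SELECT * INTO lakeoutputonline  FROM OnlineCustomers
SELECT * INTO sqloutputinstore  FROM InStoreCustomers
SELECT * INTO lakeoutputinstore FROM InStoreCustomers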

Question 38

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will contain the following three workloads:
✑ A workload for data engineers who will use Python and SQL
✑ A workload for jobs that will run notebooks that use Python, Scala, and SQL
✑ A workload that data scientists will use to perform ad hoc analysis in Scala and R
The enterprise architecture team at your company identifies the following standards for Databricks environments:
✑ The data engineers must share a cluster.
✑ The job cluster will be managed by using a request process whereby data scientists and data engineers provide packaged notebooks for deployment to the cluster.
✑ All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity. Currently, there are three data scientists.
You need to create the Databricks clusters for the workloads.
Solution: You create a High Concurrency cluster for each data scientist, a High Concurrency cluster for the data engineers, and a Standard cluster for the jobs.
Does this meet the goal?

A. Yes

B. No

 


Suggested Answer: B

No need for a High Concurrency cluster for each data scientist.
Standard clusters are recommended for a single user. Standard can run workloads developed in any language: Python, R, Scala, and SQL.
A high concurrency cluster is a managed cloud resource. The key benefits of high concurrency clusters are that they provide Apache Spark-native fine-grained sharing for maximum resource utilization and minimum query latencies.
References:
https://docs.azuredatabricks.net/clusters/configure.html

Question 39

You need to deploy a Microsoft Azure Stream Analytics job for an IoT-based solution. The solution must minimize latency. The solution must also minimize the bandwidth usage between the job and the IoT device.
Which of the following actions must you perform to meet these requirements? (Choose four.)

A. Ensure to configure routes

B. Create an Azure Blob storage container

C. Configure Streaming Units

D. Create an IoT Hub and add the Azure Stream Analytics modules to the IoT Hub namespace

E. Create an Azure Stream Analytics edge job and configure job definition save location

F. Create an Azure Stream Analytics cloud job and configure job definition save location

 


Suggested Answer: ABDF

There is an article in the Microsoft documentation on configuring Azure Stream Analytics on IoT Edge devices.
You need to have a storage container for the job definition:
Installation instructions –
The high-level steps are described in the following table. More details are given in the following sections.
Reference Image
You also need to create the cloud part of the job definition:
Reference Image
You also need to set the modules for your IoT Edge device:
Deploy ASA on your IoT Edge device(s)
Add ASA to your deployment –
• In the Azure portal, open IoT Hub, navigate to IoT Edge and click on the device you want to target for this deployment.
• Select Set modules, then select + Add and choose Azure Stream Analytics Module.
• Select the subscription and the ASA Edge job that you created. Click Save.
Reference Image
You also need to configure the Routes:
Configure routes –
IoT Edge provides a way to declaratively route messages between modules, and between modules and IoT Hub. The full syntax is described here. Names of the inputs and outputs created in the ASA job can be used as endpoints for routing.
Since this is clear from the Microsoft documentation, all other options are incorrect.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge

Question 40

HOTSPOT -
A company plans to develop solutions to perform batch processing of multiple sets of geospatial data.
You need to implement the solutions.
Which Azure services should you use? To answer, select the appropriate configuration in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: HDInsight Tools for Visual Studio
Azure HDInsight Tools for Visual Studio Code is an extension in the Visual Studio Code Marketplace for developing Hive Interactive Query, Hive Batch Job and
PySpark Job against Microsoft HDInsight.
Box 2: Hive View –
You can use Apache Ambari Hive View with Apache Hadoop in HDInsight. The Hive View allows you to author, optimize, and run Hive queries from your web browser.
Box 3: HDInsight REST API –
Azure HDInsight REST APIs are used to create and manage HDInsight resources through Azure Resource Manager.
References:
https://visualstudiomagazine.com/articles/2019/01/25/vscode-hdinsight.aspx
https://docs.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-use-hive-ambari-view
https://docs.microsoft.com/en-us/rest/api/hdinsight/

Question 41

You have an Azure data factory.
You need to examine the pipeline failures from the last 60 days.
What should you use?

A. the Activity log blade for the Data Factory resource

B. Azure Monitor

C. the Monitor & Manage app in Data Factory

D. the Resource health blade for the Data Factory resource

 


Suggested Answer: B

Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you want to keep that data for a longer time. With Monitor, you can route diagnostic logs for analysis to multiple different targets.
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor

Question 42

You have an alert on a SQL pool in Azure Synapse that uses the signal logic shown in the exhibit.
 Image
On the same day, failures occur at the following times:
✑ 08:01
✑ 08:03
✑ 08:04
✑ 08:06
✑ 08:11
✑ 08:16
✑ 08:19
The evaluation period starts on the hour.
At which times will alert notifications be sent?

A. 08:15 only

B. 08:10, 08:15, and 08:20

C. 08:05 and 08:10 only

D. 08:10 only

E. 08:05 only

 


Suggested Answer: B

Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal

Question 43

A company is designing a hybrid solution to synchronize data from an on-premises Microsoft SQL Server database to Azure SQL Database.
You must perform an assessment of databases to determine whether data will move without compatibility issues. You need to perform the assessment.
Which tool should you use?

A. SQL Server Migration Assistant (SSMA)

B. Microsoft Assessment and Planning Toolkit

C. SQL Vulnerability Assessment (VA)

D. Azure SQL Data Sync

E. Data Migration Assistant (DMA)

 


Suggested Answer: E

The Data Migration Assistant (DMA) helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database. DMA recommends performance and reliability improvements for your target environment and allows you to move your schema, data, and uncontained objects from your source server to your target server.
References:
https://docs.microsoft.com/en-us/sql/dma/dma-overview

Question 44

You have a SQL pool in Azure Synapse.
A user reports that queries against the pool take longer than expected to complete.
You need to add monitoring to the underlying storage to help diagnose the issue.
Which two metrics should you monitor? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Cache used percentage

B. DWU Limit

C. Snapshot Storage Size

D. Active queries

E. Cache hit percentage

 


Suggested Answer: AE

A: Cache used is the sum of all bytes in the local SSD cache across all nodes, and cache capacity is the sum of the storage capacity of the local SSD cache across all nodes.
E: Cache hits is the sum of all columnstore segment hits in the local SSD cache, and cache misses is the sum of all columnstore segment misses in the local SSD cache across all nodes.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity
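To complement the storage metrics, you can also check which statements are actually slow from inside the SQL pool. The sketch below uses the sys.dm_pdw_exec_requests dynamic management view available in dedicated SQL pools; the TOP 20 cutoff is arbitrary.

-- Sketch: list requests ordered by elapsed time (milliseconds) to spot slow queries.
SELECT TOP 20
    request_id,
    status,
    total_elapsed_time,
    command
FROM sys.dm_pdw_exec_requests
ORDER BY total_elapsed_time DESC;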

Question 45

You need to ensure that phone-based polling data can be analyzed in the PollingData database.
How should you configure Azure Data Factory?

A. Use a tumbling schedule trigger

B. Use an event-based trigger

C. Use a schedule trigger

D. Use manual execution

 


Suggested Answer: C

When creating a schedule trigger, you specify a schedule (start date, recurrence, end date etc.) for the trigger, and associate with a Data Factory pipeline.
Scenario:
✑ All data migration processes must use Azure Data Factory
✑ All data migrations must run automatically during non-business hours
References:
https://docs.microsoft.com/en-us/azure/data-factory/how-to-create-schedule-trigger

Question 46

SIMULATION -
 Image
Use the following login credentials as needed:
Azure Username: xxxxx -
Azure Password: xxxxx -
The following information is for technical support purposes only:
Lab Instance: 10277521 -
You plan to generate large amounts of real-time data that will be copied to Azure Blob storage.
You plan to create reports that will read the data from an Azure Cosmos DB database.
You need to create an Azure Stream Analytics job that will input the data from a blob storage named storage10277521 to the Cosmos DB database.
To complete this task, sign in to the Azure portal.

 


Suggested Answer: See the explanation below.

Step 1: Create a Stream Analytics job
1. Sign in to the Azure portal.
2. Select Create a resource in the upper left-hand corner of the Azure portal.
3. Select Analytics > Stream Analytics job from the results list.
4. Fill out the Stream Analytics job page.
Reference Image
5. Check the Pin to dashboard box to place your job on your dashboard and then select Create.
6. You should see a Deployment in progress… notification displayed in the top right of your browser window.
Step 2: Configure job input –
1. Navigate to your Stream Analytics job.
2. Select Inputs > Add Stream input > Azure Blob storage
Reference Image
3. In the Azure Blob storage settings, choose storage10277521. Leave the other options at their default values and select Save to save the settings.
Step 3: Configure job output –
1. Navigate to your Stream Analytics job and select Outputs > Add > Cosmos DB.
2. Select the target Azure Cosmos DB account, database, and container, and then select Save.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-quick-create-portal
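Once the blob input and a Cosmos DB output are configured, the job query itself can be a simple pass-through such as the sketch below; blobinput and cosmosdboutput are hypothetical aliases you would assign when creating the input and output.

-- Pass-through sketch: copy every event from the blob input to the Cosmos DB output.
SELECT *
INTO cosmosdboutput
FROM blobinput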

Question 47

You develop data engineering solutions for a company.
You need to ingest and visualize real-time Twitter data by using Microsoft Azure.
Which three technologies should you use? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Event Grid topic

B. Azure Stream Analytics Job that queries Twitter data from an Event Hub

C. Azure Stream Analytics Job that queries Twitter data from an Event Grid

D. Logic App that sends Twitter posts which have target keywords to Azure

E. Event Grid subscription

F. Event Hub instance

 


Suggested Answer: BDF

You can use Azure Logic Apps to send tweets to an event hub and then use a Stream Analytics job to read from the event hub and send them to Power BI.
References:
https://community.powerbi.com/t5/Integrations-with-Files-and/Twitter-streaming-analytics-step-by-step/td-p/9594

Question 48

DRAG DROP -
A company builds an application to allow developers to share and compare code. The conversations, code snippets, and links shared by people in the application are stored in a Microsoft Azure SQL Database instance. The application allows for searches of historical conversations and code snippets.
When users share code snippets, the code snippet is compared against previously shared code snippets by using a combination of Transact-SQL functions including SUBSTRING, FIRST_VALUE, and SQRT. If a match is found, a link to the match is added to the conversation.
Customers report the following issues:
✑ Delays occur during live conversations
✑ A delay occurs before matching links appear after code snippets are added to conversations
You need to resolve the performance issues.
Which technologies should you use? To answer, drag the appropriate technologies to the correct issues. Each technology may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: memory-optimized table –
In-Memory OLTP can provide great performance benefits for transaction processing, data ingestion, and transient data scenarios.
Box 2: materialized view –
To support efficient querying, a common solution is to generate, in advance, a view that materializes the data in a format suited to the required results set. The
Materialized View pattern describes generating prepopulated views of data in environments where the source data isn’t in a suitable format for querying, where generating a suitable query is difficult, or where query performance is poor due to the nature of the data or the data store.
These materialized views, which only contain data required by a query, allow applications to quickly obtain the information they need. In addition to joining tables or combining data entities, materialized views can include the current values of calculated columns or data items, the results of combining values or executing transformations on the data items, and values specified as part of the query. A materialized view can even be optimized for just a single query.
References:
https://docs.microsoft.com/en-us/azure/architecture/patterns/materialized-view
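For the live-conversation delay, a minimal T-SQL sketch of a memory-optimized table is shown below. The table and column names are hypothetical, and In-Memory OLTP in Azure SQL Database requires a Premium or Business Critical service tier.

-- Sketch only: hypothetical memory-optimized table for conversation messages.
CREATE TABLE dbo.ConversationMessages
(
    MessageId      BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    ConversationId INT NOT NULL INDEX ix_ConversationId NONCLUSTERED,
    MessageText    NVARCHAR(4000) NOT NULL,
    PostedAt       DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);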

Question 49

HOTSPOT -
You have an enterprise data warehouse in Azure Synapse Analytics that contains a table named FactOnlineSales. The table contains data from the start of 2009 to the end of 2012.
You need to improve the performance of queries against FactOnlineSales by using table partitions. The solution must meet the following requirements:
✑ Create four partitions based on the order date.
✑ Ensure that each partition contains all the orders placed during a given calendar year.
How should you complete the T-SQL command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
 Image

 


Suggested Answer:
Correct Answer Image

Box 1: LEFT –
RANGE LEFT: Specifies the boundary value belongs to the partition on the left (lower values). The default is LEFT.
Box 2: 20090101, 20100101, 20110101, 20120101
FOR VALUES ( boundary_value [,…n] ) specifies the boundary values for the partition. boundary_value is a constant expression.
Reference:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-azure-sql-data-warehouse
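Putting the two selections together, the complete statement would look roughly like the sketch below. The column list, distribution column, and index choice are hypothetical; the boundary values are the ones given in the suggested answer.

-- Sketch only: partitioned fact table using the boundary values from the suggested answer.
CREATE TABLE dbo.FactOnlineSales
(
    OnlineSalesKey INT   NOT NULL,
    OrderDateKey   INT   NOT NULL,
    SalesAmount    MONEY NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(OnlineSalesKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION
    (
        OrderDateKey RANGE LEFT FOR VALUES (20090101, 20100101, 20110101, 20120101)
    )
);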

Question 50

DRAG DROP -
You develop data engineering solutions for a company.
You need to deploy a Microsoft Azure Stream Analytics job for an IoT solution. The solution must:
✑ Minimize latency.
✑ Minimize bandwidth usage between the job and IoT device.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
 Image

 


Suggested Answer:
Correct Answer Image

Step 1: Create an Azure Blob Storage container
To prepare your Stream Analytics job to be deployed on an IoT Edge device, you need to associate the job with a container in a storage account. When you go to deploy your job, the job definition is exported to the storage container.
Step 2: Create an Azure Stream Analytics edge job and configure job definition save location
When you create an Azure Stream Analytics job to run on an IoT Edge device, it needs to be stored in a way that can be called from the device.
Step 3: Create an IoT hub and add the Azure Stream Analytics module to the IoT Hub namespace
An IoT Hub in Azure is required.
Stream Analytics accepts data incoming from several kinds of event sources including Event Hubs, IoT Hub, and Blob storage.
Step 4: Configure routes –
You are now ready to deploy the Azure Stream Analytics job on your IoT Edge device.
The routes that you declare define the flow of data through the IoT Edge device.
Reference:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs
https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-deploy-stream-analytics
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge

Access Full DP-200 Mock Test Free

Want a full-length mock test experience? Click here to unlock the complete DP-200 Mock Test Free set and get access to hundreds of additional practice questions covering all key topics.

We regularly update our question sets to stay aligned with the latest exam objectives—so check back often for fresh content!

Start practicing with our DP-200 mock test free today—and take a major step toward exam success!
