Practice Test Free
DBS-C01 Practice Test Free

DBS-C01 Practice Test Free – 50 Real Exam Questions to Boost Your Confidence

Preparing for the DBS-C01 exam? Start with our DBS-C01 Practice Test Free – a set of 50 high-quality, exam-style questions crafted to help you assess your knowledge and improve your chances of passing on the first try.

Taking a free DBS-C01 practice test is one of the smartest ways to:

  • Get familiar with the real exam format and question types
  • Evaluate your strengths and spot knowledge gaps
  • Gain the confidence you need to succeed on exam day

Below, you will find 50 free DBS-C01 practice questions to help you prepare for the exam. These questions are designed to reflect the real exam structure and difficulty level. You can click on each question to explore the details.

Question 1

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet.
What should the database specialist do to meet this requirement?

A. Modify the Aurora Serverless cluster by selecting a VPC with private subnets.

B. Modify the Aurora Serverless cluster by unchecking the publicly accessible option.

C. Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API.

D. Create a gateway VPC endpoint for RDS Data API.

 


Suggested Answer: C

Community Answer: C
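
As a rough sketch of option C (not part of the exam material), the interface endpoint could be created with boto3's `ec2.create_vpc_endpoint`; the Data API service name follows the `com.amazonaws.<region>.rds-data` pattern, and all IDs below are placeholders:

```python
# Minimal sketch: parameters for an interface VPC endpoint (AWS PrivateLink)
# for RDS Data API. VPC, subnet, and security group IDs are placeholders.
def data_api_endpoint_params(region, vpc_id, subnet_ids, sg_ids):
    """Build the request dict for ec2.create_vpc_endpoint(**params)."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.rds-data",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,        # private subnets
        "SecurityGroupIds": sg_ids,
        "PrivateDnsEnabled": True,      # default Data API hostname resolves privately
    }

params = data_api_endpoint_params("us-east-1", "vpc-0abc", ["subnet-0a", "subnet-0b"], ["sg-01"])
```

With `PrivateDnsEnabled` set, callers keep using the standard Data API hostname while traffic stays inside the VPC.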

 

Question 2

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.
Which solution will meet these requirements?

A. Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

B. Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

C. Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

D. Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

 


Suggested Answer: C

Community Answer: B

Reference:
https://aws.amazon.com/blogs/mt/best-practices-for-migrating-microsoft-sql-server-databases-to-amazon-ec2-using-cloudendure/

 

Question 3

A large retail company recently migrated its three-tier ecommerce applications to AWS. The company's backend database is hosted on Amazon Aurora PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL statements attached to the wait events are all single INSERT statements.
How should this issue be resolved?

A. Modify the application to commit transactions in batches

B. Add a new Aurora Replica to the Aurora DB cluster.

C. Add an Amazon ElastiCache for Redis cluster and change the application to write through.

D. Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).

 


Suggested Answer: B

Community Answer: A

 

Question 4

A development team asks a database specialist to create a copy of a production Amazon RDS for MySQL DB instance every morning. The development team will use the copied DB instance as a testing environment for development. The original DB instance and the copy will be hosted in different VPCs of the same AWS account. The development team wants the copy to be available by 6 AM each day and wants to use the same endpoint address each day.
Which combination of steps should the database specialist take to meet these requirements MOST cost-effectively? (Choose three.)

A. Create a snapshot of the production database each day before the 6 AM deadline.

B. Create an RDS for MySQL DB instance from the snapshot. Select the desired DB instance size.

C. Update a defined Amazon Route 53 CNAME record to point to the copied DB instance.

D. Set up an AWS Database Migration Service (AWS DMS) migration task to copy the snapshot to the copied DB instance.

E. Use the CopySnapshot action on the production DB instance to create a snapshot before 6 AM.

F. Update a defined Amazon Route 53 alias record to point to the copied DB instance.

 


Suggested Answer: AEF

Community Answer: ABC

 

Question 5

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message: "error: Spectrum Scan Error: Access throttled"
Which solution will resolve this error?

A. Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB

B. Reduce the number of queries that users can run in parallel.

C. Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size.

D. Review and optimize queries that submit a large aggregation step to Redshift Spectrum.

 


Suggested Answer: A

Community Answer: C

Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-upload-large-files/


 

Question 6

A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.
Which solution will meet these requirements?

A. Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.

B. Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.

C. Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.

D. Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.

 


Suggested Answer: C

Community Answer: D
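
A minimal sketch of option D (illustrative only): a gateway endpoint for DynamoDB is attached to the route tables of the private subnets, so the EC2 instances reach the table without traversing the internet. IDs are placeholders:

```python
# Minimal sketch: parameters for a gateway VPC endpoint for DynamoDB,
# associated with the private-subnet route tables. IDs are placeholders.
def dynamodb_gateway_endpoint_params(region, vpc_id, route_table_ids):
    """Build the request dict for ec2.create_vpc_endpoint(**params)."""
    return {
        "VpcEndpointType": "Gateway",
        "ServiceName": f"com.amazonaws.{region}.dynamodb",
        "VpcId": vpc_id,
        "RouteTableIds": route_table_ids,  # route tables of the private subnets
    }

params = dynamodb_gateway_endpoint_params("us-east-1", "vpc-0abc", ["rtb-0priv"])
```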

 

Question 7

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.
Which solution meets these requirements?

A. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

B. Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

C. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.

D. Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.

 


Suggested Answer: D

Community Answer: D

Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html


 

Question 8

A database specialist manages a critical Amazon RDS for MySQL DB instance for a company. The data stored daily could vary from 0.01% to 10% of the current database size. The database specialist needs to ensure that the DB instance storage grows as needed.
What is the MOST operationally efficient and cost-effective solution?

A. Configure RDS Storage Auto Scaling.

B. Configure RDS instance Auto Scaling.

C. Modify the DB instance allocated storage to meet the forecasted requirements.

D. Monitor the Amazon CloudWatch FreeStorageSpace metric daily and add storage as required.

 


Suggested Answer: B

Community Answer: A
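
In practice, option A amounts to setting `MaxAllocatedStorage` above the current allocation via `rds.modify_db_instance`. A minimal sketch with illustrative values:

```python
# Minimal sketch: RDS Storage Auto Scaling is enabled by setting
# MaxAllocatedStorage (the autoscaling ceiling, in GiB) above the current
# allocation. Identifier and sizes are placeholders.
def enable_storage_autoscaling(db_id, current_gib, ceiling_gib):
    """Build the request dict for rds.modify_db_instance(**params)."""
    assert ceiling_gib > current_gib, "ceiling must exceed current allocation"
    return {
        "DBInstanceIdentifier": db_id,
        "MaxAllocatedStorage": ceiling_gib,
        "ApplyImmediately": True,
    }

params = enable_storage_autoscaling("prod-mysql", current_gib=100, ceiling_gib=1000)
```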

 

Question 9

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company's code repository. The company also needs to meet a compliance requirement by routinely rotating its database master password for production.
What is the most secure solution to store the master password?

A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.

B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.

C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.

D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.

 


Suggested Answer: C

Community Answer: C
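
To make option C concrete, here is a hedged sketch of a JSON template fragment using the `{{resolve:secretsmanager:...}}` dynamic reference; the secret name `ProdDBSecret` and the resource shape are placeholders, not from the exam:

```python
import json

# Minimal sketch: a JSON CloudFormation fragment that resolves the master
# credentials from AWS Secrets Manager at deploy time. "ProdDBSecret" is a
# placeholder secret name; rotation is configured on the secret itself.
db_resource = {
    "Type": "AWS::RDS::DBCluster",
    "Properties": {
        "Engine": "aurora-mysql",
        "MasterUsername": "{{resolve:secretsmanager:ProdDBSecret:SecretString:username}}",
        "MasterUserPassword": "{{resolve:secretsmanager:ProdDBSecret:SecretString:password}}",
    },
}
fragment = json.dumps(db_resource, indent=2)
```

Because the template only contains the reference, no plaintext password ever lands in version control.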

 

Question 10

A company is using an Amazon Aurora PostgreSQL DB cluster for a project. A database specialist must ensure that the database is encrypted at rest. The database size is 500 GB.
What is the FASTEST way to secure the data through encryption at rest in the DB cluster?

A. Take a manual snapshot of the unencrypted DB cluster. Create an encrypted copy of that snapshot in the same AWS Region as the unencrypted snapshot. Restore a DB cluster from the encrypted snapshot.

B. Create an AWS Key Management Service (AWS KMS) key in the same AWS Region and create a new encrypted Aurora cluster using this key.

C. Take a manual snapshot of the unencrypted DB cluster. Restore the unencrypted snapshot to a new encrypted Aurora PostgreSQL DB cluster.

D. Create a new encrypted Aurora PostgreSQL DB cluster. Use AWS Database Migration Service (AWS DMS) to migrate the data from the unencrypted DB cluster to the encrypted DB cluster.

 


Suggested Answer: D

Community Answer: C
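
A minimal sketch of option C: restoring the unencrypted snapshot while supplying a KMS key yields an encrypted cluster. Names and the key ARN are placeholders:

```python
# Minimal sketch: supplying KmsKeyId when restoring a DB cluster snapshot
# produces an encrypted cluster. Identifiers are placeholders.
def encrypted_restore_params(snapshot_id, new_cluster_id, kms_key_id):
    """Build the request dict for rds.restore_db_cluster_from_snapshot(**params)."""
    return {
        "DBClusterIdentifier": new_cluster_id,
        "SnapshotIdentifier": snapshot_id,
        "Engine": "aurora-postgresql",
        "KmsKeyId": kms_key_id,  # presence of a key encrypts the restored cluster
    }

params = encrypted_restore_params(
    "prod-aurora-snap",
    "prod-aurora-encrypted",
    "arn:aws:kms:us-east-1:123456789012:key/example",
)
```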

 

Question 11

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company's network bandwidth is available.
How should the company perform this data load?

A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

 


Suggested Answer: C

Community Answer: C
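
For context on option C, the Neptune bulk Loader is invoked with an HTTP POST to the cluster's `:8182/loader` endpoint. A hedged sketch of the request body; bucket, role ARN, and format are placeholders:

```python
import json

# Minimal sketch: the JSON body POSTed to https://<neptune-endpoint>:8182/loader
# after AWS DataSync has landed the files in S3. All values are placeholders.
loader_request = {
    "source": "s3://fraud-data-bucket/export/",
    "format": "csv",  # graph data format; Gremlin CSV shown as an example
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
}
body = json.dumps(loader_request)
```

The S3 VPC endpoint mentioned in the question is what lets the Neptune cluster read the bucket privately during the load.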

 

Question 12

A company runs a MySQL database for its ecommerce application on a single Amazon RDS DB instance. Application purchases are automatically saved to the database, which causes intensive writes. Company employees frequently generate purchase reports. The company needs to improve database performance and reduce downtime due to patching for upgrades.
Which approach will meet these requirements with the LEAST amount of operational overhead?

A. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.

B. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.

C. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.

D. Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.

 


Suggested Answer: C

Community Answer: C

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

 

Question 13

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.
What is the recommended strategy for this use case?

A. Use ElastiCache for Memcached with write-through and long time to live (TTL)

B. Use ElastiCache for Redis with lazy loading and short time to live (TTL)

C. Use ElastiCache for Memcached with lazy loading and short time to live (TTL)

D. Use ElastiCache for Redis with write-through and long time to live (TTL)

 


Suggested Answer: B

Community Answer: D

 

Question 14

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.
How can this solution be implemented?

A. Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B. Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C. Use the AWS CLI to update the DynamoDB table and modify the partition key.

D. Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

 


Suggested Answer: D

Community Answer: A

 

Question 15

A company has a 250 GB Amazon RDS Multi-AZ DB instance. The company’s disaster recovery policy requires an RPO of 6 hours in a second AWS Region.
Which solution will meet these requirements MOST cost-effectively?

A. Use RDS automated snapshots. Create an AWS Lambda function to copy the snapshot to a second Region.

B. Use RDS automated snapshots every 6 hours. Use Amazon S3 Cross-Region Replication to copy the snapshot to a second Region.

C. Use AWS Backup to take an RDS snapshot every 6 hours and to copy the snapshot to a second Region.

D. Create an RDS cross-Region read replica in a second Region. Use AWS Backup to take an automated snapshot of the read replica every 6 hours.

 


Suggested Answer: C

Community Answer: C
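
A minimal sketch of option C as an AWS Backup plan document (the shape passed to `backup.create_backup_plan`): one rule on a 6-hour cron schedule with a cross-Region copy action. Vault names and ARNs are placeholders:

```python
# Minimal sketch: an AWS Backup plan that snapshots every 6 hours and copies
# each recovery point to a vault in a second Region. Names/ARNs are placeholders.
backup_plan = {
    "BackupPlanName": "rds-dr-6h",
    "Rules": [
        {
            "RuleName": "every-6-hours",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 */6 * * ? *)",  # meets the 6-hour RPO
            "CopyActions": [
                {
                    "DestinationBackupVaultArn":
                        "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
                }
            ],
        }
    ],
}
```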

 

Question 16

An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS. The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration.
Which solution meets these requirements?

A. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility).

B. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2

C. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility).

D. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2.

 


Suggested Answer: C

Community Answer: A

 

Question 17

A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.
Which approach should the Database Specialist take to securely manage the database credentials?

A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.

B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.

C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.

D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.

 


Suggested Answer: B

Community Answer: C

 

Question 18

A company wants to migrate its Microsoft SQL Server Enterprise Edition database instance from on premises to AWS. A deep review is performed, and the AWS Schema Conversion Tool (AWS SCT) provides options for running this workload on Amazon RDS for SQL Server Enterprise Edition, Amazon RDS for SQL Server Standard Edition, Amazon Aurora MySQL, and Amazon Aurora PostgreSQL. The company does not want to use its own SQL Server license and does not want to change from Microsoft SQL Server.
What is the MOST cost-effective and operationally efficient solution?

A. Run SQL Server Enterprise Edition on Amazon EC2.

B. Run SQL Server Standard Edition on Amazon RDS.

C. Run SQL Server Enterprise Edition on Amazon RDS.

D. Run Amazon Aurora MySQL leveraging SQL Server on Linux compatibility libraries.

 


Suggested Answer: D

Community Answer: B

Reference:
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html

 

Question 19

A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.
What is the most likely reason for this?

A. The source DB instance has to be converted to Single-AZ first to create a read replica from it.

B. Enhanced Monitoring is not enabled on the source DB instance.

C. The minor MySQL version in the source DB instance does not support read replicas.

D. Automated backups are not enabled on the source DB instance.

 


Suggested Answer: D

Community Answer: D

Reference:
https://aws.amazon.com/rds/features/read-replicas/


 

Question 20

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem. Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.
How should a database specialist fix this issue?

A. Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.

B. Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.

C. Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.

D. Add a ConditionExpression parameter in the PutItem request.

 


Suggested Answer: D

Community Answer: C

Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html
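
Option C can be sketched as a small polling loop. Here `describe_fn` stands in for `dynamodb.describe_table` so the logic can be exercised without an AWS connection; the fake status sequence is purely illustrative:

```python
import time

# Minimal sketch: poll DescribeTable until the new table reaches ACTIVE
# before calling PutItem. describe_fn is injected so the loop is testable.
def wait_until_active(describe_fn, table_name, delay=0.0, max_attempts=25):
    for _ in range(max_attempts):
        status = describe_fn(TableName=table_name)["Table"]["TableStatus"]
        if status == "ACTIVE":
            return True
        time.sleep(delay)  # real code would wait a few seconds per attempt
    return False

# Fake DescribeTable: CREATING twice, then ACTIVE.
_statuses = iter(["CREATING", "CREATING", "ACTIVE"])
ready = wait_until_active(
    lambda TableName: {"Table": {"TableStatus": next(_statuses)}}, "users"
)
```

boto3 also ships a built-in `table_exists` waiter that wraps this same pattern.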

 

Question 21

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.
While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.
What should the database specialist do to address the issue?

A. Change the data model to avoid hot partitions in the global secondary index.

B. Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C. Modify the table to use on-demand capacity instead of provisioned capacity.

D. Increase the number of retries on the bulk loading application.

 


Suggested Answer: B

Community Answer: A

Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html


 

Question 22

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company’s operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.
Which solution will solve the throttling issue without requiring changes to the app?

A. Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

B. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

C. Use on-demand capacity mode for the DynamoDB table.

D. Use DynamoDB Accelerator (DAX).

 


Suggested Answer: C

Community Answer: C
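
Option C is a one-parameter table update. A minimal sketch of the request shape for `dynamodb.update_table`; the table name is a placeholder:

```python
# Minimal sketch: switch the table to on-demand capacity, which absorbs
# sudden traffic spikes without capacity planning or application changes.
def on_demand_params(table_name):
    """Build the request dict for dynamodb.update_table(**params)."""
    return {
        "TableName": table_name,
        "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
    }

params = on_demand_params("mobile-app-backend")
```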

 

Question 23

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance.
The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.
What will happen when the modification is submitted?

A. The request will fail because this storage capacity is too large.

B. The request will succeed only if the primary instance is in active status.

C. The request will succeed only if CPU utilization is less than 10%.

D. The request will fail as the most recent modification was too soon.

 


Suggested Answer: B

Community Answer: D

 

Question 24

A company developed a new application that is deployed on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances use the security group named sg-application-servers. The company needs a database to store the data from the application and decides to use an Amazon RDS for MySQL DB instance. The DB instance is deployed in a private DB subnet.
What is the MOST restrictive configuration for the DB instance security group?

A. Only allow incoming traffic from the sg-application-servers security group on port 3306.

B. Only allow incoming traffic from the sg-application-servers security group on port 443.

C. Only allow incoming traffic from the subnet of the application servers on port 3306.

D. Only allow incoming traffic from the subnet of the application servers on port 443.

 


Suggested Answer: B

Community Answer: A
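
Option A expressed as an ingress rule: the DB security group admits MySQL traffic (port 3306) only from the application servers' security group, not from a CIDR range. A hedged sketch of the `ec2.authorize_security_group_ingress` request; the DB group ID is a placeholder:

```python
# Minimal sketch: a security-group-to-security-group rule allowing MySQL
# (3306) from the application tier only. "sg-database" is a placeholder.
def db_sg_ingress(db_sg_id, app_sg_id):
    """Build the request dict for ec2.authorize_security_group_ingress(**params)."""
    return {
        "GroupId": db_sg_id,
        "IpPermissions": [
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                "UserIdGroupPairs": [{"GroupId": app_sg_id}],  # sg reference, not CIDR
            }
        ],
    }

rule = db_sg_ingress("sg-database", "sg-application-servers")
```

Referencing the source security group is tighter than a subnet CIDR because it tracks instance membership automatically.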

 

Question 25

A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template.
What is the MOST operationally efficient solution to meet these requirements?

A. Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password.

B. Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30.

C. Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to ***/30***.

D. Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password.

 


Suggested Answer: C

Community Answer: B
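
A minimal sketch of option B as a JSON template fragment: an `AWS::SecretsManager::RotationSchedule` tying the secret to a rotation Lambda every 30 days. Logical IDs are placeholders:

```python
import json

# Minimal sketch: a RotationSchedule resource rotating the master secret
# every 30 days via a rotation Lambda. Logical IDs are placeholders.
rotation_schedule = {
    "Type": "AWS::SecretsManager::RotationSchedule",
    "Properties": {
        "SecretId": {"Ref": "MasterSecret"},
        "RotationLambdaARN": {"Fn::GetAtt": ["RotationLambda", "Arn"]},
        "RotationRules": {"AutomaticallyAfterDays": 30},
    },
}
fragment = json.dumps(rotation_schedule, indent=2)
```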

 

Question 26

A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.

B. Increase the size of the ElastiCache cluster nodes to a larger instance size.

C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.

D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

 


Suggested Answer: B

Community Answer: B

Reference:
https://aws.amazon.com/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/


 

Question 27

A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.
Which combination of steps should the database specialist take to rename the database? (Choose two.)

A. Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.

B. Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.

C. Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.

D. Update the application with the new database connection string.

E. Update the DNS record for the DB instance.

 


Suggested Answer: BD

Community Answer: BD

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.RenamingDB.html

 

Question 28

An online advertising company is implementing an application that displays advertisements to its users. The application uses an Amazon DynamoDB table as a data store. The application also uses a DynamoDB Accelerator (DAX) cluster to cache its reads. Most of the reads are from the GetItem query and the
BatchGetItem query. Consistency of reads is not a requirement for this application.
Upon deployment, the application cache is not performing as expected. Specific strongly consistent queries that run against the DAX cluster are taking many milliseconds to respond instead of microseconds.
How can the company improve the cache behavior to increase application performance?

A. Increase the size of the DAX cluster.

B. Configure DAX to be an item cache with no query cache

C. Use eventually consistent reads instead of strongly consistent reads.

D. Create a new DAX cluster with a higher TTL for the item cache.

 


Suggested Answer: C

Community Answer: C

Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html
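The community answer (C) works because DAX only caches eventually consistent reads; strongly consistent GetItem and BatchGetItem requests are passed straight through to DynamoDB, which is why they respond in milliseconds instead of microseconds. A minimal sketch of the request parameters as a client would build them; the table and key names are hypothetical:

```python
# GetItem request parameters as a DAX/DynamoDB client would send them.
# Table and key names are hypothetical placeholders.
request = {
    "TableName": "AdImpressions",
    "Key": {"AdId": {"S": "ad-12345"}},
    # False (the default) lets DAX answer from its item cache;
    # True forces a pass-through read to DynamoDB and skips the cache.
    "ConsistentRead": False,
}
```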

 

Question 29

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than
200 and that the disk I/O response time is significantly higher than usual.
What should the database specialist do to improve the performance of the application immediately?

A. Increase the Provisioned IOPS rate on the storage.

B. Increase the available storage space.

C. Use General Purpose SSD (gp2) storage with burst credits.

D. Create a read replica to offload Read IOPS from the DB instance.

 


Suggested Answer: C

Community Answer: A

General Purpose SSD.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html

<img src="https://www.examtopics.com/assets/media/exam-media/04237/0014200001.jpg" alt="Reference Image" />

 

Question 30

An ecommerce company runs an application on Amazon RDS for SQL Server 2017 Enterprise edition. Due to the increase in read volume, the company’s application team is planning to offload the read transactions by adding a read replica to the RDS for SQL Server DB instance.
What architectural conditions should a database specialist set? (Choose two.)

A. Ensure that the automatic backups are turned on for the RDS DB instance

B. Ensure the backup retention value is set to 0 for the RDS DB instance

C. Ensure the RDS DB instance is set to Multi-AZ

D. Ensure the RDS DB instance is set to Single-AZ

E. Ensure the RDS DB instance is in a stopped state to turn on the read replica

 


Suggested Answer: AC

Community Answer: AC

 

Question 31

A company uses an Amazon Aurora MySQL DB cluster with the most recent version of the MySQL database engine. The company wants all data that is transferred between clients and the DB cluster to be encrypted.
What should a database specialist do to meet this requirement?

A. Turn on data encryption when modifying the DB cluster by using the AWS Management Console or by using the AWS CLI to call the modify-db-cluster command.

B. Download the key pair for the DB instance. Reference that file from the –key-name option when connecting with a MySQL client.

C. Turn on data encryption by using AWS Key Management Service (AWS KMS). Use the AWS KMS key to encrypt the connections between a MySQL client and the Aurora DB cluster.

D. Turn on the require_secure_transport parameter in the DB cluster parameter group. Download the root certificate for the DB instance. Reference that file from the –ssl-ca option when connecting with a MySQL client.

 


Suggested Answer: A

Community Answer: D
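The community answer (D) enforces TLS at the engine level through a cluster parameter group change. A minimal sketch of that change, expressed as plain boto3 request parameters rather than a live call; the parameter group name is a hypothetical placeholder:

```python
# Request parameters for boto3's rds.modify_db_cluster_parameter_group call.
# The parameter group name below is a hypothetical placeholder.
params = {
    "DBClusterParameterGroupName": "aurora-mysql-custom",
    "Parameters": [
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "ON",
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot needed
        }
    ],
}
```

Clients then connect with the downloaded root certificate, for example `mysql --ssl-ca=<root-certificate>.pem`.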

 

Question 32

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.
Which action will improve query performance with the LEAST operational effort?

A. Migrate the database to a new Amazon Redshift data warehouse.

B. Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

C. Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

D. Add an Aurora read replica.

 


Suggested Answer: B

Community Answer: B

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.BestPractices.html

 

Question 33

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.
The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.
Which solution will meet these requirements?

A. Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B. Create a VPC endpoint for DynamoDB in the application’s VPC. Use the VPC endpoint to access the table.

C. Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D. Use a VPN to route all communication to DynamoDB through the company’s own corporate network infrastructure.

 


Suggested Answer: C

Community Answer: B
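The community answer (B) keeps DynamoDB traffic on the AWS network via a gateway VPC endpoint, avoiding internet-path latency. A sketch of the endpoint as a CloudFormation-style resource expressed as a Python dict; the region, VPC ID, and route table ID are hypothetical placeholders:

```python
# A gateway VPC endpoint for DynamoDB, written as a CloudFormation-style
# resource in Python dict form. VPC and route table IDs are placeholders.
endpoint = {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
        "VpcId": "vpc-0abc123",                       # hypothetical
        "ServiceName": "com.amazonaws.us-east-1.dynamodb",
        "VpcEndpointType": "Gateway",                 # DynamoDB uses gateway endpoints
        "RouteTableIds": ["rtb-0def456"],             # hypothetical
    },
}
```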

 

Question 34

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully.
The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.
How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.

B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.

C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.

D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.

 


Suggested Answer: D

Community Answer: C

Reference:
https://docs.aws.amazon.com/systems-manager/latest/userguide/mw-cli-task-options.html
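The community answer (C) fits because the jobs finish well inside Lambda's 15-minute timeout and a scheduled CloudWatch Events (EventBridge) rule removes the need for any always-on host. A sketch of the scheduled rule as plain request parameters for an events `put_rule` call; the rule name and schedule are hypothetical:

```python
# Parameters for an EventBridge (CloudWatch Events) scheduled rule that
# triggers the maintenance Lambda. Name and schedule are placeholders.
rule = {
    "Name": "nightly-db-maintenance",           # hypothetical rule name
    "ScheduleExpression": "cron(0 2 * * ? *)",  # 02:00 UTC daily
    "State": "ENABLED",
}
```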

 

Question 35

A company hosts an online gaming application on AWS. A single Amazon DynamoDB table contains one item for each registered user. The partition key for each item is the user's ID.
A daily report generator computes the sum totals of two well-known attributes for all items in the table that contain a dimension attribute. As the number of users grows, the report generator takes more time to generate the report.
Which combination of steps will minimize the time it takes to generate the report? (Choose two.)

A. Create a global secondary index (GSI) that uses the user ID as the partition key and the dimension attribute as the sort key. Use the GSI to project the two attributes that the report generator uses to compute the sum totals.

B. Create a local secondary index (LSI) that uses the user ID as the partition key and the dimension attribute as the sort key. Use the LSI to project the two attributes that the report generator uses to compute the sum totals.

C. Modify the report generator to query the index instead of the table.

D. Modify the report generator to scan the index instead of the table.

E. Modify the report generator to call the BatchGetItem operation.

 


Suggested Answer: CE

Community Answer: AC
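The community answer (AC) relies on the index being sparse: only items that actually have the dimension attribute appear in it, and projecting just the two summed attributes keeps each index entry small. A sketch of such a GSI definition; the attribute names are hypothetical:

```python
# Global secondary index definition that projects only the two attributes
# the report generator sums. Attribute names are hypothetical placeholders.
gsi = {
    "IndexName": "dimension-index",
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "dimension", "KeyType": "RANGE"},
    ],
    "Projection": {
        "ProjectionType": "INCLUDE",
        "NonKeyAttributes": ["attrA", "attrB"],  # the two summed attributes
    },
}
```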

 

Question 36

A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

C. Edit and enable Aurora DB cluster cache management in parameter groups.

D. Set TCP keepalive parameters to a high value.

E. Set JDBC connection string timeout variables to a low value.

F. Set Java DNS caching timeouts to a high value.

 


Suggested Answer: ABC

Community Answer: ACE

 

Question 37

A company is using Amazon DynamoDB global tables for an online gaming application. The game has players around the world. As the game has become more popular, the volume of requests to DynamoDB has increased significantly. Recently, players have reported that the game state is inconsistent between players in different countries. A database specialist observes that the ReplicationLatency metric for some of the replica tables is too high.
Which approach will alleviate the problem?

A. Configure all replica tables to use DynamoDB auto scaling.

B. Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.

C. Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.

D. Configure the table-level write throughput limit service quota to a higher value.

 


Suggested Answer: A

Community Answer: A

Using DynamoDB auto scaling is the recommended way to manage throughput capacity settings for replica tables that use the provisioned mode.
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables_reqs_bestpractices.html

<img src="https://www.examtopics.com/assets/media/exam-media/04237/0011100001.png" alt="Reference Image" />

 

Question 38

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.
Which architecture will meet these requirements in the MOST operationally efficient way?

A. Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

B. Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.

C. Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.

D. Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

 


Suggested Answer: C

Community Answer: A

 

Question 39

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add
RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)

A. Set DeletionProtection to True

B. Set MultiAZ to True

C. Set TerminationProtection to True

D. Set DeleteAutomatedBackups to False

E. Set DeletionPolicy to Delete

F. Set DeletionPolicy to Retain

 


Suggested Answer: ACF

Community Answer: ADF

Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html

<img src="https://www.examtopics.com/assets/media/exam-media/04237/0000300001.png" alt="Reference Image" />
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-accidental-updates/
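The community answer (ADF) combines three independent safeguards on the RDS resource. A sketch of the relevant template fragment, written here as a Python dict for illustration; required properties such as engine and instance class are omitted:

```python
# RDS DB instance resource with the three protective settings from the
# community answer (A, D, F). Other required properties are omitted.
resource = {
    "Type": "AWS::RDS::DBInstance",
    "DeletionPolicy": "Retain",           # keep the instance if the stack is deleted
    "Properties": {
        "DeletionProtection": True,       # block DeleteDBInstance calls
        "DeleteAutomatedBackups": False,  # keep automated backups after deletion
    },
}
```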

 

Question 40

A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon
RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?

A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint

B. The client-side application is caching the DNS data and its TTL is set too high

C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections

D. There were no active Aurora Replicas in the Aurora DB cluster

 


Suggested Answer: C

Community Answer: B

 

Question 41

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.
How can the Database Specialists accomplish this?

A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B. Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C. Enable Amazon RDS Performance Insights and review the appropriate dashboard

D. Enable Enhanced Monitoring with the appropriate settings

 


Suggested Answer: C

Community Answer: C

 

Question 42

A company is running a two-tier ecommerce application in one AWS account. The application is backed by an Amazon RDS for MySQL Multi-AZ DB instance. A developer mistakenly deleted the DB instance in the production environment. The company restores the database, but this event results in hours of downtime and lost revenue.
Which combination of changes would minimize the risk of this mistake occurring in the future? (Choose three.)

A. Grant least privilege to groups, IAM users, and roles.

B. Allow all users to restore a database from a backup.

C. Enable deletion protection on existing production DB instances.

D. Use an ACL policy to restrict users from DB instance deletion.

E. Enable AWS CloudTrail logging and Enhanced Monitoring.

 


Suggested Answer: ACE

Community Answer: ACD

 

Question 43

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.
Which action will meet these requirements?

A. Create an encrypted copy of manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

B. Modify the DB instance and enable encryption.

C. Restore a DB instance from the most recent automated snapshot and enable encryption.

D. Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.

 


Suggested Answer: C

Community Answer: A

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

 

Question 44

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location.
A database specialist must use encryption to ensure that the credentials are not visible in the source code.
Which solution will meet these requirements?

A. Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption.

B. Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager.

C. Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager.

D. Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates.

 


Suggested Answer: B

Community Answer: C
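The community answer (C) moves the secret itself out of source control into an encrypted SecureString parameter, which the application fetches at runtime. A sketch of the retrieval request parameters, not a live call; the parameter name is a hypothetical placeholder:

```python
# Request parameters for boto3's ssm.get_parameter call. The parameter
# name is a hypothetical placeholder; WithDecryption asks KMS to decrypt
# the SecureString value before returning it.
params = {
    "Name": "/prod/app/db-credentials",
    "WithDecryption": True,
}
```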

 

Question 45

A company uses Amazon DynamoDB as a data store for multi-tenant data. Approximately 70% of the reads by the company's application are strongly consistent. The current key schema for the DynamoDB table is as follows:
Partition key: OrgID
Sort key: TenantID#Version
Due to a change in design and access patterns, the company needs to support strongly consistent lookups based on the new schema below:
Partition key: OrgID#TenantID
Sort key: Version
How can the database specialist implement this change?

A. Create a global secondary index (GSI) on the existing table with the specified partition and sort key.

B. Create a local secondary index (LSI) on the existing table with the specified partition and sort key.

C. Create a new table with the specified partition and sort key. Create an AWS Glue ETL job to perform the transformation and write the transformed data to the new table.

D. Create a new table with the specified partition and sort key. Use AWS Database Migration Service (AWS DMS) to migrate the data to the new table.

 


Suggested Answer: B

Community Answer: C
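The community answer (C) requires a new table because a GSI supports only eventually consistent reads and an LSI cannot change the partition key. The per-item transform that an ETL job (such as AWS Glue) would apply can be sketched as follows; attribute names are taken from the question, the payload field is hypothetical:

```python
def migrate_item(item):
    """Re-key an item from (OrgID, TenantID#Version) to (OrgID#TenantID, Version)."""
    tenant_id, version = item["TenantID#Version"].split("#", 1)
    # Copy everything except the old key attributes, then add the new keys.
    new_item = {k: v for k, v in item.items() if k not in ("OrgID", "TenantID#Version")}
    new_item["OrgID#TenantID"] = f"{item['OrgID']}#{tenant_id}"
    new_item["Version"] = version
    return new_item

# Example: an old-schema item maps onto the new composite partition key.
old = {"OrgID": "org1", "TenantID#Version": "t42#7", "payload": "x"}  # hypothetical
new = migrate_item(old)
# → {"OrgID#TenantID": "org1#t42", "Version": "7", "payload": "x"}
```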

 

Question 46

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.
What should a database specialist do to accommodate the changing requirements for DAX?

A. Increase the number of nodes in the existing DAX cluster.

B. Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.

C. Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.

D. Modify the node type in the existing DAX cluster.

 


Suggested Answer: A

Community Answer: C

 

Question 47

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.
Which migration approach will be the fastest and most cost-effective to implement?

A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.

C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.

D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

 


Suggested Answer: A

Community Answer: A
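The community answer (A) uses Aurora cloning, which is copy-on-write and therefore both fast and cheap when the test environments change little data. In the RDS API a clone is created through a point-in-time restore with a copy-on-write restore type; a sketch of those request parameters, with hypothetical identifiers:

```python
# Request parameters for boto3's rds.restore_db_cluster_to_point_in_time
# call, which creates an Aurora clone. Identifiers are placeholders.
clone_params = {
    "DBClusterIdentifier": "test-env-01",         # hypothetical clone name
    "SourceDBClusterIdentifier": "prod-cluster",  # hypothetical source
    "RestoreType": "copy-on-write",               # clone instead of full restore
    "UseLatestRestorableTime": True,
}
```

A nightly script would delete the 12 clones and re-create them with parameters like these.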

 

Question 48

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.
The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.
Which solution meets these requirements?

A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B. Provision a clone of the existing DB cluster for the new Application team.

C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

 


Suggested Answer: A

Community Answer: D

 

Question 49

A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected.
What is the MOST likely reason that the items are not being deleted?

A. The TTL attribute’s value is set as a Number data type.

B. The TTL attribute’s value is set as a Binary data type.

C. The TTL attribute’s value is a timestamp in the Unix epoch time format in seconds.

D. The TTL attribute’s value is set with an expiration of 1 year.

 


Suggested Answer: C

Community Answer: B

Attribute’s value is a timestamp in Unix epoch time format in seconds.
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html

<img src="https://www.examtopics.com/assets/media/exam-media/04237/0013200001.png" alt="Reference Image" />
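The community answer (B) follows from the TTL contract: DynamoDB only processes a TTL attribute that is a Number holding a Unix epoch timestamp in seconds, so a Binary (or String) attribute is silently ignored. A minimal sketch of computing a valid one-year expiration value:

```python
import time

# DynamoDB TTL expects a Number attribute holding a Unix epoch timestamp
# in seconds. A Binary or String attribute, or a millisecond timestamp,
# is ignored or treated as far in the future, so items are never deleted.
ONE_YEAR_SECONDS = 365 * 24 * 60 * 60
expires_at = int(time.time()) + ONE_YEAR_SECONDS  # store this as a Number
```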

 

Question 50

A company provisioned a three-tier application by using AWS CloudFormation and an Amazon RDS DB instance. During a test, a database administrator accidentally deleted the CloudFormation stack. The results were a deletion of all the resources, including the DB instance, and a loss of critical data. The company wants to prevent accidental deletion of a DB instance from happening in the future.
Which solutions will meet this requirement? (Choose two.)

A. Set the deletion policy of the stack to Retain.

B. Set the deletion policy of the RDS resource to Retain.

C. Set the deletion policy of the stack to Snapshot.

D. Enable termination protection for the RDS resource.

E. Enable termination protection for the stack.

 


Suggested Answer: AB

Community Answer: BE
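The community answer (BE) pairs a `DeletionPolicy: Retain` attribute on the RDS resource with termination protection on the stack itself. The stack-level half can be sketched as plain request parameters for CloudFormation's `update_termination_protection` call; the stack name is a hypothetical placeholder:

```python
# Request parameters for boto3's
# cloudformation.update_termination_protection call (answer E). The stack
# name is a hypothetical placeholder.
params = {
    "StackName": "three-tier-app",
    "EnableTerminationProtection": True,  # blocks delete-stack until disabled
}
```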

 

Free Access Full DBS-C01 Practice Test Free Questions

If you’re looking for more DBS-C01 practice test free questions, click here to access the full DBS-C01 practice test.

We regularly update this page with new practice questions, so be sure to check back frequently.

Good luck with your DBS-C01 certification journey!
