DBS-C01 Exam Prep Free – 50 Practice Questions to Get You Ready for Exam Day
Getting ready for the DBS-C01 certification? Our DBS-C01 Exam Prep Free resource includes 50 exam-style questions designed to help you practice effectively and feel confident on test day.
Effective DBS-C01 exam prep free is the key to success. With our free practice questions, you can:
- Get familiar with exam format and question style
- Identify which topics you’ve mastered—and which need more review
- Boost your confidence and reduce exam anxiety
Below, you will find 50 realistic DBS-C01 Exam Prep Free questions that cover key exam topics. These questions are designed to reflect the structure and challenge level of the actual exam, making them perfect for your study routine.
A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table. To prepare the new table with identical settings, which steps should be performed? (Choose two.)
A. Re-create global secondary indexes in the new table
B. Define IAM policies for access to the new table
C. Define the TTL settings
D. Encrypt the table from the AWS Management Console or use the update-table command
E. Set the provisioned read and write capacity
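Useful context for this question: a DynamoDB restore recreates the table's data and schema, but several settings, including TTL, must be reconfigured on the restored table afterward. A minimal sketch of the UpdateTimeToLive request shape (the table and attribute names are hypothetical examples):

```python
# Request shape for re-enabling TTL on a restored DynamoDB table.
# Table and attribute names here are hypothetical.
ttl_request = {
    "TableName": "orders-restored",
    "TimeToLiveSpecification": {
        "Enabled": True,
        "AttributeName": "expires_at",  # epoch-seconds attribute
    },
}
# With boto3 this would be passed as:
#   boto3.client("dynamodb").update_time_to_live(**ttl_request)
```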
A company released a mobile game that quickly grew to 10 million daily active users in North America. The game's backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL attribute. When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that are several hours past their TTL expiry. How should a database specialist fix this issue?
A. Use a client library that supports the TTL functionality for DynamoDB.
B. Include a query filter expression to ignore items with an expired TTL.
C. Set the ConsistentRead parameter to true when querying the table.
D. Create a local secondary index on the TTL attribute.
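Background that makes this scenario easier to reason about: DynamoDB deletes expired items on a best-effort background schedule, so reads can still return items past their TTL for some time. The standard mitigation is to filter expired items out, either server-side with a filter expression or client-side. A plain-Python sketch of that check (item shapes are hypothetical):

```python
import time

def strip_expired(items, now=None):
    """Drop items whose 'ttl' attribute (epoch seconds) has passed.

    Mirrors what a DynamoDB filter expression like 'ttl > :now'
    would do server-side on a Query or Scan.
    """
    now = int(time.time()) if now is None else now
    return [item for item in items if item.get("ttl", 0) > now]

items = [
    {"id": "a", "ttl": 1_000},           # long expired
    {"id": "b", "ttl": 2_000_000_000},   # far in the future
]
live = strip_expired(items, now=1_500)   # only item "b" survives
```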
A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company's network bandwidth is available. How should the company perform this data load?
A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
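For context on the options above: Neptune's documented bulk-ingest mechanism is its Loader API, which reads data from S3 via an HTTP POST to the cluster's `/loader` endpoint. A sketch of the loader request payload, where the bucket, IAM role ARN, and Region are hypothetical:

```python
# Payload shape for the Neptune bulk Loader API (POST to the cluster's
# /loader endpoint). Bucket, role ARN, and Region are hypothetical.
loader_request = {
    "source": "s3://fraud-detection-staging/graph/",
    "format": "csv",          # property-graph CSV; RDF formats also supported
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "TRUE",
}
```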
A company is running an on-premises application comprised of a web tier, an application tier, and a MySQL database tier. The database is used primarily during business hours with random activity peaks throughout the day. A database specialist needs to improve the availability and reduce the cost of the MySQL database tier as part of the company's migration to AWS. Which MySQL database option would meet these requirements?
A. Amazon RDS for MySQL with Multi-AZ
B. Amazon Aurora Serverless MySQL cluster
C. Amazon Aurora MySQL cluster
D. Amazon RDS for MySQL with read replica
A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users' devices read the latest statuses of their teammates from the table using the BatchGetItem operation. Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation. Which recommendation would resolve this issue?
A. Ensure the DynamoDB table is configured to be always consistent.
B. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.
C. Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.
D. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.
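For reference, ConsistentRead in BatchGetItem is set per table inside the request. A sketch of the request shape (the table name and key are hypothetical):

```python
# Request shape for a strongly consistent BatchGetItem call
# (boto3 dynamodb batch_get_item). Table and key names are hypothetical.
batch_request = {
    "RequestItems": {
        "team-status": {
            "Keys": [{"team_id": {"S": "team-42"}}],
            "ConsistentRead": True,  # read from the leader, not a replica
        }
    }
}
```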
A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group. What should the Database Specialist do to automatically collect the database logs for the Administrator?
A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs
B. Enable DocumentDB to export the logs to AWS CloudTrail
C. Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
D. Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
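For context on how log export is actually configured: once `audit_logs` is enabled in the cluster parameter group, the cluster must also be told to publish those logs. A sketch of the modify-cluster parameter shape (the cluster identifier is hypothetical):

```python
# Parameter shape for exporting DocumentDB audit logs to CloudWatch Logs
# (boto3 docdb modify_db_cluster). The cluster identifier is hypothetical.
modify_request = {
    "DBClusterIdentifier": "marketing-docdb-cluster",
    "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["audit"]},
}
```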
A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation. How can the Database Specialists accomplish this?
A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
B. Create appropriate Amazon CloudWatch dashboards to contain specific periods of time
C. Enable Amazon RDS Performance Insights and review the appropriate dashboard
D. Enable Enhanced Monitoring with the appropriate settings
A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic. What should a Database Specialist recommend for this user?
A. Create an Amazon DynamoDB table with provisioned capacity mode
B. Create an Amazon DocumentDB cluster
C. Create an Amazon DynamoDB table with on-demand capacity mode
D. Create an Amazon Aurora Serverless DB cluster
A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created. Which of the following are possible reasons why the snapshot was not created? (Choose two.)
A. A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.
B. A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.
C. The RDS maintenance window is not configured.
D. The RDS DB instance is in the STORAGE_FULL state.
E. RDS event notifications have not been enabled.
A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover. Which solution on AWS will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon RDS DB instance with a read replica.
B. Deploy an Amazon RDS Multi-AZ DB instance.
C. Deploy Amazon DynamoDB global tables.
D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.
A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX). The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads. What should the database specialist review first to improve the utility of DAX?
A. The DynamoDB ConsumedReadCapacityUnits metric
B. The trust relationship to perform the DynamoDB API calls
C. The DAX cluster’s TTL setting
D. The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest
A company recently launched a mobile app that has grown in popularity during the last week. The company started development in the cloud and did not initially follow security best practices during development of the mobile app. The mobile app gives customers the ability to use the platform anonymously. Platform architects use Amazon ElastiCache for Redis in a VPC to manage session affinity (sticky sessions) and cookies for customers. The company's security team now mandates encryption in transit and encryption at rest for all traffic. A database specialist is using the AWS CLI to comply with this mandate. Which combination of steps should the database specialist take to meet these requirements? (Choose three.)
A. Create a manual backup of the existing Redis replication group by using the create-snapshot command. Restore from the backup by using the create-replication-group command
B. Use the --transit-encryption-enabled parameter on the new Redis replication group
C. Use the --at-rest-encryption-enabled parameter on the existing Redis replication group
D. Use the --transit-encryption-enabled parameter on the existing Redis replication group
E. Use the --at-rest-encryption-enabled parameter on the new Redis replication group
F. Create a manual backup of the existing Redis replication group by using the CreateBackupSelection command. Restore from the backup by using the StartRestoreJob command
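For reference on the snapshot-and-recreate path, here are the two request shapes involved, expressed as boto3 ElastiCache parameters. The group and snapshot names are hypothetical:

```python
# Parameter shapes for snapshotting an existing Redis replication group
# and restoring it into a new, encrypted group (boto3 elasticache).
# Group and snapshot names are hypothetical.
snapshot_request = {
    "ReplicationGroupId": "sessions-redis",
    "SnapshotName": "sessions-redis-pre-encryption",
}
restore_request = {
    "ReplicationGroupId": "sessions-redis-encrypted",
    "ReplicationGroupDescription": "Session store with encryption",
    "SnapshotName": "sessions-redis-pre-encryption",
    "TransitEncryptionEnabled": True,   # encryption in transit
    "AtRestEncryptionEnabled": True,    # encryption at rest
}
```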
A company uses an Amazon Aurora MySQL DB cluster with the most recent version of the MySQL database engine. The company wants all data that is transferred between clients and the DB cluster to be encrypted. What should a database specialist do to meet this requirement?
A. Turn on data encryption when modifying the DB cluster by using the AWS Management Console or by using the AWS CLI to call the modify-db-cluster command.
B. Download the key pair for the DB instance. Reference that file from the --key-name option when connecting with a MySQL client.
C. Turn on data encryption by using AWS Key Management Service (AWS KMS). Use the AWS KMS key to encrypt the connections between a MySQL client and the Aurora DB cluster.
D. Turn on the require_secure_transport parameter in the DB cluster parameter group. Download the root certificate for the DB instance. Reference that file from the --ssl-ca option when connecting with a MySQL client.
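For reference, `require_secure_transport` is a dynamic parameter set at the cluster parameter group level. A sketch of the modify request shape (the parameter group name is hypothetical):

```python
# Parameter shape for turning on require_secure_transport in an Aurora
# MySQL cluster parameter group (boto3 rds). Group name is hypothetical.
param_request = {
    "DBClusterParameterGroupName": "aurora-mysql-tls-required",
    "Parameters": [
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "ON",
            "ApplyMethod": "immediate",  # dynamic parameter; no reboot needed
        }
    ],
}
# A client would then connect with the RDS root certificate, e.g.:
#   mysql --ssl-ca=global-bundle.pem -h <cluster-endpoint> ...
```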
A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime. Which solution will meet these requirements?
A. Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.
B. Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.
C. Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.
D. Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.
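For context on the snapshot-restore path, encryption is enabled at restore time by supplying a KMS key. A sketch of the request shape (identifiers and the key alias are hypothetical):

```python
# Parameter shape for restoring an unencrypted Aurora snapshot into an
# encrypted cluster (boto3 rds restore_db_cluster_from_snapshot).
# Identifiers and the KMS key alias are hypothetical.
restore_request = {
    "DBClusterIdentifier": "finance-aurora-encrypted",
    "SnapshotIdentifier": "finance-aurora-unencrypted-snap",
    "Engine": "aurora-postgresql",
    "KmsKeyId": "alias/finance-db-key",  # supplying a key enables encryption
}
```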
An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream. Which solution will meet these requirements?
A. Amazon Aurora MySQL Multi-AZ DB cluster
B. Amazon Keyspaces (for Apache Cassandra)
C. Amazon DynamoDB table with DynamoDB auto scaling
D. Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone
A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration. Which solution will MOST improve the performance of the data migration?
A. Increase the number of tables that are loaded in parallel.
B. Drop all indexes on the source tables.
C. Change the processing mode from the batch optimized apply option to transactional mode.
D. Enable Multi-AZ on the target database while the full load task is in progress.
A company uses multiple AWS accounts in AWS Organizations to separate development teams that work on different applications. Each AWS account contains multiple applications that run in the default VPC with interface endpoints. The applications need access to the same underlying data in an Amazon Aurora PostgreSQL DB cluster in one of the AWS accounts. Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Resource Access Manager (AWS RAM) to share the subnet that contains the database. Create an Amazon RDS Proxy endpoint for the other applications to access.
B. Use VPC peering to connect the VPCs of the other AWS accounts to the subnet that contains the database.
C. Create an Amazon S3 bucket that stores database backups. Configure replication to S3 buckets in the other accounts. Restore the backups in the other AWS accounts.
D. Create an interface VPC endpoint for the Amazon RDS API. Attach an endpoint policy that grants the other AWS accounts access to the database.
A database specialist is creating an AWS CloudFormation stack. The database specialist wants to prevent accidental deletion of an Amazon RDS ProductionDatabase resource in the stack. Which solution will meet this requirement?
A. Create a stack policy to prevent updates. Include "Effect" : "ProductionDatabase" and "Resource" : "Deny" in the policy.
B. Create an AWS CloudFormation stack in XML format. Set xAttribute as false.
C. Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.
D. Create a stack policy to prevent updates. Include "Effect" : "Deny" and "Resource" : "ProductionDatabase" in the policy.
A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user. What is the MOST operationally efficient way to restore the default permissions of the master user?
A. Modify the DB instance and set a new master user password.
B. Use AWS Secrets Manager to modify the master user password and restart the DB instance.
C. Create a new master user for the DB instance.
D. Review the IAM user that owns the DB instance, and add missing permissions.
A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. The application is very popular, and the company expects a tenfold increase in the user base in the next few months. The application experiences more traffic during the morning and evening hours. The application has two parts: an in-house booking component that accepts online bookings corresponding to simultaneous requests from users, and a third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data. A database specialist needs to design a cost-effective database solution to handle this workload. Which solution meets these requirements?
A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.
A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle. Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?
A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.
B. Increase the size of the ElastiCache cluster nodes to a larger instance size.
C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.
A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account. Which combination of actions must the application development team take to meet these requirements? (Choose two.)
A. Add access from the “WeReceive” account to the custom AWS KMS key policy of the sharing team.
B. Make a copy of the DB snapshot, and set the encryption option to disable.
C. Share the DB snapshot by setting the DB snapshot visibility option to public.
D. Make a copy of the DB snapshot, and set the encryption option to enable.
E. Share the DB snapshot by using the default AWS KMS encryption key.
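For reference, two mechanisms are involved when a snapshot encrypted with a custom KMS key crosses accounts: the snapshot's restore attribute and the key policy. Sketches of both shapes (the identifiers and account ID are hypothetical):

```python
# Two request shapes involved in cross-account snapshot sharing
# (boto3 rds + a KMS key policy statement). Identifiers and the
# account ID are hypothetical.

# Share a *manual* snapshot with another account:
share_request = {
    "DBSnapshotIdentifier": "weshare-manual-snap",
    "AttributeName": "restore",
    "ValuesToAdd": ["210987654321"],  # the receiving account's ID
}

# Key policy statement granting that account use of the custom KMS key:
kms_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::210987654321:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}
```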
A database administrator is working on transferring data from an on-premises Oracle instance to an Amazon RDS for Oracle DB instance through an AWS Database Migration Service (AWS DMS) task with ongoing replication only. The database administrator noticed that the migration task failed after running successfully for some time. The logs indicate that there was a generic error. The database administrator wants to know which data definition language (DDL) statement caused this issue. What should the database administrator do to identify this issue in the MOST operationally efficient manner?
A. Export AWS DMS logs to Amazon CloudWatch and identify the DDL statement from the AWS Management Console
B. Turn on logging for the AWS DMS task by setting the TARGET_LOAD action with the level of severity set to LOGGER_SEVERITY_DETAILED_DEBUG
C. Turn on DDL activity tracing in the RDS for Oracle DB instance parameter group
D. Turn on logging for the AWS DMS task by setting the TARGET_APPLY action with the level of severity set to LOGGER_SEVERITY_DETAILED_DEBUG
A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly. How should a Database Specialist ensure DynamoDB can handle the increased traffic?
A. Ensure the table is always provisioned to meet peak needs
B. Allow burst capacity to handle the additional load
C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event
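Whichever option you lean toward, the sizing arithmetic behind a "10 times normal load" event is worth being comfortable with. A sketch with hypothetical baseline numbers:

```python
# Back-of-the-envelope sizing for a 10x event peak. The baseline
# capacity figures here are hypothetical examples.
baseline_rcu, baseline_wcu = 4_000, 1_000
peak_factor = 10

peak_rcu = baseline_rcu * peak_factor   # capacity needed at event peak
peak_wcu = baseline_wcu * peak_factor
```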
A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual. What should the database specialist do to improve the performance of the application immediately?
A. Increase the Provisioned IOPS rate on the storage.
B. Increase the available storage space.
C. Use General Purpose SSD (gp2) storage with burst credits.
D. Create a read replica to offload Read IOPS from the DB instance.
A company needs to migrate Oracle Database Standard Edition running on an Amazon EC2 instance to an Amazon RDS for Oracle DB instance with Multi-AZ. The database supports an ecommerce website that runs continuously. The company can only provide a maintenance window of up to 5 minutes. Which solution will meet these requirements?
A. Configure Oracle Real Application Clusters (RAC) on the EC2 instance and the RDS DB instance. Update the connection string to point to the RAC cluster. Once the EC2 instance and RDS DB instance are in sync, fail over from Amazon EC2 to Amazon RDS.
B. Export the Oracle database from the EC2 instance using Oracle Data Pump and perform an import into Amazon RDS. Stop the application for the entire process. When the import is complete, change the database connection string and then restart the application.
C. Configure AWS DMS with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.
D. Configure AWS DataSync with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.
A company is preparing to release a new application. During a load test on the application before launch, the company noticed that its Amazon RDS for MySQL database responded more slowly than expected. As a result, the application did not meet performance goals. A database specialist must determine which SQL statements are consuming the most load. Which set of steps should the database specialist take to obtain this information?
A. Navigate to RDS Performance Insights. Select the database that is associated with the application. Update the counter metrics to show top_sql. Update the time range to when the load test occurred. Review the top SQL statements.
B. Navigate to RDS Performance Insights. Select the database that is associated with the application. Update the time range to when the load test occurred. Change the slice to SQL. Review the top SQL statements.
C. Navigate to Amazon CloudWatch. Select the metrics for the appropriate DB instance. Review the top SQL statements metric for the time range when the load test occurred. Create a CloudWatch dashboard to watch during future load tests.
D. Navigate to Amazon CloudWatch. Find the log group for the application’s database. Review the top-sql-statements log file for the time range when the load test occurred.
A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC. Which solutions will reduce this latency? (Choose two.)
A. Configure the DMS task to run in full large binary object (LOB) mode.
B. Configure the DMS task to run in limited large binary object (LOB) mode.
C. Create a Multi-AZ replication instance.
D. Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions.
E. Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions.
A company wants to migrate its on-premises Oracle database to a managed open-source database engine in Amazon RDS by using AWS Database Migration Service (AWS DMS). A database specialist needs to identify the target engine in Amazon RDS based on the conversion percentage of database code objects such as stored procedures, functions, views, and database storage objects. The company will select the engine that has the least manual conversion effort. What should the database specialist do to identify the target engine?
A. Use the AWS Schema Conversion Tool (AWS SCT) database migration assessment report
B. Use the AWS Schema Conversion Tool (AWS SCT) multiserver assessor
C. Use an AWS DMS pre-migration assessment
D. Use the AWS DMS data validation tool
A company is running its critical production workload on a 500 GB Amazon Aurora MySQL DB cluster. A database engineer must move the workload to a new Amazon Aurora Serverless MySQL DB cluster without data loss. Which solution will accomplish the move with the LEAST downtime and the LEAST application impact?
A. Modify the existing DB cluster and update the Aurora configuration to "Serverless."
B. Create a snapshot of the existing DB cluster and restore it to a new Aurora Serverless DB cluster.
C. Create an Aurora Serverless replica from the existing DB cluster and promote it to primary when the replica lag is minimal.
D. Replicate the data between the existing DB cluster and a new Aurora Serverless DB cluster by using AWS Database Migration Service (AWS DMS) with change data capture (CDC) enabled.
A company is migrating a database in an Amazon RDS for SQL Server DB instance from one AWS Region to another. The company wants to minimize database downtime during the migration. Which strategy should the company choose for this cross-Region migration?
A. Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.
B. Back up the source database using native backup to an Amazon S3 bucket in the same Region. Use Amazon S3 Cross-Region Replication to copy the backup to an S3 bucket in the target Region. Then restore the backup in the target Region.
C. Configure AWS Database Migration Service (AWS DMS) to replicate data between the source and the target databases. Once the replication is in sync, terminate the DMS task.
D. Add an RDS for SQL Server cross-Region read replica in the target Region. Once the replication is in sync, promote the read replica to master.
A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward. Which solution meets these requirements?
A. Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies using a mysqldump backup from the RDS for MySQL DB instances and import them into the new EC2 instances.
B. Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.
C. Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.
D. Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.
A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company’s operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience. Which solution will solve the throttling issue without requiring changes to the app?
A. Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.
B. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
C. Use on-demand capacity mode for the DynamoDB table.
D. Use DynamoDB Accelerator (DAX).
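For reference on how a capacity-mode switch is expressed, here is the update-table parameter shape for on-demand mode (the table name is hypothetical):

```python
# Request shape for switching a DynamoDB table to on-demand capacity
# (boto3 dynamodb update_table). The table name is hypothetical.
update_request = {
    "TableName": "mobile-app-sessions",
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}
```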
A retail company manages a web application that stores data in an Amazon DynamoDB table. The company is undergoing account consolidation efforts. A database engineer needs to migrate the DynamoDB table from the current AWS account to a new AWS account. Which strategy meets these requirements with the LEAST amount of administrative work?
A. Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.
B. Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items of a DynamoDB table in the new account.
C. Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.
D. Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.
An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources. What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?
A. Increase the size of the DB instance storage
B. Change the underlying EBS storage type to General Purpose SSD (gp2)
C. Disable EBS optimization on the DB instance
D. Change the DB instance to an instance class with a higher maximum bandwidth
A company has an on-premises SQL Server database. The users access the database using Active Directory authentication. The company successfully migrated its database to Amazon RDS for SQL Server. However, the company is concerned about user authentication in the AWS Cloud environment. Which solution should a database specialist provide for the user to authenticate?
A. Deploy Active Directory Federation Services (AD FS) on premises and configure it with an on-premises Active Directory. Set up delegation between the on-premises AD FS and AWS Security Token Service (AWS STS) to map user identities to a role using the AmazonRDSDirectoryServiceAccess managed IAM policy.
B. Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Use AWS SSO to configure an Active Directory user delegated to access the databases in RDS for SQL Server.
C. Use Active Directory Connector to redirect directory requests to the company’s on-premises Active Directory without caching any information in the cloud. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.
D. Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Ensure RDS for SQL Server is using mixed mode authentication. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.
A company needs to troubleshoot its Amazon Aurora MySQL database. The company selected a db.t3.medium instance class for the database's initial deployment. The database experienced light usage, and performance was normal. As the number of client connections increases, the application that is connected to the database is experiencing higher latency and occasional lost connections. A database specialist determines that the database needs to support a maximum of 2,000 simultaneous connections. Which solution will meet these requirements MOST cost-effectively?
A. Modify the instance class to db.r3.xlarge. Apply the changes immediately.
B. Edit the default parameter group for the MySQL engine that the database uses. Change the max_connections value to 2,000. Reboot the DB instance to apply the new value.
C. Create a new parameter group for the MySQL engine that the database uses. Set the max_connections value to 2,000. Assign the parameter group to the DB instance. Apply the changes immediately.
D. Modify the instance class to db.t3.large. Apply the changes immediately.
A finance company needs to make sure that its MySQL database backups are available for the most recent 90 days. All of the MySQL databases are hosted on Amazon RDS for MySQL DB instances. A database specialist must implement a solution that meets the backup retention requirement with the least possible development effort. Which approach should the database specialist take?
A. Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.
B. Modify the DB instances to enable the automated backup option. Select the required backup retention period.
C. Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.
D. Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.
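For context on these options: RDS automated backups (option B) cap retention at 35 days, so a 90-day requirement needs either AWS Backup or manual snapshots with a cleanup step. The cleanup logic behind option D can be sketched as a pure function (the tuple shape is illustrative; a real Lambda would call the RDS API to list and delete snapshots):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # retention requirement from the question

def snapshots_to_delete(snapshots, now=None):
    """Given [(snapshot_id, created_at)], return IDs past the retention window.

    Mirrors the Lambda cleanup step in option D; the input shape is a
    stand-in for what DescribeDBSnapshots would return.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [sid for sid, created in snapshots if created < cutoff]
```

AWS Backup (option A) achieves the same retention declaratively through a backup plan, which is why it involves the least development effort.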
An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster. During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric continuously keeps growing in Amazon CloudWatch. What is the MOST likely reason for this occurrence?
A. A VPC endpoint was not added to access DynamoDB.
B. Strongly consistent reads are always passed through DAX to DynamoDB.
C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
D. A VPC endpoint was not added to access CloudWatch.
A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old. Which architecture will meet these requirements in the MOST operationally efficient way?
A. Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.
B. Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
C. Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
D. Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.
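The aggregation that the scheduled Lambda in these options would run each minute can be sketched in plain Python (function and field names here are illustrative, not from the question):

```python
from datetime import datetime, timedelta, timezone

def top_players(scores, now=None, window_hours=24, limit=10):
    """Top `limit` scorers within the trailing window.

    `scores` is [(player, score, timestamp)]. A real deployment would read
    this from the time-series store and cache the result (e.g. in Redis)
    so the game server never runs the 10-second computation itself.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=window_hours)
    best = {}
    for player, score, ts in scores:
        if ts >= cutoff:
            best[player] = max(score, best.get(player, score))
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)[:limit]
```

Running this every minute keeps the cached leaderboard at most 60 seconds stale, which satisfies the 1-minute freshness requirement even though the computation itself takes about 10 seconds.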
A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times. What could be causing these slow response times?
A. New volumes created from snapshots load lazily in the background
B. Long-running statements on the master
C. Insufficient resources on the master
D. Overload of a single replication thread by excessive writes on the master
A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions. This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location. Which set of actions will meet these requirements?
A. Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
B. Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
C. Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
D. Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live. What change should the Database Specialist make to enable the migration?
A. Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
B. Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
C. Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
D. Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup. The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company. Which solution will meet these requirements with minimal effort?
A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.
D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1. What is likely causing the timeouts?
A. The database is deployed in a VPC that is in a different Region.
B. The database is deployed in a VPC that is in a different Availability Zone.
C. The database is deployed with misconfigured security groups.
D. The database is deployed with the wrong client connect timeout configuration.
A company is planning on migrating a 500-GB database from Oracle to Amazon Aurora PostgreSQL using the AWS Schema Conversion Tool (AWS SCT) and AWS DMS. The database does not have any stored procedures to migrate but has some tables that are large or partitioned. The application is critical for business so a migration with minimal downtime is preferred. Which combination of steps should a database specialist take to accelerate the migration process? (Choose three.)
A. Use the AWS SCT data extraction agent to migrate the schema from Oracle to Aurora PostgreSQL.
B. For the large tables, change the setting for the maximum number of tables to load in parallel and perform a full load using AWS DMS.
C. For the large tables, create a table settings rule with a parallel load option in AWS DMS, then perform a full load using DMS.
D. Use AWS DMS to set up change data capture (CDC) for continuous replication until the cutover date.
E. Use AWS SCT to convert the schema from Oracle to Aurora PostgreSQL.
F. Use AWS DMS to convert the schema from Oracle to Aurora PostgreSQL and for continuous replication.
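The table-settings rule referenced in option C is expressed in the DMS task's table-mappings JSON. A minimal sketch of what such a mapping might look like for a partitioned table (the schema name SALES and table name ORDERS are placeholders; consult the DMS documentation for the full rule schema):

```python
import json

# Sketch of AWS DMS table mappings: a selection rule plus a
# "table-settings" rule that loads a partitioned table's partitions in
# parallel during the full load. All object names are hypothetical.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales",
            "object-locator": {"schema-name": "SALES", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "parallel-load-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "parallel-load": {"type": "partitions-auto"},
        },
    ]
}

print(json.dumps(table_mappings, indent=2))
```

Parallel-load settings split a large or partitioned table into segments that DMS loads concurrently, which is what accelerates the full load without touching the schema-conversion step handled by AWS SCT.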
An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM. The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered. How should the database administrator remediate this issue at the lowest cost?
A. Enable auto scaling and set the target usage rate to 90%.
B. Switch the table to provisioned mode and enable auto scaling.
C. Switch the table to provisioned mode and set the throughput to the peak value.
D. Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.
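Weighing options B and C involves some capacity arithmetic: in provisioned mode, one write capacity unit (WCU) covers one standard write per second of an item up to 1 KB. A rough sizing sketch with hypothetical per-user write numbers (the item size and one-write-per-user assumption are illustrative):

```python
import math

def required_wcu(writes_per_second: float, item_size_kb: float) -> int:
    """WCUs needed: 1 WCU = one standard write per second of up to 1 KB."""
    return math.ceil(writes_per_second * math.ceil(item_size_kb))

# Hypothetical 5 PM burst: 500,000 users each inserting one score item
# (assumed <= 1 KB) spread over the 10-minute ramp.
peak_writes_per_sec = 500_000 / (10 * 60)     # ~833 writes/s
peak_wcu = required_wcu(peak_writes_per_sec, 0.8)
```

Provisioning for the peak full-time (option C) pays for capacity that sits idle most of the day, which is why provisioned mode with auto scaling is typically the cheaper fit for a predictable daily spike.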
A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company's code repository. The company also needs to meet a compliance requirement by routinely rotating its database master password for production. What is the most secure solution to store the master password?
A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.
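The secretsmanager dynamic reference in option C is a literal string embedded in the template that CloudFormation resolves at deploy time, so the plaintext password never enters version control. A small helper showing the reference format (the secret name prod/aurora/master is a placeholder):

```python
def secretsmanager_ref(secret_id: str, json_key: str = "password") -> str:
    """Build a CloudFormation secretsmanager dynamic-reference string.

    The resulting string is placed in the template's MasterUserPassword
    property; CloudFormation resolves it against AWS Secrets Manager at
    stack create/update time. The secret name is hypothetical.
    """
    return f"{{{{resolve:secretsmanager:{secret_id}:SecretString:{json_key}}}}}"

ref = secretsmanager_ref("prod/aurora/master")
```

Because Secrets Manager also supports scheduled automatic rotation for RDS and Aurora credentials, this single mechanism covers both the storage and the compliance-rotation requirement, which plain Parameter Store values (option D) do not.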
A company is running its production databases in a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed to the us-east-1 Region. For disaster recovery (DR) purposes, the company's database specialist needs to make the DB cluster rapidly available in another AWS Region to cover the production load with an RTO of less than 2 hours. What is the MOST operationally efficient solution to meet these requirements?
A. Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.
B. Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.
C. Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.
D. Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.
A retail company runs its production database on Amazon RDS for SQL Server. The company wants more flexibility in backing up and restoring the database. A database specialist needs to create a native backup and restore strategy. The solution must take native SQL Server backups and store them in a highly scalable manner. Which combination of steps should the database specialist take to meet these requirements? (Choose three.)
A. Set up an Amazon S3 destination bucket. Establish a trust relationship with an IAM role that includes permissions for Amazon RDS.
B. Set up an Amazon FSx for Windows File Server destination file system. Establish a trust relationship with an IAM role that includes permissions for Amazon RDS.
C. Create an option group. Add the SQLSERVER_BACKUP_RESTORE option to the option group.
D. Modify the existing default option group. Add the SQLSERVER_BACKUP_RESTORE option to the option group.
E. Back up the database by using the native BACKUP DATABASE TSQL command. Restore the database by using the RESTORE DATABASE TSQL command.
F. Back up the database by using the rds_backup_database stored procedure. Restore the database by using the rds_restore_database stored procedure.
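On RDS for SQL Server, native backups run through the rds_backup_database stored procedure (option F) rather than the BACKUP DATABASE command, and they require the SQLSERVER_BACKup_RESTORE option group plus an IAM role with S3 access. A helper that assembles the T-SQL call (database, bucket, and file names are placeholders; verify the procedure's parameters against the RDS documentation):

```python
def rds_native_backup_tsql(db_name: str, s3_bucket: str, file_name: str) -> str:
    """Assemble the T-SQL that invokes RDS's native-backup stored procedure.

    All identifiers here are hypothetical examples; the statement is run
    from a normal SQL Server session against the RDS instance.
    """
    return (
        "exec msdb.dbo.rds_backup_database "
        f"@source_db_name='{db_name}', "
        f"@s3_arn_to_backup_to='arn:aws:s3:::{s3_bucket}/{file_name}', "
        "@overwrite_s3_backup_file=1;"
    )

sql = rds_native_backup_tsql("salesdb", "my-backup-bucket", "salesdb.bak")
```

The direct T-SQL commands in option E fail on RDS because the managed service does not grant the OS-level access that BACKUP DATABASE and RESTORE DATABASE need, which is what the stored-procedure wrappers exist to replace.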
Access Full DBS-C01 Exam Prep Free
Want to go beyond these 50 questions? Click here to unlock a full set of DBS-C01 exam prep free questions covering every domain tested on the exam.
We continuously update our content to ensure you have the most current and effective prep materials.
Good luck with your DBS-C01 certification journey!