DBS-C01 Dump Free – 50 Practice Questions to Sharpen Your Exam Readiness.
Looking for a reliable way to prepare for your DBS-C01 certification? Our DBS-C01 Dump Free includes 50 exam-style practice questions designed to reflect real test scenarios—helping you study smarter and pass with confidence.
Using a DBS-C01 dump free set of questions can give you an edge in your exam prep by helping you:
- Understand the format and types of questions you’ll face
- Pinpoint weak areas and focus your study efforts
- Boost your confidence with realistic question practice
Below, you will find 50 free questions from our DBS-C01 Dump Free collection. These cover key topics and are structured to simulate the difficulty level of the real exam, making them a valuable tool for review or final prep.
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379. Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)
A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
E. Ensure the security group for the ElastiCache clients authorizes inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.
F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
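For readers who want to see what options A and F involve in practice: transit/at-rest encryption and the AUTH token can only be set when the replication group is created. Here is a minimal boto3 sketch; all identifiers and sizing values are hypothetical.

```python
import boto3

elasticache = boto3.client("elasticache")

# Hypothetical names; encryption and AUTH must be enabled at creation time.
elasticache.create_replication_group(
    ReplicationGroupId="shared-data-cache",
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=3,                       # cluster mode enabled: multiple shards
    ReplicasPerNodeGroup=1,
    TransitEncryptionEnabled=True,         # option A: in-transit encryption
    AtRestEncryptionEnabled=True,          # option A: at-rest encryption
    AuthToken="a-long-random-auth-token",  # option F: clients must AUTH
    SecurityGroupIds=["sg-0123456789abcdef0"],  # restrict port 6379 (option C)
)
```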
A company recently acquired a new business. A database specialist must migrate an unencrypted 12 TB Amazon RDS for MySQL DB instance to a new AWS account. The database specialist needs to minimize the amount of time required to migrate the database. Which solution meets these requirements?
A. Create a snapshot of the source DB instance in the source account. Share the snapshot with the destination account. In the target account, create a DB instance from the snapshot.
B. Use AWS Resource Access Manager to share the source DB instance with the destination account. Create a DB instance in the destination account using the shared resource.
C. Create a read replica of the DB instance. Give the destination account access to the read replica. In the destination account, create a snapshot of the shared read replica and provision a new RDS for MySQL DB instance.
D. Use mysqldump to back up the source database. Create an RDS for MySQL DB instance in the destination account. Use the mysql command to restore the backup in the destination database.
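For context on option A: cross-account snapshot sharing maps directly onto two RDS API calls. A minimal boto3 sketch with hypothetical identifiers (the snapshot here is unencrypted, so no KMS key needs to be shared):

```python
import boto3

rds = boto3.client("rds")

# In the source account: snapshot the instance, then share the snapshot.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-mysql",
    DBSnapshotIdentifier="legacy-mysql-migration",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-mysql-migration"
)
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="legacy-mysql-migration",
    AttributeName="restore",
    ValuesToAdd=["111122223333"],  # hypothetical destination account ID
)
# In the destination account, restore a new DB instance from the shared snapshot.
```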
A company is due to renew its database license. The company wants to migrate its 80 TB transactional database system from on-premises to the AWS Cloud. The migration should incur the least possible downtime on the downstream database applications. The company's network infrastructure has limited network bandwidth that is shared with other applications. Which solution should a database specialist use for a timely migration?
A. Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.
B. Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.
C. Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.
D. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.
A company is building a web application on AWS. The application requires the database to support read and write operations in multiple AWS Regions simultaneously. The database also needs to propagate data changes between Regions as the changes occur. The application must be highly available and must provide latency of single-digit milliseconds. Which solution meets these requirements?
A. Amazon DynamoDB global tables
B. Amazon DynamoDB streams with AWS Lambda to replicate the data
C. An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards
D. An Amazon Aurora global database
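For context on option A: with the current version of DynamoDB global tables, adding an active-active replica Region is a single UpdateTable call. A minimal boto3 sketch with a hypothetical table name:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical table; the new replica serves reads and writes in its Region,
# and changes propagate between Regions automatically.
dynamodb.update_table(
    TableName="app-data",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```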
A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database. Which solution will meet this requirement with the LEAST operational overhead?
A. Publish RDS Performance Insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance
B. Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.
C. Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU
D. Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights
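To see what option C involves in practice: Performance Insights and Enhanced Monitoring can both be enabled with one ModifyDBInstance call. A hedged boto3 sketch; the instance identifier and monitoring role ARN are hypothetical.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="prod-oracle",             # hypothetical instance
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,           # default free retention
    MonitoringInterval=1,                           # Enhanced Monitoring, 1-second granularity
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```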
To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3. Which solution meets these requirements?
A. Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.
B. Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
C. Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
D. Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
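To make option B concrete, here is a minimal sketch of what the Lambda handler might look like, assuming PyMySQL is packaged with the function, the cluster has an IAM role that allows writing to the bucket, and credentials come from Secrets Manager in a real deployment. All names are hypothetical; `INTO OUTFILE S3` is Aurora MySQL-specific syntax whose exact URI format can vary by engine version.

```python
import pymysql  # assumed to be bundled with the Lambda deployment package

ARCHIVE_SQL = (
    "SELECT * FROM transactions "
    "WHERE created_at < NOW() - INTERVAL 1 YEAR "
    "INTO OUTFILE S3 's3://archive-bucket/transactions/export' "  # hypothetical bucket
    "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
)
DELETE_SQL = "DELETE FROM transactions WHERE created_at < NOW() - INTERVAL 1 YEAR"

def handler(event, context):
    # Hypothetical endpoint and credentials.
    conn = pymysql.connect(
        host="aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
        user="admin", password="***", database="records",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(ARCHIVE_SQL)   # Aurora MySQL writes the result set to S3
            cur.execute(DELETE_SQL)    # then purge the archived rows
        conn.commit()
    finally:
        conn.close()
```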
A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries run. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity. Which actions can a database specialist perform to resolve this issue? (Choose two.)
A. Restart the application tool used to run queries.
B. Change to a database instance class with higher throughput.
C. Convert from Single-AZ to Multi-AZ.
D. Increase the I/O parameter in Amazon RDS Enhanced Monitoring.
E. Convert from General Purpose to Provisioned IOPS (PIOPS).
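Options B and E both translate into a single ModifyDBInstance call. A hedged boto3 sketch with hypothetical values; the instance class and IOPS figure are placeholders, not recommendations.

```python
import boto3

rds = boto3.client("rds")

# Move to a larger instance class (option B) and convert the storage to
# Provisioned IOPS (option E) in one operation.
rds.modify_db_instance(
    DBInstanceIdentifier="dev-mysql",   # hypothetical instance
    DBInstanceClass="db.m5.2xlarge",
    StorageType="io1",
    Iops=5000,
    ApplyImmediately=True,
)
```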
A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account. Which combination of actions must the application development team take to meet these requirements? (Choose two.)
A. Add access from the “WeReceive” account to the custom AWS KMS key policy of the sharing team.
B. Make a copy of the DB snapshot, and set the encryption option to disable.
C. Share the DB snapshot by setting the DB snapshot visibility option to public.
D. Make a copy of the DB snapshot, and set the encryption option to enable.
E. Share the DB snapshot by using the default AWS KMS encryption key.
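For context on option A: an encrypted snapshot is only usable in another account if that account can also use the custom KMS key, which means adding it to the key policy. A hedged boto3 sketch; the key ID, account ID, and statement ID are hypothetical.

```python
import json
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID

policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "AllowWeReceiveToUseTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # "WeReceive" account
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```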
A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration. Which solution will MOST improve the performance of the data migration?
A. Increase the number of tables that are loaded in parallel.
B. Drop all indexes on the source tables.
C. Change the processing mode from the batch optimized apply option to transactional mode.
D. Enable Multi-AZ on the target database while the full load task is in progress.
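For context on option A: the number of tables a full-load task loads in parallel is governed by the MaxFullLoadSubTasks task setting (default 8). A hedged boto3 sketch, applied before the task is started; the task ARN and the value 16 are hypothetical.

```python
import json
import boto3

dms = boto3.client("dms")

settings = {"FullLoadSettings": {"MaxFullLoadSubTasks": 16}}
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(settings),
)
```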
A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only. How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?
A. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.
B. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.
C. Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.
D. Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.
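For background on option B: RDS for SQL Server native restore from S3 requires the SQLSERVER_BACKUP_RESTORE option group and is then driven by a stored procedure. A hedged sketch of the T-SQL, shown as Python string constants; database and bucket names are hypothetical, and the statements would be run from any SQL client connected to the RDS instance.

```python
# T-SQL for RDS for SQL Server native restore (hypothetical names).
RESTORE_FROM_S3 = """
exec msdb.dbo.rds_restore_database
    @restore_db_name = 'production_db',
    @s3_arn_to_restore_from = 'arn:aws:s3:::backup-bucket/production_db.bak';
"""

# Poll the progress of the asynchronous restore task.
CHECK_STATUS = "exec msdb.dbo.rds_task_status @db_name = 'production_db';"
```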
A company wants to implement a design that includes multiple AWS Regions to support disaster recovery for its application. The company currently has a static website that is hosted on Amazon S3 and Amazon CloudFront. The application connects to an existing Amazon DynamoDB database in the us-east-1 Region. The DynamoDB table was recently created and was initialized with a large amount of company data. The company wants to replicate the database in real time to the us-west-2 Region. A database specialist needs to perform the replication, which must include all existing table data and any new data that is added in the future, in an automated way. Which solution will meet these requirements?
A. Enable DynamoDB streams. Configure streams for new and old images. Create a global table replica in us-west-2. Monitor the progress of the replication until the status changes to Active.
B. Create a global table replica in us-west-2. Monitor the progress of the replication until the status changes to Active.
C. Enable DynamoDB streams. Configure streams for new and old images. Create a global table replica in us-west-2. Copy all existing data from us-east-1 to us-west-2 by using an export and batch import.
D. Create a global table replica in us-west-2. Copy all existing data from us-east-1 to us-west-2 by using an export and batch import.
A financial services company uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). The company is required to encrypt its data at rest at all times. The key required to decrypt the data has to be highly available, and access to the key must be limited. As a regulatory requirement, the company must have the ability to rotate the encryption key on demand. The company must be able to make the key unusable if any potential security breaches are spotted. The company also needs to accomplish these tasks with minimum overhead. What should the database administrator use to set up the encryption to meet these requirements?
A. AWS CloudHSM
B. AWS Key Management Service (AWS KMS) with an AWS managed key
C. AWS Key Management Service (AWS KMS) with server-side encryption
D. AWS Key Management Service (AWS KMS) CMK with customer-provided key material
A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket. What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?
A. Add a transformation rule to the DMS task to ignore the column from the source data endpoint.
B. Add a transformation rule to the DMS task to mask the column by using a simple SQL query.
C. Configure the target S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS).
D. Remove the credit card number column from the data source so that the DMS task does not need to be altered.
A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis. How should a database specialist automate the process of backing up the cluster data in compliance with these policies?
A. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis.
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot.
C. Copy the AWS Key Management Service (AWS KMS) customer-managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination Region, specifying the source and destination KMS key IDs in the replication configuration.
D. Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot.
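The moving parts of option B map onto two Redshift API calls: the snapshot copy grant lives in the destination Region and names the KMS key there, while cross-Region copy is enabled on the cluster in the source Region. A hedged boto3 sketch with hypothetical names and ARNs:

```python
import boto3

# Create the grant in the destination Region against the destination-Region key.
redshift_west = boto3.client("redshift", region_name="us-west-2")
redshift_west.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE",
)

# Enable cross-Region snapshot copy on the cluster in the source Region.
redshift_east = boto3.client("redshift", region_name="us-east-1")
redshift_east.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="us-west-2",
    SnapshotCopyGrantName="dr-copy-grant",
    RetentionPeriod=7,  # days to keep copied automated snapshots
)
```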
A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing. What could be the root causes for this high target latency? (Choose two.)
A. There was ongoing maintenance on the replication instance.
B. The source endpoint was changed by modifying the task.
C. Loopback changes had affected the source and target instances.
D. There was no primary key or index in the target database.
E. There were resource bottlenecks in the replication instance.
A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:
- Real-time inserts through Amazon Kinesis Data Firehose
- Bulk inserts through COPY commands from Amazon S3
- Analytics through SQL queries
Recently, the cluster has started to experience performance issues. Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)
A. Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.
B. Stream real-time data into Redshift temporary tables before loading the data into permanent tables.
C. For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.
D. For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.
E. Optimize analytics SQL queries to use sort keys.
F. Avoid using temporary tables in analytics SQL queries.
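For context on option C: COPY loads every object that shares the given key prefix in parallel, so splitting the input into roughly one file (or a multiple) per slice keeps all slices busy. A hedged sketch using the Redshift Data API; cluster, bucket, and role names are hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# COPY loads all objects under the 'part-' prefix in parallel across slices.
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=(
        "COPY sales "
        "FROM 's3://ingest-bucket/sales/part-' "
        "IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole' "
        "GZIP;"
    ),
)
```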
A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day. Which solution will meet these requirements in the MOST operationally efficient way?
A. Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.
B. Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.
C. Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.
D. Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.
A database specialist needs to reduce the cost of an application's database. The database is running on a Multi-AZ deployment of an Amazon RDS for Microsoft SQL Server DB instance. The application requires the database to support stored procedures, SQL Server Wire Protocol (TDS), and T-SQL. The database must also be highly available. The database specialist is using AWS Database Migration Service (AWS DMS) to migrate the database to a new data store. Which solution will reduce the cost of the database with the LEAST effort?
A. Use AWS Database Migration Service (DMS) to migrate to an RDS for MySQL Multi-AZ database. Update the application code to use the features of MySQL that correspond to SQL Server. Update the application to use the MySQL port.
B. Use AWS Database Migration Service (DMS) to migrate to an RDS for PostgreSQL Multi-AZ database. Turn on the SQL_COMPAT optional extension within the database to allow the required features. Update the application to use the PostgreSQL port.
C. Use AWS Database Migration Service (DMS) to migrate to an RDS for SQL Server Single-AZ database. Update the application to use the new database endpoint.
D. Use AWS Database Migration Service (DMS) to migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish for Aurora PostgreSQL. Update the application to use the Babelfish TDS port.
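For background on option D: Babelfish is switched on through an Aurora cluster parameter group before the cluster is created, and T-SQL clients then connect on the TDS port (1433 by default). A hedged boto3 sketch; group name, parameter group family, and settings are hypothetical, and the controlling parameter is rds.babelfish_status.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-pg",
    DBParameterGroupFamily="aurora-postgresql15",  # hypothetical family
    Description="Aurora PostgreSQL with Babelfish enabled",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-pg",
    Parameters=[{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
)
# Create the Aurora cluster with DBClusterParameterGroupName="babelfish-pg",
# then point the application at the Babelfish TDS port.
```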
A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora. Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.
B. Use Oracle’s Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.
C. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.
D. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
E. Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic. What should a Database Specialist recommend for this user?
A. Create an Amazon DynamoDB table with provisioned capacity mode
B. Create an Amazon DocumentDB cluster
C. Create an Amazon DynamoDB table with on-demand capacity mode
D. Create an Amazon Aurora Serverless DB cluster
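To see what option C looks like in practice: on-demand capacity is selected with the PAY_PER_REQUEST billing mode at table creation, so no capacity planning is needed for unpredictable traffic. A minimal boto3 sketch with a hypothetical table and key schema:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="kv-store",  # hypothetical
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
)
```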
A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned. Which solution will enable this change?
A. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.
B. Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
C. Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
D. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
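Option B in template form looks like the following, shown here as the JSON-equivalent Python dict; the resource and attribute names are hypothetical. Two Number parameters are declared, and the Ref intrinsic function substitutes their values into the table's provisioned throughput.

```python
# JSON-equivalent CloudFormation template fragment (hypothetical names).
template = {
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "AppTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [
                    {"AttributeName": "pk", "AttributeType": "S"}
                ],
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}
```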
A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that the users of the RDS for SQL Server DB instance are authenticated with on-premises Microsoft Active Directory credentials. Which combination of steps should a database specialist take to meet this requirement? (Choose three.)
A. Extend the on-premises Active Directory to AWS by using AD Connector.
B. Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.
C. Create a directory by using AWS Directory Service for Microsoft Active Directory.
D. Create an Active Directory domain controller on Amazon EC2.
E. Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.
F. Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.
A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity. What should the company do to achieve this in the shortest amount of time?
A. Use a blue-green deployment with a complete application-level failover test
B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
C. Use RDS fault injection queries to simulate the primary node failure
D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
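For context on option B: the same reboot-with-failover that the console offers is a one-line API call. A minimal boto3 sketch with a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Forces a failover to the Multi-AZ standby in the other Availability Zone.
rds.reboot_db_instance(DBInstanceIdentifier="prod-oracle", ForceFailover=True)
```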
A large automobile company is migrating the database of a critical financial application to Amazon DynamoDB. The company's risk and compliance policy requires that every change in the database be recorded as a log entry for audits. The system is anticipating more than 500,000 log entries each minute. Log entries should be stored in batches of at least 100,000 records in each file in Apache Parquet format. How should a database specialist implement these requirements with DynamoDB?
A. Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.
B. Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.
C. Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.
D. Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.
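Here is a minimal sketch of the Lambda handler in option D; the delivery stream name is hypothetical, and the Parquet conversion plus the 100,000-record batching would be configured on the Firehose delivery stream itself (record format conversion and buffering hints).

```python
import json
import boto3

firehose = boto3.client("firehose")

def handler(event, context):
    # Each DynamoDB Streams record becomes one Firehose record; the delivery
    # stream buffers batches and writes them to S3.
    records = [
        {"Data": (json.dumps(r["dynamodb"]) + "\n").encode("utf-8")}
        for r in event["Records"]
    ]
    if records:
        # PutRecordBatch accepts up to 500 records per call.
        firehose.put_record_batch(
            DeliveryStreamName="ddb-audit-stream",  # hypothetical
            Records=records,
        )
```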
A company is using Amazon Redshift. A database specialist needs to allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables. Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
A. Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.
B. Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.
C. Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.
D. Use federated queries to access data in Amazon RDS.
E. Use data sharing to access data from the other Redshift clusters.
F. Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.
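The access paths described in options B, D, and E are each a single SQL statement run on the existing cluster. A hedged sketch, shown as Python string constants to be executed with any PostgreSQL-compatible client; all names, ARNs, and the producer namespace GUID are hypothetical.

```python
# External schema over AWS Glue Data Catalog tables (option B).
GLUE_CATALOG = """
CREATE EXTERNAL SCHEMA glue_ext
FROM DATA CATALOG DATABASE 'analytics_db'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftSpectrumRole'
REGION 'us-east-1';
"""

# Federated query into RDS for PostgreSQL (option D).
FEDERATED_RDS = """
CREATE EXTERNAL SCHEMA pg_ext
FROM POSTGRES DATABASE 'appdb' SCHEMA 'public'
URI 'appdb.xxxx.us-east-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:appdb-EXAMPLE';
"""

# Data sharing from another Redshift cluster (option E).
DATASHARE = """
CREATE DATABASE shared_sales
FROM DATASHARE sales_share OF NAMESPACE 'producer-cluster-namespace-guid';
"""
```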
A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days. What should a database specialist do to meet this requirement with the LEAST amount of downtime?
A. Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.
B. Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.
C. Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances. When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.
D. Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.
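The snapshot-copy flow in option B maps onto three RDS API calls. A hedged boto3 sketch with hypothetical identifiers and key ARN:

```python
import boto3

rds = boto3.client("rds")

# An encrypted copy of an unencrypted snapshot is made by supplying a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="blog-db-snap",
    TargetDBSnapshotIdentifier="blog-db-snap-encrypted",
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="blog-db-snap-encrypted"
)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="blog-db-encrypted",
    DBSnapshotIdentifier="blog-db-snap-encrypted",
)
# Finally, point the application at the new instance's endpoint.
```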
A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average. What is the MOST cost-effective solution?
A. Use Amazon DynamoDB with on-demand capacity mode. Purchase reserved capacity.
B. Use Amazon ElastiCache for Redis for data storage. Turn off cluster mode.
C. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for data storage and use Amazon Athena to query the data.
D. Use Amazon DynamoDB with on-demand capacity mode. Switch the table class to DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA).
An online bookstore recently migrated its database from on-premises Oracle to Amazon Aurora PostgreSQL 13. The bookstore used scheduled jobs to run customized SQL scripts that administered the Oracle database, performing hours-long maintenance tasks such as partition maintenance and statistics gathering. The bookstore's application team has reached out to a database specialist seeking an ideal replacement for scheduling jobs with Aurora PostgreSQL. What should the database specialist implement to meet these requirements with MINIMAL operational overhead?
A. Configure an Amazon EC2 instance to run on a schedule to initiate database maintenance jobs
B. Configure AWS Batch with AWS Step Functions to schedule long-running database maintenance tasks
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with AWS Lambda that runs on a schedule to initiate database maintenance jobs
D. Turn on the pg_cron extension in the Aurora PostgreSQL database and schedule the database maintenance tasks by using the cron.schedule function
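For context on option D: pg_cron is enabled by adding it to the shared_preload_libraries cluster parameter (and rebooting), after which jobs are registered with cron.schedule. A hedged sketch using psycopg2 as an assumed client library; the endpoint, credentials, and schedule are hypothetical.

```python
import psycopg2  # assumed client library; any PostgreSQL client works

# Prerequisite (one-time): add "pg_cron" to shared_preload_libraries in the
# cluster parameter group and reboot. Then:
conn = psycopg2.connect(
    host="aurora-pg.cluster-xxxx.us-east-1.rds.amazonaws.com",  # hypothetical
    dbname="postgres", user="admin", password="***",
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_cron;")
    # Gather statistics every night at 03:00 UTC.
    cur.execute(
        "SELECT cron.schedule('nightly-analyze', '0 3 * * *', 'ANALYZE;');"
    )
conn.close()
```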
A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times. What could be causing these slow response times?
A. New volumes created from snapshots load lazily in the background
B. Long-running statements on the master
C. Insufficient resources on the master
D. Overload of a single replication thread by excessive writes on the master
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime. What is the FASTEST way to accomplish this?
A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.
A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely. Which approach should the Database Specialist take to securely manage the database credentials?
A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
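To see the application side of option C in practice, here is a minimal boto3 sketch; the secret name, rotation function ARN, and secret layout are hypothetical, and the instance-profile role only needs GetSecretValue on this one secret.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials():
    value = secrets.get_secret_value(SecretId="prod/ecommerce/postgres")
    return json.loads(value["SecretString"])  # e.g. {"username": ..., "password": ...}

# One-time setup: rotate automatically every 60 days via a rotation Lambda.
# secrets.rotate_secret(
#     SecretId="prod/ecommerce/postgres",
#     RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-pg",
#     RotationRules={"AutomaticallyAfterDays": 60},
# )
```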
A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries. What should a database specialist do to remediate this issue?
A. Set the use_fips_ssl parameter to true in the database parameter group.
B. Turn off the query monitoring rule in the Redshift cluster’s workload management (WLM).
C. Set the enable_user_activity_logging parameter to false in the database parameter group.
D. Disable audit logging on the Redshift cluster.
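For context on option C: user activity logging (the raw SQL statements) and connection logging are controlled separately, so turning the former off leaves the authentication logs the IDS needs. A hedged boto3 sketch with a hypothetical parameter group name:

```python
import boto3

redshift = boto3.client("redshift")

# Stops raw SQL statements from being logged while connection
# (authentication) logs continue to be delivered to S3.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="analytics-params",
    Parameters=[{
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "false",
    }],
)
```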
A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days. The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database. Which solution will meet these requirements?
A. Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.
B. Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.
C. Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.
D. Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.
A database specialist observes several idle connections in an Amazon RDS for MySQL DB instance. The DB instance is using RDS Proxy. An application is configured to connect to the proxy endpoint. What should the database specialist do to control the idle connections in the database?
A. Modify the MaxConnectionsPercent parameter through the RDS Proxy console.
B. Use CALL mysql.rds_kill(thread-id) for the IDLE threads that are returned from the SHOW FULL PROCESSLIST command.
C. Modify the MaxIdleConnectionsPercent parameter for the RDS proxy.
D. Modify the max_connections configuration setting for the DB instance. Modify the ConnectionBorrowTimeout parameter for the RDS proxy.
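For context on option C: idle-connection behavior is a property of the proxy's target group, not of the DB instance. A minimal boto3 sketch with a hypothetical proxy name:

```python
import boto3

rds = boto3.client("rds")

# MaxIdleConnectionsPercent caps how much of max_connections the proxy
# may keep open as idle connections in its pool.
rds.modify_db_proxy_target_group(
    DBProxyName="app-proxy",       # hypothetical
    TargetGroupName="default",
    ConnectionPoolConfig={"MaxIdleConnectionsPercent": 10},
)
```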
A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis. Which solution meets these requirements?
A. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.
B. Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.
C. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.
D. Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.
A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC. The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access. Which security strategy should a database specialist implement to meet these requirements?
A. Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.
B. Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.
C. Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.
D. Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.
An ecommerce company is migrating its core application database to Amazon Aurora MySQL. The company is currently performing online transaction processing (OLTP) stress testing with concurrent database sessions. During the first round of tests, a database specialist noticed slow performance for some specific write operations. Reviewing Amazon CloudWatch metrics for the Aurora DB cluster showed 90% CPU utilization. Which steps should the database specialist take to MOST effectively identify the root cause of high CPU utilization and slow performance? (Choose two.)
A. Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
B. Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
C. Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
D. Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
E. Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.
A company is working on migrating a large Oracle database schema with 3,500 stored procedures to Amazon Aurora PostgreSQL. An application developer is using the AWS Schema Conversion Tool (AWS SCT) to convert code from Oracle to Aurora PostgreSQL. However, the code conversion is taking a long time and is running into performance issues. The application team has reached out to a database specialist to improve the performance of the AWS SCT conversion. What should the database specialist do to resolve the performance issues?
A. In AWS SCT, turn on the balance speed with memory consumption performance option with the optimal memory settings on the local desktop.
B. Provision the target Aurora PostgreSQL database with a higher instance class. In AWS SCT, turn on the balance speed with memory consumption performance option.
C. In AWS SCT, turn on the fast conversion with large memory consumption performance option and set the JavaOptions section to the maximum memory available.
D. Provision a client Amazon EC2 machine with more CPU and memory resources in the same AWS Region as the Aurora PostgreSQL database.
A company has a variety of Amazon Aurora DB clusters. Each of these DB clusters has various configurations that meet specific sets of requirements. Depending on the team and the use case, these configurations can be organized into broader categories. A database specialist wants to implement a solution to make the storage and modification of the configuration parameters more systematic. Which AWS service or feature should the database specialist use to meet these requirements?
A. AWS Systems Manager Parameter Store
B. DB parameter group
C. AWS Config with the Amazon RDS managed rules
D. AWS Secrets Manager
A company wants to move its on-premises Oracle database to an Amazon Aurora PostgreSQL DB cluster. The source database includes 500 GB of data, 900 stored procedures and functions, and application source code with embedded SQL statements. The company understands there are some database code objects and custom features that may not be automatically converted and may need some manual intervention. Management would like to complete this migration as fast as possible with minimal downtime. Which tools and approach should be used to meet these requirements?
A. Use AWS DMS to perform data migration and to automatically create all schemas with Aurora PostgreSQL
B. Use AWS DMS to perform data migration and use the AWS Schema Conversion Tool (AWS SCT) to automatically generate the converted code
C. Use the AWS Schema Conversion Tool (AWS SCT) to automatically convert all types of Oracle schemas to PostgreSQL and migrate the data to Aurora
D. Use the dump and pg_dump utilities for both data migration and schema conversion
A media company wants to use zero-downtime patching (ZDP) for its Amazon Aurora MySQL database. Multiple processing applications are using SSL certificates to connect to database endpoints and the read replicas. Which factor will have the LEAST impact on the success of ZDP?
A. Binary logging is enabled, or binary log replication is in progress.
B. Current SSL connections are open to the database.
C. Temporary tables or table locks are in use.
D. The value of the lower_case_table_names server parameter was set to 0 when the tables were created.
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details. When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a `could not connect to server: Connection times out` error message to Amazon CloudWatch Logs. What is the cause of this error?
A. The user name and password the application is using are incorrect.
B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D. The user name and password are correct, but the user is not authorized to use the DB instance.
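A connection timeout (rather than an authentication failure) generally points at network-level rules. As an illustration only, here is what allowing the application servers' security group into the DB instance's security group on port 3306 looks like in boto3; both group IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# The DB instance's security group must allow inbound TCP 3306 from the
# application servers' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-db-instance",        # hypothetical DB security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-app-servers"}],  # hypothetical
    }],
)
```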
A company is setting up a new Amazon RDS for SQL Server DB instance. The company wants to enable SQL Server auditing on the database. Which combination of steps should a database specialist take to meet this requirement? (Choose two.)
A. Create a service-linked role for Amazon RDS that grants permissions for Amazon RDS to store audit logs on Amazon S3.
B. Set up a parameter group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the parameter group with the DB instance.
C. Disable Multi-AZ on the DB instance, and then enable auditing. Enable Multi-AZ after auditing is enabled.
D. Disable automated backup on the DB instance, and then enable auditing. Enable automated backup after auditing is enabled.
E. Set up an options group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the options group with the DB instance.
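For background on option E: SQL Server auditing on RDS is delivered through the SQLSERVER_AUDIT option, which names an IAM role and an S3 bucket in its option settings. A hedged boto3 sketch; group name, engine edition/version, and ARNs are hypothetical.

```python
import boto3

rds = boto3.client("rds")

rds.create_option_group(
    OptionGroupName="sqlserver-audit-og",
    EngineName="sqlserver-se",          # hypothetical edition
    MajorEngineVersion="15.00",         # hypothetical version
    OptionGroupDescription="SQLSERVER_AUDIT to S3",
)
rds.modify_option_group(
    OptionGroupName="sqlserver-audit-og",
    OptionsToInclude=[{
        "OptionName": "SQLSERVER_AUDIT",
        "OptionSettings": [
            {"Name": "IAM_ROLE_ARN",
             "Value": "arn:aws:iam::111122223333:role/rds-audit-role"},
            {"Name": "S3_BUCKET_ARN",
             "Value": "arn:aws:s3:::audit-log-bucket/sqlserver"},
        ],
    }],
    ApplyImmediately=True,
)
# Then associate the option group with the DB instance via modify_db_instance.
```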
A financial services company is running a MySQL database on premises. The database holds details about all customer interactions and the financial advice that the company provided. The write traffic to the database is well known and consistent. However, the read traffic is subject to significant and sudden increases for end-of-month reporting. The database is becoming overloaded during these periods of heavy read activity. The company decides to move the database to AWS. A database specialist needs to propose a solution in the AWS Cloud that will scale to meet the variable read traffic requirements without affecting the performance of write traffic. Scaling events must not require any downtime. What is the MOST operationally efficient solution that meets these requirements?
A. Deploy a MySQL primary node on Amazon EC2 in one Availability Zone. Deploy a MySQL read replica on Amazon EC2 in a different Availability Zone. Configure a scheduled scaling event to increase the CPU capacity and RAM capacity within the MySQL read replica the day before each known traffic surge. Configure a scheduled scaling event to reduce the CPU capacity and RAM capacity within the MySQL read replica the day after each known traffic surge.
B. Deploy an Amazon Aurora MySQL DB cluster. Select a Cross-AZ configuration with an Aurora Replica. Create an Aurora Auto Scaling policy to adjust the number of Aurora Replicas based on CPU utilization. Direct all read-only reporting traffic to the reader endpoint for the DB cluster.
C. Deploy an Amazon RDS for MySQL Multi-AZ database as a write database. Deploy a second RDS for MySQL Multi-AZ database that is configured as an auto scaling read-only database. Use AWS Database Migration Service (AWS DMS) to continuously replicate data from the write database to the read-only database. Direct all read-only reporting traffic to the reader endpoint for the read-only database.
D. Deploy an Amazon DynamoDB database. Create a DynamoDB auto scaling policy to adjust the read capacity of the database based on target utilization. Direct all read traffic and write traffic to the DynamoDB database.
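The auto scaling policy described in option B registers the cluster's replica count as a scalable target with Application Auto Scaling. A hedged boto3 sketch; the cluster name, capacity bounds, and CPU target are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora",            # hypothetical cluster
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)
autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```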
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region. Where should the AWS DMS replication instance be placed for the MOST optimal performance?
A. In the same Region and VPC of the source DB instance
B. In the same Region and VPC as the target DB instance
C. In the same VPC and Availability Zone as the target DB instance
D. In the same VPC and Availability Zone as the source DB instance
A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled. The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period. How should a database specialist recover the database to the most recent point before corruption?
A. Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.
B. Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.
C. Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.
D. Restore using the appropriate automated backup. No changes to the application connection string are required.
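A fact worth keeping in mind when weighing these options: point-in-time restore always produces a new DB instance with its own endpoint. A minimal boto3 sketch with hypothetical identifiers and a placeholder restore time:

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# Restores to a NEW instance; the application connection string must then
# be updated to the new endpoint.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="blog-mysql",
    TargetDBInstanceIdentifier="blog-mysql-restored",
    RestoreTime=datetime(2024, 5, 1, 11, 59, 0, tzinfo=timezone.utc),  # just before corruption
)
```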
A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora. Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?
A. Stop the DB cluster and analyze how the website responds
B. Use Aurora fault injection to crash the master DB instance
C. Remove the DB cluster endpoint to simulate a master DB instance failure
D. Use Aurora Backtrack to crash the DB cluster
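For context on option B: Aurora MySQL fault injection is issued as SQL against the instance under test. A hedged sketch using PyMySQL as an assumed client library; the endpoint and credentials are hypothetical.

```python
import pymysql  # assumed client library

# ALTER SYSTEM CRASH is an Aurora MySQL fault injection statement that
# crashes the targeted instance so failover behavior can be observed.
conn = pymysql.connect(
    host="aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # hypothetical
    user="admin", password="***",
)
with conn.cursor() as cur:
    cur.execute("ALTER SYSTEM CRASH INSTANCE;")
```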
A social media company recently launched a new feature that gives users the ability to share live feeds of their daily activities with their followers. The company has an Amazon RDS for MySQL DB instance that stores data about follower engagement. After the new feature launched, the company noticed high CPU utilization and high database latency during reads and writes. The company wants to implement a solution that will identify the source of the high CPU utilization. Which solution will meet these requirements with the LEAST administrative oversight?
A. Use Amazon DevOps Guru insights.
B. Use AWS CloudTrail.
C. Use Amazon CloudWatch Logs.
D. Use Amazon Aurora Database Activity Streams.
A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot. Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully. What could be the reason for this?
A. When stopping the DB instance, the option to create a snapshot should have been selected.
B. When stopping the DB instance, the duration for stopping the DB instance should have been selected.
C. Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.
D. Stopped DB instances will automatically restart if the instance is not manually started after 7 days.
A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against a read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests. What should the database specialist do to allow the database team to create the test tables?
A. Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.
B. Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.
C. Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.
D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
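The sequence in option D maps onto four RDS API calls; default parameter groups cannot be modified, which is why a custom group is needed. A hedged boto3 sketch with hypothetical names and engine family:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_parameter_group(
    DBParameterGroupName="replica-writable",
    DBParameterGroupFamily="mysql8.0",  # hypothetical family
    Description="Read replica with read_only disabled",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="replica-writable",
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "immediate",
    }],
)
rds.modify_db_instance(
    DBInstanceIdentifier="test-replica",  # hypothetical replica
    DBParameterGroupName="replica-writable",
    ApplyImmediately=True,
)
rds.reboot_db_instance(DBInstanceIdentifier="test-replica")
```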
Access Full DBS-C01 Dump Free
Looking for even more practice questions? Click here to access the complete DBS-C01 Dump Free collection, offering hundreds of questions across all exam objectives.
We regularly update our content to ensure accuracy and relevance—so be sure to check back for new material.
Begin your certification journey today with our DBS-C01 dump free questions — and get one step closer to exam success!