Google Professional Cloud Database Engineer Dump Free – 50 Practice Questions to Sharpen Your Exam Readiness.
Looking for a reliable way to prepare for your Google Professional Cloud Database Engineer certification? Our Google Professional Cloud Database Engineer Dump Free includes 50 exam-style practice questions designed to reflect real test scenarios—helping you study smarter and pass with confidence.
Using a Google Professional Cloud Database Engineer dump free set of questions can give you an edge in your exam prep by helping you:
- Understand the format and types of questions you’ll face
- Pinpoint weak areas and focus your study efforts
- Boost your confidence with realistic question practice
Below, you will find 50 free questions from our Google Professional Cloud Database Engineer Dump Free collection. These cover key topics and are structured to simulate the difficulty level of the real exam, making them a valuable tool for review or final prep.
You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?
A. Maintain a target of 23% CPU utilization by locating: cluster-a in zone us-central1-a, cluster-b in zone europe-west1-d, and cluster-c in zone asia-east1-b.
B. Maintain a target of 23% CPU utilization by locating: cluster-a in zone us-central1-a, cluster-b in zone us-central1-b, and cluster-c in zone us-east1-a.
C. Maintain a target of 35% CPU utilization by locating: cluster-a in zone us-central1-a, cluster-b in zone australia-southeast1-a, cluster-c in zone europe-west1-d, and cluster-d in zone asia-east1-b.
D. Maintain a target of 35% CPU utilization by locating: cluster-a in zone us-central1-a, cluster-b in zone us-central2-a, cluster-c in zone asia-northeast1-b, and cluster-d in zone asia-east1-b.
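For reference, a regional expansion like this is done by adding clusters to the existing Bigtable instance; replication to a new cluster begins automatically. A minimal sketch, with hypothetical instance and cluster names:

```bash
# Hypothetical names; adds an APAC cluster to an existing Bigtable instance.
gcloud bigtable clusters create cluster-c \
  --instance=prod-events \
  --zone=asia-east1-b \
  --num-nodes=3
```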
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)
A. Use an auto-incrementing value as the primary key.
B. Normalize the data model.
C. Promote low-cardinality attributes in multi-attribute primary keys.
D. Promote high-cardinality attributes in multi-attribute primary keys.
E. Use bit-reverse sequential value as the primary key.
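As background for this question, hotspots are typically avoided by keying on high-cardinality values such as UUIDs rather than monotonically increasing ones. A minimal DDL sketch, with hypothetical instance, database, and table names:

```bash
# Hypothetical names; a UUID-style key spreads writes across splits.
gcloud spanner databases ddl update inventory-db \
  --instance=prod-spanner \
  --ddl='CREATE TABLE Products (
           ProductId STRING(36) NOT NULL,
           Sku STRING(64) NOT NULL
         ) PRIMARY KEY (ProductId)'
```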
Your organization is migrating 50 TB Oracle databases to Bare Metal Solution for Oracle. Database backups must be available for quick restore. You also need to have backups available for 5 years. You need to design a cost-effective architecture that meets a recovery time objective (RTO) of 2 hours and recovery point objective (RPO) of 15 minutes. What should you do?
A. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on all flash storage. 3. Keep backups older than one day stored in Actifio OnVault storage.
B. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on standard storage. 3. Keep backups older than one day stored in Actifio OnVault storage.
C. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on standard storage. 3. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to a Coldline Storage bucket.
D. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on all flash storage. 3. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to an Archive Storage bucket.
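For context, a long-retention backup bucket is created in one step; the bucket name and region below are placeholders, and the storage class (Coldline versus Archive) depends on how rarely backups are restored:

```bash
# Placeholder bucket/region; Archive class suits multi-year retention.
gcloud storage buckets create gs://oracle-rman-backups \
  --location=us-central1 \
  --default-storage-class=ARCHIVE
```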
You are responsible for designing a new database for an airline ticketing application in Google Cloud. This application must be able to: work with transactions and offer strong consistency; work with structured and semi-structured (JSON) data; and scale transparently to multiple regions globally as the operation grows. You need a Google Cloud database that meets all the requirements of the application. What should you do?
A. Use Cloud SQL for PostgreSQL with both cross-region read replicas.
B. Use Cloud Spanner in a multi-region configuration.
C. Use Firestore in Datastore mode.
D. Use a Bigtable instance with clusters in multiple regions.
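For reference, provisioning a multi-region Cloud Spanner instance might look like the sketch below; the instance name is hypothetical, and nam-eur-asia1 is one of the available multi-region configurations:

```bash
# Hypothetical instance name; nam-eur-asia1 spans three continents.
gcloud spanner instances create ticketing-db \
  --config=nam-eur-asia1 \
  --description="Airline ticketing datastore" \
  --nodes=3
```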
You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?
A. Use the default app profile with single-cluster routing.
B. Use the default app profile with multi-cluster routing.
C. Create a custom app profile with multi-cluster routing.
D. Create a custom app profile with single-cluster routing.
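For background, a custom app profile with multi-cluster routing can be created as sketched below (names are hypothetical); --route-any lets Bigtable reroute requests automatically if a cluster becomes unavailable:

```bash
# Hypothetical names; --route-any selects multi-cluster routing.
gcloud bigtable app-profiles create banking-events \
  --instance=prod-events \
  --route-any \
  --description="Automatic failover between cluster-a and cluster-b"
```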
Your company is migrating the existing infrastructure for a highly transactional application to Google Cloud. You have several databases in a MySQL database instance and need to decide how to transfer the data to Cloud SQL. You need to minimize the downtime for the migration of your 500 GB instance. What should you do?
A. 1. Create a Cloud SQL for MySQL instance for your databases, and configure Datastream to stream your database changes to Cloud SQL. 2. Select the Backfill historical data check box on your stream configuration to initiate Datastream to backfill any data that is out of sync between the source and destination. 3. Delete your stream when all changes are moved to Cloud SQL for MySQL, and update your application to use the new instance.
B. 1. Create a migration job using Database Migration Service. 2. Set the migration job type to Continuous, and allow the databases to complete the full dump phase and start sending data in change data capture (CDC) mode. 3. Wait for the replication delay to minimize, initiate a promotion of the new Cloud SQL instance, and wait for the migration job to complete. 4. Update your application connections to the new instance.
C. 1. Create a migration job using Database Migration Service. 2. Set the migration job type to One-time, and perform this migration during a maintenance window. 3. Stop all write workloads to the source database and initiate the dump. Wait for the dump to be loaded into the Cloud SQL destination database and the destination database to be promoted to the primary database. 4. Update your application connections to the new instance.
D. 1. Use the mysqldump utility to manually initiate a backup of MySQL during the application maintenance window. 2. Move the files to Cloud Storage, and import each database into your Cloud SQL instance. 3. Continue to dump each database until all the databases are migrated. 4. Update your application connections to the new instance.
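For reference, a continuous Database Migration Service job might be created as sketched below, assuming source and destination connection profiles already exist; all names are placeholders:

```bash
# Placeholder names; connection profiles must be created beforehand.
gcloud database-migration migration-jobs create mysql-migration \
  --region=us-central1 \
  --type=CONTINUOUS \
  --source=onprem-mysql-profile \
  --destination=cloudsql-mysql-profile
```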
Your ecommerce application connecting to your Cloud SQL for SQL Server is expected to have additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?
A. Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.
B. Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.
C. Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.
D. Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.
You have a Cloud SQL instance (DB-1) with two cross-region read replicas (DB-2 and DB-3). During a business continuity test, the primary instance (DB-1) was taken offline and a replica (DB-2) was promoted. The test has concluded and you want to return to the pre-test configuration. What should you do?
A. Bring DB-1 back online.
B. Delete DB-1, and re-create DB-1 as a read replica in the same region as DB-1.
C. Delete DB-2 so that DB-1 automatically reverts to the primary instance.
D. Create DB-4 as a read replica in the same region as DB-1, and promote DB-4 to primary.
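For context, creating a new read replica and promoting it to primary takes two commands; the names below mirror the question but are illustrative, and assume db-2 is the current primary:

```bash
# Illustrative names; the new replica is created in the original region.
gcloud sql instances create db-4 \
  --master-instance-name=db-2 \
  --region=us-central1
gcloud sql instances promote-replica db-4
```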
You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?
A. Migrate to Compute Engine.
B. Migrate to Bare Metal Solution for Oracle.
C. Migrate to Google Kubernetes Engine (GKE).
D. Migrate to Google Cloud VMware Engine.
You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available. What should you do?
A. Use Firestore.
B. Use Cloud SQL with an external (public) IP address.
C. Use an in-app embedded database.
D. Use Cloud Spanner.
You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company's disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service. What should you do to maximize uptime for your application?
A. Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.
B. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.
C. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
D. Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.
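For reference, a sketch of provisioning an HA (regional) Cloud SQL instance plus a cross-region disaster recovery replica; the names, version, tier, and regions are placeholders:

```bash
# Placeholder values; REGIONAL availability adds a standby in another zone.
gcloud sql instances create pg-prod \
  --database-version=POSTGRES_15 \
  --tier=db-custom-4-16384 \
  --availability-type=REGIONAL \
  --region=us-central1
# A cross-region read replica for disaster recovery.
gcloud sql instances create pg-dr \
  --master-instance-name=pg-prod \
  --region=us-east1
```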
Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?
A. 1. Use Database Migration Service to connect to your on-premises database, and choose continuous replication. 2. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.
B. 1. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL. 2. Schedule downtime to run each Cloud Data Fusion pipeline. 3. Verify that the migration was successful. 4. Re-point the applications to the Cloud SQL for MySQL instance.
C. 1. Pause the on-premises applications. 2. Use the mysqldump utility to dump the database content in compressed format. 3. Run gsutil -m to move the dump file to Cloud Storage. 4. Use the Cloud SQL for MySQL import option. 5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
D. 1. Pause the on-premises applications. 2. Use the mysqldump utility to dump the database content in CSV format. 3. Run gsutil -m to move the dump file to Cloud Storage. 4. Use the Cloud SQL for MySQL import option. 5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
You are migrating your 2 TB on-premises PostgreSQL cluster to Compute Engine. You want to set up your new environment in an Ubuntu virtual machine instance in Google Cloud and seed the data to a new instance. You need to plan your database migration to ensure minimum downtime. What should you do?
A. 1. Take a full export while the database is offline. 2. Create a bucket in Cloud Storage. 3. Transfer the dump file to the bucket you just created. 4. Import the dump file into the Google Cloud primary server.
B. 1. Take a full export while the database is offline. 2. Create a bucket in Cloud Storage. 3. Transfer the dump file to the bucket you just created. 4. Restore the backup into the Google Cloud primary server.
C. 1. Take a full backup while the database is online. 2. Create a bucket in Cloud Storage. 3. Transfer the backup to the bucket you just created. 4. Restore the backup into the Google Cloud primary server. 5. Create a recovery.conf file in the $PG_DATA directory. 6. Stop the source database. 7. Transfer the write-ahead logs to the bucket you created before. 8. Start the PostgreSQL service. 9. Wait until the Google Cloud primary server syncs with the running primary server.
D. 1. Take a full export while the database is online. 2. Create a bucket in Cloud Storage. 3. Transfer the dump file and write-ahead logs to the bucket you just created. 4. Restore the dump file into the Google Cloud primary server. 5. Create a recovery.conf file in the $PG_DATA directory. 6. Stop the source database. 7. Transfer the write-ahead logs to the bucket you created before. 8. Start the PostgreSQL service. 9. Wait until the Google Cloud primary server syncs with the running primary server.
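For background, an online base backup plus transfer to Cloud Storage might look like the sketch below; the host, user, paths, and bucket are hypothetical:

```bash
# Hypothetical host/user/paths; -X stream includes WAL with the base backup.
pg_basebackup -h 10.0.0.5 -U replicator -D /backups/base -X stream -P
gsutil -m cp -r /backups/base gs://pg-migration-staging/
```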
You need to migrate a 1 TB PostgreSQL database from a Compute Engine VM to Cloud SQL for PostgreSQL. You want to ensure that there is minimal downtime during the migration. What should you do?
A. Export the data from the existing database, and load the data into a new Cloud SQL database.
B. Use Migrate for Compute Engine to complete the migration.
C. Use Datastream to complete the migration.
D. Use Database Migration Service to complete the migration.
You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?
A. Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
B. Use the default configuration for the automated backups location.
C. Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.
D. Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
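For reference, pinning automated backups to a specific region is a single patch operation; the instance name and region below are placeholders:

```bash
# Placeholder values; keeps automated backups in the instance's region.
gcloud sql instances patch prod-db --backup-location=us-central1
```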
You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future. What should you do?
A. Set your production instance’s maintenance window to non-business hours.
B. Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.
C. Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.
D. Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.
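For context, a maintenance window is configured declaratively; a sketch with a hypothetical instance name (hours are specified in UTC):

```bash
# Hypothetical instance name; schedules maintenance for Sunday at 02:00 UTC.
gcloud sql instances patch booking-db \
  --maintenance-window-day=SUN \
  --maintenance-window-hour=2
```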
You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances. What should you do?
A. Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.
B. Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.
C. Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.
D. Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.
Your company's mission-critical, globally available application is supported by a Cloud Spanner database. Experienced users of the application have read and write access to the database, but new users are assigned read-only access to the database. You need to assign the appropriate Cloud Spanner Identity and Access Management (IAM) role to new users being onboarded soon. What roles should you set up?
A. roles/spanner.databaseReader
B. roles/spanner.databaseUser
C. roles/spanner.viewer
D. roles/spanner.backupWriter
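As background, granting a read-only role on a single Spanner database looks like the sketch below; the database, instance, and user are placeholders:

```bash
# Placeholder names; databaseReader grants read-only access to one database.
gcloud spanner databases add-iam-policy-binding inventory-db \
  --instance=prod-spanner \
  --member="user:new.user@example.com" \
  --role="roles/spanner.databaseReader"
```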
You are designing a database strategy for a new web application in one region. You need to minimize write latency. What should you do?
A. Use Cloud SQL with cross-region replicas.
B. Use high availability (HA) Cloud SQL with multiple zones.
C. Use zonal Cloud SQL without high availability (HA).
D. Use Cloud Spanner in a regional configuration.
You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?
A. Use the Google Cloud Console or gcloud CLI to manually create a new clone database.
B. Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.
C. Verify that the new replica is created automatically.
D. Start the original primary instance and resume replication.
Your company is migrating their MySQL database to Cloud SQL and cannot afford any planned downtime during the month of December. The company is also concerned with cost, so you need the most cost-effective solution. What should you do?
A. Open a support ticket in Google Cloud to prevent any maintenance in that MySQL instance during the month of December.
B. Use Cloud SQL maintenance settings to prevent any maintenance during the month of December.
C. Create MySQL read replicas in different zones so that, if any downtime occurs, the read replicas will act as the primary instance during the month of December.
D. Create a MySQL regional instance so that, if any downtime occurs, the standby instance will act as the primary instance during the month of December.
Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag. What should you do?
A. Identify and optimize slow running queries, or set parallel replication flags.
B. Stop all running queries, and re-create the replicas.
C. Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
D. Edit the primary instance to add additional memory.
You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?
A. Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.
B. Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.
C. Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.
D. Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.
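For reference, running the Cloud SQL Auth Proxy (v2) against a private IP and pointing JDBC at the local listener might look like this sketch; the connection name and database are hypothetical:

```bash
# Hypothetical connection name; the proxy listens on 127.0.0.1 by default.
./cloud-sql-proxy --private-ip my-project:us-central1:pg-instance &
# The application then connects locally, e.g.:
#   jdbc:postgresql://127.0.0.1:5432/appdb
```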
Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices. What should you do? (Choose two.)
A. Set up manual backups.
B. Create a PostgreSQL database on-premises as the HA option.
C. Configure single zone availability for automated backups.
D. Enable point-in-time recovery.
E. Schedule automated backups.
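For reference, automated backups and point-in-time recovery can be enabled together on a PostgreSQL instance (MySQL uses --enable-bin-log instead); the instance name and start time below are placeholders:

```bash
# Placeholder values; PITR depends on automated backups being enabled.
gcloud sql instances patch web-db \
  --backup-start-time=02:00 \
  --enable-point-in-time-recovery
```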
You are configuring a brand new PostgreSQL database instance in Cloud SQL. Your application team wants to have an optimal and highly available environment with automatic failover to avoid any unplanned outage. What should you do?
A. Create one regional Cloud SQL instance with a read replica in another region.
B. Create one regional Cloud SQL instance in one zone with a standby instance in another zone in the same region.
C. Create two read-write Cloud SQL instances in two different zones with a standby instance in another region.
D. Create two read-write Cloud SQL instances in two different regions with a standby instance in another zone.
You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?
A. Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
B. Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences
C. Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
D. Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?
A. Use Firestore.
B. Use Bigtable.
C. Use BigQuery.
D. Use Cloud Spanner.
You are designing for a write-heavy application. During testing, you discover that the write workloads are performant in a regional Cloud Spanner instance but slow down by an order of magnitude in a multi-regional instance. You want to make the write workloads faster in a multi-regional instance. What should you do?
A. Place the bulk of the read and write workloads closer to the default leader region.
B. Use staleness of at least 15 seconds.
C. Add more read-write replicas.
D. Keep the total CPU utilization under 45% in each region.
You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency. What should you do?
A. Configure Firestore, and modify your application to offload queries.
B. Configure Bigtable, and modify your application to offload queries.
C. Configure Cloud SQL for PostgreSQL read replicas to offload queries.
D. Configure Memorystore, and modify your application to offload queries.
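For context, provisioning a Memorystore for Redis cache to front hot queries might look like this sketch; the name, size, and region are placeholders:

```bash
# Placeholder values; standard_ha tier adds a replica for high availability.
gcloud redis instances create query-cache \
  --size=5 \
  --region=us-central1 \
  --tier=standard_ha
```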
Your organization has a security policy to ensure that all Cloud SQL for PostgreSQL databases are secure. You want to protect sensitive data by using a key that meets specific locality or residency requirements. Your organization needs to control the key's lifecycle activities. You need to ensure that data is encrypted at rest and in transit. What should you do?
A. Create the database with Google-managed encryption keys.
B. Create the database with customer-managed encryption keys.
C. Create the database persistent disk with Google-managed encryption keys.
D. Create the database persistent disk with customer-managed encryption keys.
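For reference, attaching a customer-managed encryption key at instance creation is sketched below; the project, key ring, and key names are hypothetical:

```bash
# Hypothetical key path; the Cloud SQL service account needs the
# roles/cloudkms.cryptoKeyEncrypterDecrypter role on this key.
gcloud sql instances create secure-pg \
  --database-version=POSTGRES_15 \
  --tier=db-custom-2-8192 \
  --region=us-central1 \
  --disk-encryption-key=projects/my-proj/locations/us-central1/keyRings/db-kr/cryptoKeys/db-key
```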
You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You need to test the high availability of your Cloud SQL instance by performing a failover. You want to use the cloud command. What should you do?
A. Use gcloud sql instances failover <primary-instance-name>.
B. Use gcloud sql instances failover <replica-instance-name>.
C. Use gcloud sql instances promote-replica <primary-instance-name>.
D. Use gcloud sql instances promote-replica <replica-instance-name>.
You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key. What should you do?
A. Use Cloud Spanner with a customer-managed encryption key (CMEK).
B. Use Cloud Spanner with default encryption.
C. Use Cloud SQL with a customer-managed encryption key (CMEK).
D. Use Bigtable with default encryption.
Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?
A. Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.
B. Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.
C. Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.
D. Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
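For background, point-in-time recovery is performed as a clone to a timestamp before the corruption; the instance names and timestamp below are illustrative:

```bash
# Illustrative names/timestamp; clones the instance to a pre-corruption state.
gcloud sql instances clone prod-mysql prod-mysql-restored \
  --point-in-time='2024-11-01T10:00:00Z'
```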
You need to issue a new server certificate because your old one is expiring. You need to avoid a restart of your Cloud SQL for MySQL instance. What should you do in your Cloud SQL instance?
A. Issue a rollback, and download your server certificate.
B. Create a new client certificate, and download it.
C. Create a new server certificate, and download it.
D. Reset your SSL configuration, and download your server certificate.
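For reference, server CA certificates can be rotated without restarting the instance; a sketch with a hypothetical instance name (exact subcommands may vary by gcloud version):

```bash
# Hypothetical instance name; create the new CA, update clients, then rotate.
gcloud sql ssl server-ca-certs create --instance=my-instance
gcloud sql ssl server-ca-certs list --instance=my-instance
gcloud sql ssl server-ca-certs rotate --instance=my-instance
```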
You are migrating a telehealth care company's on-premises data center to Google Cloud. The migration plan specifies: PostgreSQL databases must be migrated to a multi-region backup configuration with cross-region replicas to allow restore and failover in multiple scenarios. MySQL databases handle personally identifiable information (PII) and require data residency compliance at the regional level. You want to set up the environment with minimal administrative effort. What should you do?
A. Set up Cloud Logging and Cloud Monitoring with Cloud Functions to send an alert every time a new database instance is created, and manually validate the region.
B. Set up different organizations for each database type, and apply policy constraints at the organization level.
C. Set up Pub/Sub to ingest data from Cloud Logging, send an alert every time a new database instance is created, and manually validate the region.
D. Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.
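For background, a resource-location constraint can be applied at the project level as sketched below; the project ID is a placeholder:

```bash
# Placeholder project; restricts new resources to US locations.
gcloud resource-manager org-policies allow \
  constraints/gcp.resourceLocations in:us-locations \
  --project=mysql-pii-project
```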
You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance. What should you do?
A. Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
B. Create a SQL Server 2019 Standard on High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
C. Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
D. Create a SQL Server 2019 Enterprise on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.
You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You have a primary instance in region 1 and a read replica in region 2. After a failure of region 1, you need to make the Cloud SQL instance available again. You want to minimize data loss and follow Google-recommended practices. What should you do?
A. Restore the Cloud SQL instance from the automatic backups in region 3.
B. Restore the Cloud SQL instance from the automatic backups in another zone in region 1.
C. Check the “Lag Bytes” metric in the monitoring dashboard for the read replica instance. Check the replication status using pg_catalog.pg_last_wal_receive_lsn(). Then, fail over to region 2 by promoting the read replica instance.
D. Check your instance operational log for the automatic failover status. Look for time, type, and status of the operations. If the failover operation is successful, no action is necessary. Otherwise, manually perform gcloud sql instances failover .
You have a large Cloud SQL for PostgreSQL instance. The database instance is not mission-critical, and you want to minimize operational costs. What should you do to lower the cost of backups in this environment?
A. Set the automated backups to occur every other day to lower the frequency of backups.
B. Change the storage tier of the automated backups from solid-state drive (SSD) to hard disk drive (HDD).
C. Select a different region to store your backups.
D. Reduce the number of automated backups that are retained to two (2).
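For reference, the number of retained automated backups is configurable per instance; the name below is a placeholder:

```bash
# Placeholder instance name; keeps only the two most recent backups.
gcloud sql instances patch big-pg --retained-backups-count=2
```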
Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?
A. Use workload identity federation to impersonate a service account.
B. Ask existing users to set their Google password to match their corporate password.
C. Migrate the application to Google Cloud, and use Identity and Access Management (IAM).
D. Use Google Workspace Password Sync to replicate passwords into Google Cloud.
You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup. What should you do?
A. Check the cloudsql.googleapis.com/postgres.log instance log.
B. Perform the gcloud sql operations list command.
C. Use Cloud Audit Logs to verify the status.
D. Use the Google Cloud Console.
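For context, the status of a running or recent backup can be checked from the CLI; the instance name is a placeholder:

```bash
# Placeholder instance name; lists recent operations, including backups.
gcloud sql operations list --instance=prod-pg --limit=5
```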
Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over. What should you do? (Choose two.)
A. Use an external read replica to migrate the databases to Cloud SQL.
B. Use a read replica to migrate the databases to Cloud SQL.
C. Use Database Migration Service to migrate the databases to Cloud SQL.
D. Use a cross-region read replica to migrate the databases to Cloud SQL.
E. Use replication from an external server to migrate the databases to Cloud SQL.
Your company uses the Cloud SQL out-of-disk recommender to analyze the storage utilization trends of production databases over the last 30 days. Your database operations team uses these recommendations to proactively monitor storage utilization and implement corrective actions. You receive a recommendation that the instance is likely to run out of disk space. What should you do to address this storage alert?
A. Normalize the database to the third normal form.
B. Compress the data using a different compression algorithm.
C. Manually or automatically increase the storage capacity.
D. Create another schema to load older data.
You are setting up a Bare Metal Solution environment. You need to update the operating system to the latest version. You need to connect the Bare Metal Solution environment to the internet so you can receive software updates. What should you do?
A. Set up a static external IP address in your VPC network.
B. Set up bring your own IP (BYOIP) in your VPC.
C. Set up a Cloud NAT gateway on the Compute Engine VM.
D. Set up Cloud NAT service.
Your company is developing a new global transactional application that must be ACID-compliant and have 99.999% availability. You are responsible for selecting the appropriate Google Cloud database to serve as a datastore for this new application. What should you do?
A. Use Firestore.
B. Use Cloud Spanner.
C. Use Cloud SQL.
D. Use Bigtable.
Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?
A. In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.
B. In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use
C. In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
D. In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
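For reference, growing a persistent disk and its ext4 file system online might look like the sketch below; the disk name, zone, size, and device path are hypothetical:

```bash
# Hypothetical values; both steps can run without unmounting or downtime.
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a
sudo resize2fs /dev/sdb
```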
Your organization is running a MySQL workload in Cloud SQL. Suddenly you see a degradation in database performance. You need to identify the root cause of the performance degradation. What should you do?
A. Use Logs Explorer to analyze log data.
B. Use Cloud Monitoring to monitor CPU, memory, and storage utilization metrics.
C. Use Error Reporting to count, analyze, and aggregate the data.
D. Use Cloud Debugger to inspect the state of an application.
You use Python scripts to generate weekly SQL reports to assess the state of your databases and determine whether you need to reorganize tables or run statistics. You want to automate this report but need to minimize operational costs and overhead. What should you do?
A. Create a VM in Compute Engine, and run a cron job.
B. Create a Cloud Composer instance, and create a directed acyclic graph (DAG).
C. Create a Cloud Function, and call the Cloud Function using Cloud Scheduler.
D. Create a Cloud Function, and call the Cloud Function from a Cloud Tasks queue.
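For background, wiring Cloud Scheduler to an HTTP-triggered Cloud Function might look like the sketch below; the job name, schedule, and URL are placeholders:

```bash
# Placeholder values; invokes the report function every Monday at 06:00.
gcloud scheduler jobs create http weekly-sql-report \
  --schedule="0 6 * * 1" \
  --uri="https://us-central1-my-proj.cloudfunctions.net/weekly-sql-report" \
  --http-method=POST
```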
You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?
A. Use Cloud SQL with read replicas for throughput.
B. Use Firestore, and rely on automatic serverless scaling.
C. Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.
D. Use Bigtable, and add nodes as necessary to achieve the required throughput.
You work for a financial services company that wants to use fully managed database services. Traffic volume for your consumer services products has increased annually at a constant rate with occasional spikes around holidays. You frequently need to upgrade the capacity of your database. You want to use Cloud Spanner and include an automated method to increase your hardware capacity to support a higher level of concurrency. What should you do?
A. Use linear scaling to implement the Autoscaler-based architecture.
B. Use direct scaling to implement the Autoscaler-based architecture.
C. Upgrade the Cloud Spanner instance on a periodic basis during the scheduled maintenance window.
D. Set up alerts that are triggered when Cloud Spanner utilization metrics breach the threshold, and then schedule an upgrade during the scheduled maintenance window.
You manage a production MySQL database running on Cloud SQL at a retail company. You perform routine maintenance on Sunday at midnight when traffic is slow, but you want to skip routine maintenance during the year-end holiday shopping season. You need to ensure that your production system is available 24/7 during the holidays. What should you do?
A. Define a maintenance window on Sundays between 12 AM and 1 AM, and deny maintenance periods between November 1 and January 15.
B. Define a maintenance window on Sundays between 12 AM and 5 AM, and deny maintenance periods between November 1 and February 15.
C. Build a Cloud Composer job to start a maintenance window on Sundays between 12 AM and 1 AM, and deny maintenance periods between November 1 and January 15.
D. Create a Cloud Scheduler job to start maintenance at 12 AM on Sundays. Pause the Cloud Scheduler job between November 1 and January 15.
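For reference, a deny maintenance period is set alongside the regular maintenance window; the instance name and dates below mirror the scenario but are illustrative:

```bash
# Illustrative values; blocks maintenance from Nov 1 through Jan 15.
gcloud sql instances patch retail-mysql \
  --maintenance-window-day=SUN \
  --maintenance-window-hour=0 \
  --deny-maintenance-period-start-date=2024-11-01 \
  --deny-maintenance-period-end-date=2025-01-15 \
  --deny-maintenance-period-time=00:00:00
```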
Access Full Google Professional Cloud Database Engineer Dump Free
Looking for even more practice questions? Click here to access the complete Google Professional Cloud Database Engineer Dump Free collection, offering hundreds of questions across all exam objectives.
We regularly update our content to ensure accuracy and relevance—so be sure to check back for new material.
Begin your certification journey today with our Google Professional Cloud Database Engineer dump free questions — and get one step closer to exam success!