Google Professional Cloud Database Engineer Practice Exam Free


Google Professional Cloud Database Engineer Practice Exam Free – 50 Questions to Simulate the Real Exam

Are you getting ready for the Google Professional Cloud Database Engineer certification? Take your preparation to the next level with our Google Professional Cloud Database Engineer Practice Exam Free – a carefully designed set of 50 realistic exam-style questions to help you evaluate your knowledge and boost your confidence.

Using a free Google Professional Cloud Database Engineer practice exam is one of the best ways to:

  • Experience the format and difficulty of the real exam
  • Identify your strengths and focus on weak areas
  • Improve your test-taking speed and accuracy

Below, you will find 50 realistic, free Google Professional Cloud Database Engineer practice exam questions covering key exam topics. Each question reflects the structure and challenge of the actual exam.

Question 1

You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance. What should you do?

A. Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.

B. Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.

C. Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.

D. Create a CSV file by running the SQL statement SELECT…INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.

 


Correct Answer: C
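For reference, option C can be implemented with a serverless (offloaded) export, which uses a temporary instance so the export does not compete with the running primary for resources. A minimal sketch, assuming hypothetical instance, bucket, and database names:

# Serverless export: --offload runs the export from a temporary instance, minimizing impact on the source
gcloud sql export sql source-mysql gs://my-migration-bucket/dump.sql.gz \
    --database=mydb --offload

# Import the dump into the new instance in us-east1
gcloud sql import sql target-mysql gs://my-migration-bucket/dump.sql.gz \
    --database=mydb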

Question 2

Your DevOps team is using Terraform to deploy applications and Cloud SQL databases. After every new application change is rolled out, the environment is torn down and recreated, and the persistent database layer is lost. You need to prevent the database from being dropped. What should you do?

A. Set Terraform deletion_protection to true.

B. Rerun terraform apply.

C. Create a read replica.

D. Use point-in-time-recovery (PITR) to recover the database.

 


Correct Answer: A
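For context on option A, the Terraform Google provider exposes a deletion_protection argument on google_sql_database_instance; when it is true, a plan that would destroy the resource fails instead of dropping the database. A minimal sketch with hypothetical names:

resource "google_sql_database_instance" "app_db" {
  name                = "app-db"
  database_version    = "MYSQL_8_0"
  region              = "us-central1"
  deletion_protection = true   # blocks terraform destroy from removing this instance

  settings {
    tier = "db-n1-standard-2"
  }
}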

Question 3

You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency. What should you do?

A. Configure Firestore, and modify your application to offload queries.

B. Configure Bigtable, and modify your application to offload queries.

C. Configure Cloud SQL for PostgreSQL read replicas to offload queries.

D. Configure Memorystore, and modify your application to offload queries.

 


Correct Answer: D

Question 4

Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?

A. 1. Use Database Migration Service to connect to your on-premises database, and choose continuous replication. 2. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.

B. 1. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL. 2. Schedule downtime to run each Cloud Data Fusion pipeline. 3. Verify that the migration was successful. 4. Re-point the applications to the Cloud SQL for MySQL instance.

C. 1. Pause the on-premises applications. 2. Use the mysqldump utility to dump the database content in compressed format. 3. Run gsutil -m to move the dump file to Cloud Storage. 4. Use the Cloud SQL for MySQL import option. 5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.

D. 1. Pause the on-premises applications. 2. Use the mysqldump utility to dump the database content in CSV format. 3. Run gsutil -m to move the dump file to Cloud Storage. 4. Use the Cloud SQL for MySQL import option. 5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.

 


Correct Answer: B
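As a reference for the dump-and-import workflow described in options C and D, a hedged sketch with hypothetical hostnames, bucket, and database names (the exact mysqldump flags depend on your source version and GTID configuration):

# Dump the database in compressed SQL format from the on-premises server
mysqldump -h onprem-mysql -u admin -p \
    --databases mydb --single-transaction --set-gtid-purged=OFF | gzip > mydb.sql.gz

# Copy the dump to Cloud Storage in parallel
gsutil -m cp mydb.sql.gz gs://my-migration-bucket/

# Import into the Cloud SQL for MySQL instance
gcloud sql import sql target-mysql gs://my-migration-bucket/mydb.sql.gz --database=mydb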

Question 5

Your application uses Cloud SQL for MySQL. Your users run reports on data that relies on near-real-time updates; however, the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the lag in data replication without making changes to the current reports. Which two approaches should you implement? (Choose two.)

A. Create secondary indexes on the replica.

B. Create additional read replicas, and partition your analytics users to use different read replicas.

C. Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.

D. Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.

E. Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.

 


Correct Answer: BE

Question 6

Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over. What should you do? (Choose two.)

A. Use Database Migration Service to migrate the databases to Cloud SQL.

B. Use a cross-region read replica to migrate the databases to Cloud SQL.

C. Use replication from an external server to migrate the databases to Cloud SQL.

D. Use an external read replica to migrate the databases to Cloud SQL.

E. Use a read replica to migrate the databases to Cloud SQL.

 


Correct Answer: CE

Question 7

You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You need to test the high availability of your Cloud SQL instance by performing a failover. You want to use the gcloud command. What should you do?

A. Use gcloud sql instances failover .

B. Use gcloud sql instances failover .

C. Use gcloud sql instances promote-replica .

D. Use gcloud sql instances promote-replica .

 


Correct Answer: D
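For reference, a manual failover of a high-availability Cloud SQL instance is triggered against the primary instance. A sketch with a hypothetical instance name:

# Trigger a manual failover of an HA (regional) Cloud SQL instance to its standby
gcloud sql instances failover my-postgres-primary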

Question 8

You are migrating a telehealth care company's on-premises data center to Google Cloud. The migration plan specifies:
PostgreSQL databases must be migrated to a multi-region backup configuration with cross-region replicas to allow restore and failover in multiple scenarios.
MySQL databases handle personally identifiable information (PII) and require data residency compliance at the regional level.
You want to set up the environment with minimal administrative effort. What should you do?

A. Set up Cloud Logging and Cloud Monitoring with Cloud Functions to send an alert every time a new database instance is created, and manually validate the region.

B. Set up different organizations for each database type, and apply policy constraints at the organization level.

C. Set up Pub/Sub to ingest data from Cloud Logging, send an alert every time a new database instance is created, and manually validate the region.

D. Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.

 


Correct Answer: B

Question 9

You are working on a new centralized inventory management system to track items available in 200 stores, which each have 500 GB of data. You are planning a gradual rollout of the system to a few stores each week. You need to design an SQL database architecture that minimizes costs and user disruption during each regional rollout and can scale up or down on nights and holidays. What should you do?

A. Use Oracle Real Application Cluster (RAC) databases on Bare Metal Solution for Oracle.

B. Use sharded Cloud SQL instances with one or more stores per database instance.

C. Use a Bigtable cluster with autoscaling.

D. Use Cloud Spanner with a custom autoscaling solution.

 


Correct Answer: B

Question 10

Your organization is migrating 50 TB Oracle databases to Bare Metal Solution for Oracle. Database backups must be available for quick restore. You also need to have backups available for 5 years. You need to design a cost-effective architecture that meets a recovery time objective (RTO) of 2 hours and recovery point objective (RPO) of 15 minutes. What should you do?

A. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on all flash storage. 3. Keep backups older than one day stored in Actifio OnVault storage.

B. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on standard storage. 3. Keep backups older than one day stored in Actifio OnVault storage.

C. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on standard storage. 3. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to a Coldline Storage bucket.

D. 1. Create the database on a Bare Metal Solution server with the database running on flash storage. 2. Keep a local backup copy on all flash storage. 3. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to an Archive Storage bucket.

 


Correct Answer: B

Question 11

Your organization has an existing app that just went viral. The app uses a Cloud SQL for MySQL backend database that is experiencing slow disk performance while using hard disk drives (HDDs). You need to improve performance and reduce disk I/O wait times. What should you do?

A. Export the data from the existing instance, and import the data into a new instance with solid-state drives (SSDs).

B. Edit the instance to change the storage type from HDD to SSD.

C. Create a high availability (HA) failover instance with SSDs, and perform a failover to the new instance.

D. Create a read replica of the instance with SSDs, and perform a failover to the new instance.

 


Correct Answer: C

Question 12

Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?

A. In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.

B. In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.

C. In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.

D. In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.

 


Correct Answer: D
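For reference, persistent disks can be resized while attached and in use, and ext4 can be grown online. A sketch assuming a hypothetical disk name and a data disk formatted without a partition table (otherwise grow the partition first):

# Grow the persistent disk (no downtime for the VM)
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a

# On the VM: extend the ext4 filesystem to use the new space (online operation)
sudo resize2fs /dev/sdb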

Question 13

You are writing an application that will run on Cloud Run and require a database running in the Cloud SQL managed service. You want to secure this instance so that it only receives connections from applications running in your VPC environment in Google Cloud. What should you do?

A. 1. Create your instance with a specified external (public) IP address. 2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance. 3. Use Cloud SQL Auth proxy to connect to the instance.

B. 1. Create your instance with a specified external (public) IP address. 2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance. 3. Connect to the instance using a connection pool to best manage connections to the instance.

C. 1. Create your instance with a specified internal (private) IP address. 2. Choose the VPC with private service connection configured. 3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance. 4. Use Cloud SQL Auth proxy to connect to the instance.

D. 1. Create your instance with a specified internal (private) IP address. 2. Choose the VPC with private service connection configured. 3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance. 4. Connect to the instance using a connection pool to best manage connections to the instance.

 


Correct Answer: C
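For context on options C and D, a sketch of wiring Cloud Run to a private-IP Cloud SQL instance through a Serverless VPC Access connector, with hypothetical names, image, and IP range:

# Create a Serverless VPC Access connector in the VPC that has private services access to Cloud SQL
gcloud compute networks vpc-access connectors create sql-connector \
    --region=us-central1 --network=my-vpc --range=10.8.0.0/28

# Deploy the Cloud Run service with the connector so it can reach the instance's private IP
gcloud run deploy my-service \
    --image=us-docker.pkg.dev/my-project/app/my-service \
    --region=us-central1 --vpc-connector=sql-connector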

Question 14

You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out. What should you do?

A. Close idle connections or restart the instance before beginning the import operation.

B. Increase the amount of memory allocated to your instance.

C. Ensure that the service account has the Storage Admin role.

D. Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.

 


Correct Answer: C

Question 15

You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup. What should you do?

A. Check the cloudsql.googleapis.com/postgres.log instance log.

B. Perform the gcloud sql operations list command.

C. Use Cloud Audit Logs to verify the status.

D. Use the Google Cloud Console.

 


Correct Answer: C
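For reference, in-progress and completed operations (including on-demand backups) can also be listed per instance from the CLI. A sketch with a hypothetical instance name:

# List recent operations (backups, imports, exports, ...) for the instance
gcloud sql operations list --instance=prod-postgres --limit=10

# Inspect one operation's status in detail
gcloud sql operations describe OPERATION_ID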

Question 16

Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?

A. Write your data into Bigtable and use Dataproc and the Apache Hbase libraries for analysis.

B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.

C. Use Memorystore to handle your low-latency requirements and for real-time analytics.

D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

 


Correct Answer: B

Question 17

Your company is using Cloud SQL for MySQL with an internal (private) IP address and wants to replicate some tables into BigQuery in near-real time for analytics and machine learning. You need to ensure that replication is fast and reliable and uses Google-managed services. What should you do?

A. Develop a custom data replication service to send data into BigQuery.

B. Use Cloud SQL federated queries.

C. Use Database Migration Service to replicate tables into BigQuery.

D. Use Datastream to capture changes, and use Dataflow to write those changes to BigQuery.

 


Correct Answer: D

Question 18

You are the DBA of an online tutoring application that runs on a Cloud SQL for PostgreSQL database. You are testing the implementation of the cross-regional failover configuration. The database in region R1 fails over successfully to region R2, and the database becomes available for the application to process data. During testing, certain scenarios of the application work as expected in region R2, but a few scenarios fail with database errors. The application-related database queries, when executed in isolation from Cloud SQL for PostgreSQL in region R2, work as expected. The application performs completely as expected when the database fails back to region R1. You need to identify the cause of the database errors in region R2. What should you do?

A. Determine whether the versions of Cloud SQL for PostgreSQL in regions R1 and R2 are different.

B. Determine whether the database patches of Cloud SQL for PostgreSQL in regions R1 and R2 are different.

C. Determine whether the failover of Cloud SQL for PostgreSQL from region R1 to region R2 is in progress or has completed successfully.

D. Determine whether Cloud SQL for PostgreSQL in region R2 is a near-real-time copy of region R1 but not an exact copy.

 


Correct Answer: B

Question 19

You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future. What should you do?

A. Set your production instance’s maintenance window to non-business hours.

B. Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.

C. Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.

D. Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.

 


Correct Answer: B
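For context on option A, the maintenance window of an existing instance can be moved to non-business hours from the CLI. A sketch with a hypothetical instance name (the hour is in UTC):

# Schedule maintenance for Sundays at 03:00 UTC
gcloud sql instances patch prod-booking-db \
    --maintenance-window-day=SUN --maintenance-window-hour=3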

Question 20

You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide. Your solution must support data residency requirements and include a strategy to:
Configure where data is stored.
Control where the encryption keys are stored.
Govern access to the data.
What should you do?

A. Replicate Cloud SQL databases across different zones.

B. Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.

C. Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.

D. Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.

 


Correct Answer: C

Question 21

You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation. What should you do?

A. Use external server replication.

B. Use Data Migration Service.

C. Use Cloud SQL for MySQL external replica.

D. Use the mysqldump utility and binary logs.

 


Correct Answer: B

Question 22

You work for a financial services company that wants to use fully managed database services. Traffic volume for your consumer services products has increased annually at a constant rate with occasional spikes around holidays. You frequently need to upgrade the capacity of your database. You want to use Cloud Spanner and include an automated method to increase your hardware capacity to support a higher level of concurrency. What should you do?

A. Use linear scaling to implement the Autoscaler-based architecture

B. Use direct scaling to implement the Autoscaler-based architecture.

C. Upgrade the Cloud Spanner instance on a periodic basis during the scheduled maintenance window.

D. Set up alerts that are triggered when Cloud Spanner utilization metrics breach the threshold, and then schedule an upgrade during the scheduled maintenance window.

 


Correct Answer: C

Question 23

Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)

A. Use an auto-incrementing value as the primary key.

B. Normalize the data model.

C. Promote low-cardinality attributes in multi-attribute primary keys.

D. Promote high-cardinality attributes in multi-attribute primary keys.

E. Use bit-reverse sequential value as the primary key.

 


Correct Answer: AD

Question 24

Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO). What should you do?

A. Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.

B. Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.

C. Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.

D. Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.

 


Correct Answer: C

Question 25

You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key. What should you do?

A. Use Cloud Spanner with a customer-managed encryption key (CMEK).

B. Use Cloud Spanner with default encryption.

C. Use Cloud SQL with a customer-managed encryption key (CMEK).

D. Use Bigtable with default encryption.

 


Correct Answer: C

Question 26

You are responsible for designing a new database for an airline ticketing application in Google Cloud. This application must be able to:
Work with transactions and offer strong consistency.
Work with structured and semi-structured (JSON) data.
Scale transparently to multiple regions globally as the operation grows.
You need a Google Cloud database that meets all the requirements of the application. What should you do?

A. Use Cloud SQL for PostgreSQL with both cross-region read replicas.

B. Use Cloud Spanner in a multi-region configuration.

C. Use Firestore in Datastore mode.

D. Use a Bigtable instance with clusters in multiple regions.

 


Correct Answer: A

Question 27

You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?

A. Use the default app profile with single-cluster routing.

B. Use the default app profile with multi-cluster routing.

C. Create a custom app profile with multi-cluster routing.

D. Create a custom app profile with single-cluster routing.

 


Correct Answer: A
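For reference on the routing choices, a custom app profile with multi-cluster routing can be created from the CLI. A sketch with hypothetical profile and instance names:

# Custom app profile that routes to any available cluster (automatic failover between cluster-a and cluster-b)
gcloud bigtable app-profiles create banking-events \
    --instance=my-bigtable-instance \
    --route-any \
    --description="Multi-cluster routing for automatic failover"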

Question 28

You are migrating an on-premises application to Compute Engine and Cloud SQL. The application VMs will live in their own project, separate from the Cloud SQL instances which have their own project. What should you do to configure the networks?

A. Create a new VPC network in each project, and use VPC Network Peering to connect the two together.

B. Create a Shared VPC that both the application VMs and Cloud SQL instances will use.

C. Use the default networks, and leverage Cloud VPN to connect the two together.

D. Place both the application VMs and the Cloud SQL instances in the default network of each project.

 


Correct Answer: A

Question 29

You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue. What should you do first?

A. Increase the number of processing units.

B. Modify the database schema, and add additional indexes.

C. Shard data required by the application into multiple instances.

D. Decrease the number of processing units.

 


Correct Answer: A
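For context on option A, the compute capacity of an existing Cloud Spanner instance can be increased in place. A sketch with a hypothetical instance name and size (1000 processing units correspond to 1 node):

# Scale the instance up by raising processing units
gcloud spanner instances update inventory-instance --processing-units=3000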

Question 30

Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure with a primary region of us-central1 and the backup region of us-east1. What should you do?

A. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-west1-b. 3. Create a read replica in us-east1-c.

B. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-central1-b. 3. Create a read replica in us-east1-b.

C. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-east-b. 3. Create a read replica in us-east1-c.

D. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-east1-b. 3. Create a read replica in us-central1-b.

 


Correct Answer: B
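As a reference for the pattern behind these options, an HA (regional) primary in us-central1 plus a cross-region read replica in us-east1 could be provisioned roughly as follows, with hypothetical names and tiers:

# Primary with a standby in another zone of us-central1 (protects against zonal failure)
gcloud sql instances create prod-mysql \
    --database-version=MYSQL_8_0 --region=us-central1 \
    --availability-type=REGIONAL --tier=db-n1-standard-4

# Cross-region read replica in us-east1 (protects against regional failure)
gcloud sql instances create prod-mysql-dr \
    --master-instance-name=prod-mysql --region=us-east1 --tier=db-n1-standard-4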

Question 31

You want to migrate your PostgreSQL database from another cloud provider to Cloud SQL. You plan on using Database Migration Service and need to assess the impact of any known limitations. What should you do? (Choose two.)

A. Identify whether the database has over 512 tables.

B. Identify all tables that do not have a primary key.

C. Identify all tables that do not have at least one foreign key.

D. Identify whether the source database is encrypted using pgcrypto extension.

E. Identify whether the source database uses customer-managed encryption keys (CMEK).

 


Correct Answer: CE
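For context on option B, one way to list tables that lack a primary key on a PostgreSQL source (a common item to assess before a Database Migration Service migration) is an information_schema query. A sketch, assuming hypothetical connection details:

psql "host=SOURCE_HOST dbname=mydb user=postgres" -c "
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
 AND  c.table_name   = t.table_name
 AND  c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('pg_catalog', 'information_schema');"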

Question 32

Your company is evaluating Google Cloud database options for a mission-critical global payments gateway application. The application must be available 24/7 to users worldwide, horizontally scalable, and support open source databases. You need to select an automatically shardable, fully managed database with 99.999% availability and strong transactional consistency. What should you do?

A. Select Bare Metal Solution for Oracle.

B. Select Cloud SQL.

C. Select Bigtable.

D. Select Cloud Spanner.

 


Correct Answer: A

Question 33

You need to issue a new server certificate because your old one is expiring. You need to avoid a restart of your Cloud SQL for MySQL instance. What should you do in your Cloud SQL instance?

A. Issue a rollback, and download your server certificate.

B. Create a new client certificate, and download it.

C. Create a new server certificate, and download it.

D. Reset your SSL configuration, and download your server certificate.

 


Correct Answer: D
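For reference, server CA certificates for a Cloud SQL instance are managed with the gcloud sql ssl server-ca-certs command group, and rotating them does not require an instance restart. A sketch with a hypothetical instance name:

# Create the upcoming server CA certificate (the instance keeps serving with the current one)
gcloud sql ssl server-ca-certs create --instance=my-mysql-instance

# Export the certificates so clients can trust the new CA
gcloud sql ssl server-ca-certs list --instance=my-mysql-instance --format="value(cert)" > server-ca.pem

# Once clients trust the new CA, make it the active certificate
gcloud sql ssl server-ca-certs rotate --instance=my-mysql-instance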

Question 34

You are designing a database strategy for a new web application in one region. You need to minimize write latency. What should you do?

A. Use Cloud SQL with cross-region replicas.

B. Use high availability (HA) Cloud SQL with multiple zones.

C. Use zonal Cloud SQL without high availability (HA).

D. Use Cloud Spanner in a regional configuration.

 


Correct Answer: A

Question 35

You are designing a physician portal app in Node.js. This application will be used in hospitals and clinics that might have intermittent internet connectivity. If a connectivity failure occurs, the app should be able to query the cached data. You need to ensure that the application has scalability, strong consistency, and multi-region replication. What should you do?

A. Use Firestore and ensure that the PersistenceEnabled option is set to true.

B. Use Memorystore for Memcached.

C. Use Pub/Sub to synchronize the changes from the application to Cloud Spanner.

D. Use Table.read with the exactStaleness option to perform a read of rows in Cloud Spanner.

 


Correct Answer: C

Question 36

Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?

A. Use workload identity federation to impersonate a service account.

B. Ask existing users to set their Google password to match their corporate password.

C. Migrate the application to Google Cloud, and use Identity and Access Management (IAM).

D. Use Google Workspace Password Sync to replicate passwords into Google Cloud.

 


Correct Answer: C

Question 37

Your organization is running a Firestore-backed Firebase app that serves the same top ten news stories on a daily basis to a large global audience. You want to optimize content delivery while decreasing cost and latency. What should you do?

A. Enable serializable isolation in the Firebase app.

B. Deploy a US multi-region Firestore location.

C. Build a Firestore bundle, and deploy bundles to Cloud CDN.

D. Create a Firestore index on the news story date.

 


Correct Answer: C

Question 38

Your ecommerce application, which connects to your Cloud SQL for SQL Server instance, is expected to receive additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?

A. Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.

B. Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.

C. Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.

D. Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.

 


Correct Answer: B

Question 39

You released a popular mobile game and are using a 50 TB Cloud Spanner instance to store game data in a PITR-enabled production environment. When you analyzed the game statistics, you realized that some players are exploiting a loophole to gather more points to get on the leaderboard. Another DBA accidentally ran an emergency bugfix script that corrupted some of the data in the production environment. You need to determine the extent of the data corruption and restore the production environment. What should you do? (Choose two.)

A. If the corruption is significant, use backup and restore, and specify a recovery timestamp.

B. If the corruption is significant, perform a stale read and specify a recovery timestamp. Write the results back.

C. If the corruption is significant, use import and export.

D. If the corruption is insignificant, use backup and restore, and specify a recovery timestamp.

E. If the corruption is insignificant, perform a stale read and specify a recovery timestamp. Write the results back.

 


Correct Answer: BE

Question 40

Your organization stores marketing data such as customer preferences and purchase history on Bigtable. The consumers of this database are predominantly data analysts and operations users. You receive a service ticket from the database operations department citing poor database performance between 9 AM-10 AM every day. The application team has confirmed no latency from their logs. A new cohort of pilot users that is testing a dataset loaded from a third-party data provider is experiencing poor database performance. Other users are not affected. You need to troubleshoot the issue. What should you do?

A. Isolate the data analysts and operations user groups to use different Bigtable instances.

B. Check the Cloud Monitoring table/bytes_used metric from Bigtable.

C. Use Key Visualizer for Bigtable.

D. Add more nodes to the Bigtable cluster.

 


Correct Answer: B

Question 41

Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over.
What should you do? (Choose two.)

A. Use an external read replica to migrate the databases to Cloud SQL.

B. Use a read replica to migrate the databases to Cloud SQL.

C. Use Database Migration Service to migrate the databases to Cloud SQL.

D. Use a cross-region read replica to migrate the databases to Cloud SQL.

E. Use replication from an external server to migrate the databases to Cloud SQL.

 


Correct Answer: BD

Question 42

You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance. What should you do?

A. Take no backups, and turn off transaction log retention.

B. Take one manual backup per day, and turn off transaction log retention.

C. Turn on automated backup, and turn off transaction log retention.

D. Turn on automated backup, and turn on transaction log retention.

 


Correct Answer: B
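For context on option B, an on-demand backup can be taken from a scheduler, and for MySQL the transaction logs correspond to binary logging, which can be switched off. A sketch with a hypothetical instance name:

# On-demand (manual) backup, e.g. run once per day from cron or Cloud Scheduler
gcloud sql backups create --instance=dev-sql --description="daily-manual-backup"

# MySQL: disable binary logging so transaction logs are not retained
gcloud sql instances patch dev-sql --no-enable-bin-log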

Question 43

Your organization has a production Cloud SQL for MySQL instance. Your instance is configured with 16 vCPUs and 104 GB of RAM that is running between 90% and 100% CPU utilization for most of the day. You need to scale up the database and add vCPUs with minimal interruption and effort. What should you do?

A. Issue a gcloud sql instances patch command to increase the number of vCPUs.

B. Update a MySQL database flag to increase the number of vCPUs.

C. Issue a gcloud compute instances update command to increase the number of vCPUs.

D. Back up the database, create an instance with additional vCPUs, and restore the database.

 


Correct Answer: A
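For reference on option A, the vCPUs and memory of an existing instance can be changed with a single patch command (note that changing machine resources restarts the instance briefly). A sketch with hypothetical values:

# Scale the instance from 16 to 32 vCPUs with proportionally more memory
gcloud sql instances patch prod-mysql --cpu=32 --memory=208GiB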

Question 44

You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?

A. Use Cloud SQL instance monitoring.

B. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.

C. Use Cloud SQL logs.

D. Use the SQL Server Always On Availability Group dashboard.

 


Correct Answer: B

Question 45

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?

A. Use Database Migration Service to migrate all databases to Cloud SQL.

B. Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.

C. Use data replication tools and CDC tools to enable migration.

D. Use a combination of Database Migration Service and partner tools to support the data migration strategy.

 


Correct Answer: B

Question 46

You are migrating your 2 TB on-premises PostgreSQL cluster to Compute Engine. You want to set up your new environment in an Ubuntu virtual machine instance in Google Cloud and seed the data to a new instance. You need to plan your database migration to ensure minimum downtime. What should you do?

A. 1. Take a full export while the database is offline. 2. Create a bucket in Cloud Storage. 3. Transfer the dump file to the bucket you just created. 4. Import the dump file into the Google Cloud primary server.

B. 1. Take a full export while the database is offline. 2. Create a bucket in Cloud Storage. 3. Transfer the dump file to the bucket you just created. 4. Restore the backup into the Google Cloud primary server.

C. 1. Take a full backup while the database is online. 2. Create a bucket in Cloud Storage. 3. Transfer the backup to the bucket you just created. 4. Restore the backup into the Google Cloud primary server. 5. Create a recovery.conf file in the $PG_DATA directory. 6. Stop the source database. 7. Transfer the write-ahead logs to the bucket you created before. 8. Start the PostgreSQL service. 9. Wait until the Google Cloud primary server syncs with the running primary server.

D. 1. Take a full export while the database is online. 2. Create a bucket in Cloud Storage. 3. Transfer the dump file and write-ahead logs to the bucket you just created. 4. Restore the dump file into the Google Cloud primary server. 5. Create a recovery.conf file in the $PG_DATA directory. 6. Stop the source database. 7. Transfer the write-ahead logs to the bucket you created before. 8. Start the PostgreSQL service. 9. Wait until the Google Cloud primary server syncs with the running primary server.

 


Correct Answer: C

Question 47

Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical. The operations team consists of:
Person A is a database administrator.
Person B is an analyst who generates metric reports.
Application C is responsible for automatic backups.
You need to assign roles to team members for Cloud Spanner. Which roles should you assign?

A. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupWriter for Application C

B. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupAdmin for Application C

C. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.databaseReader for Application C

D. roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.backupWriter for Application C

 


Correct Answer: B
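For reference, whichever combination you select, the roles are granted as IAM policy bindings. A sketch with hypothetical principals and project, using the roles from the options above (the role for Application C should be limited to backup operations, per the option you choose):

# Person A: full database administration
gcloud projects add-iam-policy-binding my-project \
    --member="user:person-a@example.com" --role="roles/spanner.databaseAdmin"

# Person B: read-only access for generating metric reports
gcloud projects add-iam-policy-binding my-project \
    --member="user:person-b@example.com" --role="roles/spanner.databaseReader"

# Application C: backup operations only (backupWriter or backupAdmin, depending on the option selected)
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:backup-app@my-project.iam.gserviceaccount.com" \
    --role="roles/spanner.backupAdmin"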

Question 48

You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances. What should you do?

A. Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.

B. Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.

C. Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.

D. Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.

 


Correct Answer: B

Question 49

Your company wants to migrate its MySQL, PostgreSQL, and Microsoft SQL Server on-premises databases to Google Cloud. You need a solution that provides near-zero downtime, requires no application changes, and supports change data capture (CDC). What should you do?

A. Use the native export and import functionality of the source database.

B. Create a database on Google Cloud, and use database links to perform the migration.

C. Create a database on Google Cloud, and use Dataflow for database migration.

D. Use Database Migration Service.

 


Correct Answer: B

Question 50

During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?

A. Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.

B. Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.

C. Use the gcloud instances patch command to update your existing Cloud SQL for MySQL instance.

D. Shut down your existing Cloud SQL for MySQL instance, and enable HA.

 


Correct Answer: A

Free Access to the Full Google Professional Cloud Database Engineer Practice Exam

Looking for additional practice? Click here to access a full set of free Google Professional Cloud Database Engineer practice exam questions and continue building your skills across all exam domains.

Our question sets are updated regularly to ensure they stay aligned with the latest exam objectives—so be sure to visit often!

Good luck with your Google Professional Cloud Database Engineer certification journey!
