
Google Professional Cloud Developer Exam Prep Free

Google Professional Cloud Developer Exam Prep Free – 50 Practice Questions to Get You Ready for Exam Day

Getting ready for the Google Professional Cloud Developer certification? Our Google Professional Cloud Developer Exam Prep Free resource includes 50 exam-style questions designed to help you practice effectively and feel confident on test day.

Effective Google Professional Cloud Developer exam prep free is the key to success. With our free practice questions, you can:

  • Get familiar with exam format and question style
  • Identify which topics you’ve mastered—and which need more review
  • Boost your confidence and reduce exam anxiety

Below, you will find 50 realistic Google Professional Cloud Developer Exam Prep Free questions that cover key exam topics. These questions are designed to reflect the structure and challenge level of the actual exam, making them perfect for your study routine.

Question 1

Your development team is using Cloud Build to promote a Node.js application built on App Engine from your staging environment to production. The application relies on several directories of photos stored in a Cloud Storage bucket named webphotos-staging in the staging environment. After the promotion, these photos must be available in a Cloud Storage bucket named webphotos-prod in the production environment. You want to automate the process where possible. What should you do?

A. Manually copy the photos to webphotos-prod.

B. Add a startup script in the application’s app.yaml file to move the photos from webphotos-staging to webphotos-prod.

C. Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:

D. Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:

 


Correct Answer: C
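Options C and D refer to build-step arguments that are not reproduced above. As a rough, hedged illustration only, the following Python sketch shows the kind of staging-to-production object copy such a pre-promotion build step would perform, using the google-cloud-storage client and the bucket names from the question; in the exam scenario the copy itself is performed by a step in cloudbuild.yaml.

```python
# Hedged sketch only: copies every object from the staging bucket to the
# production bucket, mirroring what a pre-promotion Cloud Build step would do.
from google.cloud import storage

client = storage.Client()
src = client.bucket("webphotos-staging")
dst = client.bucket("webphotos-prod")

for blob in client.list_blobs("webphotos-staging"):
    # copy_blob copies the object server-side; no local download is needed.
    src.copy_blob(blob, dst, new_name=blob.name)
```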

Question 2

You are tasked with using C++ to build and deploy a microservice for an application hosted on Google Cloud. The code needs to be containerized and use several custom software libraries that your team has built. You do not want to maintain the underlying infrastructure of the application. How should you deploy the microservice?

A. Use Cloud Functions to deploy the microservice.

B. Use Cloud Build to create the container, and deploy it on Cloud Run.

C. Use Cloud Shell to containerize your microservice, and deploy it on a Container-Optimized OS Compute Engine instance.

D. Use Cloud Shell to containerize your microservice, and deploy it on standard Google Kubernetes Engine.

 


Correct Answer: B

Question 3

Your existing application keeps user state information in a single MySQL database. This state information is very user-specific and depends heavily on how long a user has been using an application. The MySQL database is causing challenges to maintain and enhance the schema for various users.
Which storage option should you choose?

A. Cloud SQL

B. Cloud Storage

C. Cloud Spanner

D. Cloud Datastore/Firestore

 


Correct Answer: D

Question 4

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which function is consuming the most CPU and memory resources. What should you do?

A. Add print commands to the application source code to log when each function is called, and redeploy the application.

B. Create a Cloud Logging query that gathers the web application’s logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application’s longest functions to identify time-intensive functions.

C. Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify which functions cause the most latency.

D. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.

 


Correct Answer: D
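The question describes a Go service, but the same idea applies in any supported language. As a minimal sketch only (assuming the google-cloud-profiler package and a placeholder service name), this is roughly what initializing the Profiler agent looks like in Python; the collected profiles then surface as a flame graph in the Google Cloud console.

```python
# Minimal sketch: start the Cloud Profiler agent at application startup.
# "web-frontend" and the version string are placeholder values.
import googlecloudprofiler

googlecloudprofiler.start(
    service="web-frontend",
    service_version="1.0.0",
    verbose=1,  # log agent status messages
)
```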

Question 5

You are a developer at a large organization. You have an application written in Go running in a production Google Kubernetes Engine (GKE) cluster. You need to add a new feature that requires access to BigQuery. You want to grant BigQuery access to your GKE cluster following Google-recommended best practices. What should you do?

A. Create a Google service account with BigQuery access. Add the JSON key to Secret Manager, and use the Go client library to access the JSON key.

B. Create a Google service account with BigQuery access. Add the Google service account JSON key as a Kubernetes secret, and configure the application to use this secret.

C. Create a Google service account with BigQuery access. Add the Google service account JSON key to Secret Manager, and use an init container to access the secret for the application to use.

D. Create a Google service account and a Kubernetes service account. Configure Workload Identity on the GKE cluster, and reference the Kubernetes service account on the application Deployment.

 


Correct Answer: D
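With Workload Identity, the Pod's Kubernetes service account is mapped to a Google service account, so the application simply relies on Application Default Credentials and never handles a key file. A minimal sketch under that assumption (Python shown for brevity; the dataset and query are placeholders):

```python
# Minimal sketch: with Workload Identity configured on the cluster, the
# client library picks up credentials automatically via Application
# Default Credentials -- no JSON key is mounted or read by the code.
from google.cloud import bigquery

client = bigquery.Client()  # uses the bound Google service account
rows = client.query("SELECT COUNT(*) AS n FROM `my_dataset.events`").result()
for row in rows:
    print(row.n)
```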

Question 6

You are designing an application that consists of several microservices. Each microservice has its own RESTful API and will be deployed as a separate Kubernetes Service. You want to ensure that the consumers of these APIs aren't impacted when there is a change to your API, and also ensure that third-party systems aren't interrupted when new versions of the API are released. How should you configure the connection to the application following Google-recommended best practices?

A. Use an Ingress that uses the API’s URL to route requests to the appropriate backend.

B. Leverage a Service Discovery system, and connect to the backend specified by the request.

C. Use multiple clusters, and use DNS entries to route requests to separate versioned backends.

D. Combine multiple versions in the same service, and then specify the API version in the POST request.

 


Correct Answer: A

Question 7

Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?

A. Cloud Spanner

B. Cloud Datastore

C. Cloud Memorystore as a cache

D. Separate Cloud SQL clusters for each region

 


Correct Answer: A

Question 8

You are designing a chat room application that will host multiple rooms and retain the message history for each room. You have selected Firestore as your database. How should you represent the data in Firestore?

A. Create a collection for the rooms. For each room, create a document that lists the contents of the messages

B. Create a collection for the rooms. For each room, create a collection that contains a document for each message

C. Create a collection for the rooms. For each room, create a document that contains a collection for documents, each of which contains a message.

D. Create a collection for the rooms, and create a document for each room. Create a separate collection for messages, with one document per message. Each room’s document contains a list of references to the messages.

 


Correct Answer: C
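A brief sketch of that layout with the Python Firestore client (collection and field names are illustrative): each room is a document in a rooms collection, and its messages live in a subcollection of that room document.

```python
# Illustrative sketch: rooms collection -> room document -> messages subcollection.
from google.cloud import firestore

db = firestore.Client()

room_ref = db.collection("rooms").document("general")
room_ref.set({"name": "General chat"})

# Each message is its own document inside the room's subcollection,
# so message history can grow without hitting per-document size limits.
room_ref.collection("messages").add(
    {"author": "alice", "text": "Hello!", "sent_at": firestore.SERVER_TIMESTAMP}
)
```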

Question 9

Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's .NET-based auth service fails under intermittent load.
What should they do?

A. Use App Engine for autoscaling.

B. Use Cloud Functions for autoscaling.

C. Use a Compute Engine cluster for the service.

D. Use a dedicated Compute Engine virtual machine instance for the service.

 


Correct Answer: A

Question 10

Your application is controlled by a managed instance group. You want to share a large read-only data set between all the instances in the managed instance group. You want to ensure that each instance can start quickly and can access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution.
What should you do?

A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.

B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup script.

C. Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple Compute Engine virtual machine instances.

D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot, and attach each disk to its own instance.

 


Correct Answer: C

Question 11

Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud Source Repositories repository. You are responsible for implementing the tests. Only a specific service account has the necessary permissions to deploy the code to Cloud Functions. You want to ensure that the code cannot be deployed without first passing the tests. How should you configure the unit testing process?

A. Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a deployment approval is sent to you.

B. Configure Cloud Build to deploy the Cloud Function, using the specific service account as the build agent. Run the unit tests after successful deployment.

C. Configure Cloud Build to run the unit tests. If the code passes the tests, the developer deploys the Cloud Function.

D. Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build deploys the Cloud Function.

 


Correct Answer: D

Question 12

You are deploying your application to a Compute Engine virtual machine instance. Your application is configured to write its log files to disk. You want to view the logs in Stackdriver Logging without changing the application code.
What should you do?

A. Install the Stackdriver Logging Agent and configure it to send the application logs.

B. Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging.

C. Provide the log file folder path in the metadata of the instance to configure it to send the application logs.

D. Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.

 


Correct Answer: A

Question 13

Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of your API.
Which two steps should you take? (Choose two.)

A. Use Zipkin collector to gather data.

B. Use Fluentd agent to gather data.

C. Use Stackdriver Trace to generate reports.

D. Use Stackdriver Debugger to generate report.

E. Use Stackdriver Profiler to generate report.

 


Correct Answer: AC

Question 14

You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don’t have the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You want to configure the architecture using managed services where possible. What should you do?

A. Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication.

B. Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates.

C. Host the application on Compute Engine, and configure Cloud Endpoints for your application.

D. Host the application on Google Kubernetes Engine, and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.

 


Correct Answer: D

Question 15

Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a critical vulnerability immediately but must avoid impact to your service. What setting should be made to the MIG after updating the instance template?

A. Set the Update mode to Opportunistic.

B. Set the Maximum Unavailable to 100%.

C. Set the Minimum Wait time to 0 seconds.

 


Correct Answer: C

Question 16

Your company has created an application that uploads a report to a Cloud Storage bucket. When the report is uploaded to the bucket, you want to publish a message to a Cloud Pub/Sub topic. You want to implement a solution that takes a small amount of effort to implement.
What should you do?

A. Configure the Cloud Storage bucket to trigger Cloud Pub/Sub notifications when objects are modified.

B. Create an App Engine application to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic.

C. Create a Cloud Function that is triggered by the Cloud Storage bucket. In the Cloud Function, publish a message to the Cloud Pub/Sub topic.

D. Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic.

 


Correct Answer: A
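Configuring the bucket to publish Pub/Sub notifications is typically a one-off setup step (for example, with the gsutil notification create command). A hedged Python sketch of the same configuration, assuming a bucket and topic that already exist (both names are placeholders):

```python
# Hedged sketch: register an OBJECT_FINALIZE notification so that every new
# upload to the bucket publishes a message to the Pub/Sub topic.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("report-uploads-bucket")  # placeholder bucket name

notification = bucket.notification(
    topic_name="report-uploaded",        # placeholder topic name
    event_types=["OBJECT_FINALIZE"],     # fire only when an object is created
    payload_format="JSON_API_V1",
)
notification.create()
```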

Question 17

You have an application deployed in production. When a new version is deployed, you want to ensure that all production traffic is routed to the new version of your application. You also want to keep the previous version deployed so that you can revert to it if there is an issue with the new version.
Which deployment strategy should you use?

A. Blue/green deployment

B. Canary deployment

C. Rolling deployment

D. Recreate deployment

 


Correct Answer: A

Question 18

Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud Storage fail with an HTTP 429 "Too Many Requests" status code.
How should you handle this error?

A. Add a cache-control header to the objects.

B. Request a quota increase from the GCP Console.

C. Retry the request with a truncated exponential backoff strategy.

D. Change the storage class of the Cloud Storage bucket to Multi-regional.

 


Correct Answer: C
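A minimal sketch of truncated exponential backoff with jitter around a Cloud Storage call (the object, bucket, and retry limits are placeholders; the client libraries can also handle this through their built-in retry settings):

```python
# Minimal sketch: retry on HTTP 429 with truncated exponential backoff + jitter.
import random
import time

from google.api_core import exceptions
from google.cloud import storage

client = storage.Client()
blob = client.bucket("photo-source").blob("images/cat.jpg")  # placeholders

def download_with_backoff(max_attempts=6, max_sleep=32.0):
    for attempt in range(max_attempts):
        try:
            return blob.download_as_bytes()
        except exceptions.TooManyRequests:
            # Sleep 1s, 2s, 4s, ... capped at max_sleep, plus random jitter.
            sleep = min(2 ** attempt, max_sleep) + random.random()
            time.sleep(sleep)
    raise RuntimeError("Cloud Storage kept returning 429 after retries")
```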

Question 19

Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency.
How should you perform reads from Cloud Spanner for this application?

A. Perform Read-Only transactions.

B. Perform stale reads using single-read methods.

C. Perform strong reads using single-read methods.

D. Perform stale reads using read-write transactions.

 


Correct Answer: B
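A short sketch of a stale read with the Python Spanner client (instance, database, and SQL are placeholders): an exact-staleness or bounded-staleness snapshot lets Spanner serve the read from the closest replica, trading a little freshness for lower latency.

```python
# Sketch: single-read (snapshot) stale read, trading freshness for latency.
import datetime

from google.cloud import spanner

client = spanner.Client()
database = client.instance("contacts-instance").database("contacts-db")

# Read data as of up to 15 seconds ago instead of requesting a strong read.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snap:
    for row in snap.execute_sql("SELECT user_id, status FROM user_posts"):
        print(row)
```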

Question 20

You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. How should you test this feature?

A. Split traffic between versions using weights.

B. Enable the new recommendation feature flag on a single instance.

C. Mirror traffic to the new version of your application.

D. Use HTTP header-based routing.

 


Correct Answer: A

Question 21

One of your deployed applications in Google Kubernetes Engine (GKE) is having intermittent performance issues. Your team uses a third-party logging solution. You want to install this solution on each node in your GKE cluster so you can view the logs. What should you do?

A. Deploy the third-party solution as a DaemonSet

B. Modify your container image to include the monitoring software

C. Use SSH to connect to the GKE node, and install the software manually

D. Deploy the third-party solution using Terraform and deploy the logging Pod as a Kubernetes Deployment

 


Correct Answer: A

Question 22

You are building a CI/CD pipeline that consists of a version control system, Cloud Build, and Container Registry. Each time a new tag is pushed to the repository, a Cloud Build job is triggered, which runs unit tests on the new code, builds a new Docker container image, and pushes it into Container Registry. The last step of your pipeline should deploy the new container to your production Google Kubernetes Engine (GKE) cluster. You need to select a tool and deployment strategy that meets the following requirements:
• Zero downtime is incurred
• Testing is fully automated
• Allows for testing before being rolled out to users
• Can quickly rollback if needed
What should you do?

A. Trigger a Spinnaker pipeline configured as an A/B test of your new code and, if it is successful, deploy the container to production.

B. Trigger a Spinnaker pipeline configured as a canary test of your new code and, if it is successful, deploy the container to production.

C. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a canary test.

D. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a shadow test.

 


Correct Answer: B

Question 23

Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices and make the data more useful in the fewest number of steps. What should you do? (Choose two.)

A. Create aggregated exports on application logs to BigQuery to facilitate log analytics.

B. Create aggregated exports on application logs to Cloud Storage to facilitate log analytics.

C. Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs.

D. Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging.

E. Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics.

 


Correct Answer: AC
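Option C describes writing single-line JSON to standard output so that Cloud Logging ingests it as structured logs. A minimal sketch of what that looks like from application code (the helper function, field names, and message content are illustrative):

```python
# Minimal sketch: one JSON object per line on stdout becomes a structured
# log entry when the GKE logging agent forwards it to Cloud Logging.
import json
import sys

def log(severity, message, **fields):
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

log("INFO", "order processed", order_id="12345", latency_ms=87)
```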

Question 24

Your development team has been tasked with maintaining a .NET legacy application. The application incurs occasional changes and was recently updated. Your goal is to ensure that the application provides consistent results while moving through the CI/CD pipeline from environment to environment. You want to minimize the cost of deployment while making sure that external factors and dependencies between hosting environments are not problematic. Containers are not yet approved in your organization. What should you do?

A. Rewrite the application using .NET Core, and deploy to Cloud Run. Use revisions to separate the environments.

B. Use Cloud Build to deploy the application as a new Compute Engine image for each build. Use this image in each environment.

C. Deploy the application using MS Web Deploy, and make sure to always use the latest, patched MS Windows Server base image in Compute Engine.

D. Use Cloud Build to package the application, and deploy to a Google Kubernetes Engine cluster. Use namespaces to separate the environments.

 


Correct Answer: B

Question 25

Your company’s development teams want to use various open source operating systems in their Docker builds. When images are created in published containers in your company’s environment, you need to scan them for Common Vulnerabilities and Exposures (CVEs). The scanning process must not impact software development agility. You want to use managed services where possible. What should you do?

A. Enable the Vulnerability scanning setting in the Container Registry.

B. Create a Cloud Function that is triggered on a code check-in and scan the code for CVEs.

C. Disallow the use of non-commercially supported base images in your development environment.

D. Use Cloud Monitoring to review the output of Cloud Build to determine whether a vulnerable version has been used.

 


Correct Answer: A

Question 26

You have an application in production. It is deployed on Compute Engine virtual machine instances controlled by a managed instance group. Traffic is routed to the instances via an HTTP(S) load balancer. Your users are unable to access your application. You want to implement a monitoring technique to alert you when the application is unavailable.
Which technique should you choose?

A. Smoke tests

B. Stackdriver uptime checks

C. Cloud Load Balancing – health checks

D. Managed instance group – health checks

 


Correct Answer: B

Question 27

Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data, and that they analyze and respond to any issues that occur.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
• Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
• State is stored in a single instance MySQL database in GCP.
• Release cycles include development freezes to allow for QA testing.
• The application has no logging.
• Applications are manually deployed by infrastructure engineers during periods of slow traffic on weekday evenings.
• There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
• Expand availability of the application to new regions.
• Support 10x as many concurrent users.
• Ensure a consistent experience for users when they travel to different regions.
• Obtain user activity metrics to better understand how to monetize their product.
• Ensure compliance with regulations in the new regions (for example, GDPR).
• Reduce infrastructure management time and cost.
• Adopt the Google-recommended practices for cloud computing.
○ Develop standardized workflows and processes around application lifecycle management.
○ Define service level indicators (SLIs) and service level objectives (SLOs).
Technical Requirements -
• Provide secure communications between the on-premises data center and cloud-hosted applications and infrastructure.
• The application must provide usage metrics and monitoring.
• APIs require authentication and authorization.
• Implement faster and more accurate validation of new features.
• Logging and performance metrics must provide actionable information to be able to provide debugging information and alerts.
• Must scale to meet user demand.
For this question, refer to the HipLocal case study.
HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do?

A. Create an API key. Use the API key to interact with Google Cloud.

B. Use the default compute service account to interact with Google Cloud.

C. Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud.

D. Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.

 


Correct Answer: C

Question 28

You want to view the memory usage of your application deployed on Compute Engine.
What should you do?

A. Install the Stackdriver Client Library.

B. Install the Stackdriver Monitoring Agent.

C. Use the Stackdriver Metrics Explorer.

D. Use the Google Cloud Platform Console.

 


Correct Answer: B

Question 29

You are designing an application that uses a microservices architecture. You are planning to deploy the application in the cloud and on-premises. You want to make sure the application can scale up on demand and also use managed services as much as possible. What should you do?

A. Deploy open source Istio in a multi-cluster deployment on multiple Google Kubernetes Engine (GKE) clusters managed by Anthos.

B. Create a GKE cluster in each environment with Anthos, and use Cloud Run for Anthos to deploy your application to each cluster.

C. Install a GKE cluster in each environment with Anthos, and use Cloud Build to create a Deployment for your application in each cluster.

D. Create a GKE cluster in the cloud and install open-source Kubernetes on-premises. Use an external load balancer service to distribute traffic across the two environments.

 


Correct Answer: B

Question 30

You are using Cloud Run to host a global ecommerce web application. Your company’s design team is creating a new color scheme for the web app. You have been tasked with determining whether the new color scheme will increase sales. You want to conduct testing on live production traffic. How should you design the study?

A. Use an external HTTP(S) load balancer to route a predetermined percentage of traffic to two different color schemes of your application. Analyze the results to determine whether there is a statistically significant difference in sales.

B. Use an external HTTP(S) load balancer to route traffic to the original color scheme while the new deployment is created and tested. After testing is complete, reroute all traffic to the new color scheme. Analyze the results to determine whether there is a statistically significant difference in sales.

C. Use an external HTTP(S) load balancer to mirror traffic to the new version of your application. Analyze the results to determine whether there is a statistically significant difference in sales.

D. Enable a feature flag that displays the new color scheme to half of all users. Monitor sales to see whether they increase for this group of users.

 


Correct Answer: A

Question 31

Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal is configuring their access controls.
Which firewall configuration should they implement?

A. Block all traffic on port 443.

B. Allow all traffic into the network.

C. Allow traffic on port 443 for a specific tag.

D. Allow all traffic on port 443 into the network.

 


Correct Answer: C

Question 32

Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal changes to existing data analytics jobs and existing architecture.
How should you proceed with the migration?

A. Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery instead of the on-premises Hadoop environment.

B. Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into the new one in Compute Engine instances.

C. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.

D. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on that data.

 


Correct Answer: D

Question 33

You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?

A. Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.

B. Use Bigtable, and assign the roles/bigtable.viewer role to the service account.

C. Use Firestore in Native mode and assign the roles/datastore.user role to the service account.

D. Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.

 


Correct Answer: C

Question 34

You manage a microservice-based ecommerce platform on Google Cloud that sends confirmation emails to a third-party email service provider using a Cloud Function. Your company just launched a marketing campaign, and some customers are reporting that they have not received order confirmation emails. You discover that the services triggering the Cloud Function are receiving HTTP 500 errors. You need to change the way emails are handled to minimize email loss. What should you do?

A. Increase the Cloud Function’s timeout to nine minutes.

B. Configure the sender application to publish the outgoing emails in a message to a Pub/Sub topic. Update the Cloud Function configuration to consume the Pub/Sub queue.

C. Configure the sender application to write emails to Memorystore and then trigger the Cloud Function. When the function is triggered, it reads the email details from Memorystore and sends them to the email service.

D. Configure the sender application to retry the execution of the Cloud Function every one second if a request fails.

 


Correct Answer: B
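Publishing the outgoing email to a Pub/Sub topic decouples the sender from the email function, so messages are retained and redelivered if the function returns errors. A hedged sketch of the publishing side in Python (project, topic, and payload fields are placeholders):

```python
# Hedged sketch: the sender publishes the email payload instead of calling
# the Cloud Function directly; Pub/Sub retains and retries undelivered work.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "order-confirmation-emails")

email = {"to": "customer@example.com", "order_id": "A1001"}
future = publisher.publish(topic_path, json.dumps(email).encode("utf-8"))
future.result()  # block until Pub/Sub acknowledges the publish
```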

Question 35

You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices?

A. Configure your Cloud Run service with a Cloud SQL connection.

B. Configure your Cloud Run service to use a Serverless VPC Access connector.

C. Configure your application to use the Cloud SQL Java connector.

D. Configure your application to connect to an instance of the Cloud SQL Auth proxy.

 


Correct Answer: B

Question 36

You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?

A. 1. Create a managed instance group. Replicate the static content across the virtual machines (VMs). 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the managed instance group.

B. 1. Create an unmanaged instance group. Replicate the static content across the VMs. 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the unmanaged instance group.

C. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.

D. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.

 


Correct Answer: D

Question 37

You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do?

A. Add a startup probe.

B. Increase the initial delay for the liveness probe.

C. Increase the CPU limit for the container.

D. Add a readiness probe.

 


Correct Answer: A

Question 38

You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in the new architecture:
• Multiple Compute Engine machines, each running an instance of the authentication service
• Multiple Compute Engine machines, each running an instance of the audit service
• Pub/Sub to send the events from the authentication services
How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently?

A. Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages.

B. Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages.

C. Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services.

D. Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service.

E. Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit service.

 


Correct Answer: A
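With a single topic and a single pull subscription, every audit service instance attaches to the same subscription and Pub/Sub load-balances messages across them. A brief sketch of one audit instance's consumer loop (project, subscription name, and the process() helper are placeholders):

```python
# Sketch: each audit service instance pulls from the same shared subscription,
# so adding instances increases total throughput.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "auth-audit-events")

def process(payload: bytes) -> None:
    # Placeholder for writing the audit event to durable storage.
    print(payload)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    process(message.data)
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    streaming_pull.result()  # block and consume until cancelled
```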

Question 39

You are working on a new application that is deployed on Cloud Run and uses Cloud Functions. Each time new features are added, new Cloud Functions and Cloud Run services are deployed. You use ENV variables to keep track of the services and enable interservice communication, but the maintenance of the ENV variables has become difficult. You want to implement dynamic discovery in a scalable way. What should you do?

A. Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs to query for deployed Cloud Run services and Cloud Functions in the Google Cloud project.

B. Create a Service Directory namespace. Use API calls to register the services during deployment, and query during runtime.

C. Rename the Cloud Functions and Cloud Run service endpoints using a well-documented naming convention.

D. Deploy Hashicorp Consul on a single Compute Engine instance. Register the services with Consul during deployment, and query during runtime.

 


Correct Answer: B

Question 40

You want to migrate an on-premises container running in Knative to Google Cloud. You need to make sure that the migration doesn't affect your application's deployment strategy, and you want to use a fully managed service. Which Google Cloud service should you use to deploy your container?

A. Cloud Run

B. Compute Engine

C. Google Kubernetes Engine

D. App Engine flexible environment

 


Correct Answer: A

Question 41

Your application is composed of a set of loosely coupled services orchestrated by code executed on Compute Engine. You want your application to easily bring up new Compute Engine instances that find and use a specific version of a service. How should this be configured?

A. Define your service endpoint information as metadata that is retrieved at runtime and used to connect to the desired service.

B. Define your service endpoint information as label data that is retrieved at runtime and used to connect to the desired service.

C. Define your service endpoint information to be retrieved from an environment variable at runtime and used to connect to the desired service.

D. Define your service to use a fixed hostname and port to connect to the desired service. Replace the service at the endpoint with your new version.

 


Correct Answer: C

Question 42

Your team manages a large Google Kubernetes Engine (GKE) cluster. Several application teams currently use the same namespace to develop microservices for the cluster. Your organization plans to onboard additional teams to create microservices. You need to configure multiple environments while ensuring the security and optimal performance of each team’s work. You want to minimize cost and follow Google-recommended best practices. What should you do?

A. Create new role-based access controls (RBAC) for each team in the existing cluster, and define resource quotas.

B. Create a new namespace for each environment in the existing cluster, and define resource quotas.

C. Create a new GKE cluster for each team.

D. Create a new namespace for each team in the existing cluster, and define resource quotas.

 


Correct Answer: D

Question 43

You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?

A. Reserve a static external IP address and assign it to an HTTP(S) load balancing service’s forwarding rule. Clients should use this IP address to connect to the service.

B. Reserve a static external IP address and assign it to an HTTP(S) load balancing service’s forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.

C. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.

D. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/.

 


Correct Answer: C

Question 44

Your company's security team uses Identity and Access Management (IAM) to track which users have access to which resources. You need to create a version control system that can integrate with your security team's processes. You want your solution to support fast release cycles and frequent merges to your main branch to minimize merge conflicts. What should you do?

A. Create a Cloud Source Repositories repository, and use trunk-based development.

B. Create a Cloud Source Repositories repository, and use feature-based development.

C. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use trunk-based development.

D. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use feature-based development.

 


Correct Answer: A

Question 45

You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?

A. Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod is failing.

B. Create the Deployment with a livenessProbe for the container that will fail if the container can’t connect to the database. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.

C. Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.

D. Create the Deployment with an initContainer that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.

 


Correct Answer: C

Question 46

You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to invoke the backend services?

A. Traffic Director

B. Service Directory

C. Anthos Service Mesh

D. Internal HTTP(S) Load Balancing

 


Correct Answer: D

Question 47

You are developing an application that will allow users to read and post comments on news articles. You want to configure your application to store and display user-submitted comments using Firestore. How should you design the schema to support an unknown number of comments and articles?

A. Store each comment in a subcollection of the article.

B. Add each comment to an array property on the article.

C. Store each comment in a document, and add the comment’s key to an array property on the article.

D. Store each comment in a document, and add the comment’s key to an array property on the user profile.

 


Correct Answer: A
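Subcollections are the Firestore-recommended way to model one-to-many data of unbounded size: a single document is capped at 1 MiB, so an ever-growing array of comments on the article document would eventually hit that limit, while a comments subcollection can grow without bound and can be queried and paginated on its own. A minimal sketch with the google-cloud-firestore Python client; collection and field names are illustrative:

from google.cloud import firestore

db = firestore.Client()

def add_comment(article_id: str, user_id: str, text: str) -> None:
    """Store a comment in the 'comments' subcollection of the article."""
    article_ref = db.collection("articles").document(article_id)
    article_ref.collection("comments").add(
        {
            "user_id": user_id,
            "text": text,
            "created": firestore.SERVER_TIMESTAMP,
        }
    )

def latest_comments(article_id: str, limit: int = 20):
    """Return the newest comments for display under the article."""
    comments = (
        db.collection("articles")
        .document(article_id)
        .collection("comments")
        .order_by("created", direction=firestore.Query.DESCENDING)
        .limit(limit)
        .stream()
    )
    return [doc.to_dict() for doc in comments]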

Question 48

You are working on a social media application. You plan to add a feature that allows users to upload images. These images will be 2 MB to 1 GB in size. You want to minimize your infrastructure operations overhead for this feature.
What should you do?

A. Change the application to accept images directly and store them in the database that stores other user information.

B. Change the application to create signed URLs for Cloud Storage. Transfer these signed URLs to the client application to upload images to Cloud Storage.

C. Set up a web server on GCP to accept user images and create a file store to keep uploaded files. Change the application to retrieve images from the file store.

D. Create a separate bucket for each user in Cloud Storage. Assign a separate service account to allow write access on each bucket. Transfer service account credentials to the client application based on user information. The application uses this service account to upload images to Cloud Storage.

 


Correct Answer: B
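Signed URLs let the client upload directly to Cloud Storage, so the application never has to proxy large payloads and there is no upload server to operate. A minimal sketch with the google-cloud-storage Python client; the bucket name and object path are hypothetical, and for objects approaching 1 GB Cloud Storage's resumable uploads are a better fit than a single PUT:

import datetime
from google.cloud import storage

def create_upload_url(user_id: str, filename: str) -> str:
    """Return a short-lived V4 signed URL the client can PUT the image to."""
    client = storage.Client()
    bucket = client.bucket("social-app-user-images")  # hypothetical bucket
    blob = bucket.blob(f"uploads/{user_id}/{filename}")
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),
        method="PUT",
        content_type="image/jpeg",
    )

The client then issues an HTTP PUT of the image bytes (with the same Content-Type) to the returned URL; the application only hands out URLs and never touches the payload.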

Question 49

Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's APIs are having occasional application failures. They want to collect application information specifically to troubleshoot the issue. What should they do?

A. Take frequent snapshots of the virtual machines.

B. Install the Cloud Logging agent on the virtual machines.

C. Install the Cloud Monitoring agent on the virtual machines.

D. Use Cloud Trace to look for performance bottlenecks.

 


Correct Answer: B
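With the Logging agent (or the newer Ops Agent) installed on the VMs, application log output is shipped to Cloud Logging, where it can be searched when the APIs fail. Because the case study notes the application currently has no logging, the APIs themselves also need to emit log entries; a minimal sketch using the google-cloud-logging Python client, which attaches Cloud Logging to the standard logging module (message contents are illustrative):

import logging
import google.cloud.logging

# Route the standard logging module to Cloud Logging.
client = google.cloud.logging.Client()
client.setup_logging()

def handle_request(request_id: str):
    try:
        logging.info("handling request %s", request_id)
        # ... existing API logic ...
    except Exception:
        # Stack traces in Cloud Logging make occasional failures traceable.
        logging.exception("request %s failed", request_id)
        raise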

Question 50

Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store users' session and shopping cart information while following Google-recommended best practices?

A. Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL.

B. Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID.

C. Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances.

D. Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in Firestore.

 


Correct Answer: D
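Memorystore (Redis or Memcached) is Google's recommended store for short-lived session state because entries can expire automatically, while Firestore keeps the cart durably so it survives the 30-minute logout. A minimal sketch, assuming a Memorystore for Redis instance reachable at a placeholder IP and illustrative collection and key names:

import json
import redis
from google.cloud import firestore

r = redis.Redis(host="10.0.0.3", port=6379)  # placeholder Memorystore IP
db = firestore.Client()

SESSION_TTL_SECONDS = 30 * 60  # auto-expire sessions after 30 minutes of inactivity

def save_session(session_id: str, user_id: str) -> None:
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS,
            json.dumps({"user_id": user_id}))

def save_cart(user_id: str, items: list) -> None:
    # The cart outlives the session, so it is stored durably in Firestore.
    db.collection("carts").document(user_id).set({"items": items}, merge=True)

def load_cart(user_id: str) -> list:
    doc = db.collection("carts").document(user_id).get()
    return doc.to_dict().get("items", []) if doc.exists else []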

Access Full Google Professional Cloud Developer Exam Prep Free

Want to go beyond these 50 questions? Click here to unlock a full set of Google Professional Cloud Developer exam prep free questions covering every domain tested on the exam.

We continuously update our content to ensure you have the most current and effective prep materials.

Good luck with your Google Professional Cloud Developer certification journey!
