Google Professional Cloud Architect Practice Test

Professional Cloud Architect on Google Cloud Platform

Note: Test Case questions are at the end of the exam
Last exam update: Dec 14, 2024
Page 1 out of 17. Viewing questions 1-15 out of 259

Question 1 Topic 6, Mixed Questions

One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes
to the application data.
How can you design your logging system to verify the authenticity of your logs?

  • A. Write the log concurrently in the cloud and on premises
  • B. Use a SQL database and limit who can modify the log table
  • C. Digitally sign each timestamp and log entry and store the signature
  • D. Create a JSON dump of each log entry and store it in Google Cloud Storage
Answer:

C
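Option C can be sketched in a few lines. The snippet below uses an HMAC as a symmetric stand-in for a true digital signature (in production you would sign with an asymmetric key pair, e.g. managed in Cloud KMS); the key handling and entry layout are illustrative assumptions, not part of the question.

```python
import hashlib
import hmac
import json
import time

# Assumption: in practice this key would come from a secret manager, not source code.
SIGNING_KEY = b"replace-with-a-key-from-a-secret-manager"

def sign_entry(message: str) -> dict:
    """Create a log entry whose timestamp and body are covered by an HMAC signature."""
    entry = {"timestamp": time.time(), "message": message}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    """Recompute the HMAC over the entry (minus its signature) and compare."""
    unsigned = {k: v for k, v in entry.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])

e = sign_entry("user 42 updated record 7")
assert verify_entry(e)
```

Any later tampering with the stored message or timestamp makes `verify_entry` return `False`, which is exactly the trust property the question asks for.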


Question 2 Topic 6, Mixed Questions

Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9
months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose? (Choose two.)

  • A. Compute Engine with containers
  • B. Google Kubernetes Engine with containers
  • C. Google App Engine Standard Environment
  • D. Compute Engine with custom instance types
  • E. Compute Engine with managed instance groups
Answer:

B C


Explanation:
B: With Google Kubernetes Engine (formerly Container Engine), Google will automatically deploy your cluster for you and update, patch, and secure the nodes. Kubernetes Engine's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run.
C: Solutions like Datastore, BigQuery, App Engine, etc. are truly NoOps.
App Engine by default scales the number of instances running up and down to match the load, providing consistent performance for your app at all times while minimizing idle instances and thus reducing cost.
Note: At a high level, NoOps means that there is no infrastructure to build out and manage while using the platform. Typically, the compromise you make with NoOps is that you lose control of the underlying infrastructure.
Reference: https://www.quora.com/How-well-does-Google-Container-Engine-support-Google-Cloud-Platform%E2%80%99s-NoOps-claim


Question 3 Topic 6, Mixed Questions

Your marketing department wants to send out a promotional email campaign. The development team wants to minimize
direct operations management. They project a wide range of possible customer responses, from 100 to 500,000 click-throughs
per day. The link leads to a simple website that explains the promotion and collects user information and preferences. Which
infrastructure should you recommend? (Choose two.)

  • A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
  • B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
  • C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
  • D. Use a single Compute Engine virtual machine (VM) to host a web server, backed by Google Cloud SQL.
Answer:

A C


Explanation:

Reference: https://cloud.google.com/storage-options/


Question 4 Topic 6, Mixed Questions

You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?

  • A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size=10
  • B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enableautoscaling,max-nodes-10
  • C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
  • D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
Answer:

C


Question 5 Topic 6, Mixed Questions

Your company places a high value on being responsive and meeting customer needs quickly. Their primary business
objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced.
Which two actions can you take? (Choose two.)

  • A. Ensure every code check-in is peer reviewed by a security SME
  • B. Use source code security analyzers as part of the CI/CD pipeline
  • C. Ensure you have stubs to unit test all interfaces between components
  • D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline
  • E. Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery (CI/CD) pipeline
Answer:

B E
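As a sketch of how options B and E fit into a pipeline, the Cloud Build configuration below runs a source analyzer and an image vulnerability scan as ordinary build steps, failing the build on findings. The specific tools (Bandit, Trivy) and the image names are illustrative assumptions, not part of the question.

```yaml
# Illustrative cloudbuild.yaml: fail the build if static analysis (B)
# or the vulnerability scan (E) reports problems.
steps:
  # Source code security analysis (example: Bandit for Python code)
  - name: 'python'
    entrypoint: 'bash'
    args: ['-c', 'pip install bandit && bandit -r .']
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$SHORT_SHA', '.']
  # Vulnerability scan of the built image (example: Trivy); non-zero exit fails the build
  - name: 'aquasec/trivy'
    args: ['image', '--exit-code', '1', 'gcr.io/$PROJECT_ID/app:$SHORT_SHA']
images: ['gcr.io/$PROJECT_ID/app:$SHORT_SHA']
```

Because both checks run on every commit, security errors are caught automatically without slowing release speed the way mandatory human review (option A) would.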


Question 6 Topic 6, Mixed Questions

Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of
all projects in the organization. You provision the Google Cloud Resource Manager and set up yourself as the org admin.
What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

  • A. Org viewer, project owner
  • B. Org viewer, project viewer
  • C. Org admin, project browser
  • D. Project owner, network admin
Answer:

B


Question 7 Topic 6, Mixed Questions

You are helping the QA team to roll out a new load-testing tool to test the scalability of your primary cloud services that run
on Google Compute Engine with Cloud Bigtable.
Which three requirements should they include? (Choose three.)

  • A. Ensure that the load tests validate the performance of Cloud Bigtable
  • B. Create a separate Google Cloud project to use for the load-testing environment
  • C. Schedule the load-testing tool to regularly run against the production environment
  • D. Ensure all third-party systems your services use are capable of handling high load
  • E. Instrument the production services to record every transaction for replay by the load-testing tool
  • F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
Answer:

A B F


Question 8 Topic 6, Mixed Questions

Your company runs several databases on a single MySQL instance. They need to take backups of a specific database at
regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk
performance.
How should you configure the storage?

  • A. Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots.
  • B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.
  • C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.
  • D. Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID 10 array and use LVM to create snapshots to send to Cloud Storage.
Answer:

B


Question 9 Topic 6, Mixed Questions

You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace
the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a
different project in the US-East region.
What steps must you take?

  • A. Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
  • B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
  • C. Create an image file from the root disk with Linux dd command, create a new virtual machine instance in the US-East region
  • D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
Answer:

D


Question 10 Topic 6, Mixed Questions

A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH
port open to the world. You want to discover this network's origin.
What should you do?

  • A. Search for Create VM entry in the Stackdriver alerting console
  • B. Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry
  • C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry
  • D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list
Answer:

C


Explanation:
Incorrect Answers:
A: To use the Stackdriver alerting console, we must first set up alerting policies.
B: Data Access logs only contain read-only operations.
Audit logs help you determine who did what, where, and when. Cloud Audit Logging returns two types of logs:
Admin Activity logs: contain log entries for operations that modify the configuration or metadata of resources, such as creating a network.
Data Access logs: contain log entries for read-only operations that do not modify any data, such as get, list, and aggregated list methods.
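To make the admin-activity vs. data-access distinction concrete, the sketch below filters simplified audit-log entries for the insert call that created a network. The entry layout is heavily abbreviated and illustrative; real Cloud Audit Logs entries carry many more fields.

```python
# Illustrative (simplified) Cloud Audit Logs entries. The answer to "who
# created the network" lives in an *activity* log entry whose methodName
# is a networks.insert call.
log_entries = [
    {"logName": "projects/demo/logs/cloudaudit.googleapis.com%2Factivity",
     "protoPayload": {"methodName": "v1.compute.networks.insert",
                      "authenticationInfo": {"principalEmail": "alice@example.com"}}},
    {"logName": "projects/demo/logs/cloudaudit.googleapis.com%2Fdata_access",
     "protoPayload": {"methodName": "v1.compute.networks.list",
                      "authenticationInfo": {"principalEmail": "bob@example.com"}}},
]

def who_created_networks(entries):
    """Return the principals behind network-creating (insert) admin-activity entries."""
    return [e["protoPayload"]["authenticationInfo"]["principalEmail"]
            for e in entries
            if e["logName"].endswith("activity")
            and e["protoPayload"]["methodName"].endswith("networks.insert")]

print(who_created_networks(log_entries))  # → ['alice@example.com']
```

Note that the read-only `networks.list` call in the Data Access log is ignored, which is why option B (searching Data Access entries) cannot surface the network's origin.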


Question 11 Topic 6, Mixed Questions

The application reliability team at your company has added a debug feature to their backend service to send all server
events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are
expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?

  • A. Append metadata to the file body. Compress individual files. Name files with serverName-Timestamp. Create a new bucket if the current bucket is older than 1 hour and save individual files to the new bucket; otherwise, save files to the existing bucket.
  • B. Batch every 10,000 events with a single manifest file for metadata. Compress event files and manifest file into a single archive file. Name files using serverName-EventSequence. Create a new bucket if the current bucket is older than 1 day and save the single archive file to the new bucket; otherwise, save it to the existing bucket.
  • C. Compress individual files. Name files with serverName-EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.
  • D. Append metadata to the file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket.
Answer:

D
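The key idea behind answer D is that a random name prefix spreads writes across the object keyspace, avoiding the request hotspots that strictly sequential names can create at 3,000 writes per second. A minimal sketch, with an assumed naming scheme:

```python
import secrets

def object_name(server_name: str, event_sequence: int) -> str:
    """Build a GCS object name with a short random prefix so that uploads
    spread across the keyspace instead of hammering one index range
    (sequential names like serverName-0001, serverName-0002 create hotspots)."""
    prefix = secrets.token_hex(3)  # e.g. 'a1b2c3'; 6 hex chars is plenty
    return f"{prefix}/{server_name}-{event_sequence:012d}"

name = object_name("web-7", 12345)
print(name)
```

The original event metadata still travels with the file body, so nothing is lost by randomizing the name itself.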


Question 12 Topic 6, Mixed Questions

A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not
distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform.
What should you do?

  • A. Help the engineer to convert his websocket code to use HTTP streaming
  • B. Review the encryption requirements for websocket connections with the security team
  • C. Meet with the cloud operations team and the engineer to discuss load balancer options
  • D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
Answer:

C


Explanation:
Google Cloud Platform (GCP) HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for
your instances. The HTTP(S) load balancer has native support for the WebSocket protocol.
Incorrect Answers:
A: HTTP server push, also known as HTTP streaming, is a client-server communication pattern that sends information from
an HTTP server to a client asynchronously, without a client request. A server push architecture is especially effective for
highly interactive web or mobile applications, where one or more clients need to receive continuous information from the
server.
Reference: https://cloud.google.com/compute/docs/load-balancing/http/


Question 13 Topic 6, Mixed Questions

Your company's test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The
full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your
company wants to move the testing infrastructure to the cloud, to reduce the amount of time it takes to fully test a change to
the system, while changing the tests as little as possible.
Which cloud infrastructure should you recommend?

  • A. Google Compute Engine unmanaged instance groups and Network Load Balancer
  • B. Google Compute Engine managed instance groups with auto-scaling
  • C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test
  • D. Google App Engine with Google StackDriver for logging
Answer:

B


Explanation:
Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be launched from the
standard images or custom images created by users.
Managed instance groups offer autoscaling capabilities that allow you to automatically add or remove instances from a
managed instance group based on increases or decreases in load. Autoscaling helps your applications gracefully handle
increases in traffic and reduces cost when the need for resources is lower.
Incorrect Answers:
A: A Network Load Balancer is unnecessary, as there is no mention of incoming IP data traffic for the custom C++ application.
C: Apache Hadoop is not fit for testing C++ applications. Apache Hadoop is an open-source software framework used for
distributed storage and processing of datasets of big data using the MapReduce programming model.
D: Google App Engine is intended to be used for web applications. Google App Engine (often referred to as GAE or simply App Engine) is a web framework and cloud computing platform for developing and hosting web applications in Google-managed data centers.
Reference: https://cloud.google.com/compute/docs/autoscaler/


Question 14 Topic 6, Mixed Questions

You are working at an institution that processes medical data. You are migrating several workloads onto Google Cloud.
Company policies require all workloads to run on physically separated hardware, and workloads from different clients must
also be separated. You created a sole-tenant node group and added a node for each client. You need to deploy the
workloads on these dedicated hosts. What should you do?

  • A. Add the node group name as a network tag when creating Compute Engine instances in order to host each workload on the correct node group.
  • B. Add the node name as a network tag when creating Compute Engine instances in order to host each workload on the correct node.
  • C. Use node affinity labels based on the node group name when creating Compute Engine instances in order to host each workload on the correct node group.
  • D. Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node.
Answer:

C


Explanation:
Reference: https://cloud.google.com/compute/docs/nodes/provisioning-sole-tenant-vms
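As a sketch of option C, sole-tenant placement is selected at instance-creation time with node affinity labels, for example a JSON file passed to `gcloud compute instances create --node-affinity-file`. The node group name below is a made-up example:

```json
[
  {
    "key": "compute.googleapis.com/node-group-name",
    "operator": "IN",
    "values": ["client-a-group"]
  }
]
```

Targeting the node group name (rather than a single node name, option D) keeps the workload on the client's dedicated hardware while still letting Compute Engine pick any node in that group.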


Question 15 Topic 6, Mixed Questions

Your company has a Google Cloud project that uses BigQuery for data warehousing. They have a VPN tunnel between the
on-premises environment and Google Cloud that is configured with Cloud VPN.
The security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing.
What should they do?

  • A. Configure Private Google Access for on-premises only.
  • B. Perform the following tasks: 1. Create a service account. 2. Give the BigQuery JobUser role and Storage Reader role to the service account. 3. Remove all other IAM access from the project.
  • C. Configure VPC Service Controls and configure Private Google Access.
  • D. Configure Private Google Access.
Answer:

C


Explanation:
VPC Service Controls defines a service perimeter around Google-managed services such as BigQuery to mitigate data exfiltration risks, while Private Google Access lets on-premises hosts reach those services over the VPN without public IP addresses.
Reference: https://cloud.google.com/vpc-service-controls/docs/overview
