
Google Associate-Cloud-Engineer Google Cloud Certified - Associate Cloud Engineer Exam Practice Test


Google Cloud Certified - Associate Cloud Engineer Questions and Answers

Question 1

(You are migrating your company’s on-premises compute resources to Google Cloud. You need to deploy batch processing jobs that run every night. The jobs require significant CPU and memory for several hours but can tolerate interruptions. You must ensure that the deployment is cost-effective. What should you do?)

Options:

A.

Containerize the batch processing jobs and deploy them on Compute Engine.

B.

Use custom machine types on Compute Engine.

C.

Use the M1 machine series on Compute Engine.

D.

Use Spot VMs on Compute Engine.

Question 2

You have been asked to migrate a docker application from datacenter to cloud. Your solution architect has suggested uploading docker images to GCR in one project and running an application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?

Options:

A.

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace

B.

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1

C.

Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace

D.

Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1
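
For reference, a minimal sketch of building the image in the image project and consuming it from GKE in the production project, as the scenario describes; the cluster name, zone, and deployment name below are hypothetical placeholders:

    # Build with Cloud Build and push to GCR in the image project,
    # tagged acme_track_n_trace:v1 (values from the scenario).
    gcloud builds submit --project img-278322 \
        --tag gcr.io/img-278322/acme_track_n_trace:v1 .

    # In the production project, a GKE workload can pull the image by its full path.
    # "prod-cluster" and the zone are placeholder values; the GKE node service
    # account also needs read access to the image in img-278322.
    gcloud container clusters get-credentials prod-cluster \
        --project prod-278986 --zone us-central1-a
    kubectl create deployment track-n-trace \
        --image=gcr.io/img-278322/acme_track_n_trace:v1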

Question 3

You need to assign a Cloud Identity and Access Management (Cloud IAM) role to an external auditor. The auditor needs to have permissions to review your Google Cloud Platform (GCP) Audit Logs and also to review your Data Access logs. What should you do?

Options:

A.

Assign the auditor the IAM role roles/logging.privateLogViewer. Perform the export of logs to Cloud Storage.

B.

Assign the auditor the IAM role roles/logging.privateLogViewer. Direct the auditor to also review the logs for changes to Cloud IAM policy.

C.

Assign the auditor’s IAM user to a custom role that has the logging.privateLogEntries.list permission. Perform the export of logs to Cloud Storage.

D.

Assign the auditor’s IAM user to a custom role that has the logging.privateLogEntries.list permission. Direct the auditor to also review the logs for changes to Cloud IAM policy.

Question 4

You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?

Options:

A.

After the VM has been created, use your Google Account credentials to log in to the VM.

B.

After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.

C.

When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.

D.

After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.
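
As a reference point, retrieving Windows credentials with the gcloud CLI (as in option B) might look like the sketch below; the instance name, zone, and username are hypothetical:

    # Generate (or reset) a Windows password for an RDP user on the instance.
    # "win-vm", "us-central1-a", and "admin-user" are placeholder values.
    gcloud compute reset-windows-password win-vm \
        --zone=us-central1-a \
        --user=admin-user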

Question 5

Your team has developed a stateless application which requires it to be run directly on virtual machines. The application is expected to receive a fluctuating amount of traffic and needs to scale automatically. You need to deploy the application. What should you do?

Options:

A.

Deploy the application on a managed instance group and configure autoscaling.

B.

Deploy the application on a Kubernetes Engine cluster and configure node pool autoscaling.

C.

Deploy the application on Cloud Functions and configure the maximum number of instances.

D.

Deploy the application on Cloud Run and configure autoscaling.
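
A minimal sketch of a managed instance group with autoscaling via gcloud (option A); the template, group, and zone names are hypothetical:

    # Create the managed instance group from an existing instance template.
    gcloud compute instance-groups managed create stateless-app-mig \
        --template=stateless-app-template --size=2 --zone=us-central1-a

    # Enable CPU-based autoscaling on the group.
    gcloud compute instance-groups managed set-autoscaling stateless-app-mig \
        --zone=us-central1-a \
        --min-num-replicas=2 --max-num-replicas=10 \
        --target-cpu-utilization=0.6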

Question 6

Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task. What should you do?

Options:

A.

Go to Data Catalog and search for employee_ssn in the search box.

B.

Write a shell script that uses the bq command line tool to loop through all the projects in your organization.

C.

Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn column.

D.

Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find employee_ssn column.

Question 7

You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices. What should you do?

Options:

A.

Add the auditors group to the ‘logging.viewer’ and ‘bigquery.dataViewer’ predefined IAM roles.

B.

Add the auditors group to two new custom IAM roles.

C.

Add the auditor user accounts to the ‘logging.viewer’ and ‘bigquery.dataViewer’ predefined IAM roles.

D.

Add the auditor user accounts to two new custom IAM roles.

Question 8

You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.

How should you configure the auditor's permissions?

Options:

A.

Create a custom role with view-only project permissions. Add the user's account to the custom role.

B.

Create a custom role with view-only service permissions. Add the user's account to the custom role.

C.

Select the built-in IAM project Viewer role. Add the user's account to this role.

D.

Select the built-in IAM service Viewer role. Add the user's account to this role.

Question 9

Your company has developed a new application that consists of multiple microservices. You want to deploy the application to Google Kubernetes Engine (GKE), and you want to ensure that the cluster can scale as more applications are deployed in the future. You want to avoid manual intervention when each new application is deployed. What should you do?

Options:

A.

Deploy the application on GKE, and add a HorizontalPodAutoscaler to the deployment.

B.

Deploy the application on GKE, and add a VerticalPodAutoscaler to the deployment.

C.

Create a GKE cluster with autoscaling enabled on the node pool. Set a minimum and maximum for the size of the node pool.

D.

Create a separate node pool for each application, and deploy each application to its dedicated node pool.

Question 10

You are deploying an application to Google Kubernetes Engine (GKE) that needs to call an external third-party API. You need to provide the external API vendor with a list of IP addresses for their firewall to allow traffic from your application. You want to follow Google-recommended practices and avoid any risk of interrupting traffic to the API due to IP address changes. What should you do?

Options:

A.

Configure your GKE cluster with one node, and set the node to have a static external IP address. Ensure that the GKE cluster autoscaler is off. Send the external IP address of the node to the vendor to be added to the allowlist.

B.

Configure your GKE cluster with private nodes. Configure a Cloud NAT instance with static IP addresses. Provide these IP addresses to the vendor to be added to the allowlist.

C.

Configure your GKE cluster with public nodes. Write a Cloud Function that pulls the public IP addresses of each node in the cluster. Trigger the function to run every day with Cloud Scheduler. Send the list to the vendor by email every day.

D.

Configure your GKE cluster with private nodes. Configure a Cloud NAT instance with dynamic IP addresses. Provide these IP addresses to the vendor to be added to the allowlist.
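
For illustration, reserving static addresses and attaching them to a Cloud NAT gateway (as option B describes) could look like this sketch; the network, router, gateway, and region names are hypothetical:

    # Reserve a static external IP address to hand to the API vendor.
    gcloud compute addresses create nat-ip-1 --region=us-central1

    # Create a Cloud Router and a NAT gateway that uses only the reserved address.
    gcloud compute routers create nat-router \
        --network=gke-vpc --region=us-central1
    gcloud compute routers nats create gke-nat \
        --router=nat-router --region=us-central1 \
        --nat-external-ip-pool=nat-ip-1 \
        --nat-all-subnet-ip-ranges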

Question 11

You are building a data lake on Google Cloud for your Internet of Things (IoT) application. The IoT application has millions of sensors that are constantly streaming structured and unstructured data to your backend in the cloud. You want to build a highly available and resilient architecture based on Google-recommended practices. What should you do?

Options:

A.

Stream data to Pub/Sub, and use Dataflow to send data to Cloud Storage

B.

Stream data to Pub/Sub, and use Storage Transfer Service to send data to BigQuery.

C.

Stream data to Dataflow, and use Storage Transfer Service to send data to BigQuery.

D.

Stream data to Dataflow, and use Dataprep by Trifacta to send data to Bigtable.

Question 12

Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum role for projects. What should you do?

Options:

A.

Add the user to roles/iam.roleAdmin role.

B.

Add the user to roles/iam.securityAdmin role.

C.

Add the user to roles/iam.serviceAccountUser role.

D.

Add the user to roles/iam.serviceAccountAdmin role.

Question 13

The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you do?

Options:

A.

Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.

B.

Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.

C.

Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.

D.

Grant the basic role roles/editor to the DevOps group.

Question 14

You used the gcloud container clusters command to create two Google Kubernetes Engine (GKE) clusters: prod-cluster and dev-cluster.

• prod-cluster is a standard cluster.

• dev-cluster is an Autopilot cluster.

When you run the kubectl get nodes command, you only see the nodes from prod-cluster. Which commands should you run to check the node status for dev-cluster?

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question 15

Your management has asked an external auditor to review all the resources in a specific project. The security team has enabled the Organization Policy called Domain Restricted Sharing on the organization node by specifying only your Cloud Identity domain. You want the auditor to only be able to view, but not modify, the resources in that project. What should you do?

Options:

A.

Ask the auditor for their Google account, and give them the Viewer role on the project.

B.

Ask the auditor for their Google account, and give them the Security Reviewer role on the project.

C.

Create a temporary account for the auditor in Cloud Identity, and give that account the Viewer role on the project.

D.

Create a temporary account for the auditor in Cloud Identity, and give that account the Security Reviewer role on the project.

Question 16

You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the problem. What should you do?

Options:

A.

Use the BigQuery interface to review the nightly job and look for any errors.

B.

Review the Error Reporting page in the Cloud Console to find any errors.

C.

In Cloud Logging, create a filter for your Data Studio report.

D.

Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Question 17

You need to host an application on a Compute Engine instance in a project shared with other teams. You want to prevent the other teams from accidentally causing downtime on that application. Which feature should you use?

Options:

A.

Use a Shielded VM.

B.

Use a Preemptible VM.

C.

Use a sole-tenant node.

D.

Enable deletion protection on the instance.

Question 18

You have a website hosted on App Engine standard environment. You want 1% of your users to see a new test version of the website. You want to minimize complexity. What should you do?

Options:

A.

Deploy the new version in the same application and use the --migrate option.

B.

Deploy the new version in the same application and use the --splits option to give a weight of 99 to the current version and a weight of 1 to the new version.

C.

Create a new App Engine application in the same project. Deploy the new version in that application. Use the App Engine library to proxy 1% of the requests to the new version.

D.

Create a new App Engine application in the same project. Deploy the new version in that application. Configure your network load balancer to send 1% of the traffic to that new application.
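
A minimal sketch of deploying a second version and splitting traffic with the gcloud CLI, as option B describes; the version IDs are hypothetical:

    # Deploy the test version without routing traffic to it.
    gcloud app deploy app.yaml --version=v2-test --no-promote

    # Split traffic: 99% to the current version, 1% to the test version.
    # "v1-current" and "v2-test" are placeholder version IDs.
    gcloud app services set-traffic default \
        --splits=v1-current=0.99,v2-test=0.01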

Question 19

You are building an archival solution for your data warehouse and have selected Cloud Storage to archive your data. Your users need to be able to access this archived data once a quarter for some regulatory requirements. You want to select a cost-efficient option. Which storage option should you use?

Options:

A.

Coldline Storage

B.

Nearline Storage

C.

Regional Storage

D.

Multi-Regional Storage

Question 20

You just installed the Google Cloud CLI on your new corporate laptop. You need to list the existing instances of your company on Google Cloud. What must you do before you run the gcloud compute instances list command?

Choose 2 answers

Options:

A.

Run gcloud auth login, enter your login credentials in the dialog window, and paste the received login token to gcloud CLI.

B.

Create a Google Cloud service account, and download the service account key. Place the key file in a folder on your machine where gcloud CLI can find it.

C.

Download your Cloud Identity user account key. Place the key file in a folder on your machine where gcloud CLI can find it.

D.

Run gcloud config set compute/zone $my_zone to set the default zone for gcloud CLI.

E.

Run gcloud config set project $my_project to set the default project for gcloud CLI.
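
For reference, the authentication and project-selection steps from options A and E could look like this sketch; the project ID is a hypothetical placeholder:

    # Authenticate the CLI with your user account (opens a browser dialog).
    gcloud auth login

    # Set the default project so that subsequent commands target it.
    gcloud config set project my-company-project

    # Listing instances then works without extra flags.
    gcloud compute instances list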

Question 21

You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as quickly and easily as possible. What should you do?

Options:

A.

Deploy Jenkins through the Google Cloud Marketplace.

B.

Create a new Compute Engine instance. Run the Jenkins executable.

C.

Create a new Kubernetes Engine cluster. Create a deployment for the Jenkins image.

D.

Create an instance template with the Jenkins executable. Create a managed instance group with this template.

Question 22

You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?

Options:

A.

Deploy the application on GKE Autopilot.

B.

Deploy the application on GKE Standard.

C.

Deploy the application on Cloud Functions.

D.

Deploy the application on Cloud Run.

Question 23

You are hosting an application on Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?

Options:

A.

Create Compute Engine resources in us-central1-b. Balance the load across both us-central1-a and us-central1-b.

B.

Create a Managed Instance Group and specify us-central1-a as the zone. Configure the Health Check with a short Health Interval.

C.

Create an HTTP(S) Load Balancer. Create one or more global forwarding rules to direct traffic to your VMs.

D.

Perform regular backups of your application. Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. Restore from backups when notified.

Question 24

You are the team lead of a group of 10 developers. You provided each developer with an individual Google Cloud Project that they can use as their personal sandbox to experiment with different Google Cloud solutions. You want to be notified if any of the developers are spending above $500 per month on their sandbox environment. What should you do?

Options:

A.

Create a single budget for all projects and configure budget alerts on this budget.

B.

Create a separate billing account per sandbox project and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per billing account.

C.

Create a budget per project and configure budget alerts on all of these budgets.

D.

Create a single billing account for all sandbox projects and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per project.

Question 25

You are monitoring an application and receive user feedback that a specific error is spiking. You notice that the error is caused by a Service Account having insufficient permissions. You are able to solve the problem but want to be notified if the problem recurs. What should you do?

Options:

A.

In the Log Viewer, filter the logs on severity 'Error' and the name of the Service Account.

B.

Create a sink to BigQuery to export all the logs. Create a Data Studio dashboard on the exported logs.

C.

Create a custom log-based metric for the specific error to be used in an Alerting Policy.

D.

Grant Project Owner access to the Service Account.
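
As a sketch of the log-based metric approach in option C, a counter metric could be created as shown below; the metric name, service account email, and log filter are illustrative assumptions, not the exact error signature:

    # Count log entries that match the permission error for the service account.
    # The filter below is an assumed example; adjust it to the real error message.
    gcloud logging metrics create sa-permission-errors \
        --description="Permission errors for the reporting service account" \
        --log-filter='severity=ERROR AND protoPayload.authenticationInfo.principalEmail="reporting-sa@my-project.iam.gserviceaccount.com"'

    # An alerting policy on this metric can then be created in Cloud Monitoring.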

Question 26

You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account. What should you do?

Options:

A.

When creating the VM via the web console, specify the service account under the ‘Identity and API Access’ section.

B.

Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account.

C.

Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account.

D.

Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.
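
For illustration, the console step described in option A maps to the --service-account flag when creating the VM with gcloud; the instance, zone, and service account names below are hypothetical:

    # Create the VM with the custom service account instead of the default one.
    gcloud compute instances create sql-client-vm \
        --zone=us-central1-a \
        --service-account=cloudsql-client@my-project.iam.gserviceaccount.com \
        --scopes=cloud-platform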

Question 27

You are about to deploy a new Enterprise Resource Planning (ERP) system on Google Cloud. The application holds the full database in-memory for fast data access, and you need to configure the most appropriate resources on Google Cloud for this application. What should you do?

Options:

A.

Provision preemptible Compute Engine instances.

B.

Provision Compute Engine instances with GPUs attached.

C.

Provision Compute Engine instances with local SSDs attached.

D.

Provision Compute Engine instances with M1 machine type.

Question 28

Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

Options:

A.

Create a bash script that contains all required steps as gcloud commands.

B.

Develop templates for the environment using Cloud Deployment Manager

C.

Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.

D.

Use the Cloud Console interface to provision and manage all related resources

Question 29

You have a workload running on Compute Engine that is critical to your business. You want to ensure that the data on the boot disk of this workload is backed up regularly. You need to be able to restore a backup as quickly as possible in case of disaster. You also want older backups to be cleaned automatically to save on cost. You want to follow Google-recommended practices. What should you do?

Options:

A.

Create a Cloud Function to create an instance template.

B.

Create a snapshot schedule for the disk using the desired interval.

C.

Create a cron job to create a new disk from the disk using gcloud.

D.

Create a Cloud Task to create an image and export it to Cloud Storage.
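
A minimal sketch of a snapshot schedule with automatic retention attached to the boot disk, as described in option B; the policy, disk, region, and zone names are hypothetical:

    # Create a daily snapshot schedule that deletes snapshots after 14 days.
    gcloud compute resource-policies create snapshot-schedule daily-boot-backup \
        --region=us-central1 \
        --daily-schedule --start-time=04:00 \
        --max-retention-days=14

    # Attach the schedule to the workload's boot disk.
    gcloud compute disks add-resource-policies workload-boot-disk \
        --resource-policies=daily-boot-backup --zone=us-central1-a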

Question 30

Your company developed a mobile game that is deployed on Google Cloud. Gamers are connecting to the game with their personal phones over the Internet. The game sends UDP packets to update the servers about the gamers' actions while they are playing in multiplayer mode. Your game backend can scale over multiple virtual machines (VMs), and you want to expose the VMs over a single IP address. What should you do?

Options:

A.

Configure an SSL Proxy load balancer in front of the application servers.

B.

Configure an Internal UDP load balancer in front of the application servers.

C.

Configure an External HTTP(s) load balancer in front of the application servers.

D.

Configure an External Network load balancer in front of the application servers.

Question 31

You recently received a new Google Cloud project with an attached billing account where you will work. You need to create instances, set firewalls, and store data in Cloud Storage. You want to follow Google-recommended practices. What should you do?

Options:

A.

Use the gcloud services enable cloudresourcemanager.googleapis.com command to enable all resources.

B.

Use the gcloud services enable compute.googleapis.com command to enable Compute Engine and the gcloud services enable storage-api.googleapis.com command to enable the Cloud Storage APIs.

C.

Open the Google Cloud console and enable all Google Cloud APIs from the API dashboard.

D.

Open the Google Cloud console and run gcloud init --project in a Cloud Shell.

Question 32

You are planning to migrate your on-premises VMs to Google Cloud. You need to set up a landing zone in Google Cloud before migrating the VMs. You must ensure that all VMs in your production environment can communicate with each other through private IP addresses. You need to allow all VMs in your Google Cloud organization to accept connections on specific TCP ports. You want to follow Google-recommended practices, and you need to minimize your operational costs. What should you do?

Options:

A.

Create individual VPCs per Google Cloud project. Peer all the VPCs together. Apply organization policies on the organization level.

B.

Create individual VPCs for each Google Cloud project. Peer all the VPCs together. Apply hierarchical firewall policies on the organization level.

C.

Create a host VPC project with each production project as its service project. Apply organization policies on the organization level.

D.

Create a host VPC project with each production project as its service project. Apply hierarchical firewall policies on the organization level.

Question 33

Your organization needs to grant users access to query datasets in BigQuery but prevent them from accidentally deleting the datasets. You want a solution that follows Google-recommended practices. What should you do?

Options:

A.

Add users to the roles/bigquery.user role only, instead of roles/bigquery.dataOwner.

B.

Add users to the roles/bigquery.dataEditor role only, instead of roles/bigquery.dataOwner.

C.

Create a custom role by removing delete permissions, and add users to that role only.

D.

Create a custom role by removing delete permissions. Add users to the group, and then add the group to the custom role.

Question 34

(You manage a VPC network in Google Cloud with a subnet that is rapidly approaching its private IP address capacity. You expect the number of Compute Engine VM instances in the same region to double within a week. You need to implement a Google-recommended solution that minimizes operational costs and does not require downtime. What should you do?)

Options:

A.

Create a second VPC with the same subnet IP range, and connect this VPC to the existing VPC by using VPC Network Peering.

B.

Delete the existing subnet, and create a new subnet with double the IP range available.

C.

Use the Google Cloud CLI tool to expand the primary IP range of your subnet.

D.

Permit additional traffic from the expected range of private IP addresses to reach your VMs by configuring firewall rules.

Question 35

Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?

Options:

A.

Create a cluster with a single node-pool by using standard VMs. Label the fault-tolerant Deployments as spot-true.

B.

Create a cluster with a single node-pool by using Spot VMs. Label the critical Deployments as spot-false.

C.

Create a cluster with both a Spot VM node pool and a node pool using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool using standard VMs.

D.

Create a cluster with both a Spot VM node pool and a node pool using standard VMs. Deploy the critical deployments on the node pool using standard VMs and the fault-tolerant deployments on the Spot VM node pool.

Question 36

Your continuous integration and delivery (CI/CD) server can't execute Google Cloud actions in a specific project because of permission issues. You need to validate whether the used service account has the appropriate roles in the specific project. What should you do?

Options:

A.

Open the Google Cloud console, and run a query to determine which resources this service account can access.

B.

Open the Google Cloud console, and run a query of the audit logs to find permission denied errors for this service account.

C.

Open the Google Cloud console, and check the organization policies.

D.

Open the Google Cloud console, and check the Identity and Access Management (IAM) roles assigned to the service account at the project or inherited from the folder or organization levels.

Question 37

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

Options:

A.

Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.

B.

Recreate all the nodes of the GKE cluster to enable GPUs on all of them.

C.

Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.

D.

Add a new GPU-enabled node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
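
For reference, adding a GPU node pool as option D describes might look like this sketch; the cluster, pool, and zone names are hypothetical:

    # Add a GPU-enabled node pool that can scale down when idle.
    gcloud container node-pools create gpu-pool \
        --cluster=shared-dev-cluster --zone=us-central1-a \
        --accelerator=type=nvidia-tesla-p100,count=1 \
        --num-nodes=1 --enable-autoscaling --min-nodes=0 --max-nodes=3

    # Pods from the ML team then target the pool with a nodeSelector such as
    #   cloud.google.com/gke-accelerator: nvidia-tesla-p100
    # in their pod specification, plus a GPU resource limit.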

Question 38

You manage three Google Cloud projects with the Cloud Monitoring API enabled. You want to follow Google-recommended practices to visualize CPU and network metrics for all three projects together. What should you do?

Options:

A.

1. Create a Cloud Monitoring Dashboard. 2. Collect metrics and publish them into Pub/Sub topics. 3. Add CPU and network charts for each of the three projects.

B.

1. Create a Cloud Monitoring Dashboard. 2. Select the CPU and network metrics from the three projects. 3. Add CPU and network charts for each of the three projects.

C.

1. Create a Service Account and apply roles/viewer on the three projects. 2. Collect metrics and publish them to the Cloud Monitoring API. 3. Add CPU and network charts for each of the three projects.

D.

1. Create a fourth Google Cloud project. 2. Create a Cloud Monitoring Workspace from the fourth project and add the other three projects.

Question 39

You need to deploy a third-party software application onto a single Compute Engine VM instance. The application requires the highest speed read and write disk access for the internal database. You need to ensure the instance will recover on failure. What should you do?

Options:

A.

Create an instance template. Set the disk type to be an SSD Persistent Disk. Launch the instance template as part of a stateful managed instance group.

B.

Create an instance template. Set the disk type to be an SSD Persistent Disk. Launch the instance template as part of a stateless managed instance group.

C.

Create an instance template. Set the disk type to be Hyperdisk Extreme. Launch the instance template as part of a stateful managed instance group.

D.

Create an instance template. Set the disk type to be Hyperdisk Extreme. Launch the instance template as part of a stateless managed instance group.

Question 40

You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?

Options:

A.

Use kubectl to delete the topic resource.

B.

Use gcloud CLI to delete the topic.

C.

Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.

D.

Use gcloud CLI to update the topic label managed-by-cnrm to false.

Question 41

You are asked to set up application performance monitoring on Google Cloud projects A, B, and C as a single pane of glass. You want to monitor CPU, memory, and disk. What should you do?

Options:

A.

Enable API and then share charts from project A, B, and C.

B.

Enable API and then give the metrics.reader role to projects A, B, and C.

C.

Enable API and then use default dashboards to view all projects in sequence.

D.

Enable API, create a workspace under project A, and then add project B and C.

Question 42

You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?

Options:

A.

Create a new subnet in the same region as the subnet being used.

B.

Add an alias IP range to the subnet used by the GKE clusters.

C.

Create a new VPC, and set up VPC peering with the existing VPC.

D.

Expand the CIDR range of the relevant subnet for the cluster.
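
A sketch of expanding a subnet's primary range in place, as in option D; the subnet name, region, and new prefix length are hypothetical:

    # Expand the primary IP range of the existing subnet (the operation cannot be
    # undone, and the new range must contain the old one).
    gcloud compute networks subnets expand-ip-range gke-subnet \
        --region=us-central1 \
        --prefix-length=20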

Question 43

You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax. What should you do?

Options:

A.

Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.

B.

Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.

C.

Export your transactions to a local file, and perform analysis with a desktop tool.

D.

Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.

Question 44

Your web application is hosted on Cloud Run and needs to query a Cloud SQL database. Every morning during a traffic spike, you notice API quota errors in Cloud SQL logs. The project has already reached the maximum API quota. You want to make a configuration change to mitigate the issue. What should you do?

Options:

A.

Modify the minimum number of Cloud Run instances.

B.

Set a minimum concurrent requests environment variable for the application.

C.

Modify the maximum number of Cloud Run instances.

D.

Use traffic splitting.

Question 45

Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google’s recommended practices. Which storage option should you use?

Options:

A.

Multi-Regional Storage

B.

Regional Storage

C.

Nearline Storage

D.

Coldline Storage

Question 46

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

Options:

A.

Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.

B.

Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances. Add the instance template to an instance group.

C.

Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).

D.

Create an instance group for the instances. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.

Question 47

You have several hundred microservice applications running in a Google Kubernetes Engine (GKE) cluster. Each microservice is a deployment with resource limits configured for each container in the deployment. You've observed that the resource limits for memory and CPU are not appropriately set for many of the microservices. You want to ensure that each microservice has right sized limits for memory and CPU. What should you do?

Options:

A.

Modify the cluster's node pool machine type and choose a machine type with more memory and CPU.

B.

Configure a Horizontal Pod Autoscaler for each microservice.

C.

Configure GKE cluster autoscaling.

D.

Configure a Vertical Pod Autoscaler for each microservice.

Question 48

Your customer wants you to create a secure website with autoscaling based on the compute instance CPU load. You want to enhance performance by storing static content in Cloud Storage. Which resources are needed to distribute the user traffic?

Options:

A.

An internal HTTP(S) load balancer together with Identity-Aware Proxy to allow only HTTPS traffic.

B.

An external HTTP(S) load balancer to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend. Install the HTTPS certificates on the instance.

C.

An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend.

D.

An external network load balancer pointing to the backend instances to distribute the load evenly. The web servers will forward the request to the Cloud Storage as needed.

Question 49

(Your company has a rapidly growing social media platform and a user base primarily located in North America. Due to increasing demand, your current on-premises PostgreSQL database, hosted in your United States headquarters data center, no longer meets your needs. You need to identify a cloud-based database solution that offers automatic scaling, multi-region support for future expansion, and maintains low latency.)

Options:

A.

Use Bigtable.

B.

Use BigQuery.

C.

Use Spanner.

D.

Use Cloud SQL for PostgreSQL.

Question 50

Your company has a single sign-on (SSO) identity provider that supports Security Assertion Markup Language (SAML) integration with service providers. Your company has users in Cloud Identity. You would like users to authenticate using your company’s SSO provider. What should you do?

Options:

A.

In Cloud Identity, set up SSO with Google as an identity provider to access custom SAML apps.

B.

In Cloud Identity, set up SSO with a third-party identity provider with Google as a service provider.

C.

Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Mobile & Desktop Apps.

D.

Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Web Server Applications.

Question 51

Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project. What should you do?

Options:

A.

Add the group for the finance team to the roles/billing.user role.

B.

Add the group for the finance team to the roles/billing.admin role.

C.

Add the group for the finance team to the roles/billing.viewer role.

D.

Add the group for the finance team to the roles/billing.projectManager role.

Question 52

You are deploying an application to App Engine. You want the number of instances to scale based on request rate. You need at least 3 unoccupied instances at all times. Which scaling type should you use?

Options:

A.

Manual Scaling with 3 instances.

B.

Basic Scaling with min_instances set to 3.

C.

Basic Scaling with max_instances set to 3.

D.

Automatic Scaling with min_idle_instances set to 3.

Question 53

You manage an App Engine Service that aggregates and visualizes data from BigQuery. The application is deployed with the default App Engine Service account. The data that needs to be visualized resides in a different project managed by another team. You do not have access to this project, but you want your application to be able to read data from the BigQuery dataset. What should you do?

Options:

A.

Ask the other team to grant your default App Engine Service account the role of BigQuery Job User.

B.

Ask the other team to grant your default App Engine Service account the role of BigQuery Data Viewer.

C.

In Cloud IAM of your project, ensure that the default App Engine service account has the role of BigQuery Data Viewer.

D.

In Cloud IAM of your project, grant a newly created service account from the other team the role of BigQuery Job User in your project.

Question 54

You have been asked to set up Object Lifecycle Management for objects stored in storage buckets. The objects are written once and accessed frequently for 30 days. After 30 days, the objects are not read again unless there is a special need. The object should be kept for three years, and you need to minimize cost. What should you do?

Options:

A.

Set up a policy that uses Nearline storage for 30 days and then moves to Archive storage for three years.

B.

Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years.

C.

Set up a policy that uses Nearline storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.

D.

Set up a policy that uses Standard storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.
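
For illustration, a lifecycle configuration along the lines of option B (objects stay in their initial class for 30 days, move to Archive, and are deleted after three years) could be applied as sketched below; the bucket name is hypothetical:

    # Contents of lifecycle.json: move objects to Archive after 30 days and
    # delete them after 3 years (1095 days).
    # {
    #   "rule": [
    #     {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
    #      "condition": {"age": 30}},
    #     {"action": {"type": "Delete"}, "condition": {"age": 1095}}
    #   ]
    # }

    # Apply the policy to the bucket (bucket name is a placeholder).
    gsutil lifecycle set lifecycle.json gs://my-archive-bucket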

Question 55

You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?

Options:

A.

Configure Billing Data Export to BigQuery and visualize the data in Data Studio.

B.

Visit the Cost Table page to get a CSV export and visualize it using Data Studio.

C.

Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.

D.

Use the Reports view in the Cloud Billing Console to view the desired cost information.

Question 56

(Your company is modernizing its applications and refactoring them to containerized microservices. You need to deploy the infrastructure on Google Cloud so that teams can deploy their applications. The applications cannot be exposed publicly. You want to minimize management and operational overhead. What should you do?)

Options:

A.

Provision a Standard zonal Google Kubernetes Engine (GKE) cluster.

B.

Provision a fleet of Compute Engine instances and install Kubernetes.

C.

Provision a Google Kubernetes Engine (GKE) Autopilot cluster.

D.

Provision a Standard regional Google Kubernetes Engine (GKE) cluster.

Question 57

You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps. What should you do?

Options:

A.

Use gcloud config configurations describe to review the output.

B.

Use gcloud config configurations activate and gcloud config list to review the output.

C.

Use kubectl config get-contexts to review the output.

D.

Use kubectl config use-context and kubectl config view to review the output.

Question 58

Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not responsive. You want to replace this VM in the MIG quickly. What should you do?

Options:

A.

Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.

B.

Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.

C.

Use the gcloud compute instances update command with a REFRESH action for the VM.

D.

Update and apply the instance template of the MIG.
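
A minimal sketch of recreating the unhealthy VM in the MIG, as mentioned in option B; the group, instance, and zone names are hypothetical:

    # Recreate a single instance in the managed instance group.
    gcloud compute instance-groups managed recreate-instances app-mig \
        --instances=app-mig-x7kq \
        --zone=us-central1-a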

Question 59

Your company is moving from an on-premises environment to Google Cloud Platform (GCP). You have multiple development teams that use Cassandra environments as backend databases. They all need a development environment that is isolated from other Cassandra instances. You want to move to GCP quickly and with minimal support effort. What should you do?

Options:

A.

1. Build an instruction guide to install Cassandra on GCP. 2. Make the instruction guide accessible to your developers.

B.

1. Advise your developers to go to Cloud Marketplace. 2. Ask the developers to launch a Cassandra image for their development work.

C.

1. Build a Cassandra Compute Engine instance and take a snapshot of it. 2. Use the snapshot to create instances for your developers.

D.

1. Build a Cassandra Compute Engine instance and take a snapshot of it. 2. Upload the snapshot to Cloud Storage and make it accessible to your developers. 3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves.

Question 60

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

Options:

A.

1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs.

B.

1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.

C.

1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

D.

1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.
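
For reference, a filtered export of Compute Engine logs to BigQuery, as in option C, could be configured with a sink like the sketch below; the project ID is hypothetical, and the dataset name is written with an underscore to match BigQuery naming:

    # Route only Compute Engine logs to the BigQuery dataset.
    gcloud logging sinks create compute-to-bq \
        bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
        --log-filter='resource.type="gce_instance"'

    # The command prints the sink's service account; grant it the BigQuery Data
    # Editor role on the dataset so the export can write.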

Question 61

You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do?

Options:

A.

Use granular logging statements within a Deployment Manager template authored in Python.

B.

Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.

C.

Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.

D.

Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

Question 62

Your company's security vulnerability management policy wants a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?

Options:

A.

• Ensure that the Ops Agent is installed on the Compute Engine instance. • Create a custom metric in the Cloud Monitoring dashboard. • Provide the security team member with access to this dashboard.

B.

• Ensure that the Ops Agent is installed on the Compute Engine instance. • Provide the security team member the roles/osconfig.inventoryViewer permission.

C.

• Ensure that the OS Config agent is installed on the Compute Engine instance. • Provide the security team member the roles/osconfig.vulnerabilityReportViewer permission.

D.

• Ensure that the OS Config agent is installed on the Compute Engine instance. • Create a log sink to a BigQuery dataset. • Provide the security team member with access to this dataset.

Question 63

An application generates daily reports in a Compute Engine virtual machine (VM). The VM is in the project corp-iot-insights. Your team operates only in the project corp-aggregate-reports and needs a copy of the daily exports in the bucket corp-aggregate-reports-storage. You want to configure access so that the daily reports from the VM are available in the bucket corp-aggregate-reports-storage and use as few steps as possible while following Google-recommended practices. What should you do?

Options:

A.

Move both projects under the same folder.

B.

Grant the VM Service Account the role Storage Object Creator on corp-aggregate-reports-storage.

C.

Create a Shared VPC network between both projects. Grant the VM Service Account the role Storage Object Creator on corp-iot-insights.

D.

Make corp-aggregate-reports-storage public and create a folder with a pseudo-randomized suffix name. Share the folder with the IoT team.

Question 64

You have experimented with Google Cloud using your own credit card and expensed the costs to your company. Your company wants to streamline the billing process and charge the costs of your projects to their monthly invoice. What should you do?

Options:

A.

Grant the financial team the IAM role of “Billing Account User” on the billing account linked to your credit card.

B.

Set up BigQuery billing export and grant your financial department IAM access to query the data.

C.

Create a ticket with Google Billing Support to ask them to send the invoice to your company.

D.

Change the billing account of your projects to the billing account of your company.

Question 65

During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain.

You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?

Options:

A.

Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.

B.

Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.

C.

Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.

D.

Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.

Question 66

You are the Google Cloud systems administrator for your organization. User A reports that they received an error when attempting to access the Cloud SQL database in their Google Cloud project, while User B can access the database. You need to troubleshoot the issue for User A, while following Google-recommended practices.

What should you do first?

Options:

A.

Confirm that network firewall rules are not blocking traffic for User A.

B.

Review recent configuration changes that may have caused unintended modifications to permissions.

C.

Verify that User A has the Identity and Access Management (IAM) Project Owner role assigned.

D.

Review the error message that User A received.

Question 67

You have downloaded and installed the gcloud command line interface (CLI) and have authenticated with your Google Account. Most of your Compute Engine instances in your project run in the europe-west1-d zone. You want to avoid having to specify this zone with each CLI command when managing these instances. What should you do?

Options:

A.

Set the europe-west1-d zone as the default zone using the gcloud config subcommand.

B.

In the Settings page for Compute Engine under Default location, set the zone to europe-west1-d.

C.

In the CLI installation directory, create a file called default.conf containing zone=europe-west1-d.

D.

Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.
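
As a reference for option A, setting the default zone (and, optionally, the matching region) for the CLI looks like this:

    # Make europe-west1-d the default zone for gcloud compute commands.
    gcloud config set compute/zone europe-west1-d

    # Optionally set the matching default region as well.
    gcloud config set compute/region europe-west1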

Question 68

You have an application that is currently processing transactions by using a group of managed VM instances. You need to migrate the application so that it is serverless and scalable. You want to implement an asynchronous transaction processing system, while minimizing management overhead. What should you do?

Options:

A.

Install Kafka on VM instances to acknowledge incoming transactions. Use Cloud Run to process transactions.

B.

Install Kafka on VM Instances to acknowledge incoming transactions. Use VM Instances to process transactions.

C.

Use Pub/Sub to acknowledge incoming transactions. Use VM instances to process transactions.

D.

Use Pub/Sub to acknowledge incoming transactions. Use Cloud Run to process transactions.

Question 69

(You are migrating your on-premises workload to Google Cloud. Your company is implementing its Cloud Billing configuration and requires access to a granular breakdown of its Google Cloud costs. You need to ensure that the Cloud Billing datasets are available in BigQuery so you can conduct a detailed analysis of costs. What should you do?)

Options:

A.

Enable the BigQuery API and ensure that the BigQuery User IAM role is selected. Change the BigQuery dataset to select a data location.

B.

Create a Cloud Billing account. Enable the BigQuery Data Transfer Service API to export pricing data.

C.

Enable Cloud Billing data export to BigQuery when you create a Cloud Billing account.

D.

Enable Cloud Billing on the project and link a Cloud Billing account. Then view the billing data table in the BigQuery dataset.

Question 70

Your company stores data from multiple sources that have different data storage requirements. These data include:

1. Customer data that is structured and read with complex queries

2. Historical log data that is large in volume and accessed infrequently

3. Real-time sensor data with high-velocity writes, which needs to be available for analysis but can tolerate some data loss

You need to design the most cost-effective storage solution that fulfills all data storage requirements. What should you do?

Options:

A.

Use Spanner for all data.

B.

Use Cloud SQL for customer data, Cloud Storage (Coldline) for historical logs, and BigQuery for sensor data.

C.

Use Cloud SQL for customer data, Cloud Storage (Archive) for historical logs, and Bigtable for sensor data.

D.

Use Firestore for customer data, Cloud Storage (Nearline) for historical logs, and Bigtable for sensor data.

Question 71

(Your company’s developers use an automation that you recently built to provision Linux VMs in Compute Engine within a Google Cloud project to perform various tasks. You need to manage the Linux account lifecycle and access for these users. You want to follow Google-recommended practices to simplify access management while minimizing operational costs. What should you do?)

Options:

A.

Enable OS Login for all VMs. Use IAM roles to grant user permissions.

B.

Enable OS Login for all VMs. Write custom startup scripts to update user permissions.

C.

Require your developers to create public SSH keys. Make the owner of the public key the root user.

D.

Require your developers to create public SSH keys. Write custom startup scripts to update user permissions.

Question 72

You have created a code snippet that should be triggered whenever a new file is uploaded to a Cloud Storage bucket. You want to deploy this code snippet. What should you do?

Options:

A.

Use App Engine and configure Cloud Scheduler to trigger the application using Pub/Sub.

B.

Use Cloud Functions and configure the bucket as a trigger resource.

C.

Use Google Kubernetes Engine and configure a CronJob to trigger the application using Pub/Sub.

D.

Use Dataflow as a batch job, and configure the bucket as a data source.

Question 73

You are creating an application that will run on Google Kubernetes Engine. You have identified MongoDB as the most suitable database system for your application and want to deploy a managed MongoDB environment that provides a support SLA. What should you do?

Options:

A.

Create a Cloud Bigtable cluster and use the HBase API

B.

Deploy MongoDB Atlas from the Google Cloud Marketplace.

C.

Download a MongoDB installation package and run it on Compute Engine instances

D.

Download a MongoDB installation package, and run it on a Managed Instance Group

Question 74

You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?

Options:

A.

Create a signed URL with a four-hour expiration and share the URL with the company.

B.

Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.

C.

Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the object from the storage bucket after four hours.

D.

Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.
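
A sketch of generating a time-limited signed URL with gsutil, as in option A; the service account key file, bucket, and object names are hypothetical:

    # Create a signed URL that expires after four hours, using a service account
    # key with read access to the object (key file name is a placeholder).
    gsutil signurl -d 4h signer-key.json gs://sensitive-bucket/report.csv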

Question 75

You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to change the configuration of the application and want the application to be able to reach the licensing server. What should you do?

Options:

A.

Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server.

B.

Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing server.

C.

Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server.

D.

Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.
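
For illustration, reserving 10.0.3.21 as a static internal address and assigning it at VM creation, as described in option A, could look like this; the subnet, region, zone, and instance names are hypothetical:

    # Reserve the specific internal IP address in the subnet.
    gcloud compute addresses create licensing-server-ip \
        --region=us-central1 --subnet=default \
        --addresses=10.0.3.21

    # Create the licensing server using the reserved internal address.
    gcloud compute instances create licensing-server \
        --zone=us-central1-a --subnet=default \
        --private-network-ip=10.0.3.21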

Question 76

Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running instances specified by the template to be able to process expected application traffic. What should you do?

Options:

A.

Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.

B.

Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.

C.

Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.

D.

Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.

Question 77

Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google Accounts. You need to grant access to the instances to your operations partner so they can maintain the installed tooling. What should you do?

Options:

A.

Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.

B.

Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.

C.

Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.

D.

Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances.
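
For reference, a sketch of option D's mechanics: adding partner-supplied public keys to the instance metadata (the instance, zone, and file names are placeholders):

# ops_keys.txt contains one entry per line in the form:
#   USERNAME:ssh-ed25519 AAAA... USERNAME
gcloud compute instances add-metadata tooling-vm \
    --zone=us-central1-a \
    --metadata-from-file ssh-keys=ops_keys.txt
# Note: this sets the instance's ssh-keys metadata value, so the file
# should include every key that must remain valid.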

Question 78

Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?

Options:

A.

Migrate the workload to a Compute Engine Preemptible VM.

B.

Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes.

C.

Migrate the workload to a Compute Engine VM. Start and stop the instance as needed.

D.

Create an Instance Template with Preemptible VMs On. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.
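
For reference, the start/stop workflow described in option C looks roughly like this (the instance name and zone are placeholders); stopped instances stop incurring vCPU and memory charges while keeping their boot disks:

# Start the VM before the monthly run.
gcloud compute instances start batch-vm --zone=us-central1-a

# Stop it once the ~30-hour job finishes.
gcloud compute instances stop batch-vm --zone=us-central1-a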

Question 79

You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The files are used in a mission-critical analytics pipeline that is used continually. The users are in Boston, MA (United States). What should you do?

Options:

A.

Configure regional storage for the region closest to the users. Configure a Nearline storage class.

B.

Configure regional storage for the region closest to the users. Configure a Standard storage class.

C.

Configure dual-regional storage for the dual region closest to the users. Configure a Nearline storage class.

D.

Configure dual-regional storage for the dual region closest to the users. Configure a Standard storage class.
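
A minimal sketch of creating a regional Standard-class bucket (the bucket name and region are illustrative placeholders):

# Create a regional bucket with the Standard storage class as the default.
gcloud storage buckets create gs://analytics-pipeline-data \
    --location=us-east1 \
    --default-storage-class=STANDARD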

Question 80

You are given a project with a single virtual private cloud (VPC) and a single subnetwork in the us-central1 region. There is a Compute Engine instance hosting an application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the application. You want to follow Google-recommended practices. What should you do?

Options:

A.

1. Create a subnetwork in the same VPC, in europe-west1. 2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

B.

1. Create a VPC and a subnetwork in europe-west1. 2. Expose the application with an internal load balancer. 3. Create the new instance in the new subnetwork and use the load balancer's address as the endpoint.

C.

1. Create a subnetwork in the same VPC, in europe-west1. 2. Use Cloud VPN to connect the two subnetworks. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

D.

1. Create a VPC and a subnetwork in europe-west1. 2. Peer the 2 VPCs. 3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.
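
For illustration, adding a second subnet to the existing VPC (option A) looks roughly like this; the network name, subnet name, and IP range are placeholders:

# Subnets in the same VPC can reach each other over internal IPs by default.
gcloud compute networks subnets create europe-west1-subnet \
    --network=my-vpc \
    --region=europe-west1 \
    --range=10.132.0.0/20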

Question 81

You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling on its Zonal SSD Persistent Disk. The application primarily reads large files from disk. The disk size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs. What should you do?

Options:

A.

Increase the size of the disk to 1 TB.

B.

Increase the allocated CPU to the instance.

C.

Migrate to use a Local SSD on the instance.

D.

Migrate to use a Regional SSD on the instance.
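
For reference, a sketch of resizing the persistent disk (option A); the disk name and zone are placeholders. Persistent disk throughput and IOPS limits scale with the provisioned size, which is why growing the disk raises its throughput ceiling:

gcloud compute disks resize app-data-disk \
    --zone=us-central1-a \
    --size=1TB
# After resizing, grow the filesystem inside the guest OS to use the new space.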

Question 82

You have a Bigtable instance that consists of three nodes that store personally identifiable information (PII) data. You need to log all read or write operations, including any metadata or configuration reads of this database table, in your company's Security Information and Event Management (SIEM) system. What should you do?

Options:

A.

• Navigate to Cloud Monitoring in the Google Cloud console, and create a custom monitoring job for the Bigtable instance to track all changes. • Create an alert by using webhook endpoints, with the SIEM endpoint as a receiver.

B.

• Navigate to the Audit Logs page in the Google Cloud console, and enable Data Read, Data Write, and Admin Read logs for the Bigtable instance. • Create a Pub/Sub topic as a Cloud Logging sink destination, and add your SIEM as a subscriber to the topic.

C.

• Install the Ops Agent on the Bigtable instance during configuration. • Create a service account with read permissions for the Bigtable instance. • Create a custom Dataflow job with this service account to export logs to the company's SIEM system.

D.

• Navigate to the Audit Logs page in the Google Cloud console, and enable Admin Write logs for the Bigtable instance. • Create a Cloud Functions instance to export logs from Cloud Logging to your SIEM.
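
For reference, a sketch of the log-routing half of option B: sending Bigtable audit logs to a Pub/Sub topic that the SIEM subscribes to (the project, topic, sink name, and filter are placeholders; Data Read/Write audit logs must first be enabled for Bigtable on the Audit Logs page):

# Topic the SIEM will subscribe to.
gcloud pubsub topics create siem-audit-logs

# Route Bigtable audit log entries to that topic.
gcloud logging sinks create bigtable-audit-sink \
    pubsub.googleapis.com/projects/my-project/topics/siem-audit-logs \
    --log-filter='resource.type="bigtable_instance" AND logName:"cloudaudit.googleapis.com"'
# Grant the sink's writer identity the Pub/Sub Publisher role on the topic afterward.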

Question 83

You are the organization and billing administrator for your company. The engineering team has the Project Creator role on the organization. You do not want the engineering team to be able to link projects to the billing account. Only the finance team should be able to link a project to a billing account, but they should not be able to make any other changes to projects. What should you do?

Options:

A.

Assign the finance team only the Billing Account User role on the billing account.

B.

Assign the engineering team only the Billing Account User role on the billing account.

C.

Assign the finance team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

D.

Assign the engineering team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.
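
As an illustration of the role assignments described in option C (the billing account ID, organization ID, and group are placeholders; the billing IAM command may require a recent gcloud release):

# Billing Account User on the billing account lets finance link projects to it.
gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
    --member="group:finance@example.com" \
    --role="roles/billing.user"

# Project Billing Manager on the organization lets finance attach billing to any
# project without granting broader project permissions.
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="group:finance@example.com" \
    --role="roles/billing.projectManager"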

Question 84

You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects. What should you do?

Options:

A.

Navigate to Stackdriver Logging and select resource.labels.project_id="*"

B.

Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.

C.

Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.

D.

Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.
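
For reference, a sketch of an aggregated, organization-level export to BigQuery (option B); the organization ID, project, dataset, and expiration value are placeholders:

# Export logs from all projects in the organization to one BigQuery dataset.
gcloud logging sinks create org-logs-to-bq \
    bigquery.googleapis.com/projects/logging-project/datasets/all_logs \
    --organization=123456789012 \
    --include-children

# Expire exported tables after 60 days (5,184,000 seconds).
bq update --default_table_expiration 5184000 logging-project:all_logs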

Question 85

You have just created a new project which will be used to deploy a globally distributed application. You will use Cloud Spanner for data storage. You want to create a Cloud Spanner instance. You want to perform the first step in preparation for creating the instance. What should you do?

Options:

A.

Grant yourself the IAM role of Cloud Spanner Admin

B.

Create a new VPC network with subnetworks in all desired regions

C.

Configure your Cloud Spanner instance to be multi-regional

D.

Enable the Cloud Spanner API
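
For reference, enabling the API (option D) is a single command (the project ID is a placeholder):

gcloud services enable spanner.googleapis.com --project=my-project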

Question 86

A company wants to build an application that stores images in a Cloud Storage bucket and wants to generate thumbnails as well as resize the images. They want to use a Google-managed service that can scale up and scale down to zero automatically with minimal effort. You have been asked to recommend a service. Which GCP service would you suggest?

Options:

A.

Google Compute Engine

B.

Google App Engine

C.

Cloud Functions

D.

Google Kubernetes Engine

Question 87

You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?

Options:

A.

Use kubectl app deploy .

B.

Use gcloud app deploy .

C.

Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

D.

Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
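
A rough sketch of option C's workflow; kubectl create deployment is shown here as a shortcut for writing a Deployment YAML and applying it with kubectl (the project, image, and deployment names are placeholders):

# Build the image from the Dockerfile in the current directory and push it.
gcloud builds submit --tag gcr.io/my-project/my-app:v1 .

# Run the pushed image on the GKE cluster.
kubectl create deployment my-app --image=gcr.io/my-project/my-app:v1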

Question 88

Your company uses BigQuery to store and analyze data. Upon submitting your query in BigQuery, the query fails with a quotaExceeded error. You need to diagnose the issue causing the error. What should you do?

Choose 2 answers

Options:

A.

Search errors in Cloud Audit Logs to analyze the issue.

B.

Configure Cloud Trace to analyze the issue.

C.

View errors in Cloud Monitoring to analyze the issue.

D.

Use the information schema views to analyze the underlying issue.

E.

Use BigQuery BI Engine to analyze the issue.
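
For illustration, querying the BigQuery jobs metadata view for quota errors (option D) might look like this; the region qualifier and column choices are illustrative:

# List recent jobs that failed with a quotaExceeded error.
bq query --use_legacy_sql=false '
SELECT job_id, user_email, error_result
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND error_result.reason = "quotaExceeded"'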

Question 89

(You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?)

Options:

A.

Use a proxy Network Load Balancer for the MIG and an A record in your DNS private zone with the load balancer's IP address.

B.

Use a proxy Network Load Balancer for the MIG and a CNAME record in your DNS public zone with the load balancer's IP address.

C.

Use an Application Load Balancer for the MIG and a CNAME record in your DNS private zone with the load balancer's IP address.

D.

Use an Application Load Balancer for the MIG and an A record in your DNS public zone with the load balancer's IP address.
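
For reference, a sketch of pointing a public DNS A record at a load balancer's IP address (the zone, domain, and IP are placeholders):

gcloud dns record-sets create www.example.com. \
    --zone=example-public-zone \
    --type=A \
    --ttl=300 \
    --rrdatas=203.0.113.10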

Question 90

Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest of the team. You want to follow Google’s recommended best practices. What should you do?

Options:

A.

Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.

B.

Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.

C.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.

D.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Question 91

(Your digital media company stores a large number of video files on-premises. Each video file ranges from 100 MB to 100 GB. You are currently storing 150 TB of video data in your on-premises network, with no room for expansion. You need to migrate all infrequently accessed video files older than one year to Cloud Storage to ensure that on-premises storage remains available for new files. You must also minimize costs and control bandwidth usage. What should you do?)

Options:

A.

Create a Cloud Storage bucket. Establish an Identity and Access Management (IAM) role with write permissions to the bucket. Use the gsutil tool to directly copy files over the network to Cloud Storage.

B.

Set up a Cloud Interconnect connection between the on-premises network and Google Cloud. Establish a private endpoint for Filestore access. Transfer the data from the existing Network File System (NFS) to Filestore.

C.

Use Transfer Appliance to request an appliance. Load the data locally, and ship the appliance back to Google for ingestion into Cloud Storage.

D.

Use Storage Transfer Service to move the data from the selected on-premises file storage systems to a Cloud Storage bucket.

Question 92

You need to create a custom VPC with a single subnet. The subnet’s range must be as large as possible. Which range should you use?

Options:

A.

0.0.0.0/0

B.

10.0.0.0/8

C.

172.16.0.0/12

D.

192.168.0.0/16
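
As an illustration, creating a custom-mode VPC with a single subnet using the largest range listed might look like this (the network, subnet, and region names are placeholders):

gcloud compute networks create my-vpc --subnet-mode=custom

gcloud compute networks subnets create big-subnet \
    --network=my-vpc \
    --region=us-central1 \
    --range=10.0.0.0/8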

Question 93

You are a Google Cloud organization administrator. You need to configure organization policies and log sinks on Google Cloud projects that cannot be removed by project users to comply with your company's security policies. The security policies are different for each company department. Each company department has a user with the Project Owner role assigned to their projects. What should you do?

Options:

A.

Organize projects under folders for each department. Configure both organization policies and log sinks on the folders

B.

Organize projects under folders for each department. Configure organization policies on the organization and log sinks on the folders.

C.

Use a standard naming convention for projects that includes the department name. Configure organization policies on the organization and log sinks on the projects.

D.

Use a standard naming convention for projects that includes the department name. Configure both organization policies and log sinks on the projects.

Question 94

You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?

Options:

A.

Use SSL proxy load balancing for the MIG and an A record in your DNS private zone with the load balancer's IP address.

B.

Use SSL proxy load balancing for the MIG and a CNAME record in your DNS public zone with the load balancer's IP address.

C.

Use HTTP(S) load balancing for the MIG and a CNAME record in your DNS private zone with the load balancer's IP address.

D.

Use HTTP(S) load balancing for the MIG and an A record in your DNS public zone with the load balancer's IP address.

Question 95

You need to extract text from audio files by using the Speech-to-Text API. The audio files are pushed to a Cloud Storage bucket. You need to implement a fully managed, serverless compute solution that requires authentication and aligns with Google-recommended practices. You want to automate the call to the API by submitting each file to the API as the audio file arrives in the bucket. What should you do?

Options:

A.

Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.

B.

Create an App Engine standard environment triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.

C.

Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.

D.

Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
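
For reference, a sketch of deploying a Cloud Function that fires on new objects in the bucket (option D); the function name, runtime, region, bucket, and entry point are placeholders:

# The function's handler receives the object metadata and can submit the file URI
# to the Speech-to-Text API using its service account credentials.
gcloud functions deploy transcribe-audio \
    --runtime=python311 \
    --region=us-central1 \
    --trigger-bucket=incoming-audio-files \
    --entry-point=handle_new_audio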

Question 96

You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are required for the cluster, and the nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?

Options:

A.

Deploy a private autopilot cluster

B.

Deploy a public autopilot cluster.

C.

Deploy a standard public cluster and enable shielded nodes.

D.

Deploy a standard private cluster and enable shielded nodes.
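
For illustration, a private Autopilot cluster (option A) can be created roughly like this (the cluster name and region are placeholders, and the private-nodes flag assumes a recent gcloud release); Autopilot clusters use Shielded GKE Nodes and managed node provisioning by default:

gcloud container clusters create-auto private-cluster \
    --region=us-central1 \
    --enable-private-nodes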

Question 97

Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?

Options:

A.

Use gcloud to expand the IP range of the current subnet.

B.

Delete the subnet, and recreate it using a wider range of IP addresses.

C.

Create a new project. Use Shared VPC to share the current network with the new project.

D.

Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.
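
For reference, a sketch of option A; 255.255.255.240 is a /28, and widening it in place keeps all existing addresses and routes (the subnet name, region, and target prefix are placeholders):

# Expanding from /28 to /27 doubles the range, adding 16 more addresses.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 \
    --prefix-length=27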
