
Google Associate-Data-Practitioner: Google Cloud Associate Data Practitioner (ADP) Exam Practice Test

Page: 1 / 11
Total 106 questions

Google Cloud Associate Data Practitioner (ADP) Exam Questions and Answers

Question 1

Your company’s customer support audio files are stored in a Cloud Storage bucket. You plan to analyze the audio files’ metadata and file content within BigQuery, and to run inference by using BigQuery ML. You need to create a corresponding table in BigQuery that represents the bucket containing the audio files. What should you do?

Options:

A.

Create an external table.

B.

Create a temporary table.

C.

Create a native table.

D.

Create an object table.
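
Option D refers to BigQuery object tables, which expose the metadata of Cloud Storage objects as a read-only table that BigQuery ML can run inference over. A minimal sketch of the DDL issued through the google-cloud-bigquery client; the project, dataset, bucket, and the Cloud resource connection us.audio-conn are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project

    # Object tables are created with DDL and require a Cloud resource connection.
    ddl = """
    CREATE OR REPLACE EXTERNAL TABLE `my-project.audio_ds.support_calls`
    WITH CONNECTION `us.audio-conn`
    OPTIONS (
      object_metadata = 'SIMPLE',                -- marks this as an object table
      uris = ['gs://support-audio-bucket/*.wav']
    )
    """
    client.query(ddl).result()  # waits for the DDL job to finish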

Question 2

Your organization has several datasets in their data warehouse in BigQuery. Several analyst teams in different departments use the datasets to run queries. Your organization is concerned about the variability of their monthly BigQuery costs. You need to identify a solution that creates a fixed budget for costs associated with the queries run by each department. What should you do?

Options:

A.

Create a custom quota for each analyst in BigQuery.

B.

Create a single reservation by using BigQuery editions. Assign all analysts to the reservation.

C.

Assign each analyst to a separate project associated with their department. Create a single reservation by using BigQuery editions. Assign all projects to the reservation.

D.

Assign each analyst to a separate project associated with their department. Create a single reservation for each department by using BigQuery editions. Create assignments for each project in the appropriate reservation.
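
Options C and D both rely on reservations. For context, a rough sketch of creating a per-department editions reservation and assigning a project to it with the google-cloud-bigquery-reservation client; the admin project, slot capacity, and assignee project are hypothetical, and the field names assume a recent client release:

    from google.cloud import bigquery_reservation_v1 as reservation

    client = reservation.ReservationServiceClient()
    parent = client.common_location_path("admin-project", "US")  # hypothetical admin project

    # One reservation per department caps that department's slot spend.
    res = client.create_reservation(
        parent=parent,
        reservation_id="marketing",
        reservation=reservation.Reservation(
            slot_capacity=100,                       # fixed budget in slots
            edition=reservation.Edition.ENTERPRISE,  # BigQuery editions
        ),
    )

    # Assign the department's project to its reservation.
    client.create_assignment(
        parent=res.name,
        assignment=reservation.Assignment(
            assignee="projects/marketing-analytics",  # hypothetical project
            job_type=reservation.Assignment.JobType.QUERY,
        ),
    )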

Question 3

You are constructing a data pipeline to process sensitive customer data stored in a Cloud Storage bucket. You need to ensure that this data remains accessible, even in the event of a single-zone outage. What should you do?

Options:

A.

Set up a Cloud CDN in front of the bucket.

B.

Enable Object Versioning on the bucket.

C.

Store the data in a multi-region bucket.

D.

Store the data in Nearline storage.
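
Option C hinges on bucket location: a multi-region location replicates objects across multiple regions and zones automatically, so data remains available through a single-zone outage. A minimal sketch; the bucket name is hypothetical:

    from google.cloud import storage

    client = storage.Client()

    # "US" is a multi-region location; replication across zones and regions
    # is automatic, with no pipeline changes required.
    bucket = client.create_bucket("sensitive-customer-data", location="US")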

Question 4

Your organization needs to implement near real-time analytics for thousands of events arriving each second in Pub/Sub. The incoming messages require transformations. You need to configure a pipeline that processes, transforms, and loads the data into BigQuery while minimizing development time. What should you do?

Options:

A.

Use a Google-provided Dataflow template to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

B.

Create a Cloud Data Fusion instance and configure Pub/Sub as a source. Use Data Fusion to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

C.

Load the data from Pub/Sub into Cloud Storage using a Cloud Storage subscription. Create a Dataproc cluster, use PySpark to perform transformations in Cloud Storage, and write the results to BigQuery.

D.

Use Cloud Run functions to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.

Question 5

You are migrating data from a legacy on-premises MySQL database to Google Cloud. The database contains various tables with different data types and sizes, including large tables with millions of rows and transactional data. You need to migrate this data while maintaining data integrity and minimizing downtime and cost. What should you do?

Options:

A.

Set up a Cloud Composer environment to orchestrate a custom data pipeline. Use a Python script to extract data from the MySQL database and load it to MySQL on Compute Engine.

B.

Export the MySQL database to CSV files, transfer the files to Cloud Storage by using Storage Transfer Service, and load the files into a Cloud SQL for MySQL instance.

C.

Use Database Migration Service to replicate the MySQL database to a Cloud SQL for MySQL instance.

D.

Use Cloud Data Fusion to migrate the MySQL database to MySQL on Compute Engine.

Question 6

Your company uses Looker as its primary business intelligence platform. You want to use LookML to visualize the profit margin for each of your company’s products in your Looker Explores and dashboards. You need to implement a solution quickly and efficiently. What should you do?

Options:

A.

Create a derived table that pre-calculates the profit margin for each product, and include it in the Looker model.

B.

Define a new measure that calculates the profit margin by using the existing revenue and cost fields.

C.

Create a new dimension that categorizes products based on their profit margin ranges (e.g., high, medium, low).

D.

Apply a filter to only show products with a positive profit margin.

Question 7

You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds prior to ingestion to Google Cloud. What should you do?

Options:

A.

Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.

B.

Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.

C.

Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.

D.

Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.

Question 8

You need to design a data pipeline to process large volumes of raw server log data stored in Cloud Storage. The data needs to be cleaned, transformed, and aggregated before being loaded into BigQuery for analysis. The transformation involves complex data manipulation using Spark scripts that your team developed. You need to implement a solution that leverages your team’s existing skillset, processes data at scale, and minimizes cost. What should you do?

Options:

A.

Use Dataflow with a custom template for the transformation logic.

B.

Use Cloud Data Fusion to visually design and manage the pipeline.

C.

Use Dataform to define the transformations in SQLX.

D.

Use Dataproc to run the transformations on a cluster.
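
Option D keeps the team's existing Spark scripts intact: Dataproc runs them unchanged on a managed cluster. A sketch of submitting one of those scripts with the google-cloud-dataproc client; the project, cluster, and script path are hypothetical:

    from google.cloud import dataproc_v1

    region = "us-central1"
    client = dataproc_v1.JobControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )

    # Reuse the team's existing PySpark transformation script unchanged.
    job = {
        "placement": {"cluster_name": "log-processing-cluster"},
        "pyspark_job": {"main_python_file_uri": "gs://my-bucket/transform_logs.py"},
    }
    operation = client.submit_job_as_operation(
        request={"project_id": "my-project", "region": region, "job": job}
    )
    operation.result()  # blocks until the job finishes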

Question 9

You are working on a data pipeline that will validate and clean incoming data before loading it into BigQuery for real-time analysis. You want to ensure that the data validation and cleaning is performed efficiently and can handle high volumes of data. What should you do?

Options:

A.

Write custom scripts in Python to validate and clean the data outside of Google Cloud. Load the cleaned data into BigQuery.

B.

Use Cloud Run functions to trigger data validation and cleaning routines when new data arrives in Cloud Storage.

C.

Use Dataflow to create a streaming pipeline that includes validation and transformation steps.

D.

Load the raw data into BigQuery using Cloud Storage as a staging area, and use SQL queries in BigQuery to validate and clean the data.
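
Option C's pattern is a streaming Apache Beam pipeline run on Dataflow, with validation expressed as pipeline steps. A minimal sketch; the subscription, table, and validation rule are hypothetical, and the destination table is assumed to already exist:

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def is_valid(record):
        # Hypothetical rule: keep records that carry both required fields.
        return "user_id" in record and "event_ts" in record

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events")
            | "Parse" >> beam.Map(json.loads)
            | "Validate" >> beam.Filter(is_valid)
            | "Write" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )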

Question 10

Your organization has several datasets in BigQuery. The datasets need to be shared with your external partners so that they can run SQL queries without needing to copy the data to their own projects. You have organized each partner’s data in its own BigQuery dataset. Each partner should be able to access only their data. You want to share the data while following Google-recommended practices. What should you do?

Options:

A.

Use Analytics Hub to create a listing on a private data exchange for each partner dataset. Allow each partner to subscribe to their respective listings.

B.

Create a Dataflow job that reads from each BigQuery dataset and pushes the data into a dedicated Pub/Sub topic for each partner. Grant each partner the pubsub.subscriber IAM role.

C.

Export the BigQuery data to a Cloud Storage bucket. Grant the partners the storage.objectUser IAM role on the bucket.

D.

Grant the partners the bigquery.user IAM role on the BigQuery project.

Question 11

You are responsible for managing Cloud Storage buckets for a research company. Your company has well-defined data tiering and retention rules. You need to optimize storage costs while achieving your data retention needs. What should you do?

Options:

A.

Configure the buckets to use the Archive storage class.

B.

Configure a lifecycle management policy on each bucket to downgrade the storage class and remove objects based on age.

C.

Configure the buckets to use the Standard storage class and enable Object Versioning.

D.

Configure the buckets to use the Autoclass feature.
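
Because the tiering and retention rules are well defined, option B encodes them directly as lifecycle rules. A sketch with the google-cloud-storage client; the bucket name and ages are hypothetical:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("research-data")  # hypothetical bucket

    # Encode the company's tiering and retention rules (ages are hypothetical).
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
    bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
    bucket.add_lifecycle_delete_rule(age=2555)  # ~7-year retention
    bucket.patch()  # persist the updated lifecycle configuration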

Question 12

Your organization plans to move their on-premises environment to Google Cloud. Your organization’s network bandwidth is less than 1 Gbps. You need to move over 500 TB of data to Cloud Storage securely, and you only have a few days to move the data. What should you do?

Options:

A.

Request multiple Transfer Appliances, copy the data to the appliances, and ship the appliances back to Google Cloud to upload the data to Cloud Storage.

B.

Connect to Google Cloud using VPN. Use Storage Transfer Service to move the data to Cloud Storage.

C.

Connect to Google Cloud using VPN. Use the gcloud storage command to move the data to Cloud Storage.

D.

Connect to Google Cloud using Dedicated Interconnect. Use the gcloud storage command to move the data to Cloud Storage.

Question 13

You are developing a data ingestion pipeline to load small CSV files into BigQuery from Cloud Storage. You want to load these files upon arrival to minimize data latency. You want to accomplish this with minimal cost and maintenance. What should you do?

Options:

A.

Use the bq command-line tool within a Cloud Shell instance to load the data into BigQuery.

B.

Create a Cloud Composer pipeline to load new files from Cloud Storage to BigQuery and schedule it to run every 10 minutes.

C.

Create a Cloud Run function to load the data into BigQuery that is triggered when data arrives in Cloud Storage.

D.

Create a Dataproc cluster to pull CSV files from Cloud Storage, process them using Spark, and write the results to BigQuery.
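
Option C's event-driven pattern is a Cloud Run function with a Cloud Storage "object finalized" trigger that starts a free BigQuery load job per file. A minimal sketch; the destination table is hypothetical:

    import functions_framework
    from google.cloud import bigquery

    # Deployed as a Cloud Run function with an Eventarc trigger on
    # google.cloud.storage.object.v1.finalized events for the bucket.
    @functions_framework.cloud_event
    def load_csv(cloud_event):
        data = cloud_event.data
        uri = f"gs://{data['bucket']}/{data['name']}"

        client = bigquery.Client()
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,
        )
        # Load jobs are free; the table name is hypothetical.
        client.load_table_from_uri(
            uri, "my-project.staging.events", job_config=job_config
        ).result()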

Question 14

You manage a large amount of data in Cloud Storage, including raw data, processed data, and backups. Your organization is subject to strict compliance regulations that mandate data immutability for specific data types. You want to use an efficient process to reduce storage costs while ensuring that your storage strategy meets retention requirements. What should you do?

Options:

A.

Configure lifecycle management rules to transition objects to appropriate storage classes based on access patterns. Set up Object Versioning for all objects to meet immutability requirements.

B.

Move objects to different storage classes based on their age and access patterns. Use Cloud Key Management Service (Cloud KMS) to encrypt specific objects with customer-managed encryption keys (CMEK) to meet immutability requirements.

C.

Create a Cloud Run function to periodically check object metadata, and move objects to the appropriate storage class based on age and access patterns. Use object holds to enforce immutability for specific objects.

D.

Use object holds to enforce immutability for specific objects, and configure lifecycle management rules to transition objects to appropriate storage classes based on age and access patterns.
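
Option D combines two independent mechanisms: holds block deletion or replacement of specific objects, while lifecycle rules keep moving data to cheaper classes. A sketch, assuming the object already exists; bucket, object, and ages are hypothetical:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("compliance-data")  # hypothetical bucket

    # Immutability: an event-based hold prevents the object from being
    # deleted or overwritten until the hold is released.
    blob = bucket.blob("records/2024/ledger.csv")
    blob.event_based_hold = True
    blob.patch()

    # Cost: lifecycle rules still transition held objects to colder classes.
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
    bucket.patch()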

Question 15

You have a BigQuery dataset containing sales data. This data is actively queried for the first 6 months. After that, the data is not queried but needs to be retained for 3 years for compliance reasons. You need to implement a data management strategy that meets access and compliance requirements, while keeping cost and administrative overhead to a minimum. What should you do?

Options:

A.

Use BigQuery long-term storage for the entire dataset. Set up a Cloud Run function to delete the data from BigQuery after 3 years.

B.

Partition a BigQuery table by month. After 6 months, export the data to Coldline storage. Implement a lifecycle policy to delete the data from Cloud Storage after 3 years.

C.

Set up a scheduled query to export the data to Cloud Storage after 6 months. Write a stored procedure to delete the data from BigQuery after 3 years.

D.

Store all data in a single BigQuery table without partitioning or lifecycle policies.
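
Option B's two halves can both be expressed in SQL run through the BigQuery client: a month-partitioned table for the active period, then an export of aged partitions to a Coldline bucket. A sketch with hypothetical project, dataset, and bucket names:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Month-partitioned sales table (names are hypothetical).
    client.query("""
    CREATE TABLE IF NOT EXISTS `my-project.sales.orders`
    PARTITION BY DATE_TRUNC(order_date, MONTH) AS
    SELECT * FROM `my-project.staging.orders`
    """).result()

    # After 6 months, export aged rows to a Coldline bucket for the 3-year hold;
    # a lifecycle rule on the bucket deletes them after 3 years.
    client.query("""
    EXPORT DATA OPTIONS (
      uri = 'gs://sales-archive-coldline/orders/*.parquet',
      format = 'PARQUET'
    ) AS
    SELECT * FROM `my-project.sales.orders`
    WHERE order_date < DATE_SUB(CURRENT_DATE(), INTERVAL 6 MONTH)
    """).result()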

Question 16

Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular. You need to design a storage system that is simple and cost-effective. What should you do?

Options:

A.

Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.

B.

Create a single-region bucket with Autoclass enabled.

C.

Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.

D.

Create a single-region bucket with Archive as the default storage class.
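
Option B's Autoclass feature lets Cloud Storage move each object between storage classes based on its own access pattern, which suits files that cool down unevenly. A sketch, assuming a recent google-cloud-storage release that exposes the Autoclass setting; the bucket name and region are hypothetical:

    from google.cloud import storage

    client = storage.Client()

    bucket = client.bucket("video-uploads")  # hypothetical name
    bucket.autoclass_enabled = True  # per-object automatic class transitions
    bucket = client.create_bucket(bucket, location="us-central1")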

Question 17

Your organization is building a new application on Google Cloud. Several data files will need to be stored in Cloud Storage. Your organization has approved only two specific cloud regions where these data files can reside. You need to determine a Cloud Storage bucket strategy that includes automated high availability. What should you do?

Options:

A.

Create a dual-region bucket, and upload the files to this bucket.

B.

Create a single-region bucket in each of the two regions, and use the gcloud storage command to replicate the data across the buckets in both regions.

C.

Create a multi-region bucket, and upload the files to this bucket.

D.

Create a single-region bucket in each of the two regions, and use Storage Transfer Service to replicate the data across the buckets in both regions.
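
Option A's dual-region bucket keeps data in exactly two regions with automatic, managed replication between them. A minimal sketch; "NAM4" is the predefined Iowa plus South Carolina pair, so substitute the pair your organization approved:

    from google.cloud import storage

    client = storage.Client()

    # A dual-region bucket replicates across the two regions automatically;
    # no transfer jobs or scripts to maintain.
    bucket = client.create_bucket("approved-data-files", location="NAM4")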

Question 18

Your organization has a BigQuery dataset that contains sensitive employee information such as salaries and performance reviews. The payroll specialist in the HR department needs to have continuous access to aggregated performance data, but they do not need continuous access to other sensitive data. You need to grant the payroll specialist access to the performance data without granting them access to the entire dataset using the simplest and most secure approach. What should you do?

Options:

A.

Use authorized views to share query results with the payroll specialist.

B.

Create row-level and column-level permissions and policies on the table that contains performance data in the dataset. Provide the payroll specialist with the appropriate permission set.

C.

Create a table with the aggregated performance data. Use table-level permissions to grant access to the payroll specialist.

D.

Create a SQL query with the aggregated performance data. Export the results to an Avro file in a Cloud Storage bucket. Share the bucket with the payroll specialist.
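
Option C can be implemented entirely in SQL: materialize only the aggregate the payroll specialist needs, then grant access on that one table. A sketch with hypothetical project, dataset, and principal names:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Materialize only the aggregated performance data.
    client.query("""
    CREATE OR REPLACE TABLE `my-project.hr.performance_summary` AS
    SELECT department, AVG(performance_score) AS avg_score
    FROM `my-project.hr.performance_reviews`
    GROUP BY department
    """).result()

    # Table-level grant: access to this one table, nothing else in the dataset.
    client.query("""
    GRANT `roles/bigquery.dataViewer`
    ON TABLE `my-project.hr.performance_summary`
    TO 'user:payroll@example.com'
    """).result()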

Question 19

Your company is adopting BigQuery as their data warehouse platform. Your team has experienced Python developers. You need to recommend a fully-managed tool to build batch ETL processes that extract data from various source systems, transform the data using a variety of Google Cloud services, and load the transformed data into BigQuery. You want this tool to leverage your team’s Python skills. What should you do?

Options:

A.

Use Dataform with assertions.

B.

Deploy Cloud Data Fusion and included plugins.

C.

Use Cloud Composer with pre-built operators.

D.

Use Dataflow and pre-built templates.
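
Option C leans on the fact that Cloud Composer pipelines are Airflow DAGs written in Python, with pre-built Google operators doing the heavy lifting. A sketch of a two-step batch ETL DAG; bucket, table, and query names are hypothetical:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

    with DAG("batch_etl", start_date=datetime(2024, 1, 1),
             schedule_interval="@daily", catchup=False) as dag:
        load = GCSToBigQueryOperator(
            task_id="load_raw",
            bucket="source-exports",  # hypothetical bucket
            source_objects=["orders/*.csv"],
            destination_project_dataset_table="my-project.staging.orders",
            write_disposition="WRITE_TRUNCATE",
        )
        transform = BigQueryInsertJobOperator(
            task_id="transform",
            configuration={
                "query": {
                    "query": "SELECT * FROM `my-project.staging.orders` WHERE amount > 0",
                    "destinationTable": {
                        "projectId": "my-project",
                        "datasetId": "warehouse",
                        "tableId": "orders",
                    },
                    "useLegacySql": False,
                }
            },
        )
        load >> transform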

Question 20

Your organization needs to store historical customer order data. The data will only be accessed once a month for analysis and must be readily available within a few seconds when it is accessed. You need to choose a storage class that minimizes storage costs while ensuring that the data can be retrieved quickly. What should you do?

Options:

A.

Store the data in Cloud Storage using Nearline storage.

B.

Store the data in Cloud Storage using Coldline storage.

C.

Store the data in Cloud Storage using Standard storage.

D.

Store the data in Cloud Storage using Archive storage.

Question 21

You are predicting customer churn for a subscription-based service. You have a 50 PB historical customer dataset in BigQuery that includes demographics, subscription information, and engagement metrics. You want to build a churn prediction model with minimal overhead. You want to follow the Google-recommended approach. What should you do?

Options:

A.

Export the data from BigQuery to a local machine. Use scikit-learn in a Jupyter notebook to build the churn prediction model.

B.

Use Dataproc to create a Spark cluster. Use the Spark MLlib within the cluster to build the churn prediction model.

C.

Create a Looker dashboard that is connected to BigQuery. Use LookML to predict churn.

D.

Use the BigQuery Python client library in a Jupyter notebook to query and preprocess the data in BigQuery. Use the CREATE MODEL statement in BigQuery ML to train the churn prediction model.
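
Option D's key property is that BigQuery ML trains inside the warehouse, so the 50 PB dataset never has to be exported. A minimal sketch of the CREATE MODEL statement run through the Python client; model, table, and label names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Training runs inside BigQuery; no data movement required.
    client.query("""
    CREATE OR REPLACE MODEL `my-project.analytics.churn_model`
    OPTIONS (
      model_type = 'logistic_reg',
      input_label_cols = ['churned']
    ) AS
    SELECT * FROM `my-project.analytics.customer_features`
    """).result()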

Question 22

Your organization uses scheduled queries to perform transformations on data stored in BigQuery. You discover that one of your scheduled queries has failed. You need to troubleshoot the issue as quickly as possible. What should you do?

Options:

A.

Navigate to the Logs Explorer page in Cloud Logging. Use filters to find the failed job, and analyze the error details.

B.

Set up a log sink using the gcloud CLI to export BigQuery audit logs to BigQuery. Query those logs to identify the error associated with the failed job ID.

C.

Request access from your admin to the BigQuery INFORMATION_SCHEMA. Query the JOBS view with the failed job ID, and analyze the error details.

D.

Navigate to the Scheduled queries page in the Google Cloud console. Select the failed job, and analyze the error details.
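
Option D needs no code, but for comparison, option C's route can be scripted against the INFORMATION_SCHEMA JOBS view. A sketch; the region qualifier and job ID are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()

    # The job ID below is a placeholder for the failed scheduled-query job.
    rows = client.query("""
    SELECT job_id, state, error_result
    FROM `region-us`.INFORMATION_SCHEMA.JOBS
    WHERE job_id = 'scheduled_query_123'
    """).result()
    for row in rows:
        print(row.job_id, row.state, row.error_result)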

Question 23

You want to build a model to predict the likelihood of a customer clicking on an online advertisement. You have historical data in BigQuery that includes features such as user demographics, ad placement, and previous click behavior. After training the model, you want to generate predictions on new data. Which model type should you use in BigQuery ML?

Options:

A.

Linear regression

B.

Matrix factorization

C.

Logistic regression

D.

K-means clustering
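
Click-or-no-click is a binary classification task, which is what logistic regression (option C) models. Once such a model is trained, predictions on new data come from ML.PREDICT; a minimal sketch with hypothetical model and table names:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Batch predictions from a trained logistic regression model.
    rows = client.query("""
    SELECT user_id, predicted_clicked, predicted_clicked_probs
    FROM ML.PREDICT(
      MODEL `my-project.ads.ctr_model`,
      (SELECT * FROM `my-project.ads.new_impressions`)
    )
    """).result()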

Question 24

Your organization has decided to move their on-premises Apache Spark-based workload to Google Cloud. You want to be able to manage the code without needing to provision and manage your own cluster. What should you do?

Options:

A.

Migrate the Spark jobs to Dataproc Serverless.

B.

Configure a Google Kubernetes Engine cluster with Spark operators, and deploy the Spark jobs.

C.

Migrate the Spark jobs to Dataproc on Google Kubernetes Engine.

D.

Migrate the Spark jobs to Dataproc on Compute Engine.
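
Option A removes cluster management entirely: Dataproc Serverless provisions and scales workers per batch. A sketch of submitting an existing PySpark job as a serverless batch; project, region, and script path are hypothetical:

    from google.cloud import dataproc_v1

    client = dataproc_v1.BatchControllerClient(
        client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
    )

    # No cluster to size or manage; workers are provisioned per batch.
    batch = dataproc_v1.Batch(
        pyspark_batch=dataproc_v1.PySparkBatch(
            main_python_file_uri="gs://my-bucket/jobs/daily_aggregation.py"
        )
    )
    operation = client.create_batch(
        parent="projects/my-project/locations/us-central1",
        batch=batch,
        batch_id="daily-aggregation-001",
    )
    operation.result()  # waits for the batch to complete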

Question 25

Your organization has decided to migrate their existing enterprise data warehouse to BigQuery. The existing data pipeline tools already support connectors to BigQuery. You need to identify a data migration approach that optimizes migration speed. What should you do?

Options:

A.

Create a temporary file system to facilitate data transfer from the existing environment to Cloud Storage. Use Storage Transfer Service to migrate the data into BigQuery.

B.

Use the Cloud Data Fusion web interface to build data pipelines. Create a directed acyclic graph (DAG) that facilitates pipeline orchestration.

C.

Use the existing data pipeline tool’s BigQuery connector to reconfigure the data mapping.

D.

Use the BigQuery Data Transfer Service to recreate the data pipeline and migrate the data into BigQuery.

Question 26

You are a data analyst working with sensitive customer data in BigQuery. You need to ensure that only authorized personnel within your organization can query this data, while following the principle of least privilege. What should you do?

Options:

A.

Enable access control by using IAM roles.

B.

Update dataset privileges by using the SQL GRANT statement.

C.

Export the data to Cloud Storage, and use signed URLs to authorize access.

D.

Encrypt the data by using customer-managed encryption keys (CMEK).
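
Option A's IAM-based control can be applied at the dataset level through access entries, granting read-only access to just the authorized group. A sketch with hypothetical dataset and group names:

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("my-project.customer_data")  # hypothetical dataset

    # Least privilege: a read-only role on this dataset only, rather than a
    # project-wide grant.
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="READER",
            entity_type="groupByEmail",
            entity_id="authorized-analysts@example.com",
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])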
