Professional-Data-Engineer | Latest Google Professional Data Engineer Exam Professional-Data-Engineer Exam Engine

Master the Google Professional Data Engineer (Professional-Data-Engineer) exam content and be ready for exam-day success with these Examcollection Professional-Data-Engineer practice questions. We guarantee it! We make it a reality and give you real Professional-Data-Engineer questions in our Google Professional-Data-Engineer braindumps. The latest 100% valid Google Professional-Data-Engineer exam questions and dumps are on the page below. You can use our Google Professional-Data-Engineer braindumps and pass your exam.

We also have free Professional-Data-Engineer dumps questions for you:

NEW QUESTION 1

Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)

  • A. The wide model is used for memorization, while the deep model is used for generalization.
  • B. A good use for the wide and deep model is a recommender system.
  • C. The wide model is used for generalization, while the deep model is used for memorization.
  • D. A good use for the wide and deep model is a small-scale linear regression problem.

Answer: AB

Explanation:
Can we teach computers to learn like humans do, by combining the power of memorization and generalization? It's not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It's useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
Reference: https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html
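
As an illustration only (not part of the exam question), here is a minimal sketch of a Wide & Deep model using TensorFlow's estimator API; the feature names, vocabularies, and sizes are invented for the example:

import tensorflow as tf

# Wide (linear) part: sparse categorical features, good for memorization.
country = tf.feature_column.categorical_column_with_vocabulary_list(
    "country", ["US", "DE", "JP"])  # hypothetical feature
item_id = tf.feature_column.categorical_column_with_hash_bucket(
    "item_id", hash_bucket_size=10000)  # hypothetical feature
wide_columns = [country, item_id]

# Deep part: dense embeddings and numeric inputs, good for generalization.
deep_columns = [
    tf.feature_column.embedding_column(item_id, dimension=16),
    tf.feature_column.numeric_column("age"),
]

# Jointly trained wide linear model plus deep neural network.
model = tf.estimator.DNNLinearCombinedClassifier(
    linear_feature_columns=wide_columns,
    dnn_feature_columns=deep_columns,
    dnn_hidden_units=[128, 64],
)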

NEW QUESTION 2

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Records arrive every 15 minutes, and each contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

  • A. Rowkey: date#device_id; Column data: data_point
  • B. Rowkey: date; Column data: device_id, data_point
  • C. Rowkey: device_id; Column data: date, data_point
  • D. Rowkey: data_point; Column data: device_id, date
  • E. Rowkey: date#data_point; Column data: device_id

Answer: A

Explanation:
The row key should be built from the fields used in the most common query. With a row key of date#device_id, all of the data points for a given device on a given day sit under a single key prefix and can be retrieved with one short row-range scan.
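
For illustration only: a minimal sketch of the most common query against such a schema with the google-cloud-bigtable Python client. The project, instance, table, and column family names are invented, and the row key follows the date#device_id layout from option A:

from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("device_telemetry")

# All data for device-42 on 2025-01-01: a bounded scan over one key prefix.
row_set = RowSet()
row_set.add_row_range_from_keys(
    start_key=b"20250101#device-42", end_key=b"20250101#device-42\xff")

for row in table.read_rows(row_set=row_set):
    cells = row.cells["data"]  # hypothetical column family holding data_point
    print(row.row_key, {qualifier: col[0].value for qualifier, col in cells.items()})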

NEW QUESTION 3

You use a dataset in BigQuery for analysis. You want to provide third-party companies with access to the same dataset. You need to keep the costs of data sharing low and ensure that the data is current. Which solution should you choose?

  • A. Create an authorized view on the BigQuery table to control data access, and provide third-party companies with access to that view.
  • B. Use Cloud Scheduler to export the data on a regular basis to Cloud Storage, and provide third-party companies with access to the bucket.
  • C. Create a separate dataset in BigQuery that contains the relevant data to share, and provide third-party companies with access to the new dataset.
  • D. Create a Cloud Dataflow job that reads the data in frequent time intervals, and writes it to the relevant BigQuery dataset or Cloud Storage bucket for third-party companies to use.

Answer: A

Explanation:
An authorized view shares the data without exporting or copying it, so the shared data is always current and there is no additional storage or export cost.
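
As a sketch only (project, dataset, table, and query names are made up), the google-cloud-bigquery client can create the view in a dataset shared with the third parties and then authorize it against the private source dataset:

from google.cloud import bigquery

client = bigquery.Client()

# Create the view in a dataset the third-party companies can read.
view = bigquery.Table("my-project.shared_views.orders_view")
view.view_query = "SELECT order_id, amount FROM `my-project.private.orders`"
view = client.create_table(view)

# Authorize the view so it can query the private source dataset.
source = client.get_dataset("my-project.private")
entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])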

NEW QUESTION 4

Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to Google Cloud for analysis. You want to design a solution that allows your world-wide marketing teams to perform ANSI SQL and other types of analysis on up-to-date YouTube channels log data. How should you set up the log data transfer into Google Cloud?

  • A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
  • B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination.
  • C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
  • D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.

Answer: B

NEW QUESTION 5

Google Cloud Bigtable indexes a single value in each row. This value is called the _____.

  • A. primary key
  • B. unique key
  • C. row key
  • D. master key

Answer: C

Explanation:
Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, allowing you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key.
Reference: https://cloud.google.com/bigtable/docs/overview
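
Illustrative only (the instance, table, column family, and row key are hypothetical): writing and reading a row by its row key with the Python client:

import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("device_telemetry")

# The row key is the only indexed value, so design it around your reads.
row = table.direct_row(b"device-42#20250101")
row.set_cell("data", b"temp_c", b"21.5", timestamp=datetime.datetime.utcnow())
row.commit()

# Reads address rows through the same key.
print(table.read_row(b"device-42#20250101"))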

NEW QUESTION 6

Which of the following is not possible using primitive roles?

  • A. Give a user viewer access to BigQuery and owner access to Google Compute Engine instances.
  • B. Give UserA owner access and UserB editor access for all datasets in a project.
  • C. Give a user access to view all datasets in a project, but not run queries on them.
  • D. Give GroupA owner access and GroupB editor access for all datasets in a project.

Answer: C

Explanation:
Primitive roles can be used to give owner, editor, or viewer access to a user or group, but they can't be used to separate data access permissions from job-running permissions.
Reference: https://cloud.google.com/bigquery/docs/access-control#primitive_iam_roles

NEW QUESTION 7

The Dataflow SDKs have been recently transitioned into which Apache service?

  • A. Apache Spark
  • B. Apache Hadoop
  • C. Apache Kafka
  • D. Apache Beam

Answer: D

Explanation:
Dataflow SDKs are being transitioned to Apache Beam, as per the latest Google directive.
Reference: https://cloud.google.com/dataflow/docs/
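
Purely as an illustration of the SDK the Dataflow SDKs became, a minimal Apache Beam pipeline in Python (runs locally on the DirectRunner):

import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["data engineer", "data analyst"])
        | "Split" >> beam.FlatMap(str.split)
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )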

NEW QUESTION 8

You have a query that filters a BigQuery table using a WHERE clause on timestamp and ID columns. By using bq query --dry_run you learn that the query triggers a full scan of the table, even though the filter on timestamp and ID selects a tiny fraction of the overall data. You want to reduce the amount of data scanned by BigQuery with minimal changes to existing SQL queries. What should you do?

  • A. Create a separate table for each ID.
  • B. Use the LIMIT keyword to reduce the number of rows returned.
  • C. Recreate the table with a partitioning column and clustering column.
  • D. Use the bq query --maximum_bytes_billed flag to restrict the number of bytes billed.

Answer: C

Explanation:
LIMIT does not reduce the amount of data BigQuery scans. Recreating the table with the timestamp column as the partitioning column and the ID column as a clustering column lets BigQuery prune partitions and blocks, so the existing WHERE clauses scan far less data with minimal changes to the SQL.
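
A sketch of option C with the google-cloud-bigquery client; the project, table, and column names are invented, and the dry-run check mirrors the bq --dry_run flag from the question:

from google.cloud import bigquery

client = bigquery.Client()

# Recreate the table partitioned on the timestamp column and clustered on id.
table = bigquery.Table(
    "my-project.analytics.events_v2",
    schema=[
        bigquery.SchemaField("ts", "TIMESTAMP"),
        bigquery.SchemaField("id", "STRING"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(field="ts")
table.clustering_fields = ["id"]
client.create_table(table)

# The unchanged WHERE clause now scans only the matching partitions and blocks.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT payload FROM `my-project.analytics.events_v2` "
    "WHERE ts >= TIMESTAMP('2025-01-01') AND id = 'abc'",
    job_config=job_config,
)
print("Bytes that would be scanned:", job.total_bytes_processed)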

NEW QUESTION 9

You work for a bank. You have a labelled dataset that contains information on already granted loan applications and whether those applications defaulted. You have been asked to train a model to predict default rates for credit applicants.
What should you do?

  • A. Increase the size of the dataset by collecting additional data.
  • B. Train a linear regression to predict a credit default risk score.
  • C. Remove the bias from the data and collect applications that have been declined loans.
  • D. Match loan applicants with their social profiles to enable feature engineering.

Answer: B

NEW QUESTION 10

You are building a model to make clothing recommendations. You know a user’s fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available.
How should you use this data to train the model?

  • A. Continuously retrain the model on just the new data.
  • B. Continuously retrain the model on a combination of existing data and the new data.
  • C. Train on the existing data while using the new data as your test set.
  • D. Train on the new data while using the existing data as your test set.

Answer: B

Explanation:
Because fashion preferences drift over time, the model should be retrained continuously on a combination of the existing data and the newly streamed data, so it adapts to the change without discarding what it has already learned.

NEW QUESTION 11

An organization maintains a Google BigQuery dataset that contains tables with user-level data. They want to expose aggregates of this data to other Google Cloud projects, while still controlling access to the user-level data. Additionally, they need to minimize their overall storage cost and ensure the analysis cost for other projects is assigned to those projects. What should they do?

  • A. Create and share an authorized view that provides the aggregate results.
  • B. Create and share a new dataset and view that provides the aggregate results.
  • C. Create and share a new dataset and table that contains the aggregate results.
  • D. Create dataViewer Identity and Access Management (IAM) roles on the dataset to enable sharing.

Answer: A

Explanation:
An authorized view exposes only the aggregate results without duplicating the underlying storage, and queries run from other projects are billed to those projects.
Reference: https://cloud.google.com/bigquery/docs/access-control

NEW QUESTION 12

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

  • A. Add capacity (memory and disk space) to the database server by the order of 200.
  • B. Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
  • C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
  • D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Answer: C

Explanation:
Normalizing the single master patient-record table into separate patient and visits tables (plus any other needed tables) removes the expensive self-joins, so the reports can be produced with ordinary joins even at 100 times the data volume.

NEW QUESTION 13

You are building a new data pipeline to share data between two different types of applications: job generators and job runners. Your solution must scale to accommodate increases in usage and must accommodate the addition of new applications without negatively affecting the performance of existing ones. What should you do?

  • A. Create an API using App Engine to receive and send messages to the applications
  • B. Use a Cloud Pub/Sub topic to publish jobs, and use subscriptions to execute them
  • C. Create a table on Cloud SQL, and insert and delete rows with the job information
  • D. Create a table on Cloud Spanner, and insert and delete rows with the job information

Answer: B

Explanation:
A Cloud Pub/Sub topic decouples the job generators (publishers) from the job runners (subscribers): it scales automatically with load, and new applications can be added as additional publishers or subscriptions without affecting existing ones.
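
A minimal sketch of option B with the google-cloud-pubsub client; the project, topic, and subscription names are invented:

from google.cloud import pubsub_v1

PROJECT = "my-project"

# Job generators publish work items to a topic.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "jobs")
publisher.publish(topic_path, b'{"job_id": "123", "action": "resize"}').result()

# Each job-runner application consumes from its own subscription.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT, "jobs-runner-a")

def handle(message):
    print("Running job:", message.data)
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
try:
    streaming_pull.result(timeout=30)  # keep pulling for a short demo window
except Exception:
    streaming_pull.cancel()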

NEW QUESTION 14

Which of these is not a supported method of putting data into a partitioned table?

  • A. If you have existing data in a separate file for each day, then create a partitioned table and upload each file into the appropriate partition.
  • B. Run a query to get the records for a specific day from an existing table and for the destination table, specify a partitioned table ending with the day in the format "$YYYYMMDD".
  • C. Create a partitioned table and stream new records to it every day.
  • D. Use ORDER BY to put a table's rows into chronological order and then change the table's type to "Partitioned".

Answer: D

Explanation:
You cannot change an existing table into a partitioned table. You must create a partitioned table from scratch. Then you can either stream data into it every day and the data will automatically be put in the right partition, or you can load data into a specific partition by using "$YYYYMMDD" at the end of the table name.
Reference: https://cloud.google.com/bigquery/docs/partitioned-tables

NEW QUESTION 15

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour.
The data scientists have written the following code to read the data for the new key features in the logs:
BigQueryIO.Read
.named("ReadLogData")
.from("clouddataflow-readonly:samples.log_data")
You want to improve the performance of this data read. What should you do?

  • A. Specify the TableReference object in the code.
  • B. Use .fromQuery operation to read specific fields from the table.
  • C. Use of both the Google BigQuery TableSchema and TableFieldSchema classes.
  • D. Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.

Answer: B

Explanation:
Reading with .fromQuery and selecting only the fields needed for the new features reduces the amount of data the pipeline reads from BigQuery and therefore improves read performance.
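
The snippet in the question uses the older Java Dataflow SDK. As a rough Apache Beam Python analogue (the selected field names and the temporary bucket are assumptions), reading through a query that returns only the needed fields looks like this:

import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadLogData" >> beam.io.ReadFromBigQuery(
            query="SELECT timestamp, key_feature "
                  "FROM `clouddataflow-readonly.samples.log_data`",
            use_standard_sql=True,
            gcs_location="gs://my-bucket/tmp",  # staging area for the read
        )
        | "Inspect" >> beam.Map(print)  # each element is a dict per row
    )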

NEW QUESTION 16

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

  • A. Issue a command to restart the database servers.
  • B. Retry the query with exponential backoff, up to a cap of 15 minutes.
  • C. Retry the query every second until it comes back online to minimize staleness of data.
  • D. Reduce the query frequency to once every hour until the database comes back online.

Answer: B
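
A small illustrative sketch of retrying with exponential backoff capped at 15 minutes; run_query stands in for the database call:

import random
import time

def query_with_backoff(run_query, cap_seconds=15 * 60):
    """Retry run_query with exponential backoff, capping the delay at cap_seconds."""
    delay = 1
    while True:
        try:
            return run_query()
        except Exception as error:  # e.g. a database connection error
            wait = min(delay, cap_seconds) + random.uniform(0, 1)  # add jitter
            print(f"Query failed ({error!r}); retrying in {wait:.1f}s")
            time.sleep(wait)
            delay *= 2  # exponential growth, bounded by the cap above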

NEW QUESTION 17

Which of the following are feature engineering techniques? (Select 2 answers)

  • A. Hidden feature layers
  • B. Feature prioritization
  • C. Crossed feature columns
  • D. Bucketization of a continuous feature

Answer: CD

Explanation:
Selecting and crafting the right set of feature columns is key to learning an effective model. Bucketization is a process of dividing the entire range of a continuous feature into a set of consecutive
bins/buckets, and then converting the original numerical feature into a bucket ID (as a categorical feature) depending on which bucket that value falls into.
Using each base feature column separately may not be enough to explain the data. To learn the differences between different feature combinations, we can add crossed feature columns to the model.
Reference: https://www.tensorflow.org/tutorials/wide#selecting_and_engineering_features_for_the_model
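
For illustration (the feature names, vocabulary, and bucket boundaries are made up), both techniques expressed as TensorFlow feature columns:

import tensorflow as tf

# Bucketization: turn a continuous feature into a categorical bucket ID.
age = tf.feature_column.numeric_column("age")
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[18, 25, 35, 50, 65])

# Crossed feature column: learn interactions between feature combinations.
country = tf.feature_column.categorical_column_with_vocabulary_list(
    "country", ["US", "DE", "JP"])
age_x_country = tf.feature_column.crossed_column(
    [age_buckets, country], hash_bucket_size=1000)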

NEW QUESTION 18

All Google Cloud Bigtable client requests go through a front-end server _____ they are sent to a Cloud Bigtable node.

  • A. before
  • B. after
  • C. only if
  • D. once

Answer: A

Explanation:
In a Cloud Bigtable architecture all client requests go through a front-end server before they are sent to a Cloud Bigtable node.
The nodes are organized into a Cloud Bigtable cluster, which belongs to a Cloud Bigtable instance, which is a container for the cluster. Each node in the cluster handles a subset of the requests to the cluster.
When additional nodes are added to a cluster, you can increase the number of simultaneous requests that the cluster can handle, as well as the maximum throughput for the entire cluster.
Reference: https://cloud.google.com/bigtable/docs/overview
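
Illustrative only (the instance and cluster IDs are hypothetical): resizing a cluster's node count with the Python admin client, which raises the throughput described above:

from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

cluster = instance.cluster("my-cluster")
cluster.reload()          # fetch the current configuration
cluster.serve_nodes = 6   # more nodes handle more simultaneous requests
cluster.update()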

NEW QUESTION 19

Cloud Bigtable is a recommended option for storing very large amounts of _____ ?

  • A. multi-keyed data with very high latency
  • B. multi-keyed data with very low latency
  • C. single-keyed data with very low latency
  • D. single-keyed data with very high latency

Answer: C

Explanation:
Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, allowing you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations.
Reference: https://cloud.google.com/bigtable/docs/overview

NEW QUESTION 20

You are designing a data processing pipeline. The pipeline must be able to scale automatically as load increases. Messages must be processed at least once, and must be ordered within windows of 1 hour. How should you design the solution?

  • A. Use Apache Kafka for message ingestion and use Cloud Dataproc for streaming analysis.
  • B. Use Apache Kafka for message ingestion and use Cloud Dataflow for streaming analysis.
  • C. Use Cloud Pub/Sub for message ingestion and Cloud Dataproc for streaming analysis.
  • D. Use Cloud Pub/Sub for message ingestion and Cloud Dataflow for streaming analysis.

Answer: D

Explanation:
Cloud Pub/Sub provides at-least-once message delivery and scales automatically with load, and Cloud Dataflow autoscales and supports windowing, so messages can be grouped and processed within 1-hour windows.
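
A rough sketch of option D as an Apache Beam streaming pipeline (the topic name is a placeholder); run on Cloud Dataflow, it autoscales with load and groups messages into 1-hour windows:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadMessages" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/events")
        | "Decode" >> beam.Map(lambda data: data.decode("utf-8"))
        | "HourWindows" >> beam.WindowInto(window.FixedWindows(60 * 60))
        | "CountPerWindow" >> beam.CombineGlobally(
            beam.combiners.CountCombineFn()).without_defaults()
        | "Print" >> beam.Map(print)
    )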

NEW QUESTION 21

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?

  • A. Linear regression
  • B. Logistic classification
  • C. Recurrent neural network
  • D. Feedforward neural network

Answer: A

NEW QUESTION 22

Which of the following job types are supported by Cloud Dataproc (select 3 answers)?

  • A. Hive
  • B. Pig
  • C. YARN
  • D. Spark

Answer: ABD

Explanation:
Cloud Dataproc provides out-of-the-box and end-to-end support for many of the most popular job types, including Spark, Spark SQL, PySpark, MapReduce, Hive, and Pig jobs.
Reference: https://cloud.google.com/dataproc/docs/resources/faq#what_type_of_jobs_can_i_run
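
As a sketch only (the project, region, cluster, and bucket are made up), submitting one of those job types, a PySpark job, with the google-cloud-dataproc client:

from google.cloud import dataproc_v1

REGION = "us-central1"

job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"})

job = {
    "placement": {"cluster_name": "my-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/wordcount.py"},
}

operation = job_client.submit_job_as_operation(
    request={"project_id": "my-project", "region": REGION, "job": job})
result = operation.result()
print("Job finished:", result.reference.job_id)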

NEW QUESTION 23
......

Recommended! Get the full Professional-Data-Engineer dumps in VCE and PDF from Dumpscollection.com. Welcome to download: https://www.dumpscollection.net/dumps/Professional-Data-Engineer/ (New 239 Q&As Version)