Professional-Data-Engineer | Realistic Google Professional-Data-Engineer Test Online

Master the Google Professional Data Engineer exam content and be ready for exam-day success with this Passleader Professional-Data-Engineer study guide. We guarantee it! We make that a reality by giving you real Professional-Data-Engineer questions in our Google Professional-Data-Engineer braindumps. The latest 100% valid Google Professional-Data-Engineer exam questions are available on the page below. You can use our Google Professional-Data-Engineer braindumps to pass your exam.

We also offer free Professional-Data-Engineer dumps questions for you:

NEW QUESTION 1

Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next?

  • A. Check the dashboard application to see if it is not displaying correctly.
  • B. Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.
  • C. Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.
  • D. Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.

Answer: B
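
Answer B can be verified hands-on. The following is a minimal sketch, assuming the Apache Beam Python SDK and a hypothetical parse_message step standing in for the real pipeline logic; it pushes a fixed dataset through the transform and asserts on the output instead of reading from Cloud Pub/Sub.

import json

import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to


def parse_message(raw):
    # Hypothetical stand-in for the pipeline's real JSON parsing transform.
    return json.loads(raw)


fixed_inputs = [
    '{"metric": "revenue", "value": 10}',
    '{"metric": "revenue", "value": 20}',
]

with TestPipeline() as p:
    parsed = (
        p
        | beam.Create(fixed_inputs)   # fixed dataset instead of Pub/Sub
        | beam.Map(parse_message)
    )
    assert_that(parsed, equal_to([
        {"metric": "revenue", "value": 10},
        {"metric": "revenue", "value": 20},
    ]))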

NEW QUESTION 2

Which of these sources can you not load data into BigQuery from?

  • A. File upload
  • B. Google Drive
  • C. Google Cloud Storage
  • D. Google Cloud SQL

Answer: D

Explanation:
You can load data into BigQuery from a file upload, Google Cloud Storage, Google Drive, or Google Cloud Bigtable. It is not possible to load data into BigQuery directly from Google Cloud SQL. One way to get data from Cloud SQL to BigQuery would be to export data from Cloud SQL to Cloud Storage and then load it from there.
Reference: https://cloud.google.com/bigquery/loading-data
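
By contrast with Cloud SQL, loading from Cloud Storage is a one-call operation. The sketch below assumes the google-cloud-bigquery client library and hypothetical bucket and table names; it loads a CSV file that was previously exported from Cloud SQL.

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/exported_from_cloud_sql.csv",  # hypothetical URI
    "my_project.my_dataset.my_table",                   # hypothetical table
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete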

NEW QUESTION 3

What are two of the characteristics of using online prediction rather than batch prediction?

  • A. It is optimized to handle a high volume of data instances in a job and to run more complex models.
  • B. Predictions are returned in the response message.
  • C. Predictions are written to output files in a Cloud Storage location that you specify.
  • D. It is optimized to minimize the latency of serving predictions.

Answer: BD

Explanation:
Online prediction: optimized to minimize the latency of serving predictions; predictions are returned in the response message.
Batch prediction: optimized to handle a high volume of instances in a job and to run more complex models; predictions are written to output files in a Cloud Storage location that you specify.
Reference:
https://cloud.google.com/ml-engine/docs/prediction-overview#online_prediction_versus_batch_prediction
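
An online prediction call illustrates both correct options: the request is synchronous and the predictions come back in the response body. This is a minimal sketch that assumes the google-api-python-client library and a hypothetical deployed model.

from googleapiclient import discovery

service = discovery.build("ml", "v1")
model_name = "projects/my-project/models/my_model"   # hypothetical model path

response = service.projects().predict(
    name=model_name,
    body={"instances": [{"feature_1": 1.0, "feature_2": 3.5}]},
).execute()

print(response["predictions"])   # low-latency results returned inline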

NEW QUESTION 4

You operate a database that stores stock trades and an application that retrieves average stock price for a given company over an adjustable window of time. The data is stored in Cloud Bigtable where the datetime of the stock trade is the beginning of the row key. Your application has thousands of concurrent users, and you notice that performance is starting to degrade as more stocks are added. What should you do to improve the performance of your application?

  • A. Change the row key syntax in your Cloud Bigtable table to begin with the stock symbol.
  • B. Change the row key syntax in your Cloud Bigtable table to begin with a random number per second.
  • C. Change the data pipeline to use BigQuery for storing stock trades, and update your application.
  • D. Use Cloud Dataflow to write a summary of each day’s stock trades to an Avro file on Cloud Storage. Update your application to read from Cloud Storage and Cloud Bigtable to compute the responses.

Answer: A
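
A sketch of the corrected row key, assuming the google-cloud-bigtable library and a hypothetical instance and table: leading with the stock symbol keeps each company's trades in a contiguous key range while spreading load across symbols.

from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("trades-instance").table("stock_trades")  # hypothetical

symbol, trade_ts = "GOOG", "2021-06-01T14:30:05Z"
row_key = f"{symbol}#{trade_ts}".encode()   # stock symbol first, datetime second

row = table.direct_row(row_key)
row.set_cell("trade", "price", b"2405.11")
row.commit()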

NEW QUESTION 5

You are a retailer that wants to integrate your online sales capabilities with different in-home assistants, such as Google Home. You need to interpret customer voice commands and issue an order to the backend systems. Which solution should you choose?

  • A. Cloud Speech-to-Text API
  • B. Cloud Natural Language API
  • C. Dialogflow Enterprise Edition
  • D. Cloud AutoML Natural Language

Answer: C
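
Dialogflow turns a free-form voice or text command into a structured intent that the backend order system can act on. The following is a minimal sketch, assuming the google-cloud-dialogflow library and a hypothetical project, agent, and session.

from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-project", "session-123")  # hypothetical

text_input = dialogflow.TextInput(text="add two litres of milk to my order",
                                  language_code="en-US")
query_input = dialogflow.QueryInput(text=text_input)

response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)
print(response.query_result.intent.display_name)  # matched intent for the backend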

NEW QUESTION 6

You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery.
How should you securely run this workload?

  • A. Restrict the Google Cloud Storage bucket so only you can see the files
  • B. Grant the Project Owner role to a service account, and run the job with it
  • C. Use a service account with the ability to read the batch files and to write to BigQuery
  • D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery

Answer: C
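
A minimal sketch of the service-account approach, assuming the google-auth and google-cloud-bigquery libraries and a hypothetical key file for an account granted only read access to the batch-file bucket and write access to the target dataset; the job runs with least privilege rather than as the Project Owner.

from google.oauth2 import service_account
from google.cloud import bigquery

credentials = service_account.Credentials.from_service_account_file(
    "etl-batch-sa.json"   # hypothetical key for a least-privilege service account
)
client = bigquery.Client(credentials=credentials, project="my-project")
client.query("SELECT 1").result()   # runs with the service account's permissions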

NEW QUESTION 7

If a dataset contains rows with individual people and columns for year of birth, country, and income, how many of the columns are continuous and how many are categorical?

  • A. 1 continuous and 2 categorical
  • B. 3 categorical
  • C. 3 continuous
  • D. 2 continuous and 1 categorical

Answer: D

Explanation:
The columns can be grouped into two types—categorical and continuous columns:
A column is called categorical if its value can only be one of the categories in a finite set. For example, the native country of a person (U.S., India, Japan, etc.) or the education level (high school, college, etc.) are categorical columns.
A column is called continuous if its value can be any numerical value in a continuous range. For example, the capital gain of a person (e.g. $14,084) is a continuous column.
Year of birth and income are continuous columns. Country is a categorical column.
You could use bucketization to turn year of birth and/or income into categorical features, but the raw columns are continuous.
Reference: https://www.tensorflow.org/tutorials/wide#reading_the_census_data
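
A minimal illustration with pandas (the column values are hypothetical): the two numeric columns stay continuous, while country is marked categorical.

import pandas as pd

df = pd.DataFrame({
    "year_of_birth": [1984, 1992, 1975],      # continuous
    "country": ["U.S.", "India", "Japan"],    # categorical
    "income": [52000.0, 48000.0, 61000.0],    # continuous
})
df["country"] = df["country"].astype("category")
print(df.dtypes)   # int64, category, float64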

NEW QUESTION 8

Suppose you have a table that includes a nested column called "city" inside a column called "person", but when you try to submit the following query in BigQuery, it gives you an error.
SELECT person FROM `project1.example.table1` WHERE city = "London"
How would you correct the error?

  • A. Add ", UNNEST(person)" before the WHERE clause.
  • B. Change "person" to "person.city".
  • C. Change "person" to "city.person".
  • D. Add ", UNNEST(city)" before the WHERE clause.

Answer: A

Explanation:
To filter on the nested city field, you need to UNNEST(person) and join it to table1 using a comma (an implicit cross join). Reference:
https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sql#nested_repeated_resu
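
The corrected query, wrapped in a minimal google-cloud-bigquery sketch; the comma before UNNEST(person) flattens the nested records so the WHERE clause can reference city.

from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT person
FROM `project1.example.table1`, UNNEST(person)
WHERE city = "London"
"""
for row in client.query(sql):
    print(row)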

NEW QUESTION 9

You have a data pipeline with a Cloud Dataflow job that aggregates and writes time series metrics to Cloud Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and reduce the amount of time required to write the data. Which two actions should you take? (Choose two.)

  • A. Configure your Cloud Dataflow pipeline to use local execution
  • B. Increase the maximum number of Cloud Dataflow workers by setting maxNumWorkers in PipelineOptions
  • C. Increase the number of nodes in the Cloud Bigtable cluster
  • D. Modify your Cloud Dataflow pipeline to use the Flatten transform before writing to Cloud Bigtable
  • E. Modify your Cloud Dataflow pipeline to use the CoGroupByKey transform before writing to Cloud Bigtable

Answer: BC
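
A minimal sketch of option B in the Beam Python SDK (the Java flag maxNumWorkers corresponds to max_num_workers here); project, region, and bucket names are hypothetical. Option C, adding Cloud Bigtable nodes, is applied on the cluster itself rather than in the pipeline code.

from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                # hypothetical project
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # hypothetical bucket
    max_num_workers=50,                  # raise the autoscaling ceiling
)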

NEW QUESTION 10

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

  • A. Load the data every 30 minutes into a new partitioned table in BigQuery.
  • B. Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery.
  • C. Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore.
  • D. Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Answer: B
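
A minimal sketch of a federated (external) data source, assuming the google-cloud-bigquery client and hypothetical bucket, dataset, and table names; because queries read the file in Cloud Storage directly, the 30-minute updates are picked up without reloading any data.

from google.cloud import bigquery

client = bigquery.Client()
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://price-feed-bucket/average_prices.csv"]  # hypothetical
external_config.autodetect = True

table = bigquery.Table("my_project.prices.average_prices")  # hypothetical table
table.external_data_configuration = external_config
client.create_table(table, exists_ok=True)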

NEW QUESTION 11

You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

  • A. Update the current pipeline and use the drain flag.
  • B. Update the current pipeline and provide the transform mapping JSON object.
  • C. Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.
  • D. Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.

Answer: C

NEW QUESTION 12

Your team is working on a binary classification problem. You have trained a support vector machine (SVM) classifier with default parameters and received an area under the curve (AUC) of 0.87 on the validation set. You want to increase the AUC of the model. What should you do?

  • A. Perform hyperparameter tuning
  • B. Train a classifier with deep neural networks, because neural networks would always beat SVMs
  • C. Deploy the model and measure the real-world AUC; it’s always higher because of generalization
  • D. Scale predictions you get out of the model (tune a scaling factor as a hyperparameter) in order to get the highest AUC

Answer: A
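
A minimal sketch of option A with scikit-learn on synthetic data; the grid over C and gamma is scored directly on ROC AUC, which is how tuning lifts the validation AUC (rescaling predictions, by contrast, cannot change AUC because it is rank-based).

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVC(), param_grid, scoring="roc_auc", cv=5)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))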

NEW QUESTION 13

When you design a Google Cloud Bigtable schema, it is recommended that you:

  • A. Avoid schema designs that are based on NoSQL concepts
  • B. Create schema designs that are based on a relational database design
  • C. Avoid schema designs that require atomicity across rows
  • D. Create schema designs that require atomicity across rows

Answer: C

Explanation:
All operations are atomic at the row level. For example, if you update two rows in a table, it's possible that one row will be updated successfully and the other update will fail. Avoid schema designs that require atomicity across rows.
Reference: https://cloud.google.com/bigtable/docs/schema-design#row-keys

NEW QUESTION 14

Your company has a hybrid cloud initiative. You have a complex data pipeline that moves data between cloud provider services and leverages services from each of the cloud providers. Which cloud-native service should you use to orchestrate the entire pipeline?

  • A. Cloud Dataflow
  • B. Cloud Composer
  • C. Cloud Dataprep
  • D. Cloud Dataproc

Answer: B
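
Cloud Composer runs Apache Airflow, so the orchestration is expressed as a DAG. The following is a minimal sketch, assuming the airflow package; the operators are placeholders for the real cross-cloud tasks.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hybrid_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract_from_other_cloud",
                           bash_command="echo 'pull data from provider X'")
    load = BashOperator(task_id="load_into_bigquery",
                        bash_command="echo 'load results into BigQuery'")
    extract >> load   # Composer coordinates services on any cloud provider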

NEW QUESTION 15

You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:
  • No interaction by the user on the site for 1 hour
  • Has added more than $30 worth of products to the basket
  • Has not completed a transaction
You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?

  • A. Use a fixed-time window with a duration of 60 minutes.
  • B. Use a sliding time window with a duration of 60 minutes.
  • C. Use a session window with a gap time duration of 60 minutes.
  • D. Use a global window with a time based trigger with a delay of 60 minutes.

Answer: C
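
A minimal sketch of option C with the Apache Beam Python SDK; the events and timestamps are hypothetical, and the 60-minute gap means a user's session window only closes after an hour of inactivity.

import apache_beam as beam
from apache_beam.transforms.window import Sessions, TimestampedValue

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([
            ("user1", {"basket_total": 45.0}),   # hypothetical basket events
            ("user2", {"basket_total": 12.0}),
        ])
        | beam.Map(lambda kv: TimestampedValue(kv, 1622548800))  # hypothetical event time
        | "SessionWindow" >> beam.WindowInto(Sessions(gap_size=60 * 60))
        | beam.GroupByKey()
        | beam.Map(print)
    )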

NEW QUESTION 16

You have a job that you want to cancel. It is a streaming pipeline, and you want to ensure that any data that is in-flight is processed and written to the output. Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job?

  • A. Cancel
  • B. Drain
  • C. Stop
  • D. Finish

Answer: B

Explanation:
Using the Drain option to stop your job tells the Dataflow service to finish your job in its current state. Your job will immediately stop ingesting new data from input sources, but the Dataflow service will preserve any existing resources (such as worker instances) to finish processing and writing any buffered data in your pipeline.
Reference: https://cloud.google.com/dataflow/pipelines/stopping-a-pipeline

NEW QUESTION 17

Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do?

  • A. Redefine the schema by evenly distributing reads and writes across the row space of the table.
  • B. The performance issue should be resolved over time as the size of the Cloud Bigtable cluster is increased.
  • C. Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.
  • D. Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.

Answer: A

NEW QUESTION 18

By default, which of the following windowing behaviors does Dataflow apply to unbounded data sets?

  • A. Windows at every 100 MB of data
  • B. Single, Global Window
  • C. Windows at every 1 minute
  • D. Windows at every 10 minutes

Answer: B

Explanation:
Dataflow's default windowing behavior is to assign all elements of a PCollection to a single, global window, even for unbounded PCollections.
Reference: https://cloud.google.com/dataflow/model/pcollection

NEW QUESTION 19

Which of the following is NOT a valid use case to select HDD (hard disk drives) as the storage for Google Cloud Bigtable?

  • A. You expect to store at least 10 TB of data.
  • B. You will mostly run batch workloads with scans and writes, rather than frequently executing random reads of a small number of rows.
  • C. You need to integrate with Google BigQuery.
  • D. You will not use the data to back a user-facing or latency-sensitive application.

Answer: C

Explanation:
For example, if you plan to store extensive historical data for a large number of remote-sensing devices and then use the data to generate daily reports, the cost savings for HDD storage may justify the performance tradeoff. On the other hand, if you plan to use the data to display a real-time dashboard, it probably would not make sense to use HDD storage—reads would be much more frequent in this case, and reads are much slower with HDD storage.
Reference: https://cloud.google.com/bigtable/docs/choosing-ssd-hdd
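
The storage type is chosen when the Cloud Bigtable instance is created. A minimal sketch, assuming the google-cloud-bigtable admin client and hypothetical instance, cluster, and zone names, picks HDD for a batch-oriented, latency-tolerant workload.

from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("sensor-archive",
                           instance_type=enums.Instance.Type.PRODUCTION)
cluster = instance.cluster(
    "sensor-archive-c1",
    location_id="us-central1-b",
    serve_nodes=3,
    default_storage_type=enums.StorageType.HDD,   # batch scans, not low-latency reads
)
operation = instance.create(clusters=[cluster])
operation.result(timeout=300)   # wait for instance creation to finish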

NEW QUESTION 20

Which of the following statements is NOT true regarding Bigtable access roles?

  • A. Using IAM roles, you cannot give a user access to only one table in a project, rather than all tables in a project.
  • B. To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table.
  • C. You can configure access control only at the project level.
  • D. To give a user access to only one table in a project, you must configure access through your application.

Answer: B

Explanation:
For Cloud Bigtable, you can configure access control at the project level. For example, you can grant the ability to:
Read from, but not write to, any table within the project.
Read from and write to any table within the project, but not manage instances.
Read from and write to any table within the project, and manage instances.
Reference: https://cloud.google.com/bigtable/docs/access-control

NEW QUESTION 21

You decided to use Cloud Datastore to ingest vehicle telemetry data in real time. You want to build a storage system that will account for the long-term data growth, while keeping the costs low. You also want to create snapshots of the data periodically, so that you can make a point-in-time (PIT) recovery, or clone a copy of the data for Cloud Datastore in a different environment. You want to archive these snapshots for a long time. Which two methods can accomplish this? Choose 2 answers.

  • A. Use managed export, and store the data in a Cloud Storage bucket using Nearline or Coldline class.
  • B. Use managed export, and then import to Cloud Datastore in a separate project under a unique namespace reserved for that export.
  • C. Use managed export, and then import the data into a BigQuery table created just for that export, and delete temporary export files.
  • D. Write an application that uses Cloud Datastore client libraries to read all the entities. Treat each entity as a BigQuery table row via BigQuery streaming insert. Assign an export timestamp for each export, and attach it as an extra column for each row. Make sure that the BigQuery table is partitioned using the export timestamp column.
  • E. Write an application that uses Cloud Datastore client libraries to read all the entities. Format the exported data into a JSON file. Apply compression before storing the data in Cloud Source Repositories.

Answer: AB
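
For option A, the managed export writes its output to a Cloud Storage bucket, so keeping the archive cheap is mostly a matter of the bucket's storage class. A minimal sketch, assuming the google-cloud-storage client and a hypothetical bucket name and location.

from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.bucket("datastore-snapshots-archive")   # hypothetical bucket name
bucket.storage_class = "NEARLINE"                       # or "COLDLINE" for colder archives
client.create_bucket(bucket, location="us-central1")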

NEW QUESTION 22

You need to set access to BigQuery for different departments within your company. Your solution should comply with the following requirements:
Professional-Data-Engineer dumps exhibit Each department should have access only to their data.
Professional-Data-Engineer dumps exhibit Each department will have one or more leads who need to be able to create and update tables and provide them to their team.
Professional-Data-Engineer dumps exhibit Each department has data analysts who need to be able to query but not modify data.
How should you set access to the data in BigQuery?

  • A. Create a dataset for each department. Assign the department leads the role of OWNER, and assign the data analysts the role of WRITER on their dataset.
  • B. Create a dataset for each department. Assign the department leads the role of WRITER, and assign the data analysts the role of READER on their dataset.
  • C. Create a table for each department. Assign the department leads the role of Owner, and assign the data analysts the role of Editor on the project the table is in.
  • D. Create a table for each department. Assign the department leads the role of Editor, and assign the data analysts the role of Viewer on the project the table is in.

Answer: B
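
A minimal sketch of option B, assuming the google-cloud-bigquery client and hypothetical dataset and group addresses: the dataset's access entries grant WRITER to the department leads and READER to the analysts.

from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("my_project.finance_dept")   # hypothetical dataset

entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(
    role="WRITER", entity_type="groupByEmail",
    entity_id="finance-leads@example.com"))      # hypothetical leads group
entries.append(bigquery.AccessEntry(
    role="READER", entity_type="groupByEmail",
    entity_id="finance-analysts@example.com"))   # hypothetical analysts group

dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])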

NEW QUESTION 23
......

Thanks for reading the newest Professional-Data-Engineer exam dumps! We recommend you to try the PREMIUM Dumps-hub.com Professional-Data-Engineer dumps in VCE and PDF here: https://www.dumps-hub.com/Professional-Data-Engineer-dumps.html (239 Q&As Dumps)