Free AWS-Certified-Data-Analytics-Specialty Exam Braindumps

Pass your AWS Certified Data Analytics - Specialty exam with these free Questions and Answers

Page 2 of 32
QUESTION 1

A software company wants to use instrumentation data to detect and resolve errors to improve application recovery time. The company requires API usage anomalies, like error rate and response time spikes, to be detected in near-real time (NRT). The company also requires that data analysts have access to dashboards for log analysis in NRT.
Which solution meets these requirements?

  1. A. Use Amazon Kinesis Data Firehose as the data transport layer for logging data. Use Amazon Kinesis Data Analytics to uncover the NRT API usage anomalies. Use Kinesis Data Firehose to deliver log data to Amazon OpenSearch Service (Amazon Elasticsearch Service) for search, log analytics, and application monitoring. Use OpenSearch Dashboards (Kibana) in Amazon OpenSearch Service (Amazon Elasticsearch Service) for the dashboards.
  2. B. Use Amazon Kinesis Data Analytics as the data transport layer for logging data. Use Amazon Kinesis Data Streams to uncover NRT monitoring metrics. Use Amazon Kinesis Data Firehose to deliver log data to Amazon OpenSearch Service (Amazon Elasticsearch Service) for search, log analytics, and application monitoring. Use Amazon QuickSight for the dashboards.
  3. C. Use Amazon Kinesis Data Analytics as the data transport layer for logging data and to uncover NRT monitoring metrics. Use Amazon Kinesis Data Firehose to deliver log data to Amazon OpenSearch Service (Amazon Elasticsearch Service) for search, log analytics, and application monitoring. Use OpenSearch Dashboards (Kibana) in Amazon OpenSearch Service (Amazon Elasticsearch Service) for the dashboards.
  4. D. Use Amazon Kinesis Data Firehose as the data transport layer for logging data. Use Amazon Kinesis Data Analytics to uncover NRT monitoring metrics. Use Amazon Kinesis Data Streams to deliver log data to Amazon OpenSearch Service (Amazon Elasticsearch Service) for search, log analytics, and application monitoring. Use Amazon QuickSight for the dashboards.

Correct Answer: C
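To make the anomaly-detection idea concrete: in Kinesis Data Analytics this is typically done with the built-in `RANDOM_CUT_FOREST` SQL function over the stream. The sketch below is not that implementation; it is a plain-Python illustration of the underlying idea of flagging an error-rate or latency spike against a rolling window, with the window size and threshold chosen purely for illustration.

```python
from collections import deque
from statistics import mean, stdev

def make_spike_detector(window=20, threshold=3.0):
    """Return a callable that flags values far outside a rolling window.

    Illustrative stand-in for streaming anomaly detection (the kind
    Kinesis Data Analytics performs on metrics like error rate); the
    window size and sigma threshold here are arbitrary examples.
    """
    history = deque(maxlen=window)

    def observe(value):
        is_spike = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            # Flag values more than `threshold` standard deviations out.
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                is_spike = True
        history.append(value)
        return is_spike

    return observe
```

Feeding it a steady error rate followed by a sudden jump flags only the jump, which is the near-real-time behavior the question asks for.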

QUESTION 2

A market data company aggregates external data sources to create a detailed view of product consumption in different countries. The company wants to sell this data to external parties through a subscription. To achieve this goal, the company needs to make its data securely available to external parties who are also AWS users.
What should the company do to meet these requirements with the LEAST operational overhead?

  1. A. Store the data in Amazon S3. Share the data by using presigned URLs for security.
  2. B. Store the data in Amazon S3. Share the data by using S3 bucket ACLs.
  3. C. Upload the data to AWS Data Exchange for storage. Share the data by using presigned URLs for security.
  4. D. Upload the data to AWS Data Exchange for storage. Share the data by using the AWS Data Exchange sharing wizard.

Correct Answer: A
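Presigned URLs come up in two of the options, so it is worth seeing the mechanism. In practice you would call `generate_presigned_url` on a boto3 S3 client; the sketch below instead shows the general idea behind any presigned URL, an expiry plus an HMAC that makes the link self-authorizing, and is not AWS SigV4 (S3's real scheme builds a more involved canonical request). The key and paths are made up for illustration.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-signing-key"  # stands in for the bucket owner's credentials

def presign(path, expires_in=300, now=None):
    """Build a link carrying its own expiry and signature."""
    expires = int(now if now is not None else time.time()) + expires_in
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'signature': sig})}"

def verify(url, now=None):
    """Accept the link only if it is unexpired and untampered."""
    path, _, query = url.partition("?")
    params = dict(p.split("=") for p in query.split("&"))
    expires = int(params["expires"])
    if (now if now is not None else time.time()) > expires:
        return False  # link has expired
    msg = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["signature"])
```

The operational-overhead contrast in the question follows from this: every presigned URL must be minted and re-minted by the seller, whereas AWS Data Exchange manages subscriber entitlement for you.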

QUESTION 3

A data analyst runs a large number of data manipulation language (DML) queries by using Amazon Athena with the JDBC driver. Recently, a query failed after it ran for 30 minutes. The query returned the following message: java.sql.SQLException: Query timeout.
The data analyst does not immediately need the query results. However, the data analyst needs a long-term solution for this problem.
Which solution will meet these requirements?

  1. A. Split the query into smaller queries to search smaller subsets of data.
  2. B. In the settings for Athena, adjust the DML query timeout limit.
  3. C. In the Service Quotas console, request an increase for the DML query timeout.
  4. D. Save the tables as compressed .csv files.

Correct Answer: A
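To make answer A concrete, one common way to split a long-running Athena DML query is to bound it by date range so each piece scans less data and finishes well within the timeout. The sketch below only builds the smaller query strings; the table and column names (`event_date`) are illustrative, not from the question.

```python
from datetime import date, timedelta

def split_query(table, start, end, days_per_query=7):
    """Break one date-ranged query into several smaller ones.

    Each generated query covers at most `days_per_query` days, so no
    single execution has to scan the full range at once.
    """
    queries = []
    cursor = start
    while cursor <= end:
        chunk_end = min(cursor + timedelta(days=days_per_query - 1), end)
        queries.append(
            f"SELECT * FROM {table} "
            f"WHERE event_date BETWEEN DATE '{cursor}' AND DATE '{chunk_end}'"
        )
        cursor = chunk_end + timedelta(days=1)
    return queries
```

Since the analyst does not need the results immediately, the smaller queries can simply be run in sequence.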

QUESTION 4

A global company has different sub-organizations, and each sub-organization sells its products and services in various countries. The company's senior leadership wants to quickly identify which sub-organization is the strongest performer in each country. All sales data is stored in Amazon S3 in Parquet format.
Which approach can provide the visuals that senior leadership requested with the least amount of effort?

  1. A. Use Amazon QuickSight with Amazon Athena as the data source. Use heat maps as the visual type.
  2. B. Use Amazon QuickSight with Amazon S3 as the data source. Use heat maps as the visual type.
  3. C. Use Amazon QuickSight with Amazon Athena as the data source. Use pivot tables as the visual type.
  4. D. Use Amazon QuickSight with Amazon S3 as the data source. Use pivot tables as the visual type.

Correct Answer: A
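The aggregation behind the requested visual is a pivot: total sales per (country, sub-organization), then the top sub-organization per country. QuickSight computes this over the Athena source; the sketch below reproduces the logic in plain Python on made-up sample rows, just to show what "strongest performer in each country" means as a computation.

```python
from collections import defaultdict

def strongest_by_country(rows):
    """Sum sales per (country, sub-org), then pick the top sub-org
    in each country. Row shape and field names are illustrative."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["country"], row["sub_org"])] += row["sales"]
    best = {}
    for (country, sub_org), amount in totals.items():
        if country not in best or amount > best[country][1]:
            best[country] = (sub_org, amount)
    return {country: sub_org for country, (sub_org, _) in best.items()}
```

A heat map of country vs. sub-organization colors exactly these totals, which is why it answers the "quickly identify" requirement at a glance.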

QUESTION 5

A gaming company is collecting clickstream data into multiple Amazon Kinesis data streams. The company uses Amazon Kinesis Data Firehose delivery streams to store the data in JSON format in Amazon S3. Data scientists use Amazon Athena to query the most recent data and derive business insights. The company wants to reduce its Athena costs without having to recreate the data pipeline. The company prefers a solution that will require less management effort.
Which set of actions can the data scientists take immediately to reduce costs?

  1. A. Change the Kinesis Data Firehose output format to Apache Parquet. Provide a custom S3 object YYYYMMDD prefix expression and specify a large buffer size. For the existing data, run an AWS Glue ETL job to combine and convert small JSON files to large Parquet files and add the YYYYMMDD prefix. Use ALTER TABLE ADD PARTITION to reflect the partition on the existing Athena table.
  2. B. Create an Apache Spark job that combines and converts JSON files to Apache Parquet files. Launch an Amazon EMR ephemeral cluster daily to run the Spark job to create new Parquet files in a different S3 location. Use ALTER TABLE SET LOCATION to reflect the new S3 location on the existing Athena table.
  3. C. Create a Kinesis data stream as a delivery target for Kinesis Data Firehose. Run Apache Flink on Amazon Kinesis Data Analytics on the stream to read the streaming data, aggregate it, and save it to Amazon S3 in Apache Parquet format with a custom S3 object YYYYMMDD prefix. Use ALTER TABLE ADD PARTITION to reflect the partition on the existing Athena table.
  4. D. Integrate an AWS Lambda function with Kinesis Data Firehose to convert source records to Apache Parquet and write them to Amazon S3. In parallel, run an AWS Glue ETL job to combine and convert existing JSON files to large Parquet files. Create a custom S3 object YYYYMMDD prefix. Use ALTER TABLE ADD PARTITION to reflect the partition on the existing Athena table.

Correct Answer: D
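Several options hinge on the YYYYMMDD prefix and the matching ALTER TABLE ADD PARTITION statement, so here is a minimal sketch of how the two fit together. The bucket name, table name, and partition key (`dt`) are illustrative assumptions, not values from the question.

```python
from datetime import date

def partition_ddl(table, s3_bucket, day):
    """Build the daily YYYYMMDD S3 prefix and the Athena DDL that
    registers it as a partition on the existing table."""
    prefix = day.strftime("%Y%m%d")
    location = f"s3://{s3_bucket}/logs/{prefix}/"
    ddl = (
        f"ALTER TABLE {table} ADD IF NOT EXISTS "
        f"PARTITION (dt = '{prefix}') LOCATION '{location}'"
    )
    return prefix, ddl
```

Partitioning by day means an Athena query over "the most recent data" scans only the newest prefixes, and Parquet's columnar layout further cuts the bytes scanned, which is where the cost reduction comes from.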

