Free DAS-C01 Exam Braindumps

Pass your AWS Certified Data Analytics - Specialty exam with these free Questions and Answers

QUESTION 16

A marketing company collects clickstream data. The company sends the data to Amazon Kinesis Data Firehose and stores the data in Amazon S3. The company wants to build a series of dashboards that will be used by hundreds of users across different departments. The company will use Amazon QuickSight to develop these dashboards. The company has limited resources and wants a solution that can scale and provide daily updates about clickstream activity.
Which combination of options will provide the MOST cost-effective solution? (Select TWO.)

A. Use Amazon Redshift to store and query the clickstream data.
B. Use QuickSight with a direct SQL query.
C. Use Amazon Athena to query the clickstream data in Amazon S3.
D. Use S3 analytics to query the clickstream data.
E. Use the QuickSight SPICE engine with a daily refresh.

Correct Answer: BD
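For readers less familiar with the ingestion side of this scenario, the sketch below shows how clickstream events might be sent to Kinesis Data Firehose, which then buffers and delivers them to S3. The stream name and event fields are hypothetical and not part of the question.

```python
import json
import boto3

# Hypothetical delivery stream name; Firehose buffers records and writes them to S3.
STREAM_NAME = "clickstream-to-s3"

firehose = boto3.client("firehose")

def send_click_event(user_id: str, page: str) -> None:
    """Send one JSON clickstream record to Kinesis Data Firehose."""
    event = {"user_id": user_id, "page": page}
    firehose.put_record(
        DeliveryStreamName=STREAM_NAME,
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

if __name__ == "__main__":
    send_click_event("user-123", "/pricing")
```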

QUESTION 17

An ecommerce company stores customer purchase data in Amazon RDS. The company wants a solution to store and analyze historical data. The most recent 6 months of data will be queried frequently for analytics workloads. This data is several terabytes in size. Once a month, historical data for the last 5 years must be accessible and will be joined with the more recent data. The company wants to optimize performance and cost.
Which storage solution will meet these requirements?

A. Create a read replica of the RDS database to store the most recent 6 months of data. Copy the historical data into Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3 and Amazon RDS. Run historical queries using Amazon Athena.
B. Use an ETL tool to incrementally load the most recent 6 months of data into an Amazon Redshift cluster. Run more frequent queries against this cluster. Create a read replica of the RDS database to run queries on the historical data.
C. Incrementally copy data from Amazon RDS to Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3. Use Amazon Athena to query the data.
D. Incrementally copy data from Amazon RDS to Amazon S3. Load and store the most recent 6 months of data in Amazon Redshift. Configure an Amazon Redshift Spectrum table to connect to all historical data.

Correct Answer: D
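Option D relies on Redshift Spectrum to join recent data held in the cluster with historical data in S3. As a rough illustration, the sketch below uses the Redshift Data API to create an external schema backed by the AWS Glue Data Catalog; the cluster, database, user, role, and schema names are placeholders, not values from the question.

```python
import boto3

# All identifiers below are placeholders for illustration only.
CLUSTER_ID = "analytics-cluster"
DATABASE = "dev"
DB_USER = "awsuser"
SPECTRUM_ROLE_ARN = "arn:aws:iam::123456789012:role/SpectrumRole"

# Map a Glue Data Catalog database (holding the historical data in S3) into
# Redshift as an external schema that can be joined with local tables.
CREATE_EXTERNAL_SCHEMA = f"""
CREATE EXTERNAL SCHEMA IF NOT EXISTS purchase_history
FROM DATA CATALOG
DATABASE 'purchase_history'
IAM_ROLE '{SPECTRUM_ROLE_ARN}';
"""

redshift_data = boto3.client("redshift-data")
response = redshift_data.execute_statement(
    ClusterIdentifier=CLUSTER_ID,
    Database=DATABASE,
    DbUser=DB_USER,
    Sql=CREATE_EXTERNAL_SCHEMA,
)
print("Statement submitted:", response["Id"])
```

Once the external schema exists, a single SQL query can join the recent data in local Redshift tables with the Spectrum tables that point at the historical data in S3.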

QUESTION 18

A marketing company has data in Salesforce, MySQL, and Amazon S3. The company wants to use data from these three locations and create mobile dashboards for its users. The company is unsure how it should create the dashboards and needs a solution with the least possible customization and coding.
Which solution meets these requirements?

A. Use Amazon Athena federated queries to join the data sources. Use Amazon QuickSight to generate the mobile dashboards.
B. Use AWS Lake Formation to migrate the data sources into Amazon S3. Use Amazon QuickSight to generate the mobile dashboards.
C. Use Amazon Redshift federated queries to join the data sources. Use Amazon QuickSight to generate the mobile dashboards.
D. Use Amazon QuickSight to connect to the data sources and generate the mobile dashboards.

Correct Answer: C
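All of the options end in QuickSight dashboards. The sketch below shows one way a QuickSight data source could be registered programmatically for the MySQL database; the account ID, identifiers, host, and credentials are hypothetical values used only for illustration.

```python
import boto3

# Hypothetical identifiers and connection details for illustration only.
ACCOUNT_ID = "123456789012"

quicksight = boto3.client("quicksight")
quicksight.create_data_source(
    AwsAccountId=ACCOUNT_ID,
    DataSourceId="marketing-mysql",
    Name="Marketing MySQL",
    Type="MYSQL",
    DataSourceParameters={
        "MySqlParameters": {
            "Host": "marketing-db.example.com",
            "Port": 3306,
            "Database": "marketing",
        }
    },
    Credentials={
        "CredentialPair": {"Username": "quicksight_ro", "Password": "example-password"}
    },
)
```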

QUESTION 19

A university intends to use Amazon Kinesis Data Firehose to collect JSON-formatted batches of water quality readings in Amazon S3. The readings are from 50 sensors scattered across a local lake. Students will query the stored data using Amazon Athena to observe changes in a captured metric over time, such as water temperature or acidity. Interest has grown in the study, prompting the university to reconsider how data will be stored.
Which data format and partitioning choices will MOST significantly reduce costs? (Choose two.)

A. Store the data in Apache Avro format using Snappy compression.
B. Partition the data by year, month, and day.
C. Store the data in Apache ORC format using no compression.
D. Store the data in Apache Parquet format using Snappy compression.
E. Partition the data by sensor, year, month, and day.

Correct Answer: CD
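To make the format and partitioning trade-off concrete, the sketch below writes sensor readings as Snappy-compressed Parquet partitioned by year, month, and day using pandas with the pyarrow engine. Columnar Parquet lets Athena scan only the columns a query touches, and date partitions let it skip entire S3 prefixes. The column names and output path are assumptions, not values from the question.

```python
import pandas as pd  # requires pyarrow for Parquet support

# Hypothetical sample of water quality readings.
readings = pd.DataFrame(
    {
        "sensor_id": ["s-01", "s-02"],
        "temperature_c": [14.2, 13.9],
        "ph": [7.1, 7.3],
        "year": [2023, 2023],
        "month": [6, 6],
        "day": [15, 15],
    }
)

# Write a partitioned, Snappy-compressed Parquet dataset.
readings.to_parquet(
    "water_quality/",  # local directory; an s3:// path also works with s3fs installed
    engine="pyarrow",
    compression="snappy",
    partition_cols=["year", "month", "day"],
)
```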

QUESTION 20

A large company has a central data lake to run analytics across different departments. Each department uses a separate AWS account and stores its data in an Amazon S3 bucket in that account. Each AWS account uses the AWS Glue Data Catalog as its data catalog. Data lake access requirements differ by role: associate analysts should have read-only access to their own department's data, while senior data analysts can access data in multiple departments, including their own, but only for a subset of columns.
Which solution achieves these required access patterns while minimizing costs and administrative tasks?

A. Consolidate all AWS accounts into one account. Create different S3 buckets for each department and move all the data from every account to the central data lake account. Migrate the individual data catalogs into a central data catalog and apply fine-grained permissions to give each user the required access to tables and databases in AWS Glue and Amazon S3.
B. Keep the account structure and the individual AWS Glue catalogs in each account. Add a central data lake account and use AWS Glue to catalog data from the various accounts. Configure cross-account access for AWS Glue crawlers to scan the data in each departmental S3 bucket to identify the schema and populate the catalog. Add the senior data analysts into the central account and apply highly detailed access controls in the Data Catalog and Amazon S3.
C. Set up an individual AWS account for the central data lake. Use AWS Lake Formation to catalog the cross-account locations. On each individual S3 bucket, modify the bucket policy to grant S3 permissions to the Lake Formation service-linked role. Use Lake Formation permissions to add fine-grained access controls to allow senior analysts to view specific tables and columns.
D. Set up an individual AWS account for the central data lake and configure a central S3 bucket. Use an AWS Lake Formation blueprint to move the data from the various buckets into the central S3 bucket. On each individual bucket, modify the bucket policy to grant S3 permissions to the Lake Formation service-linked role. Use Lake Formation permissions to add fine-grained access controls for both associate and senior analysts to view specific tables and columns.

Correct Answer: C
Lake Formation provides secure and granular access to data through a new grant/revoke permissions model that augments AWS Identity and Access Management (IAM) policies. Analysts and data scientists can use the full portfolio of AWS analytics and machine learning services, such as Amazon Athena, to access the data. The configured Lake Formation security policies help ensure that users can access only the data that they are authorized to access. Source: https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html
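As a rough illustration of the column-level grants the explanation refers to, the sketch below uses the Lake Formation API to grant a senior analyst SELECT access to only a subset of a table's columns. The role ARN, database, table, and column names are placeholders, not values from the question.

```python
import boto3

# Placeholder principal and catalog objects for illustration only.
SENIOR_ANALYST_ROLE_ARN = "arn:aws:iam::123456789012:role/SeniorDataAnalyst"

lakeformation = boto3.client("lakeformation")
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": SENIOR_ANALYST_ROLE_ARN},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_dept",
            "Name": "orders",
            # Senior analysts see only this subset of columns.
            "ColumnNames": ["order_id", "order_date", "region"],
        }
    },
    Permissions=["SELECT"],
)
```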

