DAS-C01 | Tested Amazon-Web-Services DAS-C01 Training Tools Online

All that matters here is passing the Amazon Web Services DAS-C01 exam, and all you need is a high score on the DAS-C01 AWS Certified Data Analytics - Specialty exam. The only thing you need to do is download the Certleader DAS-C01 exam study guides now. We will not let you down, and our money-back guarantee backs that up.

Online Amazon Web Services DAS-C01 free dumps demo below:

NEW QUESTION 1
A large financial company is running its ETL process. Part of this process is to move data from Amazon S3 into an Amazon Redshift cluster. The company wants to use the most cost-efficient method to load the dataset into Amazon Redshift.
Which combination of steps would meet these requirements? (Choose two.)

  • A. Use the COPY command with the manifest file to load data into Amazon Redshift.
  • B. Use S3DistCp to load files into Amazon Redshift.
  • C. Use temporary staging tables during the loading process.
  • D. Use the UNLOAD command to upload data into Amazon Redshift.
  • E. Use Amazon Redshift Spectrum to query files from Amazon S3.

Answer: AC
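
For reference, a minimal sketch of the COPY-with-manifest step from option A, loading into a staging table as in option C, issued through the Redshift Data API with boto3. The cluster, database, bucket, table, and IAM role names are placeholders.

```python
# Hedged sketch: COPY from a manifest file into a temporary staging table
# via the Redshift Data API (all identifiers below are placeholders).
import boto3

redshift_data = boto3.client("redshift-data")

copy_sql = """
    COPY sales_staging
    FROM 's3://example-etl-bucket/manifests/sales.manifest'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
    MANIFEST
    FORMAT AS CSV;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",   # placeholder cluster name
    Database="analytics",                  # placeholder database
    DbUser="etl_user",                     # placeholder database user
    Sql=copy_sql,
)
print(response["Id"])  # statement ID to poll with describe_statement
```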

NEW QUESTION 2
A company has an encrypted Amazon Redshift cluster. The company recently enabled Amazon Redshift audit logs and needs to ensure that the audit logs are also encrypted at rest. The logs are retained for 1 year. The auditor queries the logs once a month.
What is the MOST cost-effective way to meet these requirements?

  • A. Encrypt the Amazon S3 bucket where the logs are stored by using AWS Key Management Service (AWS KMS). Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis. Query the data as required.
  • B. Disable encryption on the Amazon Redshift cluster, configure audit logging, and encrypt the Amazon Redshift cluster. Use Amazon Redshift Spectrum to query the data as required.
  • C. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption. Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis. Query the data as required.
  • D. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption. Use Amazon Redshift Spectrum to query the data as required.

Answer: A
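
Whichever option is chosen, the mechanics come down to setting default encryption on the log bucket and pointing Redshift audit logging at it. A hedged sketch using SSE-S3 (AES-256) default bucket encryption as described in options C and D; bucket and cluster names are placeholders.

```python
# Hedged sketch: default AES-256 (SSE-S3) encryption on the audit-log bucket
# plus Redshift audit logging delivered into it (names are placeholders).
import boto3

s3 = boto3.client("s3")
redshift = boto3.client("redshift")

s3.put_bucket_encryption(
    Bucket="example-redshift-audit-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

redshift.enable_logging(
    ClusterIdentifier="example-cluster",
    BucketName="example-redshift-audit-logs",
    S3KeyPrefix="audit/",
)
```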

NEW QUESTION 3
An online retailer is rebuilding its inventory management system and inventory reordering system to automatically reorder products by using Amazon Kinesis Data Streams. The inventory management system uses the Kinesis Producer Library (KPL) to publish data to a stream. The inventory reordering system uses the Kinesis Client Library (KCL) to consume data from the stream. The stream has been configured to scale as needed. Just before production deployment, the retailer discovers that the inventory reordering system is receiving duplicated data.
Which factors could be causing the duplicated data? (Choose two.)

  • A. The producer has a network-related timeout.
  • B. The stream’s value for the IteratorAgeMilliseconds metric is too high.
  • C. There was a change in the number of shards, record processors, or both.
  • D. The AggregationEnabled configuration property was set to true.
  • E. The max_records configuration property was set to a number that is too high.

Answer: BD
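
Whatever the root cause, Kinesis delivers records at least once, so the reordering consumer is normally written to be idempotent. A minimal illustrative sketch that deduplicates on sequence numbers; the in-memory set and the reorder handler are placeholders (a production system would track processed sequence numbers in a durable store such as DynamoDB).

```python
# Minimal idempotent-consumer sketch: skip records whose sequence number has
# already been processed. The set below is for illustration only.
seen_sequence_numbers = set()

def process_records(records):
    """records: iterable of dicts shaped like KCL/GetRecords output."""
    for record in records:
        seq = record["SequenceNumber"]
        if seq in seen_sequence_numbers:
            continue  # duplicate delivery, skip the reorder logic
        seen_sequence_numbers.add(seq)
        handle_reorder(record["Data"])

def handle_reorder(payload):
    # placeholder for the inventory reordering business logic
    print("reorder request:", payload)
```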

NEW QUESTION 4
A technology company is creating a dashboard that will visualize and analyze time-sensitive data. The data will come in through Amazon Kinesis Data Firehose with the buffer interval set to 60 seconds. The dashboard must support near-real-time data.
Which visualization solution will meet these requirements?

  • A. Select Amazon Elasticsearch Service (Amazon ES) as the endpoint for Kinesis Data Firehose. Set up a Kibana dashboard using the data in Amazon ES with the desired analyses and visualizations.
  • B. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Read data into an Amazon SageMaker Jupyter notebook and carry out the desired analyses and visualizations.
  • C. Select Amazon Redshift as the endpoint for Kinesis Data Firehose. Connect Amazon QuickSight with SPICE to Amazon Redshift to create the desired analyses and visualizations.
  • D. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Use AWS Glue to catalog the data and Amazon Athena to query it. Connect Amazon QuickSight with SPICE to Athena to create the desired analyses and visualizations.

Answer: A
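
A hedged sketch of option A's ingestion side: a Firehose delivery stream that buffers for 60 seconds and delivers into an Amazon ES domain. All ARNs, the IAM role, the index name, and the backup bucket are placeholders.

```python
# Hedged sketch: Firehose delivery stream with a 60-second buffer interval and
# an Amazon ES destination (ARNs, role, index, and bucket are placeholders).
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="example-dashboard-stream",
    DeliveryStreamType="DirectPut",
    ElasticsearchDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/ExampleFirehoseRole",
        "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/example-domain",
        "IndexName": "dashboard-metrics",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
        "S3BackupMode": "FailedDocumentsOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/ExampleFirehoseRole",
            "BucketARN": "arn:aws:s3:::example-firehose-backup",
        },
    },
)
```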

NEW QUESTION 5
A company launched a service that produces millions of messages every day and uses Amazon Kinesis Data Streams as the streaming service.
The company uses the Kinesis SDK to write data to Kinesis Data Streams. A few months after launch, a data analyst found that write performance is significantly reduced. The data analyst investigated the metrics and determined that Kinesis is throttling the write requests. The data analyst wants to address this issue without significant changes to the architecture.
Which actions should the data analyst take to resolve this issue? (Choose two.)

  • A. Increase the Kinesis Data Streams retention period to reduce throttling.
  • B. Replace the Kinesis API-based data ingestion mechanism with Kinesis Agent.
  • C. Increase the number of shards in the stream using the UpdateShardCount API.
  • D. Choose partition keys in a way that results in a uniform record distribution across shards.
  • E. Customize the application code to include retry logic to improve performance.

Answer: CD

Explanation:
https://aws.amazon.com/blogs/big-data/under-the-hood-scaling-your-kinesis-data-streams/
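
Option C maps to a single API call, while option D is handled in the producer's partition-key logic. A hedged sketch of the resharding call; the stream name and target shard count are placeholders to be sized from throughput metrics.

```python
# Hedged sketch: increase the shard count of an existing stream (option C).
import boto3

kinesis = boto3.client("kinesis")

kinesis.update_shard_count(
    StreamName="example-message-stream",
    TargetShardCount=64,            # placeholder; derive from observed throughput
    ScalingType="UNIFORM_SCALING",  # currently the only supported scaling type
)
```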

NEW QUESTION 6
A banking company is currently using an Amazon Redshift cluster with dense storage (DS) nodes to store sensitive data. An audit found that the cluster is unencrypted. Compliance requirements state that a database with sensitive data must be encrypted through a hardware security module (HSM) with automated key rotation.
Which combination of steps is required to achieve compliance? (Choose two.)

  • A. Set up a trusted connection with HSM using a client and server certificate with automatic key rotation.
  • B. Modify the cluster with an HSM encryption option and automatic key rotation.
  • C. Create a new HSM-encrypted Amazon Redshift cluster and migrate the data to the new cluster.
  • D. Enable HSM with key rotation through the AWS CLI.
  • E. Enable Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) encryption in the HSM.

Answer: BD
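
Several options revolve around associating the cluster with an HSM-backed key. For reference, a hedged sketch of the create_cluster parameters that tie a new Amazon Redshift cluster to an HSM; all identifiers and credentials are placeholders, and the HSM client certificate and HSM configuration must already be registered with Amazon Redshift.

```python
# Hedged sketch: launching an HSM-encrypted cluster (placeholders throughout).
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="example-hsm-cluster",
    NodeType="ds2.xlarge",
    NumberOfNodes=4,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",              # placeholder credential
    Encrypted=True,
    HsmClientCertificateIdentifier="example-hsm-client-cert",
    HsmConfigurationIdentifier="example-hsm-config",
)
```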

NEW QUESTION 7
A university intends to use Amazon Kinesis Data Firehose to collect JSON-formatted batches of water quality readings in Amazon S3. The readings are from 50 sensors scattered across a local lake. Students will query the stored data using Amazon Athena to observe changes in a captured metric over time, such as water temperature or acidity. Interest has grown in the study, prompting the university to reconsider how data will be stored.
Which data format and partitioning choices will MOST significantly reduce costs? (Choose two.)

  • A. Store the data in Apache Avro format using Snappy compression.
  • B. Partition the data by year, month, and day.
  • C. Store the data in Apache ORC format using no compression.
  • D. Store the data in Apache Parquet format using Snappy compression.
  • E. Partition the data by sensor, year, month, and day.

Answer: CD
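
To illustrate the columnar-format-plus-partitioning idea the options describe, here is a hedged sketch that rewrites the raw JSON readings as a date-partitioned, Snappy-compressed Parquet table with an Athena CTAS statement. The database, table, column, and bucket names are placeholders, and CTAS property names can vary by Athena engine version.

```python
# Hedged sketch: Athena CTAS producing partitioned, compressed Parquet
# (database, tables, columns, and buckets are placeholders).
import boto3

athena = boto3.client("athena")

ctas = """
    CREATE TABLE lake_readings_parquet
    WITH (
        format = 'PARQUET',
        write_compression = 'SNAPPY',
        external_location = 's3://example-lake-data/parquet/',
        partitioned_by = ARRAY['year', 'month', 'day']
    ) AS
    SELECT sensor_id, temperature, ph, year, month, day
    FROM lake_readings_raw;
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "example_lake_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```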

NEW QUESTION 8
A company wants to research user turnover by analyzing the past 3 months of user activities. With millions of users, 1.5 TB of uncompressed data is generated each day. A 30-node Amazon Redshift cluster with 2.56 TB of solid state drive (SSD) storage for each node is required to meet the query performance goals.
The company wants to run an additional analysis on a year’s worth of historical data to examine trends indicating which features are most popular. This analysis will be done once a week.
What is the MOST cost-effective solution?

  • A. Increase the size of the Amazon Redshift cluster to 120 nodes so it has enough storage capacity to hold 1 year of data. Then use Amazon Redshift for the additional analysis.
  • B. Keep the data from the last 90 days in Amazon Redshift. Move data older than 90 days to Amazon S3 and store it in Apache Parquet format partitioned by date. Then use Amazon Redshift Spectrum for the additional analysis.
  • C. Keep the data from the last 90 days in Amazon Redshift. Move data older than 90 days to Amazon S3 and store it in Apache Parquet format partitioned by date. Then provision a persistent Amazon EMR cluster and use Apache Presto for the additional analysis.
  • D. Resize the cluster node type to the dense storage node type (DS2) for an additional 16 TB of storage capacity on each individual node in the Amazon Redshift cluster. Then use Amazon Redshift for the additional analysis.

Answer: B
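
A hedged sketch of the offload step in option B: UNLOAD rows older than 90 days to Amazon S3 as date-partitioned Parquet, issued through the Redshift Data API. The cluster, table, column, bucket, and IAM role names are placeholders.

```python
# Hedged sketch: archive cold rows as date-partitioned Parquet for Spectrum
# (all identifiers below are placeholders).
import boto3

redshift_data = boto3.client("redshift-data")

unload_sql = """
    UNLOAD ('SELECT * FROM user_activity WHERE activity_date < DATEADD(day, -90, CURRENT_DATE)')
    TO 's3://example-analytics-archive/user_activity/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftUnloadRole'
    FORMAT AS PARQUET
    PARTITION BY (activity_date);
"""

redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="analytics_user",
    Sql=unload_sql,
)
```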

NEW QUESTION 9
A company has a data warehouse in Amazon Redshift that is approximately 500 TB in size. New data is imported every few hours and read-only queries are run throughout the day and evening. There is a particularly heavy load with no writes for several hours each morning on business days. During those hours, some queries are queued and take a long time to execute. The company needs to optimize query execution and avoid any downtime.
What is the MOST cost-effective solution?

  • A. Enable concurrency scaling in the workload management (WLM) queue.
  • B. Add more nodes using the AWS Management Console during peak hours. Set the distribution style to ALL.
  • C. Use elastic resize to quickly add nodes during peak times. Remove the nodes when they are not needed.
  • D. Use a snapshot, restore, and resize operation. Switch to the new target cluster.

Answer: A

Explanation:
https://docs.aws.amazon.com/redshift/latest/dg/cm-c-implementing-workload-management.html
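
A hedged sketch of option A: enabling concurrency scaling on a WLM queue by updating the wlm_json_configuration parameter. The parameter group name and queue settings are placeholders.

```python
# Hedged sketch: set concurrency_scaling to "auto" for a WLM queue
# (parameter group and queue values are placeholders).
import boto3
import json

redshift = boto3.client("redshift")

wlm_config = [
    {
        "query_group": [],
        "user_group": [],
        "query_concurrency": 5,
        "concurrency_scaling": "auto",  # route eligible queued queries to scaling clusters
    }
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="example-wlm-params",
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm_config),
    }],
)
```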

NEW QUESTION 10
A US-based sneaker retail company launched its global website. All the transaction data is stored in Amazon RDS and curated historic transaction data is stored in Amazon Redshift in the us-east-1 Region. The business intelligence (BI) team wants to enhance the user experience by providing a dashboard for sneaker trends.
The BI team decides to use Amazon QuickSight to render the website dashboards. During development, a team in Japan provisioned Amazon QuickSight in ap-northeast-1. The team is having difficulty connecting Amazon QuickSight from ap-northeast-1 to Amazon Redshift in us-east-1.
Which solution will solve this issue and meet the requirements?

  • A. In the Amazon Redshift console, choose to configure cross-Region snapshots and set the destination Region as ap-northeast-1. Restore the Amazon Redshift Cluster from the snapshot and connect to Amazon QuickSight launched in ap-northeast-1.
  • B. Create a VPC endpoint from the Amazon QuickSight VPC to the Amazon Redshift VPC so Amazon QuickSight can access data from Amazon Redshift.
  • C. Create an Amazon Redshift endpoint connection string with Region information in the string and use this connection string in Amazon QuickSight to connect to Amazon Redshift.
  • D. Create a new security group for Amazon Redshift in us-east-1 with an inbound rule authorizing access from the appropriate IP address range for the Amazon QuickSight servers in ap-northeast-1.

Answer: B

NEW QUESTION 11
An airline has been collecting metrics on flight activities for analytics. A recently completed proof of concept demonstrates how the company provides insights to data analysts to improve on-time departures. The proof of concept used objects in Amazon S3, which contained the metrics in .csv format, and used Amazon Athena for querying the data. As the amount of data increases, the data analyst wants to optimize the storage solution to improve query performance.
Which options should the data analyst use to improve performance as the data lake grows? (Choose three.)

  • A. Add a randomized string to the beginning of the keys in S3 to get more throughput across partitions.
  • B. Use an S3 bucket in the same account as Athena.
  • C. Compress the objects to reduce the data transfer I/O.
  • D. Use an S3 bucket in the same Region as Athena.
  • E. Preprocess the .csv data to JSON to reduce I/O by fetching only the document keys needed by the query.
  • F. Preprocess the .csv data to Apache Parquet to reduce I/O by fetching only the data blocks needed for predicates.

Answer: CDF

Explanation:
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
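
An illustrative sketch of options C and F on a single object: rewriting a .csv file as Snappy-compressed Parquet with pandas. The bucket and key names are placeholders, reading and writing s3:// paths assumes s3fs and pyarrow are installed, and at scale the same conversion would typically run as an AWS Glue job or an Athena CTAS.

```python
# Illustrative sketch: CSV -> Snappy-compressed Parquet (paths are placeholders).
import pandas as pd

df = pd.read_csv("s3://example-flight-metrics/raw/2023-05-01.csv")
df.to_parquet(
    "s3://example-flight-metrics/parquet/2023-05-01.parquet",
    compression="snappy",  # columnar + compressed cuts the bytes Athena scans
)
```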

NEW QUESTION 12
A bank operates in a regulated environment. The compliance requirements for the country in which the bank operates say that customer data for each state should only be accessible by the bank’s employees located in the same state. Bank employees in one state should NOT be able to access data for customers who have provided a home address in a different state.
The bank’s marketing team has hired a data analyst to gather insights from customer data for a new campaign being launched in certain states. Currently, data linking each customer account to its home state is stored in a tabular .csv file within a single Amazon S3 folder in a private S3 bucket. The total size of the S3 folder is 2 GB uncompressed. Due to the country’s compliance requirements, the marketing team is not able to access this folder.
The data analyst is responsible for ensuring that the marketing team gets one-time access to customer data for their campaign analytics project, while being subject to all the compliance requirements and controls.
Which solution should the data analyst implement to meet the desired requirements with the LEAST amount of setup effort?

  • A. Re-arrange data in Amazon S3 to store customer data about each state in a different S3 folder within the same bucket. Set up S3 bucket policies to provide marketing employees with appropriate data access under compliance controls. Delete the bucket policies after the project.
  • B. Load tabular data from Amazon S3 to an Amazon EMR cluster using s3DistCp. Implement a custom Hadoop-based row-level security solution on the Hadoop Distributed File System (HDFS) to provide marketing employees with appropriate data access under compliance controls. Terminate the EMR cluster after the project.
  • C. Load tabular data from Amazon S3 to Amazon Redshift with the COPY command. Use the built-in row-level security feature in Amazon Redshift to provide marketing employees with appropriate data access under compliance controls. Delete the Amazon Redshift tables after the project.
  • D. Load tabular data from Amazon S3 to Amazon QuickSight Enterprise edition by directly importing it as a data source. Use the built-in row-level security feature in Amazon QuickSight to provide marketing employees with appropriate data access under compliance controls. Delete Amazon QuickSight data sources after the project is complete.

Answer: C

NEW QUESTION 13
A company uses the Amazon Kinesis SDK to write data to Kinesis Data Streams. Compliance requirements state that the data must be encrypted at rest using a key that can be rotated. The company wants to meet this encryption requirement with minimal coding effort.
How can these requirements be met?

  • A. Create a customer master key (CMK) in AWS KMS. Assign the CMK an alias. Use the AWS Encryption SDK, providing it with the key alias to encrypt and decrypt the data.
  • B. Create a customer master key (CMK) in AWS KMS. Assign the CMK an alias. Enable server-side encryption on the Kinesis data stream using the CMK alias as the KMS master key.
  • C. Create a customer master key (CMK) in AWS KMS. Create an AWS Lambda function to encrypt and decrypt the data. Set the KMS key ID in the function’s environment variables.
  • D. Enable server-side encryption on the Kinesis data stream using the default KMS key for Kinesis Data Streams.

Answer: B
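
A hedged sketch of option B: create a CMK, give it an alias, turn on key rotation, and enable server-side encryption on the stream with that alias. The alias and stream names are placeholders, and no producer or consumer code changes are required.

```python
# Hedged sketch: KMS CMK with alias and rotation, then stream-level SSE
# (alias and stream names are placeholders).
import boto3

kms = boto3.client("kms")
kinesis = boto3.client("kinesis")

key_id = kms.create_key(Description="Kinesis stream encryption key")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/example-stream-key", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)  # satisfies the key-rotation requirement

kinesis.start_stream_encryption(
    StreamName="example-data-stream",
    EncryptionType="KMS",
    KeyId="alias/example-stream-key",
)
```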

NEW QUESTION 14
A data analyst is designing an Amazon QuickSight dashboard using centralized sales data that resides in Amazon Redshift. The dashboard must be restricted so that a salesperson in Sydney, Australia, can see only the Australia view and that a salesperson in New York can see only United States (US) data.
What should the data analyst do to ensure the appropriate data security is in place?

  • A. Place the data sources for Australia and the US into separate SPICE capacity pools.
  • B. Set up an Amazon Redshift VPC security group for Australia and the US.
  • C. Deploy QuickSight Enterprise edition to implement row-level security (RLS) to the sales table.
  • D. Deploy QuickSight Enterprise edition and set up different VPC security groups for Australia and the US.

Answer: D

NEW QUESTION 15
An online retail company with millions of users around the globe wants to improve its ecommerce analytics capabilities. Currently, clickstream data is uploaded directly to Amazon S3 as compressed files. Several times each day, an application running on Amazon EC2 processes the data and makes search options and reports available for visualization by editors and marketers. The company wants to make website clicks and aggregated data available to editors and marketers in minutes to enable them to connect with users more effectively.
Which options will help meet these requirements in the MOST efficient way? (Choose two.)

  • A. Use Amazon Kinesis Data Firehose to upload compressed and batched clickstream records to Amazon Elasticsearch Service.
  • B. Upload clickstream records to Amazon S3 as compressed files. Then use AWS Lambda to send data to Amazon Elasticsearch Service from Amazon S3.
  • C. Use Amazon Elasticsearch Service deployed on Amazon EC2 to aggregate, filter, and process the data. Refresh content performance dashboards in near-real time.
  • D. Use Kibana to aggregate, filter, and visualize the data stored in Amazon Elasticsearch Service. Refresh content performance dashboards in near-real time.
  • E. Upload clickstream records from Amazon S3 to Amazon Kinesis Data Streams and use a Kinesis Data Streams consumer to send records to Amazon Elasticsearch Service.

Answer: AD
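
A hedged sketch of the producer side of option A: the website hands batched clickstream records to a Firehose delivery stream, which buffers and delivers them to Amazon ES for the Kibana dashboards in option D. The delivery stream name and record contents are placeholders.

```python
# Hedged sketch: batch clickstream records into Firehose (placeholder names/data).
import json
import boto3

firehose = boto3.client("firehose")

clicks = [
    {"user_id": 1, "page": "/sneakers", "ts": "2023-05-01T12:00:00Z"},
    {"user_id": 2, "page": "/cart", "ts": "2023-05-01T12:00:01Z"},
]

firehose.put_record_batch(
    DeliveryStreamName="example-clickstream",
    Records=[{"Data": (json.dumps(c) + "\n").encode("utf-8")} for c in clicks],
)
```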

NEW QUESTION 16
A data analyst is using AWS Glue to organize, cleanse, validate, and format a 200 GB dataset. The data analyst triggered the job to run with the Standard worker type. After 3 hours, the AWS Glue job status is still RUNNING. Logs from the job run show no error codes. The data analyst wants to improve the job execution time without overprovisioning.
Which actions should the data analyst take?

  • A. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the executor-cores job parameter.
  • B. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the maximum capacity job parameter.
  • C. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the spark.yarn.executor.memoryOverhead job parameter.
  • D. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the num-executors job parameter.

Answer: B
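
A hedged sketch of option B's remediation: run the job with CloudWatch job metrics enabled and a higher maximum capacity. The job name and DPU count are placeholders; MaxCapacity applies to jobs using the Standard worker type and cannot be combined with WorkerType/NumberOfWorkers.

```python
# Hedged sketch: enable job metrics and raise maximum capacity at run time
# (job name and DPU count are placeholders).
import boto3

glue = boto3.client("glue")

glue.start_job_run(
    JobName="example-cleanse-job",
    Arguments={"--enable-metrics": "true"},  # emit per-executor metrics for DPU sizing
    MaxCapacity=20.0,                        # raise from the 10-DPU default after profiling
)
```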

NEW QUESTION 17
A data analytics specialist is setting up workload management in manual mode for an Amazon Redshift environment. The data analytics specialist is defining query monitoring rules to manage system performance and user experience of an Amazon Redshift cluster.
Which elements must each query monitoring rule include?

  • A. A unique rule name, a query runtime condition, and an AWS Lambda function to resubmit any failed queries in off hours
  • B. A queue name, a unique rule name, and a predicate-based stop condition
  • C. A unique rule name, one to three predicates, and an action
  • D. A workload name, a unique rule name, and a query runtime-based condition

Answer: C
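
The keyed answer maps directly onto the query monitoring rule format inside wlm_json_configuration: each rule has a unique rule name, one to three predicates, and an action (log, hop, or abort). A hedged illustration follows; the rule name, metrics, and thresholds are placeholders.

```python
# Hedged illustration of one query monitoring rule inside a manual WLM queue
# definition (names and thresholds are placeholders).
import json

wlm_queue = {
    "query_group": [],
    "user_group": [],
    "query_concurrency": 5,
    "rules": [
        {
            "rule_name": "abort_long_scans",
            "predicate": [
                {"metric_name": "query_execution_time", "operator": ">", "value": 120},
                {"metric_name": "scan_row_count", "operator": ">", "value": 1000000000},
            ],
            "action": "abort",
        }
    ],
}

print(json.dumps([wlm_queue], indent=2))  # value for the wlm_json_configuration parameter
```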

NEW QUESTION 18
......

P.S. Certleader is now offering DAS-C01 dumps with a 100% pass guarantee! All DAS-C01 exam questions have been updated with correct answers: https://www.certleader.com/DAS-C01-dumps.html (130 New Questions)