Free AWS-Certified-Solutions-Architect-Professional Exam Braindumps

Pass your Amazon AWS Certified Solutions Architect Professional exam with these free Questions and Answers

QUESTION 26

- (Exam Topic 2)
A new startup is running a serverless application using AWS Lambda as the primary source of compute. New versions of the application must be made available to a subset of users before deploying changes to all users. Developers should also have the ability to stop the deployment and have access to an easy rollback mechanism. A solutions architect decides to use AWS CodeDeploy to deploy changes when a new version is available.
Which CodeDeploy configuration should the solutions architect use?

  1. A. A blue/green deployment
  2. B. A linear deployment
  3. C. A canary deployment
  4. D. An all-at-once deployment

Correct Answer: C
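
For context, here is a minimal boto3 sketch of kicking off such a canary deployment. The application, deployment group, function, and version names below are hypothetical; `CodeDeployDefault.LambdaCanary10Percent5Minutes` is one of CodeDeploy's predefined Lambda canary configurations, and `stop_deployment` with auto-rollback covers the stop/rollback requirement.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# AppSpec telling CodeDeploy which Lambda alias to shift traffic on
# (function, alias, and version values are hypothetical).
appspec = """
version: 0.0
Resources:
  - OrdersFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: orders-function
        Alias: live
        CurrentVersion: "1"
        TargetVersion: "2"
"""

# Canary: 10% of traffic goes to the new version for 5 minutes, then 100%.
deployment = codedeploy.create_deployment(
    applicationName="orders-app",       # hypothetical
    deploymentGroupName="orders-dg",    # hypothetical
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": appspec},
    },
)

# Developers can halt the traffic shift and roll back at any time:
# codedeploy.stop_deployment(
#     deploymentId=deployment["deploymentId"], autoRollbackEnabled=True
# )
```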

QUESTION 27

- (Exam Topic 1)
An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a solutions architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?

  1. A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
  2. B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
  3. C. Receive the orders using the AWS Step Functions program and trigger an Amazon ECS container to process them.
  4. D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.

Correct Answer: B
Q: How does Amazon Kinesis Data Streams differ from Amazon SQS?
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).
https://aws.amazon.com/kinesis/data-streams/faqs/
https://aws.amazon.com/blogs/big-data/unite-real-time-and-batch-analytics-using-the-big-data-lambda-architect
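
As a sketch of how option B can be wired up (the function and table names here are hypothetical, not from the question), an SQS event source mapping invokes a Lambda handler that writes each order to DynamoDB:

```python
import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table name; the question only says "an Amazon DynamoDB table".
table = dynamodb.Table(os.environ.get("ORDERS_TABLE", "Orders"))


def handler(event, context):
    """Invoked by the SQS event source mapping; each Record carries one order."""
    for record in event["Records"]:
        order = json.loads(record["body"])  # order payload placed on the queue
        table.put_item(Item=order)
```

Because Lambda polls the queue and scales its concurrency with queue depth, the sporadic, campaign-driven traffic is absorbed without provisioning servers, and SQS buffers any burst the function has not yet processed.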

QUESTION 28

- (Exam Topic 2)
A company is finalizing the architecture for its backup solution for applications running on AWS. All of the applications run on AWS and use at least two Availability Zones in each tier.
Company policy requires IT to durably store nightly backups of all its data in at least two locations: production and disaster recovery. The locations must be in different geographic regions. The company also needs the backup to be available for immediate restore at the production data center, and within 24 hours at the disaster recovery location. Backup processes must be fully automated.
What is the MOST cost-effective backup solution that will meet all requirements?

  1. A. Back up all the data to a large Amazon EBS volume attached to the backup media server in the production region. Run automated scripts to snapshot these volumes nightly and copy these snapshots to the disaster recovery region.
  2. B. Back up all the data to Amazon S3 in the disaster recovery region. Use a lifecycle policy to move this data to Amazon Glacier in the production region immediately. Only the data is replicated; remove the data from the S3 bucket in the disaster recovery region.
  3. C. Back up all the data to Amazon Glacier in the production region. Set up cross-region replication of this data to Amazon Glacier in the disaster recovery region. Set up a lifecycle policy to delete any data older than 60 days.
  4. D. Back up all the data to Amazon S3 in the production region. Set up cross-region replication of this S3 bucket to another region and set up a lifecycle policy in the second region to immediately move this data to Amazon Glacier.

Correct Answer: D
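
A minimal boto3 sketch of option D's two moving parts, assuming hypothetical bucket names, role ARN, and DR region (cross-region replication also requires versioning to be enabled on both buckets):

```python
import boto3

s3 = boto3.client("s3")

# Replicate every object from the production bucket to the DR bucket
# (bucket names and role ARN are hypothetical).
s3.put_bucket_replication(
    Bucket="backups-production",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-nightly-backups",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::backups-dr"},
            }
        ],
    },
)

# In the DR region: transition replicated objects to Glacier immediately,
# keeping the DR copy durable but cheap.
s3_dr = boto3.client("s3", region_name="us-west-2")  # hypothetical DR region
s3_dr.put_bucket_lifecycle_configuration(
    Bucket="backups-dr",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-glacier-immediately",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```

This keeps the production copy in S3 for immediate restores while the Glacier copy in the second region satisfies the 24-hour DR restore window at lower storage cost.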

QUESTION 29

- (Exam Topic 1)
A solutions architect is designing the data storage and retrieval architecture for a new application that a company will be launching soon. The application is designed to ingest millions of small records per minute from devices all around the world. Each record is less than 4 KB in size and needs to be stored in a durable location where it can be retrieved with low latency. The data is ephemeral and the company is required to store the data for 120 days only, after which the data can be deleted.
The solutions architect calculates that, during the course of a year, the storage requirements would be about 10-15 TB.
Which storage strategy is the MOST cost-effective and meets the design requirements?

  1. A. Design the application to store each incoming record as a single .csv file in an Amazon S3 bucket to allow for indexed retrieval. Configure a lifecycle policy to delete data older than 120 days.
  2. B. Design the application to store each incoming record in an Amazon DynamoDB table properly configured for the scale. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 120 days.
  3. C. Design the application to store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 120 days.
  4. D. Design the application to batch incoming records before writing them to an Amazon S3 bucket. Update the metadata for the object to contain the list of records in the batch, and use the Amazon S3 metadata search feature to retrieve the data. Configure a lifecycle policy to delete the data after 120 days.

Correct Answer: B
DynamoDB with TTL is the cheapest option for a sustained throughput of small items and is well suited to fast, low-latency retrievals. S3 is cheaper for storage alone, but per-request charges make millions of small writes per minute expensive. RDS is not designed for this use case.
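
For illustration, enabling TTL and stamping each record with an expiry 120 days out might look like this (the table and attribute names are hypothetical):

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiry epoch timestamp.
dynamodb.update_time_to_live(
    TableName="DeviceRecords",  # hypothetical
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each item carries an epoch timestamp 120 days in the future; DynamoDB
# deletes expired items in the background at no extra request cost.
now = int(time.time())
dynamodb.put_item(
    TableName="DeviceRecords",
    Item={
        "device_id": {"S": "sensor-001"},       # hypothetical partition key
        "recorded_at": {"N": str(now)},
        "payload": {"S": "..."},
        "expires_at": {"N": str(now + 120 * 24 * 60 * 60)},
    },
)
```

The TTL deletions happen asynchronously and are free, so no nightly sweep job is needed.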

QUESTION 30

- (Exam Topic 1)
A finance company hosts a data lake in Amazon S3. The company receives financial data records over SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2 instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by a cron job that runs on the same instance. The SFTP server is reachable at the DNS name sftp.example.com, which is managed in Amazon Route 53.
What should a solutions architect do to improve the reliability and scalability of the SFTP solution?

  1. A. Move the EC2 instance into an Auto Scaling group. Place the EC2 instance behind an Application Load Balancer (ALB). Update the DNS record sftp.example.com in Route 53 to point to the ALB.
  2. B. Migrate the SFTP server to AWS Transfer for SFTP. Update the DNS record sftp.example.com in Route 53 to point to the server endpoint hostname.
  3. C. Migrate the SFTP server to a file gateway in AWS Storage Gateway. Update the DNS record sftp.example.com in Route 53 to point to the file gateway endpoint.
  4. D. Place the EC2 instance behind a Network Load Balancer (NLB). Update the DNS record sftp.example.com in Route 53 to point to the NLB.

Correct Answer: B
https://aws.amazon.com/aws-transfer-family/faqs/
https://docs.aws.amazon.com/transfer/latest/userguide/what-is-aws-transfer-family.html
https://aws.amazon.com/about-aws/whats-new/2018/11/aws-transfer-for-sftp-fully-managed-sftp-for-s3/?nc1=h_
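
A hedged sketch of option B, assuming a hypothetical hosted zone ID and region: AWS Transfer for SFTP provisions a managed, scalable endpoint backed by Amazon S3, and the existing Route 53 record simply points at its hostname.

```python
import boto3

region = "us-east-1"  # hypothetical region
transfer = boto3.client("transfer", region_name=region)

# Create a managed SFTP endpoint with service-managed users.
server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
)
endpoint = f"{server['ServerId']}.server.transfer.{region}.amazonaws.com"

# Point sftp.example.com at the managed endpoint
# (the hosted zone ID is hypothetical).
route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "sftp.example.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": endpoint}],
                },
            }
        ]
    },
)
```

This removes the single EC2 instance as a point of failure, and files land directly in S3, so the cron job that moved uploads into the data lake is no longer needed.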
