Free AWS-Certified-DevOps-Engineer-Professional Exam Braindumps

Pass your Amazon AWS Certified DevOps Engineer Professional exam with these free Questions and Answers

Page 7 of 28
QUESTION 26

A company is adopting serverless computing and is migrating some of its existing applications to AWS Lambda. A DevOps engineer must come up with an automated deployment strategy using AWS CodePipeline that includes proper version control, branching strategies, and rollback methods.
Which combination of steps should the DevOps engineer follow when setting up the pipeline? (Select THREE)

  A. Use Amazon S3 as the source code repository.
  B. Use AWS CodeCommit as the source code repository.
  C. Use AWS CloudFormation to create an AWS Serverless Application Model (AWS SAM) template for deployment.
  D. Use AWS CodeBuild to create an AWS Serverless Application Model (AWS SAM) template for deployment.
  E. Use AWS CloudFormation to deploy the application.
  F. Use AWS CodeDeploy to deploy the application.

Correct Answer: ABC
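For background on the SAM-related options, a SAM template is an ordinary CloudFormation template carrying the `AWS::Serverless-2016-10-31` transform; the `AutoPublishAlias` and `DeploymentPreference` properties are what give a Lambda deployment version control and automatic rollback. A minimal sketch (the logical ID, handler path, and code URI are hypothetical):

```python
import json

# Minimal AWS SAM template expressed as JSON-compatible Python.
# The Transform is what lets CloudFormation expand the
# AWS::Serverless::Function shorthand into Lambda + IAM resources.
sam_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "ApiFunction": {  # hypothetical logical ID
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",   # hypothetical module.function
                "Runtime": "python3.12",
                "CodeUri": "src/",
                # Publishes a new Lambda version on every deploy:
                "AutoPublishAlias": "live",
                # CodeDeploy-managed traffic shifting with automatic rollback:
                "DeploymentPreference": {"Type": "Canary10Percent5Minutes"},
            },
        }
    },
}

print(json.dumps(sam_template, indent=2))
```

In a CodePipeline flow, a build stage typically packages this template and a CloudFormation deploy stage executes the resulting change set.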

QUESTION 27

A production account has a requirement that any Amazon EC2 instance that has been logged into manually must be terminated within 24 hours. All applications in the production account are using Auto Scaling groups with Amazon CloudWatch Logs agent configured.
How can this process be automated?

  A. Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Then create a CloudWatch Events rule to trigger a second AWS Lambda function once a day that will terminate all instances with this tag.
  B. Create a CloudWatch alarm that will trigger on the login event. Send the notification to an Amazon SNS topic that the Operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.
  C. Create a CloudWatch alarm that will trigger on the login event. Configure the alarm to send to an Amazon SQS queue. Use a group of worker instances to process messages from the queue, which then schedules the Amazon CloudWatch Events rule to trigger.
  D. Create a CloudWatch Logs subscription in an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create a CloudWatch Events rule to trigger a daily Lambda function that terminates all instances with this tag.

Correct Answer: D
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/cw-example-subscription-filters.html
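A Lambda function subscribed to CloudWatch Logs receives its events in a fixed shape: a base64-encoded, gzip-compressed JSON payload under `awslogs.data`. A sketch of the decode-and-extract step for option D (the log message format is an assumption, and the actual tagging call is shown only as a comment):

```python
import base64
import gzip
import json
import re

def decode_subscription_event(event):
    """Unpack the gzipped, base64-encoded payload that CloudWatch Logs
    delivers to a subscribed Lambda function."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

def instance_ids(payload):
    """Pull EC2 instance IDs out of the log messages (assumes the CloudWatch
    Logs agent includes the instance ID in each login line)."""
    ids = set()
    for log_event in payload.get("logEvents", []):
        ids.update(re.findall(r"i-[0-9a-f]{8,17}", log_event["message"]))
    return sorted(ids)

# In the real function, each ID would then be tagged for the daily sweep:
#   boto3.client("ec2").create_tags(
#       Resources=[iid], Tags=[{"Key": "decommission", "Value": "true"}])

# Synthetic event for illustration:
body = json.dumps({"logEvents": [
    {"message": "Accepted publickey for ec2-user on i-0abc1234de567890f"}
]}).encode()
event = {"awslogs": {"data": base64.b64encode(gzip.compress(body)).decode()}}
print(instance_ids(decode_subscription_event(event)))
# -> ['i-0abc1234de567890f']
```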

QUESTION 28

An application is being deployed with two Amazon EC2 Auto Scaling groups, each configured with an Application Load Balancer. The application is deployed to one of the Auto Scaling groups and an Amazon Route 53 alias record is pointed to the Application Load Balancer of the last deployed Auto Scaling group.
Deployments alternate between the two Auto Scaling groups.
Home security devices are making requests into the application. The Development team notes that new requests are coming into the old stack days after the deployment. The issue is caused by devices that are not observing the Time to Live (TTL) setting on the Amazon Route 53 alias record.
What steps should the DevOps Engineer take to address the issue with requests coming to the old stacks, while creating minimal additional resources?

  A. Create a fleet of Amazon EC2 instances running HAProxy behind an Application Load Balancer. The HAProxy instances will proxy the requests to one of the existing Auto Scaling groups. After a deployment, the HAProxy instances are updated to send requests to the newly deployed Auto Scaling group.
  B. Reduce the application to one Application Load Balancer. Create two target groups named Blue and Green. Create a rule on the Application Load Balancer pointed to a single target group. Add logic to the deployment to update the Application Load Balancer rule to the target group of the newly deployed Auto Scaling group.
  C. Move the application to an AWS Elastic Beanstalk application with two environments. Perform new deployments on the non-live environment. After a deployment, perform an Elastic Beanstalk CNAME swap to make the newly deployed environment the live environment.
  D. Create an Amazon CloudFront distribution. Set the two existing Application Load Balancers as origins on the distribution. After a deployment, update the CloudFront distribution behavior to send requests to the newly deployed Auto Scaling group.

Correct Answer: B
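The deployment step in option B amounts to repointing the listener's forward action between the Blue and Green target groups, so clients that ignore DNS TTLs still land on the single ALB endpoint. A sketch of the parameters such a step could build for `elbv2.modify_listener` (ARNs are placeholders):

```python
def switch_traffic_kwargs(listener_arn, new_target_group_arn):
    """Build the keyword arguments for elbv2.modify_listener so that all
    traffic forwards to the newly deployed target group."""
    return {
        "ListenerArn": listener_arn,
        "DefaultActions": [
            {"Type": "forward", "TargetGroupArn": new_target_group_arn}
        ],
    }

# After a deployment to the Green Auto Scaling group:
kwargs = switch_traffic_kwargs(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/demo/abc/def",
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/Green/123",
)
# boto3.client("elbv2").modify_listener(**kwargs)  # actual API call, omitted here
print(kwargs["DefaultActions"][0]["TargetGroupArn"])
```

Because the switch happens inside the load balancer rather than in DNS, no new stacks or DNS changes are needed, which matches the "minimal additional resources" constraint.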

QUESTION 29

The Deployment team has grown substantially in recent months and so has the number of projects that use separate code repositories. The current process involves configuring AWS CodePipeline manually, and there have been service limit alerts for the count of Amazon S3 buckets.
Which pipeline option will reduce S3 bucket sprawl alerts?

  A. Combine the multiple separate code repositories into a single one, and deploy using a global AWS CodePipeline that has logic for each project.
  B. Create new pipelines by using the AWS API or AWS CLI, and configure them to use a single global S3 bucket with separate prefixes for each project.
  C. Create a new pipeline in a different Region for each project to bypass the service limits for S3 buckets in a single Region.
  D. Create a new pipeline and S3 bucket for each project by using the AWS API or AWS CLI to bypass the service limits for S3 buckets in a single account.

Correct Answer: A
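Whichever option is chosen, the manual-configuration pain point goes away once pipeline declarations are generated programmatically. A sketch of a minimal declaration that reuses one artifact bucket across projects (bucket and role names are hypothetical; CodePipeline namespaces artifacts by pipeline name inside the bucket, so pipelines do not collide):

```python
def pipeline_declaration(project, shared_bucket, role_arn):
    """Build a minimal CodePipeline structure that reuses one artifact
    bucket instead of creating a new bucket per project."""
    return {
        "name": f"{project}-pipeline",
        "roleArn": role_arn,
        "artifactStore": {"type": "S3", "location": shared_bucket},
        "stages": [
            # Source/Build/Deploy actions elided for brevity.
            {"name": "Source", "actions": []},
        ],
    }

decl = pipeline_declaration(
    "billing",                                   # hypothetical project name
    "shared-artifacts-bucket",                   # one bucket for all pipelines
    "arn:aws:iam::111122223333:role/cp-role",    # placeholder role ARN
)
# boto3.client("codepipeline").create_pipeline(pipeline=decl)  # real call, omitted
print(decl["name"], decl["artifactStore"]["location"])
```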

QUESTION 30

A DevOps engineer is assisting with a multi-Region disaster recovery solution for a new application. The application consists of Amazon EC2 instances running in an Auto Scaling group and an Amazon Aurora MySQL DB cluster. The application must be available with an RTO of 120 minutes and an RPO of 60 minutes.
What is the MOST cost-effective way to meet these requirements?

  A. Launch an Aurora DB cluster as an Aurora Replica in a different Region. Create an AWS CloudFormation template for all compute resources and create a stack in two Regions. Write a script that promotes the Aurora Replica to the primary instance in the event of a failure.
  B. Launch an Aurora DB cluster as an Aurora Replica in a different Region and configure automatic cross-Region failover. Create an AWS CloudFormation template that includes an Auto Scaling group, and create a stack in two Regions. Write a script that updates the CloudFormation stack in the disaster recovery Region to increase the number of instances.
  C. Use AWS Lambda to create and copy a snapshot of the Aurora DB cluster to the destination Region hourly. Create an AWS CloudFormation template that includes an Auto Scaling group, and create a stack in two Regions. Restore the Aurora DB cluster from a snapshot and update the Auto Scaling group to start launching instances.
  D. Configure Amazon DynamoDB cross-Region replication. Create an AWS CloudFormation template that includes an Auto Scaling group, and create a stack in two Regions. Write a script that will update the CloudFormation stack in the disaster recovery Region and promote the DynamoDB replica to the primary instance in the event of a failure.

Correct Answer: D
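The RTO/RPO constraints here reduce to simple arithmetic: an hourly snapshot copy means up to 60 minutes of data loss (exactly at the RPO), while a continuously replicating Aurora Replica keeps data loss near zero; either way, restore-and-scale-up must fit inside the 120-minute RTO. A small sketch of that check (the figures passed in are illustrative assumptions, not measured values):

```python
def meets_objectives(worst_case_data_loss_min, worst_case_recovery_min,
                     rpo_min=60, rto_min=120):
    """Check a DR design against the stated recovery objectives:
    data loss within the RPO and recovery time within the RTO."""
    return (worst_case_data_loss_min <= rpo_min
            and worst_case_recovery_min <= rto_min)

# Hourly cross-Region snapshot copy: up to 60 min of data loss, and the
# snapshot restore plus Auto Scaling group update must finish within 120 min.
print(meets_objectives(60, 90))   # True, assuming a ~90-minute restore
# Cross-Region Aurora Replica: near-zero data loss, fast promotion.
print(meets_objectives(1, 30))    # True
```

The cost comparison is the other half of the question: a standing replica incurs continuous DB instance charges, while snapshot copies cost only storage and transfer.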

