DOP-C01 | Amazon-Web-Services DOP-C01 Cram 2019


NEW QUESTION 1
You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

  • A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.

  • B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad hoc MapReduce analysis and write new queries when needed.

  • C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.

  • D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation:
Amazon Elasticsearch Service makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full-text search, application monitoring, and more. Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch's easy-to-use APIs and real-time capabilities along with the availability, scalability, and security required by production workloads. The service offers built-in integrations with Kibana, Logstash, and AWS services including Amazon Kinesis Firehose, AWS Lambda, and Amazon CloudWatch so that you can go from raw data to actionable insights quickly. For more information on Elasticsearch Service, please refer to the below link:
• https://aws.amazon.com/elasticsearch-service/
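
As a rough, hedged illustration (not part of the original question), the boto3 sketch below provisions a small Elasticsearch domain that the log groups could later be streamed into; the domain name, version, and sizing are all assumptions:

import boto3

# Hypothetical sketch: provision a small Amazon Elasticsearch Service domain
# that CloudWatch Logs groups can later be streamed into for Kibana analysis.
es = boto3.client('es')
response = es.create_elasticsearch_domain(
    DomainName='service-logs',                      # hypothetical name
    ElasticsearchVersion='6.8',                     # assumed version
    ElasticsearchClusterConfig={
        'InstanceType': 'r5.large.elasticsearch',
        'InstanceCount': 2,
    },
    EBSOptions={'EBSEnabled': True, 'VolumeType': 'gp2', 'VolumeSize': 20},
)
print(response['DomainStatus']['ARN'])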

NEW QUESTION 2
Your team is responsible for an AWS Elastic Beanstalk application. The business requires that you move to a continuous deployment model, releasing updates to the application multiple times per day with zero downtime. What should you do to enable this and still be able to roll back almost immediately in an emergency to the previous version?

  • A. Enable rolling updates in the Elastic Beanstalk environment, setting an appropriate pause time for application startup.

  • B. Create a second Elastic Beanstalk environment running the new application version, and swap the environment CNAMEs.

  • C. Develop the application to poll for a new application version in your code repository; download and install to each running Elastic Beanstalk instance.

  • D. Create a second Elastic Beanstalk environment with the new application version, and configure the old environment to redirect clients, using the HTTP 301 response code, to the new environment.

Answer: B

Explanation:
The AWS Documentation mentions the below:
Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. It is possible to avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap the CNAMEs of the two environments to redirect traffic to the new version instantly.
For more information on the Elastic Beanstalk swap URL feature, please see the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
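
A minimal boto3 sketch of the cutover step, assuming two already-running environments whose names (my-app-blue, my-app-green) are hypothetical:

import boto3

# Hedged sketch: swap the CNAMEs of a "blue" (current) and "green" (new
# version) Elastic Beanstalk environment to shift traffic instantly.
eb = boto3.client('elasticbeanstalk')
eb.swap_environment_cnames(
    SourceEnvironmentName='my-app-blue',
    DestinationEnvironmentName='my-app-green',
)
# Rolling back in an emergency is the same call with the environments reversed.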

NEW QUESTION 3
Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this?

  • A. Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again.

  • B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency.

  • C. Raise the CloudWatch Alarms threshold associated with your autoscaling group, so the scaling takes more of an increase in demand before beginning.

  • D. Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.

Answer: B

Explanation:
The issue here is that the right metric is not being used for scaling up and down.
Option A is not valid because the group stays scaled when traffic decreases, which means the scale-down metric threshold is never being crossed in CloudWatch; a longer cooldown will not fix that.
Option C is not valid because raising the CloudWatch alarm threshold will not ensure that the instances scale down when the traffic decreases.
Option D is not valid because the question does not mention any constraint that points to the instance size. For an example of using custom metrics for scaling in and out, please follow the below link for a use case.
• https://blog.powerupcloud.com/aws-autoscaling-based-on-database-query-custom-metrics-f396c16e5e6a
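
To make the fix concrete, here is a hedged boto3 sketch that ties a scale-out policy to an alarm on a hypothetical bottleneck metric (MyApp/QueueDepth); the group name and threshold are illustrative only, not values from the question:

import boto3

autoscaling = boto3.client('autoscaling')
cloudwatch = boto3.client('cloudwatch')

# Create a simple scale-out policy on the Auto Scaling group.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName='my-asg',          # hypothetical group name
    PolicyName='scale-out-on-queue-depth',
    PolicyType='SimpleScaling',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm on the constraint metric, with the threshold set at the value where
# response latency starts to degrade. Namespace/metric are hypothetical.
cloudwatch.put_metric_alarm(
    AlarmName='queue-depth-high',
    Namespace='MyApp',
    MetricName='QueueDepth',
    Statistic='Average',
    Period=60,
    EvaluationPeriods=3,
    Threshold=100.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[policy['PolicyARN']],
)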

NEW QUESTION 4
You need to deploy a Node.js application and do not have any experience with AWS. Which deployment method will be the simplest for you to use?

  • A. AWS Elastic Beanstalk

  • B. AWS CloudFormation

  • C. AWS EC2

  • D. AWS OpsWorks

Answer: A

Explanation:
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications.
AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring
For more information on Elastic beanstalk please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

NEW QUESTION 5
The company you work for has a huge amount of infrastructure built on AWS. However, there have been some concerns recently about the security of this infrastructure, and an external auditor has been given the task of running a thorough check of all of your company's AWS assets. The auditor will be in the USA while your company's infrastructure resides in the Asia Pacific (Sydney) region on AWS. Initially, he needs to check all of your VPC assets, specifically security groups and NACLs. You have been assigned the task of providing the auditor with a login to be able to do this. Which of the following would be the best and most secure solution to provide the auditor with so he can begin his initial investigations? Choose the correct answer from the options below.

  • A. Create an IAM user tied to an administrator role. Also provide an additional level of security with MFA.

  • B. Give him root access to your AWS Infrastructure, because as an auditor he will need access to every service.

  • C. Create an IAM user who will have read-only access to your AWS VPC infrastructure and provide the auditor with those credentials.

  • D. Create an IAM user with full VPC access but set a condition that will not allow him to modify anything if the request is from any IP other than his own.

Answer: C

Explanation:
Generally, you should refrain from giving high-level permissions and grant only the permissions that are required. In this case, option C fits well by providing just the relevant read-only access the auditor needs.
For more information on IAM please see the below link:
• https://aws.amazon.com/iam/
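
A hedged boto3 sketch of option C, using the AWS managed AmazonVPCReadOnlyAccess policy; the user name and temporary password are hypothetical:

import boto3

iam = boto3.client('iam')

# Create the auditor's user and grant read-only visibility into VPC resources.
iam.create_user(UserName='external-auditor')
iam.attach_user_policy(
    UserName='external-auditor',
    PolicyArn='arn:aws:iam::aws:policy/AmazonVPCReadOnlyAccess',
)
# Console access with a forced password reset on first login.
iam.create_login_profile(
    UserName='external-auditor',
    Password='TemporaryP@ssw0rd!',          # hypothetical; rotated at first login
    PasswordResetRequired=True,
)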

NEW QUESTION 6
Your development team is using access keys to develop an application that has access to S3 and DynamoDB. A new security policy has outlined that the credentials should not be older than 2 months, and should be rotated. How can you achieve this?

  • A. Use the application to rotate the keys every 2 months via the SDK.

  • B. Use a script which will query the date the keys were created. If older than 2 months, delete them and recreate new keys.

  • C. Delete the user associated with the keys after every 2 months. Then recreate the user again.

  • D. Delete the IAM Role associated with the keys after every 2 months. Then recreate the IAM Role again.

Answer: B

Explanation:
One can use the CLI command list-access-keys to get the access keys. This command also returns the "CreateDate" of the keys. If the CreateDate is older than 2 months, then the keys can be deleted.
The list-access-keys CLI command returns information about the access key IDs associated with the specified IAM user. If there are none, the action returns an empty list.
For more information on the CLI command, please refer to the below link: http://docs.aws.amazon.com/cli/latest/reference/iam/list-access-keys.html
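
A minimal sketch of such a rotation script in Python with boto3 (equivalent to the CLI approach); the user name and the 60-day cutoff are assumptions:

import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client('iam')
MAX_AGE = timedelta(days=60)                # the 2-month policy window

def rotate_old_keys(user_name):
    """Delete and recreate any access key older than MAX_AGE."""
    keys = iam.list_access_keys(UserName=user_name)['AccessKeyMetadata']
    for key in keys:
        if datetime.now(timezone.utc) - key['CreateDate'] > MAX_AGE:
            iam.delete_access_key(UserName=user_name,
                                  AccessKeyId=key['AccessKeyId'])
            new_key = iam.create_access_key(UserName=user_name)['AccessKey']
            # Hand the new AccessKeyId/SecretAccessKey to the app securely.
            print('Rotated key for', user_name, '->', new_key['AccessKeyId'])

rotate_old_keys('dev-app-user')             # hypothetical user name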

NEW QUESTION 7
The AWS CodeDeploy service can be used to deploy code from which of the below mentioned source repositories? Choose 3 answers from the options given below.

  • A. S3 Buckets

  • B. GitHub repositories

  • C. Subversion repositories

  • D. Bitbucket repositories

Answer: ABD

Explanation:
The AWS documentation mentions the following
You can deploy a nearly unlimited variety of application content, such as code, web and configuration files, executables, packages, scripts, multimedia files, and so on. AWS CodeDeploy can deploy application content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. You do not need to make changes to your existing code before you can use AWS CodeDeploy.
For more information on AWS Code Deploy, please refer to the below link:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html

NEW QUESTION 8
Which of the following is a reliable and durable logging solution to track changes made to your AWS resources?

  • A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.

  • B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.

  • C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.

  • D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.

Answer: A

Explanation:
AWS Identity and Access Management (IAM) is integrated with AWS CloudTrail, a service that logs AWS events made by or on behalf of your AWS account. CloudTrail logs authenticated AWS API calls and also AWS sign-in events, and collects this event information in files that are delivered to Amazon S3 buckets. You need to ensure that all services are included, which is why the global services option must be selected.
Option B misses the global services option, and options B and D just add overhead with SNS notifications and three separate S3 buckets.
For more information on CloudTrail, please visit the below URL:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
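
As a rough sketch, the trail in option A could be created with boto3 as below; the trail and bucket names are hypothetical, and the bucket is assumed to already carry the required CloudTrail bucket policy:

import boto3

cloudtrail = boto3.client('cloudtrail')

# One trail, one new S3 bucket, with global service events included.
cloudtrail.create_trail(
    Name='audit-trail',
    S3BucketName='my-cloudtrail-logs',      # hypothetical bucket name
    IncludeGlobalServiceEvents=True,
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name='audit-trail')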

NEW QUESTION 9
Which of the following services allows you to easily run and manage Docker-enabled applications across a cluster of Amazon EC2 instances?

  • A. Elastic Beanstalk

  • B. Elastic Container Service

  • C. OpsWorks

  • D. CloudWatch

Answer: B

Explanation:
The AWS documentation provides the following information
Amazon EC2 Container Service (ECS) allows you to easily run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Applications packaged as containers locally will deploy and run in the same way as containers managed by Amazon ECS. Amazon ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, and allows you to schedule Docker-enabled applications across your cluster based on your resource needs and availability requirements.
For more information on ECS, please visit the link:
• https://aws.amazon.com/ecs/details/

NEW QUESTION 10
You are in charge of designing a CloudFormation template which deploys a LAMP stack. After deploying a stack, you see that the status of the stack is showing as CREATE_COMPLETE, but the Apache server is still not up and running and is experiencing issues while starting up. You want to ensure that the stack creation only shows the status of CREATE_COMPLETE after all resources defined in the stack are up and running. How can you achieve this?
Choose 2 answers from the options given below.

  • A. Define a stack policy which defines that all underlying resources should be up and running before showing a status of CREATE_COMPLETE.

  • B. Use lifecycle hooks to mark the completion of the creation and configuration of the underlying resource.

  • C. Use the CreationPolicy to ensure it is associated with the EC2 Instance resource.

  • D. Use the CFN helper scripts to signal once the resource configuration is complete.

Answer: CD

Explanation:
The AWS Documentation mentions
When you provision an Amazon EC2 instance in an AWS CloudFormation stack, you might specify additional actions to configure the instance, such as installing software packages or bootstrapping applications. Normally, CloudFormation proceeds with stack creation after the instance has been successfully created. However, you can use a CreationPolicy so that CloudFormation proceeds with stack creation only after your configuration actions are done. That way you'll know your applications are ready to go after stack creation succeeds.
For more information on the CreationPolicy, please visit the below URL: https://aws.amazon.com/blogs/devops/use-a-creationpolicy-to-wait-for-on-instance-configurations/
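
A hedged sketch of what such a template fragment might look like; the AMI ID, stack name, and installed packages are placeholders, not values from the question:

import boto3

# CloudFormation will not mark the instance CREATE_COMPLETE until cfn-signal
# reports success from inside the instance (CreationPolicy + helper scripts).
template = """
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M
    Properties:
      ImageId: ami-12345678          # hypothetical AMI ID
      InstanceType: t2.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          yum install -y httpd aws-cfn-bootstrap
          service httpd start
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \\
            --resource WebServer --region ${AWS::Region}
"""

boto3.client('cloudformation').create_stack(
    StackName='lamp-stack', TemplateBody=template)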

NEW QUESTION 11
You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What's the best design for this system, using DynamoDB?

  • A. DynamoDB table with 100x higher read than write throughput, with CloudFront caching.

  • B. DynamoDB table with roughly equal read and write throughput, with CloudFront caching.

  • C. DynamoDB table with 100x higher read than write throughput, with ElastiCache caching.

  • D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.

Answer: D

Explanation:
Because the 100x read ratio is mostly driven by a small subset, with caching, only a roughly equal number of reads to writes will miss the cache, since the supermajority will hit the top 1% scores. Knowing we need to set the values roughly equal when using caching, we select AWS ElastiCache, because CloudFront cannot directly cache DynamoDB queries, and ElastiCache is an excellent in-memory cache for database queries, rather than a distributed proxy cache for content delivery.
For more information on DynamoDB table guidelines please refer to the below link:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
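
A rough cache-aside sketch of option D in Python; the table name, cache endpoint, and TTL are hypothetical, and ElastiCache is reached with an ordinary Redis client:

import json
import boto3
import redis  # ElastiCache (Redis engine) is accessed with a normal client

table = boto3.resource('dynamodb').Table('GameScores')  # hypothetical table
cache = redis.Redis(host='my-cache.abc123.cache.amazonaws.com', port=6379)

def get_score(score_id):
    """Cache-aside read: the hot top-1% of scores are served from ElastiCache,
    so only cache misses consume DynamoDB read throughput."""
    cached = cache.get(score_id)
    if cached is not None:
        return json.loads(cached)
    item = table.get_item(Key={'score_id': score_id}).get('Item')
    if item is not None:
        # default=str handles DynamoDB's Decimal values in this sketch
        cache.setex(score_id, 300, json.dumps(item, default=str))
    return item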

NEW QUESTION 12
You are working for a startup company that is building an application that receives large amounts of data. Unfortunately, current funding has left the start-up short on cash; it cannot afford to purchase thousands of dollars of storage hardware and has opted to use AWS. Which service would you implement in order to store a virtually unlimited amount of data without any effort to scale when demand unexpectedly increases? Choose the correct answer from the options below.

  • A. Amazon S3, because it provides unlimited amounts of storage, scales automatically, and is highly available and durable

  • B. Amazon Glacier, to keep costs low for storage and scale infinitely

  • C. Amazon Import/Export, because Amazon assists in migrating large amounts of data to Amazon S3

  • D. Amazon EC2, because EBS volumes can scale to hold any amount of data and, when used with Auto Scaling, can be designed for fault tolerance and high availability

Answer: A

Explanation:
The best option is to use S3, because you can host a large amount of data in S3 and it is the best storage option provided by AWS for this requirement.
For more information on S3, please refer to the below link:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

NEW QUESTION 13
You are creating a CloudFormation template which takes in a database password as a parameter. How can you ensure that the password is not visible when anybody tries to describe the stack?

  • A. Use the password attribute for the resource

  • B. Use the NoEcho property for the parameter value

  • C. Use the hidden property for the parameter value

  • D. Set the hidden attribute for the CloudFormation resource.

Answer: B

Explanation:
The AWS Documentation mentions
For sensitive parameter values (such as passwords), set the NoEcho property to true. That way, whenever anyone describes your stack, the parameter value is shown as asterisks (*****).
For more information on CloudFormation parameters, please visit the below URL:
• http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
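
A minimal sketch of a template using NoEcho, kept in Python so the examples stay in one language; the parameter name and resource properties are illustrative:

import boto3

# describe-stacks will show DBPassword as asterisks because of NoEcho.
template = """
Parameters:
  DBPassword:
    Type: String
    NoEcho: true
    MinLength: 8
    Description: Master password for the database
Resources:
  DB:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: "20"
      DBInstanceClass: db.t2.micro
      Engine: mysql
      MasterUsername: admin
      MasterUserPassword: !Ref DBPassword
"""

boto3.client('cloudformation').validate_template(TemplateBody=template)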

NEW QUESTION 14
Management has reported an increase in the monthly bill from Amazon Web Services, and they are extremely concerned with this increased cost. Management has asked you to determine the exact cause of this increase. After reviewing the billing report, you notice an increase in the data transfer cost. How can you provide management with a better insight into data transfer use?

  • A. Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies.

  • B. Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer.

  • C. Deliver custom metrics to Amazon CloudWatch per application that break down application data transfer into multiple, more specific data points.

  • D. Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.

Answer: C

Explanation:
You can publish your own metrics to CloudWatch using the AWS CLI or an API. You can view statistical graphs of your published metrics with the AWS Management Console.
CloudWatch stores data about a metric as a series of data points. Each data point has an associated time stamp. You can even publish an aggregated set of data points called a statistic set.
If you have custom metrics specific to your application, you can give management a breakdown of the exact issue.
Option A won't be sufficient to provide better insights.
Option B is an overhead when you can simply make the application publish custom metrics.
Option D is invalid because the ELB metrics alone will not give the entire picture.
For more information on custom metrics, please refer to the below document link from AWS: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
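
A hedged boto3 sketch of publishing one such per-application data point; the namespace, dimension, and value are hypothetical:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Each application publishes its own outbound-transfer data point, so the bill
# can be broken down per service.
cloudwatch.put_metric_data(
    Namespace='MyCompany/DataTransfer',     # hypothetical namespace
    MetricData=[{
        'MetricName': 'BytesSent',
        'Dimensions': [{'Name': 'Application', 'Value': 'image-api'}],
        'Unit': 'Bytes',
        'Value': 524288.0,
    }],
)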

NEW QUESTION 15
Which of the following is incorrect when it comes to using the instances in an OpsWorks stack?

  • A. In a stack you can use a mix of both Windows and Linux operating systems

  • B. You can start and stop instances manually in a stack

  • C. You can use custom AMIs as long as they are based on one of the AWS OpsWorks Stacks-supported AMIs

  • D. You can use time-based automatic scaling with any stack

Answer: A

Explanation:
The AWS documentation mentions the following about OpsWorks stacks:
• A stack's instances can run either Linux or Windows. A stack can have different Linux versions or distributions on different instances, but you cannot mix Linux and Windows instances.
• You can use custom AMIs (Amazon Machine Images), but they must be based on one of the AWS OpsWorks Stacks-supported AMIs.
• You can start and stop instances manually or have AWS OpsWorks Stacks automatically scale the number of instances. You can use time-based automatic scaling with any stack; Linux stacks also can use load-based scaling.
• In addition to using AWS OpsWorks Stacks to create Amazon EC2 instances, you can also register instances with a Linux stack that were created outside of AWS OpsWorks Stacks.
For more information on OpsWorks stacks, please visit the below link: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html

NEW QUESTION 16
You are working as an AWS DevOps admin for your company. You are in charge of building the infrastructure for the company's development teams using CloudFormation. The template will include building the VPC and networking components, installing a LAMP stack and securing the created resources. As per AWS best practices, what is the best way to design this template?

  • A. Create a single CloudFormation template to create all the resources since it would be easier from the maintenance perspective.

  • B. Create multiple CloudFormation templates based on the number of VPCs in the environment.

  • C. Create multiple CloudFormation templates based on the number of development groups in the environment.

  • D. Create multiple CloudFormation templates for each set of logical resources, one for networking, the other for LAMP stack creation.

Answer: D

Explanation:
Creating multiple CloudFormation templates is an example of using nested stacks. The advantage of using nested stacks is given below as per the AWS documentation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on CloudFormation best practices, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
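
A rough sketch of a parent template following option D; the child template URLs and the output name are hypothetical, and the network child template is assumed to export a VpcId output:

import boto3

# Parent template: networking and the LAMP stack live in their own templates,
# composed with AWS::CloudFormation::Stack resources.
parent_template = """
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml
  LampStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/lamp.yaml
      Parameters:
        VpcId: !GetAtt NetworkStack.Outputs.VpcId
"""

boto3.client('cloudformation').create_stack(
    StackName='dev-environment', TemplateBody=parent_template)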

NEW QUESTION 17
One of the instances in your Auto Scaling group returns a health check status of Impaired to Auto Scaling. What will Auto Scaling do in this case?

  • A. Terminate the instance and launch a new instance

  • B. Send an SNS notification

  • C. Perform a health check until cool down before declaring that the instance has failed

  • D. Wait for the instance to become healthy before sending traffic

Answer: A

Explanation:
Auto Scaling periodically performs health checks on the instances in your Auto Scaling group and identifies any instances that are unhealthy. You can configure Auto Scaling to determine the health status of an instance using Amazon EC2 status checks, Elastic Load Balancing health checks, or custom health checks.
By default, Auto Scaling health checks use the results of the EC2 status checks to determine the health status of an instance. Auto Scaling marks an instance as unhealthy if the instance fails one or more of the status checks.
For more information on monitoring in Auto Scaling, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-monitoring-features.html

NEW QUESTION 18
Your company uses an application hosted in AWS which consists of EC2 Instances. The logs of the EC2 instances need to be processed and analyzed in real time, since this is a requirement from the IT Security department. Which of the following can be used to process the logs in real time?

  • A. Use CloudWatch Logs to process and analyze the logs in real time

  • B. Use Amazon Glacier to store the logs and then use Amazon Kinesis to process and analyze the logs in real time

  • C. Use Amazon S3 to store the logs and then use Amazon Kinesis to process and analyze the logs in real time

  • D. Use another EC2 Instance with a larger instance type to process the logs

Answer: C

Explanation:
The AWS Documentation mentions the below:
Real-time metrics and reporting: You can use data collected into Kinesis Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
Real-time data analytics: This combines the power of parallel processing with the value of real-time data. For example, process website clickstreams in real time, and then analyze site usability engagement using multiple different Kinesis Streams applications running in parallel.
Amazon Glacier is meant for archival purposes and should not be used for storing the logs for real-time processing.
For more information on Amazon Kinesis, please refer to the below link:
• http://docs.aws.amazon.com/streams/latest/dev/introduction.html
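
A minimal boto3 sketch of the producer side of such a pipeline, pushing log lines into a Kinesis stream for real-time analysis (one common ingestion pattern; the stream name and log line are hypothetical):

import boto3

kinesis = boto3.client('kinesis')

def ship_log_line(line, instance_id):
    # Each log line becomes a stream record a consumer can analyze immediately.
    kinesis.put_record(
        StreamName='security-log-stream',   # hypothetical stream name
        Data=line.encode('utf-8'),
        PartitionKey=instance_id,           # spreads records across shards
    )

ship_log_line('Failed ssh login from 10.0.0.5', 'i-0abc123def456')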

NEW QUESTION 19
A company has developed a Ruby on Rails content management platform. Currently, OpsWorks with several stacks for dev, staging, and production is being used to deploy and manage the application. Now the company wants to start using Python instead of Ruby. How should the company manage the new deployment? Choose the correct answer from the options below

  • A. Update the existing stack with the Python application code and deploy the application using the deploy lifecycle action to implement the application code.

  • B. Create a new stack that contains a new layer with the Python code. To cut over to the new stack the company should consider using a Blue/Green deployment.

  • C. Create a new stack that contains the Python application code and manage separate deployments of the application via the secondary stack using the deploy lifecycle action to implement the application code.

  • D. Create a new stack that contains the Python application code and manages separate deployments of the application via the secondary stack.

Answer: B

Explanation:
Blue/green deployment is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application.
Blue/green deployments can mitigate common risks associated with deploying software, such as downtime and rollback capability
Please find the below link to a whitepaper on blue/green deployments:
• https://d1.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf

NEW QUESTION 20
You have carried out a deployment using Elastic Beanstalk with the All at once method, but the application is unavailable. What could be the reason for this?

  • A. You need to configure ELB along with Elastic Beanstalk

  • B. You need to configure Route53 along with Elastic Beanstalk

  • C. There will always be a few seconds of downtime before the application is available

  • D. The cooldown period is not properly configured for Elastic Beanstalk

Answer: C

Explanation:
The AWS Documentation mentions
Because Elastic Beanstalk uses a drop-in upgrade process, there might be a few seconds of downtime. Use rolling deployments to minimize the effect of deployments on your production environments.
For more information on troubleshooting Elastic Beanstalk, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/troubleshooting-deployments.html
• https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

NEW QUESTION 21
You work as a DevOps Engineer for your company. There are currently a number of environments hosted via Elastic Beanstalk. There is a requirement to ensure that the rollback time for a new application version deployment is kept to a minimum. Which Elastic Beanstalk deployment method would fulfill this requirement?

  • A. Rolling with additional batch

  • B. All at Once

  • C. Blue/Green

  • D. Rolling

Answer: C

Explanation:
The below table from the AWS documentation shows that the least amount of time is spent in rollbacks when it comes to Blue/Green deployments. This is because the only thing that needs to be done is for the URLs to be swapped.
[Exhibit: Elastic Beanstalk deployment methods comparison table - not included]
For more information on Elastic Beanstalk deployment strategies, please visit the below URL: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

NEW QUESTION 22
Which of the following is the right sequence of initial steps in the deployment of application revisions using AWS CodeDeploy?
1) Specify deployment configuration
2) Upload revision
3) Create application
4) Specify deployment group

  • A. 3, 2, 1 and 4

  • B. 3, 1, 2 and 4

  • C. 3, 4, 1 and 2

  • D. 3, 4, 2 and 1

Answer: C

Explanation:
The below diagram from the AWS documentation shows the deployment steps:
[Exhibit: CodeDeploy deployment steps diagram - not included]
For more information on the deployment steps please refer to the below link:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html

NEW QUESTION 23
Your IT company is currently hosting a production environment in Elastic Beanstalk. You understand that the Elastic Beanstalk service provides a facility known as managed updates, which are minor and patch version updates that are periodically applied to your platform. Your IT supervisor is worried about the impact that these updates would have on the system. What can you tell him about the Elastic Beanstalk service with regards to managed updates?

  • A. Package updates can be applied during a configurable weekly maintenance window

  • B. Elastic Beanstalk applies managed updates with no downtime

  • C. Elastic Beanstalk applies managed updates with no reduction in capacity

  • D. All of the above

Answer: D

Explanation:
The AWS Documentation mentions the following on package updates for the Elastic Beanstalk environment:
You can configure your environment to apply minor and patch version updates automatically during a configurable weekly maintenance window with Managed Platform Updates. Elastic Beanstalk applies managed updates with no downtime or reduction in capacity, and cancels the update immediately if instances running your application on the new version fail health checks.
For more information on Elastic Beanstalk managed updates please refer to the URLs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-platform-update-managed.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.platform.upgrade.html
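
A hedged boto3 sketch of enabling managed platform updates with a weekly window; the environment name and window are hypothetical, and a service role for managed updates is assumed to already exist:

import boto3

eb = boto3.client('elasticbeanstalk')

# Enable managed actions and pick the weekly maintenance window; the
# UpdateLevel controls whether minor or only patch updates are applied.
eb.update_environment(
    EnvironmentName='production-env',       # hypothetical name
    OptionSettings=[
        {'Namespace': 'aws:elasticbeanstalk:managedactions',
         'OptionName': 'ManagedActionsEnabled', 'Value': 'true'},
        {'Namespace': 'aws:elasticbeanstalk:managedactions',
         'OptionName': 'PreferredStartTime', 'Value': 'Sun:03:00'},
        {'Namespace': 'aws:elasticbeanstalk:managedactions:platformupdate',
         'OptionName': 'UpdateLevel', 'Value': 'minor'},
    ],
)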

NEW QUESTION 24
There is a requirement for an application hosted on a VPC to access an on-premises LDAP server. The VPC and the on-premises location are connected via an IPSec VPN. Which of the below are the right options for the application to authenticate each user? Choose 2 answers from the options below.

  • A. Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials.

  • B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access any AWS resources.

  • C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate AWS service.

  • D. The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials; the application can use the IAM temporary credentials to access the appropriate AWS service.

Answer: BC

Explanation:
When you need an on-premises environment to work with a cloud environment, you would normally have 2 artefacts for authentication purposes:
• An identity store - this is the on-premises store, such as Active Directory, which stores all the information for the users and the groups they belong to.
• An identity broker - this is used as an intermediate agent between the on-premises location and the cloud environment. In Windows you have a system known as Active Directory Federation Services to provide this facility.
Hence in the above case, you need to have an identity broker which can work with the identity store and the Security Token Service in AWS. An example diagram of how this works from the AWS documentation is given below.
[Exhibit: federated access flow diagram - not included]
For more information on federated access, please visit the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html
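
A rough Python sketch of the broker flow in options B/C; authenticate_ldap is a hypothetical stand-in for the directory bind and role lookup, and the role ARN it returns is illustrative:

import boto3

def authenticate_ldap(username, password):
    # Hypothetical stand-in: a real broker would bind to the LDAP server and
    # read the mapped IAM role ARN from the user's directory entry.
    return 'arn:aws:iam::123456789012:role/LdapMappedRole'

def broker_get_temporary_credentials(username, password):
    """Authenticate against LDAP, then call STS for temporary credentials."""
    role_arn = authenticate_ldap(username, password)
    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=username,
        DurationSeconds=3600,
    )['Credentials']
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return creds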

NEW QUESTION 25
You have an Auto Scaling group with an Elastic Load Balancer. You decide to suspend the Auto Scaling AddToLoadBalancer for a short period of time. What will happen to the instances launched during the suspension period?

  • A. The instances will be registered with ELB once the process has resumed

  • B. Auto Scaling will not launch the instances during this period because of the suspension

  • C. The instances will not be registered with ELB. You must manually register them when the process is resumed

  • D. It is not possible to suspend the AddToLoadBalancer process

Answer: C

Explanation:
If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended. You must register those instances manually.
For more information on the Suspension and Resumption process, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html
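
A minimal boto3 sketch of suspending and later resuming just this one process; the group name is hypothetical:

import boto3

autoscaling = boto3.client('autoscaling')

# Suspend only AddToLoadBalancer: instances launched during this window are
# not registered with the ELB and must be registered manually afterwards.
autoscaling.suspend_processes(
    AutoScalingGroupName='my-asg',          # hypothetical group name
    ScalingProcesses=['AddToLoadBalancer'],
)
# ... later ...
autoscaling.resume_processes(
    AutoScalingGroupName='my-asg',
    ScalingProcesses=['AddToLoadBalancer'],
)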

NEW QUESTION 26
You need to store a large volume of data. The data needs to be readily accessible for a short period, but then needs to be archived indefinitely after that. What is a cost-effective solution?

  • A. Store all the data in S3 so that it can be more cost effective

  • B. Store your data in Amazon S3, and use lifecycle policies to archive to Amazon Glacier

  • C. Store your data in an EBS volume, and use lifecycle policies to archive to Amazon Glacier.

  • D. Store your data in Amazon S3, and use lifecycle policies to archive to S3-Infrequent Access

Answer: B

Explanation:
The AWS documentation mentions the following on Lifecycle policies:
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
Transition actions - in which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
Expiration actions - in which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information on S3 Lifecycle policies, please visit the below URL:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
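
A hedged boto3 sketch of option B's lifecycle rule; the bucket name, prefix, and 30-day window are assumptions:

import boto3

s3 = boto3.client('s3')

# Objects under backups/ stay in S3 for 30 days of ready access, then
# transition to Glacier for indefinite, low-cost archival.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-backup-bucket',              # hypothetical bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-backups',
            'Status': 'Enabled',
            'Filter': {'Prefix': 'backups/'},
            'Transitions': [{'Days': 30, 'StorageClass': 'GLACIER'}],
        }],
    },
)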

NEW QUESTION 27
Your application uses CloudFormation to orchestrate your application's resources. During your testing phase before the application went live, your Amazon RDS instance type was changed and caused the instance to be re-created, resulting in the loss of test data. How should you prevent this from occurring in the future?

  • A. Within the AWS CloudFormation parameter with which users can select the Amazon RDS instance type, set AllowedValues to only contain the current instance type.

  • B. Use an AWS CloudFormation stack policy to deny updates to the instance. Only allow UpdateStack permission to IAM principals that are denied SetStackPolicy.

  • C. In the AWS CloudFormation template, set the AWS::RDS::DBInstance's DBInstanceClass property to be read-only.

  • D. Subscribe to the AWS CloudFormation notification "BeforeResourceUpdate," and call CancelStackUpdate if the resource identified is the Amazon RDS instance.

  • E. Update the stack using ChangeSets

Answer: E

Explanation:
When you need to update a stack, understanding how your changes will affect running resources before you implement them can help you update stacks with confidence. Change sets allow you to preview how proposed changes to a stack might impact your running resources, for example, whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only when you decide to execute the change set, allowing you to decide whether to proceed with your proposed changes or explore other changes by creating another change set.
For example, you can use a change set to verify that AWS CloudFormation won't replace your stack's database instances during an update.
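
A rough boto3 sketch of previewing an update with a change set before executing it; the stack name, change set name, and template file are hypothetical:

import boto3

cfn = boto3.client('cloudformation')

with open('updated-template.yaml') as f:
    body = f.read()

cfn.create_change_set(
    StackName='app-stack',
    ChangeSetName='rds-instance-type-change',
    TemplateBody=body,
)
cfn.get_waiter('change_set_create_complete').wait(
    StackName='app-stack', ChangeSetName='rds-instance-type-change')

# Inspect each change for Replacement=True on the RDS instance before
# calling execute_change_set(); abort if data-bearing resources are replaced.
for change in cfn.describe_change_set(
        StackName='app-stack',
        ChangeSetName='rds-instance-type-change')['Changes']:
    print(change['ResourceChange'])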

NEW QUESTION 28
Which of the following files needs to be included along with your source code binaries when your application uses the EC2/On-Premises compute platform and you deploy it using the AWS CodeDeploy service?

  • A. appspec.yml

  • B. appconfig.yml

  • C. appspec.json

  • D. appconfig.json

Answer: A

Explanation:
The AWS Documentation mentions the below:
The application specification file (AppSpec file) is a YAML-formatted file used by AWS CodeDeploy to determine:
• what it should install onto your instances from your application revision in Amazon S3 or GitHub
• which lifecycle event hooks to run in response to deployment lifecycle events
An AppSpec file must be named appspec.yml and it must be placed in the root of an application's source code directory structure. Otherwise, deployments will fail. For more information on the AppSpec file, please visit the below URL:
http://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Note: If you are deploying your code on the AWS Lambda compute platform, an AppSpec file can be YAML-formatted or JSON-formatted. You can also enter the contents of an AppSpec file directly into the AWS CodeDeploy console when you create a deployment. However, this question is about deploying source code binaries on the EC2/On-Premises compute platform, so the AppSpec file must be a YAML-formatted file named appspec.yml placed in the root of the directory structure of the application's source code. Otherwise, deployments fail.
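
A minimal sketch of such a file, written from Python so the examples stay in one language; the destination path and hook script are hypothetical:

# Minimal appspec.yml for the EC2/On-Premises platform.
appspec = """\
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh   # hypothetical script in the bundle
      timeout: 300
"""

# The file must sit at the root of the revision bundle, named exactly appspec.yml.
with open('appspec.yml', 'w') as f:
    f.write(appspec)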

NEW QUESTION 29
Your company has a set of EC2 Instances that access data objects stored in an S3 bucket. Your IT Security department is concerned about the security of this architecture and wants you to implement the following:
1) Ensure that the EC2 Instance securely accesses the data objects stored in the S3 bucket
2) Ensure that the integrity of the objects stored in S3 is maintained.
Which of the following would help fulfill the requirements of the IT Security department? Choose 2 answers from the options given below.

  • A. Create an IAM user and ensure the EC2 Instances use the IAM user credentials to access the data in the bucket.

  • B. Create an IAM Role and ensure the EC2 Instances use the IAM Role to access the data in the bucket.

  • C. Use S3 Cross Region replication to replicate the objects so that the integrity of data is maintained.

  • D. Use an S3 bucket policy that ensures that MFA Delete is set on the objects in the bucket.

Answer: BD

Explanation:
The AWS Documentation mentions the following:
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on IAM Roles, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
MFA Delete can be used to add another layer of security to S3 objects to prevent accidental deletion of objects. For more information on MFA Delete, please refer to the below link:
• https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
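
A hedged boto3 sketch of option B's role setup; the role name is hypothetical and the broad AmazonS3ReadOnlyAccess managed policy is an illustrative choice (a scoped bucket policy would be tighter):

import json
import boto3

iam = boto3.client('iam')

# Role that EC2 instances can assume, exposed through an instance profile.
trust = {
    'Version': '2012-10-17',
    'Statement': [{'Effect': 'Allow',
                   'Principal': {'Service': 'ec2.amazonaws.com'},
                   'Action': 'sts:AssumeRole'}],
}
iam.create_role(RoleName='s3-data-reader',
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName='s3-data-reader',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess')
iam.create_instance_profile(InstanceProfileName='s3-data-reader')
iam.add_role_to_instance_profile(InstanceProfileName='s3-data-reader',
                                 RoleName='s3-data-reader')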

NEW QUESTION 30
Your application requires long-term storage for backups and other data that you need to keep readily available but with lower cost. Which S3 storage option should you use?

  • A. Amazon S3 Standard - Infrequent Access

  • B. S3 Standard

  • C. Glacier

  • D. Reduced Redundancy Storage

Answer: A

Explanation:
The AWS Documentation mentions the following
Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee.
For more information on S3 Storage classes, please visit the below URL:
• https://aws.amazon.com/s3/storage-classes/

NEW QUESTION 31
......
