SAA-C03 | Leading SAA-C03 Dump For AWS Certified Solutions Architect - Associate (SAA-C03) Certification

Want to know about Certleader SAA-C03 Exam practice test features? Want to learn more about the Amazon-Web-Services AWS Certified Solutions Architect - Associate (SAA-C03) certification experience? Study actual Amazon-Web-Services SAA-C03 answers and renewed SAA-C03 questions at Certleader. Get success with an absolute guarantee to pass the Amazon-Web-Services SAA-C03 (AWS Certified Solutions Architect - Associate (SAA-C03)) test on your first attempt.

Amazon-Web-Services SAA-C03 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day.
Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
  • D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

Answer: A

Explanation:
https://aws.amazon.com/kinesis/datafirehose/features/?nc=sn&loc=2#:~:text=into%20Amazon%20S3%2C%20Amazon%20Redshift%2C%20Amazon%20OpenSearch%20Service%2C%20Kinesis,Delivery%20streams
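
As an illustration, here is a minimal boto3 sketch of the two building blocks in option A: a Direct PUT Firehose delivery stream that writes to S3, and a lifecycle rule that transitions objects to S3 Glacier after 14 days. The stream, bucket, and role names are placeholders.

```python
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Ingestion: a Direct PUT delivery stream that batches alerts into S3.
firehose.create_delivery_stream(
    DeliveryStreamName="edge-alerts",  # placeholder name
    DeliveryStreamType="DirectPut",
    S3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",  # placeholder
        "BucketARN": "arn:aws:s3:::edge-alert-archive",                      # placeholder
    },
)

# Archival: transition objects to S3 Glacier 14 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="edge-alert-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "glacier-after-14-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
        }]
    },
)
```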

NEW QUESTION 2
A company is building a solution that will report Amazon EC2 Auto Scaling events across all the applications in an AWS account. The company needs to use a serverless solution to store the EC2 Auto Scaling status data in Amazon S3. The company then will use the data in Amazon S3 to provide near-real-time updates in a dashboard. The solution must not affect the speed of EC2 instance launches.
How should the company move the data to Amazon S3 to meet these requirements?

  • A. Use an Amazon CloudWatch metric stream to send the EC2 Auto Scaling status data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
  • B. Launch an Amazon EMR cluster to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function on a schedule. Configure the Lambda function to send the EC2 Auto Scaling status data directly to Amazon S3.
  • D. Use a bootstrap script during the launch of an EC2 instance to install Amazon Kinesis Agent. Configure Kinesis Agent to collect the EC2 Auto Scaling status data and send the data to Amazon Kinesis Data Firehose. Store the data in Amazon S3.

Answer: A
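
Explanation:
CloudWatch metric streams are serverless and continuously push metrics (including the AWS/AutoScaling namespace) to Kinesis Data Firehose, which delivers them to Amazon S3 in near-real time without touching the instance launch path. An EMR cluster is neither serverless nor low-overhead, and a scheduled Lambda function is not near-real time. A minimal boto3 sketch, assuming the Firehose stream and its IAM role already exist (ARNs are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_stream(
    Name="asg-status-stream",                           # placeholder name
    IncludeFilters=[{"Namespace": "AWS/AutoScaling"}],  # only Auto Scaling metrics
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/asg-status",  # placeholder
    RoleArn="arn:aws:iam::123456789012:role/metric-stream-role",                      # placeholder
    OutputFormat="json",
)
```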

NEW QUESTION 3
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the database with the acquiring company's AWS account in ap-southeast-3.
What should a solutions architect do to meet these requirements?

  • A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company's AWS account.
  • B. Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the snapshot with the acquiring company's AWS account.
  • C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company's AWS account to the KMS key alias. Share the snapshot with the acquiring company's AWS account.
  • D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company's AWS account.

Answer: B
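
Explanation:
An encrypted Aurora snapshot can be shared directly with another account, but the recipient can only copy or restore it if the snapshot's customer managed KMS key also grants that account access. Snapshots encrypted with an AWS managed key cannot be shared, and copying to an unencrypted snapshot would expose the confidential data. A minimal boto3 sketch of the sharing step (identifiers are placeholders); the KMS key policy must separately allow the acquiring account kms:Decrypt, kms:DescribeKey, and kms:CreateGrant:

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-3")

# Share the encrypted cluster snapshot with the acquiring company's account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="pre-acquisition-snapshot",  # placeholder
    AttributeName="restore",
    ValuesToAdd=["111122223333"],  # acquiring company's AWS account ID (placeholder)
)
```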

NEW QUESTION 4
A company has a web application that runs on Amazon EC2 instances. The company wants end users to authenticate themselves before they use the web application. The web application accesses AWS resources, such as Amazon S3 buckets, on behalf of users who are logged on.
Which combination of actions must a solutions architect take to meet these requirements? (Select TWO).

  • A. Configure AWS App Mesh to log on users.
  • B. Enable and configure AWS Single Sign-On in AWS Identity and Access Management (IAM).
  • C. Define a default IAM role for authenticated users.
  • D. Use AWS Identity and Access Management (IAM) for user authentication.
  • E. Use Amazon Cognito for user authentication.

Answer: CE
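
Explanation:
Amazon Cognito handles end-user sign-in for web applications, and a Cognito identity pool can then map authenticated users to a default IAM role so the application can access AWS resources such as S3 on their behalf. AWS Single Sign-On targets workforce access to AWS accounts, not application end users. A minimal boto3 sketch of the identity pool side (IDs and ARNs are placeholders):

```python
import boto3

cognito = boto3.client("cognito-identity")

pool = cognito.create_identity_pool(
    IdentityPoolName="webapp_users",  # placeholder
    AllowUnauthenticatedIdentities=False,
    CognitoIdentityProviders=[{
        "ProviderName": "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE",  # placeholder user pool
        "ClientId": "example-app-client-id",                                      # placeholder
    }],
)

# Default IAM role assumed by every authenticated user.
cognito.set_identity_pool_roles(
    IdentityPoolId=pool["IdentityPoolId"],
    Roles={"authenticated": "arn:aws:iam::123456789012:role/webapp-authenticated"},  # placeholder
)
```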

NEW QUESTION 5
A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all the data for 5 years and must delete all the data after 5 years. The company also must indefinitely keep audit logs of actions that are performed within the database. Currently, the company has automated backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Take a manual snapshot of the DB cluster.
  • B. Create a lifecycle policy for the automated backups.
  • C. Configure automated backup retention for 5 years.
  • D. Configure an Amazon CloudWatch Logs export for the DB cluster.
  • E. Use AWS Backup to take the backups and to keep the backups for 5 years.

Answer: DE
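
Explanation:
Automated Aurora backups can be retained for at most 35 days, so 5-year retention requires AWS Backup, whose lifecycle can also delete the backups automatically after 5 years; a manual snapshot would have to be deleted by hand. Exporting the database log to CloudWatch Logs preserves the audit trail indefinitely. A minimal boto3 sketch (names are placeholders):

```python
import boto3

backup = boto3.client("backup")
rds = boto3.client("rds")

# Daily backups kept for 5 years, then deleted automatically.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-5-year-retention",  # placeholder
        "Rules": [{
            "RuleName": "daily-5yr",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 1825},     # roughly 5 years
        }],
    }
)

# Audit logs: export to CloudWatch Logs, where retention can be "Never expire".
rds.modify_db_cluster(
    DBClusterIdentifier="accounting-aurora",  # placeholder
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)
```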

NEW QUESTION 6
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

  • A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
  • B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
  • C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
  • D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Answer: C

Explanation:
Amazon Athena can query JSON data stored in Amazon S3 directly with standard SQL; it is serverless and requires no changes to the existing architecture.
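
For illustration, a minimal boto3 sketch of an on-demand query, assuming an external table (here app_logs in database logs_db, both hypothetical) has already been defined over the JSON objects:

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT level, COUNT(*) AS n FROM app_logs GROUP BY level",  # hypothetical table
    QueryExecutionContext={"Database": "logs_db"},                           # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},   # placeholder bucket
)
print(resp["QueryExecutionId"])  # poll get_query_execution until the query finishes
```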

NEW QUESTION 7
A public-facing web application queries a database hosted on an Amazon EC2 instance in a private subnet. A large number of queries involve multiple table joins, and the application performance has been degrading due to an increase in complex queries. The application team will be performing updates to improve performance.
What should a solutions architect recommend to the application team? (Select TWO.)

  • A. Cache query data in Amazon SQS
  • B. Create a read replica to offload queries
  • C. Migrate the database to Amazon Athena
  • D. Implement Amazon DynamoDB Accelerator to cache data.
  • E. Migrate the database to Amazon RDS

Answer: BE
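
Explanation:
Migrating to Amazon RDS (option E) then makes option B a one-call operation: a read replica can absorb the complex, join-heavy read queries. A minimal boto3 sketch with placeholder instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Offload the complex read queries to a replica of the (migrated) RDS database.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",        # placeholder
    SourceDBInstanceIdentifier="app-db-primary",  # placeholder RDS source instance
    DBInstanceClass="db.r6g.large",
)
```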

NEW QUESTION 8
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?

  • A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint
  • B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
  • C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
  • D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.

Answer: B

NEW QUESTION 9
A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.
What should a solutions architect propose to improve the performance of the workload?

  • A. Choose a cluster placement group while launching Amazon EC2 instances.
  • B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
  • C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
  • D. Choose the required capacity reservation while launching Amazon EC2 instances.

Answer: A

Explanation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-placementgroup.html "A cluster placement group is a logical grouping of instances within a single Availability Zone that benefit from low network latency, high network throughput"
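
A minimal boto3 sketch of option A; the AMI ID and group name are placeholders, and the instance type is just an example of a network-optimized size:

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster strategy packs instances close together in one AZ for low latency.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")  # placeholder name

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="c5n.18xlarge",      # example network-optimized instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```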

NEW QUESTION 10
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?

  • A. Copy the data so both EBS volumes contain all the documents.
  • B. Configure the Application Load Balancer to direct a user to the server with the documents
  • C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
  • D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Answer: C

Explanation:
Amazon EFS provides file storage in the AWS Cloud. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system in your VPC through the Network File System versions 4.0 and 4.1 (NFSv4) protocol. We recommend using a current generation Linux NFSv4.1 client, such as those found in the latest Amazon Linux, Red Hat, and Ubuntu AMIs, in conjunction with the Amazon EFS Mount Helper. For instructions, see Using the amazon-efs-utils Tools.
For a list of Amazon EC2 Linux Amazon Machine Images (AMIs) that support this protocol, see NFS Support. For some AMIs, you'll need to install an NFS client to mount your file system on your Amazon EC2 instance. For instructions, see Installing the NFS Client.
You can access your Amazon EFS file system concurrently from multiple NFS clients, so applications that scale beyond a single connection can access a file system. Amazon EC2 instances running in multiple Availability Zones within the same AWS Region can access the file system, so that many users can access and share a common data source.
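
A minimal boto3 sketch of the storage side of option C, creating one file system with a mount target in each Availability Zone (subnet and security group IDs are placeholders):

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-documents",  # idempotency token (placeholder)
    PerformanceMode="generalPurpose",
)

# One mount target per AZ so both EC2 instances can mount the same file system.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:  # placeholder subnets
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )
```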

NEW QUESTION 11
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt
all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  • B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
  • C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  • D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.

Answer: B

Explanation:
A multi-Region KMS key is a set of interoperable keys that share the same key ID and key material across Regions, so data encrypted in one Region can be decrypted with the related key in the other Region. This satisfies the requirement that both buckets use the same customer managed key while the key material is stored in each of the two Regions. SSE-S3 does not use a customer managed key, and two independent KMS keys are not "the same key."
https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
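
A minimal boto3 sketch of the key setup in option B (the Regions are examples):

```python
import boto3

kms = boto3.client("kms", region_name="ap-northeast-1")  # primary Region (example)

key = kms.create_key(
    Description="Multi-Region key for S3 application data",
    MultiRegion=True,  # creates a primary multi-Region key
)

# The replica shares the same key ID and key material in the second Region,
# so data encrypted in either Region decrypts with "the same key".
kms.replicate_key(
    KeyId=key["KeyMetadata"]["KeyId"],
    ReplicaRegion="ap-southeast-1",  # example second Region
)
```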

NEW QUESTION 12
A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, can be able to delete the records during the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?

  • A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10 years.
  • B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion.
  • C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
  • D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use S3 Object Lock in governance mode for a period of 10 years.

Answer: C
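
Explanation:
Object Lock in compliance mode blocks deletion by everyone, including the root user, for the retention period, and Glacier Deep Archive keeps the archived records at maximum resiliency. A minimal boto3 sketch of option C (bucket name is a placeholder); note that Object Lock must be enabled when the bucket is created:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "accounting-records-example"  # placeholder

# Object Lock can only be enabled at bucket creation.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Compliance mode: no one, including root, can delete objects for 10 years.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# After 1 year, move records to the archive tier.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```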

NEW QUESTION 13
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand.
Which migration solution will meet these requirements?

  • A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
  • B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
  • C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
  • D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.

Answer: C
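
Explanation:
Aurora MySQL keeps application compatibility, and Aurora Auto Scaling adds or removes Aurora Replicas with demand. A minimal boto3 sketch of the Auto Scaling step in option C, scaling replicas on reader CPU (the cluster name is a placeholder):

```python
import boto3

aas = boto3.client("application-autoscaling")
CLUSTER = "cluster:transactions-aurora"  # placeholder Aurora cluster resource ID

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=CLUSTER,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

aas.put_scaling_policy(
    PolicyName="aurora-reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId=CLUSTER,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # add/remove replicas around 60% reader CPU
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```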

NEW QUESTION 14
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server. Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
  • B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
  • C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
  • D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.

Answer: D
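
Explanation:
A Gateway Load Balancer is purpose-built for inserting third-party inline appliances: it sits in the inspection VPC in front of the appliance, and a Gateway Load Balancer endpoint in the application VPC steers traffic through it via route tables. A minimal boto3 sketch (all IDs and the endpoint service name are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

# Gateway Load Balancer in the inspection VPC (targets: the firewall appliance).
elbv2.create_load_balancer(
    Name="inspection-gwlb",              # placeholder
    Type="gateway",
    Subnets=["subnet-0inspection00001"],  # placeholder inspection-VPC subnet
)

# Consumer side: a GWLB endpoint in the application VPC; route tables then
# send ingress traffic through this endpoint before it reaches the web tier.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0application00001",  # placeholder
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",  # placeholder endpoint service
    SubnetIds=["subnet-0app00001"],  # placeholder
)
```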

NEW QUESTION 15
A company wants to migrate its existing on-premises monolithic application to AWS.
The company wants to keep as much of the front-end code and the backend code as possible. However, the company wants to break the application into smaller applications. A different team will manage each application. The company needs a highly scalable solution that minimizes operational overhead.
Which solution will meet these requirements?

  • A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
  • B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is integrated with AWS Lambda.
  • C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2 instances in an Auto Scaling group as targets.
  • D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load Balancer with Amazon ECS as the target.

Answer: D
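
Explanation:
Containers let the company keep its existing front-end and backend code largely unchanged while splitting the monolith into independently managed services, and Amazon ECS behind an ALB scales each service with low operational overhead; Lambda or Amplify would require rewriting the code. A minimal boto3 sketch of one such service (all names and ARNs are placeholders, and the task definition is assumed to be registered already):

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="web-cluster",          # placeholder
    serviceName="catalog-service",  # one of the smaller, per-team applications
    taskDefinition="catalog:1",     # placeholder registered task definition
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa111", "subnet-bbb222"],  # placeholders
            "securityGroups": ["sg-0123456789abcdef0"],     # placeholder
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/catalog/abc123",  # placeholder
        "containerName": "catalog",
        "containerPort": 80,
    }],
)
```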

NEW QUESTION 16
A company hosts its web application on AWS using seven Amazon EC2 instances. The company requires that the IP addresses of all healthy EC2 instances be returned in response to DNS queries.
Which policy should be used to meet this requirement?

  • A. Simple routing policy
  • B. Latency routing policy
  • C. Multivalue routing policy
  • D. Geolocation routing policy

Answer: C

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/multivalue-versus-simple-policies/
"Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. For example, use multivalue answer routing when you want to associate your routing records with a Route 53 health check."
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-multivalue
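
A minimal boto3 sketch of one multivalue record; in practice there would be one such record, each with its own SetIdentifier and health check, per EC2 instance (IDs and the IP are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "web-1",  # unique per instance record
                "MultiValueAnswer": True,
                "TTL": 60,
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # placeholder
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # example instance IP
            },
        }]
    },
)
```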

NEW QUESTION 17
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure.
The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?

  • A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
  • B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
  • C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
  • D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.

Answer: C

NEW QUESTION 18
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing, and the company is concerned about a potential increase in cost.
What should a solutions architect do to reduce the cost of the website?

  • A. Create an Amazon CloudFront distribution to cache static files at edge locations.
  • B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve cached files.
  • C. Create an AWS WAF web ACL, and associate it with the ALB. Add a rule to the web ACL to cache static files.
  • D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to minimize data transfer costs.

Answer: A
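
Explanation:
Serving the static files from CloudFront edge caches cuts the load on the EC2 instances and reduces data transfer cost out of the ALB. A minimal boto3 sketch of option A; the ALB DNS name is a placeholder, and the cache policy ID is assumed to be AWS's managed "CachingOptimized" policy:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # idempotency token
        "Comment": "Cache static site content at edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "alb-origin",
                "DomainName": "my-alb-123456.us-east-1.elb.amazonaws.com",  # placeholder ALB DNS name
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # managed CachingOptimized policy (assumed ID)
        },
    }
)
```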

NEW QUESTION 19
A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?

  • A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
  • B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
  • C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.
  • D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

Answer: D
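
Explanation:
During an upgrade the database itself is unreachable, so writes must be buffered durably rather than proxied: an SQS FIFO queue holds the customer records in order until the consumer Lambda function can write them to Aurora, retrying while the upgrade runs. Lambda local storage is ephemeral and would lose data. A minimal sketch of the producer side (the queue URL and event shape are assumptions):

```python
import json
import uuid
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/customer-data.fifo"  # placeholder

def handler(event, context):
    """API-facing Lambda: buffer the record instead of writing to Aurora directly."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event["customer"]),  # assumed event shape
        MessageGroupId="customer-data",             # preserves ordering within the group
        MessageDeduplicationId=str(uuid.uuid4()),
    )
    # A separate consumer Lambda polls the queue and inserts into Aurora,
    # leaving messages in the queue whenever the database is unavailable.
```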

NEW QUESTION 20
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)

  • A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
  • B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
  • C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
  • D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
  • E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.

Answer: AE

Explanation:
Before you begin: Decide which two Availability Zones you will use for your EC2 instances. Configure your virtual private cloud (VPC) with at least one public subnet in each of these Availability Zones. These public subnets are used to configure the load balancer. You can launch your EC2 instances in other subnets of these Availability Zones instead.
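
A minimal boto3 sketch of the NAT side of option E, wiring one private subnet's default route through a NAT gateway in a public subnet; this would be repeated per Availability Zone for high availability (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and uses an Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public00001",             # placeholder public subnet
    AllocationId="eipalloc-0123456789abcdef0",  # placeholder Elastic IP allocation
)["NatGateway"]

# Private-subnet instances reach the third-party payment service via the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0private00001",  # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```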

NEW QUESTION 21
......

P.S. Allfreedumps.com is now offering a 100% pass guarantee on SAA-C03 dumps! All SAA-C03 exam questions have been updated with correct answers: https://www.allfreedumps.com/SAA-C03-dumps.html (0 New Questions)