
SAP-C01 Dump Free


SAP-C01 Dump Free – 50 Practice Questions to Sharpen Your Exam Readiness.

Looking for a reliable way to prepare for your SAP-C01 certification? Our SAP-C01 Dump Free includes 50 exam-style practice questions designed to reflect real test scenarios—helping you study smarter and pass with confidence.

Using an SAP-C01 dump free set of questions can give you an edge in your exam prep by helping you:

  • Understand the format and types of questions you’ll face
  • Pinpoint weak areas and focus your study efforts
  • Boost your confidence with realistic question practice

Below, you will find 50 free questions from our SAP-C01 Dump Free collection. These cover key topics and are structured to simulate the difficulty level of the real exam, making them a valuable tool for review or final prep.

Question 1

A financial company needs to create a separate AWS account for a new digital wallet application. The company uses AWS Organizations to manage its accounts.
A solutions architect uses the IAM user Support1 from the master account to create a new member account with finance1@example.com as the email address.
What should the solutions architect do to create IAM users in the new member account?

A. Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS Organizations email sent to finance1@example.com. Set up the IAM users as required.

B. From the master account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member account. Set up the IAM users as required.

C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address finance1@example.com and the master account's root password. Set up the IAM users as required.

D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM credentials. Set up the IAM users as required.

 


Suggested Answer: A

Community Answer: B

Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html
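For the community answer (B), the management account can assume the OrganizationAccountAccessRole that AWS Organizations creates automatically in each new member account, then create IAM users there. A minimal boto3 sketch of that flow; the account ID, session name, and user name are placeholders:

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/OrganizationAccountAccessRole",
    RoleSessionName="create-finance-users",
)["Credentials"]

# IAM client scoped to the new member account via the assumed-role credentials
member_iam = boto3.client(
    "iam",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
member_iam.create_user(UserName="finance-admin")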

Question 2

You create a VPN connection, and your VPN device supports Border Gateway Protocol (BGP).
Which of the following should be specified to configure the VPN connection?

A. Classless routing

B. Classfull routing

C. Dynamic routing

D. Static routing

 


Suggested Answer: C

Community Answer: C

If you create a VPN connection, you must specify the type of routing that you plan to use, which will depend upon on the make and model of your VPN devices. If your VPN device supports Border Gateway Protocol (BGP), you need to specify dynamic routing when you configure your VPN connection. If your device does not support BGP, you should specify static routing.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
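In the API, choosing dynamic (BGP) routing means leaving static routes disabled when the VPN connection is created. A hedged boto3 sketch; the gateway IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",   # BGP-capable customer gateway
    VpnGatewayId="vgw-0123456789abcdef0",
    Options={"StaticRoutesOnly": False},         # False = dynamic (BGP) routing
)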

Question 3

You control access to S3 buckets and objects with:

A. Identity and Access Management (IAM) Policies.

B. Access Control Lists (ACLs).

C. Bucket Policies.

D. All of the above

 


Suggested Answer: D

Community Answer: D

Question 4

An organization is setting up a website in an AWS VPC. The organization has blocked a few IPs to avoid a DDoS attack.
How can the organization ensure that requests from the above-mentioned IPs do not reach the application instances?

A. Create an IAM policy for VPC which has a condition to disallow traffic from that IP address.

B. Configure a security group at the subnet level which denies traffic from the selected IP.

C. Configure the security group with the EC2 instance which denies access from that IP address.

D. Configure an ACL at the subnet which denies the traffic from that IP address.

 


Suggested Answer: D

Community Answer: D

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. AWS provides two features that the user can use to increase security in VPC: security groups and network ACLs. Security group works at the instance level while ACL works at the subnet level. ACL allows both allow and deny rules. Thus, when the user wants to reject traffic from the selected IPs it is recommended to use ACL with subnets.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
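The deny rule in answer D maps to a network ACL entry on the subnet. A minimal boto3 sketch, assuming a placeholder ACL ID and source IP, with a rule number low enough to be evaluated before the allow rules:

import boto3

ec2 = boto3.client("ec2")
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # ACL associated with the web subnet
    RuleNumber=90,                         # evaluated before the allow rules
    Protocol="-1",                         # all protocols
    RuleAction="deny",
    Egress=False,                          # inbound rule
    CidrBlock="203.0.113.25/32",           # blocked source IP
)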

Question 5

A company is developing a messaging application that is based on a microservices architecture. A separate team develops each microservice by using Amazon Elastic Container Service (Amazon ECS). The teams deploy the microservices multiple times daily by using AWS CloudFormation and AWS CodePipeline.
The application recently grew in size and complexity. Each service operates correctly on its own during development, but each service produces error messages when it has to interact with other services in production. A solutions architect must improve the application's availability.
Which solution will meet these requirements with the LEAST amount of operational overhead?

A. Add an extra stage to CodePipeline for each service. Use the extra stage to deploy each service to a test environment. Test each service after deployment to make sure that no error messages occur.

B. Add an AWS::CodeDeployBlueGreen Transform section and Hook section to the template to enable blue/green deployments by using AWS CodeDeploy in CloudFormation. Configure the template to perform ECS blue/green deployments in production.

C. Add an extra stage to CodePipeline for each service. Use the extra stage to deploy each service to a test environment. Write integration tests for each service. Run the tests automatically after deployment.

D. Use an ECS DeploymentConfiguration parameter in the template to configure AWS CodeDeploy to perform a rolling update of the service. Use a CircuitBreaker property to roll back the deployment if any error occurs during deployment.

 


Suggested Answer: A

Community Answer: B

Reference:
https://aws.amazon.com/blogs/devops/using-aws-codepipeline-for-deploying-container-images-to-microservices-architecture-involving-aws-lambda-functions/

Question 6

An organization has set up RDS with a VPC. The organization wants RDS to be accessible from the internet. Which of the below mentioned configurations is not required in this scenario?

A. The organization must enable the parameter in the console which makes the RDS instance publicly accessible.

B. The organization must allow access from the internet in the RDS VPC security group.

C. The organization must setup RDS with the subnet group which has an external IP.

D. The organization must enable the VPC attributes DNS hostnames and DNS resolution.

 


Suggested Answer: C

Community Answer: A

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources, such as RDS, into a virtual network that the user has defined. Subnets are segments of a VPC's IP address range that the user can designate to a group of VPC resources based on security and operational needs. A DB subnet group is a collection of subnets (generally private) that the user can create in a VPC and assign to the RDS DB instances. A DB subnet group allows the user to specify a particular VPC when creating DB instances. If the RDS instance is required to be accessible from the internet:
The organization must ensure that the RDS instance's VPC has the DNS hostnames and DNS resolution attributes enabled.
The organization must enable the parameter in the console which makes the RDS instance publicly accessible.
The organization must allow access from the internet in the RDS VPC security group.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html
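The required settings (public accessibility, security group access, and the VPC DNS attributes) can be illustrated with a short boto3 sketch; the VPC ID and DB identifier are placeholders, and the security group rule is managed separately:

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# VPC attributes required for the public RDS endpoint to resolve
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0",
                         EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0",
                         EnableDnsHostnames={"Value": True})

# Make the DB instance publicly accessible
rds.modify_db_instance(DBInstanceIdentifier="mydb",
                       PubliclyAccessible=True,
                       ApplyImmediately=True)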

Question 7

Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority.
How should you implement such a system?

A. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.

B. Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.

C. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.

D. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

 


Suggested Answer: C

Community Answer: C
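A sketch of the polling loop behind answer C: workers drain the high-priority queue first and only fall back to the default queue when it is empty. The queue URLs are placeholders:

import boto3

sqs = boto3.client("sqs")
HIGH_QUEUE = "https://sqs.us-east-1.amazonaws.com/111122223333/transform-high"
DEFAULT_QUEUE = "https://sqs.us-east-1.amazonaws.com/111122223333/transform-default"

def next_task():
    # Poll the high-priority queue first; fall back to the default queue
    for url in (HIGH_QUEUE, DEFAULT_QUEUE):
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            return url, messages[0]
    return None, None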

Question 8

An organization (account ID 123412341234) has configured the IAM policy to allow the user to modify his credentials.
What will the below mentioned statement allow the user to perform?
 Image

A. Allow the IAM user to update the membership of the group called TestingGroup

B. The IAM policy will throw an error due to an invalid resource name

C. The IAM policy will allow the user to subscribe to any IAM group

D. Allow the IAM user to delete the TestingGroup

 


Suggested Answer: A

Community Answer: A

AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. If the organization (account ID 123412341234) wants its users to manage their subscription to the groups, it should create a relevant policy for that. The below mentioned policy allows the respective IAM user to update the membership of the group called TestingGroup.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "iam:AddUserToGroup",
      "iam:RemoveUserFromGroup",
      "iam:GetGroup"
    ],
    "Resource": "arn:aws:iam::123412341234:group/TestingGroup"
  }]
}
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/Credentials-Permissions-examples.html#creds-policies-credentials

Question 9

An IAM user is trying to perform an action on an object belonging to some other root account's bucket.
Which of the below mentioned options will AWS S3 not verify?

A. The object owner has provided access to the IAM user

B. Permission provided by the parent of the IAM user on the bucket

C. Permission provided by the bucket owner to the IAM user

D. Permission provided by the parent of the IAM user

 


Suggested Answer: B

Community Answer: B

If the IAM user is trying to perform some action on the object belonging to another AWS user’s bucket, S3 will verify whether the owner of the IAM user has given sufficient permission to him. It also verifies the policy for the bucket as well as the policy defined by the object owner.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-auth-workflow-object-operation.html

Question 10

A company needs to deploy its document storage application across two AWS Regions. The company is storing PDF documents that have an average file size of 512 KiB and a minimum file size of 200 KiB. The company needs protection for accidental document overwrites in the primary Region. The secondary Region must have cost-optimized storage. The company needs a solution that provides an SLA of 99.99% that files will be replicated to the secondary Region within 15 minutes.
Which solution will meet these requirements?

A. Deploy an Amazon FSx cluster for multiple application hosts to mount in the primary Region. Configure a second Amazon FSx deployment in the secondary Region. Configure replication from the Amazon FSx cluster in the primary Region to the Amazon FSx deployment in the secondary Region.

B. Deploy two Amazon S3 buckets, one in each Region. Enable S3 Versioning for each bucket. Enable S3 Replication Time Control (S3 RTC) to replicate objects to the secondary Region. Specify S3 Glacier Deep Archive as the storage class in the secondary Region.

C. Deploy two Amazon S3 buckets, one in each Region. Enable S3 Versioning for the bucket in the primary Region. Set up S3 Cross-Region Replication (CRR) from the primary Region to the secondary Region. Create an S3 event secondary bucket to invoke an AWS Lambda function that reviews each replicated object and specifies S3 Glacier Deep Archive as the storage class in the secondary Region.

D. Deploy an Amazon FSx multi-Region cluster. Configure the multi-Region cluster with object versioning. Mount the file system as ZFS with versioning support. Activate S3 archiving from Amazon FSx.

 


Suggested Answer: D

Community Answer: B

Question 11

A user has suspended the scaling process on the Auto Scaling group. A scaling activity to increase the instance count was already in progress.
What effect will the suspension have on that activity?

A. No effect. The scaling activity continues

B. Pauses the instance launch and launches it only after Auto Scaling is resumed

C. Terminates the instance

D. Stops the instance temporarily

 


Suggested Answer: A

 

The user may want to stop the automated scaling processes on the Auto Scaling groups either to perform manual operations or during emergency situations. To perform this, the user can suspend one or more scaling processes at any time. When this process is suspended, Auto Scaling creates no new scaling activities for that group. Scaling activities that were already in progress before the group was suspended continue until completed.
Reference:
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
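Suspension is applied per scaling process on the Auto Scaling group; activities already in progress still run to completion, as the explanation above notes. A minimal boto3 example with a placeholder group name:

import boto3

autoscaling = boto3.client("autoscaling")
# Suspends new scaling activities; in-progress activities run to completion
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Launch", "Terminate"],
)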

Question 12

An enterprise company is building an infrastructure services platform for its users. The company has the following requirements:
✑ Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services.
✑ Use a central account to manage the creation of infrastructure services.
✑ Provide the ability to distribute infrastructure services to multiple accounts in AWS Organizations.
✑ Provide the ability to enforce tags on any infrastructure that is started by users.
 Image
Which combination of actions using AWS services will meet these requirements? (Choose three.)

A. Develop infrastructure services using AWS Cloud Formation templates. Add the templates to a central Amazon S3 bucket and add the-IAM roles or users that require access to the S3 bucket policy.

B. Develop infrastructure services using AWS Cloud Formation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.

C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.

D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOptions, assign users access, and apply launch constraints.

E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service Catalog products or portfolios.

F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.

 


Suggested Answer: ABE

Community Answer: BDE
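For answers B and E, tag enforcement is handled with Service Catalog TagOptions associated with a portfolio or product. A hedged boto3 sketch; the key, value, and portfolio ID are placeholders:

import boto3

sc = boto3.client("servicecatalog")
tag_option = sc.create_tag_option(Key="CostCenter", Value="platform")
sc.associate_tag_option_with_resource(
    ResourceId="port-abcd1234efgh",                    # portfolio ID (placeholder)
    TagOptionId=tag_option["TagOptionDetail"]["Id"],
)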

Question 13

Which of the following is true while using an IAM role to grant permissions to applications running on Amazon EC2 instances?

A. All applications on the instance share the same role, but different permissions.

B. All applications on the instance share multiple roles and permissions.

C. Multiple roles are assigned to an EC2 instance at a time.

D. Only one role can be assigned to an EC2 instance at a time.

 


Suggested Answer: D

Community Answer: D

Only one role can be assigned to an EC2 instance at a time, and all applications on the instance share the same role and permissions.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html

Question 14

A company hosts a legacy application that runs on an Amazon EC2 instance inside a VPC without internet access. Users access the application with a desktop program installed on their corporate laptops. Communication between the laptops and the VPC flows through AWS Direct Connect (DX). A new requirement states that all data in transit must be encrypted between users and the VPC.
Which strategy should a solutions architect use to maintain consistent network performance while meeting this new requirement?

A. Create a client VPN endpoint and configure the laptops to use an AWS client VPN to connect to the VPC over the internet.

B. Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.

C. Create a new Site-to-Site VPN that connects to the VPC over the internet.

D. Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.

 


Suggested Answer: D

Community Answer: B

Question 15

A company's processing team has an AWS account with a production application. The application runs on Amazon EC2 instances behind a Network Load Balancer (NLB). The EC2 instances are hosted in private subnets in a VPC in the eu-west-1 Region. The VPC was assigned the CIDR block of 10.0.0.0/16. The billing team recently created a new AWS account and deployed an application on EC2 instances that are hosted in private subnets in a VPC in the eu-central-1 Region. The new VPC is assigned the CIDR block of 10.0.0.0/16.
The processing application needs to securely communicate with the billing application over a proprietary TCP port.
What should a solutions architect do to meet this requirement with the LEAST amount of operational effort?

A. In the billing team’s account, create a new VPC and subnets in eu-central-1 that use the CIDR block of 192.168.0.0/16. Redeploy the application to the new subnets. Configure a VPC peering connection between the two VPCs.

B. In the processing team’s account, add an additional CIDR block of 192.168.0.0/16 to the VPC in eu-west-1. Restart each of the EC2 instances so that they obtain a new IP address. Configure an inter-Region VPC peering connection between the two VPCs.

C. In the billing team’s account, create a new VPC and subnets in eu-west-1 that use the CIDR block of 192.168.0.0/16. Create a VPC endpoint service (AWS PrivateLink) in the processing team’s account and an interface VPC endpoint in the new VPC. Configure an inter-Region VPC peering connection in the billing team’s account between the two VPCs.

D. In each account, create a new VPC with the CIDR blocks of 192.168.0.0/16 and 172.16.0.0/16. Create inter-Region VPC peering connections between the billing team’s VPCs and the processing team’s VPCs. Create gateway VPC endpoints to allow traffic to route between the VPCs.

 


Suggested Answer: A

Community Answer: A

Question 16

A startup company recently migrated a large ecommerce website to AWS. The website has experienced a 70% increase in sales. Software engineers are using a private GitHub repository to manage code. The DevOps team is using Jenkins for builds and unit testing. The engineers need to receive notifications for bad builds and zero downtime during deployments. The engineers also need to ensure any changes to production are seamless for users and can be rolled back in the event of a major issue.
The software engineers have decided to use AWS CodePipeline to manage their build and deployment process.
Which solution will meet these requirements?

A. Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.

B. Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.

C. Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.

D. Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.

 


Suggested Answer: B

Community Answer: B

Question 17

An organization is hosting a scalable web application using AWS. The organization has configured ELB and Auto Scaling to make the application scalable.
Which of the below mentioned statements is not required to be followed for ELB when the application is planning to host a web application on VPC?

A. The ELB and all the instances should be in the same subnet.

B. Configure the security group rules and network ACLs to allow traffic to be routed between the subnets in the VPC.

C. The internet facing ELB should have a route table associated with the internet gateway.

D. The internet facing ELB should be only in a public subnet.

 


Suggested Answer: A

Community Answer: A

Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Within this virtual private cloud, the user can launch AWS resources, such as an ELB, and EC2 instances. There are two ELBs available with VPC: internet facing and internal (private) ELB. For the internet facing ELB it is required that the ELB should be in a public subnet. After the user creates the public subnet, he should ensure to associate the route table of the public subnet with the internet gateway to enable the load balancer in the subnet to connect with the internet. The ELB and instances can be in a separate subnet. However, to allow communication between the instance and the ELB the user must configure the security group rules and network ACLs to allow traffic to be routed between the subnets in his VPC.
Reference:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/CreateVPCForELB.html

Question 18

A company has a large number of AWS accounts in an organization in AWS Organizations. A different business group owns each account. All the AWS accounts are bound by legal compliance requirements that restrict all operations outside the eu-west-2 Region.
The company's security team has mandated the use of AWS Systems Manager Session Manager across all AWS accounts.
Which solution should a solutions architect recommend to meet these requirements?

A. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from the restriction. In AWS Organizations, apply the SCP to the root of the organization.

B. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from the restriction. For each AWS account, use the ArnNotLike condition key to add the ARN of the IAM role that is associated with the Session Manager instance profile to the condition element of the SCP. In AWS Organizations, apply the SCP to the root of the organization.

C. Create an SCP that denies access to all requests that do not target eu-west-2. Use the NotAction element to exempt global services from the restriction. In AWS Organizations, apply the SCP to the root of the organization. In each AWS account, create an IAM permissions boundary that allows access to the IAM role that is associated with the Session Manager instance profile.

D. For each AWS account, create an IAM permissions boundary that denies access to all requests that do not target eu-west-2. For each AWS account, apply the permissions boundary to the IAM role that is associated with the Session Manager instance profile.

 


Suggested Answer: A

Community Answer: A

Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-requested-region.html
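The referenced policy pattern is an SCP that denies any request outside eu-west-2 while exempting global services through NotAction. A hedged boto3 sketch that creates and attaches such an SCP; the NotAction list is abbreviated and the root ID is a placeholder:

import boto3, json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideEuWest2",
        "Effect": "Deny",
        "NotAction": [           # abbreviated list of global services to exempt
            "iam:*", "organizations:*", "route53:*",
            "cloudfront:*", "support:*", "sts:*"
        ],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "eu-west-2"}}
    }]
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="restrict-to-eu-west-2",
    Description="Deny requests outside eu-west-2 except global services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="r-examplerootid")   # organization root (placeholder)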

Question 19

To serve Web traffic for a popular product your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances (RIs), evenly spread across two availability zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that's unused.
Which option is the most cost effective and uses EC2 capacity most effectively?

A. Configure Autoscaling group and Launch Configuration with ELB to add up to 10 more on-demand m1.large instances when triggered by Cloudwatch. Shut off c3.2xlarge instances.

B. Configure ELB with two c3.2xlarge instances and use on-demand Autoscaling group for up to two additional c3.2xlarge instances. Shut off m1.large instances.

C. Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency based routing and health checks. Shut off ELB.

D. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin.

 


Suggested Answer: D

Community Answer: D

Reference:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

Question 20

True or false: In a CloudFormation template, you can reuse the same logical ID several times to reference the resources in other parts of the template.

A. True, a logical ID can be used several times to reference the resources in other parts of the template.

B. False, a logical ID must be unique within the template.

C. False, you can mention a resource only once and you cannot reference it in other parts of a template.

D. False, you cannot reference other parts of the template.

 


Suggested Answer: B

Community Answer: A

In AWS CloudFormation, the logical ID must be alphanumeric (A-Za-z0-9) and unique within the template. You use the logical name to reference the resource in other parts of the template.
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/concept-resources.html

Question 21

Does Autoscaling automatically assign tags to resources?

A. No, not unless they are configured via API.

B. Yes, it does.

C. Yes, by default.

D. No, it does not.

 


Suggested Answer: B

Community Answer: C

Tags don’t have any semantic meaning to Amazon EC2 and are interpreted strictly as a string of characters.
Tags are assigned automatically to the instances created by an Auto Scaling group. Auto Scaling adds a tag to the instance with a key of aws:autoscaling:groupName and a value of the name of the Auto Scaling group.
Reference:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Using_Tags.html

Question 22

A company asks a solution architect to optimize the cost of a solution. The solution handles requests from multiple customers. The solution includes a multi-tier architecture that uses Amazon API Gateway, AWS Lambda, AWS Fargate, Amazon Simple Queue Service (Amazon SQS), and Amazon EC2.
In the current setup, requests go through API Gateway to Lambda and either start a container in Fargate or push a message to an SQS queue. An EC2 Fleet provides EC2 instances that serve as workers for the SQS queue. The EC2 Fleet scales based on the number of items in the SQS queue.
Which combination of steps should the solutions architect recommend to reduce cost the MOST? (Choose three.)

A. Determine the minimum number of EC2 instances that are needed during a day. Reserve this number of instances in a 3-year plan with payment all upfront.

B. Examine the last 6 months of compute utilization across the services. Use this information to determine the needed compute for the solution. Commit to a Savings Plan for this amount.

C. Determine the average number of EC2 instances that are needed during a day. Reserve this number of instances in a 3-year plan with payment all upfront.

D. Remove the SQS queue from the solution and from the solution infrastructure.

E. Change the solution so that it runs as a container instead of on EC2 instances. Configure Lambda to start up the solution in Fargate by using environment variables to give the solution the message.

F. Change the Lambda function so that it posts the message directly to the EC2 instances through an Application Load Balancer.

 


Suggested Answer: CDE

Community Answer: BDE

Reference:
https://aws.amazon.com/ec2/pricing/reserved-instances/


Question 23

A company is processing financial records in the AWS Cloud. Throughout the day, records are uploaded to an Amazon S3 bucket for processing. Every night at midnight, an application processes the records. The application runs on a set of Amazon EC2 instances and is invoked by a cron job on each instance. The application processes all the records in a total of approximately 60 minutes and stores the result in a second S3 bucket.
A solutions architect needs to modernize the application by implementing a solution that processes the records with the least possible operational overhead.
Which solution will meet these requirements?

A. Use an AWS Lambda function to process a single record. Create an AWS Step Functions state machine to invoke the Lambda function for each record. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule the state machine to run at midnight.

B. Containerize the processing logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that runs in AWS Fargate mode. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule invocation of the Fargate tasks at midnight.

C. Use a single AWS Lambda function to process all the records. Use S3 Event Notifications to invoke the Lambda function at midnight.

D. Containerize the processing logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster that runs in AWS Fargate mode. Configure Amazon Simple Notification Service (Amazon SNS) to schedule invocation of the Fargate tasks at midnight.

 


Suggested Answer: D

Community Answer: B

Question 24

A company implements a containerized application by using Amazon Elastic Container Service (Amazon ECS) and Amazon API Gateway. The application data is stored in Amazon Aurora databases and Amazon DynamoDB databases. The company automates infrastructure provisioning by using AWS CloudFormation. The company automates application deployment by using AWS CodePipeline.
A solutions architect needs to implement a disaster recovery (DR) strategy that meets an RPO of 2 hours and an RTO of 4 hours.
Which solution will meet these requirements MOST cost-effectively?

A. Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.

B. Use AWS Database Migration Service (AWS DMS), Amazon EventBridge (Amazon CloudWatch Events), and AWS Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB Streams, EventBridge (CloudWatch Events), and Lambda to replicate the DynamoDB databases to the secondary Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

C. Use AWS Backup to create backups of the Aurora databases and the DynamoDB databases in a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

D. Set up an Aurora global database and DynamoDB global tables to replicate the databases to a secondary AWS Region. In the primary Region and in the secondary Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

 


Suggested Answer: B

Community Answer: C

Question 25

A company wants to move an application from on premises to the AWS Cloud. The application uses MySQL servers to store backend data. However, the application does not scale properly. The databases have become unresponsive as the user base has increased.
The company needs a solution to make the application highly available with low latency across multiple AWS Regions. The solution must require the least possible operational overhead and development effort.
Which solution will meet these requirements?

A. Create an Amazon RDS for MySQL DB cluster that includes a cross-Region read replica. Use AWS Database Migration Service (AWS DMS) to migrate existing databases.

B. Deploy Amazon DynamoDB with global tables. Use AWS Database Migration Service (AWS DMS) to migrate existing databases. Adapt the application to work with DynamoDB.

C. Create an Amazon Aurora global database. Use native MySQL tools to migrate existing databases.

D. Create MySQL servers on Amazon EC2 instances in two Regions. Set up asynchronous software replication across Regions.

 


Suggested Answer: C

Community Answer: C

Question 26

A company wants to use Amazon S3 to back up its on-premises file storage solution. The company's on-premises file storage solution supports NFS, and the company wants its new solution to support NFS. The company wants to archive the backup files after 5 days. If the company needs archived files for disaster recovery, the company is willing to wait a few days for the retrieval of those files.
Which solution meets these requirements MOST cost-effectively?

A. Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.

B. Deploy an AWS Storage Gateway volume gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the volume gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.

C. Deploy an AWS Storage Gateway tape gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the tape gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.

D. Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the tape gateway. Create an S3 Lifecycle rule to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 5 days.

E. Deploy an AWS Storage Gateway file gateway that is associated with an S3 bucket. Move the files from the on-premises file storage solution to the file gateway. Create an S3 Lifecycle rule to move the files to S3 Glacier Deep Archive after 5 days.

 


Suggested Answer: A

Community Answer: E

Reference:
https://aws.amazon.com/blogs/database/storing-sql-server-backups-in-amazon-s3-using-aws-storage-gateway/

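The lifecycle transition in answer E can be expressed as a single rule on the bucket behind the file gateway. A minimal boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-bucket-example",          # bucket behind the file gateway
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-5-days",
            "Filter": {"Prefix": ""},        # apply to all objects
            "Status": "Enabled",
            "Transitions": [{"Days": 5, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)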

Question 27

You have subscribed to the AWS Business and Enterprise support plan.
Your business has a backlog of problems, and you need about 20 of your IAM users to open technical support cases.
How many users can open technical support cases under the AWS Business and Enterprise support plan?

A. 5 users

B. 10 users

C. Unlimited

D. 1 user

 


Suggested Answer: C

 

In the context of AWS support, the Business and Enterprise support plans allow an unlimited number of users to open technical support cases (supported by AWS Identity and Access Management (IAM)).
Reference:
https://aws.amazon.com/premiumsupport/faqs/

Question 28

A company is running a legacy application on Amazon EC2 instances in multiple Availability Zones behind a software load balancer that runs on an active/standby set of EC2 instances. For disaster recovery, the company has created a warm standby version of the application environment that is deployed in another AWS Region. The domain for the application uses a hosted zone from Amazon Route 53.
The company needs the application to use static IP addresses, even in the case of a failover event to the secondary Region. The company also requires the client's source IP address to be available for auditing purposes.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Replace the software load balancer with an AWS Application Load Balancer. Create an AWS Global Accelerator accelerator. Add an endpoint group for each Region. Configure Route 53 health checks. Add an alias record that points to the accelerator.

B. Replace the software load balancer with an AWS Network Load Balancer. Create an AWS Global Accelerator accelerator. Add an endpoint group for each Region. Configure Route 53 health checks. Add a CNAME record that points to the DNS name of the accelerator.

C. Replace the software load balancer with an AWS Application Load Balancer. Use AWS Global Accelerator to create two separate accelerators. Add an endpoint group for each Region. Configure Route 53 health checks. Add a record set that is configured for active-passive DNS failover. Point the record set to the DNS names of the two accelerators.

D. Replace the software load balancer with an AWS Network Load Balancer. Use AWS Global Accelerator to create two separate accelerators. Add an endpoint group for each Region. Configure Route 53 health checks. Add a record set that is configured for weighted round-robin DNS failover. Point the record set to the DNS names of the two accelerators.

 


Suggested Answer: C

Community Answer: A

Question 29

A company is developing a gene reporting device that will collect genomic information to assist researchers with collecting large samples of data from a diverse population. The device will push 8 KB of genomic data every second to a data platform that will need to process and analyze the data and provide information back to researchers. The data platform must meet the following requirements:
✑ Provide near-real-time analytics of the inbound genomic data
✑ Ensure the data is flexible, parallel, and durable
✑ Deliver results of processing to a data warehouse
Which strategy should a solutions architect use to meet these requirements?

A. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance.

B. Use Amazon Kinesis Data Streams to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.

C. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster.

D. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.

 


Suggested Answer: B

Community Answer: B
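For answer B, each device write becomes a Kinesis Data Streams record keyed by the device, which spreads the load across shards. A minimal boto3 producer sketch; the stream name and payload are placeholders:

import boto3, json

kinesis = boto3.client("kinesis")
record = {"device_id": "dev-001", "payload": "..."}   # ~8 KB genomic sample
kinesis.put_record(
    StreamName="genomic-ingest",          # placeholder stream name
    Data=json.dumps(record).encode(),
    PartitionKey=record["device_id"],     # spreads devices across shards
)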

Question 30

AWS Direct Connect itself has NO specific resources for you to control access to. Therefore, there are no AWS Direct Connect Amazon Resource Names (ARNs) for you to use in an Identity and Access Management (IAM) policy.
With that in mind, how is it possible to write a policy to control access to AWS Direct Connect actions?

A. You can leave the resource name field blank.

B. You can choose the name of the AWS Direct Connection as the resource.

C. You can use an asterisk (*) as the resource.

D. You can create a name for the resource.

 


Suggested Answer: C

 

AWS Direct Connect itself has no specific resources for you to control access to. Therefore, there are no AWS Direct Connect ARNs for you to use in an IAM policy. You use an asterisk (*) as the resource when writing a policy to control access to AWS Direct Connect actions.
Reference:
http://docs.aws.amazon.com/directconnect/latest/UserGuide/using_iam.html
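Because the question states that Direct Connect exposes no ARNs, a policy for Direct Connect actions uses an asterisk as the resource. An illustrative (hypothetical) policy created with boto3:

import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["directconnect:Describe*"],
        "Resource": "*"          # no Direct Connect ARNs, so * is used
    }]
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="dx-read-only-example",
                  PolicyDocument=json.dumps(policy))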

Question 31

A user is creating a PIOPS volume. What is the maximum ratio the user should configure between PIOPS and the volume size?

A. 5

B. 10

C. 20

D. 30

 


Suggested Answer: D

 

Provisioned IOPS volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput. A provisioned IOPS volume can range in size from 10 GB to 1 TB and the user can provision up to 4000 IOPS per volume.
The ratio of IOPS provisioned to the volume size requested can be a maximum of 30; for example, a volume with 3000 IOPS must be at least 100 GB.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
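A short boto3 example of the 30:1 ceiling described above (this ratio reflects the legacy limits the question assumes; current io1 limits are higher): 3000 provisioned IOPS requires at least a 100 GiB volume.

import boto3

ec2 = boto3.client("ec2")
# 3000 provisioned IOPS at the 30:1 ceiling requires at least a 100 GiB volume
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io1",
    Size=100,        # GiB
    Iops=3000,
)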

Question 32

A company has released a new version of a website to target an audience in Asia and South America. The website's media assets are hosted on Amazon S3 and have an Amazon CloudFront distribution to improve end-user performance. However, users are having a poor login experience because the authentication service is only available in the us-east-1 AWS Region.
How can the Solutions Architect improve the login experience and maintain high security and performance with minimal management overhead?

A. Replicate the setup in each new geography and use Amazon Route 53 geo-based routing to route traffic to the AWS Region closest to the users.

B. Use an Amazon Route 53 weighted routing policy to route traffic to the CloudFront distribution. Use CloudFront cached HTTP methods to improve the user login experience.

C. Use Amazon Lambda@Edge attached to the CloudFront viewer request trigger to authenticate and authorize users by maintaining a secure cookie token with a session expiry to improve the user experience in multiple geographies.

D. Replicate the setup in each geography and use Network Load Balancers to route traffic to the authentication service running in the closest region to users.

 


Suggested Answer: C

Community Answer: C

Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-to-use-lambdaedge-and-json-web-tokens-to-enhance-web-application-security/
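A hedged sketch of the Lambda@Edge viewer-request handler idea in answer C: the function inspects a session cookie and short-circuits unauthenticated requests at the edge. The cookie name and validation logic are placeholders; real token verification would go here.

# Lambda@Edge viewer-request handler (Python runtime)
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    cookies = request["headers"].get("cookie", [])
    has_session = any("session-token=" in c["value"] for c in cookies)

    if not has_session:
        # Return a response directly from the edge without reaching the origin
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "body": "Authentication required",
        }
    return request   # authenticated: forward the request to CloudFront/origin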

Question 33

The Principal element of an IAM policy refers to the specific entity that should be allowed or denied permission, whereas the _______ element translates to everyone except the specified entity.

A. NotPrincipal

B. Vendor

C. Principal

D. Action

 


Suggested Answer: A

 

The element NotPrincipal that is included within your IAM policy statements allows you to specify an exception to a list of principals to whom the access to a specific resource is either allowed or denied. Use the NotPrincipal element to specify an exception to a list of principals. For example, you can deny access to all principals except the one named in the NotPrincipal element.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
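An illustrative policy using NotPrincipal (the ARN and bucket name are hypothetical): access is denied to everyone except the named user.

import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotPrincipal": {"AWS": "arn:aws:iam::123456789012:user/audit-user"},
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::audit-bucket",
                     "arn:aws:s3:::audit-bucket/*"]
    }]
}
print(json.dumps(policy, indent=2))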

Question 34

A Solutions Architect is working with a company that is extremely sensitive to its IT costs and wishes to implement controls that will result in a predictable AWS spend each month.
Which combination of steps can help the company control and monitor its monthly AWS usage to achieve a cost that is as close as possible to the target amount?
(Choose three.)

A. Implement an IAM policy that requires users to specify a ‘workload’ tag for cost allocation when launching Amazon EC2 instances.

B. Contact AWS Support and ask that they apply limits to the account so that users are not able to launch more than a certain number of instance types.

C. Purchase all upfront Reserved Instances that cover 100% of the account’s expected Amazon EC2 usage.

D. Place conditions in the users’ IAM policies that limit the number of instances they are able to launch.

E. Define ‘workload’ as a cost allocation tag in the AWS Billing and Cost Management console.

F. Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.

 


Suggested Answer: AEF

Community Answer: AEF

Question 35

An ecommerce company has an order processing application it wants to migrate to AWS. The application has inconsistent data volume patterns, but needs to be available at all times. Orders must be processed as they occur and in the order that they are received.
Which set of steps should a solutions architect take to meet these requirements?

A. Use AWS Transfer for SFTP and upload orders as they occur. Use On-Demand Instances in multiple Availability Zones for processing.

B. Use Amazon SNS with FIFO and send orders as they occur. Use a single large Reserved Instance for processing.

C. Use Amazon SQS with FIFO and send orders as they occur. Use Reserved Instances in multiple Availability Zones for processing.

D. Use Amazon SQS with FIFO and send orders as they occur. Use Spot Instances in multiple Availability Zones for processing.

 


Suggested Answer: C

Community Answer: C
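For answer C, ordering is preserved by an SQS FIFO queue and a message group. A minimal boto3 sketch with placeholder names:

import boto3

sqs = boto3.client("sqs")
queue = sqs.create_queue(
    QueueName="orders.fifo",                 # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true",
                "ContentBasedDeduplication": "true"},
)
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"order_id": "1001"}',
    MessageGroupId="orders",                 # preserves ordering within the group
)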

Question 36

A company is currently in the design phase of an application that will need an RPO of less than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a database solution that will provide the company with the ability to fail over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?

A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5 minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serve as a backup in the event of a failure.

B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica to become the primary.

C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary Region. Use AWS DMS to keep the secondary Region in sync.

D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of a failure, promote the read replica to become the primary.

 


Suggested Answer: B

Community Answer: B

Question 37

A company has developed a custom tool used in its workflow that runs within a Docker container. The company must perform manual steps each time the container code is updated to make the container image available to new workflow executions. The company wants to automate this process to eliminate manual effort and ensure a new container image is generated every time the tool code is updated.
Which combination of actions should a solutions architect take to meet these requirements? (Choose three.)

A. Configure an Amazon ECR repository for the tool. Configure an AWS CodeCommit repository containing code for the tool being deployed to the container image in Amazon ECR.

B. Configure an AWS CodeDeploy application that triggers an application version update that pulls the latest tool container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.

C. Configure an AWS CodeBuild project that pulls the latest tool container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.

D. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeDeploy application update.

E. Configure an Amazon EventBridge rule that triggers on commits to the AWS CodeCommit repository for the tool. Configure the event to trigger an update to the tool container image in Amazon ECR. Push the updated container image to Amazon ECR.

F. Configure an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild build.

 


Suggested Answer: ACD

Community Answer: ACF

Question 38

A company is running an Apache Hadoop cluster on Amazon EC2 instances. The Hadoop cluster stores approximately 100 TB of data for weekly operational reports and allows occasional access for data scientists to retrieve data. The company needs to reduce the cost and operational complexity for storing and serving this data.
Which solution meets these requirements in the MOST cost-effective manner?

A. Move the Hadoop cluster from EC2 instances to Amazon EMR. Allow data access patterns to remain the same.

B. Write a script that resizes the EC2 instances to a smaller instance type during downtime and resizes the instances to a larger instance type before the reports are created.

C. Move the data to Amazon S3 and use Amazon Athena to query the data for reports. Allow the data scientists to access the data directly in Amazon S3.

D. Migrate the data to Amazon DynamoDB and modify the reports to fetch data from DynamoDB. Allow the data scientists to access the data directly in DynamoDB.

 


Suggested Answer: C

Community Answer: A
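Once the data is in Amazon S3, the weekly reports in answer C become Athena queries. A minimal boto3 sketch; the database, table, and output location are placeholders:

import boto3

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) FROM weekly_ops GROUP BY region",
    QueryExecutionContext={"Database": "ops_reports"},          # placeholder
    ResultConfiguration={"OutputLocation": "s3://query-results-example/"},
)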

Question 39

A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.
Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage.
Which solution meets these requirements MOST cost-effectively?

A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.

B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster’s CPU metrics in Amazon CloudWatch reach 80%.

C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster’s CPU metrics in Amazon CloudWatch reach 80%

D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.

 


Suggested Answer: A

Community Answer: D

Question 40

A company has multiple AWS accounts as part of an organization created with AWS Organizations. Each account has a VPC in the us-east-2 Region and is used for either production or development workloads. Amazon EC2 instances across production accounts need to communicate with each other, and EC2 instances across development accounts need to communicate with each other, but production and development instances should not be able to communicate with each other.
To facilitate connectivity, the company created a common network account. The company used AWS Transit Gateway to create a transit gateway in the us-east-2 Region in the network account and shared the transit gateway with the entire organization by using AWS Resource Access Manager. Network administrators then attached VPCs in each account to the transit gateway, after which the EC2 instances were able to communicate across accounts. However, production and development accounts were also able to communicate with one another.
Which set of steps should a solutions architect take to ensure production traffic and development traffic are completely isolated?

A. Modify the security groups assigned to development EC2 instances to block traffic from production EC2 instances. Modify the security groups assigned to production EC2 instances to block traffic from development EC2 instances.

B. Create a tag on each VPC attachment with a value of either production or development, according to the type of account being attached. Using the Network Manager feature of AWS Transit Gateway, create policies that restrict traffic between VPCs based on the value of this tag.

C. Create separate route tables for production and development traffic. Delete each account’s association and route propagation to the default AWS Transit Gateway route table. Attach development VPCs to the development AWS Transit Gateway route table and production VPCs to the production route table, and enable automatic route propagation on each attachment.

D. Create a tag on each VPC attachment with a value of either production or development, according to the type of account being attached. Modify the AWS Transit Gateway routing table to route production tagged attachments to one another and development tagged attachments to one another.

 


Suggested Answer: B

Community Answer: C

Reference:
https://docs.aws.amazon.com/vpc/latest/tgw/tgw-transit-gateways.html
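The isolation in answer C comes from separate transit gateway route tables: production attachments are associated with, and propagate into, only the production table (and likewise for development). A hedged boto3 sketch with placeholder IDs:

import boto3

ec2 = boto3.client("ec2")

# One route table per environment; IDs are placeholders
prod_rt = ec2.create_transit_gateway_route_table(
    TransitGatewayId="tgw-0123456789abcdef0"
)["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# Associate a production VPC attachment with the production route table ...
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=prod_rt,
    TransitGatewayAttachmentId="tgw-attach-prod1",
)
# ... and propagate only production attachments into it
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=prod_rt,
    TransitGatewayAttachmentId="tgw-attach-prod2",
)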

Question 41

A solutions architect is troubleshooting an application that runs on Amazon EC2 instances. The EC2 instances run in an Auto Scaling group. The application needs to access user data in an Amazon DynamoDB table that has fixed provisioned capacity.
To match the increased workload, the solutions architect recently doubled the maximum size of the Auto Scaling group. Now, when many instances launch at the same time, some application components are throttled when the components scan the DynamoDB table. The Auto Scaling group terminates the failing instances and starts new instances until all applications are running.
A solutions architect must implement a solution to mitigate the throttling issue in the MOST cost-effective manner.
Which solution will meet these requirements?

A. Double the provisioned read capacity of the DynamoDB table.

B. Duplicate the DynamoDB table. Configure the running copy of the application to select at random which table it accesses.

C. Set the DynamoDB table to on-demand mode.

D. Add DynamoDB Accelerator (DAX) to the table.

 


Suggested Answer: C

Community Answer: C

Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/
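Switching the table to on-demand capacity (answer C) is a single API call. A minimal boto3 example with a placeholder table name:

import boto3

dynamodb = boto3.client("dynamodb")
# Switch the table from fixed provisioned capacity to on-demand billing
dynamodb.update_table(TableName="user-data", BillingMode="PAY_PER_REQUEST")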

Question 42

Your application is not highly available, and your on-premises server cannot access the mount target because the Availability Zone (AZ) in which the mount target exists is unavailable.
Which of the following actions is recommended?

A. The application must implement the checkpoint logic and recreate the mount target.

B. The application must implement the shutdown logic and delete the mount target in the AZ.

C. The application must implement the delete logic and connect to a different mount target in the same AZ.

D. The application must implement the restart logic and connect to a mount target in a different AZ.

 


Suggested Answer: D

 

To make sure that there is continuous availability between your on-premises data center and your Amazon Virtual Private Cloud (VPC), it is suggested that you configure two AWS Direct Connect connections. Your application should implement restart logic and connect to a mount target in a different AZ if your application is not highly available and your on-premises server cannot access the mount target because the AZ in which the mount target exists becomes unavailable.
Reference:
http://docs.aws.amazon.com/efs/latest/ug/performance.html#performance-onpremises
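A minimal sketch of the restart logic (Python with boto3; the file system ID and failed AZ are hypothetical) looks up a mount target in a healthy Availability Zone and remounts using its IP address:

    import boto3

    efs = boto3.client("efs", region_name="us-east-2")

    FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # hypothetical EFS file system ID
    FAILED_AZ = "us-east-2a"                 # AZ whose mount target is unreachable

    targets = efs.describe_mount_targets(FileSystemId=FILE_SYSTEM_ID)["MountTargets"]

    # Pick any available mount target that lives in a different Availability Zone.
    fallback = next(
        t for t in targets
        if t["AvailabilityZoneName"] != FAILED_AZ and t["LifeCycleState"] == "available"
    )

    print("Remount the file system from on premises using IP:", fallback["IpAddress"])
    # e.g. sudo mount -t nfs4 -o nfsvers=4.1 <IpAddress>:/ /mnt/efs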

Question 43

A company would like to implement a serverless application by using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The company deployed a proof of concept and found that the average response time is greater than what its upstream services can accept. Amazon CloudWatch metrics did not indicate any issues with DynamoDB but showed that some Lambda functions were hitting their timeout.
Which of the following actions should the Solutions Architect consider to improve performance? (Choose two.)

A. Configure the AWS Lambda function to reuse containers to avoid unnecessary startup time.

B. Increase the amount of memory and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal memory and timeout configuration for the Lambda function.

C. Create an Amazon ElastiCache cluster running Memcached, and configure the Lambda function for VPC integration with access to the Amazon ElastiCache cluster.

D. Enable API cache on the appropriate stage in Amazon API Gateway, and override the TTL for individual methods that require a lower TTL than the entire stage.

E. Increase the amount of CPU, and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal CPU and timeout configuration for the Lambda function.

 


Suggested Answer: BD

Community Answer: BD

Reference:
https://lumigo.io/blog/aws-lambda-timeout-best-practices/
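Option A relies on execution environment (container) reuse: anything initialized outside the handler survives across warm invocations. A minimal sketch, assuming a hypothetical DynamoDB table named user-data:

    import boto3

    # Created once per execution environment and reused on warm invocations,
    # so connection setup does not add to every request's latency.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("user-data")  # hypothetical table name

    def handler(event, context):
        # Per-invocation work only; no client construction in the hot path.
        response = table.get_item(Key={"userId": event["userId"]})
        return response.get("Item", {})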

Question 44

You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations:
✑ The VM's single 10GB VMDK is almost full;
✑ The virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized;
✑ It is currently running on a highly customized Windows VM within a VMware environment;
✑ You do not have the installation media;
This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour.
How could you best migrate this application to AWS while meeting your business continuity requirements?

A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2.

B. Use Import/Export to import the VM as an EBS snapshot and attach to EC2.

C. Use S3 to create a backup of the VM and restore the data into EC2.

D. Use the ec2-bundle-instance API to import an image of the VM into EC2.

 


Suggested Answer: A

Community Answer: A

Reference:
https://aws.amazon.com/developertools/2759763385083070

Question 45

Your customer wishes to deploy an enterprise application to AWS, which will consist of several web servers, several application servers and a small (50GB)
Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?

A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore.

B. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore.

C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore.

D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.

 


Suggested Answer: A

Community Answer: A

Point-In-Time Recovery –
In addition to the daily automated backup, Amazon RDS archives database change logs. This enables you to recover your database to any point in time during the backup retention period, up to the last five minutes of database usage.
Amazon RDS stores multiple copies of your data, but for Single-AZ DB instances these copies are stored in a single Availability Zone. If for any reason a Single-AZ DB instance becomes unusable, you can use point-in-time recovery to launch a new DB instance with the latest restorable data. For more information on working with point-in-time recovery, see Restoring a DB Instance to a Specified Time in the Amazon RDS documentation.
Note –
Multi-AZ deployments store copies of your data in different Availability Zones for greater levels of data durability. For more information on Multi-AZ deployments, see High Availability (Multi-AZ).
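To make the point-in-time recovery concrete, a minimal boto3 sketch (both instance identifiers are hypothetical) restores a new DB instance from the latest restorable time:

    import boto3

    rds = boto3.client("rds")

    # Launch a new DB instance from the automated backups and archived change
    # logs, restored to the most recent restorable point (typically within
    # about five minutes of the failure).
    rds.restore_db_instance_to_point_in_time(
        SourceDBInstanceIdentifier="orders-db",          # hypothetical source instance
        TargetDBInstanceIdentifier="orders-db-restore",  # hypothetical new instance
        UseLatestRestorableTime=True,
    )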

Question 46

A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?

A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.

B. Web servers: store read-only data in an EC2 NFS server; mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

 


Suggested Answer: C

Community Answer: C

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Benefits –
Enhanced Durability –
Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a
Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.
Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability –
You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details).
The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB
Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.
On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
No Administrative Intervention –
DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
Failover conditions –
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:
✑ Loss of availability in primary Availability Zone
✑ Loss of network connectivity to primary
✑ Compute unit failure on primary
✑ Storage failure on primary
Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not failover automatically in response to database operations such as long running queries, deadlocks or database corruption errors.
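Enabling Multi-AZ on an existing RDS instance is a single ModifyDBInstance call. A minimal boto3 sketch, with a hypothetical instance identifier:

    import boto3

    rds = boto3.client("rds")

    # Convert a Single-AZ instance to Multi-AZ; RDS provisions a synchronous
    # standby in another Availability Zone and fails over to it automatically.
    rds.modify_db_instance(
        DBInstanceIdentifier="ecommerce-db",  # hypothetical instance identifier
        MultiAZ=True,
        ApplyImmediately=True,
    )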

Question 47

A company has an Amazon EC2 deployment that has the following architecture:
✑ An application tier that contains 8 m4.xlarge instances
✑ A Classic Load Balancer
✑ Amazon S3 as a persistent data store
After one of the EC2 instances fails, users report very slow processing of their requests. A Solutions Architect must recommend design changes to maximize system reliability. The solution must minimize costs.
What should the Solutions Architect recommend?

A. Migrate the existing EC2 instances to a serverless deployment using AWS Lambda functions

B. Change the Classic Load Balancer to an Application Load Balancer

C. Replace the application tier with m4.large instances in an Auto Scaling group

D. Replace the application tier with 4 m4.2xlarge instances

 


Suggested Answer: B

Community Answer: C

By default, connection draining is enabled for Application Load Balancers but must be enabled for Classic Load Balancers. When Connection Draining is enabled and configured, the process of deregistering an instance from an Elastic Load Balancer gains an additional step. For the duration of the configured timeout, the load balancer will allow existing, in-flight requests made to an instance to complete, but it will not send any new requests to the instance. During this time, the API will report the status of the instance as InService, along with a message stating that "Instance deregistration currently in progress." Once the timeout is reached, any remaining connections will be forcibly closed.
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html
https://aws.amazon.com/blogs/aws/elb-connection-draining-remove-instances-from-service-with-care/
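If the Classic Load Balancer is kept, connection draining is enabled through a load balancer attribute. A minimal boto3 sketch, with a hypothetical load balancer name:

    import boto3

    elb = boto3.client("elb")  # Classic Load Balancer API

    # Allow in-flight requests up to 300 seconds to complete before an instance
    # is fully deregistered; new requests stop being routed to it immediately.
    elb.modify_load_balancer_attributes(
        LoadBalancerName="app-clb",  # hypothetical load balancer name
        LoadBalancerAttributes={
            "ConnectionDraining": {"Enabled": True, "Timeout": 300}
        },
    )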

Question 48

A Provisioned IOPS volume must be at least __________ GB in size:

A. 20

B. 10

C. 50

D. 1

 


Suggested Answer: B

Community Answer: B

A Provisioned IOPS volume must be at least 10 GB in size
Reference:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
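For reference, a Provisioned IOPS volume is created by specifying the io1 volume type together with the desired IOPS. A minimal boto3 sketch; the size, IOPS value, and Availability Zone are illustrative only:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-2")

    # Create a Provisioned IOPS (io1) volume; Size must meet the documented
    # minimum for the volume type, and Iops is capped relative to Size.
    ec2.create_volume(
        AvailabilityZone="us-east-2a",
        VolumeType="io1",
        Size=100,   # GiB
        Iops=5000,
    )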

Question 49

A Solutions Architect is designing a system that will collect and store data from 2,000 internet-connected sensors. Each sensor produces 1 KB of data every second. The data must be available for analysis within a few seconds of it being sent to the system and stored for analysis indefinitely.
Which is the MOST cost-effective solution for collecting and storing the data?

A. Put each record in Amazon Kinesis Data Streams. Use an AWS Lambda function to write each record to an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record’s key. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.

B. Put each record in Amazon Kinesis Data Streams. Set up Amazon Kinesis Data Firehose to read records from the stream and group them into objects in Amazon S3. Analyze recent data from Kinesis Data Streams and historical data from Amazon S3.

C. Put each record into an Amazon DynamoDB table. Analyze the recent data by querying the table. Use an AWS Lambda function connected to a DynamoDB stream to group records together, write them into objects in Amazon S3, and then delete the record from the DynamoDB table. Analyze recent data from the DynamoDB table and historical data from Amazon S3

D. Put each record into an object in Amazon S3 with a prefix that organizes the records by hour and hashes the record’s key. Use S3 lifecycle management to transition objects to S3 infrequent access storage to reduce storage costs. Analyze recent and historical data by accessing the data in Amazon S3.

 


Suggested Answer: C

Community Answer: B
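On the producer side of option B, each sensor record is written to the stream with PutRecord; Kinesis Data Firehose can then read from that stream and batch records into S3 objects. A minimal boto3 sketch with a hypothetical stream name:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    def send_reading(sensor_id: str, payload: dict) -> None:
        # Roughly 1 KB record per sensor per second; the sensor ID spreads
        # records across shards via the partition key.
        kinesis.put_record(
            StreamName="sensor-data",           # hypothetical stream name
            Data=json.dumps(payload).encode(),  # record payload
            PartitionKey=sensor_id,
        )

    send_reading("sensor-0001", {"temperature": 21.7, "ts": 1700000000})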

Question 50

A company has a requirement that only allows specially hardened AMIs to be launched into public subnets in a VPC, and for the AMIs to be associated with a specific security group. Allowing non-compliant instances to launch into the public subnet could present a significant security risk if they are allowed to operate.
A mapping of approved AMIs to subnets to security groups exists in an Amazon DynamoDB table in the same AWS account. The company created an AWS
Lambda function that, when invoked, will terminate a given Amazon EC2 instance if the combination of AMI, subnet, and security group is not approved in the
DynamoDB table.
What should the Solutions Architect do to MOST quickly mitigate the risk of compliance deviations?

A. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched using one of the allowed AMIs, and associate it with the Lambda function as the target.

B. For the Amazon S3 bucket receiving the AWS CloudTrail logs, create an S3 event notification configuration with a filter to match when logs contain the ec2:RunInstances action, and associate it with the Lambda function as the target.

C. Enable AWS CloudTrail and configure it to stream to an Amazon CloudWatch Logs group. Create a metric filter in CloudWatch to match when the ec2:RunInstances action occurs, and trigger the Lambda function when the metric is greater than 0.

D. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched, and associate it with the Lambda function as the target.

 


Suggested Answer: D

Community Answer: D
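A minimal sketch of option D using boto3 (the rule name and Lambda ARN are hypothetical) creates a rule that fires for every EC2 instance launch and points it at the compliance function; the function also needs a resource-based permission allowing events.amazonaws.com to invoke it.

    import json
    import boto3

    events = boto3.client("events")
    lambda_client = boto3.client("lambda")

    # Hypothetical ARN of the existing compliance-check Lambda function.
    LAMBDA_ARN = "arn:aws:lambda:us-east-2:123456789012:function:terminate-noncompliant"

    # Match every instance entering the running state, regardless of AMI.
    rule_arn = events.put_rule(
        Name="ec2-launch-compliance-check",
        EventPattern=json.dumps({
            "source": ["aws.ec2"],
            "detail-type": ["EC2 Instance State-change Notification"],
            "detail": {"state": ["running"]},
        }),
        State="ENABLED",
    )["RuleArn"]

    # Send matching events to the Lambda function that validates AMI/subnet/SG.
    events.put_targets(
        Rule="ec2-launch-compliance-check",
        Targets=[{"Id": "compliance-lambda", "Arn": LAMBDA_ARN}],
    )

    # Allow CloudWatch Events / EventBridge to invoke the function.
    lambda_client.add_permission(
        FunctionName=LAMBDA_ARN,
        StatementId="allow-events-invoke",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn,
    )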

Access Full SAP-C01 Dump Free

Looking for even more practice questions? Click here to access the complete SAP-C01 Dump Free collection, offering hundreds of questions across all exam objectives.

We regularly update our content to ensure accuracy and relevance—so be sure to check back for new material.

Begin your certification journey today with our SAP-C01 dump free questions — and get one step closer to exam success!
