AWS Solution Architect Practice Exam

Take this free AWS Solutions Architect practice exam and see where you stand.

Assessment Test (25 Questions)

Q1: Which of the following best practices apply to IAM? (Select 3 responses)

A. A backup administrator should be designated to utilize the root access key.
B. Create privileged users and enable multi-factor authentication.
C. Discontinue use of the root access key.
D. Assign permissions to IAM groups instead of IAM users.

Explanation: To send programmatic requests to AWS, you need an access key, which consists of an access key ID and a secret access key. Do not use your AWS account root user access key for everyday tasks; lock it away and create IAM users with only the permissions they need, protected by multi-factor authentication. To make permission changes for everyone in one place, create groups that correspond to job functions and attach permissions to the groups rather than to individual users.
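
As an illustration of the group-based approach, here is a minimal boto3 sketch (the group name, user name, and managed policy are hypothetical) that attaches permissions to a group and then adds a user to it:

```python
import boto3

iam = boto3.client("iam")

# Hypothetical group and user names for illustration.
GROUP_NAME = "Developers"
USER_NAME = "alice"

# Create a group that corresponds to a job function.
iam.create_group(GroupName=GROUP_NAME)

# Attach a managed policy to the group instead of to individual users,
# so permission changes are made in one place.
iam.attach_group_policy(
    GroupName=GROUP_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Add an existing IAM user to the group; the user inherits the group's permissions.
iam.add_user_to_group(GroupName=GROUP_NAME, UserName=USER_NAME)
```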

For more information on Amazon’s best practices, please refer to the following link:

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials

Q2: You recently moved your small business to AWS, and now you want to know the general best practices for your environment. Which AWS service can you use to improve your AWS setup by providing suggestions for cost, performance, and security improvements?

A. AWS QuickSight
B. AWS Trusted Advisor
C. AWS Fargate
D. AWS Athena

Explanation: AWS Trusted Advisor is a web service that helps you provision your resources by following AWS best practices.

Trusted Advisor checks help optimize your AWS infrastructure, improve security and performance, reduce overall costs, and monitor service limits.

Whether you are establishing new workflows, developing applications, or improving existing ones, review Trusted Advisor's recommendations regularly to keep your solutions provisioned optimally.

For more information on AWS Trusted Advisor, please refer to the following link:

https://aws.amazon.com/premiumsupport/technology/trusted-advisor/

Q3: You have been given the responsibility of implementing the multi-region disaster recovery plan that your organization is currently developing for your database. The RPO is 15 minutes, while the needed RTO is 1 hour. What actions can you take to guarantee that these criteria are met?

A. Take nightly EBS snapshots of the necessary EC2 instances. Restore the snapshots to a different location in the event of a disaster.

B. Host your database using RDS. Set your database's Multi-AZ option to on. Switch to the backup database in the case of a failure.

C. Your database should be hosted via Redshift. With Redshift, enable "multi-region" failover. Do nothing in the case of a failure since Redshift will take care of it.

D. Use RDS to host your database. Create a cross-region read replica of your database. In the event of a failure, promote the read replica to be a standalone database. Send new reads and writes to this database.

Explanation: A cross-region read replica satisfies both the Recovery Time Objective and the Recovery Point Objective. Your data is continuously replicated to a secondary Region, from which it can be quickly promoted and served when required.
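
As a rough sketch of this pattern with boto3 (the Regions, source DB ARN, and replica identifier below are hypothetical), you create the cross-Region replica ahead of time and promote it during a disaster:

```python
import boto3

# Hypothetical identifiers; replace with your own.
SOURCE_DB_ARN = "arn:aws:rds:us-east-1:123456789012:db:prod-db"
REPLICA_ID = "prod-db-replica"

# Create the cross-Region read replica in the disaster recovery Region.
rds_dr = boto3.client("rds", region_name="eu-west-1")
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier=REPLICA_ID,
    SourceDBInstanceIdentifier=SOURCE_DB_ARN,  # cross-Region sources are referenced by ARN
    SourceRegion="us-east-1",
)

# In the event of a disaster, promote the replica to a standalone database
# and point application reads and writes at it.
rds_dr.promote_read_replica(DBInstanceIdentifier=REPLICA_ID)
```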

For more information on Amazon Read Replicas, please refer to the following link:

https://aws.amazon.com/rds/features/read-replicas/

Q4: You are employed by a marketing firm that uses a real-time bidding tool. In order to serve a user base from all over the world, you are also employing CloudFront on the front end. Real-time bidding delays and slow response times start to annoy your users. What is the ideal service to utilize in order to significantly decrease DynamoDB response times (from milliseconds to microseconds)?

A. Amazon Redshift
B. Amazon Aurora
C. Amazon DocumentDB
D. DAX

Explanation: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second.

While DynamoDB delivers consistent single-digit-millisecond latency, DynamoDB with DAX provides microsecond response times for millions of requests per second for read-heavy applications.

With DAX, your applications remain fast and responsive even when an unprecedented volume of requests arrives because of a popular event or news story. No tuning is required.

For more information on Amazon DynamoDB Accelerator (DAX), please refer to the following link:

https://aws.amazon.com/dynamodb/dax/

Q5: A business will use several EC2 instances to host various reference applications. The apps are expected to receive constant but relatively low traffic and to run for three years before being assessed for upgrades. Which EC2 purchasing option satisfies this need while minimizing cost?

A. Reserved
B. Spot
C. On-Demand
D. Dedicated instances

Explanation: Reserved Instances provide a significant discount (up to 75%) compared to On-Demand Instance pricing. In addition, when a Reserved Instance is assigned to a specific Availability Zone, it provides a capacity reservation, giving you greater assurance that you can launch instances when you need them.

For applications with steady-state or predictable usage, Reserved Instances can deliver considerable cost savings compared to On-Demand Instances. Reserved Instances are recommended for:

  1. Applications with steady-state usage
  2. Applications that may require reserved capacity
  3. Customers who can commit to using EC2 over a one- or three-year term to reduce their total computing costs

For more information on AWS Reserved instances, please refer to the following link:

https://aws.amazon.com/ec2/pricing/reserved-instances/

Q6: You are employed by a government organization that needs to run a private internal application on AWS. Because of very strict encryption requirements, the organization must manage all keys itself and have dedicated, exclusive access to the HSMs that generate them. Which AWS service should it use?

A. AWS Kinesis

B. AWS Fargate

C. AWS CloudHSM

D. Amazon FinSpace

Explanation: AWS CloudHSM is a service that enables you to generate, store, and manage your own encryption keys on HSMs that are dedicated to your AWS account.

CloudHSM gives you complete control over your keys, including the ability to generate, import, and export them. Because the HSMs are dedicated, single-tenant devices running in your own VPC, you retain exclusive control over key management operations, and AWS has no access to your keys.

For more information about AWS CloudHSM, please refer to the following link:

https://docs.aws.amazon.com/cloudhsm/

Q7: You work for a major gaming company based in Norway. The company needs to distribute game traffic across several servers over UDP and store this data in a NoSQL database. Which solution should you use?

A. An Application Load Balancer with Amazon Aurora.
B. A Network Load Balancer with Amazon Aurora Serverless.
C. A Network Load Balancer with DynamoDB.
D. An Application Load Balancer with Microsoft SQL Server on RDS.

Explanation: A Network Load Balancer (NLB) operates at the connection level (layer 4) and routes TCP, UDP, and TLS traffic. It is designed to handle millions of requests per second while maintaining ultra-low latencies, and DynamoDB provides the NoSQL data store.
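
A minimal boto3 sketch of the load-balancing half of the solution, assuming hypothetical subnet and VPC IDs and an arbitrary game port, might look like this; note that only the network load balancer type accepts UDP listeners:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical subnet and VPC IDs for illustration.
SUBNETS = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
VPC_ID = "vpc-0123456789abcdef0"

# Network Load Balancers (Type='network') support UDP listeners;
# Application Load Balancers do not.
nlb = elbv2.create_load_balancer(
    Name="game-traffic-nlb",
    Type="network",
    Subnets=SUBNETS,
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="game-servers",
    Protocol="UDP",
    Port=7777,           # hypothetical game port
    VpcId=VPC_ID,
    TargetType="instance",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```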

For more information about Network Load Balancing, please refer to the following link:

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html

Q8: On EC2, you are developing an application that needs access to S3 and DynamoDB. You must provide your developers with the most secure access possible to these services. How should you go about doing this?

A. Assign a role to the EC2 instance allowing access to S3 and DynamoDB.

B. Share a master IAM username and password with your developers. This username and password should have power user access.

C. To grant access to these resources, use AWS KMS.

D. With Admin Access, create a master IAM login and password that you can then give to your developers.

Explanation: Assigning an IAM role to the EC2 instance is the most secure way to provide access to these services because the application uses the permissions granted to the role, rather than a shared set of credentials.

It also allows you to easily control and monitor access to the resources through the role’s permissions, rather than managing individual user credentials. Additionally, using a role avoids the risk of shared credentials being compromised or misused.
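
For illustration, a boto3 sketch of creating such a role and its instance profile might look like the following (the role name and the choice of managed policies are hypothetical; scope permissions down to specific buckets and tables in practice):

```python
import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "app-ec2-role"  # hypothetical role name

# Trust policy that lets EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName=ROLE_NAME, AssumeRolePolicyDocument=json.dumps(trust_policy))

# Grant the role access to S3 and DynamoDB via managed policies.
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
):
    iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy_arn)

# An instance profile is what actually gets attached to the EC2 instance.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME, RoleName=ROLE_NAME)
```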

For more information about EC2 Instance Access to S3 Bucket, please refer to the following link:

https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/

Q9: A static website must be hosted on S3. Your employer wants to integrate the two services, so they ask you to use a domain name you registered with Route 53. What is required to make sure you can accomplish this?

A. The bucket name must be the same as the domain name.
B. To enable a static website, you must activate CORS in your S3 bucket.
C. In order for Route53 to point to your bucket, you must construct an A Record.
D. To point to your bucket's DNS address, you must set up a CNAME in Route53.

Explanation: To host a static website on Amazon S3 and point a Route 53 domain at it, you must create an Amazon S3 bucket with the same name as the domain name. When you configure static website hosting and route traffic with Route 53, the bucket name has to match the requests arriving from your custom domain.

You then create an alias A record (or a CNAME for a subdomain) in Route 53 that points to your bucket's website endpoint, so visitors can use your custom domain name rather than the default Amazon S3 website endpoint.

Finally, you may also need to enable CORS on your S3 bucket to allow your website to load resources from other domains. This is not always necessary, but it can be helpful if you are using resources such as fonts or images hosted on other websites.
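
A hedged boto3 sketch of the setup (the domain, hosted zone ID, and Region are hypothetical, and the S3 website alias zone ID shown is the us-east-1 value, which differs per Region) might look like this:

```python
import boto3

DOMAIN = "example.com"                 # hypothetical domain registered in Route 53
HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"      # your Route 53 hosted zone ID
S3_WEBSITE_ZONE_ID = "Z3AQBSTGFYJSTF"  # S3 website endpoint zone ID for us-east-1; varies by Region

s3 = boto3.client("s3", region_name="us-east-1")
route53 = boto3.client("route53")

# The bucket name must match the domain name for the alias record to resolve.
s3.create_bucket(Bucket=DOMAIN)
s3.put_bucket_website(
    Bucket=DOMAIN,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# Create an alias A record that points the domain at the bucket's website endpoint.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": DOMAIN,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": S3_WEBSITE_ZONE_ID,
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```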

For more information about Bucket naming rules, please refer to the following link:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html

Q10: You host a serverless website on AWS using API Gateway, Lambda, and DynamoDB, and it has been very reliable. Lately, however, database changes appear to be getting lost somewhere between the client and the DynamoDB table, and you must investigate what is happening. Which AWS service should you use to examine this problem?

A. CloudWatch
B. CloudCDN
C. AWS Macie
D. AWS X-Ray

Explanation: AWS X-Ray helps developers analyze and debug distributed applications in production, including those built with a microservices architecture. It traces requests as they travel through services such as API Gateway, Lambda, and DynamoDB, so you can pinpoint where changes are being lost.
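
As a small illustration, assuming the aws-xray-sdk package and a hypothetical DynamoDB table name, patching boto3 inside the Lambda function makes every DynamoDB call appear as a subsegment in the trace (active tracing must also be enabled on the function and the API stage):

```python
import boto3
from aws_xray_sdk.core import patch_all

# Patch boto3 so every AWS SDK call (including DynamoDB) is recorded
# as an X-Ray subsegment in the service map.
patch_all()

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("updates")  # hypothetical table name


def handler(event, context):
    # With active tracing enabled, this write shows up in the trace,
    # making lost updates easier to spot.
    table.put_item(Item={"id": event["id"], "payload": event["payload"]})
    return {"statusCode": 200}
```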

For more information about AWS X-Ray, Please refer to the following link:

https://aws.amazon.com/blogs/developer/new-analyze-and-debug-distributed-applications-interactively-using-aws-x-ray-analytics/

Q11: Currently, you run a file server on EC2 and store your files on EBS. After an hour-long outage, you must design a fault-tolerant architecture that ensures your clients can still access their files even if an EC2 instance goes down. Which of the following architectures is the most fault-tolerant?

A. Three EC2 instances behind an Application Load Balancer and Autoscaling group, connected to an EFS mount.

B. Two EC2 instances linked to two EBS volumes in different regions through an autoscaling group and an application load balancer.

C. API Gateway's Lambda function has three EBS volumes deployed to it.

D. An EBS volume attached to an EC2 instance behind a conventional load balancer.

Explanation: Using three EC2 instances behind an Application Load Balancer and Autoscaling group, connected to an Elastic File System (EFS) mount, can provide a fault-tolerant architecture for hosting a file server.

This architecture can help to ensure that the service remains available even if one of the EC2 instances goes down, as the traffic will be automatically redirected to the other instances.
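
As a minimal sketch of wiring the pieces together with boto3 (the Auto Scaling group name and target group ARN are hypothetical), the Auto Scaling group is registered with the load balancer's target group so traffic only reaches healthy instances:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names/ARNs for illustration.
ASG_NAME = "file-server-asg"
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/file-servers/0123456789abcdef"
)

# Register the Auto Scaling group with the Application Load Balancer's target
# group; instances launched by the group are added to the target group
# automatically, and unhealthy instances are replaced.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName=ASG_NAME,
    TargetGroupARNs=[TARGET_GROUP_ARN],
)
```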

For more information regarding Load balancer, Please refer to the following link:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html

Q12: You are employed by a security firm that runs a secure file hosting service on S3. All data uploaded to S3 buckets must be encrypted using AES-256 server-side encryption with Amazon S3 managed keys (SSE-S3). Which request header from the list below is required?

A. x-enable-server-side-encryption-s3
B. amz-server-side-encryption-enable
C. x-amz-server-side-encryption
D. x-enable-server-side-encryption-ecs

Explanation: To enable AES-256 encryption using Server Side Encryption (SSE-S3) for files uploaded to an S3 bucket, you should use the x-amz-server-side-encryption request header.

This header allows you to specify the server-side encryption algorithm to be used when storing the object in the bucket.

To specify AES-256 encryption, you should set the value of the x-amz-server-side-encryption header to “AES256”. This will ensure that all files uploaded to the bucket are encrypted using AES-256 encryption using SSE-S3.
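
For example, with boto3 the header is set via the ServerSideEncryption parameter (the bucket and key names below are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# boto3 sends the x-amz-server-side-encryption: AES256 header when
# ServerSideEncryption="AES256" is specified.
s3.put_object(
    Bucket="secure-file-hosting-bucket",  # hypothetical bucket name
    Key="reports/confidential.pdf",
    Body=b"example payload",
    ServerSideEncryption="AES256",
)
```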

For more information on server-side encryption, please refer to the following link:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html

Q13: You are employed at a medical practice with thousands of patients in New York City. The patient data is stored locally; however, the backups must be kept as securely as possible on S3. Which of the following approaches is the most secure?

A. Directly upload the backups to a public S3 bucket.

B. Utilizing your personal encryption keys, locally encrypt the data. Utilizing HTTP, upload the data to AWS S3. The S3 bucket should be encrypted using AES 256 server side encryption.

C. Utilize AWS Fargate to host the data, and encrypt the backups using server-side encryption.

D. Encrypt the data locally using your own encryption keys. Upload the data to AWS S3 using HTTPS. Use AES-256 server-side encryption on the S3 bucket to encrypt the bucket.

Explanation: This is the most secure way of storing the patient data backups on S3, as it involves encrypting the data locally using your own encryption keys, and then uploading the encrypted data to S3 using a secure protocol (HTTPS).

This ensures that the data is encrypted at rest (when it is stored in the S3 bucket) as well as in transit (when it is being transferred over the network). Additionally, using AES-256 encryption for the S3 bucket provides an additional layer of security, as AES-256 is a strong and widely-used encryption algorithm.
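
A hedged sketch of the flow in Python (the bucket name and file are hypothetical, and Fernet merely stands in for whatever local key-management scheme the practice uses) could look like this:

```python
import boto3
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt the backup locally with your own key before it ever leaves the clinic.
key = Fernet.generate_key()          # store this key securely on premises
fernet = Fernet(key)

with open("patient-backup.db", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# boto3 uses HTTPS endpoints by default, so the upload is encrypted in transit,
# and SSE-S3 (AES-256) encrypts the object again at rest.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="clinic-backups",          # hypothetical bucket name
    Key="backups/patient-backup.db.enc",
    Body=ciphertext,
    ServerSideEncryption="AES256",
)
```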

For more information on server-side encryption and bucket encryption, please refer to the following links:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html

Q14: To ensure that a new EBS volume created using an unencrypted EBS snapshot provided by another team is encrypted, what action should the solutions architect take?

A. Manually enable encryption during volume creation; otherwise, the volume won't be encrypted.

B. Create a new EBS volume from the encrypted snapshot after encrypting the unencrypted one using the standard AWS KMS.

C. From the unencrypted snapshot, establish a new encrypted EBS volume using Amazon Data Lifecycle Manager.

D. No additional action is necessary. The volume from the unencrypted snapshot will automatically be encrypted. Create a new EBS volume from a copy of the unencrypted snapshot.

Explanation: Enabling EBS encryption by default for the AWS account is the best way to guarantee that your EBS volumes are encrypted.

Because encryption is enabled by default in this scenario, no action is required on your part to encrypt the volume at creation time; the new volume is encrypted automatically even though the snapshot is not.

You can still explicitly enable encryption during the creation process, but it is optional.
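
A short boto3 sketch of this behavior, using a hypothetical snapshot ID, might look like the following:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Turn on EBS encryption by default for the account in this Region.
ec2.enable_ebs_encryption_by_default()
print(ec2.get_ebs_encryption_by_default())  # {'EbsEncryptionByDefault': True, ...}

# A volume created from the unencrypted snapshot is now encrypted automatically.
ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",  # hypothetical unencrypted snapshot
    AvailabilityZone="us-east-1a",
)
```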

For more information on EBS encryption, Please refer to the following link:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-examples

Q15: To ensure low-latency network performance, high network throughput, and tightly-coupled node-to-node communication for EC2 workloads in a VPC within a single Availability Zone, what is the best measure you can take?

A. Make use of Auto Scaling Groups
B. Launch your instances in a cluster placement group
C. Make use of elastic network interfaces.
D. Increase the instances' sizes

Explanation: A cluster placement group is a logical grouping of instances within a single Availability Zone. A cluster placement group can span peered VPCs in the same Region.

Instances in the same cluster placement group share a high-bisection-bandwidth segment of the network and have a higher per-flow throughput limit for TCP/IP traffic.
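
As an illustration with boto3 (the AMI ID, instance type, and group name are hypothetical), you create the cluster placement group and launch the instances into it:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a cluster placement group in a single Availability Zone.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the tightly coupled instances into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="c5n.18xlarge",       # a network-optimized type suits this workload
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```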

For more information on Placement Groups, Please refer to the following link:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

Q16: Your team is building a new machine learning application over the next two years that will use numerous Amazon SageMaker instances and components. How can you maximize your savings on the Amazon SageMaker service?

A. Purchase a one-year All Upfront Compute Savings Plan. This applies to SageMaker instances and components in any AWS Region.

B. Purchase a three-year All Upfront SageMaker Savings Plan. This applies to SageMaker instances and components in any AWS Region.

C. Purchase a three-year All Upfront Compute Savings Plan. This applies to SageMaker instances and components in any AWS Region.

D. Purchase a one-year All Upfront SageMaker Savings Plan. This applies to all SageMaker instances and components within any AWS Region.

Explanation: SageMaker Savings Plans offer the greatest savings potential across all SageMaker components, and the one-year term fits within the two-year project window.

For more information on Savings Plans and the services they cover, please refer to the following links:

https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html

https://docs.aws.amazon.com/savingsplans/latest/userguide/sp-services.html#sp-sagemaker

Q17: Your application writes stack trace files for application errors to S3 so that engineers can review them when addressing issues, but the files only need to be kept for four weeks before being purged. How can you meet this requirement?

A. Configure the S3 Lifecycle rules to purge the files after a month.
B. Create a bucket policy that will remove the files after a month.
C. Create a cron task that will delete the files after a month.
D. To archive these files to Glacier after one month, add an S3 Lifecycle rule.

Explanation: Configure an Amazon S3 Lifecycle policy for your objects so that they are stored cost-effectively throughout their lifecycle.

An S3 Lifecycle configuration is a set of rules that define the actions Amazon S3 takes on a group of objects. There are two categories of actions. Transition actions define when objects move from one storage class to another.

For instance, you might choose to transition objects to the S3 Standard-IA storage class 30 days after creating them, or archive them to the S3 Glacier storage class one year after creating them.

Expiration actions define when objects expire; Amazon S3 deletes expired objects on your behalf.
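
A minimal boto3 sketch of such a rule (the bucket name and prefix are hypothetical; 30 days approximates the four-week requirement) might look like this:

```python
import boto3

s3 = boto3.client("s3")

# Expire stack trace files roughly four weeks (30 days) after creation;
# Amazon S3 purges the expired objects for you.
s3.put_bucket_lifecycle_configuration(
    Bucket="app-error-traces",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "purge-stack-traces",
            "Filter": {"Prefix": "stack-traces/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }]
    },
)
```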

For more information on object lifecycle management, please refer to the following link:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

Q18: A security company manufactures doorbells with built-in cameras and needs an AWS service for streaming video from the cameras to potentially millions of mobile devices, while also running analytics and other processing on the streams. Which service would best suit this requirement?

A. Amazon SNS
B. Amazon Fargate
C. Amazon Kinesis Video Streams
D. Amazon CloudFront

Explanation: Amazon Kinesis Video Streams can ingest media from many devices and then make it available for analytics, machine learning, playback, and other processing.

For more information on Amazon Kinesis Video Streams, please refer to the following link:

https://aws.amazon.com/kinesis/video-streams/

Q19: A travel application serves travel updates to users all over the world and uses an Amazon RDS database that suffers performance issues at certain times of the year. What can you do to enhance performance and reduce the load on the source DB instance?

A. Add read replicas
B. Set up S3 Multi-AZ
C. Set up multiple-region RDS
D. Place CloudFront in front of the database.

Explanation: Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances.

They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

Read replicas can be within an Availability Zone, cross-AZ, or cross-Region. You can create one or more replicas of a source DB instance and serve high-volume application read traffic from these copies, increasing aggregate read throughput.

When necessary, read replicas can also be promoted to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server, as well as Amazon Aurora.

For more information on read replicas, please refer to the following link:

https://aws.amazon.com/rds/features/read-replicas/

Q20: You manage a serverless image-sharing website that stores high-quality images in S3. However, competitors have started linking to the website and using your photos without permission. How can you prevent this from occurring?

A. Restrict public access to the bucket and turn on presigned URLs with expiry dates.
B. Access should be limited while storing the photos in an RDS database.
C. On the webpage, enable CloudFront.
D. Using AWS WAF, block the websites' IP addresses.

Explanation: By restricting public access to your bucket and enabling presigned URLs with expiry dates, you can prevent unauthorized access to your photos.

Restricting public access to your bucket means that only authorized users, who have been granted specific permissions by you, will be able to access the contents of the bucket.

This can be done through the use of AWS Identity and Access Management (IAM) policies, which allow you to specify which users and resources have access to your bucket and its contents. Presigned URLs are special URLs that grant temporary access to an Amazon S3 object.

They can be used to give access to users who do not have direct access to your bucket, and can be configured to expire after a certain period of time.

By setting an expiry date on your presigned URLs, you can ensure that they will only remain valid for a certain period of time, after which they will no longer grant access to your objects.

This can be an effective way to prevent unauthorized access to your photos, as the URLs will become invalid once their expiry date has passed.
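
For illustration, a presigned GET URL can be generated with boto3 as follows (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# With public access blocked on the bucket, generate a time-limited URL that
# the website hands out for each image; it stops working after an hour.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "image-sharing-site", "Key": "photos/sunset.jpg"},  # hypothetical
    ExpiresIn=3600,  # seconds until the URL expires
)
print(url)
```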

For more information on access control and presigned URL uploads, please refer to the following links:

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/access-control-overview.html#access-control-how-to-restrict-access

https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html

Q21: As a solutions architect working for a biotech company with a large private cloud deployment using VMware, you have been tasked with setting up their disaster recovery solution on AWS. What is the most straightforward way to accomplish this?

A. Contact your VMware representative to provision dedicated hardware within AWS in which you can deploy vCenter yourself.

B. To provision an EC2 instance with VMware Vcenter already installed, use the VMware landing page on AWS.

C. Install vCenter on an EC2 instance that has been deployed into a public subnet.

D. Install vCenter on an EC2 instance that has been deployed into a private subnet.

Explanation: If you want to deploy vCenter on dedicated hardware within AWS, you can contact your VMware representative to discuss options for provisioning the necessary hardware.

AWS offers a range of options for running vCenter on dedicated hardware, including bare metal instances and dedicated hosts.

These options allow you to have full control over the underlying hardware and can be useful if you have specific requirements for your vCenter deployment, such as compliance or performance needs.

Your VMware representative will be able to provide you with more information about the different options available and help you choose the one that best meets your needs. They can also help you with the process of provisioning the dedicated hardware and deploying vCenter on it.

For more information on VMware, please refer to the following link:

https://aws.amazon.com/vmware/

Q22: A small startup is in the process of configuring IAM for their organization. After creating user logins, the focus has shifted to granting permissions to those users. An admin starts creating identity-based policies. To which of the following can an identity-based policy not be attached?

A. Users
B. Managers
C. Authorizers
D. Resources

Explanation: Identity-based policies are attached to identities: IAM users, groups, and roles. Resource-based policies, by contrast, are attached to resources; for example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. See "AWS services that work with IAM" for a list of services that support resource-based policies.
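
To make the contrast concrete, here is a hedged boto3 sketch (the user, group, role, and bucket names are hypothetical): identity-based policies attach through the IAM API, while granting access on a resource uses that resource's own policy API:

```python
import json
import boto3

iam = boto3.client("iam")

# An identity-based policy can be attached to a user, group, or role --
# but not to a resource such as an S3 bucket or SQS queue.
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

iam.attach_user_policy(UserName="alice", PolicyArn=policy_arn)          # user
iam.attach_group_policy(GroupName="Developers", PolicyArn=policy_arn)   # group
iam.attach_role_policy(RoleName="app-ec2-role", PolicyArn=policy_arn)   # role

# Granting access on a resource instead requires a resource-based policy,
# e.g. a bucket policy applied with the S3 API rather than the IAM API.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
boto3.client("s3").put_bucket_policy(
    Bucket="example-bucket", Policy=json.dumps(bucket_policy)
)
```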

For more information on access policies identity and resources, please refer to the following link:

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html

Q23: Your image-sharing website sits on EC2 and uses EBS as the backend storage, but you frequently run out of space and have to mount additional EBS volumes. Your boss asks if there are any other AWS services you can use to store images or videos. What service would you recommend?

A. CloudWatch
B. CloudFront
C. Fargate
D. S3

Explanation: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

For more information on Amazon S3, please refer to the following link:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html

Q24: A healthcare institution needs a storage solution that is durable, elastic, and supports the NFSv4 protocol, primarily for infrequent use by auditors. Which storage option would be the best fit for these requirements?

A. EFS One Zone-Infrequent Access (IA)
B. EFS Standard
C. EFS Multi Zone
D. EFS Standard-Infrequent Access (IA)

Explanation: This choice is correct because EFS Standard-Infrequent Access (IA) supports the Network File System version 4 (NFSv4.0 and NFSv4.1) protocol and combines low cost, high availability, high durability, and elasticity, making it a good fit for files that auditors access only infrequently.

For more information on Amazon storage classes, please refer to the following link:

https://docs.aws.amazon.com/efs/latest/ug/storage-classes.html

Q25: A company wants to migrate their SQL Servers to the cloud with minimal downtime and perform prerequisite testing before making the AWS servers primary. What automated tool could be used for this migration?

A. AWS Server Migration Service
B. AWS Snowball
C. AWS Application Migration Service
D. AWS Transfer Family 

Explanation: On-premises servers can be moved to AWS using the AWS Application Migration Service. It facilitates the migration of applications such as SAP, Oracle, and SQL Server to the AWS Cloud.

With AWS Application Migration Service, there is no need for application-specific specialists or for building multiple migration solutions, which lets you migrate critical applications to the cloud while saving money.

When using AWS Application Migration Service, data from the source servers is first replicated to AWS. Once the data is replicated and testing is complete, the cutover is carried out.

This ensures that the application runs smoothly from the AWS Cloud and guarantees a short cutover window without affecting application performance.

For more information on Amazon migration services, please refer to the following links:

https://aws.amazon.com/application-migration-service/faqs/

https://aws.amazon.com/application-migration-service/

Did you manage to get most of them right? If you are eager to take the exam, learn everything about the AWS Solutions Architect certification here.

Frequently Asked Questions

Is the AWS Solutions Architect exam hard?

Yes, the AWS Solutions Architect exam is comparatively one of the hardest. It is heavily focused on specific scenarios, and you need a deep understanding of the services to pass.

How do I pass the AWS Solutions Architect exam?

To pass the AWS Solutions Architect exam, you need a clear roadmap and a grasp of all the important details of the AWS Solutions Architect certification exam. The simplest way is to learn everything about the certification and its requirements, take a concise course, enroll in simulated practice exams and repeat them until you can comfortably achieve a high score, and review whitepapers and sample questions before attempting the exam.

What happens if you fail the AWS Solutions Architect exam?

If you fail the AWS Solutions Architect exam, there is a 14-day waiting period before you can retake it. The same rule applies to all AWS certification exams.

What is the passing score for the AWS Solutions Architect exam?

The passing score for the AWS Solutions Architect exam is 720 out of 1,000, which is about 20 points higher than the foundational-level exams.