AWS Certified Solutions Architect Associate Practice Test (SAA-C03)
Use the form below to configure your AWS Certified Solutions Architect Associate Practice Test (SAA-C03). The practice test can be configured to include only certain exam domains and objectives. You can choose between 5 and 100 questions and set a time limit.

AWS Certified Solutions Architect Associate SAA-C03 Information
AWS Certified Solutions Architect - Associate showcases knowledge and skills in AWS technology across a wide range of AWS services. The certification focuses on designing cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework. It can enhance the career profile and earnings of certified individuals and increase their credibility and confidence in stakeholder and customer interactions.
The AWS Certified Solutions Architect - Associate (SAA-C03) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s ability to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate’s ability to complete the following tasks:
- Design solutions that incorporate AWS services to meet current business requirements and future projected needs
- Design architectures that are secure, resilient, high-performing, and cost optimized
- Review existing solutions and determine improvements
Free AWS Certified Solutions Architect Associate SAA-C03 Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, Design Cost-Optimized Architectures
Your client has a set of batch processing workloads that run at consistent times every week for the same duration. These workloads are not time-sensitive and can be interrupted without causing major issues. The client wants to minimize the cost of running these workloads on AWS. Which compute solution would offer the most cost savings for this scenario?
Commit to a Compute Savings Plan that covers the compute usage expected for the batch processing.
Use Amazon EC2 Spot Instances for running the batch processing workloads.
Reserve capacity by purchasing Reserved Instances for the batch processing workloads.
Run the workloads on On-Demand Instances to maintain flexibility without making a long-term commitment.
Answer Description
Amazon EC2 Spot Instances provide the most cost savings for workloads that can tolerate interruptions, offering discounts of up to 90% compared to On-Demand pricing. Spot Instances are ideal for workloads with flexible start and end times, such as the client's batch processing tasks that are not time-sensitive. Reserved Instances would not provide as much cost savings and are typically used for steady-state, uninterrupted workloads. Savings Plans offer discounts but require a 1- or 3-year commitment to a consistent amount of compute usage (measured in dollars per hour), which may not align with the client's intermittent batches. On-Demand Instances are the most expensive option and are best suited for short-term, irregular, or unpredictable workloads that cannot tolerate interruptions.
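As an illustration, here is a minimal boto3 sketch of launching Spot capacity for an interruptible batch job. The AMI ID, instance type, and counts are placeholders, not values from the scenario.

```python
# Hypothetical sketch: launching interruptible batch workers on Spot capacity.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c5.large",          # placeholder instance type
    MinCount=1,
    MaxCount=4,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # Run once and terminate on interruption -- acceptable here
            # because the batch jobs tolerate interruption.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance.get("InstanceLifecycle"))
```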
Your company is looking to store large sets of infrequently accessed data for long-term preservation. Which AWS service should they use to optimize costs while ensuring the data remains available when needed?
Amazon EFS
Amazon EBS
Amazon S3 Standard
Amazon S3 Glacier
Answer Description
Amazon S3 Glacier is the correct service for archiving large sets of infrequently accessed data due to its low cost and reliable long-term storage. The S3 Glacier storage classes are designed for data archiving, offering options that balance retrieval times with cost. Amazon S3 Standard is not the best choice for archival purposes because it is designed for frequently accessed data and comes at a higher cost. Amazon EFS and Amazon EBS are file and block storage services designed for active workloads, not for low-cost, infrequently accessed long-term storage.
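A common way to land data in a Glacier storage class is an S3 lifecycle rule. Below is a minimal boto3 sketch; the bucket name, prefix, and 90-day window are example values.

```python
# Hypothetical sketch: transition objects under "archive/" to S3 Glacier
# after 90 days of age using a lifecycle rule.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-preservation-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [
                    # GLACIER here is the Flexible Retrieval storage class.
                    {"Days": 90, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```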
A financial institution utilizes a key management service to enhance the security of its data-at-rest within cloud storage services. They aim to adhere to a stringent security protocol that requires the automatic renewal of encryption materials. Which approach can the institution implement to fulfill this requirement without altering the existing key identifiers or metadata?
Creating a new key manually every five years while disabling the old one.
Establishing a manual process where the keys are only updated in response to a security incident.
Enabling automatic renewal for the encryption keys through the service's management console or API.
Deferring the renewal process until the key reaches its designated expiration period.
Answer Description
The service that manages customer encryption keys can automatically rotate the underlying encryption material of a managed key, typically on an annual basis. This automation updates the material regularly without changing the key identifier or associated metadata, thus adhering to strict security protocols. The operation does not demand manual intervention, nor does it rely on a reactive approach to potential key compromises. Manually creating a replacement key, by contrast, would produce a new key identifier, which violates the requirement to keep identifiers and metadata unchanged.
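In AWS terms this is KMS automatic key rotation, which can be enabled with a single API call. A minimal boto3 sketch follows; the key ID is a placeholder.

```python
# Hypothetical sketch: enabling automatic rotation on a customer managed
# KMS key. The key ID below is a placeholder.
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Rotates the underlying key material on a schedule; the key ID, ARN,
# and metadata stay the same, so callers need no changes.
kms.enable_key_rotation(KeyId=key_id)

status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])
```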
A company needs to store data that is infrequently accessed but requires millisecond retrieval when needed. The data must be stored cost-effectively. Which Amazon S3 storage class should the company use?
Amazon S3 Standard-Infrequent Access.
Amazon S3 Glacier Instant Retrieval.
Amazon S3 Standard.
Amazon S3 Glacier Deep Archive.
Answer Description
Amazon S3 Glacier Instant Retrieval is designed for data that is infrequently accessed but requires millisecond retrieval. It offers the lowest storage cost among S3 classes that provide millisecond access. Although Amazon S3 Standard-Infrequent Access also provides millisecond retrieval, its storage cost is higher than S3 Glacier Instant Retrieval. Amazon S3 Glacier Deep Archive has a lower storage cost but does not support millisecond retrieval; standard retrievals can take up to 12 hours. Amazon S3 Standard is intended for frequently accessed data and is more expensive for infrequent access patterns.
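Objects can be written straight into this storage class at upload time. A minimal boto3 sketch, with placeholder bucket and key names:

```python
# Hypothetical sketch: writing an object directly to the S3 Glacier
# Instant Retrieval storage class. Bucket and key names are examples.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-records-bucket",
    Key="reports/2024/q1.parquet",
    Body=b"example payload",
    StorageClass="GLACIER_IR",  # Glacier Instant Retrieval
)

# Reads are served with the same millisecond latency as S3 Standard.
obj = s3.get_object(Bucket="example-records-bucket", Key="reports/2024/q1.parquet")
```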
Auto Scaling policies that rely solely on CPU utilization metrics are sufficient for all workloads when designing a horizontal scaling strategy.
Correct
Incorrect
Answer Description
Auto Scaling policies should not rely solely on CPU utilization for all workloads because different applications may experience bottlenecks in areas other than CPU usage, such as memory, disk I/O, or network throughput. Depending on the specific workload, it may be necessary to configure Auto Scaling to respond to other performance metrics or a combination of metrics to efficiently handle load variations. For instance, a memory-intensive application might require scaling actions based on memory usage rather than CPU utilization. A well-designed scaling strategy takes into account the unique characteristics of the application workload.
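As an illustration of scaling on a non-CPU metric, here is a minimal boto3 sketch of a target tracking policy keyed to memory usage. It assumes the CloudWatch agent publishes "mem_used_percent" to the "CWAgent" namespace; the group, policy, and target values are examples.

```python
# Hypothetical sketch: target tracking on memory rather than CPU,
# assuming the CloudWatch agent publishes memory metrics.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="memory-bound-app-asg",  # placeholder group name
    PolicyName="memory-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",
            "Namespace": "CWAgent",
            "Statistic": "Average",
        },
        "TargetValue": 70.0,  # scale to hold average memory near 70%
    },
)
```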
Your client's online retail system is being redesigned to enhance scalability and ensure that the inventory tracking component can sequentially process transactions as they occur. To prevent any loss or misordering of transaction data, which service should be implemented?
Use a managed message queuing service with FIFO capabilities
Implement a publish/subscribe service for event notifications
Deploy a serverless function with an event processing trigger
Utilize a workflow orchestration service to manage the application's tasks
Answer Description
The correct service for ensuring ordered message processing and reliable delivery between decoupled components is a managed message queuing service that offers FIFO capabilities. FIFO queues ensure that messages are processed in the exact order they are received, which is crucial for maintaining accurate inventory counts. A publish/subscribe service would not provide the required ordering guarantees for this scenario. Workflow orchestration services offer task coordination but do not inherently queue messages. A serverless compute service running code in response to events is not a messaging queue system and does not guarantee message ordering.
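On AWS this maps to an Amazon SQS FIFO queue. A minimal boto3 sketch follows; the queue name and message contents are examples.

```python
# Hypothetical sketch: an SQS FIFO queue for ordered inventory updates.
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="inventory-transactions.fifo",  # FIFO queues require the .fifo suffix
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # dedupe on a hash of the body
    },
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"sku": "ABC-123", "delta": -1}',
    MessageGroupId="inventory",  # messages in one group are delivered in order
)
```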
An application needs to send notifications to multiple downstream systems simultaneously when an event occurs. Which service is best suited for this requirement?
AWS Step Functions.
Amazon SQS.
Amazon Kinesis Data Streams.
Amazon SNS.
Answer Description
Amazon Simple Notification Service (SNS) is designed for sending messages to multiple subscribers simultaneously using the publish/subscribe messaging model. It allows a publisher to send messages to a topic, which then delivers the messages to all subscribed endpoints. Amazon SQS (Simple Queue Service) is intended for point-to-point messaging, where each message is consumed by a single consumer, making it less suitable for broadcasting messages to multiple recipients. AWS Step Functions orchestrates workflows, and Amazon Kinesis Data Streams is used for real-time data streaming and processing.
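A minimal boto3 sketch of the fan-out pattern follows. The topic name and subscriber queue ARN are placeholders, and a real SQS subscriber would also need a queue policy allowing SNS to deliver.

```python
# Hypothetical sketch: fan-out with SNS. Every subscription receives
# its own copy of each published message.
import boto3

sns = boto3.client("sns")

topic = sns.create_topic(Name="order-events")

sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:billing-queue",  # placeholder ARN
)

sns.publish(
    TopicArn=topic["TopicArn"],
    Message='{"event": "order_created", "order_id": "ord-42"}',
)
```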
An organization is looking to migrate sensitive financial records to Amazon S3 for storage and regulatory compliance purposes. The Chief Security Officer (CSO) wants to ensure that the data is encrypted at rest using a managed service that allows control over the encryption keys and their rotation. Which service should be used to encrypt the data at rest while allowing the organization full control over the encryption keys and their rotation schedules?
Amazon S3 with AWS Certificate Manager (ACM)
Amazon S3 with AWS CloudHSM
Amazon S3 with Amazon Macie
Amazon S3 with AWS Key Management Service (KMS)
Answer Description
AWS Key Management Service (KMS) provides control over the encryption keys, including key creation, rotation, and usage policies. It supports key rotation, enabling the organization to adhere to security best practices. Using KMS customer managed keys (CMKs), the organization can define rotation policies including automatic rotation of the keys every year. S3 supports server-side encryption with KMS-managed keys (SSE-KMS) for encrypting data at rest. AWS CloudHSM does provide control over encryption keys but typically is used when organizations require dedicated hardware security modules within their AWS environment. Amazon Macie is a data security service focused on data discovery and protection rather than key management. While AWS Certificate Manager manages SSL/TLS certificates, it does not manage keys for data at rest encryption.
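For example, SSE-KMS with a customer managed key can be set as a bucket's default encryption. A minimal boto3 sketch, with placeholder bucket name and key ARN:

```python
# Hypothetical sketch: default SSE-KMS encryption on a bucket using a
# customer managed key.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-financial-records",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    # Placeholder ARN of the customer managed KMS key.
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)
```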
An emerging fintech startup requires a database solution for processing and storing large volumes of financial transaction records. Transactions must be quickly retrievable based on the transaction ID, and new records are ingested at a high velocity throughout the day. Consistency is important immediately after transaction write. The startup is looking to minimize costs while ensuring the database can scale to meet growing demand. Which AWS database service should the startup utilize?
Amazon DocumentDB
Amazon RDS with Provisioned IOPS
Amazon Neptune
Amazon DynamoDB with on-demand capacity
Answer Description
Amazon DynamoDB is the optimal solution for this use case as it provides a NoSQL database that scales automatically to accommodate high ingest rates of transaction records. It is designed for applications that require consistent, single-digit millisecond latency at any scale. Additionally, DynamoDB offers strongly consistent reads as a per-request option, ensuring that a read issued after a write reflects that write. In contrast, RDS is better suited for structured data requiring relational capabilities, Neptune is tailored for graph database use cases, and DocumentDB is optimized for JSON document storage which, while capable of handling key-value pairs, is not as cost-effective or performant for this specific scenario as DynamoDB.
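A minimal boto3 sketch of this pattern follows: an on-demand table keyed by transaction ID, a write, and a strongly consistent read. Table, key, and item names are examples.

```python
# Hypothetical sketch: on-demand DynamoDB table with a strongly
# consistent read-after-write.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="transactions",
    AttributeDefinitions=[{"AttributeName": "transaction_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "transaction_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity: pay per request, no provisioning
)
dynamodb.get_waiter("table_exists").wait(TableName="transactions")

dynamodb.put_item(
    TableName="transactions",
    Item={"transaction_id": {"S": "txn-001"}, "amount": {"N": "250"}},
)

item = dynamodb.get_item(
    TableName="transactions",
    Key={"transaction_id": {"S": "txn-001"}},
    ConsistentRead=True,  # read reflects the write that just completed
)
print(item["Item"])
```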
A company has a legacy application that generates large log files which are periodically analyzed for troubleshooting and performance tuning. The application is running on an EC2 instance and the analysis tool can only access files over NFS. The company wants a scalable and durable storage solution that can be accessed concurrently from multiple EC2 instances in the same Availability Zone. Which storage solution should the company implement?
Amazon Elastic File System (Amazon EFS)
Amazon Elastic Block Store (Amazon EBS)
Amazon FSx for Windows File Server
Amazon Simple Storage Service (Amazon S3)
Answer Description
Amazon EFS is the correct choice because it is a managed file storage service that can be shared between multiple EC2 instances and supports the NFS protocol, which is required by the application. This makes it ideal for concurrent access to shared file systems. Amazon S3, while highly durable and suited for object storage, does not support the NFS protocol natively and would require additional steps to mount as a file system, making it less appropriate for this use case. Amazon EBS is block storage which does not support file-sharing capabilities and typically can be mounted to a single instance. Amazon FSx for Windows File Server provides fully managed file storage but uses the SMB protocol, which is not compatible with the requirement for NFS.
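A minimal boto3 sketch of provisioning the shared file system follows; the subnet and security group IDs are placeholders, and the mount command in the trailing comment is the usual NFS form rather than anything from the scenario.

```python
# Hypothetical sketch: creating an EFS file system and a mount target
# so multiple EC2 instances in the subnet can share it over NFS.
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-log-storage",  # idempotency token, example value
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",      # placeholder subnet ID
    SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
)

# Each EC2 instance then mounts the same file system over NFS, e.g.:
#   sudo mount -t nfs4 -o nfsvers=4.1 <fs-id>.efs.<region>.amazonaws.com:/ /mnt/logs
```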
A healthcare organization needs to establish a reliable and secure network connection between its on-premises data center and its cloud environment to support real-time data processing with minimal latency. Which service should the organization utilize to achieve this?
A dedicated, private network connection service offered by the cloud provider
A global DNS service to route traffic efficiently
A site-to-site encrypted virtual network connection over the public internet
A content delivery network service to cache data closer to users
Answer Description
A dedicated, private network connection service offered by the cloud provider is the correct choice as it provides a reliable and low-latency connection compared to internet-based options. A site-to-site VPN relies on the public internet and may not meet the performance and reliability requirements. A content delivery network (CDN) is designed to cache content closer to users, not to establish dedicated connections. A global DNS service is used for routing traffic efficiently but does not provide a private network connection.
A financial institution requires an archiving solution for critical data stored on local file servers. The data must be accessible with minimal delay when requested by on-premises users, yet older, less frequently accessed files should be economically archived in the cloud. However, after a specific period of inactivity, these older files should be transitioned to a less expensive storage class. Which solution should the architect recommend to meet these needs in a cost-efficient manner?
File Gateway mode of a hybrid cloud storage service
An online data transfer service
A fully managed file storage service for Windows files
A managed file transfer service
Answer Description
File Gateway mode of a hybrid cloud storage service provides a seamless way to integrate on-premises file systems with cloud object storage such as Amazon S3, ensuring low-latency access via local caching. It also supports lifecycle-based tiering that transitions data to cost-saving storage classes after set periods of inactivity. This makes it a suitable solution for the financial institution's requirements. The alternatives do not combine local caching, seamless integration with cloud storage, and automatic tiering based on inactivity, so they would not be the most efficient solution for this scenario.
An emerging FinTech startup is developing a mobile banking application that anticipates sporadic, significant usage peaks, primarily during monthly payroll periods. It needs a feature that processes various customer transactions and runs complex computations on demand. The startup aims to keep infrastructure management to a minimum while ensuring costs remain aligned with actual consumption. Which option is the MOST suitable for the dynamic transaction processing component of the application?
Utilize AWS Lambda functions triggered by the application, ensuring on-demand scaling and billing for compute time without server management.
Deploy the computational logic to a managed Kubernetes service using Amazon EKS, leveraging Kubernetes Horizontal Pod Autoscaler to scale based on demand.
Use AWS Batch to manage transaction processing jobs, taking advantage of its ability to efficiently run batch computing workloads across a full EC2 instance fleet.
Implement a server fleet using Amazon EC2 with Scheduled Scaling to handle expected peak periods based on predictable payroll cycles.
Configure an Amazon SQS queue to decouple incoming transactions and process them using an Auto Scaling group of EC2 instances based on queue length.
Answer Description
Using AWS Lambda for the transaction processing feature is most suitable as it allows the company to run code in response to triggers such as user actions or system events without the need for provisioning or managing servers. Lambda is capable of automatically scaling the execution in response to incoming requests, and charges are only incurred for the compute time consumed, which directly aligns with the startup's variable workload and desire for cost alignment with usage. This option also satisfies their preference for minimal infrastructure management.
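For illustration, a minimal Python Lambda handler for this kind of on-demand processing might look like the sketch below. The event shape assumes an API Gateway trigger, and the field names are invented for the example.

```python
# Hypothetical sketch of a transaction-processing Lambda handler.
import json

def lambda_handler(event, context):
    # The trigger delivers the transaction payload in the event
    # (here assumed to be an API Gateway request body).
    transaction = json.loads(event.get("body", "{}"))

    # ... run the on-demand computation here ...
    result = {
        "transaction_id": transaction.get("id"),
        "status": "processed",
    }

    # Billing covers only the compute time this invocation consumes.
    return {"statusCode": 200, "body": json.dumps(result)}
```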
A company is deploying a web application that will experience unpredictable, spiky traffic, with sudden surges during marketing events. The application must automatically adjust its compute capacity to maintain performance. Which of the following solutions will BEST meet these requirements?
Deploy the application on a fixed-size group of Amazon EC2 instances sized for peak load
Implement an Amazon EC2 Auto Scaling group with a target tracking scaling policy
Provision a single, large EC2 instance optimized for high compute power to handle the unexpected traffic
Use an Application Load Balancer without Auto Scaling to distribute traffic evenly to EC2 instances
Answer Description
An Amazon EC2 Auto Scaling group with a target tracking scaling policy is the best solution for this use case because it automatically adjusts the number of EC2 instances in response to the load on the application. Target tracking scaling policies adjust capacity to maintain a specific target for a selected metric, such as average CPU utilization or requests per target. This suits unpredictable, spiky traffic because it provides elasticity, automatically adding or removing EC2 instances as needed to hold the target. The other options, such as a fixed-size EC2 group or an Application Load Balancer without Auto Scaling, do not offer the same elasticity and cannot automatically adjust compute capacity in response to varying loads.
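A minimal boto3 sketch of attaching such a policy to an existing Auto Scaling group follows; the group name, policy name, and 50% target are example values.

```python
# Hypothetical sketch: target tracking on average CPU for an existing
# Auto Scaling group.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",  # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add or remove instances to hold ~50% CPU
    },
)
```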
A financial team at a growing company needs to generate predictive spend reports for new applications set to launch next quarter while also keeping an eye on ongoing services. Which AWS service should the Solutions Architect use to fulfill this cost-forecast reporting requirement?
AWS Cost Explorer
AWS Billing Dashboard
Detailed Billing Report
Trusted Advisor
Answer Description
The correct service to use in this case is AWS Cost Explorer because it provides detailed, customizable reports covering both historical spend and forecasted costs, helping the financial team project expenses for new and existing cloud resources. The AWS Billing Dashboard gives an at-a-glance view of current charges but does not produce forecast reports. The Detailed Billing Report delivers granular historical usage and cost data but lacks forecasting capabilities. Trusted Advisor provides best-practice checks, including cost optimization recommendations, but it does not generate predictive spend reports.
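Forecasts are also available programmatically through the Cost Explorer API. A minimal boto3 sketch, with example dates covering one quarter:

```python
# Hypothetical sketch: requesting a quarterly cost forecast from the
# Cost Explorer API. Dates are examples.
import boto3

ce = boto3.client("ce")  # Cost Explorer

forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-07-01", "End": "2025-10-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)

print(forecast["Total"]["Amount"], forecast["Total"]["Unit"])
```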