An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs. Which solution is the MOST cost-effective?
C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
The most cost-effective solution is to use DEV with Scheduled Reserved Instances and PROD with Reserved Instances. This strategy allows for the utilization of reserved instances, which offer significant cost savings compared to on-demand instances. By using scheduled reserved instances for DEV, the instances can be run for a specific number of hours each day, aligning with the required 10-hour runtime. For PROD, running the instances 24/7 makes the use of reserved instances the most cost-effective option. This strategy optimizes costs by leveraging reserved instances for both environments while efficiently utilizing the instances based on their specific requirements.
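The gap between the two schedules can be sanity-checked with a little arithmetic. This is a sketch only; the daily hour counts come from the question, everything else is illustrative:

```python
# Back-of-the-envelope yearly instance hours for each environment.
HOURS_PER_DAY_DEV = 10
HOURS_PER_DAY_PROD = 24
DAYS_PER_YEAR = 365

dev_hours = HOURS_PER_DAY_DEV * DAYS_PER_YEAR      # 3,650 hours/year
prod_hours = HOURS_PER_DAY_PROD * DAYS_PER_YEAR    # 8,760 hours/year

# A standard Reserved Instance bills for the full term whether the
# instance runs or not, so applying one to DEV would pay for the idle
# hours outside the 10-hour business window:
idle_fraction = 1 - dev_hours / prod_hours
print(f"DEV would sit idle {idle_fraction:.0%} of a standard RI term")
```

A Scheduled Reserved Instance only commits to the recurring 10-hour window, which is why it fits DEV while a standard Reserved Instance fits the always-on PROD workload.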
A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a hierarchical directory structure. The applications need to rapidly and concurrently read and write to shared storage. How can this be achieved?
A. Create an Amazon EFS file system and mount it from each EC2 instance
To achieve rapid and concurrent read and write access to shared storage, the best solution is to create an Amazon EFS (Elastic File System) file system and mount it from each EC2 instance. Amazon EFS provides a scalable and fully managed file storage service that can be easily shared across multiple instances. By mounting the EFS file system on each instance, the applications can access and modify the hierarchical directory structure concurrently and efficiently. This ensures consistent and reliable access to the shared storage for all instances in the VPC.
A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete. What should the solutions architect do to meet these requirements?
C. Configure scheduled scaling to scale up to the desired compute level
To meet the requirements of reaching the desired EC2 capacity quickly and allowing the Auto Scaling group to scale down after batch jobs are complete, the solutions architect should configure scheduled scaling. By setting up a schedule, the Auto Scaling group can automatically scale up to the desired compute level before the batch jobs start at 1 AM every night. This ensures that the peak capacity is reached in a timely manner. Once the batch jobs are complete, the Auto Scaling group can then scale down, optimizing costs and resource utilization.
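As a sketch of what scheduled scaling looks like in practice, these are the parameters a `PutScheduledUpdateGroupAction` call (boto3: `autoscaling.put_scheduled_update_group_action`) would take. The group name, capacity numbers, and scale-down time are hypothetical:

```python
# Scale up shortly before the 1 AM batch window so the desired
# capacity is already in place when the jobs begin.
scale_up = {
    "AutoScalingGroupName": "nightly-batch-asg",  # hypothetical name
    "ScheduledActionName": "scale-up-for-batch",
    "Recurrence": "45 0 * * *",                   # cron, UTC: 12:45 AM daily
    "MinSize": 10,
    "MaxSize": 10,
    "DesiredCapacity": 10,                        # hypothetical peak capacity
}

# Scale back down after the batch jobs finish (end time assumed).
scale_down = {
    "AutoScalingGroupName": "nightly-batch-asg",
    "ScheduledActionName": "scale-down-after-batch",
    "Recurrence": "0 5 * * *",
    "MinSize": 1,
    "MaxSize": 10,
    "DesiredCapacity": 1,
}
```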
A Solutions Architect must design a web application that will be hosted on AWS, allowing users to
purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content
will be available for download for 14 days before the user is denied access. Which of the following
would be the LEAST complicated implementation?
C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an
Amazon S3 origin to provide access to the file through signed URLs. Design the application to
set an expiration of 14 days for the URL
The correct answer is to use an Amazon CloudFront distribution with an OAI and configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. The application should set an expiration of 14 days for the URL. This implementation is the least complicated because it leverages the CloudFront content delivery network to improve performance and security. By using signed URLs, access to the content is controlled and limited to a specific time period. The expiration of 14 days ensures that users have access to the content for a limited time before being denied access.
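The only non-obvious detail is turning "14 days" into the expiry timestamp a signed URL carries. A minimal sketch (the purchase timestamp is an example value):

```python
from datetime import datetime, timedelta, timezone

# CloudFront signed URLs embed an expiry as a Unix timestamp; for a
# 14-day access window, that is simply "time of purchase + 14 days".
ACCESS_WINDOW = timedelta(days=14)

purchase_time = datetime(2024, 1, 1, tzinfo=timezone.utc)  # example only
expires_at = purchase_time + ACCESS_WINDOW
expires_epoch = int(expires_at.timestamp())  # value placed in the signed URL

window_seconds = int(ACCESS_WINDOW.total_seconds())  # 1,209,600 s
```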
A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2
instances behind an Application Load Balancer and a relational database. The database should be
highly available and fault tolerant. Which database implementations will meet these requirements?
D. MySQL-compatible Amazon Aurora Multi-AZ
E. Amazon RDS for SQL Server Standard Edition Multi-AZ
The correct answer is MySQL-compatible Amazon Aurora Multi-AZ and Amazon RDS for SQL Server Standard Edition Multi-AZ.
These two database implementations, MySQL-compatible Amazon Aurora Multi-AZ and Amazon RDS for SQL Server Standard Edition Multi-AZ, are designed to provide high availability and fault tolerance.
Amazon Aurora Multi-AZ provides automatic failover to a standby replica in the event of a failure, ensuring that the database remains available even in the case of a hardware or software failure.
Similarly, Amazon RDS for SQL Server Standard Edition Multi-AZ also provides high availability by automatically replicating the database to a standby instance in a different Availability Zone.
By leveraging these two database implementations, the mission-critical web application can ensure that the database remains highly available and fault tolerant.
A company's web application is running on Amazon EC2 instances behind an Application Load
Balancer. The company recently changed its policy, which now requires the application to be
accessed from one specific country only. Which configuration will meet this requirement?
C. Configure AWS WAF on the Application Load Balancer in a VPC.
A company has an Amazon EC2 instance running on a private subnet that needs to access public websites to download patches and updates. The company does not want external websites to see the EC2 instance IP address or initiate connections to it. How can a solutions architect achieve this?
B. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet
through the NAT gateway
To achieve the objective of allowing the EC2 instance in the private subnet to access public websites without revealing its IP address or allowing incoming connections, a NAT gateway can be created in a public subnet. By routing outbound traffic from the private subnet through the NAT gateway, the EC2 instance's IP address is hidden from external websites. This ensures that only outbound connections are initiated from the EC2 instance, providing the desired level of security and privacy.
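The routing piece amounts to one default route in the private subnet's route table. A sketch of the parameters such a route would take (boto3: `ec2.create_route`); the resource IDs are hypothetical placeholders:

```python
# Route that sends the private subnet's internet-bound traffic
# through the NAT gateway in the public subnet.
route_params = {
    "RouteTableId": "rtb-0private1234567890",  # private subnet's route table
    "DestinationCidrBlock": "0.0.0.0/0",       # all internet-bound traffic
    "NatGatewayId": "nat-0abc1234567890def",   # NAT gateway in the public subnet
}
```

Because the NAT gateway only performs outbound address translation, external hosts see the gateway's Elastic IP and cannot initiate connections back to the instance.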
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days.
The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What
should a solutions architect do to meet these requirements?
A. Use AWS Snowball
AWS Snowball is a service that allows for the migration of large amounts of data to and from the AWS Cloud. It is specifically designed for situations where the network bandwidth is limited or the data size is too large to be transferred over the network within a reasonable time frame. In this scenario, with a limited network bandwidth of 15 Mbps, it would not be feasible to transfer 20 TB of data within 30 days. Therefore, using AWS Snowball, which physically transfers the data using a secure appliance, would be the most appropriate solution to meet the requirements.
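The infeasibility of a network transfer is easy to verify with arithmetic (decimal TB assumed):

```python
# Would 20 TB fit through a 15 Mbps link, capped at 70% utilization,
# within the 30-day deadline?
DATA_TB = 20
LINK_MBPS = 15
MAX_UTILIZATION = 0.70

data_bits = DATA_TB * 1e12 * 8                  # 20 TB expressed in bits
usable_bps = LINK_MBPS * 1e6 * MAX_UTILIZATION  # 10.5 Mbit/s effective
transfer_days = data_bits / usable_bps / 86_400

print(f"~{transfer_days:.0f} days")  # far beyond the 30-day deadline
```

At roughly 176 days, the transfer misses the deadline by almost six months, which is what rules out an online transfer and points to Snowball.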
A company has a website running on Amazon EC2 instances across two Availability Zones. The
company is expecting spikes in traffic on specific holidays, and wants to provide a consistent user
experience. How can a solutions architect meet this requirement?
D. Use scheduled scaling.
To meet the requirement of providing a consistent user experience during spikes in traffic on specific holidays, a solutions architect can use scheduled scaling. With scheduled scaling, the architect can configure the auto scaling group to automatically adjust the number of EC2 instances based on predefined schedules. This allows the architect to anticipate the spikes in traffic during holidays and scale up the resources accordingly, ensuring that the website can handle the increased load and provide a consistent user experience.
An ecommerce company is running a multi-tier application on AWS. The front-end and backend
tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier
communicates with the RDS instance. There are frequent calls to return identical datasets from the
database that are causing performance slowdowns. Which action should be taken to improve the
performance of the backend?
B. Implement Amazon ElastiCache to cache the large datasets
Implementing Amazon ElastiCache to cache the large datasets can improve the performance of the backend. ElastiCache is an in-memory data store that can be used to cache frequently accessed data, reducing the need to fetch it from the database every time. By caching the large datasets, the backend tier can retrieve the data faster, resulting in improved performance and reduced latency. This solution is especially effective for identical datasets that are frequently accessed, as it eliminates the need to make repeated calls to the database.
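The access pattern described is the classic cache-aside pattern. A minimal sketch, where a plain dict stands in for ElastiCache (Redis/Memcached) and `fetch_from_database` is a hypothetical stand-in for the real RDS query:

```python
cache = {}

def fetch_from_database(query):
    # Placeholder for the expensive round trip to RDS for MySQL.
    return f"rows-for:{query}"

def get_dataset(query):
    if query in cache:                 # cache hit: skip the database entirely
        return cache[query]
    result = fetch_from_database(query)
    cache[query] = result              # populate for subsequent identical calls
    return result

get_dataset("SELECT * FROM products")  # miss: hits the database
get_dataset("SELECT * FROM products")  # hit: served from memory
```

Because the slowdowns come from identical repeated queries, nearly all of that traffic becomes cache hits after the first call.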
A company has an on-premises data center that is running out of storage capacity. The
company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The
solution must allow for immediate retrieval of data at no additional cost. How can these requirements be met?
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage
Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3
To meet the requirements of minimizing bandwidth costs and allowing for immediate retrieval of data at no additional cost, the best solution is to deploy AWS Storage Gateway using stored volumes to store data locally. This allows the company to retain copies of frequently accessed data subsets locally, reducing the need for frequent data retrieval from Amazon S3 and minimizing bandwidth costs. Additionally, using Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3 ensures data protection and availability without incurring additional costs for immediate retrieval.
A company is processing data on a daily basis. The results of the operations are stored in an
Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for
occasional analysis. What is the MOST cost-effective storage solution alternative to the current configuration?
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent
Access (S3 One Zone-IA) after 30 days
Configuring a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days is the most cost-effective storage solution alternative. This is because S3 One Zone-IA offers lower storage costs compared to S3 Standard-IA, while still providing immediate access to the data. By transitioning the objects to S3 One Zone-IA, the company can save on storage costs without sacrificing accessibility for occasional analysis.
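As a sketch, this is the lifecycle configuration in the shape boto3's `put_bucket_lifecycle_configuration` expects. The rule ID and object prefix are hypothetical:

```python
# Transition daily results to S3 One Zone-IA 30 days after creation.
lifecycle_config = {
    "Rules": [
        {
            "ID": "results-to-onezone-ia",     # hypothetical rule name
            "Filter": {"Prefix": "results/"},  # assumed object prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "ONEZONE_IA"},
            ],
        }
    ]
}
```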
A company delivers files in Amazon S3 to certain users who do not have AWS credentials.
These users must be given access for a limited time. What should a solutions architect do to securely
meet these requirements?
B. Generate a pre signed URL to share with the users
To securely meet the requirements of providing limited access to users without AWS credentials, a solutions architect should generate a pre-signed URL to share with the users. A pre-signed URL is a time-limited URL that provides temporary access to specific objects in an S3 bucket. This allows the users to access the files without needing AWS credentials, while also ensuring that the access is limited to a specific time period. This approach provides a secure and controlled method for sharing files with external users.
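For illustration, this is roughly the shape a presigned URL takes. In practice the SDK (for example boto3's `generate_presigned_url`) computes the signature; the bucket, key, and credential values here are fake placeholders:

```python
from urllib.parse import urlencode

# Illustrative query parameters of a SigV4 presigned URL.
params = {
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Credential": "AKIAEXAMPLE/20240101/us-east-1/s3/aws4_request",
    "X-Amz-Date": "20240101T000000Z",
    "X-Amz-Expires": "3600",        # the link stops working after this many seconds
    "X-Amz-SignedHeaders": "host",
    "X-Amz-Signature": "deadbeef",  # fake; normally an HMAC over the request
}
url = "https://example-bucket.s3.amazonaws.com/report.pdf?" + urlencode(params)
```

The time limit lives in the URL itself (`X-Amz-Expires`), and the signature prevents tampering with it, which is what makes the link safe to hand to users who have no AWS credentials.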
A company wants to run a hybrid workload for data processing. The data needs to be accessed
by on-premises applications for local data processing using an NFS protocol, and must also be
accessible from the AWS Cloud for further analytics and batch processing. Which solution will meet these requirements?
A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform
analytics on this data in the AWS Cloud
The correct solution is to use an AWS Storage Gateway file gateway to provide file storage to AWS and then perform analytics on this data in the AWS Cloud. This solution allows the company to access the data from on-premises applications for local data processing using an NFS protocol, while also making the data accessible from the AWS Cloud for further analytics and batch processing. The file gateway provides a seamless integration between on-premises and cloud storage, allowing the company to leverage the benefits of both environments for their hybrid workload.
A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3. What should a solutions architect recommend to satisfy these requirements?
D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)
The solution architect should recommend client-side encryption with a master key stored in AWS Key Management Service (AWS KMS) to satisfy the internal security compliance requirement of encrypting data before sending it to Amazon S3. This approach ensures that the sensitive user data is encrypted before it leaves the client's environment, providing an additional layer of security. The master key stored in AWS KMS allows for secure management and control of the encryption keys.
A solutions architect is moving the static content from a public website hosted on Amazon EC2
instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the
static assets. The security group used by the EC2 instances restricts access to a limited set of IP
ranges. Access to the static content should be similarly restricted. Which combination of steps will
meet these requirements? (Choose two.)
A. Create an origin access identity (OAI) and associate it with the distribution. Change the
permissions in the bucket policy so that only the OAI can read the objects
B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2
security group. Associate this new web ACL with the CloudFront distribution
To meet the requirements of restricting access to the static content, the architect should create an origin access identity (OAI) and associate it with the CloudFront distribution. By changing the permissions in the bucket policy to only allow the OAI to read the objects, access to the static content is limited. Additionally, the architect should create an AWS WAF web ACL that includes the same IP restrictions as the EC2 security group. By associating this web ACL with the CloudFront distribution, the IP restrictions are enforced and access to the static assets is further restricted.
A company is investigating potential solutions that would collect, process, and store users'
service usage data. The business objective is to create an analytics capability that will enable the
company to gather operational insights quickly using standard SQL queries. The solution should be
highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the
data tier. Which solution should a solutions architect recommend?
C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design
The recommended solution is to use a fully managed Amazon RDS for MySQL database in a Multi-AZ design. This solution ensures high availability and ACID compliance in the data tier. Amazon RDS for MySQL is a managed database service that handles routine tasks like backups, software patching, and automatic failure detection and recovery. The Multi-AZ design provides redundancy by automatically replicating data to a standby instance in a different Availability Zone. This design ensures that data is protected and available even in the event of a failure.
A company recently launched its website to serve content to its global user base. The company
wants to store and accelerate the delivery of static content to its users by leveraging Amazon
CloudFront with an Amazon EC2 instance attached as its origin. How should a solutions architect
optimize high availability for the application?
A. Use Lambda@Edge for CloudFront
Using Lambda@Edge for CloudFront allows for the execution of custom code at AWS edge locations, which helps optimize the delivery of content to users. This can include modifying responses, making decisions based on user requests, or implementing additional security measures. By leveraging Lambda@Edge, the company can enhance the availability and performance of its website by customizing the content delivery process according to the specific needs of its global user base.
An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to enable secure access to the EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns. Which solution will meet these requirements?
A. Set up a VPC peering connection between VPC-A and VPC-B.
Setting up a VPC peering connection between VPC-A and VPC-B will meet the requirements of secure access without a single point of failure or bandwidth concerns. VPC peering allows communication between instances in different VPCs using private IP addresses, without the need for internet gateways, VPN connections, or NAT devices. It provides a secure and reliable connection between the two VPCs, ensuring that the application running in VPC-A can access files in the EC2 instance in VPC-B.
A company currently stores symmetric encryption keys in a hardware security module (HSM). A
solution architect must design a solution to migrate key management to AWS. The solution should
allow for key rotation and support the use of customer provided keys. Where should the key material
be stored to meet these requirements?
D. AWS Key Management Service (AWS KMS)
The AWS Key Management Service (AWS KMS) is the appropriate service to store the key material in order to meet the requirements of key rotation and support for customer provided keys. AWS KMS is a managed service that allows for the creation and control of encryption keys. It provides features such as key rotation, which allows for the automatic generation of new keys to enhance security. Additionally, AWS KMS supports the use of customer provided keys, allowing the company to have full control over their encryption keys.
A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The
company's chief information officer wants to simplify the on- premises backup infrastructure and
reduce costs by eliminating the use of physical backup tapes. The company must preserve the
existing investment in the on-premises backup applications and workflows. What should a solutions architect recommend?
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface
The solution architect should recommend setting up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface. This solution allows the company to eliminate the use of physical backup tapes, reducing costs and simplifying the on-premises backup infrastructure. Additionally, it preserves the existing investment in the on-premises backup applications and workflows.
A company hosts an application on an Amazon EC2 instance that requires a maximum of 200
GB storage space. The application is used infrequently, with peaks during mornings and evenings.
Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned
about costs and has asked a solutions architect to recommend the most cost-effective storage option
that does not sacrifice performance. Which solution should the solutions architect recommend?
B. Amazon EBS General Purpose SSD (gp2)
The solutions architect should recommend Amazon EBS General Purpose SSD (gp2) as the most cost-effective storage option that does not sacrifice performance. Although the application is used infrequently, it requires a maximum of 200 GB storage space and experiences peaks in disk I/O. General Purpose SSD (gp2) offers a balance between performance and cost, providing consistent performance for a wide range of workloads. It is suitable for applications with moderate I/O requirements, making it the appropriate choice in this scenario.
A company's application hosted on Amazon EC2 instances needs to access an Amazon S3
bucket. Due to data sensitivity, traffic cannot traverse the internet. How should a solutions architect design the solution?
B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.
To ensure that the company's application can access the Amazon S3 bucket without traffic traversing the internet, a solutions architect should configure a VPC gateway endpoint for Amazon S3 in the VPC. This allows the application to connect directly to the S3 bucket within the VPC, without needing to go over the internet. This ensures a secure and private connection for accessing the sensitive data in the S3 bucket.
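As a sketch, these are the parameters creating such a gateway endpoint would take (boto3: `ec2.create_vpc_endpoint`). The VPC and route table IDs are hypothetical, and the Region in the service name is an example:

```python
# Gateway endpoint for S3: traffic to the bucket stays on the AWS
# network instead of crossing the internet.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.s3",
    # Associating route tables adds the S3 prefix-list route for the
    # subnets that use them.
    "RouteTableIds": ["rtb-0123456789abcdef0"],
}
```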
A company has two applications it wants to migrate to AWS. Both applications process a large
set of files by accessing the same files at the same time. Both applications need to read the files with
low latency. Which architecture should a solutions architect recommend for this situation?
D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic
File System (Amazon EFS) with General Purpose performance mode and Bursting
Throughput mode to store the data
The recommended architecture is to configure two Amazon EC2 instances to run both applications and to configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data. This architecture allows both applications to access the same files at the same time with low latency. Amazon EFS provides a scalable file storage system that can handle concurrent access from multiple instances, making it suitable for this scenario. The General Purpose performance mode ensures low latency for file access, and Bursting Throughput mode allows for bursts of high throughput when needed.
An ecommerce company has noticed performance degradation of its Amazon RDS based web
application. The performance degradation is attributed to an increase in the number of read-only SQL
queries triggered by business analysts. A solution architect needs to solve the problem with minimal
changes to the existing web application. What should the solution architect recommend?
C. Create a read replica of the primary database and have the business analysts run their queries on it
The solution architect should recommend creating a read replica of the primary database and having the business analysts run their queries on it. This solution allows the business analysts to perform their read-only queries without impacting the performance of the primary database. By offloading the read workload to the read replica, the web application's performance degradation can be minimized, and the existing architecture can remain largely unchanged.
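The "minimal change" is typically just pointing read-only connections at a different endpoint. A sketch, with hypothetical endpoints standing in for the values RDS would provide:

```python
PRIMARY_ENDPOINT = "app-db.abc123.us-east-1.rds.amazonaws.com"
REPLICA_ENDPOINT = "app-db-replica.abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql):
    # The analysts' workload is read-only, so SELECT statements can be
    # routed to the replica; everything else stays on the primary.
    is_read = sql.lstrip().upper().startswith("SELECT")
    return REPLICA_ENDPOINT if is_read else PRIMARY_ENDPOINT
```

The web application's write path is untouched; only the analysts' connection string changes.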
A company is running a highly sensitive application on Amazon EC2 backed by an Amazon
RDS database. Compliance regulations mandate that all personally identifiable information (PII) be
encrypted at rest. Which solution should a solutions architect recommend to meet this requirement
with the LEAST amount of changes to the infrastructure?
D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes
To meet the compliance regulations and encrypt personally identifiable information (PII) at rest, the recommended solution is to configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys. This solution ensures that both the instance and database volumes are encrypted using AWS KMS keys, providing a secure environment for the highly sensitive application. It requires the least amount of changes to the existing infrastructure while meeting the encryption requirement.
A company running an on-premises application is migrating the application to AWS to increase
its elasticity and availability. The current architecture uses a Microsoft SQL Server database with
heavy read activity. The company wants to explore alternate database options and migrate database
engines, if needed. Every 4 hours, the development team does a full copy of the production database
to populate a test database. During this period, users experience latency. What should a solution
architect recommend as a replacement database?
D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and
restore snapshots from RDS for the test database
The solution architect should recommend using Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database. This option provides high availability and scalability by using Multi-AZ deployment and read replicas. It also allows for easy restoration of the test database from snapshots, minimizing the impact on users during the copy process.
A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for
each of its developer accounts. The company has created a central AWS account for streamlining
management and audit reviews. An internal auditor needs to access the CloudTrail logs, yet access
needs to be restricted for all developer account users. The solution must be secure and optimized.
How should a solutions architect meet these requirements?
A. Configure an AWS Lambda function in each developer account to copy the log files to the
central account. Create an IAM role in the central account for the auditor. Attach an IAM policy
providing read-only permissions to the bucket
The correct answer is to configure an AWS Lambda function in each developer account to copy the log files to the central account. This solution ensures that the CloudTrail logs from each developer account are securely and efficiently transferred to the central account. By creating an IAM role in the central account for the auditor and attaching an IAM policy with read-only permissions to the bucket, the auditor can access the logs without granting unnecessary access to the developer account users. This solution meets the requirements of providing secure and optimized access to the CloudTrail logs.
A company has several business systems that require access to data stored in a file share. the
business systems will access the file share using the Server Message Block (SMB) protocol. The file
share solution should be accessible from both the company's legacy on-premises environment and AWS. Which services meet the business requirements? (Choose two.)
C. Amazon FSx for Windows
E. AWS Storage Gateway file gateway
The company's business systems require access to data stored in a file share using the Server Message Block (SMB) protocol. To meet this requirement, the company can use Amazon FSx for Windows, which provides fully managed Windows file servers that are accessible over the SMB protocol. Additionally, the company can also use AWS Storage Gateway file gateway, which is a hybrid cloud storage service that enables on-premises applications to seamlessly use AWS cloud storage, including file storage.
A company is using Amazon EC2 to run its big data analytics workloads. These variable
workloads run each night, and it is critical they finish by the start of business the following day. A
solutions architect has been tasked with designing the MOST cost-effective solution. Which solution
will accomplish this?
C. Reserved Instances
Reserved Instances are the most cost-effective solution for running variable workloads that have a predictable schedule. By purchasing Reserved Instances, the company can commit to using a specific instance type in a specific region for a one or three-year term, which provides a significant discount compared to On-Demand Instances. This allows the company to save costs while ensuring the availability of the required resources for their big data analytics workloads. Spot Instances may provide even greater cost savings, but they are not suitable for workloads that have strict time constraints and need to finish by a specific time.
A company has a Microsoft Windows-based application that must be migrated to AWS. This
application requires the use of a shared Windows file system attached to multiple Amazon EC2
Windows instances. What should a solution architect do to accomplish this?
C. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows instance
To accomplish the migration of the Microsoft Windows-based application to AWS with a shared Windows file system, the solution architect should configure Amazon FSx for Windows File Server. This service provides a fully managed native Windows file system that is accessible from multiple Amazon EC2 Windows instances. By mounting the Amazon FSx volume to each Windows instance, the application can continue to use the shared file system seamlessly. This option is the most appropriate and efficient solution for the given scenario.
A company has created an isolated backup of its environment in another Region. The
application is running in warm standby mode and is fronted by an Application Load Balancer (ALB).
The current failover process is manual and requires updating a DNS alias record to point to the
secondary ALB in another Region. What should a solutions architect do to automate the failover process?
C. Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint
To automate the failover process, a solution architect should create a CNAME record on Amazon Route 53 pointing to the ALB endpoint. This allows the DNS alias record to be updated automatically, directing traffic to the secondary ALB in another Region. By using a CNAME record, the failover process can be seamlessly automated without the need for manual updates.
A company has a mobile chat application with a data store based in Amazon DynamoDB. Users
would like new messages to be read with as little latency as possible. A solutions architect needs to
design an optimal solution that requires minimal application changes. Which method should the
solutions architect select?
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update
the code to use the DAX endpoint
Adding Amazon DynamoDB Accelerator (DAX) to the mobile chat application's data store can significantly reduce the latency for reading new messages. By configuring DAX for the new messages table and updating the code to use the DAX endpoint, the application can benefit from the in-memory caching provided by DAX. This allows for faster access to frequently accessed data, improving the overall performance of the application without requiring major changes to the existing codebase.
A company is creating an architecture for a mobile app that requires minimal latency for its
users. The company's architecture consists of Amazon EC2 instances behind an Application Load
Balancer running in an Auto Scaling group. The EC2 instances connect to Amazon RDS. Application
beta testing showed there was a slowdown when reading the data. However the metrics indicate that
the EC2 instances do not cross any CPU utilization thresholds. How can this issue be addressed?
C. Add read replicas for the RDS instances and direct read traffic to the replica
To address the slowdown in reading data while minimizing latency, the company should add read replicas for the RDS instances and direct read traffic to the replica. By adding read replicas, the workload can be distributed across multiple instances, reducing the load on the main RDS instance and improving read performance. This solution is more effective than reducing the CPU utilization threshold or replacing the load balancer. Adding Multi-AZ support to the RDS instances would improve availability but may not directly address the latency issue.
A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer.
Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a
message including a phone number and email address that users can reach if the primary website is unavailable.
How should the company deploy this solution?
A. Use Amazon S3 website hosting for the backup website and Route 53 failover routing
The company should use Amazon S3 website hosting for the backup website and Route 53 failover routing policy. This solution allows the company to host the backup website on Amazon S3, which provides high availability and durability. Route 53's failover routing policy ensures that traffic is directed to the backup website if the primary website is down. This setup allows users to reach the backup website and contact the company through the provided phone number and email address.
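As an illustrative sketch of the failover routing policy (the domain, DNS names, hosted zone IDs, and health check ID below are all placeholders, not values from the scenario), the setup can be expressed as a Route 53 change batch with a PRIMARY alias record for the ALB and a SECONDARY alias record for the S3 website endpoint:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "primary-alb",
        "Failover": "PRIMARY",
        "HealthCheckId": "HEALTH-CHECK-ID-PLACEHOLDER",
        "AliasTarget": {
          "HostedZoneId": "ZELBEXAMPLE",
          "DNSName": "primary-alb-123456.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "backup-s3",
        "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "ZS3WEBSITEEXAMPLE",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

The health check attached to the primary record is what gives Route 53 a signal to switch traffic to the secondary record when the primary website goes down.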
A media company is evaluating the possibility of moving its systems to the AWS Cloud. The
company needs at least 10 TB of storage with the maximum possible
I/O performance for video processing. 300 TB of very durable storage for storing media content, and
900 TB of storage to meet requirements for archival media that is not in use anymore. Which set of
services should a solutions architect recommend to meet these requirements?
A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and
Amazon S3 Glacier for archival storage
The recommended set of services includes Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage. Amazon EBS provides high-performance block storage for the company's systems, ensuring fast I/O performance for video processing. Amazon S3 offers durable storage for media content, ensuring that the data remains intact and accessible. Lastly, Amazon S3 Glacier provides long-term archival storage for media that is no longer in use, meeting the company's requirements for storing large amounts of data in a cost-effective manner.
A company uses Amazon S3 as its object storage solution. The company has thousands of S3
buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than
others. A solutions architect found that lifecycle policies are not consistently implemented or are
implemented partially, resulting in data being stored in high-cost storage. Which solution will lower
costs without compromising the availability of objects?
C. Use S3 Intelligent-Tiering storage
Using S3 Intelligent-Tiering storage will lower costs without compromising the availability of objects. This storage class automatically moves objects between two access tiers: frequent access and infrequent access. It monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier, which has a lower storage cost. If the objects are accessed again, they are automatically moved back to the frequent access tier. This ensures that less frequently accessed data is stored in a lower-cost storage tier while still being readily available when needed.
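A lifecycle rule that transitions all objects in a bucket to the Intelligent-Tiering storage class might look like the following minimal sketch (the rule ID is illustrative, and the empty filter applies the rule to every object):

```json
{
  "Rules": [
    {
      "ID": "MoveToIntelligentTiering",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        {
          "Days": 0,
          "StorageClass": "INTELLIGENT_TIERING"
        }
      ]
    }
  ]
}
```

Applying one rule like this across the buckets replaces the inconsistent per-bucket lifecycle policies, because tiering decisions are then made per object by the storage class itself.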
An application is running on Amazon EC2 instances. Sensitive information required for the
application is stored in an Amazon S3 bucket. The bucket needs to be protected from internet access
while only allowing services within the VPC access to the bucket. Which combination of actions
should a solutions architect take to accomplish this? (Choose two.)
A. Create a VPC endpoint for Amazon S3
C. Apply a bucket policy to restrict access to the S3 endpoint
To protect the Amazon S3 bucket from internet access and only allow access from services within the VPC, two actions should be taken. First, a VPC endpoint for Amazon S3 should be created. This allows communication between the VPC and the S3 bucket without going over the internet. Second, a bucket policy should be applied to restrict access to the S3 endpoint. This policy can specify which services or resources within the VPC are allowed to access the bucket, ensuring that only authorized entities can access the sensitive information.
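A bucket policy of the kind described might look like this sketch, which denies all S3 actions unless the request arrives through a specific VPC endpoint (the bucket name and endpoint ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessOnlyFromVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-sensitive-bucket",
        "arn:aws:s3:::example-sensitive-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-0example1234567890"
        }
      }
    }
  ]
}
```

The explicit deny with a StringNotEquals condition on aws:sourceVpce means any request that does not come through the named endpoint is rejected, including requests from the internet.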
A web application runs on Amazon EC2 instances behind an Application Load Balancer. The
application allows users to create custom reports of historical weather data. Generating a report can
take up to 5 minutes. These long-running requests use many of the available incoming connections,
making the system unresponsive to other users. How can a solutions architect make the system more responsive?
A. Use Amazon SQS with AWS Lambda to generate reports
By using Amazon SQS with AWS Lambda to generate reports, the long-running requests can be offloaded from the web application and processed asynchronously. This means that the web application can quickly respond to other users' requests, making the system more responsive. SQS acts as a buffer, storing the requests until they can be processed by the Lambda function. This solution allows for scalability and ensures that the system can handle a large number of requests without becoming unresponsive.
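A minimal sketch of the worker side, assuming a hypothetical generate_report function and the standard event shape that Lambda receives when triggered by SQS (the field names in the report are illustrative):

```python
import json


def generate_report(params):
    # Placeholder for the long-running report generation step (hypothetical).
    return {"report_for": params.get("city"), "status": "complete"}


def handler(event, context=None):
    """Lambda handler invoked with a batch of SQS messages.

    The web tier only enqueues the report request and returns immediately;
    each SQS record carries the report parameters as a JSON body.
    """
    results = []
    for record in event.get("Records", []):
        params = json.loads(record["body"])
        results.append(generate_report(params))
    return results
```

Because the HTTP request ends as soon as the message is enqueued, no incoming connection is held open for the 5-minute generation, and Lambda scales the workers with the queue depth.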
A solutions architect must create a highly available bastion host architecture. The solution needs
to be resilient within a single AWS Region and should require only minimal effort to maintain. What
should the solutions architect do to meet these requirements?
D. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target
To create a highly available bastion host architecture, the solutions architect should use a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target. This setup ensures that the bastion host is distributed across multiple zones, providing resilience within a single AWS Region. Additionally, using Auto Scaling allows for automatic scaling of the bastion host based on demand, reducing the effort required for maintenance.
A three-tier web application processes orders from customers. The web tier consists of Amazon
EC2 instances behind an Application Load Balancer, a middle tier of three EC2 instances decoupled
from the web tier using Amazon SQS, and an Amazon DynamoDB backend. At peak times,
customers who submit orders using the site have to wait much longer than normal to receive
confirmations due to lengthy processing times. A solutions architect needs to reduce these processing times.
Which action will be MOST effective in accomplishing this?
D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth
Adding more instances to the middle tier using Amazon EC2 Auto Scaling based on the SQS queue depth will be the most effective action to reduce processing times. By scaling out the middle tier, the system can handle a higher volume of incoming orders and process them more quickly. This will help to alleviate the bottleneck and reduce the wait times for customers receiving order confirmations.
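The scaling decision can be sketched as backlog-per-instance arithmetic (the function and its parameters are illustrative, not an AWS API):

```python
import math


def desired_capacity(queue_depth, msgs_per_instance, min_size=1, max_size=10):
    """Compute the middle-tier instance count from the SQS backlog.

    queue_depth:       ApproximateNumberOfMessages reported for the queue
    msgs_per_instance: how many queued messages one instance can work
                       through within the acceptable latency target
    """
    needed = math.ceil(queue_depth / msgs_per_instance)
    # Clamp to the Auto Scaling group's configured bounds.
    return max(min_size, min(needed, max_size))
```

For example, a backlog of 450 messages with a target of 100 messages per instance yields a desired capacity of 5; an empty queue falls back to the group minimum.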
A company relies on an application that needs at least 4 Amazon EC2 instances during regular
traffic and must scale up to 12 EC2 instances during peak loads.
The application is critical to the business and must be highly available. Which solution will meet these requirements?
A. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the
maximum to 12, with 2 in Availability Zone A and 2 in Availability Zone B
Deploying the EC2 instances in an Auto Scaling group with a minimum of 4 and a maximum of 12, with 2 instances in Availability Zone A and 2 instances in Availability Zone B, will meet the requirements. This configuration ensures that the application has at least 4 instances during regular traffic, providing the necessary capacity. During peak loads, the Auto Scaling group will automatically scale up to a maximum of 12 instances, allowing the application to handle the increased demand. Distributing the instances across Availability Zones also improves the availability of the application, as it can continue to operate even if one Availability Zone experiences issues.
A solutions architect must design a solution for a persistent database that is being migrated from
on-premises to AWS. The database requires 64,000 IOPS according to the database administrator. If
possible, the database administrator wants to use a single Amazon Elastic Block Store (Amazon
EBS) volume to host the database instance. Which solution effectively meets the database requirements?
B. Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS
SSD (io1) volume attached. Configure the volume to have 64,000 IOPS
The correct solution is to create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached and configure the volume to have 64,000 IOPS. This solution meets the criteria of the database administrator by providing the required IOPS for the database. The Nitro-based instances are optimized for high-performance and can handle the workload efficiently. The use of Provisioned IOPS SSD ensures consistent and predictable performance for the database.
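As a sketch in CloudFormation (the logical name, Availability Zone, and volume size are assumptions), an io1 volume provisioned at 64,000 IOPS could be declared as:

```yaml
Resources:
  DatabaseVolume:
    Type: AWS::EC2::Volume
    Properties:
      AvailabilityZone: us-east-1a
      Size: 1280          # io1 allows at most 50 IOPS per GiB, so 64,000 IOPS needs >= 1,280 GiB
      VolumeType: io1
      Iops: 64000
```

Note that reaching 64,000 IOPS on a single io1 volume also requires attaching it to a Nitro-based instance; on other instance types the volume is limited to lower IOPS.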
A solutions architect is designing an architecture for a new application that requires low network
latency and high network throughput between Amazon EC2 instances. Which component should be
included in the architectural design?
B. Placement group using a cluster placement strategy
A placement group using a cluster placement strategy should be included in the architectural design. This is because a cluster placement strategy ensures that EC2 instances are placed in close proximity to each other, reducing network latency. It also allows for high network throughput as it enables instances within the placement group to communicate with each other using enhanced networking. This makes it the ideal choice for an application that requires low network latency and high network throughput between EC2 instances.
A company has global users accessing an application deployed in different AWS Regions,
exposing public static IP addresses. The users are experiencing poor performance when accessing
the application over the internet.
What should a solutions architect recommend to reduce internet latency?
A. Set up AWS Global Accelerator and add endpoints
To reduce internet latency for global users accessing the application deployed in different AWS Regions, a solutions architect should recommend setting up AWS Global Accelerator and adding endpoints. AWS Global Accelerator is a service that improves the performance and availability of applications by directing traffic to the nearest AWS edge location. By adding endpoints, the architect can distribute the traffic across multiple regions, reducing latency and improving the user experience. This solution ensures that users can access the application with better performance and reduced latency.
A company wants to migrate a workload to AWS. The chief information security officer requires
that all data be encrypted at rest when stored in the cloud. The company wants complete control of
encryption key lifecycle management.
The company must be able to immediately remove the key material and audit key usage
independently of AWS CloudTrail. The chosen services should integrate with other storage services
that will be used on AWS. Which service satisfies these security requirements?
A. AWS CloudHSM with the CloudHSM client
AWS CloudHSM with the CloudHSM client satisfies the security requirements because it provides complete control of encryption key lifecycle management. With CloudHSM, the company can immediately remove the key material and audit key usage independently of AWS CloudTrail. Additionally, CloudHSM integrates with other storage services on AWS, allowing the company to securely store and manage their encryption keys while migrating their workload to the cloud.
A company recently deployed a two-tier application in two Availability Zones in the us-east-1
Region. The databases are deployed in a private subnet while the web servers are deployed in a
public subnet. An internet gateway is attached to the VPC. The application and database run on
Amazon EC2 instances. The database servers are unable to access patches on the internet. A
solutions architect needs to design a solution that maintains database security with the least operational overhead.
Which solution meets these requirements?
A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it
with an Elastic IP address. Update the routing table of the private subnet to use it as the default route.
The correct solution is to deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. By doing this, the database servers in the private subnet will be able to access patches on the internet through the NAT gateway. Updating the routing table of the private subnet to use the NAT gateway as the default route ensures that all outgoing traffic from the private subnet is directed through the NAT gateway, maintaining database security. This solution requires the least operational overhead as it leverages the built-in NAT gateway service provided by AWS.
A company has an application with a REST-based Interface that allows data to be received in
near-real time from a third-party vendor. Once received, the application processes and stores the data
for further analysis. The application is running on Amazon EC2 instances. The third-party vendor has
received many 503 Service Unavailable Errors when sending data to the application. When the data
volume spikes, the compute capacity reaches its maximum limit and the application is unable to
process all requests. Which design should a solutions architect recommend to provide a more scalable solution?
A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions
Using Amazon Kinesis Data Streams to ingest the data and processing it with AWS Lambda functions would provide a more scalable solution. Kinesis Data Streams can handle high volumes of data and can scale automatically to accommodate spikes in data volume. AWS Lambda functions can be used to process the data in near-real time, allowing for efficient analysis. This combination of services would ensure that the application can handle the increased data load and prevent 503 Service Unavailable Errors.
A solutions architect needs to design a low-latency solution for a static single-page application
accessed by users utilizing a custom domain name. The solution must be serverless, encrypted in
transit, and cost-effective. Which combination of AWS services and features should the solutions
architect use? (Choose two.)
A. Amazon S3
D. Amazon CloudFront
The solutions architect should use Amazon S3 and Amazon CloudFront for this low-latency, serverless, encrypted in transit, and cost-effective solution. Amazon S3 is a highly scalable storage service that can host static assets for the single-page application. Amazon CloudFront is a content delivery network (CDN) that can cache and distribute the application's content globally, reducing latency for users accessing the application from different locations. Together, these services provide a reliable and efficient solution for hosting and delivering the static single-page application.
A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users
must be able to access the file share using the Server Message
Block (SMB) protocol. Which AWS managed service meets these requirements?
C. Amazon FSx
Amazon FSx is the correct answer because it is an AWS managed service that provides fully managed Windows file servers that are accessible using the Server Message Block (SMB) protocol. It is designed for migrating Windows-based applications that require file storage, making it suitable for the company's file server workload migration. Amazon EBS and Amazon S3 are not specifically designed for SMB protocol access, while Amazon EC2 is a virtual server and does not provide a fully managed file server solution.