AWS-02

By Siva Neelam, Community Contributor | Questions: 87
1. A solutions architect is working on optimizing a legacy document management application running on Microsoft Windows Server in an on-premises data center. The application stores a large number of files on a network file share. The chief information officer wants to reduce the on-premises data center footprint and minimize storage costs by moving on-premises storage to AWS. What should the solutions architect do to meet these requirements?

Explanation

To meet the requirements of reducing the on-premises data center footprint and minimizing storage costs, the solutions architect should set up an AWS Storage Gateway file gateway. This service allows the application to store files in Amazon S3, reducing the need for on-premises storage. The file gateway provides a seamless integration between the application and Amazon S3, allowing the files to be accessed and managed in the same way as they were on the network file share. This solution enables cost savings by leveraging the scalable and cost-effective storage of Amazon S3 while still providing the necessary functionality for the legacy document management application.

About This Quiz
AWS-02 - Quiz

AWS-02 focuses on optimizing costs and enhancing functionality in AWS environments. It assesses skills in choosing cost-effective solutions for different operational needs, configuring shared storage, and ensuring high availability and regional access controls for web applications.

2. A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?

Explanation

AWS Snowball is a service that allows for the migration of large amounts of data to and from the AWS Cloud. It is specifically designed for situations where the network bandwidth is limited or the data size is too large to be transferred over the network within a reasonable time frame. In this scenario, with a limited network bandwidth of 15 Mbps, it would not be feasible to transfer 20 TB of data within 30 days. Therefore, using AWS Snowball, which physically transfers the data using a secure appliance, would be the most appropriate solution to meet the requirements.
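A quick back-of-the-envelope check confirms that the network route is infeasible; only 70% of the 15 Mbps link may be used:

```python
# Estimate how long it would take to push 20 TB over the available link.
DATA_TB = 20
LINK_MBPS = 15
MAX_UTILIZATION = 0.70                   # the link may not exceed 70% utilization

data_bits = DATA_TB * 10**12 * 8         # 20 TB (decimal) expressed in bits
usable_bps = LINK_MBPS * 10**6 * MAX_UTILIZATION

transfer_days = data_bits / usable_bps / 86_400
print(f"{transfer_days:.0f} days")       # roughly 176 days, far beyond the 30-day deadline
```

Since the transfer would take almost six months over the wire, a physical device such as Snowball is the only way to meet the 30-day window.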

3. A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays, and wants to provide a consistent user experience. How can a solutions architect meet this requirement?

Explanation

To meet the requirement of providing a consistent user experience during spikes in traffic on specific holidays, a solutions architect can use scheduled scaling. With scheduled scaling, the architect can configure the auto scaling group to automatically adjust the number of EC2 instances based on predefined schedules. This allows the architect to anticipate the spikes in traffic during holidays and scale up the resources accordingly, ensuring that the website can handle the increased load and provide a consistent user experience.
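As an illustrative sketch, a holiday scale-out can be expressed with the parameters that Auto Scaling's `PutScheduledUpdateGroupAction` API expects; the group name, date, and capacity numbers below are hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical scheduled action: scale out ahead of a holiday traffic spike.
# A matching second action would scale back in after the holiday.
scale_out_action = {
    "AutoScalingGroupName": "web-asg",            # placeholder group name
    "ScheduledActionName": "holiday-scale-out",
    "StartTime": datetime(2024, 12, 24, 6, 0, tzinfo=timezone.utc),
    "MinSize": 4,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}

# With boto3 this dict would be passed as:
#   autoscaling.put_scheduled_update_group_action(**scale_out_action)
```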

4. A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer. Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a message including a phone number and email address that users can reach if the primary website is down. How should the company deploy this solution?

Explanation

The company should use Amazon S3 website hosting for the backup website and Route 53 failover routing policy. This solution allows the company to host the backup website on Amazon S3, which provides high availability and durability. Route 53's failover routing policy ensures that traffic is directed to the backup website if the primary website is down. This setup allows users to reach the backup website and contact the company through the provided phone number and email address.
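A sketch of what the failover record pair might look like; every domain name, hosted zone ID, and health check ID here is a placeholder:

```python
# Route 53 serves the PRIMARY record (the ALB) while its health check passes,
# and the SECONDARY record (the S3 static "we're down" page) otherwise.
change_batch = {
    "Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": "hc-placeholder",
            "AliasTarget": {"HostedZoneId": "ALB_ZONE_ID",
                            "DNSName": "my-alb.us-east-1.elb.amazonaws.com.",
                            "EvaluateTargetHealth": True}}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com.", "Type": "A",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "AliasTarget": {"HostedZoneId": "S3_WEBSITE_ZONE_ID",
                            "DNSName": "s3-website-us-east-1.amazonaws.com.",
                            "EvaluateTargetHealth": False}}},
    ]
}
# With boto3: route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=change_batch)
```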

5. A company's website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company's website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time. Which combination should a solutions architect recommend to meet these requirements?

Explanation

The combination of Amazon CloudFront and Amazon S3 is the recommended solution because it meets all the given requirements. Amazon CloudFront is a content delivery network (CDN) that provides low latency and high transfer speeds globally. It can distribute the downloadable historical performance reports efficiently to users around the world, ensuring the fastest possible response time. Amazon S3 is a cost-effective and scalable storage service that can securely store the reports. This combination eliminates the need for provisioning infrastructure resources, as both services are managed by AWS, making it a cost-effective solution.

6. A company wants to deploy a shared file system for its .NET application servers and Microsoft SQL Server database running on Amazon EC2 instances with Windows Server 2016. The solution must be able to be integrated into the corporate Active Directory domain, be highly durable, be managed by AWS, and provide high levels of throughput and IOPS. Which solution meets these requirements?

Explanation

Amazon FSx for Windows File Server is the correct solution for this scenario. FSx for Windows File Server provides a fully managed shared file system that is integrated with the corporate Active Directory domain. It offers high durability, is managed by AWS, and provides the required levels of throughput and IOPS. This solution is specifically designed for Windows workloads and is the best fit for the given requirements.

7. A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted on the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers. Which replacement for the on-premises file share is MOST resilient and durable?

Explanation

Migrating the file share to Amazon FSx for Windows File Server is the most resilient and durable replacement for the on-premises file share. Amazon FSx for Windows File Server is a fully managed native Windows file system that is built on Windows Server and provides compatibility with Windows applications. It offers high durability and availability, with automatic backups and continuous replication across multiple Availability Zones. This ensures that the data is protected against failures and provides a reliable file storage solution for the IIS web application.

8. An application running on an Amazon EC2 instance in VPC-A needs to access files on another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to enable secure access to the EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns. Which solution will meet these requirements?

Explanation

Setting up a VPC peering connection between VPC-A and VPC-B will meet the requirements of secure access without a single point of failure or bandwidth concerns. VPC peering allows communication between instances in different VPCs using private IP addresses, without the need for internet gateways, VPN connections, or NAT devices. It provides a secure and reliable connection between the two VPCs, ensuring that the application running in VPC-A can access files in the EC2 instance in VPC-B.
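The cross-account peering request might be sketched as follows; all IDs and the account number are placeholders:

```python
# Hypothetical request from account A (owner of VPC-A) to peer with VPC-B,
# which lives in account B.
peering_request = {
    "VpcId": "vpc-aaaa1111",          # VPC-A (requester)
    "PeerVpcId": "vpc-bbbb2222",      # VPC-B (accepter)
    "PeerOwnerId": "222222222222",    # account that owns VPC-B
    "PeerRegion": "us-east-1",
}
# With boto3: ec2.create_vpc_peering_connection(**peering_request).
# The owner of VPC-B then accepts the connection, and both sides add
# routes for each other's CIDR ranges plus matching security group rules.
```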

9. A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins. Which AWS service meets these requirements?

Explanation

Amazon Aurora is the correct answer because it is a fully managed relational database service that is compatible with MySQL and PostgreSQL. It provides the capability to dynamically scale storage capacity, allowing the company to easily adjust the storage capacity as needed. Additionally, Aurora supports table joins, making it suitable for the company's requirement of performing table joins in their web application.

10. A company is running a two-tier ecommerce website using AWS services. The current architecture uses a public-facing Elastic Load Balancer that sends traffic to Amazon EC2 instances in a private subnet. The static content is hosted on EC2 instances, and the dynamic content is retrieved from a MySQL database. The application is running in the United States. The company recently started selling to users in Europe and Australia. A solutions architect needs to design a solution so their international users have an improved browsing experience. Which solution is MOST cost-effective?

Explanation

The solution of using Amazon CloudFront and Amazon S3 to host static images is the most cost-effective because it allows for the caching and distribution of static content closer to the international users, reducing latency and improving browsing experience. This solution leverages the global network of CloudFront edge locations to serve the static content from locations closer to the users, resulting in faster load times. Additionally, hosting static images on S3 is cost-effective as it offers low storage and data transfer costs.

11. A company has an Amazon EC2 instance running in a private subnet that needs to access public websites to download patches and updates. The company does not want external websites to see the EC2 instance's IP address or initiate connections to it. How can a solutions architect achieve this objective?

Explanation

To achieve the objective of allowing the EC2 instance in the private subnet to access public websites without revealing its IP address or allowing incoming connections, a NAT gateway can be created in a public subnet. By routing outbound traffic from the private subnet through the NAT gateway, the EC2 instance's IP address is hidden from external websites. This ensures that only outbound connections are initiated from the EC2 instance, providing the desired level of security and privacy.
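The resulting private-subnet route table might look like this sketch (the CIDR block and gateway ID are placeholders):

```python
# Sketch of the private subnet's route table after adding the NAT gateway.
# The NAT gateway itself lives in a public subnet and has an Elastic IP.
private_route_table = [
    {"DestinationCidrBlock": "10.0.0.0/16", "Target": "local"},        # in-VPC traffic
    {"DestinationCidrBlock": "0.0.0.0/0",   "Target": "nat-0abc123"},  # default route -> NAT gateway
]
# Because the NAT gateway only forwards outbound-initiated connections,
# external hosts can never reach the instance or learn its private address.
```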

12. A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?

Explanation

To securely meet the requirements of providing limited access to users without AWS credentials, a solutions architect should generate a pre-signed URL to share with the users. A pre-signed URL is a time-limited URL that provides temporary access to specific objects in an S3 bucket. This allows the users to access the files without needing AWS credentials, while also ensuring that the access is limited to a specific time period. This approach provides a secure and controlled method for sharing files with external users.
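As a sketch, the arguments for generating such a URL with boto3's `generate_presigned_url` might look like this; the bucket and key are placeholders:

```python
# Arguments for S3's generate_presigned_url. With boto3 the call would be:
#   url = s3.generate_presigned_url("get_object", Params=..., ExpiresIn=...)
presign_args = {
    "ClientMethod": "get_object",
    "Params": {"Bucket": "reports-bucket", "Key": "q3/report.pdf"},  # placeholders
    "ExpiresIn": 3600,   # the link is valid for one hour, then access is denied
}
# The returned URL embeds a signature and expiry, so the recipient needs
# no AWS credentials and access ends automatically when the URL expires.
```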

13. A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete. What should the solutions architect do to meet these requirements?

Explanation

To meet the requirements of reaching the desired EC2 capacity quickly and allowing the Auto Scaling group to scale down after batch jobs are complete, the solutions architect should configure scheduled scaling. By setting up a schedule, the Auto Scaling group can automatically scale up to the desired compute level before the batch jobs start at 1 AM every night. This ensures that the peak capacity is reached in a timely manner. Once the batch jobs are complete, the Auto Scaling group can then scale down, optimizing costs and resource utilization.

14. A company runs an application on Amazon EC2 Instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region. Which solution should be implemented to ensure that there are no disruptions to Internet connectivity?

Explanation

To ensure continuous internet connectivity for the instances in the private subnets, a NAT gateway should be deployed in a public subnet of each Availability Zone. NAT gateway allows instances in the private subnets to connect to the internet while also providing a highly available solution across the Region. Deploying a NAT instance in each Availability Zone would also work, but it is a less preferred option as it requires more management and configuration compared to the NAT gateway. Deploying a transit gateway or an internet gateway would not fulfill the requirement of allowing instances in private subnets to connect to the internet.

15. A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a hierarchical directory structure. The applications need to rapidly and concurrently read and write to shared storage. How can this be achieved?

Explanation

To achieve rapid and concurrent read and write access to shared storage, the best solution is to create an Amazon EFS (Elastic File System) file system and mount it from each EC2 instance. Amazon EFS provides a scalable and fully managed file storage service that can be easily shared across multiple instances. By mounting the EFS file system on each instance, the applications can access and modify the hierarchical directory structure concurrently and efficiently. This ensures consistent and reliable access to the shared storage for all instances in the VPC.

16. A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2 instances behind an Application Load Balancer and a relational database. The database should be highly available and fault tolerant. Which database implementations will meet these requirements? (Choose two.)

Explanation

The correct answers are MySQL-compatible Amazon Aurora Multi-AZ and Amazon RDS for SQL Server Standard Edition Multi-AZ. Both database implementations are designed to provide high availability and fault tolerance.

Amazon Aurora Multi-AZ provides automatic failover to a standby replica in the event of a failure, ensuring that the database remains available even in the case of a hardware or software failure.

Similarly, Amazon RDS for SQL Server Standard Edition Multi-AZ also provides high availability by automatically replicating the database to a standby instance in a different Availability Zone.

By leveraging these two database implementations, the mission-critical web application can ensure that the database remains highly available and fault tolerant.

17. An ecommerce company has noticed performance degradation of its Amazon RDS-based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application. What should the solutions architect recommend?

Explanation

The solution architect should recommend creating a read replica of the primary database and having the business analysts run their queries on it. This solution allows the business analysts to perform their read-only queries without impacting the performance of the primary database. By offloading the read workload to the read replica, the web application's performance degradation can be minimized, and the existing architecture can remain largely unchanged.

18. A company uses an Amazon S3 bucket to store static images for its website. The company configured permissions to allow access to Amazon S3 objects by privileged users only. What should a solutions architect do to protect against data loss? (Choose two.)

Explanation

Enabling versioning on the S3 bucket ensures that multiple versions of each object are stored, allowing the company to recover previous versions in case of accidental deletion or data corruption. Using MFA Delete adds an extra layer of security by requiring multi-factor authentication before an object can be deleted, preventing unauthorized deletion and reducing the risk of data loss.
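A sketch of the request that enables both protections at once; the bucket name and MFA device serial are placeholders:

```python
# Request body for S3's put_bucket_versioning enabling both protections.
# MFA Delete can only be enabled by the root user via the API/CLI, and the
# "MFA" value pairs the device serial with a current one-time code.
versioning_request = {
    "Bucket": "static-images-bucket",                                  # placeholder
    "VersioningConfiguration": {"Status": "Enabled", "MFADelete": "Enabled"},
    "MFA": "arn:aws:iam::111122223333:mfa/root-device 123456",         # placeholder serial + code
}
# With boto3: s3.put_bucket_versioning(**versioning_request)
```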

19. A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing. Which solution will meet these requirements?

Explanation

The correct solution is to use an AWS Storage Gateway file gateway to provide file storage to AWS and then perform analytics on this data in the AWS Cloud. This solution allows the company to access the data from on-premises applications for local data processing using an NFS protocol, while also making the data accessible from the AWS Cloud for further analytics and batch processing. The file gateway provides a seamless integration between on-premises and cloud storage, allowing the company to leverage the benefits of both environments for their hybrid workload.

20. A company has established a new AWS account. The account is newly provisioned and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user. What should be done to secure the root user?

Explanation

To secure the root user of the newly provisioned AWS account, it is recommended to create IAM users for daily administrative tasks and enable multi-factor authentication (MFA) on the root user. By creating IAM users, the root user's credentials are not used for daily tasks, reducing the risk of unauthorized access. Enabling MFA adds an extra layer of security by requiring an additional authentication factor, such as a code from a mobile app or a physical device, to access the account. This helps protect against unauthorized access even if the root user's password is compromised.

21. A company's application hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Due to data sensitivity, traffic cannot traverse the internet. How should a solutions architect configure access?

Explanation

To ensure that the company's application can access the Amazon S3 bucket without traffic traversing the internet, a solutions architect should configure a VPC gateway endpoint for Amazon S3 in the VPC. This allows the application to connect directly to the S3 bucket within the VPC, without needing to go over the internet. This ensures a secure and private connection for accessing the sensitive data in the S3 bucket.
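The endpoint request might be sketched as follows (the VPC and route table IDs are placeholders):

```python
# Sketch of a gateway endpoint for S3: traffic to S3 is routed through the
# endpoint via the listed route tables, never over the internet.
endpoint_request = {
    "VpcId": "vpc-0abc1234",                          # placeholder
    "ServiceName": "com.amazonaws.us-east-1.s3",
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-0priv1", "rtb-0priv2"],    # placeholders
}
# With boto3: ec2.create_vpc_endpoint(**endpoint_request). An endpoint policy
# can further restrict which buckets are reachable through the endpoint.
```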

22. A web application runs on Amazon EC2 instances behind an Application Load Balancer. The application allows users to create custom reports of historical weather data. Generating a report can take up to 5 minutes. These long-running requests use many of the available incoming connections, making the system unresponsive to other users. How can a solutions architect make the system more responsive?

Explanation

By using Amazon SQS with AWS Lambda to generate reports, the long-running requests can be offloaded from the web application and processed asynchronously. This means that the web application can quickly respond to other users' requests, making the system more responsive. SQS acts as a buffer, storing the requests until they can be processed by the Lambda function. This solution allows for scalability and ensures that the system can handle a large number of requests without becoming unresponsive.
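The decoupling idea can be modeled in miniature with a plain in-process queue; in the real architecture SQS replaces the queue and Lambda replaces the worker thread:

```python
import queue
import threading

# The web tier only enqueues report requests (fast) and returns immediately;
# a background worker performs the slow generation asynchronously.
report_queue = queue.Queue()
results = []

def worker():
    while True:
        request = report_queue.get()
        if request is None:          # sentinel: stop the worker
            break
        results.append(f"report for {request}")   # stands in for the 5-minute job
        report_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for user in ["alice", "bob"]:        # web handlers enqueue and return at once
    report_queue.put(user)
report_queue.put(None)
t.join()
print(results)
```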

23. A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements?

Explanation

Amazon FSx is the correct answer because it is an AWS managed service that provides fully managed Windows file servers that are accessible using the Server Message Block (SMB) protocol. It is designed for migrating Windows-based applications that require file storage, making it suitable for the company's file server workload migration. Amazon EBS and Amazon S3 are not specifically designed for SMB protocol access, while Amazon EC2 is a virtual server and does not provide a fully managed file server solution.

24. A company's web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only. Which configuration will meet this requirement?

Explanation

AWS WAF can enforce this policy: associate a web ACL with the Application Load Balancer and add a geographic match rule that allows requests originating only from the approved country while blocking all others. AWS WAF inspects the source country of each incoming request, so access is restricted to the single country required by the new policy.

25. A company has a Microsoft Windows-based application that must be migrated to AWS. This application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances. What should a solutions architect do to accomplish this?

Explanation

To accomplish the migration of the Microsoft Windows-based application to AWS with a shared Windows file system, the solution architect should configure Amazon FSx for Windows File Server. This service provides a fully managed native Windows file system that is accessible from multiple Amazon EC2 Windows instances. By mounting the Amazon FSx volume to each Windows instance, the application can continue to use the shared file system seamlessly. This option is the most appropriate and efficient solution for the given scenario.

26. A solutions architect must create a highly available bastion host architecture. The solution needs to be resilient within a single AWS Region and should require only minimal effort to maintain. What should the solutions architect do to meet these requirements?

Explanation

To create a highly available bastion host architecture, the solutions architect should use a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target. This setup ensures that the bastion host is distributed across multiple zones, providing resilience within a single AWS Region. Additionally, using Auto Scaling allows for automatic scaling of the bastion host based on demand, reducing the effort required for maintenance.

27. An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs. Which solution is the MOST cost-effective?

Explanation

The most cost-effective solution is to use DEV with Scheduled Reserved Instances and PROD with Reserved Instances. This strategy allows for the utilization of reserved instances, which offer significant cost savings compared to on-demand instances. By using scheduled reserved instances for DEV, the instances can be run for a specific number of hours each day, aligning with the required 10-hour runtime. For PROD, running the instances 24/7 makes the use of reserved instances the most cost-effective option. This strategy optimizes costs by leveraging reserved instances for both environments while efficiently utilizing the instances based on their specific requirements.
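An illustrative comparison with hypothetical hourly rates (real prices depend on instance type and Region, so the numbers below are assumptions, not AWS pricing):

```python
# HYPOTHETICAL rates chosen only to show the shape of the comparison.
ON_DEMAND = 0.10    # $/hour, hypothetical
RI_RATE = 0.06      # $/hour effective Reserved Instance rate, hypothetical

dev_hours_per_month = 10 * 22      # 10 h/day on ~22 business days
prod_hours_per_month = 24 * 30     # always on

dev_on_demand = dev_hours_per_month * ON_DEMAND
dev_scheduled_ri = dev_hours_per_month * RI_RATE   # pays only the scheduled hours
prod_ri = prod_hours_per_month * RI_RATE

# Scheduled RIs beat on-demand for the part-time DEV workload, and standard
# RIs beat on-demand for the always-on PROD workload.
```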

28. A company recently deployed a two-tier application in two Availability Zones in the us-east-1 Region. The databases are deployed in a private subnet while the web servers are deployed in a public subnet. An internet gateway is attached to the VPC. The application and database run on Amazon EC2 instances. The database servers are unable to access patches on the internet. A solutions architect needs to design a solution that maintains database security with the least operational overhead. Which solution meets these requirements?

Explanation

The correct solution is to deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. By doing this, the database servers in the private subnet will be able to access patches on the internet through the NAT gateway. Updating the routing table of the private subnet to use the NAT gateway as the default route ensures that all outgoing traffic from the private subnet is directed through the NAT gateway, maintaining database security. This solution requires the least operational overhead as it leverages the built-in NAT gateway service provided by AWS.

29. A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions. The company wants to create an allow list for the IPs of all the load balancers on its firewall device. A solutions architect is looking for a one-time, highly available solution to address this request, which will also help reduce the number of IPs that need to be allowed by the firewall. What should the solutions architect recommend to meet these requirements?

Explanation

The recommended solution is to launch AWS Global Accelerator and create endpoints for all the Regions. By registering all the ALBs in different Regions to the corresponding endpoints, the company can have a one-time, highly available solution to address the request. This solution will also help reduce the number of IPs that need to be allowed by the firewall, making it more efficient and manageable.

30. A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular and a large number of users across the world are accessing this content. This has resulted in a cost increase. Which action will DECREASE cost without compromising user accessibility?

Explanation

Storing the video in an Amazon S3 bucket and creating an Amazon CloudFront distribution will decrease cost without compromising user accessibility. Amazon S3 is a cost-effective storage service, and CloudFront is a content delivery network that caches the video content at edge locations worldwide. This means that users can access the video from the nearest edge location, reducing the load on the EBS volume and decreasing costs.

31. A company is investigating potential solutions that would collect, process, and store users' service usage data. The business objective is to create an analytics capability that will enable the company to gather operational insights quickly using standard SQL queries. The solution should be highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the data tier. Which solution should a solutions architect recommend?

Explanation

The recommended solution is to use a fully managed Amazon RDS for MySQL database in a Multi-AZ design. This solution ensures high availability and ACID compliance in the data tier. Amazon RDS for MySQL is a managed database service that handles routine tasks like backups, software patching, and automatic failure detection and recovery. The Multi-AZ design provides redundancy by automatically replicating data to a standby instance in a different Availability Zone. This design ensures that data is protected and available even in the event of a failure.

32. A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centralized networking setup for multiple accounts, VPCs, and VPNs. Which networking solution meets these requirements?

Explanation

AWS Transit Gateway is the best networking solution for the given scenario. It allows for a centralized networking setup by connecting multiple VPCs and VPNs. This solution can accommodate the company's growth to hundreds of VPCs and handle the new requests for site-to-site VPNs. With a transit gateway, all VPCs and VPNs attach to a single hub, making the network architecture easier to manage and maintain.

33. A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes. Which method should the solutions architect select?

Explanation

Adding Amazon DynamoDB Accelerator (DAX) to the mobile chat application's data store can significantly reduce the latency for reading new messages. By configuring DAX for the new messages table and updating the code to use the DAX endpoint, the application can benefit from the in-memory caching provided by DAX. This allows for faster access to frequently accessed data, improving the overall performance of the application without requiring major changes to the existing codebase.

34. A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented or are implemented only partially, resulting in data being stored in high-cost storage. Which solution will lower costs without compromising the availability of objects?

Explanation

Using S3 Intelligent-Tiering storage will lower costs without compromising the availability of objects. This storage class automatically moves objects between a frequent access tier and an infrequent access tier. It monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier, which has a lower storage cost. If the objects are accessed again, they are automatically moved back to the frequent access tier. This ensures that less frequently accessed data is stored in a lower-cost storage tier while still being readily available when needed.
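A single lifecycle rule can push every object into Intelligent-Tiering, replacing the inconsistent per-bucket policies. The sketch below shows the configuration shape; the rule ID is a placeholder:

```python
# Lifecycle rule that transitions all objects to Intelligent-Tiering
# immediately, letting S3 move them between access tiers automatically.
lifecycle_config = {
    "Rules": [{
        "ID": "all-objects-to-intelligent-tiering",   # placeholder rule name
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                     # empty prefix = whole bucket
        "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
    }]
}
# With boto3: s3.put_bucket_lifecycle_configuration(
#     Bucket=..., LifecycleConfiguration=lifecycle_config)
```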

35. A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Which method is the MOST cost-effective for hosting the website?

Explanation

Creating an Amazon S3 bucket and hosting the website there is the most cost-effective method for hosting the website. Amazon S3 is a highly scalable and cost-efficient storage service that allows users to store and retrieve any amount of data at any time. It is designed for high durability, availability, and performance. By hosting the website in an S3 bucket, the development team can take advantage of the low cost of storage and data transfer, eliminating the need for managing and maintaining servers or containers. Additionally, S3 provides built-in features for website hosting, making it easy to configure and manage the website.
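Static website hosting on the bucket can be sketched as the configuration passed to S3's `put_bucket_website` (document names below are conventional placeholders):

```python
# Website configuration: S3 serves index.html for the root path and
# error.html for requests that miss an object.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}
# With boto3: s3.put_bucket_website(Bucket=..., WebsiteConfiguration=website_config)
# The HTML, CSS, JavaScript, and images are then uploaded as ordinary objects.
```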

Submit
36. A solutions architect must design a solution for a persistent database that is being migrated from on-premises to AWS. The database requires 64,000 IOPS according to the database administrator. If possible, the database administrator wants to use a single Amazon Elastic Block Store (Amazon EBS) volume to host the database instance. Which solution effectively meets the database administrator's criteria?

Explanation

The correct solution is to create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached and configure the volume to have 64,000 IOPS. This solution meets the criteria of the database administrator by providing the required IOPS for the database. The Nitro-based instances are optimized for high-performance and can handle the workload efficiently. The use of Provisioned IOPS SSD ensures consistent and predictable performance for the database.
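As a concrete illustration, the volume could be created with the parameters sketched below. io1 supports up to 50 IOPS per GiB, so provisioning 64,000 IOPS requires a volume of at least 1,280 GiB; 64,000 IOPS is also only available when the volume is attached to a Nitro-based instance. The Availability Zone and size are illustrative placeholders.

```python
# Sketch: parameters for an io1 volume provisioned at 64,000 IOPS. Pass
# this dict to boto3.client("ec2").create_volume(...). The AZ is a
# placeholder; the size is the minimum that supports 64,000 IOPS.
volume_params = {
    "AvailabilityZone": "us-east-1a",  # placeholder AZ
    "VolumeType": "io1",
    "Size": 1280,      # GiB; 64,000 IOPS / 50 IOPS-per-GiB = 1,280 GiB minimum
    "Iops": 64000,
}

# io1 enforces a maximum ratio of 50 provisioned IOPS per GiB:
assert volume_params["Iops"] / volume_params["Size"] <= 50
```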

Submit
37. A solutions architect is designing an architecture for a new application that requires low network latency and high network throughput between Amazon EC2 instances. Which component should be included in the architectural design?

Explanation

A placement group using a cluster placement strategy should be included in the architectural design. This is because a cluster placement strategy ensures that EC2 instances are placed in close proximity to each other, reducing network latency. It also allows for high network throughput as it enables instances within the placement group to communicate with each other using enhanced networking. This makes it the ideal choice for an application that requires low network latency and high network throughput between EC2 instances.
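The setup takes two steps: create the placement group with the cluster strategy, then launch the instances into it. The sketch below shows both as boto3 request-parameter dictionaries; the group name, AMI ID, and instance type are placeholders (the instance type should be one that supports enhanced networking).

```python
# Sketch: a cluster placement group and instances launched into it.
# Pass the first dict to boto3.client("ec2").create_placement_group(...)
# and the second to run_instances(...). AMI ID, instance type, and group
# name are placeholders.
placement_group_params = {
    "GroupName": "low-latency-cluster",  # placeholder name
    "Strategy": "cluster",  # pack instances close together in a single AZ
}

run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "c5n.9xlarge",       # placeholder; supports enhanced networking
    "MinCount": 2,
    "MaxCount": 2,
    "Placement": {"GroupName": placement_group_params["GroupName"]},
}
```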

Submit
38. A solutions architect is designing a hybrid application using the AWS Cloud. The network between the on-premises data center and AWS will use an AWS Direct Connect (DX) connection. The application connectivity between AWS and the on-premises data center must be highly resilient. Which DX configuration should be implemented to meet these requirements?

Explanation

To ensure highly resilient application connectivity between AWS and the on-premises data center, it is recommended to configure DX connections at multiple DX locations. This configuration provides redundancy and fault tolerance by establishing multiple connections between the on-premises data center and AWS. If one DX location or connection fails, the application traffic can still be routed through the remaining connections, ensuring continuous connectivity and minimizing downtime.

Submit
39. A company has two applications it wants to migrate to AWS. Both applications process a large set of files by accessing the same files at the same time. Both applications need to read the files with low latency. Which architecture should a solutions architect recommend for this situation?

Explanation

The recommended architecture is to configure two Amazon EC2 instances to run both applications and to configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data. This architecture allows both applications to access the same files at the same time with low latency. Amazon EFS provides a scalable file storage system that can handle concurrent access from multiple instances, making it suitable for this scenario. The General Purpose performance mode ensures low latency for file access, and Bursting Throughput mode allows for bursts of high throughput when needed.

Submit
40. A company is creating an architecture for a mobile app that requires minimal latency for its users. The company's architecture consists of Amazon EC2 instances behind an Application Load Balancer running in an Auto Scaling group. The EC2 instances connect to Amazon RDS. Application beta testing showed there was a slowdown when reading the data. However, the metrics indicate that the EC2 instances do not cross any CPU utilization thresholds. How can this issue be addressed?

Explanation

To address the slowdown in reading data while minimizing latency, the company should add read replicas for the RDS instances and direct read traffic to the replica. By adding read replicas, the workload can be distributed across multiple instances, reducing the load on the main RDS instance and improving read performance. This solution is more effective than reducing the CPU utilization threshold or replacing the load balancer. Adding Multi-AZ support to the RDS instances would improve availability but may not directly address the latency issue.

Submit
41. A company needs to implement a relational database with a multi-Region disaster recovery plan that has a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. Which AWS solution can achieve this?

Explanation

Amazon Aurora Global Database is the correct answer because it is designed to provide low-latency global access to a single database with a replication lag of less than 1 second. It also has the ability to automatically failover to a secondary region within 1 minute, meeting the RPO and RTO requirements mentioned in the question.

Submit
42. A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files. Which storage option meets these requirements?

Explanation

S3 Intelligent-Tiering is the best storage option for this scenario because it automatically moves data between two access tiers based on its usage patterns. Frequently accessed files will be stored in the frequent access tier, while rarely accessed files will be moved to the infrequent access tier. This allows for cost optimization as the architect only pays for the storage and retrieval of files based on their actual usage. Additionally, S3 Intelligent-Tiering provides resilience to the loss of an Availability Zone by replicating data across multiple zones.

Submit
43. A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3?

Explanation

To ensure that the application has permission to access Amazon S3, a solutions architect should create an IAM role with S3 permissions. This role can then be specified as the taskRoleArn in the task definition. By doing this, the application running on Amazon ECS will be granted the necessary permissions to make API calls to Amazon S3 and store the resized images.
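A minimal sketch of the relevant task-definition fields follows, assuming an IAM role that already carries an `s3:PutObject` policy for the target bucket. The account ID, role name, and container image are placeholders.

```python
# Sketch: the fields of an ECS task definition that grant the container
# S3 access via taskRoleArn. Pass this dict (expanded with the remaining
# required fields) to boto3.client("ecs").register_task_definition(...).
# Account ID, role name, and image URI are placeholders.
task_definition = {
    "family": "image-resizer",
    # Role assumed by the task itself; its policy allows s3:PutObject
    # on the destination bucket.
    "taskRoleArn": "arn:aws:iam::123456789012:role/ImageResizerS3Role",
    "containerDefinitions": [
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "memory": 512,
        }
    ],
}
```

With `taskRoleArn` set, the AWS SDK inside the container picks up temporary credentials automatically; no access keys need to be baked into the image.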

Submit
44. A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3. What should a solutions architect recommend to satisfy these requirements?

Explanation

The solution architect should recommend client-side encryption with a master key stored in AWS Key Management Service (AWS KMS) to satisfy the internal security compliance requirement of encrypting data before sending it to Amazon S3. This approach ensures that the sensitive user data is encrypted before it leaves the client's environment, providing an additional layer of security. The master key stored in AWS KMS allows for secure management and control of the encryption keys.

Submit
45. A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows. What should a solutions architect recommend?

Explanation

The solutions architect should recommend setting up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface. This solution allows the company to eliminate the use of physical backup tapes, reducing costs and simplifying the on-premises backup infrastructure. Because the tape gateway presents the same iSCSI VTL interface the backup software already expects, it also preserves the existing investment in the on-premises backup applications and workflows.

Submit
46. A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB storage space. The application is used infrequently, with peaks during mornings and evenings. Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned about costs and has asked a solutions architect to recommend the most cost-effective storage option that does not sacrifice performance. Which solution should the solutions architect recommend?

Explanation

The solutions architect should recommend Amazon EBS General Purpose SSD (gp2) as the most cost-effective storage option that does not sacrifice performance. A 200 GB gp2 volume provides a baseline of 600 IOPS (3 IOPS per GB) and can burst to 3,000 IOPS, which matches the application's peak I/O exactly. Because the application is used infrequently, the burst credit balance has time to replenish between the morning and evening peaks. General Purpose SSD (gp2) therefore offers the required performance at a lower cost than Provisioned IOPS SSD, making it the appropriate choice in this scenario.

Submit
47. A company has global users accessing an application deployed in different AWS Regions, exposing public static IP addresses. The users are experiencing poor performance when accessing the application over the internet. What should a solutions architect recommend to reduce internet latency?

Explanation

To reduce internet latency for global users accessing the application deployed in different AWS Regions, a solutions architect should recommend setting up AWS Global Accelerator and adding endpoints. AWS Global Accelerator is a service that improves the performance and availability of applications by directing traffic to the nearest AWS edge location. By adding endpoints, the architect can distribute the traffic across multiple regions, reducing latency and improving the user experience. This solution ensures that users can access the application with better performance and reduced latency.

Submit
48. A solutions architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access. Which of the following would be the LEAST complicated implementation?

Explanation

The correct answer is to use an Amazon CloudFront distribution with an OAI and configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. The application should set an expiration of 14 days for the URL. This implementation is the least complicated because it leverages the CloudFront content delivery network to improve performance and security. By using signed URLs, access to the content is controlled and limited to a specific time period. The expiration of 14 days ensures that users have access to the content for a limited time before being denied access.
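The 14-day window is enforced by the expiration timestamp embedded in the signed URL. The sketch below shows how the expiry and the canned-policy JSON that CloudFront signs are built; the distribution domain and object path are placeholders, and the actual RSA signing step (done by the application's private key, e.g. via botocore's `CloudFrontSigner`) is omitted.

```python
import time

# Sketch: computing the 14-day expiration for a CloudFront canned-policy
# signed URL. The URL is a placeholder; signing the policy with the
# distribution's RSA private key is omitted.
SECONDS_PER_DAY = 86400
expires = int(time.time()) + 14 * SECONDS_PER_DAY  # epoch seconds, 14 days out

# Canned-policy shape: a single resource with a DateLessThan condition.
canned_policy = {
    "Statement": [
        {
            "Resource": "https://d111111abcdef8.cloudfront.net/premium/report.pdf",
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
        }
    ]
}
```

After the expiration time passes, CloudFront rejects the URL and the user is denied access, with no additional bookkeeping in the application.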

Submit
49. A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes. What should the solutions architect do to accomplish this?

Explanation

To accomplish the task of identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes, the solutions architect should enable the AWS Config service with the appropriate rules. AWS Config allows for continuous monitoring and recording of AWS resource configurations, including S3 buckets. By enabling AWS Config with the appropriate rules, the architect can ensure that any unauthorized access or configuration changes to the S3 buckets will be detected and recorded, helping to ensure the security and integrity of the company's data.

Submit
50. A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises applications maintain access to these backups for quick recovery. The company will use AWS storage services as the destination for these backups. A solutions architect is designing a solution with minimal operational overhead. Which solution should the solutions architect implement?

Explanation

Deploying an AWS Storage Gateway file gateway on-premises and associating it with an Amazon S3 bucket is the best solution for this scenario. This allows the company to have a durable backup storage solution for its on-premises database servers while ensuring quick access to the backups for recovery. The file gateway provides seamless integration between on-premises applications and the AWS storage services, allowing easy backup and restore operations. By associating it with an Amazon S3 bucket, the backups can be stored securely and accessed as needed. This solution minimizes operational overhead and provides a reliable and efficient backup storage solution.

Submit
51. A company recently launched its website to serve content to its global user base. The company wants to store and accelerate the delivery of static content to its users by leveraging Amazon CloudFront with an Amazon EC2 instance attached as its origin. How should a solutions architect optimize high availability for the application?

Explanation

Using Lambda@Edge for CloudFront allows for the execution of custom code at AWS edge locations, which helps optimize the delivery of content to users. This can include modifying responses, making decisions based on user requests, or implementing additional security measures. By leveraging Lambda@Edge, the company can enhance the availability and performance of its website by customizing the content delivery process according to the specific needs of its global user base.

Submit
52. An application is running on Amazon EC2 instances. Sensitive information required for the application is stored in an Amazon S3 bucket. The bucket needs to be protected from internet access while only allowing services within the VPC access to the bucket. Which combination of actions should a solutions architect take to accomplish this? (Choose two.)

Explanation

To protect the Amazon S3 bucket from internet access and only allow access from services within the VPC, two actions should be taken. First, a VPC endpoint for Amazon S3 should be created. This allows communication between the VPC and the S3 bucket without going over the internet. Second, a bucket policy should be applied to restrict access to the S3 endpoint. This policy can specify which services or resources within the VPC are allowed to access the bucket, ensuring that only authorized entities can access the sensitive information.
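The second half of the solution can be sketched as a bucket policy that denies every request not arriving through the VPC endpoint, using the `aws:SourceVpce` condition key. The bucket name and endpoint ID below are placeholders.

```python
# Sketch: a bucket policy that denies all S3 actions on the bucket unless
# the request comes through a specific VPC endpoint. Serialize with
# json.dumps and pass to boto3.client("s3").put_bucket_policy(...).
# Bucket name and vpce ID are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sensitive-app-bucket",
                "arn:aws:s3:::sensitive-app-bucket/*",
            ],
            "Condition": {
                # Deny whenever the request did NOT come via the endpoint.
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}
```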

Submit
53. Application developers have noticed that a production application is very slow when business reporting users run large production reports against the Amazon RDS instance backing the application. The CPU and memory utilization metrics for the RDS instance do not exceed 60% while the reporting queries are running. The business reporting users must be able to generate reports without affecting the applications performance. Which action will accomplish this?

Explanation

Creating a read replica and connecting the business reports to it will accomplish the goal of generating reports without affecting the application's performance. By offloading the reporting queries to the read replica, the load on the primary RDS instance will be reduced, allowing it to focus on serving the application's requests. This will help improve the overall performance of the application while still allowing the business reporting users to generate their reports.

Submit
54. An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns. Which action should be taken to improve the performance of the backend?

Explanation

Implementing Amazon ElastiCache to cache the large datasets can improve the performance of the backend. ElastiCache is an in-memory data store that can be used to cache frequently accessed data, reducing the need to fetch it from the database every time. By caching the large datasets, the backend tier can retrieve the data faster, resulting in improved performance and reduced latency. This solution is especially effective for identical datasets that are frequently accessed, as it eliminates the need to make repeated calls to the database.
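The backend would typically use the cache-aside pattern with ElastiCache. The sketch below illustrates the logic with a plain dictionary standing in for the Redis or Memcached client, and a stub function standing in for the MySQL query; only the pattern itself is real.

```python
# Sketch of the cache-aside pattern the backend would apply with
# ElastiCache. A dict stands in for the cache client; fetch_from_database
# is a stand-in for the expensive RDS for MySQL query.
cache = {}

def fetch_from_database(query):
    # Placeholder for the real database call.
    return f"rows for: {query}"

def get_dataset(query):
    if query in cache:            # cache hit: skip the database entirely
        return cache[query]
    result = fetch_from_database(query)
    cache[query] = result         # populate the cache for later callers
    return result

first = get_dataset("SELECT * FROM products")   # miss: hits the database
second = get_dataset("SELECT * FROM products")  # hit: served from cache
```

In production the dictionary would be replaced by a Redis client pointed at the ElastiCache endpoint, with a TTL on each entry so stale data eventually expires.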

Submit
55. A company currently stores symmetric encryption keys in a hardware security module (HSM). A solutions architect must design a solution to migrate key management to AWS. The solution should allow for key rotation and support the use of customer-provided keys. Where should the key material be stored to meet these requirements?

Explanation

The AWS Key Management Service (AWS KMS) is the appropriate place to store the key material in order to meet the requirements of key rotation and support for customer-provided keys. AWS KMS is a managed service that allows for the creation and control of encryption keys. It provides features such as key rotation, which allows for the automatic generation of new keys to enhance security. Additionally, AWS KMS supports importing customer-provided key material, allowing the company to retain full control over its encryption keys.

Submit
56. A company has an application with a REST-based interface that allows data to be received in near-real time from a third-party vendor. Once received, the application processes and stores the data for further analysis. The application is running on Amazon EC2 instances. The third-party vendor has received many 503 Service Unavailable errors when sending data to the application. When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to process all requests. Which design should a solutions architect recommend to provide a more scalable solution?

Explanation

Using Amazon Kinesis Data Streams to ingest the data and processing it with AWS Lambda functions would provide a more scalable solution. Kinesis Data Streams can handle high volumes of data and can scale automatically to accommodate spikes in data volume. AWS Lambda functions can be used to process the data in near-real time, allowing for efficient analysis. This combination of services would ensure that the application can handle the increased data load and prevent 503 Service Unavailable Errors.

Submit
57. A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS services. The developer is unsure of the current database schema and expects to make changes as the ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read and write capacity. Which database solution meets these requirements?

Explanation

The correct answer is Amazon DynamoDB with on-demand enabled. This solution meets the requirements of being highly resilient and capable of automatically scaling read and write capacity. With on-demand enabled, DynamoDB automatically scales the read and write capacity to handle the workload, eliminating the need for manual capacity planning. This ensures that the application can handle any changes in the database schema and the growth of the ecommerce site without any disruption or performance issues.
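The sketch below shows the `create_table` parameters for an on-demand table; setting `BillingMode` to `PAY_PER_REQUEST` is what enables on-demand capacity. The table and attribute names are placeholders, and the schemaless nature of DynamoDB means non-key attributes can change freely as the site evolves.

```python
# Sketch: parameters for an on-demand DynamoDB table. Pass this dict to
# boto3.client("dynamodb").create_table(...). With PAY_PER_REQUEST,
# no read/write capacity is provisioned; DynamoDB scales automatically.
# Table and attribute names are placeholders.
table_params = {
    "TableName": "ShoppingCart",
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
    "AttributeDefinitions": [
        {"AttributeName": "cart_id", "AttributeType": "S"}  # string key
    ],
    "KeySchema": [
        {"AttributeName": "cart_id", "KeyType": "HASH"}  # partition key
    ],
}
```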

Submit
58. A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest. Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?

Explanation

To meet the compliance regulations and encrypt personally identifiable information (PII) at rest, the recommended solution is to configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys. This solution ensures that both the instance and database volumes are encrypted using AWS KMS keys, providing a secure environment for the highly sensitive application. It requires the least amount of changes to the existing infrastructure while meeting the encryption requirement.

Submit
59. A company's operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS queue when new objects are created within the bucket. The development team also wants to receive events when new objects are created. The existing operations team workflow must remain intact. Which solution would satisfy these requirements?

Explanation

The solution of creating an Amazon SNS topic and SQS queue for the bucket updates would satisfy the requirements. By updating the bucket to send events to the new topic, both the existing operations team and the development team can receive notifications when new objects are created. Adding subscriptions for both queues in the topic ensures that both teams can receive the events without affecting the existing operations team workflow.
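This is the standard SNS fan-out pattern. The sketch below shows the wiring as boto3 request-parameter dictionaries: the bucket notifies the topic, and each queue subscribes to the topic. All ARNs are placeholders.

```python
# Sketch: S3 -> SNS fan-out to two SQS queues. ARNs are placeholders.
topic_arn = "arn:aws:sns:us-east-1:123456789012:bucket-object-created"

# Bucket notification: point the bucket at the topic instead of directly
# at the operations queue. Pass to
# boto3.client("s3").put_bucket_notification_configuration(...).
notification_config = {
    "TopicConfigurations": [
        {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
    ]
}

# One SNS subscription per queue; pass each dict to
# boto3.client("sns").subscribe(...).
subscriptions = [
    {"TopicArn": topic_arn, "Protocol": "sqs",
     "Endpoint": "arn:aws:sqs:us-east-1:123456789012:ops-queue"},
    {"TopicArn": topic_arn, "Protocol": "sqs",
     "Endpoint": "arn:aws:sqs:us-east-1:123456789012:dev-queue"},
]
```

Both queues then receive a copy of every object-created event, so the operations workflow keeps working while the development team consumes the same events independently.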

Submit
60. A solutions architect needs to design a low-latency solution for a static single-page application accessed by users utilizing a custom domain name. The solution must be serverless, encrypted in transit, and cost-effective. Which combination of AWS services and features should the solutions architect use? (Choose two.)

Explanation

The solutions architect should use Amazon S3 and Amazon CloudFront for this low-latency, serverless, encrypted in transit, and cost-effective solution. Amazon S3 is a highly scalable storage service that can host static assets for the single-page application. Amazon CloudFront is a content delivery network (CDN) that can cache and distribute the application's content globally, reducing latency for users accessing the application from different locations. Together, these services provide a reliable and efficient solution for hosting and delivering the static single-page application.

Submit
61. A company wants to replicate its data to AWS to recover in the event of a disaster. Today, a system administrator has scripts that copy data to an NFS share. Individual backup files need to be accessed with low latency by application administrators to deal with errors in processing. What should a solutions architect recommend to meet these requirements?

Explanation

The correct answer is to modify the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises NFS share. This solution would allow the company to replicate its data to AWS for disaster recovery purposes. The File Gateway virtual appliance provides low latency access to individual backup files, which is necessary for the application administrators to deal with errors in processing. This solution would also ensure that the data is stored in AWS, providing the necessary durability and availability in the event of a disaster.

Submit
62. A three-tier web application processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer, a middle tier of three EC2 instances decoupled from the web tier using Amazon SQS, and an Amazon DynamoDB backend. At peak times, customers who submit orders using the site have to wait much longer than normal to receive confirmations due to lengthy processing times. A solutions architect needs to reduce these processing times. Which action will be MOST effective in accomplishing this?

Explanation

Adding more instances to the middle tier using Amazon EC2 Auto Scaling based on the SQS queue depth will be the most effective action to reduce processing times. By scaling out the middle tier, the system can handle a higher volume of incoming orders and process them more quickly. This will help to alleviate the bottleneck and reduce the wait times for customers receiving order confirmations.
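Scaling on queue depth is usually implemented as the backlog-per-instance calculation AWS recommends: publish the queue depth as a custom metric and size the fleet so each instance's share of the backlog stays within an acceptable latency window. The sketch below shows the sizing arithmetic; the numbers are illustrative, and the CloudWatch/Auto Scaling plumbing is omitted.

```python
import math

# Sketch of the backlog-per-instance sizing used when scaling a fleet on
# SQS queue depth. msgs_per_instance is how many messages one instance
# can process within the acceptable latency window (an assumption you
# would measure for the real workload).
def desired_capacity(queue_depth, msgs_per_instance):
    """Instances needed to drain the current backlog in time."""
    needed = math.ceil(queue_depth / msgs_per_instance)
    return max(needed, 1)  # never scale below one instance

# 1,200 queued orders, each instance handles 100 within the window:
assert desired_capacity(1200, 100) == 12
# Empty queue still keeps a minimum of one instance:
assert desired_capacity(0, 100) == 1
```

In practice the `ApproximateNumberOfMessagesVisible` SQS metric divided by the group size is published to CloudWatch, and a target tracking policy holds that ratio at the chosen target.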

Submit
63. A company is planning to migrate its virtual server-based workloads to AWS. The company has internet-facing load balancers backed by application servers. The application servers rely on patches from an internet-hosted repository. Which services should a solutions architect recommend be hosted on the public subnet? (Choose two.)

Explanation

The NAT gateway should be hosted on the public subnet because it allows the application servers in the private subnet to access the internet and download patches from the internet-hosted repository. The Application Load Balancers should also be hosted on the public subnet to handle incoming internet traffic and distribute it to the application servers in the private subnet.

Submit
64. A monolithic application was recently migrated to AWS and is now running on a single Amazon EC2 instance. Due to application limitations, it is not possible to use automatic scaling to scale out the application. The chief technology officer (CTO) wants an automated solution to restore the EC2 instance in the unlikely event the underlying hardware fails. What would allow for automatic recovery of the EC2 instance as quickly as possible?

Explanation

An Amazon CloudWatch alarm can be configured to monitor the health of the EC2 instance. If the instance becomes impaired, the alarm will trigger the recovery of the instance. This ensures that in the unlikely event of underlying hardware failure, the EC2 instance will be automatically restored as quickly as possible.

Submit
65. A solutions architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted. Which combination of steps will meet these requirements? (Choose two.)

Explanation

To meet the requirements of restricting access to the static content, the architect should create an origin access identity (OAI) and associate it with the CloudFront distribution. By changing the permissions in the bucket policy to only allow the OAI to read the objects, access to the static content is limited. Additionally, the architect should create an AWS WAF web ACL that includes the same IP restrictions as the EC2 security group. By associating this web ACL with the CloudFront distribution, the IP restrictions are enforced and access to the static assets is further restricted.

Submit
66. A company has several business systems that require access to data stored in a file share. The business systems will access the file share using the Server Message Block (SMB) protocol. The file share solution should be accessible from both the company's legacy on-premises environment and AWS. Which services meet the business requirements? (Choose two.)

Explanation

The company's business systems require access to data stored in a file share using the Server Message Block (SMB) protocol. To meet this requirement, the company can use Amazon FSx for Windows, which provides fully managed Windows file servers that are accessible over the SMB protocol. Additionally, the company can also use AWS Storage Gateway file gateway, which is a hybrid cloud storage service that enables on-premises applications to seamlessly use AWS cloud storage, including file storage.

Submit
67. A company is planning to build a new web application on AWS. The company expects predictable traffic most of the year and very high traffic on occasion. The web application needs to be highly available and fault tolerant with minimal latency. What should a solutions architect recommend to meet these requirements?

Explanation

Using Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones is the recommended solution. This setup ensures high availability and fault tolerance by distributing traffic across multiple instances in different Availability Zones. The Auto Scaling group automatically adjusts the number of instances based on traffic demands, allowing the application to handle predictable traffic most of the year and scale up to handle very high traffic on occasion. The Application Load Balancer further improves performance by evenly distributing traffic to the instances, while minimizing latency.

Submit
68. A company's dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize site loading times for new European users. The site's backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed. What should the solutions architect recommend?

Explanation

The best solution for optimizing site loading times for new European users while keeping the backend in the United States is to use Amazon CloudFront with a custom origin pointing to the on-premises servers. Amazon CloudFront is a content delivery network that caches content at edge locations around the world, reducing latency for users in different regions. By configuring CloudFront with a custom origin pointing to the on-premises servers, the website can benefit from the global network of CloudFront edge locations while still accessing the backend servers in the United States. This solution can be implemented quickly and does not require migrating the site or setting up cross-Region replication.

Submit
69. A company running an on-premises application is migrating the application to AWS to increase its elasticity and availability. The current architecture uses a Microsoft SQL Server database with heavy read activity. The company wants to explore alternate database options and migrate database engines, if needed. Every 4 hours, the development team does a full copy of the production database to populate a test database. During this period, users experience latency. What should a solutions architect recommend as a replacement database?

Explanation

The solutions architect should recommend using Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restoring snapshots from RDS for the test database. This option provides high availability and scalability through the Multi-AZ deployment and read replicas. It also allows for easy restoration of the test database from snapshots, minimizing the impact on users during the copy process.

Submit
70. A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets only. Which method should a solutions architect implement to meet this requirement?

Explanation

To meet the requirement of allowing traffic only to trusted buckets, a solutions architect should create an S3 endpoint policy for each of the company's S3 gateway endpoints. This policy should provide access to the Amazon Resource Name (ARN) of the trusted S3 buckets. By specifying the ARN of the trusted buckets in the endpoint policy, access will be restricted to only those specific buckets, ensuring that traffic is allowed only to trusted buckets.
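A minimal sketch of such an endpoint policy follows: an Allow statement scoped to the trusted bucket ARNs, which implicitly denies traffic to every other bucket through the endpoint. The bucket name is a placeholder; the document would be serialized with `json.dumps` and attached via the EC2 `modify_vpc_endpoint` call.

```python
# Sketch: an S3 gateway endpoint policy that permits traffic only to
# trusted buckets. Any bucket not listed is unreachable through this
# endpoint. The bucket name is a placeholder.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::trusted-bucket",      # bucket-level actions
                "arn:aws:s3:::trusted-bucket/*",    # object-level actions
            ],
        }
    ],
}
```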

Submit
71. A company recently expanded globally and wants to make its application accessible to users in those geographic locations. The application is deployed on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The company needs the ability to shift traffic from resources in one Region to another. What should a solutions architect recommend?

Explanation

A solutions architect should recommend configuring an Amazon Route 53 geolocation routing policy. This routing policy allows the company to direct traffic to different resources based on the geographic location of the user. Since the company wants to make its application accessible to users in different geographic locations, this routing policy will enable them to shift traffic from resources in one region to another, ensuring an optimal user experience.
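A minimal sketch of the record sets behind such a geolocation policy is shown below; the domain, endpoints, and continent-to-ALB mapping are hypothetical:

```python
def geolocation_records(domain, endpoints_by_continent, default_endpoint):
    """Build Route 53 ChangeBatch entries that route users to a regional
    endpoint based on their continent, with a default for everyone else."""
    changes = []
    for continent, endpoint in sorted(endpoints_by_continent.items()):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": f"geo-{continent}",
                "GeoLocation": {"ContinentCode": continent},
                "ResourceRecords": [{"Value": endpoint}],
            },
        })
    # A default record catches locations that match no geolocation rule.
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": "geo-default",
            "GeoLocation": {"CountryCode": "*"},
            "ResourceRecords": [{"Value": default_endpoint}],
        },
    })
    return {"Changes": changes}

batch = geolocation_records(
    "app.example.com",
    {"EU": "alb-eu.example.com", "NA": "alb-us.example.com"},
    "alb-us.example.com",
)
# Two continent records plus one default record
print(len(batch["Changes"]))
```

Shifting traffic between Regions then becomes a matter of updating which endpoint a given geolocation record points to.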

72. A company is using Amazon EC2 to run its big data analytics workloads. These variable workloads run each night, and it is critical they finish by the start of business the following day. A solutions architect has been tasked with designing the MOST cost-effective solution. Which solution will accomplish this?

Explanation

Reserved Instances are the most cost-effective solution for running variable workloads that have a predictable schedule. By purchasing Reserved Instances, the company can commit to using a specific instance type in a specific region for a one or three-year term, which provides a significant discount compared to On-Demand Instances. This allows the company to save costs while ensuring the availability of the required resources for their big data analytics workloads. Spot Instances may provide even greater cost savings, but they are not suitable for workloads that have strict time constraints and need to finish by a specific time.

73. A company has migrated an on-premises Oracle database to an Amazon RDS for Oracle Multi-AZ DB instance in the us-east-1 Region. A solutions architect is designing a disaster recovery strategy to have the database provisioned in the us-west-2 Region in case the database becomes unavailable in the us-east-1 Region. The design must ensure the database is provisioned in the us-west-2 Region in a maximum of 2 hours, with a data loss window of no more than 3 hours. How can these requirements be met?

Explanation

To meet the requirements of provisioning the database in the us-west-2 Region within 2 hours and having a data loss window of no more than 3 hours, the solution is to edit the DB instance and create a read replica in us-west-2. By creating a read replica, the data from the primary database in the us-east-1 Region will be asynchronously replicated to the read replica in us-west-2. In case of a disaster in the us-east-1 Region, the read replica can be promoted to become the master database in us-west-2, ensuring minimal downtime and data loss.
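As an illustration, the cross-Region read replica described above could be created with boto3-style parameters along these lines; the instance identifiers and ARN are hypothetical, and cross-Region sources must be referenced by ARN:

```python
def read_replica_params(source_db_arn, replica_id, source_region):
    """Parameters for rds.create_db_instance_read_replica, called from a
    client in the destination Region (us-west-2 in this scenario)."""
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_db_arn,  # ARN of the source DB
        "SourceRegion": source_region,  # lets boto3 presign the cross-Region copy
    }

params = read_replica_params(
    "arn:aws:rds:us-east-1:123456789012:db:prod-oracle",  # placeholder ARN
    "prod-oracle-replica-west",
    "us-east-1",
)
print(params["DBInstanceIdentifier"])
```

In a disaster, promoting the replica (for example with `promote_read_replica`) converts it into a standalone primary in us-west-2.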

74. A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is not in use anymore. Which set of services should a solutions architect recommend to meet these requirements?

Explanation

The recommended set of services includes Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage. Amazon EBS provides high-performance block storage for the company's systems, ensuring fast I/O performance for video processing. Amazon S3 offers durable storage for media content, ensuring that the data remains intact and accessible. Lastly, Amazon S3 Glacier provides long-term archival storage for media that is no longer in use, meeting the company's requirements for storing large amounts of data in a cost-effective manner.

75. A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low. If an Availability Zone fails, how can the company remain compliant with the SLA?

Explanation

By adding a target tracking scaling policy with a short cooldown period, the company can remain compliant with the SLA even if an Availability Zone fails. This policy will automatically adjust the number of instances based on a target metric, such as CPU utilization or request count per target. With a short cooldown period, the scaling actions will be triggered quickly to maintain the desired number of instances, ensuring that the SLA is met. This approach allows the company to efficiently manage costs by scaling up or down as needed without manual intervention.
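Such a target tracking policy might be expressed with parameters like the following; the policy name, target value, and warmup period are illustrative choices, not values from the question:

```python
def target_tracking_policy(target_cpu=50.0, warmup_seconds=60):
    """Parameters for an EC2 Auto Scaling target tracking policy that keeps
    average CPU near the target; a short warmup lets new instances count
    toward the metric quickly, so scaling reacts fast after an AZ failure."""
    return {
        "PolicyName": "keep-cpu-near-target",  # hypothetical name
        "PolicyType": "TargetTrackingScaling",
        "EstimatedInstanceWarmup": warmup_seconds,
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    }

policy = target_tracking_policy()
print(policy["PolicyType"])
```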

76. A solutions architect is designing the cloud architecture for a new application being deployed to AWS. The application allows users to interactively download and upload files. Files older than 2 years will be accessed less frequently. The solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability. Which scalable solutions should the solutions architect recommend? (Choose two.)

Explanation

The recommended solutions are to store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier and S3 Standard-Infrequent Access (S3 Standard-IA). This approach ensures scalability by utilizing the storage options provided by Amazon S3, which can handle any number of files. By using a lifecycle policy, files older than 2 years are automatically moved to lower-cost storage options like S3 Glacier and S3 Standard-IA, reducing costs while still maintaining high availability and durability.
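The lifecycle policy described above can be sketched as a rule definition; moving to Glacier one year after the Standard-IA transition is an illustrative choice (the question only fixes the 2-year boundary):

```python
DAYS_PER_YEAR = 365

def lifecycle_rule_for_old_files(years=2):
    """S3 lifecycle rule moving objects to Standard-IA after `years` years
    and then to Glacier a year later."""
    ia_days = years * DAYS_PER_YEAR
    return {
        "ID": "archive-old-files",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to every object in the bucket
        "Transitions": [
            {"Days": ia_days, "StorageClass": "STANDARD_IA"},
            {"Days": ia_days + DAYS_PER_YEAR, "StorageClass": "GLACIER"},
        ],
    }

rule = lifecycle_rule_for_old_files()
print(rule["Transitions"])
```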

77. A company wants to migrate a workload to AWS. The chief information security officer requires that all data be encrypted at rest when stored in the cloud. The company wants complete control of encryption key lifecycle management. The company must be able to immediately remove the key material and audit key usage independently of AWS CloudTrail. The chosen services should integrate with other storage services that will be used on AWS. Which service satisfies these security requirements?

Explanation

AWS CloudHSM with the CloudHSM client satisfies the security requirements because it provides complete control of encryption key lifecycle management. With CloudHSM, the company can immediately remove the key material and audit key usage independently of AWS CloudTrail. Additionally, CloudHSM integrates with other storage services on AWS, allowing the company to securely store and manage their encryption keys while migrating their workload to the cloud.

78. A company is using a tape backup solution to store its key application data offsite. The daily data volume is around 50 TB. The company needs to retain the backups for 7 years for regulatory purposes. The backups are rarely accessed, and a week's notice is typically given if a backup needs to be restored. The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes. The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions. Which storage solution is MOST cost-effective?

Explanation

The most cost-effective storage solution in this scenario would be to use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier. This option allows for a seamless transition from tape backups to the cloud, minimizing disruptions. Snowball Edge is a physical device that can be used to transfer large amounts of data offline, which is ideal for the company's 50 TB daily data volume. By directly integrating with Amazon S3 Glacier, the company can take advantage of the low-cost storage solution for long-term retention of backups.

79. A company relies on an application that needs at least 4 Amazon EC2 instances during regular traffic and must scale up to 12 EC2 instances during peak loads. The application is critical to the business and must be highly available. Which solution will meet these requirements?

Explanation

Deploying the EC2 instances in an Auto Scaling group with a minimum of 4 and a maximum of 12, with 2 instances in Availability Zone A and 2 instances in Availability Zone B, will meet the requirements. This configuration ensures that the application has at least 4 instances during regular traffic, providing the necessary capacity. During peak loads, the Auto Scaling group will automatically scale up to a maximum of 12 instances, allowing the application to handle the increased demand. Distributing the instances across Availability Zones also improves the availability of the application, as it can continue to operate even if one Availability Zone experiences issues.
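A sketch of the Auto Scaling group parameters follows; the group name and Availability Zone identifiers are hypothetical placeholders:

```python
def asg_params(min_size=4, max_size=12, zones=("us-east-1a", "us-east-1b")):
    """Parameters for an Auto Scaling group that keeps at least `min_size`
    instances spread across the given Availability Zones and can grow to
    `max_size` during peak load."""
    return {
        "AutoScalingGroupName": "critical-app-asg",  # hypothetical name
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": min_size,  # start at the baseline capacity
        "AvailabilityZones": list(zones),
    }

params = asg_params()
print(params["MinSize"], params["MaxSize"])
```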

80. A company has created an isolated backup of its environment in another Region. The application is running in warm standby mode and is fronted by an Application Load Balancer (ALB). The current failover process is manual and requires updating a DNS alias record to point to the secondary ALB in another Region. What should a solution architect do to automate the failover process?

Explanation

To automate the failover process, a solutions architect should create a CNAME record on Amazon Route 53 pointing to the ALB endpoint. Paired with a Route 53 health check on the primary endpoint and a failover routing policy, DNS resolution shifts automatically to the secondary ALB in the other Region when the primary becomes unhealthy, so no manual record updates are required.
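A sketch of the primary/secondary record pair that drives DNS failover is shown below; the record name, ALB DNS names, and health check ID are hypothetical:

```python
def failover_records(record_name, primary_dns, secondary_dns, health_check_id):
    """Route 53 ChangeBatch for failover routing: the PRIMARY record is used
    while its health check passes; otherwise Route 53 answers with the
    SECONDARY record automatically."""
    common = {"Name": record_name, "Type": "CNAME", "TTL": 60}
    return {"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            **common,
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "HealthCheckId": health_check_id,
            "ResourceRecords": [{"Value": primary_dns}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            **common,
            "SetIdentifier": "secondary",
            "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": secondary_dns}],
        }},
    ]}

batch = failover_records(
    "app.example.com",
    "primary-alb.us-east-1.elb.amazonaws.com",
    "standby-alb.us-west-2.elb.amazonaws.com",
    "11111111-2222-3333-4444-555555555555",  # placeholder health check ID
)
print(len(batch["Changes"]))
```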

81. A company has recently updated its internal security standards. The company must now ensure all Amazon S3 buckets and Amazon Elastic Block Store (Amazon EBS) volumes are encrypted with keys created and periodically rotated by internal security specialists. The company is looking for a native, software-based AWS service to accomplish this goal. What should a solutions architect recommend as a solution?

Explanation

The correct answer is to use an AWS CloudHSM cluster with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the CloudHSM cluster nodes. This solution meets the requirement of encrypting Amazon S3 buckets and Amazon EBS volumes with keys created and periodically rotated by internal security specialists. AWS CloudHSM provides secure and dedicated hardware security modules (HSMs) to store and manage cryptographic keys. By using a routine to periodically re-create and replace the keys in the CloudHSM cluster nodes, the company can ensure the encryption keys are regularly updated and meet the updated security standards.

82. A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for each of its developer accounts. The company has created a central AWS account for streamlining management and audit reviews. An internal auditor needs to access the CloudTrail logs, yet access needs to be restricted for all developer account users. The solution must be secure and optimized. How should a solutions architect meet these requirements?

Explanation

The correct answer is to configure an AWS Lambda function in each developer account to copy the log files to the central account. This solution ensures that the CloudTrail logs from each developer account are securely and efficiently transferred to the central account. By creating an IAM role in the central account for the auditor and attaching an IAM policy with read-only permissions to the bucket, the auditor can access the logs without granting unnecessary access to the developer account users. This solution meets the requirements of providing secure and optimized access to the CloudTrail logs.
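The read-only access granted to the auditor might look like the bucket policy sketched below; the bucket name and role ARN are hypothetical placeholders:

```python
def auditor_bucket_policy(bucket_name, auditor_role_arn):
    """Bucket policy on the central log bucket granting read-only access
    to the auditor's IAM role and nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AuditorReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": auditor_role_arn},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket_name}",     # needed for ListBucket
                f"arn:aws:s3:::{bucket_name}/*",   # needed for GetObject
            ],
        }],
    }

policy = auditor_bucket_policy(
    "central-cloudtrail-logs",                       # placeholder bucket
    "arn:aws:iam::123456789012:role/AuditorRole",    # placeholder role ARN
)
print(policy["Statement"][0]["Sid"])
```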

83. A company is hosting multiple websites for several lines of business under its registered parent domain. Users accessing these websites will be routed to appropriate backend Amazon EC2 instances based on the subdomain. The websites host static webpages, images, and server-side scripts like PHP and JavaScript. Some of the websites experience peak access during the first two hours of business with constant usage throughout the rest of the day. A solutions architect needs to design a solution that will automatically adjust capacity to these traffic patterns while keeping costs low. Which combination of AWS services or features will meet these requirements? (Choose two.)

Explanation

Amazon EC2 Auto Scaling is a service that automatically adjusts the capacity of EC2 instances based on the traffic patterns. It can scale up or down the number of instances to handle high or low traffic, ensuring that the websites have enough capacity during peak hours and reducing costs during low traffic periods. Amazon S3 website hosting is a feature that allows hosting static websites directly from an S3 bucket, providing a cost-effective solution for hosting static webpages and images. Combining these two services will provide the required scalability and cost efficiency for hosting the company's websites.

84. A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost. How can these requirements be met?

Explanation

To meet the requirements of minimizing bandwidth costs and allowing for immediate retrieval of data at no additional cost, the best solution is to deploy AWS Storage Gateway using stored volumes to store data locally. This allows the company to retain copies of frequently accessed data subsets locally, reducing the need for frequent data retrieval from Amazon S3 and minimizing bandwidth costs. Additionally, using Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3 ensures data protection and availability without incurring additional costs for immediate retrieval.

85. A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for occasional analysis. What is the MOST cost-effective storage solution alternative to the current configuration?

Explanation

Configuring a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days is the most cost-effective storage solution alternative. This is because S3 One Zone-IA offers lower storage costs compared to S3 Standard-IA, while still providing immediate access to the data. By transitioning the objects to S3 One Zone-IA, the company can save on storage costs without sacrificing accessibility for occasional analysis.
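As a sketch, the One Zone-IA transition could be defined with a rule like the one below; the `results/` prefix is a hypothetical choice, and 30 days respects the minimum age S3 requires before transitioning to One Zone-IA:

```python
def transition_after_analysis(days=30):
    """Lifecycle rule moving daily results to S3 One Zone-IA once the
    week of active analysis is over, while keeping them immediately
    retrievable for occasional access."""
    return {
        "ID": "results-to-onezone-ia",
        "Status": "Enabled",
        "Filter": {"Prefix": "results/"},  # hypothetical key prefix
        "Transitions": [{"Days": days, "StorageClass": "ONEZONE_IA"}],
    }

rule = transition_after_analysis()
print(rule["Transitions"][0]["StorageClass"])
```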

86. A solutions architect is designing a customer-facing application. The application is expected to have a variable amount of reads and writes depending on the time of year and clearly defined access patterns throughout the year. Management requires that database auditing and scaling be managed in the AWS Cloud. The Recovery Point Objective (RPO) must be less than 5 hours. Which solutions can accomplish this? (Choose two.)

Explanation

The question asks for solutions that can accomplish a variable amount of reads and writes, with clearly defined access patterns, while also managing database auditing and scaling in the AWS Cloud.

Option 1: Using Amazon DynamoDB with auto scaling allows the application to handle the variable amount of reads and writes. On-demand backups ensure data recovery and AWS CloudTrail provides auditing capabilities.

Option 2: Similar to option 1, using Amazon DynamoDB with auto scaling handles the variable workload. On-demand backups ensure data recovery and Amazon DynamoDB Streams provide auditing capabilities.

Both options fulfill the requirements of the question.

87. A company is designing a web application using AWS that processes insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not be lost. The solution should be simple to set up and maintain. Which solution meets these requirements?

Explanation

The correct solution is to create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type and deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. This solution meets the requirements of separating quotes by quote type, responding within 24 hours, and ensuring quotes are not lost. The web application can send messages to the appropriate delivery stream, and the backend application servers can search for and process the messages from Amazon ES accordingly. This solution is simple to set up and maintain.
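The per-type separation could be organized with one delivery stream per quote type, as in this sketch; the quote types and naming scheme are hypothetical:

```python
QUOTE_TYPES = ["auto", "home", "life"]  # hypothetical quote types

def delivery_stream_names(quote_types):
    """Map each quote type to its own Kinesis Data Firehose delivery
    stream name, so each type is buffered and delivered independently
    and no quote is mixed in with another type."""
    return {qt: f"quotes-{qt}-stream" for qt in quote_types}

streams = delivery_stream_names(QUOTE_TYPES)
for quote_type, stream in streams.items():
    print(quote_type, "->", stream)
```

The web tier would then publish each quote request to the stream matching its type, and the backend searches the corresponding index in Amazon ES.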

Quiz Review Timeline (Updated): Mar 21, 2023


  • Current Version
  • Mar 21, 2023
    Quiz Edited by
    ProProfs Editorial Team
  • Aug 13, 2020
    Quiz Created by
    Siva Neelam