AWS-02

By Siva Neelam
  • 1. 

    A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?

    • Use AWS Snowball
    • Use AWS DataSync
    • Use a secure VPN connection
    • Use Amazon S3 Transfer Acceleration
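
The arithmetic behind this scenario makes the constraint concrete: at 15 Mbps capped at 70% utilization, the link cannot move 20 TB within 30 days, which is why an offline transfer device such as AWS Snowball fits. A quick check (decimal units assumed, 1 TB = 10^12 bytes):

```python
# Back-of-the-envelope transfer-time check (decimal units: 1 TB = 10**12 bytes).
data_bits = 20 * 10**12 * 8        # 20 TB expressed in bits
usable_bps = 15 * 10**6 * 0.70     # 70% of the 15 Mbps link
transfer_days = data_bits / usable_bps / 86_400

print(f"{transfer_days:.0f} days")  # roughly 176 days -- far beyond 30
```
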
About This Quiz

AWS-02 focuses on optimizing costs and enhancing functionality in AWS environments. It assesses skills in choosing cost-effective solutions for different operational needs, configuring shared storage, and ensuring high availability and regional access controls for web applications.


Quiz Preview

  • 2. 

    A solutions architect is working on optimizing a legacy document management application running on Microsoft Windows Server in an on-premises data center. The application stores a large number of files on a network file share. The chief information officer wants to reduce the on-premises data center footprint and minimize storage costs by moving on-premises storage to AWS. What should the solutions architect do to meet these requirements?

    • Set up an AWS Storage Gateway file gateway

    • Set up Amazon Elastic File System (Amazon EFS)

    • Set up AWS Storage Gateway as a volume gateway

    • Set up an Amazon Elastic Block Store (Amazon EBS) volume

    Correct Answer
    Set up an AWS Storage Gateway file gateway
    Explanation
    To meet the requirements of reducing the on-premises data center footprint and minimizing storage costs, the solutions architect should set up an AWS Storage Gateway file gateway. This service allows the application to store files in Amazon S3, reducing the need for on-premises storage. The file gateway provides a seamless integration between the application and Amazon S3, allowing the files to be accessed and managed in the same way as they were on the network file share. This solution enables cost savings by leveraging the scalable and cost-effective storage of Amazon S3 while still providing the necessary functionality for the legacy document management application.

  • 3. 

    A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays and wants to provide a consistent user experience. How can a solutions architect meet this requirement?

    • Use step scaling

    • Use simple scaling

    • Use lifecycle hooks

    • Use scheduled scaling.

    Correct Answer
    Use scheduled scaling.
    Explanation
    To meet the requirement of providing a consistent user experience during spikes in traffic on specific holidays, a solutions architect can use scheduled scaling. With scheduled scaling, the architect can configure the auto scaling group to automatically adjust the number of EC2 instances based on predefined schedules. This allows the architect to anticipate the spikes in traffic during holidays and scale up the resources accordingly, ensuring that the website can handle the increased load and provide a consistent user experience.

  • 4. 

    A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer. Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a message including a phone number and email address that users can reach if the primary website is down. How should the company deploy this solution?

    • Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.

    • Use Amazon S3 website hosting for the backup website and Route 53 latency routing policy.

    • Deploy the application in another AWS Region and use ELB health checks for failover routing.

    • Deploy the application in another AWS Region and use server-side redirection on the primary website

    Correct Answer
    Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.
    Explanation
    The company should use Amazon S3 website hosting for the backup website and Route 53 failover routing policy. This solution allows the company to host the backup website on Amazon S3, which provides high availability and durability. Route 53's failover routing policy ensures that traffic is directed to the backup website if the primary website is down. This setup allows users to reach the backup website and contact the company through the provided phone number and email address.

  • 5. 

    A company's website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company's website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time. Which combination should a solutions architect recommend to meet these requirements?

    • Amazon CloudFront and Amazon S3

    • AWS Lambda and Amazon DynamoDB

    • Application Load Balancer with Amazon EC2 Auto Scaling

    • Amazon Route 53 with internal Application Load Balancers

    Correct Answer
    Amazon CloudFront and Amazon S3
    Explanation
    The combination of Amazon CloudFront and Amazon S3 is the recommended solution because it meets all the given requirements. Amazon CloudFront is a content delivery network (CDN) that provides low latency and high transfer speeds globally. It can distribute the downloadable historical performance reports efficiently to users around the world, ensuring the fastest possible response time. Amazon S3 is a cost-effective and scalable storage service that can securely store the reports. This combination eliminates the need for provisioning infrastructure resources, as both services are managed by AWS, making it a cost-effective solution.

  • 6. 

    A company wants to deploy a shared file system for its .NET application servers and Microsoft SQL Server database running on Amazon EC2 instances with Windows Server 2016. The solution must integrate into the corporate Active Directory domain, be highly durable, be managed by AWS, and provide high levels of throughput and IOPS. Which solution meets these requirements?

    • Use Amazon FSx for Windows File Server

    • Use Amazon Elastic File System (Amazon EFS)

    • Use AWS Storage Gateway in file gateway mode

    • Deploy a Windows file server on two On Demand instances across two Availability Zones

    Correct Answer
    Use Amazon FSx for Windows File Server
    Explanation
    Amazon FSx for Windows File Server is the correct solution for this scenario. FSx for Windows File Server provides a fully managed shared file system that is integrated with the corporate Active Directory domain. It offers high durability, is managed by AWS, and provides the required levels of throughput and IOPS. This solution is specifically designed for Windows workloads and is the best fit for the given requirements.

  • 7. 

    A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted on the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers. Which replacement for the on-premises file share is MOST resilient and durable?

    • Migrate the file share to Amazon RDS

    • Migrate the file share to AWS Storage Gateway

    • Migrate the file share to Amazon FSx for Windows File Server

    • Migrate the file share to Amazon Elastic File System (Amazon EFS)

    Correct Answer
    Migrate the file share to Amazon FSx for Windows File Server
    Explanation
    Migrating the file share to Amazon FSx for Windows File Server is the most resilient and durable replacement for the on-premises file share. Amazon FSx for Windows File Server is a fully managed native Windows file system that is built on Windows Server and provides compatibility with Windows applications. It offers high durability and availability, with automatic backups and continuous replication across multiple Availability Zones. This ensures that the data is protected against failures and provides a reliable file storage solution for the IIS web application.

  • 8. 

    An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to enable secure access to the EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns. Which solution will meet these requirements?

    • Set up a VPC peering connection between VPC-A and VPC-B.

    • Set up VPC gateway endpoints for the EC2 instance running in VPC-B.

    • Attach a virtual private gateway to VPC-B and enable routing from VPC-A.

    • Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-B.

    Correct Answer
    Set up a VPC peering connection between VPC-A and VPC-B.
    Explanation
    Setting up a VPC peering connection between VPC-A and VPC-B will meet the requirements of secure access without a single point of failure or bandwidth concerns. VPC peering allows communication between instances in different VPCs using private IP addresses, without the need for internet gateways, VPN connections, or NAT devices. It provides a secure and reliable connection between the two VPCs, ensuring that the application running in VPC-A can access files in the EC2 instance in VPC-B.

  • 9. 

    A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins. Which AWS service meets these requirements?

    • Amazon Aurora

    • Amazon RDS for SQL Server

    • Amazon DynamoDB Streams

    • Amazon DynamoDB on-demand

    Correct Answer
    Amazon Aurora
    Explanation
    Amazon Aurora is the correct answer because it is a fully managed relational database service that is compatible with MySQL and PostgreSQL. It provides the capability to dynamically scale storage capacity, allowing the company to easily adjust the storage capacity as needed. Additionally, Aurora supports table joins, making it suitable for the company's requirement of performing table joins in their web application.

  • 10. 

    A company has an Amazon EC2 instance running in a private subnet that needs to access public websites to download patches and updates. The company does not want external websites to see the EC2 instance's IP address or initiate connections to it. How can a solutions architect achieve this objective?

    • Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed

    • Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway

    • Create a network ACL for the private subnet where the EC2 instance is deployed that only allows access from the IP address range of the public website

    • Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.

    Correct Answer
    Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway.
    Explanation
    To achieve the objective of allowing the EC2 instance in the private subnet to access public websites without revealing its IP address or allowing incoming connections, a NAT gateway can be created in a public subnet. By routing outbound traffic from the private subnet through the NAT gateway, the EC2 instance's IP address is hidden from external websites. This ensures that only outbound connections are initiated from the EC2 instance, providing the desired level of security and privacy.

  • 11. 

    A company is running a two-tier ecommerce website using AWS services. The current architecture uses a public-facing Elastic Load Balancer that sends traffic to Amazon EC2 instances in a private subnet. The static content is hosted on EC2 instances, and the dynamic content is retrieved from a MySQL database. The application is running in the United States. The company recently started selling to users in Europe and Australia. A solutions architect needs to design a solution so their international users have an improved browsing experience. Which solution is MOST cost-effective?

    • Host the entire website on Amazon S3.

    • Use Amazon CloudFront and Amazon S3 to host static images

    • Increase the number of public load balancers and EC2 instances

    • Deploy the two-tier website in AWS Regions in Europe and Australia

    Correct Answer
    Use Amazon CloudFront and Amazon S3 to host static images
    Explanation
    The solution of using Amazon CloudFront and Amazon S3 to host static images is the most cost-effective because it allows for the caching and distribution of static content closer to the international users, reducing latency and improving browsing experience. This solution leverages the global network of CloudFront edge locations to serve the static content from locations closer to the users, resulting in faster load times. Additionally, hosting static images on S3 is cost-effective as it offers low storage and data transfer costs.

  • 12. 

    A solutions architect observes that a nightly batch processing job scales up over the course of an hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete. What should the solutions architect do to meet these requirements?

    • Increase the minimum capacity for the Auto Scaling group

    • Increase the maximum capacity for the Auto Scaling group.

    • Configure scheduled scaling to scale up to the desired compute level

    • Change the scaling policy to add more EC2 instances during each scaling operation

    Correct Answer
    Configure scheduled scaling to scale up to the desired compute level
    Explanation
    To meet the requirements of reaching the desired EC2 capacity quickly and allowing the Auto Scaling group to scale down after batch jobs are complete, the solutions architect should configure scheduled scaling. By setting up a schedule, the Auto Scaling group can automatically scale up to the desired compute level before the batch jobs start at 1 AM every night. This ensures that the peak capacity is reached in a timely manner. Once the batch jobs are complete, the Auto Scaling group can then scale down, optimizing costs and resource utilization.
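
The decision a scheduled action encodes is simple enough to sketch. The following toy model (times and capacities are hypothetical, and this is not the Amazon EC2 Auto Scaling API) scales out shortly before the 1 AM batch window and back in afterward:

```python
from datetime import datetime, time

# Toy model of a scheduled scaling plan (times and capacities are
# hypothetical; this is not the Amazon EC2 Auto Scaling API).
SCHEDULE = [
    (time(0, 45), 10),  # scale out just before the 1 AM batch start
    (time(4, 0), 2),    # scale back in after the jobs normally finish
]

def desired_capacity(now: datetime, baseline: int = 2) -> int:
    """Return the capacity the schedule puts in effect at `now`."""
    capacity = baseline
    for start, target in SCHEDULE:
        if now.time() >= start:
            capacity = target
    return capacity

print(desired_capacity(datetime(2024, 1, 1, 1, 0)))   # 10 during the batch window
print(desired_capacity(datetime(2024, 1, 1, 12, 0)))  # 2 during the day
```

Because the capacity change is driven by the clock rather than by observed load, the fleet is already at full size when the jobs start, instead of ramping up reactively for an hour.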

  • 13. 

    A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?

    • Enable public access on an Amazon S3 bucket.

    • Generate a presigned URL to share with the users

    • Encrypt files using AWS KMS and provide keys to the users

    • Create and assign IAM roles that will grant GetObject permissions to the users

    Correct Answer
    Generate a presigned URL to share with the users
    Explanation
    To securely meet the requirements of providing limited access to users without AWS credentials, a solutions architect should generate a pre-signed URL to share with the users. A pre-signed URL is a time-limited URL that provides temporary access to specific objects in an S3 bucket. This allows the users to access the files without needing AWS credentials, while also ensuring that the access is limited to a specific time period. This approach provides a secure and controlled method for sharing files with external users.
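
The mechanism can be illustrated with a small sketch. This is NOT AWS's actual Signature Version 4 scheme, only the core idea a presigned URL relies on: the URL embeds an expiry time and a signature over it, so the server can grant temporary access to callers who hold no credentials at all:

```python
import hashlib
import hmac
from urllib.parse import urlencode

# Conceptual sketch only -- NOT AWS's actual Signature Version 4 scheme.
SECRET = b"server-side-secret"  # hypothetical signing key, never shared

def presign(key: str, expires_in: int, now: int) -> str:
    expires = now + expires_in
    sig = hmac.new(SECRET, f"{key}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"https://files.example.com/{key}?" + urlencode(
        {"Expires": expires, "Signature": sig})

def verify(key: str, expires: int, sig: str, now: int) -> bool:
    if now > expires:
        return False  # the link has expired
    expected = hmac.new(SECRET, f"{key}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

url = presign("reports/q1.pdf", expires_in=3600, now=1_700_000_000)
```

Tampering with the key or the expiry invalidates the signature, and once the expiry passes the URL stops working, which is exactly the time-limited behavior the question asks for.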

  • 14. 

    A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a hierarchical directory structure. The applications need to rapidly and concurrently read and write to shared storage. How can this be achieved?

    • Create an Amazon EFS file system and mount it from each EC2 instance

    • Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC

    • Create a file system on an Amazon EBS Provisioned IOPS SSD (io1) volume. Attach the volume to all the EC2 instances

    • Create file systems on Amazon EBS volumes attached to each EC2 instance. Synchronize the Amazon EBS volumes across the different EC2 instances

    Correct Answer
    Create an Amazon EFS file system and mount it from each EC2 instance
    Explanation
    To achieve rapid and concurrent read and write access to shared storage, the best solution is to create an Amazon EFS (Elastic File System) file system and mount it from each EC2 instance. Amazon EFS provides a scalable and fully managed file storage service that can be easily shared across multiple instances. By mounting the EFS file system on each instance, the applications can access and modify the hierarchical directory structure concurrently and efficiently. This ensures consistent and reliable access to the shared storage for all instances in the VPC.

  • 15. 

    A company runs an application on Amazon EC2 instances. The application is deployed in private subnets in three Availability Zones of the us-east-1 Region. The instances must be able to connect to the internet to download files. The company wants a design that is highly available across the Region. Which solution should be implemented to ensure that there are no disruptions to internet connectivity?

    • Deploy a NAT Instance in a private subnet of each Availability Zone

    • Deploy a NAT gateway in a public subnet of each Availability Zone

    • Deploy a transit gateway in a private subnet of each Availability Zone

    • Deploy an internet gateway in a public subnet of each Availability Zone

    Correct Answer
    Deploy a NAT gateway in a public subnet of each Availability Zone
    Explanation
    To ensure continuous internet connectivity for the instances in the private subnets, a NAT gateway should be deployed in a public subnet of each Availability Zone. NAT gateway allows instances in the private subnets to connect to the internet while also providing a highly available solution across the Region. Deploying a NAT instance in each Availability Zone would also work, but it is a less preferred option as it requires more management and configuration compared to the NAT gateway. Deploying a transit gateway or an internet gateway would not fulfill the requirement of allowing instances in private subnets to connect to the internet.
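
The per-AZ layout is the key to the availability claim. A minimal sketch (subnet and gateway names are hypothetical) of the resulting route tables:

```python
# Illustrative route layout (subnet and gateway names are hypothetical).
# Each AZ's private subnet sends its default route to a NAT gateway in the
# SAME AZ, so losing one AZ cannot black-hole egress for the others.
route_tables = {
    "private-us-east-1a": {"0.0.0.0/0": "nat-gw-1a"},
    "private-us-east-1b": {"0.0.0.0/0": "nat-gw-1b"},
    "private-us-east-1c": {"0.0.0.0/0": "nat-gw-1c"},
}

def egress_target(subnet: str) -> str:
    """Where a private subnet's internet-bound traffic goes."""
    return route_tables[subnet]["0.0.0.0/0"]

print(egress_target("private-us-east-1a"))  # nat-gw-1a
```

A single shared NAT gateway would be cheaper but would reintroduce a single point of failure, which the question explicitly rules out.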

  • 16. 

    A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2 instances behind an Application Load Balancer and a relational database. The database should be highly available and fault tolerant. Which database implementations will meet these requirements? (Choose two.)

    • Amazon Redshift

    • Amazon DynamoDB

    • Amazon RDS for MySQL

    • MySQL-compatible Amazon Aurora Multi-AZ

    • Amazon RDS for SQL Server Standard Edition Multi-AZ

    Correct Answer(s)
    MySQL-compatible Amazon Aurora Multi-AZ
    Amazon RDS for SQL Server Standard Edition Multi-AZ
    Explanation
    MySQL-compatible Amazon Aurora Multi-AZ and Amazon RDS for SQL Server Standard Edition Multi-AZ are both designed to provide high availability and fault tolerance.

    Amazon Aurora Multi-AZ provides automatic failover to a standby replica in the event of a failure, ensuring that the database remains available even in the case of a hardware or software failure.

    Similarly, Amazon RDS for SQL Server Standard Edition Multi-AZ also provides high availability by automatically replicating the database to a standby instance in a different Availability Zone.

    By leveraging these two database implementations, the mission-critical web application can ensure that the database remains highly available and fault tolerant.

  • 17. 

    An ecommerce company has noticed performance degradation of its Amazon RDS-based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application. What should the solutions architect recommend?

    • Export the data to Amazon DynamoDB and have the business analysts run their queries

    • Load the data into Amazon ElastiCache and have the business analysts run their queries

    • Create a read replica of the primary database and have the business analysts run their queries.

    • Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.

    Correct Answer
    Create a read replica of the primary database and have the business analysts run their queries.
    Explanation
    The solution architect should recommend creating a read replica of the primary database and having the business analysts run their queries on it. This solution allows the business analysts to perform their read-only queries without impacting the performance of the primary database. By offloading the read workload to the read replica, the web application's performance degradation can be minimized, and the existing architecture can remain largely unchanged.
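
The "minimal change" is typically a read/write split at the connection layer. A minimal sketch of that dispatch logic (the endpoint names are hypothetical; real code would hold two database connections rather than strings):

```python
# Minimal sketch of the read/write split a read replica enables. The
# endpoint names are hypothetical; real code would hold two database
# connections rather than strings.
PRIMARY = "primary.example.rds.amazonaws.com"
REPLICA = "replica.example.rds.amazonaws.com"

READ_ONLY_VERBS = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN"}

def route(sql: str) -> str:
    """Send read-only statements to the replica, everything else to the primary."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    return REPLICA if verb in READ_ONLY_VERBS else PRIMARY

print(route("SELECT * FROM orders"))           # goes to the replica
print(route("UPDATE orders SET shipped = 1"))  # goes to the primary
```

In practice the analysts could simply be given the replica endpoint directly, leaving the web application untouched.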

  • 18. 

    A company uses an Amazon S3 bucket to store static images for its website. The company configured permissions to allow access to Amazon S3 objects by privileged users only. What should a solutions architect do to protect against data loss? (Choose two.)

    • Enable versioning on the S3 bucket

    • Enable access logging on the S3 bucket.

    • Enable server-side encryption on the S3 bucket

    • Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier

    • Use MFA Delete to require multi-factor authentication to delete an object

    Correct Answer(s)
    Enable versioning on the S3 bucket
    Use MFA Delete to require multi-factor authentication to delete an object
    Explanation
    Enabling versioning on the S3 bucket ensures that multiple versions of each object are stored, allowing the company to recover previous versions in case of accidental deletion or data corruption. Using MFA Delete adds an extra layer of security by requiring multi-factor authentication before an object can be deleted, preventing unauthorized deletion and reducing the risk of data loss.
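
What versioning buys can be shown with a toy illustration (this is not the S3 API): overwrites and deletes append new versions instead of destroying old data, so earlier versions stay recoverable:

```python
from collections import defaultdict

# Toy illustration of versioning semantics (not the S3 API): overwrites
# and deletes append new versions instead of destroying old data.
class VersionedBucket:
    def __init__(self):
        self._versions = defaultdict(list)  # key -> list of versions

    def put(self, key, data):
        self._versions[key].append(data)

    def get(self, key, version=-1):
        return self._versions[key][version]  # latest version by default

    def delete(self, key):
        self._versions[key].append(None)  # a delete marker, not erasure

bucket = VersionedBucket()
bucket.put("logo.png", b"v1")
bucket.put("logo.png", b"v2")  # the overwrite keeps v1 recoverable
bucket.delete("logo.png")      # the delete adds a marker; data survives
print(bucket.get("logo.png", version=0))  # b'v1' is still there
```

MFA Delete then guards the one remaining destructive operation, permanently removing a version, behind a second authentication factor.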

  • 19. 

    A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing. Which solution will meet these requirements?

    • Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud

    • Use an AWS Storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS Cloud

    • Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS

    • Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS Cloud, then perform analytics on this data in the cloud

    Correct Answer
    Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud
    Explanation
    The correct solution is to use an AWS Storage Gateway file gateway to provide file storage to AWS and then perform analytics on this data in the AWS Cloud. This solution allows the company to access the data from on-premises applications for local data processing using an NFS protocol, while also making the data accessible from the AWS Cloud for further analytics and batch processing. The file gateway provides a seamless integration between on-premises and cloud storage, allowing the company to leverage the benefits of both environments for their hybrid workload.

  • 20. 

    A company's application hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Due to data sensitivity, traffic cannot traverse the internet. How should a solutions architect configure access?

    • Create a private hosted zone using Amazon Route 53

    • Configure a VPC gateway endpoint for Amazon S3 in the VPC.

    • Configure AWS PrivateLink between the EC2 instance and the S3 bucket

    • Set up a site-to-site VPN connection between the VPC and the S3 bucket.

    Correct Answer
    Configure a VPC gateway endpoint for Amazon S3 in the VPC.
    Explanation
    To ensure that the company's application can access the Amazon S3 bucket without traffic traversing the internet, a solutions architect should configure a VPC gateway endpoint for Amazon S3 in the VPC. This allows the application to connect directly to the S3 bucket within the VPC, without needing to go over the internet. This ensures a secure and private connection for accessing the sensitive data in the S3 bucket.

  • 21. 

    A web application runs on Amazon EC2 instances behind an Application Load Balancer. The application allows users to create custom reports of historical weather data. Generating a report can take up to 5 minutes. These long-running requests use many of the available incoming connections, making the system unresponsive to other users. How can a solutions architect make the system more responsive?

    • Use Amazon SQS with AWS Lambda to generate reports

    • Increase the idle timeout on the Application Load Balancer to 5 minutes

    • Update the client-side application code to increase its request timeout to 5 minutes

    • Publish the reports to Amazon S3 and use Amazon CloudFront for downloading to the user.

    Correct Answer
    Use Amazon SQS with AWS Lambda to generate reports
    Explanation
    By using Amazon SQS with AWS Lambda to generate reports, the long-running requests can be offloaded from the web application and processed asynchronously. This means that the web application can quickly respond to other users' requests, making the system more responsive. SQS acts as a buffer, storing the requests until they can be processed by the Lambda function. This solution allows for scalability and ensures that the system can handle a large number of requests without becoming unresponsive.
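
The decoupling pattern can be sketched in a few lines (names are illustrative, not the SQS API): the web tier enqueues report jobs and returns immediately, while a background worker, Lambda in the AWS design, drains the queue:

```python
import queue
import threading

# Sketch of the decoupling pattern (names are illustrative, not the SQS
# API): the web tier enqueues report jobs and returns immediately, while
# a background worker -- Lambda, in the AWS design -- drains the queue.
jobs: "queue.Queue[str]" = queue.Queue()
completed: list = []

def handle_request(report_id: str) -> str:
    jobs.put(report_id)             # enqueue instead of blocking for minutes
    return f"accepted:{report_id}"  # respond to the user right away

def worker() -> None:
    while True:
        report_id = jobs.get()
        completed.append(f"report:{report_id}")  # stand-in for slow work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
print(handle_request("wx-2024"))  # returns immediately
jobs.join()                       # the report is generated in the background
```

Because the connection is released as soon as the job is queued, slow report generation no longer starves the load balancer of connections.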

  • 22. 

    A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements?

    • Amazon EBS

    • Amazon EC2

    • Amazon FSx

    • Amazon S3

    Correct Answer
    Amazon FSx
    Explanation
    Amazon FSx is the correct answer because it is an AWS managed service that provides fully managed Windows file servers that are accessible using the Server Message Block (SMB) protocol. It is designed for migrating Windows-based applications that require file storage, making it suitable for the company's file server workload migration. Amazon EBS and Amazon S3 are not specifically designed for SMB protocol access, while Amazon EC2 is a virtual server and does not provide a fully managed file server solution.

  • 23. 

    A company has established a new AWS account. The account is newly provisioned and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user. What should be done to secure the root user?

    • Create IAM users for daily administrative tasks. Disable the root user

    • Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.

    • Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console

    • Provide the root user credentials to the most senior solution architect. Have the solution architect use the root user for daily administration tasks

    Correct Answer
    Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
    Explanation
    To secure the root user of the newly provisioned AWS account, it is recommended to create IAM users for daily administrative tasks and enable multi-factor authentication (MFA) on the root user. By creating IAM users, the root user's credentials are not used for daily tasks, reducing the risk of unauthorized access. Enabling MFA adds an extra layer of security by requiring an additional authentication factor, such as a code from a mobile app or a physical device, to access the account. This helps protect against unauthorized access even if the root user's password is compromised.

  • 24. 

    An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs. Which solution is the MOST cost-effective?

    • DEV with Spot Instances and PROD with On-Demand Instances

    • DEV with On-Demand Instances and PROD with Spot Instances

    • DEV with Scheduled Reserved Instances and PROD with Reserved Instances

    • DEV with On-Demand Instances and PROD with Scheduled Reserved Instances

    Correct Answer
    DEV with Scheduled Reserved Instances and PROD with Reserved Instances
    Explanation
    The most cost-effective solution is to use DEV with Scheduled Reserved Instances and PROD with Reserved Instances. This strategy allows for the utilization of reserved instances, which offer significant cost savings compared to on-demand instances. By using scheduled reserved instances for DEV, the instances can be run for a specific number of hours each day, aligning with the required 10-hour runtime. For PROD, running the instances 24/7 makes the use of reserved instances the most cost-effective option. This strategy optimizes costs by leveraging reserved instances for both environments while efficiently utilizing the instances based on their specific requirements.
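
The saving is easy to see with back-of-the-envelope math. The hourly rates below are HYPOTHETICAL (real prices vary by instance type and Region); the shape of the calculation is the point, not the numbers:

```python
# Back-of-the-envelope comparison with HYPOTHETICAL hourly rates (real
# prices vary by instance type and Region).
ON_DEMAND_RATE = 0.10  # $/hour, hypothetical
RESERVED_RATE = 0.06   # $/hour effective, hypothetical ~40% discount

dev_hours = 10 * 365   # DEV runs 10 hours/day
prod_hours = 24 * 365  # PROD runs 24 hours/day

all_on_demand = (dev_hours + prod_hours) * ON_DEMAND_RATE
reserved_mix = (dev_hours + prod_hours) * RESERVED_RATE  # Scheduled RI + RI

print(f"all On-Demand:     ${all_on_demand:,.0f}/year")
print(f"Scheduled RI + RI: ${reserved_mix:,.0f}/year")
```

Spot Instances would be cheaper still but can be interrupted, which makes them unsuitable for a production environment that must run continuously.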

  • 25. 

    A company's web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only. Which configuration will meet this requirement?

    • Configure the security group for the EC2 instances

    • Configure the security group on the Application Load Balancer

    • Configure AWS WAF on the Application Load Balancer in a VPC.

    • Configure the network ACL for the subnet that contains the EC2 instances.

    Correct Answer
    A. Configure AWS WAF on the Application Load Balancer in a VPC.
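    A geographic restriction like this is expressed as a WAF geo-match rule attached to the load balancer's web ACL. The dict below sketches the shape of such a rule following the WAFv2 API; the rule name and country code are placeholders for illustration.

```python
# Sketch of a WAFv2 rule that allows traffic only from one country.
# Rule name and country code ("US") are placeholder values.
geo_allow_rule = {
    "Name": "allow-one-country",
    "Priority": 0,
    "Statement": {
        "GeoMatchStatement": {"CountryCodes": ["US"]}
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-one-country",
    },
}

# With the web ACL's default action set to Block, only requests
# originating from the listed country reach the application.
print(geo_allow_rule["Statement"]["GeoMatchStatement"]["CountryCodes"])
```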
  • 26. 

    A company has a Microsoft Windows-based application that must be migrated to AWS. This application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances. What should a solutions architect do to accomplish this?

    • Configure a volume using Amazon EFS. Mount the EFS volume to each Windows Instance.

    • Configure AWS Storage Gateway in Volume Gateway mode. Mount the volume to each Windows Instance

    • Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance

    • Configure an Amazon EBS volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.

    Correct Answer
    A. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance
    Explanation
    To accomplish the migration of the Microsoft Windows-based application to AWS with a shared Windows file system, the solution architect should configure Amazon FSx for Windows File Server. This service provides a fully managed native Windows file system that is accessible from multiple Amazon EC2 Windows instances. By mounting the Amazon FSx volume to each Windows instance, the application can continue to use the shared file system seamlessly. This option is the most appropriate and efficient solution for the given scenario.

  • 27. 

    A solutions architect must create a highly available bastion host architecture. The solution needs to be resilient within a single AWS Region and should require only minimal effort to maintain. What should the solutions architect do to meet these requirements?

    • Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener

    • Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group

    • Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target

    • Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target

    Correct Answer
    A. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target
    Explanation
    To create a highly available bastion host architecture, the solutions architect should use a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target. This setup ensures that the bastion host is distributed across multiple zones, providing resilience within a single AWS Region. Additionally, using Auto Scaling allows for automatic scaling of the bastion host based on demand, reducing the effort required for maintenance.

  • 28. 

    A company recently deployed a two-tier application in two Availability Zones in the us-east-1 Region. The databases are deployed in a private subnet while the web servers are deployed in a public subnet. An internet gateway is attached to the VPC. The application and database run on Amazon EC2 instances. The database servers are unable to access patches on the internet. A solutions architect needs to design a solution that maintains database security with the least operational overhead. Which solution meets these requirements?

    • Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route

    • Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route.

    • Deploy two NAT instances inside the public subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route

    • Deploy two NAT instances inside the private subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route

    Correct Answer
    A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route
    Explanation
    The correct solution is to deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. By doing this, the database servers in the private subnet will be able to access patches on the internet through the NAT gateway. Updating the routing table of the private subnet to use the NAT gateway as the default route ensures that all outgoing traffic from the private subnet is directed through the NAT gateway, maintaining database security. This solution requires the least operational overhead as it leverages the built-in NAT gateway service provided by AWS.
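    Concretely, the private subnet's route table gets a default route pointing at the NAT gateway. A minimal sketch of those route parameters, following the shape of the EC2 CreateRoute call (the resource IDs are placeholders):

```python
# Default route sending the private subnet's internet-bound traffic
# through the NAT gateway in the public subnet. IDs are placeholders.
nat_default_route = {
    "RouteTableId": "rtb-0example",       # route table of the private subnet
    "DestinationCidrBlock": "0.0.0.0/0",  # all non-local traffic
    "NatGatewayId": "nat-0example",       # NAT gateway in the public subnet
}

# With boto3 this could be applied as, e.g.:
#   ec2 = boto3.client("ec2")
#   ec2.create_route(**nat_default_route)
print(nat_default_route["DestinationCidrBlock"])
```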

  • 29. 

    A company that develops web applications has launched hundreds of Application Load Balancers (ALBs) in multiple Regions. The company wants to create an allow list for the IPs of all the load balancers on its firewall device. A solutions architect is looking for a one-time, highly available solution to address this request, which will also help reduce the number of IPs that need to be allowed by the firewall. What should the solutions architect recommend to meet these requirements?

    • Create an AWS Lambda function to keep track of the IPs for all the ALBs in different Regions, and keep refreshing this list

    • Set up a Network Load Balancer (NLB) with Elastic IPs. Register the private IPs of all the ALBs as targets to this NLB

    • Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.

    • Set up an Amazon EC2 instance, assign an Elastic IP to this EC2 instance, and configure the instance as a proxy to forward traffic to all the ALBs.

    Correct Answer
    A. Launch AWS Global Accelerator and create endpoints for all the Regions. Register all the ALBs in different Regions to the corresponding endpoints.
    Explanation
    The recommended solution is to launch AWS Global Accelerator and create endpoints for all the Regions. By registering all the ALBs in different Regions to the corresponding endpoints, the company can have a one-time, highly available solution to address the request. This solution will also help reduce the number of IPs that need to be allowed by the firewall, making it more efficient and manageable.

  • 30. 

    A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular and a large number of users across the world are accessing this content. This has resulted in a cost increase. Which action will DECREASE cost without compromising user accessibility?

    • Change the EBS volume to Provisioned IOPS (PIOPS).

    • Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution

    • Split the video into multiple, smaller segments so users are routed to the requested video segments only

    • Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket

    Correct Answer
    A. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution
    Explanation
    Storing the video in an Amazon S3 bucket and creating an Amazon CloudFront distribution will decrease cost without compromising user accessibility. Amazon S3 is a cost-effective storage service, and CloudFront is a content delivery network that caches the video content at edge locations worldwide. This means that users can access the video from the nearest edge location, reducing the load on the EBS volume and decreasing costs.

  • 31. 

    A company is investigating potential solutions that would collect, process, and store users' service usage data. The business objective is to create an analytics capability that will enable the company to gather operational insights quickly using standard SQL queries. The solution should be highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the data tier. Which solution should a solutions architect recommend?

    • Use Amazon DynamoDB transactions

    • Create an Amazon Neptune database in a Multi AZ design

    • Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design

    • Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon EBS Throughput Optimized HDD storage

    Correct Answer
    A. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design
    Explanation
    The recommended solution is to use a fully managed Amazon RDS for MySQL database in a Multi-AZ design. This solution ensures high availability and ACID compliance in the data tier. Amazon RDS for MySQL is a managed database service that handles routine tasks like backups, software patching, and automatic failure detection and recovery. The Multi-AZ design provides redundancy by automatically replicating data to a standby instance in a different Availability Zone. This design ensures that data is protected and available even in the event of a failure.

  • 32. 

    A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes. Which method should the solutions architect select?

    • Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint

    • Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.

    • Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint

    • Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.

    Correct Answer
    A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint
    Explanation
    Adding Amazon DynamoDB Accelerator (DAX) to the mobile chat application's data store can significantly reduce the latency for reading new messages. By configuring DAX for the new messages table and updating the code to use the DAX endpoint, the application can benefit from the in-memory caching provided by DAX. This allows for faster access to frequently accessed data, improving the overall performance of the application without requiring major changes to the existing codebase.

  • 33. 

    A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented, or are implemented only partially, resulting in data being stored in high-cost storage. Which solution will lower costs without compromising the availability of objects?

    • Use S3 ACLs.

    • Use Amazon Elastic Block Store (EBS) automated snapshots

    • Use S3 Intelligent-Tiering storage

    • Use S3 One Zone-infrequent Access (S3 One Zone-IA).

    Correct Answer
    A. Use S3 Intelligent-Tiering storage
    Explanation
    Using S3 Intelligent-Tiering storage will lower costs without compromising the availability of objects. This storage class automatically moves objects between two access tiers: frequent access and infrequent access. It monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier, which has a lower storage cost. If the objects are accessed again, they are automatically moved back to the frequent access tier. This ensures that less frequently accessed data is stored in a lower-cost storage tier while still being readily available when needed.
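    Enforcing this consistently can be as simple as one lifecycle rule per bucket that transitions every object into Intelligent-Tiering. A sketch of such a rule, following the shape of the S3 PutBucketLifecycleConfiguration API (the rule ID and bucket name are placeholders):

```python
# Lifecycle rule transitioning all objects to S3 Intelligent-Tiering.
# Rule ID and the example bucket name are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to every object
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}

# With boto3 this could be applied as, e.g.:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["Transitions"][0]["StorageClass"])
```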

  • 34. 

    A solutions architect must design a solution for a persistent database that is being migrated from on-premises to AWS. The database requires 64,000 IOPS according to the database administrator. If possible, the database administrator wants to use a single Amazon Elastic Block Store (Amazon EBS) volume to host the database instance. Which solution effectively meets the database administrator's criteria?

    • Use an instance from the I3 I/O optimized family and leverage local ephemeral storage to achieve the IOPS requirement

    • Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS

    • Create and map an Amazon Elastic File System (Amazon EFS) volume to the database instance and use the volume to achieve the required IOPS for the database.

    • Provision two volumes and assign 32,000 IOPS to each. Create a logical volume at the operating system level that aggregates both volumes to achieve the IOPS requirements.

    Correct Answer
    A. Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS
    Explanation
    The correct solution is to create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached and configure the volume to have 64,000 IOPS. This solution meets the criteria of the database administrator by providing the required IOPS for the database. The Nitro-based instances are optimized for high-performance and can handle the workload efficiently. The use of Provisioned IOPS SSD ensures consistent and predictable performance for the database.
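    One constraint worth checking when sizing the volume: io1 allows up to 50 provisioned IOPS per GiB, so the volume must also be large enough to carry 64,000 IOPS.

```python
import math

REQUIRED_IOPS = 64_000
MAX_IOPS_PER_GIB = 50  # io1 provisioned IOPS-to-size ratio limit

# Smallest io1 volume (in GiB) that can be provisioned with 64,000 IOPS.
min_size_gib = math.ceil(REQUIRED_IOPS / MAX_IOPS_PER_GIB)
print(f"Minimum io1 volume size: {min_size_gib} GiB")
```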

  • 35. 

    A solutions architect is designing an architecture for a new application that requires low network latency and high network throughput between Amazon EC2 instances. Which component should be included in the architectural design?

    • An Auto Scaling group with Spot Instance types

    • Placement group using a cluster placement strategy

    • A placement group using a partition placement strategy

    • A placement group using a spread placement strategy

    Correct Answer
    A. Placement group using a cluster placement strategy
    Explanation
    A placement group using a cluster placement strategy should be included in the architectural design. This is because a cluster placement strategy ensures that EC2 instances are placed in close proximity to each other, reducing network latency. It also allows for high network throughput as it enables instances within the placement group to communicate with each other using enhanced networking. This makes it the ideal choice for an application that requires low network latency and high network throughput between EC2 instances.

  • 36. 

    A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centralized networking setup for multiple accounts, VPCs, and VPNs. Which networking solution meets these requirements?

    • Configure shared VPCs and VPNs and share to each other

    • Configure a hub-and-spoke and route all traffic through VPC peering

    • Configure an AWS Direct Connect between all VPCs and VPNs

    • Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.

    Correct Answer
    A. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.
    Explanation
    A transit gateway with AWS Transit Gateway is the best networking solution for the given scenario. It allows for centralized networking setup by connecting multiple VPCs and VPNs. This solution can accommodate the company's growth to hundreds of VPCs and handle the new requests for site-to-site VPNs. With a transit gateway, all VPCs and VPNs can be connected, making it easier to manage and maintain the network architecture.

  • 37. 

    A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Which method is the MOST cost-effective for hosting the website?

    • Containerize the website and host it in AWS Fargate.

    • Create an Amazon S3 bucket and host the website there

    • Deploy a web server on an Amazon EC2 instance to host the website

    • Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework

    Correct Answer
    A. Create an Amazon S3 bucket and host the website there
    Explanation
    Creating an Amazon S3 bucket and hosting the website there is the most cost-effective method for hosting the website. Amazon S3 is a highly scalable and cost-efficient storage service that allows users to store and retrieve any amount of data at any time. It is designed for high durability, availability, and performance. By hosting the website in an S3 bucket, the development team can take advantage of the low cost of storage and data transfer, eliminating the need for managing and maintaining servers or containers. Additionally, S3 provides built-in features for website hosting, making it easy to configure and manage the website.
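    Enabling static hosting on the bucket takes only a small configuration. The dict below sketches the shape of the S3 PutBucketWebsite payload; the document names and example bucket are illustrative.

```python
# S3 static website configuration; document names are illustrative.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},  # served for directory requests
    "ErrorDocument": {"Key": "error.html"},     # served on 4xx errors
}

# With boto3 this could be applied as, e.g.:
#   s3 = boto3.client("s3")
#   s3.put_bucket_website(Bucket="example-bucket",
#                         WebsiteConfiguration=website_config)
print(website_config["IndexDocument"]["Suffix"])
```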

  • 38. 

    A company has two applications it wants to migrate to AWS. Both applications process a large set of files by accessing the same files at the same time. Both applications need to read the files with low latency. Which architecture should a solutions architect recommend for this situation?

    • Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data.

    • Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data.

    • Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data

    • Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data

    Correct Answer
    A. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data
    Explanation
    The recommended architecture is to configure two Amazon EC2 instances to run both applications and to configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data. This architecture allows both applications to access the same files at the same time with low latency. Amazon EFS provides a scalable file storage system that can handle concurrent access from multiple instances, making it suitable for this scenario. The General Purpose performance mode ensures low latency for file access, and Bursting Throughput mode allows for bursts of high throughput when needed.

  • 39. 

    A company is creating an architecture for a mobile app that requires minimal latency for its users. The company's architecture consists of Amazon EC2 instances behind an Application Load Balancer running in an Auto Scaling group. The EC2 instances connect to Amazon RDS. Application beta testing showed there was a slowdown when reading the data. However, the metrics indicate that the EC2 instances do not cross any CPU utilization thresholds. How can this issue be addressed?

    • Reduce the threshold for CPU utilization in the Auto Scaling group

    • Replace the Application Load Balancer with a Network Load Balancer

    • Add read replicas for the RDS instances and direct read traffic to the replica

    • Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance.

    Correct Answer
    A. Add read replicas for the RDS instances and direct read traffic to the replica
    Explanation
    To address the slowdown in reading data while minimizing latency, the company should add read replicas for the RDS instances and direct read traffic to the replica. By adding read replicas, the workload can be distributed across multiple instances, reducing the load on the main RDS instance and improving read performance. This solution is more effective than reducing the CPU utilization threshold or replacing the load balancer. Adding Multi-AZ support to the RDS instances would improve availability but may not directly address the latency issue.

  • 40. 

    A solutions architect is designing a hybrid application using the AWS Cloud. The network between the on-premises data center and AWS will use an AWS Direct Connect (DX) connection. The application connectivity between AWS and the on-premises data center must be highly resilient. Which DX configuration should be implemented to meet these requirements?

    • Configure a DX connection with a VPN on top of it

    • Configure DX connections at multiple DX locations

    • Configure a DX connection using the most reliable DX partner

    • Configure multiple virtual interfaces on top of a DX connection.

    Correct Answer
    A. Configure DX connections at multiple DX locations
    Explanation
    To ensure highly resilient application connectivity between AWS and the on-premises data center, it is recommended to configure DX connections at multiple DX locations. This configuration provides redundancy and fault tolerance by establishing multiple connections between the on-premises data center and AWS. If one DX location or connection fails, the application traffic can still be routed through the remaining connections, ensuring continuous connectivity and minimizing downtime.

  • 41. 

    A company needs to implement a relational database with a multi-Region disaster recovery strategy that provides a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. Which AWS solution can achieve this?

    • Amazon Aurora Global Database

    • Amazon DynamoDB global tables

    • Amazon RDS for MySQL with Multi-AZ enabled

    • Amazon RDS for MySQL with a cross-Region snapshot copy

    Correct Answer
    A. Amazon Aurora Global Database
    Explanation
    Amazon Aurora Global Database is the correct answer because it is designed to provide low-latency global access to a single database with a replication lag of less than 1 second. It also has the ability to automatically failover to a secondary region within 1 minute, meeting the RPO and RTO requirements mentioned in the question.

  • 42. 

    A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files. Which storage option meets these requirements?

    • S3 Standard

    • S3 Intelligent-Tiering

    • S3 Standard-Infrequent Access (S3 Standard-IA)

    • S3 One Zone-Infrequent Access (S3 One Zone-IA)

    Correct Answer
    A. S3 Intelligent-Tiering
    Explanation
    S3 Intelligent-Tiering is the best storage option for this scenario because it automatically moves data between two access tiers based on its usage patterns. Frequently accessed files will be stored in the frequent access tier, while rarely accessed files will be moved to the infrequent access tier. This allows for cost optimization as the architect only pays for the storage and retrieval of files based on their actual usage. Additionally, S3 Intelligent-Tiering provides resilience to the loss of an Availability Zone by replicating data across multiple zones.

  • 43. 

    A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3?

    • Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container

    • Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition

    • Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster

    • Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account

    Correct Answer
    A. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition
    Explanation
    To ensure that the application has permission to access Amazon S3, a solutions architect should create an IAM role with S3 permissions. This role can then be specified as the taskRoleArn in the task definition. By doing this, the application running on Amazon ECS will be granted the necessary permissions to make API calls to Amazon S3 and store the resized images.
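    The wiring point is the task definition itself. The fragment below shows where taskRoleArn goes; the role ARN, account ID, and image are placeholders.

```python
# ECS task definition fragment; ARN, account ID, and image are placeholders.
task_definition = {
    "family": "image-resizer",
    # Role whose S3 permissions the containers inherit at runtime:
    "taskRoleArn": "arn:aws:iam::123456789012:role/image-resizer-s3-role",
    "containerDefinitions": [
        {
            "name": "resizer",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "essential": True,
        }
    ],
}

# Containers in this task receive temporary credentials for the role,
# so S3 API calls are authorized without embedding access keys in the image.
print(task_definition["taskRoleArn"])
```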

  • 44. 

    A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3. What should a solutions architect recommend to satisfy these requirements?

    • Server-side encryption with customer-provided encryption keys

    • Client-side encryption with Amazon S3 managed encryption keys

    • Server-side encryption with keys stored in AWS key Management Service (AWS KMS)

    • Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)

    Correct Answer
    A. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)
    Explanation
    The solution architect should recommend client-side encryption with a master key stored in AWS Key Management Service (AWS KMS) to satisfy the internal security compliance requirement of encrypting data before sending it to Amazon S3. This approach ensures that the sensitive user data is encrypted before it leaves the client's environment, providing an additional layer of security. The master key stored in AWS KMS allows for secure management and control of the encryption keys.
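    The usual pattern behind this answer is envelope encryption: request a data key from KMS, encrypt locally with the plaintext key, then store only the ciphertext plus the encrypted copy of the data key. The flow is sketched below as data only, with no real cryptography; the key ARN, metadata names, and cipher choice are placeholders.

```python
# Step 1: request a data key from KMS (shape of the GenerateDataKey call;
# the key ARN is a placeholder).
generate_data_key_request = {
    "KeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
    "KeySpec": "AES_256",  # KMS returns a Plaintext key and a CiphertextBlob
}

# Step 2: encrypt the object locally with the Plaintext key, then discard it.
# Step 3: upload the ciphertext, keeping the *encrypted* data key alongside it
# so it can be decrypted later via KMS. Metadata keys below are illustrative.
upload_metadata = {
    "x-amz-meta-encrypted-data-key": "<base64 CiphertextBlob from KMS>",
    "x-amz-meta-cipher": "AES-256-GCM",  # assumed cipher choice
}
print(generate_data_key_request["KeySpec"])
```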

  • 45. 

    A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows. What should a solutions architect recommend?

    • Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.

    • Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.

    • Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface

    • Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface

    Correct Answer
    A. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface
    Explanation
    The solution architect should recommend setting up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface. This solution allows the company to eliminate the use of physical backup tapes, reducing costs and simplifying the on-premises backup infrastructure. Additionally, it preserves the existing investment in the on-premises backup applications and workflows.

  • 46. 

    A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB storage space. The application is used infrequently, with peaks during mornings and evenings. Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned about costs and has asked a solutions architect to recommend the most cost-effective storage option that does not sacrifice performance. Which solution should the solutions architect recommend?

    • Amazon EBS Cold HDD (sc1)

    • Amazon EBS General Purpose SSD (gp2)

    • Amazon EBS Provisioned IOPS SSD (io1)

    • Amazon EBS Throughput Optimized HDD (st1)

    Correct Answer
    A. Amazon EBS General Purpose SSD (gp2)
    Explanation
    The solutions architect should recommend Amazon EBS General Purpose SSD (gp2) as the most cost-effective storage option that does not sacrifice performance. Although the application is used infrequently, it requires a maximum of 200 GB storage space and experiences peaks in disk I/O. General Purpose SSD (gp2) offers a balance between performance and cost, providing consistent performance for a wide range of workloads. It is suitable for applications with moderate I/O requirements, making it the appropriate choice in this scenario.

  • 47. 

    A company has global users accessing an application deployed in different AWS Regions, exposing public static IP addresses. The users are experiencing poor performance when accessing the application over the internet. What should a solutions architect recommend to reduce internet latency?

    • Set up AWS Global Accelerator and add endpoints

    • Set up AWS Direct Connect locations in multiple Regions

    • Set up an Amazon CloudFront distribution to access an application

    • Set up an Amazon Route 53 geoproximity routing policy to route traffic

    Correct Answer
    A. Set up AWS Global Accelerator and add endpoints
    Explanation
    To reduce internet latency for global users accessing the application deployed in different AWS Regions, a solutions architect should recommend setting up AWS Global Accelerator and adding endpoints. AWS Global Accelerator is a service that improves the performance and availability of applications by directing traffic to the nearest AWS edge location. By adding endpoints, the architect can distribute the traffic across multiple regions, reducing latency and improving the user experience. This solution ensures that users can access the application with better performance and reduced latency.

    Rate this question:

  • 48. 

    A Solutions Architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access. Which of the following would be the LEAST complicated implementation?

    • Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function to remove data that is older than 14 days

    • Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a query to Amazon DynamoDB.

    • Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL

    • Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 60 minutes for the URL and recreate the URL as necessary

    Correct Answer
    A. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL
    Explanation
    The correct answer is to use an Amazon CloudFront distribution with an OAI and configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. The application should set an expiration of 14 days for the URL. This implementation is the least complicated because it leverages the CloudFront content delivery network to improve performance and security. By using signed URLs, access to the content is controlled and limited to a specific time period. The expiration of 14 days ensures that users have access to the content for a limited time before being denied access.
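    For reference, a CloudFront signed URL with a custom policy carries the expiry as a `DateLessThan` / `AWS:EpochTime` condition in its policy document. A standard-library sketch of the 14-day policy; the content URL is a hypothetical example, and in a real deployment the policy must still be signed with a CloudFront key pair (e.g., via `botocore.signers.CloudFrontSigner`):

    ```python
    import json
    import time

    FOURTEEN_DAYS = 14 * 24 * 60 * 60  # 1,209,600 seconds

    def build_custom_policy(resource_url: str, now: int) -> str:
        """Build the CloudFront custom-policy JSON that expires 14 days after `now`."""
        policy = {
            "Statement": [{
                "Resource": resource_url,
                "Condition": {
                    "DateLessThan": {"AWS:EpochTime": now + FOURTEEN_DAYS},
                },
            }]
        }
        # CloudFront expects the policy serialized without extra whitespace.
        return json.dumps(policy, separators=(",", ":"))

    # Hypothetical premium-content URL behind the CloudFront distribution.
    policy_json = build_custom_policy(
        "https://d111111abcdef8.cloudfront.net/premium/report.pdf",
        int(time.time()),
    )
    print(policy_json)
    ```

    Setting the expiry once in the policy is what makes this option the least complicated: no Lambda cleanup job or DynamoDB lookup is needed, because the URL simply stops working after 14 days.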

    Rate this question:

  • 49. 

    A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes. What should the solutions architect do to accomplish this?

    • Enable AWS Config service with the appropriate rules

    • Enable AWS Trusted Advisor with the appropriate checks

    • Write a script using an AWS SDK to generate a bucket report

    • Enable Amazon S3 server access logging and configure Amazon CloudWatch Events

    Correct Answer
    A. Enable AWS Config service with the appropriate rules
    Explanation
    To accomplish the task of identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes, the solutions architect should enable the AWS Config service with the appropriate rules. AWS Config allows for continuous monitoring and recording of AWS resource configurations, including S3 buckets. By enabling AWS Config with the appropriate rules, the architect can ensure that any unauthorized access or configuration changes to the S3 buckets will be detected and recorded, helping to ensure the security and integrity of the company's data.
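    As one way to realize this, AWS Config ships managed rules that flag publicly readable or writable S3 buckets; the `SourceIdentifier` values below are real AWS-managed rule identifiers, while the rule names are illustrative assumptions. A sketch of the parameters one might pass to the boto3 `config` client's `put_config_rule` call:

    ```python
    # Parameters for the boto3 "config" client's put_config_rule call.
    # "open-s3-bucket-check-*" names are hypothetical; the SourceIdentifier
    # values are AWS-managed Config rules that detect publicly accessible
    # S3 buckets. AWS Config also records every configuration change it sees.
    config_rules = [
        {
            "ConfigRuleName": f"open-s3-bucket-check-{suffix}",
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
        for suffix, identifier in [
            ("read", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
            ("write", "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"),
        ]
    ]

    print([rule["Source"]["SourceIdentifier"] for rule in config_rules])
    ```

    Pairing these rules with AWS Config's configuration history covers both halves of the requirement: finding open buckets and recording configuration changes over time.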

    Rate this question:

Quiz Review Timeline (Updated): Mar 21, 2023

Our quizzes are rigorously reviewed, monitored and continuously updated by our expert board to maintain accuracy, relevance, and timeliness.

  • Current Version
  • Mar 21, 2023
    Quiz Edited by
    ProProfs Editorial Team
  • Aug 13, 2020
    Quiz Created by
    Siva Neelam