AWS-02

Approved & Edited by ProProfs Editorial Team
| By Siva Neelam
Questions: 87 | Attempts: 625


Questions and Answers
  • 1. 

    An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs. Which solution is the MOST cost-effective?

    • A.

      DEV with Spot Instances and PROD with On-Demand Instances

    • B.

      DEV with On-Demand Instances and PROD with Spot Instances

    • C.

      DEV with Scheduled Reserved Instances and PROD with Reserved Instances

    • D.

      DEV with On-Demand Instances and PROD with Scheduled Reserved Instances

    Correct Answer
    C. DEV with Scheduled Reserved Instances and PROD with Reserved Instances
    Explanation
    The most cost-effective solution is to use DEV with Scheduled Reserved Instances and PROD with Reserved Instances. This strategy allows for the utilization of reserved instances, which offer significant cost savings compared to on-demand instances. By using scheduled reserved instances for DEV, the instances can be run for a specific number of hours each day, aligning with the required 10-hour runtime. For PROD, running the instances 24/7 makes the use of reserved instances the most cost-effective option. This strategy optimizes costs by leveraging reserved instances for both environments while efficiently utilizing the instances based on their specific requirements.
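To see why this strategy wins, the arithmetic can be sketched with hypothetical hourly rates (the rates below are illustrative, not real AWS prices):

```python
# Hypothetical hourly rates for a single instance type (not real AWS prices):
ON_DEMAND = 0.10          # $/hour, pay as you go
RESERVED = 0.06           # $/hour effective, 24/7 commitment
SCHEDULED_RESERVED = 0.07 # $/hour effective, billed only for the scheduled window

HOURS_PER_MONTH = 730

# DEV runs ~10 hours per weekday (~22 business days per month)
dev_hours = 10 * 22
dev_on_demand = dev_hours * ON_DEMAND
dev_scheduled = dev_hours * SCHEDULED_RESERVED

# PROD runs 24/7, so a standard Reserved Instance covers every hour
prod_on_demand = HOURS_PER_MONTH * ON_DEMAND
prod_reserved = HOURS_PER_MONTH * RESERVED

print(f"DEV:  on-demand ${dev_on_demand:.2f} vs scheduled RI ${dev_scheduled:.2f}")
print(f"PROD: on-demand ${prod_on_demand:.2f} vs standard RI ${prod_reserved:.2f}")
```

As long as the reserved rates sit below the on-demand rate, both environments come out cheaper with the reservation that matches their usage pattern; Spot Instances are cheaper still but can be interrupted, which rules them out for PROD.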

  • 2. 

    A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a hierarchical directory structure. The applications need to rapidly and concurrently read and write to shared storage. How can this be achieved?

    • A.

      Create an Amazon EFS file system and mount it from each EC2 instance

    • B.

      Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC

    • C.

      Create a file system on an Amazon EBS Provisioned IOPS SSD (io1) volume. Attach the volume to all the EC2 instances

    • D.

      Create file systems on Amazon EBS volumes attached to each EC2 instance. Synchronize the Amazon EBS volumes across the different EC2 instances

    Correct Answer
    A. Create an Amazon EFS file system and mount it from each EC2 instance
    Explanation
    To achieve rapid and concurrent read and write access to shared storage, the best solution is to create an Amazon EFS (Elastic File System) file system and mount it from each EC2 instance. Amazon EFS provides a scalable and fully managed file storage service that can be easily shared across multiple instances. By mounting the EFS file system on each instance, the applications can access and modify the hierarchical directory structure concurrently and efficiently. This ensures consistent and reliable access to the shared storage for all instances in the VPC.

  • 3. 

    A solutions architect observes that a nightly batch processing job is automatically scaled up for 1 hour before the desired Amazon EC2 capacity is reached. The peak capacity is the same every night and the batch jobs always start at 1 AM. The solutions architect needs to find a cost-effective solution that will allow for the desired EC2 capacity to be reached quickly and allow the Auto Scaling group to scale down after the batch jobs are complete. What should the solutions architect do to meet these requirements?

    • A.

      Increase the minimum capacity for the Auto Scaling group

    • B.

      Increase the maximum capacity for the Auto Scaling group.

    • C.

      Configure scheduled scaling to scale up to the desired compute level

    • D.

      Change the scaling policy to add more EC2 instances during each scaling operation

    Correct Answer
    C. Configure scheduled scaling to scale up to the desired compute level
    Explanation
    To meet the requirements of reaching the desired EC2 capacity quickly and allowing the Auto Scaling group to scale down after batch jobs are complete, the solutions architect should configure scheduled scaling. By setting up a schedule, the Auto Scaling group can automatically scale up to the desired compute level before the batch jobs start at 1 AM every night. This ensures that the peak capacity is reached in a timely manner. Once the batch jobs are complete, the Auto Scaling group can then scale down, optimizing costs and resource utilization.
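A minimal sketch of the two scheduled actions, using made-up group names and capacities; with boto3, each dict could be passed to `put_scheduled_update_group_action(**action)`:

```python
# Hypothetical scheduled actions for the Auto Scaling group (names and
# capacities are placeholders). With boto3, each dict could be passed to
# autoscaling.put_scheduled_update_group_action(**action).
scale_up = {
    "AutoScalingGroupName": "batch-asg",
    "ScheduledActionName": "scale-up-for-batch",
    "Recurrence": "45 0 * * *",   # cron (UTC): 12:45 AM, ahead of the 1 AM start
    "DesiredCapacity": 20,        # the known nightly peak
}
scale_down = {
    "AutoScalingGroupName": "batch-asg",
    "ScheduledActionName": "scale-down-after-batch",
    "Recurrence": "0 3 * * *",    # assumes the batch jobs finish within ~2 hours
    "DesiredCapacity": 2,
}
```

Because the peak is identical every night and the start time is fixed, a cron-based schedule removes the hour of reactive ramp-up while keeping capacity low the rest of the day.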

  • 4. 

    A solutions architect must design a web application that will be hosted on AWS, allowing users to purchase access to premium, shared content that is stored in an S3 bucket. Upon payment, content will be available for download for 14 days before the user is denied access. Which of the following would be the LEAST complicated implementation?

    • A.

      Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design a Lambda function to remove data that is older than 14 days

    • B.

      Use an S3 bucket and provide direct access to the file. Design the application to track purchases in a DynamoDB table. Configure a Lambda function to remove data that is older than 14 days based on a query to Amazon DynamoDB.

    • C.

      Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL

    • D.

      Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 60 minutes for the URL and recreate the URL as necessary

    Correct Answer
    C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. Design the application to set an expiration of 14 days for the URL
    Explanation
    The correct answer is to use an Amazon CloudFront distribution with an OAI and configure the distribution with an Amazon S3 origin to provide access to the file through signed URLs. The application should set an expiration of 14 days for the URL. This implementation is the least complicated because it leverages the CloudFront content delivery network to improve performance and security. By using signed URLs, access to the content is controlled and limited to a specific time period. The expiration of 14 days ensures that users have access to the content for a limited time before being denied access.

  • 5. 

    A solutions architect is designing a mission-critical web application. It will consist of Amazon EC2 instances behind an Application Load Balancer and a relational database. The database should be highly available and fault tolerant. Which database implementations will meet these requirements? (Choose two.)

    • A.

      Amazon Redshift

    • B.

      Amazon DynamoDB

    • C.

      Amazon RDS for MySQL

    • D.

      MySQL-compatible Amazon Aurora Multi-AZ

    • E.

      Amazon RDS for SQL Server Standard Edition Multi-AZ

    Correct Answer(s)
    D. MySQL-compatible Amazon Aurora Multi-AZ
    E. Amazon RDS for SQL Server Standard Edition Multi-AZ
    Explanation
    The correct answer is MySQL-compatible Amazon Aurora Multi-AZ and Amazon RDS for SQL Server Standard Edition Multi-AZ.

    These two database implementations, MySQL-compatible Amazon Aurora Multi-AZ and Amazon RDS for SQL Server Standard Edition Multi-AZ, are designed to provide high availability and fault tolerance.

    Amazon Aurora Multi-AZ provides automatic failover to a standby replica in the event of a failure, ensuring that the database remains available even in the case of a hardware or software failure.

    Similarly, Amazon RDS for SQL Server Standard Edition Multi-AZ also provides high availability by automatically replicating the database to a standby instance in a different Availability Zone.

    By leveraging these two database implementations, the mission-critical web application can ensure that the database remains highly available and fault tolerant.

  • 6. 

    A company's web application is running on Amazon EC2 instances behind an Application Load Balancer. The company recently changed its policy, which now requires the application to be accessed from one specific country only. Which configuration will meet this requirement?

    • A.

      Configure the security group for the EC2 instances

    • B.

      Configure the security group on the Application Load Balancer

    • C.

      Configure AWS WAF on the Application Load Balancer in a VPC.

    • D.

      Configure the network ACL for the subnet that contains the EC2 instances.

    Correct Answer
    C. Configure AWS WAF on the Application Load Balancer in a VPC.
    Explanation
    AWS WAF supports geographic match rules, which can allow requests only when they originate from a specified country and block all others. Security groups and network ACLs filter traffic by IP address, protocol, and port, and have no awareness of geographic location, so they cannot enforce a country-level restriction on their own.

  • 7. 

    A company has an Amazon EC2 instance running in a private subnet that needs to access public websites to download patches and updates. The company does not want external websites to see the EC2 instance IP address or initiate connections to it. How can a solutions architect achieve this objective?

    • A.

      Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed

    • B.

      Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway

    • C.

      Create a network ACL for the private subnet where the EC2 instance is deployed that allows access only from the IP address range of the public website

    • D.

      Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.

    Correct Answer
    B. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway
    Explanation
    To achieve the objective of allowing the EC2 instance in the private subnet to access public websites without revealing its IP address or allowing incoming connections, a NAT gateway can be created in a public subnet. By routing outbound traffic from the private subnet through the NAT gateway, the EC2 instance's IP address is hidden from external websites. This ensures that only outbound connections are initiated from the EC2 instance, providing the desired level of security and privacy.
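The routing piece can be sketched as a single default route in the private subnet's route table (the IDs below are placeholders); with boto3 this dict could be passed to `ec2.create_route(**nat_route)`:

```python
# Hypothetical route-table entry sending the private subnet's internet-bound
# traffic through the NAT gateway (all IDs are placeholders). With boto3 this
# could be passed to ec2.create_route(**nat_route).
nat_route = {
    "RouteTableId": "rtb-0example",       # route table of the private subnet
    "DestinationCidrBlock": "0.0.0.0/0",  # all non-local traffic
    "NatGatewayId": "nat-0example",       # NAT gateway living in a public subnet
}
```

External sites then see only the NAT gateway's Elastic IP, and because NAT is stateful in one direction, unsolicited inbound connections never reach the instance.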

  • 8. 

    A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company's network bandwidth is limited to 15 Mbps and cannot exceed 70% utilization. What should a solutions architect do to meet these requirements?

    • A.

      Use AWS Snowball

    • B.

      Use AWS DataSync

    • C.

      Use a secure VPN connection

    • D.

      Use Amazon S3 Transfer Acceleration

    Correct Answer
    A. Use AWS Snowball
    Explanation
    AWS Snowball is a service that allows for the migration of large amounts of data to and from the AWS Cloud. It is specifically designed for situations where the network bandwidth is limited or the data size is too large to be transferred over the network within a reasonable time frame. In this scenario, with a limited network bandwidth of 15 Mbps, it would not be feasible to transfer 20 TB of data within 30 days. Therefore, using AWS Snowball, which physically transfers the data using a secure appliance, would be the most appropriate solution to meet the requirements.
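The infeasibility of a network transfer is easy to check with back-of-the-envelope arithmetic (decimal terabytes here; the exact constants don't change the conclusion):

```python
# Rough feasibility check: 20 TB over a 15 Mbps link capped at 70% utilization.
data_bits = 20 * 10**12 * 8         # 20 TB expressed in bits
usable_bps = 15 * 10**6 * 0.70      # 70% of 15 Mbps
days = data_bits / usable_bps / 86_400
print(f"~{days:.0f} days")          # roughly 176 days, far beyond the 30-day deadline
```

Since the link alone would need close to six months, a physical transfer appliance such as Snowball is the only option that fits the window.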

  • 9. 

    A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays, and wants to provide a consistent user experience. How can a solutions architect meet this requirement?

    • A.

      Use step scaling

    • B.

      Use simple scaling

    • C.

      Use lifecycle hooks

    • D.

      Use scheduled scaling.

    Correct Answer
    D. Use scheduled scaling.
    Explanation
    To meet the requirement of providing a consistent user experience during spikes in traffic on specific holidays, a solutions architect can use scheduled scaling. With scheduled scaling, the architect can configure the auto scaling group to automatically adjust the number of EC2 instances based on predefined schedules. This allows the architect to anticipate the spikes in traffic during holidays and scale up the resources accordingly, ensuring that the website can handle the increased load and provide a consistent user experience.

  • 10. 

    An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from the database that are causing performance slowdowns. Which action should be taken to improve the performance of the backend?

    • A.

      Implement Amazon SNS to store the database calls

    • B.

      Implement Amazon ElastiCache to cache the large datasets

    • C.

      Implement an RDS for MySQL read replica to cache database calls

    • D.

      Implement Amazon Kinesis Data Firehose to stream the calls to the database

    Correct Answer
    B. Implement Amazon ElastiCache to cache the large datasets
    Explanation
    Implementing Amazon ElastiCache to cache the large datasets can improve the performance of the backend. ElastiCache is an in-memory data store that can be used to cache frequently accessed data, reducing the need to fetch it from the database every time. By caching the large datasets, the backend tier can retrieve the data faster, resulting in improved performance and reduced latency. This solution is especially effective for identical datasets that are frequently accessed, as it eliminates the need to make repeated calls to the database.
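The pattern in play is cache-aside. A minimal sketch, with a plain dict standing in for ElastiCache (Redis or Memcached) and a stub function standing in for the RDS query:

```python
# Cache-aside sketch: `cache` stands in for ElastiCache, and fetch_from_db is
# a stand-in for the expensive RDS for MySQL query. Repeated identical
# requests are served from memory instead of hitting the database.
cache = {}
db_calls = 0

def fetch_from_db(query):
    global db_calls
    db_calls += 1                    # count how often the database is touched
    return f"result-for-{query}"     # pretend this is an expensive MySQL query

def get(query):
    if query not in cache:           # cache miss: go to the database once
        cache[query] = fetch_from_db(query)
    return cache[query]              # cache hit: served from memory

get("top-products"); get("top-products"); get("top-products")
print(db_calls)  # 1 — only the first call reached the database
```

A read replica (option C) spreads read load but still executes every query; only a cache eliminates the repeated work for identical result sets.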

  • 11. 

    A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost. How can these requirements be met?

    • A.

      Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload

    • B.

      Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally

    • C.

      Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3

    • D.

      Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.

    Correct Answer
    C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3
    Explanation
    To minimize bandwidth costs while keeping all data immediately retrievable at no additional cost, the best fit is AWS Storage Gateway with stored volumes. In the stored volume configuration the complete dataset remains on premises, so every read is served locally with no retrieval fees or internet transfer. Storage Gateway then asynchronously backs up point-in-time snapshots of the volumes to Amazon S3, providing durable off-site copies without affecting local access.

  • 12. 

    A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and then must remain immediately accessible for occasional analysis. What is the MOST cost-effective storage solution alternative to the current configuration?

    • A.

      Configure a lifecycle policy to delete the objects after 30 days

    • B.

      Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days

    • C.

      Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days

    • D.

      Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days

    Correct Answer
    D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
    Explanation
    Configuring a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days is the most cost-effective storage solution alternative. This is because S3 One Zone-IA offers lower storage costs compared to S3 Standard-IA, while still providing immediate access to the data. By transitioning the objects to S3 One Zone-IA, the company can save on storage costs without sacrificing accessibility for occasional analysis.
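Such a rule follows the shape S3 lifecycle configuration expects (the rule ID and prefix below are made up); with boto3 it could be applied via `put_bucket_lifecycle_configuration`:

```python
# A lifecycle rule of the shape S3 expects (rule ID and prefix are made up).
# With boto3 it could be applied via
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle).
lifecycle = {
    "Rules": [
        {
            "ID": "results-to-onezone-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": "daily-results/"},
            "Transitions": [
                # After the week of daily analysis, 30 days comfortably covers
                # the active period before moving to cheaper storage.
                {"Days": 30, "StorageClass": "ONEZONE_IA"}
            ],
        }
    ]
}
```

One Zone-IA is acceptable here because the objects are derived results that could be regenerated, so the reduced availability of a single AZ is a reasonable trade for the lower price.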

  • 13. 

    A company delivers files in Amazon S3 to certain users who do not have AWS credentials. These users must be given access for a limited time. What should a solutions architect do to securely meet these requirements?

    • A.

      Enable public access on an Amazon S3 bucket.

    • B.

      Generate a presigned URL to share with the users

    • C.

      Encrypt files using AWS KMS and provide keys to the users

    • D.

      Create and assign IAM roles that will grant GetObject permissions to the users

    Correct Answer
    B. Generate a presigned URL to share with the users
    Explanation
    To securely meet the requirements of providing limited access to users without AWS credentials, a solutions architect should generate a presigned URL to share with the users. A presigned URL is a time-limited URL that provides temporary access to specific objects in an S3 bucket. This allows the users to access the files without needing AWS credentials, while also ensuring that the access is limited to a specific time period. This approach provides a secure and controlled method for sharing files with external users.
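The mechanism can be illustrated with a toy HMAC-signed URL. This is a simplified sketch of the idea, not AWS's real Signature Version 4 scheme (in practice `boto3`'s `generate_presigned_url` does the signing):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"demo-secret"  # stands in for the signer's private credentials

def presign(path, expires_in, now=None):
    """Toy illustration of a time-limited signed URL (not AWS's real SigV4)."""
    expires = int(now if now is not None else time.time()) + expires_in
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify(url, now=None):
    """Accept only URLs whose signature matches and whose deadline hasn't passed."""
    path, query = url.split("?", 1)
    params = dict(p.split("=") for p in query.split("&"))
    msg = f"{path}?expires={params['expires']}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    unexpired = (now if now is not None else time.time()) < int(params["expires"])
    return hmac.compare_digest(good, params["sig"]) and unexpired

url = presign("/files/report.pdf", expires_in=3600, now=1_000_000)
print(verify(url, now=1_000_000 + 60))    # True: within the hour
print(verify(url, now=1_000_000 + 7200))  # False: past the expiry
```

Because the expiry is part of the signed message, a user cannot extend their own deadline by editing the URL: any change invalidates the signature.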

  • 14. 

    A company wants to run a hybrid workload for data processing. The data needs to be accessed by on-premises applications for local data processing using an NFS protocol, and must also be accessible from the AWS Cloud for further analytics and batch processing. Which solution will meet these requirements?

    • A.

      Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud

    • B.

      Use an AWS storage Gateway tape gateway to copy the backup of the local data to AWS, then perform analytics on this data in the AWS cloud

    • C.

      Use an AWS Storage Gateway volume gateway in a stored volume configuration to regularly take snapshots of the local data, then copy the data to AWS

    • D.

      Use an AWS Storage Gateway volume gateway in a cached volume configuration to back up all the local storage in the AWS cloud, then perform analytics on this data in the cloud

    Correct Answer
    A. Use an AWS Storage Gateway file gateway to provide file storage to AWS, then perform analytics on this data in the AWS Cloud
    Explanation
    The correct solution is to use an AWS Storage Gateway file gateway to provide file storage to AWS and then perform analytics on this data in the AWS Cloud. This solution allows the company to access the data from on-premises applications for local data processing using an NFS protocol, while also making the data accessible from the AWS Cloud for further analytics and batch processing. The file gateway provides a seamless integration between on-premises and cloud storage, allowing the company to leverage the benefits of both environments for their hybrid workload.

  • 15. 

    A company plans to store sensitive user data on Amazon S3. Internal security compliance requirements mandate encryption of data before sending it to Amazon S3. What should a solutions architect recommend to satisfy these requirements?

    • A.

      Server-side encryption with customer-provided encryption keys

    • B.

      Client-side encryption with Amazon S3 managed encryption keys

    • C.

      Server-side encryption with keys stored in AWS key Management Service (AWS KMS)

    • D.

      Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)

    Correct Answer
    D. Client-side encryption with a master key stored in AWS Key Management Service (AWS KMS)
    Explanation
    The solution architect should recommend client-side encryption with a master key stored in AWS Key Management Service (AWS KMS) to satisfy the internal security compliance requirement of encrypting data before sending it to Amazon S3. This approach ensures that the sensitive user data is encrypted before it leaves the client's environment, providing an additional layer of security. The master key stored in AWS KMS allows for secure management and control of the encryption keys.

  • 16. 

    A solutions architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted. Which combination of steps will meet these requirements? (Choose two.)

    • A.

      Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects

    • B.

      Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution

    • C.

      Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution

    • D.

      Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content.

    Correct Answer(s)
    A. Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects
    B. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution
    Explanation
    To meet the requirements of restricting access to the static content, the architect should create an origin access identity (OAI) and associate it with the CloudFront distribution. By changing the permissions in the bucket policy to only allow the OAI to read the objects, access to the static content is limited. Additionally, the architect should create an AWS WAF web ACL that includes the same IP restrictions as the EC2 security group. By associating this web ACL with the CloudFront distribution, the IP restrictions are enforced and access to the static assets is further restricted.
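The OAI half of the answer comes down to a bucket policy of this shape (the OAI ID and bucket name below are made up):

```python
# A bucket policy of the shape S3 expects, granting read access only to the
# CloudFront origin access identity. The OAI ID and bucket name are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/"
                       "CloudFront Origin Access Identity EXAMPLEID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-static-content/*",
        }
    ],
}
```

With only the OAI allowed to read the bucket, every request must flow through CloudFront, where the WAF web ACL's IP restrictions are enforced; this is why security groups (options C and D) don't apply, since neither CloudFront distributions nor S3 buckets accept security groups.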

  • 17. 

    A company is investigating potential solutions that would collect, process, and store users' service usage data. The business objective is to create an analytics capability that will enable the company to gather operational insights quickly using standard SQL queries. The solution should be highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the data tier. Which solution should a solutions architect recommend?

    • A.

      Use Amazon DynamoDB transactions

    • B.

      Create an Amazon Neptune database in a Multi AZ design

    • C.

      Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design

    • D.

      Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon EBS Throughput Optimized HDD storage

    Correct Answer
    C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design
    Explanation
    The recommended solution is to use a fully managed Amazon RDS for MySQL database in a Multi-AZ design. This solution ensures high availability and ACID compliance in the data tier. Amazon RDS for MySQL is a managed database service that handles routine tasks like backups, software patching, and automatic failure detection and recovery. The Multi-AZ design provides redundancy by automatically replicating data to a standby instance in a different Availability Zone. This design ensures that data is protected and available even in the event of a failure.

  • 18. 

    A company recently launched its website to serve content to its global user base. The company wants to store and accelerate the delivery of static content to its users by leveraging Amazon CloudFront with an Amazon EC2 instance attached as its origin. How should a solutions architect optimize high availability for the application?

    • A.

      Use Lambda@Edge for CloudFront

    • B.

      Use Amazon S3 Transfer Acceleration for CloudFront.

    • C.

      Configure another EC2 instance in a different Availability Zone as part of the origin group

    • D.

      Configure another EC2 instance as part of the origin server cluster in the same Availability Zone.

    Correct Answer
    A. Use Lambda@Edge for CloudFront
    Explanation
    Using Lambda@Edge for CloudFront allows for the execution of custom code at AWS edge locations, which helps optimize the delivery of content to users. This can include modifying responses, making decisions based on user requests, or implementing additional security measures. By leveraging Lambda@Edge, the company can enhance the availability and performance of its website by customizing the content delivery process according to the specific needs of its global user base.

  • 19. 

    An application running on an Amazon EC2 instance in VPC-A needs to access files on another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to enable secure access to the EC2 instance in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns. Which solution will meet these requirements?

    • A.

      Set up a VPC peering connection between VPC-A and VPC-B.

    • B.

      Set up VPC gateway endpoints for the EC2 instance running in VPC-B.

    • C.

      Attach a virtual private gateway to VPC-B and enable routing from VPC-A.

    • D.

      Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-B.

    Correct Answer
    A. Set up a VPC peering connection between VPC-A and VPC-B.
    Explanation
    Setting up a VPC peering connection between VPC-A and VPC-B will meet the requirements of secure access without a single point of failure or bandwidth concerns. VPC peering allows communication between instances in different VPCs using private IP addresses, without the need for internet gateways, VPN connections, or NAT devices. It provides a secure and reliable connection between the two VPCs, ensuring that the application running in VPC-A can access files in the EC2 instance in VPC-B.

  • 20. 

    A company currently stores symmetric encryption keys in a hardware security module (HSM). A solutions architect must design a solution to migrate key management to AWS. The solution should allow for key rotation and support the use of customer-provided keys. Where should the key material be stored to meet these requirements?

    • A.

      Amazon S3

    • B.

      AWS Secrets Manager

    • C.

      AWS Systems Manager Parameter store

    • D.

      AWS Key Management Service (AWS KMS)

    Correct Answer
    D. AWS Key Management Service (AWS KMS)
    Explanation
    The AWS Key Management Service (AWS KMS) is the appropriate service to store the key material in order to meet the requirements of key rotation and support for customer-provided keys. AWS KMS is a managed service that allows for the creation and control of encryption keys. It provides features such as key rotation, which allows for the automatic generation of new keys to enhance security. Additionally, AWS KMS supports importing customer-provided key material, allowing the company to retain full control over its encryption keys.

  • 21. 

    A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows. What should a solutions architect recommend?

    • A.

      Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.

    • B.

      Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.

    • C.

      Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface

    • D.

      Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface

    Correct Answer
    D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface
    Explanation
    The solution architect should recommend setting up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface. This solution allows the company to eliminate the use of physical backup tapes, reducing costs and simplifying the on-premises backup infrastructure. Additionally, it preserves the existing investment in the on-premises backup applications and workflows.

  • 22. 

    A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB storage space. The application is used infrequently, with peaks during mornings and evenings. Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned about costs and has asked a solutions architect to recommend the most cost-effective storage option that does not sacrifice performance. Which solution should the solutions architect recommend?

    • A.

      Amazon EBS Cold HDD (sc1)

    • B.

      Amazon EBS General Purpose SSD (gp2)

    • C.

      Amazon EBS Provisioned IOPS SSD (io1)

    • D.

      Amazon EBS Throughput Optimized HDD (st1)

    Correct Answer
    B. Amazon EBS General Purpose SSD (gp2)
    Explanation
    The solutions architect should recommend Amazon EBS General Purpose SSD (gp2) as the most cost-effective storage option that does not sacrifice performance. A 200 GB gp2 volume has a baseline of 600 IOPS (3 IOPS per GB) and can burst to 3,000 IOPS, which matches the workload's peak. Because the application is used infrequently, the volume accumulates I/O credits between the morning and evening peaks, so the bursts can be sustained without paying for Provisioned IOPS. This makes gp2 the appropriate balance of cost and performance for this scenario.
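The arithmetic behind this choice can be sketched from the published gp2 model (a baseline of 3 IOPS per GiB with a 100 IOPS floor, and bursting to 3,000 IOPS for volumes of roughly 1 TiB or less):

```python
# Sketch of gp2 baseline/burst IOPS arithmetic, per AWS's gp2 model:
# 3 IOPS per GiB of baseline, a floor of 100 IOPS, and bursting to
# 3,000 IOPS via I/O credits for volumes up to ~1 TiB.

def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS for a gp2 volume of the given size."""
    return max(100, 3 * size_gib)

def gp2_can_burst_to_3000(size_gib: int) -> bool:
    """Volumes at or below ~1 TiB can burst to 3,000 IOPS using I/O credits."""
    return size_gib <= 1000

size = 200  # GiB required by the application
print(gp2_baseline_iops(size))      # 600 baseline IOPS
print(gp2_can_burst_to_3000(size))  # True: bursts cover the 3,000 IOPS peaks
```

The infrequent usage pattern is what makes the burst model work: credits refill at the baseline rate between the morning and evening peaks.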


  • 23. 

    A company's application hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Due to data sensitivity, traffic cannot traverse the internet. How should a solutions architect configure access?

    • A.

      Create a private hosted zone using Amazon Route 53

    • B.

      Configure a VPC gateway endpoint for Amazon S3 in the VPC.

    • C.

      Configure AWS PrivateLink between the EC2 instance and the S3 bucket.

    • D.

      Set up a site-to-site VPN connection between the VPC and the S3 bucket.

    Correct Answer
    B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.
    Explanation
    To ensure that the company's application can access the Amazon S3 bucket without traffic traversing the internet, a solutions architect should configure a VPC gateway endpoint for Amazon S3 in the VPC. This allows the application to connect directly to the S3 bucket within the VPC, without needing to go over the internet. This ensures a secure and private connection for accessing the sensitive data in the S3 bucket.


  • 24. 

    A company has two applications it wants to migrate to AWS. Both applications process a large set of files by accessing the same files at the same time. Both applications need to read the files with low latency. Which architecture should a solutions architect recommend for this situation?

    • A.

      Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data.

    • B.

      Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data.

    • C.

      Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data

    • D.

      Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data

    Correct Answer
    D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data
    Explanation
    The recommended architecture is to configure two Amazon EC2 instances to run both applications and to configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data. This architecture allows both applications to access the same files at the same time with low latency. Amazon EFS provides a scalable file storage system that can handle concurrent access from multiple instances, making it suitable for this scenario. The General Purpose performance mode ensures low latency for file access, and Bursting Throughput mode allows for bursts of high throughput when needed.


  • 25. 

    An ecommerce company has noticed performance degradation of its Amazon RDS-based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application. What should the solutions architect recommend?

    • A.

      Export the data to Amazon DynamoDB and have the business analysts run their queries

    • B.

      Load the data into Amazon ElastiCache and have the business analysts run their queries

    • C.

      Create a read replica of the primary database and have the business analysts run their queries.

    • D.

      Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.

    Correct Answer
    C. Create a read replica of the primary database and have the business analysts run their queries.
    Explanation
    The solution architect should recommend creating a read replica of the primary database and having the business analysts run their queries on it. This solution allows the business analysts to perform their read-only queries without impacting the performance of the primary database. By offloading the read workload to the read replica, the web application's performance degradation can be minimized, and the existing architecture can remain largely unchanged.


  • 26. 

    A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest. Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?

    • A.

      Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume

    • B.

      Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume

    • C.

      Configure SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt database volumes

    • D.

      Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes

    Correct Answer
    D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes
    Explanation
    To meet the compliance regulations and encrypt personally identifiable information (PII) at rest, the recommended solution is to configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys. This solution ensures that both the instance and database volumes are encrypted using AWS KMS keys, providing a secure environment for the highly sensitive application. It requires the least amount of changes to the existing infrastructure while meeting the encryption requirement.


  • 27. 

    A company running an on-premises application is migrating the application to AWS to increase its elasticity and availability. The current architecture uses a Microsoft SQL Server database with heavy read activity. The company wants to explore alternate database options and migrate database engines, if needed. Every 4 hours, the development team does a full copy of the production database to populate a test database. During this period, users experience latency. What should a solutions architect recommend as a replacement database?

    • A.

      Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database

    • B.

      Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database

    • C.

      Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database

    • D.

      Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database

    Correct Answer
    D. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database
    Explanation
    The solution architect should recommend using Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database. This option provides high availability and scalability by using Multi-AZ deployment and read replicas. It also allows for easy restoration of the test database from snapshots, minimizing the impact on users during the copy process.


  • 28. 

    A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for each of its developer accounts. The company has created a central AWS account for streamlining management and audit reviews. An internal auditor needs to access the CloudTrail logs, yet access needs to be restricted for all developer account users. The solution must be secure and optimized. How should a solutions architect meet these requirements?

    • A.

      Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket

    • B.

      Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket

    • C.

      Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.

    • D.

      Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket

    Correct Answer
    A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket
    Explanation
    The correct answer is to configure an AWS Lambda function in each developer account to copy the log files to the central account. This solution ensures that the CloudTrail logs from each developer account are securely and efficiently transferred to the central account. By creating an IAM role in the central account for the auditor and attaching an IAM policy with read-only permissions to the bucket, the auditor can access the logs without granting unnecessary access to the developer account users. This solution meets the requirements of providing secure and optimized access to the CloudTrail logs.


  • 29. 

    A company has several business systems that require access to data stored in a file share. The business systems will access the file share using the Server Message Block (SMB) protocol. The file share solution should be accessible from both the company's legacy on-premises environment and AWS. Which services meet the business requirements? (Choose two.)

    • A.

      Amazon EBS

    • B.

      Amazon EFS

    • C.

      Amazon FSx for Windows

    • D.

      Amazon S3

    • E.

      AWS Storage Gateway file gateway

    Correct Answer(s)
    C. Amazon FSx for Windows
    E. AWS Storage Gateway file gateway
    Explanation
    The company's business systems require access to data stored in a file share using the Server Message Block (SMB) protocol. To meet this requirement, the company can use Amazon FSx for Windows, which provides fully managed Windows file servers that are accessible over the SMB protocol. Additionally, the company can also use AWS Storage Gateway file gateway, which is a hybrid cloud storage service that enables on-premises applications to seamlessly use AWS cloud storage, including file storage.


  • 30. 

    A company is using Amazon EC2 to run its big data analytics workloads. These variable workloads run each night, and it is critical they finish by the start of business the following day. A solutions architect has been tasked with designing the MOST cost-effective solution. Which solution will accomplish this?

    • A.

      Spot Fleet

    • B.

      Spot Instances

    • C.

      Reserved Instances

    • D.

      On-Demand Instances

    Correct Answer
    C. Reserved Instances
    Explanation
    Reserved Instances are the most cost-effective solution for running variable workloads that have a predictable schedule. By purchasing Reserved Instances, the company can commit to using a specific instance type in a specific region for a one or three-year term, which provides a significant discount compared to On-Demand Instances. This allows the company to save costs while ensuring the availability of the required resources for their big data analytics workloads. Spot Instances may provide even greater cost savings, but they are not suitable for workloads that have strict time constraints and need to finish by a specific time.


  • 31. 

    A company has a Microsoft Windows-based application that must be migrated to AWS. This application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances. What should a solution architect do to accomplish this?

    • A.

      Configure a volume using Amazon EFS. Mount the EFS volume to each Windows instance.

    • B.

      Configure AWS Storage Gateway in Volume Gateway mode. Mount the volume to each Windows Instance

    • C.

      Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance

    • D.

      Configure an Amazon EBS volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.

    Correct Answer
    C. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows Instance
    Explanation
    To accomplish the migration of the Microsoft Windows-based application to AWS with a shared Windows file system, the solution architect should configure Amazon FSx for Windows File Server. This service provides a fully managed native Windows file system that is accessible from multiple Amazon EC2 Windows instances. By mounting the Amazon FSx volume to each Windows instance, the application can continue to use the shared file system seamlessly. This option is the most appropriate and efficient solution for the given scenario.


  • 32. 

    A company has created an isolated backup of its environment in another Region. The application is running in warm standby mode and is fronted by an Application Load Balancer (ALB). The current failover process is manual and requires updating a DNS alias record to point to the secondary ALB in another Region. What should a solution architect do to automate the failover process?

    • A.

      Enable an ALB health check

    • B.

      Enable an Amazon Route 53 health check

    • C.

      Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint.

    • D.

      Create conditional forwarding rules on Amazon Route 53 pointing to an internal BIND DNS server

    Correct Answer
    C. Create a CNAME record on Amazon Route 53 pointing to the ALB endpoint.
    Explanation
    To automate the failover process, a solution architect should create a CNAME record on Amazon Route 53 pointing to the ALB endpoint. This allows the DNS alias record to be updated automatically, directing traffic to the secondary ALB in another Region. By using a CNAME record, the failover process can be seamlessly automated without the need for manual updates.


  • 33. 

    A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes. Which method should the solutions architect select?

    • A.

      Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint

    • B.

      Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.

    • C.

      Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint

    • D.

      Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.

    Correct Answer
    A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint
    Explanation
    Adding Amazon DynamoDB Accelerator (DAX) to the mobile chat application's data store can significantly reduce the latency for reading new messages. By configuring DAX for the new messages table and updating the code to use the DAX endpoint, the application can benefit from the in-memory caching provided by DAX. This allows for faster access to frequently accessed data, improving the overall performance of the application without requiring major changes to the existing codebase.
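The caching effect DAX provides can be illustrated with a toy read-through cache. This is only a model of the access pattern, not the DAX API (DAX itself is a managed, DynamoDB-API-compatible cache):

```python
# Illustrative read-through cache, mimicking the effect DAX has in front of
# DynamoDB: repeated reads of hot items are served from memory instead of
# hitting the table.

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}
        self.backend_reads = 0

    def get(self, key):
        if key not in self.cache:   # cache miss: fetch from the table
            self.backend_reads += 1
            self.cache[key] = self.backing_store[key]
        return self.cache[key]      # cache hit: served from memory

table = {"msg-1": "hello", "msg-2": "world"}
dax = ReadThroughCache(table)
for _ in range(100):                # 100 reads of the same hot message
    dax.get("msg-1")
print(dax.backend_reads)            # 1 -- only the first read hits the table
```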


  • 34. 

    A company is creating an architecture for a mobile app that requires minimal latency for its users. The company's architecture consists of Amazon EC2 instances behind an Application Load Balancer running in an Auto Scaling group. The EC2 instances connect to Amazon RDS. Application beta testing showed there was a slowdown when reading the data. However, the metrics indicate that the EC2 instances do not cross any CPU utilization thresholds. How can this issue be addressed?

    • A.

      Reduce the threshold for CPU utilization in the Auto Scaling group

    • B.

      Replace the Application Load Balancer with a Network Load Balancer

    • C.

      Add read replicas for the RDS instances and direct read traffic to the replica

    • D.

      Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance.

    Correct Answer
    C. Add read replicas for the RDS instances and direct read traffic to the replica
    Explanation
    To address the slowdown in reading data while minimizing latency, the company should add read replicas for the RDS instances and direct read traffic to the replica. By adding read replicas, the workload can be distributed across multiple instances, reducing the load on the main RDS instance and improving read performance. This solution is more effective than reducing the CPU utilization threshold or replacing the load balancer. Adding Multi-AZ support to the RDS instances would improve availability but may not directly address the latency issue.
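The read/write split a read replica enables can be sketched as a small routing rule; the endpoint names below are hypothetical placeholders:

```python
# Minimal sketch of the read/write split a read replica enables: writes go to
# the primary endpoint, reads go to the replica endpoint. Endpoint names are
# hypothetical placeholders.

PRIMARY = "mydb.cluster-abc.us-east-1.rds.amazonaws.com"
REPLICA = "mydb-replica.abc.us-east-1.rds.amazonaws.com"

def endpoint_for(statement: str) -> str:
    """Send SELECTs to the replica; everything else to the primary."""
    is_read = statement.lstrip().upper().startswith("SELECT")
    return REPLICA if is_read else PRIMARY

print(endpoint_for("SELECT * FROM orders"))       # replica handles reads
print(endpoint_for("UPDATE orders SET id = 1"))   # primary handles writes
```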


  • 35. 

    A company runs a website on Amazon EC2 instances behind an ELB Application Load Balancer. Amazon Route 53 is used for the DNS. The company wants to set up a backup website with a message including a phone number and email address that users can reach if the primary website is down. How should the company deploy this solution?

    • A.

      Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.

    • B.

      Use Amazon S3 website hosting for the backup website and Route 53 latency routing policy.

    • C.

      Deploy the application in another AWS Region and use ELB health checks for failover routing.

    • D.

      Deploy the application in another AWS Region and use server-side redirection on the primary website

    Correct Answer
    A. Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy.
    Explanation
    The company should use Amazon S3 website hosting for the backup website and Route 53 failover routing policy. This solution allows the company to host the backup website on Amazon S3, which provides high availability and durability. Route 53's failover routing policy ensures that traffic is directed to the backup website if the primary website is down. This setup allows users to reach the backup website and contact the company through the provided phone number and email address.
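The failover routing behavior can be modeled in a few lines; the record values below are hypothetical:

```python
# Toy model of a Route 53 failover routing policy: traffic resolves to the
# primary record while its health check passes, and to the secondary (the
# static S3 website with the contact message) when it fails.

def resolve(primary_healthy: bool) -> str:
    primary = "alb-prod.us-east-1.elb.amazonaws.com"
    secondary = "backup-site.s3-website-us-east-1.amazonaws.com"
    return primary if primary_healthy else secondary

print(resolve(True))    # normal operation: ALB serves the site
print(resolve(False))   # outage: S3 static site serves the contact message
```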


  • 36. 

    A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is no longer in use. Which set of services should a solutions architect recommend to meet these requirements?

    • A.

      Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

    • B.

      Amazon EBS for maximum performance. Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage

    • C.

      Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage

    • D.

      Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

    Correct Answer
    A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
    Explanation
    The recommended set of services includes Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage. Amazon EBS provides high-performance block storage for the company's systems, ensuring fast I/O performance for video processing. Amazon S3 offers durable storage for media content, ensuring that the data remains intact and accessible. Lastly, Amazon S3 Glacier provides long-term archival storage for media that is no longer in use, meeting the company's requirements for storing large amounts of data in a cost-effective manner.


  • 37. 

    A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented, or are implemented only partially, resulting in data being stored in high-cost storage. Which solution will lower costs without compromising the availability of objects?

    • A.

      Use S3 ACLs.

    • B.

      Use Amazon Elastic Block Store (EBS) automated snapshots

    • C.

      Use S3 Intelligent-Tiering storage

    • D.

      Use S3 One Zone-Infrequent Access (S3 One Zone-IA).

    Correct Answer
    C. Use S3 Intelligent-Tiering storage
    Explanation
    Using S3 Intelligent-Tiering storage will lower costs without compromising the availability of objects. This storage class automatically moves objects between two access tiers: frequent access and infrequent access. It monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier, which has a lower storage cost. If the objects are accessed again, they are automatically moved back to the frequent access tier. This ensures that less frequently accessed data is stored in a lower-cost storage tier while still being readily available when needed.
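The tiering rule described above reduces to a simple threshold; in the real service the transition is automatic, so this is only a sketch of the decision:

```python
# Sketch of the S3 Intelligent-Tiering rule: objects with no access for
# 30 consecutive days move to the infrequent access tier, and any access
# moves them back to frequent access.

def tier_for(days_since_last_access: int) -> str:
    if days_since_last_access >= 30:
        return "INFREQUENT_ACCESS"  # lower storage cost, same availability
    return "FREQUENT_ACCESS"

print(tier_for(5))    # FREQUENT_ACCESS
print(tier_for(45))   # INFREQUENT_ACCESS
```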


  • 38. 

    An application is running on Amazon EC2 instances. Sensitive information required for the application is stored in an Amazon S3 bucket. The bucket needs to be protected from internet access while only allowing services within the VPC access to the bucket. Which combination of actions should a solutions architect take to accomplish this? (Choose two.)

    • A.

      Create a VPC endpoint for Amazon S3

    • B.

      Enable server access logging on the bucket

    • C.

      Apply a bucket policy to restrict access to the S3 endpoint

    • D.

      Add an S3 ACL to the bucket that has sensitive information

    • E.

      Restrict users using the IAM policy to use the specific bucket

    Correct Answer(s)
    A. Create a VPC endpoint for Amazon S3
    C. Apply a bucket policy to restrict access to the S3 endpoint
    Explanation
    To protect the Amazon S3 bucket from internet access and only allow access from services within the VPC, two actions should be taken. First, a VPC endpoint for Amazon S3 should be created. This allows communication between the VPC and the S3 bucket without going over the internet. Second, a bucket policy should be applied to restrict access to the S3 endpoint. This policy can specify which services or resources within the VPC are allowed to access the bucket, ensuring that only authorized entities can access the sensitive information.
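As an illustration of the second action, a bucket policy can deny any request that does not arrive through the endpoint using the `aws:sourceVpce` condition key. The bucket name and endpoint ID below are hypothetical placeholders:

```python
# Sketch of a bucket policy that denies requests not arriving through the VPC
# gateway endpoint. The aws:sourceVpce condition key matches the endpoint ID;
# the bucket name and endpoint ID here are hypothetical.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessOutsideVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-sensitive-bucket",
            "arn:aws:s3:::example-sensitive-bucket/*",
        ],
        # Deny everything except traffic through the gateway endpoint
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
print(json.dumps(policy, indent=2))
```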


  • 39. 

    A web application runs on Amazon EC2 instances behind an Application Load Balancer. The application allows users to create custom reports of historical weather data. Generating a report can take up to 5 minutes. These long-running requests use many of the available incoming connections, making the system unresponsive to other users. How can a solutions architect make the system more responsive?

    • A.

      Use Amazon SQS with AWS Lambda to generate reports

    • B.

      Increase the idle timeout on the Application Load Balancer to 5 minutes

    • C.

      Update the client-side application code to increase its request timeout to 5 minutes

    • D.

      Publish the reports to Amazon S3 and use Amazon CloudFront for downloading to the user.

    Correct Answer
    A. Use Amazon SQS with AWS Lambda to generate reports
    Explanation
    By using Amazon SQS with AWS Lambda to generate reports, the long-running requests can be offloaded from the web application and processed asynchronously. This means that the web application can quickly respond to other users' requests, making the system more responsive. SQS acts as a buffer, storing the requests until they can be processed by the Lambda function. This solution allows for scalability and ensures that the system can handle a large number of requests without becoming unresponsive.
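The decoupling can be modeled with the standard library's queue standing in for SQS, and a worker function standing in for Lambda; only the pattern is shown, not the AWS APIs:

```python
# Toy model of the decoupling above: the web tier enqueues report requests and
# returns immediately; a worker (standing in for the Lambda function) drains
# the queue asynchronously. Uses the stdlib queue rather than real SQS.
import queue

report_queue = queue.Queue()

def handle_web_request(report_id: str) -> str:
    report_queue.put(report_id)           # enqueue; don't hold the connection
    return f"202 Accepted: {report_id}"   # respond right away

def worker_drain() -> list:
    done = []
    while not report_queue.empty():       # Lambda would be invoked per message
        done.append(report_queue.get())
    return done

print(handle_web_request("weather-2019"))
print(worker_drain())
```

Because the web tier answers immediately, incoming connections are freed for other users while reports are generated in the background.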


  • 40. 

    A solutions architect must create a highly available bastion host architecture. The solution needs to be resilient within a single AWS Region and should require only minimal effort to maintain. What should the solutions architect do to meet these requirements?

    • A.

      Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener

    • B.

      Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group

    • C.

      Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target

    • D.

      Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target

    Correct Answer
    D. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target
    Explanation
    To create a highly available bastion host architecture, the solutions architect should use a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target. This setup ensures that the bastion host is distributed across multiple zones, providing resilience within a single AWS Region. Additionally, using Auto Scaling allows for automatic scaling of the bastion host based on demand, reducing the effort required for maintenance.


  • 41. 

    A three-tier web application processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer, a middle tier of three EC2 instances decoupled from the web tier using Amazon SQS. and an Amazon DynamoDB backend. At peak times, customers who submit orders using the site have to wait much longer than normal to receive confirmations due to lengthy processing times. A solutions architect needs to reduce these processing times. Which action will be MOST effective in accomplishing this?

    • A.

      Replace the SQS queue with Amazon Kinesis Data Firehose

    • B.

      Use Amazon ElastiCache for Redis in front of the DynamoDB backend tier

    • C.

      Add an Amazon CloudFront distribution to cache the responses for the web tier

    • D.

      Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth

    Correct Answer
    D. Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS queue depth
    Explanation
    Adding more instances to the middle tier using Amazon EC2 Auto Scaling based on the SQS queue depth will be the most effective action to reduce processing times. By scaling out the middle tier, the system can handle a higher volume of incoming orders and process them more quickly. This will help to alleviate the bottleneck and reduce the wait times for customers receiving order confirmations.
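A common way to implement this is backlog-per-instance target tracking: desired capacity is the visible queue depth divided by the number of messages one instance can work through in a scaling interval. The throughput figure and bounds below are hypothetical examples:

```python
# Sketch of backlog-per-instance scaling on SQS queue depth:
# desired capacity = ceil(visible messages / messages one instance can
# process per scaling interval), clamped to the group's min/max.
import math

def desired_capacity(queue_depth: int, msgs_per_instance: int,
                     minimum: int = 3, maximum: int = 30) -> int:
    desired = math.ceil(queue_depth / msgs_per_instance)
    return max(minimum, min(maximum, desired))

print(desired_capacity(1200, 100))  # 12 instances for a 1,200-message backlog
print(desired_capacity(50, 100))    # 3 -- never scales below the minimum
```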


  • 42. 

    A company relies on an application that needs at least 4 Amazon EC2 instances during regular traffic and must scale up to 12 EC2 instances during peak loads. The application is critical to the business and must be highly available. Which solution will meet these requirements?

    • A.

      Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with 2 in Availability Zone A and 2 in Availability Zone B.

    • B.

      Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with all 4 in Availability Zone A.

    • C.

      Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with 4 in Availability Zone A and 4 in Availability Zone B.

    • D.

      Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 8 and the maximum to 12, with 4 in Availability Zone A and 4 in Availability Zone B.

    Correct Answer
    A. Deploy the EC2 instances in an Auto Scaling group. Set the minimum to 4 and the maximum to 12, with 2 in Availability Zone A and 2 in Availability Zone B.
    Explanation
    Deploying the EC2 instances in an Auto Scaling group with a minimum of 4 and a maximum of 12, with 2 instances in Availability Zone A and 2 instances in Availability Zone B, will meet the requirements. This configuration ensures that the application has at least 4 instances during regular traffic, providing the necessary capacity. During peak loads, the Auto Scaling group will automatically scale up to a maximum of 12 instances, allowing the application to handle the increased demand. Distributing the instances across Availability Zones also improves the availability of the application, as it can continue to operate even if one Availability Zone experiences issues.


  • 43. 

    A solutions architect must design a solution for a persistent database that is being migrated from on-premises to AWS. The database requires 64,000 IOPS according to the database administrator. If possible, the database administrator wants to use a single Amazon Elastic Block Store (Amazon EBS) volume to host the database instance. Which solution effectively meets the database administrator's criteria?

    • A.

      Use an instance from the I3 I/O optimized family and leverage local ephemeral storage to achieve the IOPS requirement

    • B.

Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS.

    • C.

      Create and map an Amazon Elastic File System (Amazon EFS) volume to the database instance and use the volume to achieve the required IOPS for the database.

    • D.

      Provision two volumes and assign 32,000 IOPS to each. Create a logical volume at the operating system level that aggregates both volumes to achieve the IOPS requirements.

    Correct Answer
B. Create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS.
    Explanation
The correct solution is to create a Nitro-based Amazon EC2 instance with an Amazon EBS Provisioned IOPS SSD (io1) volume attached and configure the volume to have 64,000 IOPS. This solution meets the criteria of the database administrator by providing the required IOPS for the database on a single volume. Nitro-based instances are optimized for high performance and can handle the workload efficiently. The use of Provisioned IOPS SSD ensures consistent and predictable performance for the database.
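A quick way to sanity-check the io1 sizing: io1 volumes allow up to 50 provisioned IOPS per GiB, and the 64,000 IOPS per-volume ceiling requires a Nitro-based instance. A minimal sketch of that arithmetic, assuming the limits as documented at the time of writing:

```python
import math

IO1_MAX_IOPS_NITRO = 64_000   # per-volume io1 ceiling on Nitro-based instances
IO1_IOPS_PER_GIB = 50         # maximum provisioned IOPS-to-size ratio for io1

def min_io1_size_gib(iops: int) -> int:
    """Smallest io1 volume size (GiB) that can support the requested IOPS."""
    if iops > IO1_MAX_IOPS_NITRO:
        raise ValueError("exceeds the single-volume io1 limit")
    return math.ceil(iops / IO1_IOPS_PER_GIB)

print(min_io1_size_gib(64_000))  # 1280 GiB minimum for 64,000 IOPS
```

So the single volume must be at least 1,280 GiB to carry the full 64,000 provisioned IOPS.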

  • 44. 

A solutions architect is designing an architecture for a new application that requires low network latency and high network throughput between Amazon EC2 instances. Which component should be included in the architectural design?

    • A.

      An Auto Scaling group with Spot Instance types

    • B.

A placement group using a cluster placement strategy

    • C.

      A placement group using a partition placement strategy

    • D.

      A placement group using a partition placement strategy

    Correct Answer
B. A placement group using a cluster placement strategy
    Explanation
    A placement group using a cluster placement strategy should be included in the architectural design. This is because a cluster placement strategy ensures that EC2 instances are placed in close proximity to each other, reducing network latency. It also allows for high network throughput as it enables instances within the placement group to communicate with each other using enhanced networking. This makes it the ideal choice for an application that requires low network latency and high network throughput between EC2 instances.
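The placement group choice above can be sketched as the parameters for boto3's `create_placement_group`; the group name is hypothetical, and instances would then be launched into the group via their `Placement` setting.

```python
# Sketch: a cluster placement group packs instances close together in a
# single Availability Zone for low-latency, high-throughput networking.
# The name is hypothetical; in practice this dict would be passed to
# boto3: ec2_client.create_placement_group(**pg_params), and instances
# would launch with Placement={"GroupName": pg_params["GroupName"]}.
pg_params = {
    "GroupName": "low-latency-app",  # hypothetical name
    "Strategy": "cluster",           # vs. "partition" or "spread"
}
print(pg_params["Strategy"])  # cluster
```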

  • 45. 

A company has global users accessing an application deployed in different AWS Regions, exposing public static IP addresses. The users are experiencing poor performance when accessing the application over the internet. What should a solutions architect recommend to reduce internet latency?

    • A.

      Set up AWS Global Accelerator and add endpoints

    • B.

      Set up AWS Direct Connect locations in multiple Regions

    • C.

Set up an Amazon CloudFront distribution to access the application

    • D.

      Set up an Amazon Route 53 geo proximity routing policy to route traffic

    Correct Answer
    A. Set up AWS Global Accelerator and add endpoints
    Explanation
    To reduce internet latency for global users accessing the application deployed in different AWS Regions, a solutions architect should recommend setting up AWS Global Accelerator and adding endpoints. AWS Global Accelerator is a service that improves the performance and availability of applications by directing traffic to the nearest AWS edge location. By adding endpoints, the architect can distribute the traffic across multiple regions, reducing latency and improving the user experience. This solution ensures that users can access the application with better performance and reduced latency.
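The Global Accelerator setup described above can be sketched as two parameter sets: one accelerator (which provides static anycast IPs) and one endpoint group per Region. Names and Regions are hypothetical; in practice these would feed boto3's `globalaccelerator` client (`create_accelerator`, `create_endpoint_group`).

```python
# Sketch of a Global Accelerator configuration. All names are hypothetical.
accelerator = {
    "Name": "global-app",     # hypothetical accelerator name
    "IpAddressType": "IPV4",  # accelerator fronts traffic with static anycast IPs
    "Enabled": True,
}

# One endpoint group per Region hosting the application; each group would
# list the Regional endpoints (e.g., load balancers or Elastic IPs).
endpoint_groups = [
    {"EndpointGroupRegion": region}
    for region in ["us-east-1", "eu-west-1"]  # hypothetical Regions
]
print(len(endpoint_groups))  # 2
```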

  • 46. 

A company wants to migrate a workload to AWS. The chief information security officer requires that all data be encrypted at rest when stored in the cloud. The company wants complete control of encryption key lifecycle management. The company must be able to immediately remove the key material and audit key usage independently of AWS CloudTrail. The chosen services should integrate with other storage services that will be used on AWS. Which service satisfies these security requirements?

    • A.

      AWS CloudHSM with the CloudHSM client

    • B.

      AWS Key Management Service (AWS KMS) with AWS CloudHSM

    • C.

      AWS Key Management Service (AWS KMS) with an external key material origin

    • D.

      AWS Key Management Service (AWS KMS) with AWS managed customer master keys (CMKs)

    Correct Answer
    A. AWS CloudHSM with the CloudHSM client
    Explanation
    AWS CloudHSM with the CloudHSM client satisfies the security requirements because it provides complete control of encryption key lifecycle management. With CloudHSM, the company can immediately remove the key material and audit key usage independently of AWS CloudTrail. Additionally, CloudHSM integrates with other storage services on AWS, allowing the company to securely store and manage their encryption keys while migrating their workload to the cloud.

  • 47. 

    A company recently deployed a two-tier application in two Availability Zones in the us-east-1 Region. The databases are deployed in a private subnet while the web servers are deployed in a public subnet. An internet gateway is attached to the VPC. The application and database run on Amazon EC2 instances. The database servers are unable to access patches on the internet. A solutions architect needs to design a solution that maintains database security with the least operational overhead. Which solution meets these requirements?

    • A.

Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route.

    • B.

      Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route.

    • C.

Deploy two NAT instances inside the public subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.

    • D.

Deploy two NAT instances inside the private subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.

    Correct Answer
    A. Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route
    Explanation
    The correct solution is to deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. By doing this, the database servers in the private subnet will be able to access patches on the internet through the NAT gateway. Updating the routing table of the private subnet to use the NAT gateway as the default route ensures that all outgoing traffic from the private subnet is directed through the NAT gateway, maintaining database security. This solution requires the least operational overhead as it leverages the built-in NAT gateway service provided by AWS.
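The routing fix above can be sketched as two parameter sets: a NAT gateway placed in the public subnet, and the private subnet's route table pointing 0.0.0.0/0 at it. All IDs are hypothetical placeholders; in practice these would feed boto3's `ec2` client (`create_nat_gateway`, `create_route`).

```python
# Sketch of the NAT gateway setup for one Availability Zone.
# All resource IDs are hypothetical.
nat_gateway = {
    "SubnetId": "subnet-public-1a",      # NAT gateway lives in the PUBLIC subnet
    "AllocationId": "eipalloc-0example", # associated Elastic IP address
}

# The private subnet's route table sends all internet-bound traffic
# through the NAT gateway, so the database servers can fetch patches
# without being reachable from the internet.
private_default_route = {
    "RouteTableId": "rtb-private-1a",
    "DestinationCidrBlock": "0.0.0.0/0",
    "NatGatewayId": "nat-0example",
}
print(private_default_route["DestinationCidrBlock"])  # 0.0.0.0/0
```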

  • 48. 

A company has an application with a REST-based interface that allows data to be received in near-real time from a third-party vendor. Once received, the application processes and stores the data for further analysis. The application is running on Amazon EC2 instances. The third-party vendor has received many 503 Service Unavailable errors when sending data to the application. When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to process all requests. Which design should a solutions architect recommend to provide a more scalable solution?

    • A.

      Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions

    • B.

      Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.

    • C.

      Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer

    • D.

      Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group

    Correct Answer
    A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions
    Explanation
Using Amazon Kinesis Data Streams to ingest the data and processing it with AWS Lambda functions would provide a more scalable solution. Kinesis Data Streams can handle high volumes of data and can scale automatically to accommodate spikes in data volume. AWS Lambda functions can be used to process the data in near-real time, allowing for efficient analysis. This combination of services would ensure that the application can handle the increased data load and prevent 503 Service Unavailable errors.
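The ingestion side of this design can be sketched as a single Kinesis record: the vendor's payload is buffered in the stream, decoupling ingestion from processing (a Lambda consumer then drains the stream). The stream name and payload below are hypothetical; in practice the record dict would be passed to boto3's `kinesis_client.put_record(**record)`.

```python
import json

# Sketch of a producer-side Kinesis record. Stream name and payload
# fields are hypothetical placeholders.
payload = {"sensor_id": "vendor-1", "value": 42}
record = {
    "StreamName": "vendor-ingest",             # hypothetical stream
    "Data": json.dumps(payload).encode("utf-8"),
    "PartitionKey": payload["sensor_id"],      # distributes records across shards
}

# The Lambda consumer would decode the same bytes on the other side.
print(json.loads(record["Data"])["value"])  # 42
```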

  • 49. 

A solutions architect needs to design a low-latency solution for a static single-page application accessed by users utilizing a custom domain name. The solution must be serverless, encrypted in transit, and cost-effective. Which combination of AWS services and features should the solutions architect use? (Choose two.)

    • A.

      Amazon S3

    • B.

      Amazon EC2

    • C.

      AWS Fargate

    • D.

      Amazon CloudFront

    • E.

      Elastic Load Balancer

    Correct Answer(s)
    A. Amazon S3
    D. Amazon CloudFront
    Explanation
    The solutions architect should use Amazon S3 and Amazon CloudFront for this low-latency, serverless, encrypted in transit, and cost-effective solution. Amazon S3 is a highly scalable storage service that can host static assets for the single-page application. Amazon CloudFront is a content delivery network (CDN) that can cache and distribute the application's content globally, reducing latency for users accessing the application from different locations. Together, these services provide a reliable and efficient solution for hosting and delivering the static single-page application.
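The S3 + CloudFront setup above can be sketched as a minimal distribution config: an S3 origin, HTTPS enforced for encryption in transit, and an ACM certificate for the custom domain. Bucket name, domain, and certificate ARN are hypothetical; in practice this would feed boto3's `cloudfront.create_distribution()`.

```python
# Sketch of a CloudFront distribution config for an S3-hosted SPA.
# All names, domains, and ARNs are hypothetical placeholders.
distribution_config = {
    "Aliases": {"Quantity": 1, "Items": ["app.example.com"]},  # custom domain
    "Origins": {"Quantity": 1, "Items": [
        {"Id": "spa-origin", "DomainName": "spa-bucket.s3.amazonaws.com"}
    ]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "spa-origin",
        "ViewerProtocolPolicy": "redirect-to-https",  # encryption in transit
    },
    "ViewerCertificate": {  # ACM certificate for the custom domain (us-east-1)
        "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example",
        "SSLSupportMethod": "sni-only",
    },
}
print(distribution_config["DefaultCacheBehavior"]["ViewerProtocolPolicy"])
```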

  • 50. 

    A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements?

    • A.

      Amazon EBS

    • B.

      Amazon EC2

    • C.

      Amazon FSx

    • D.

      Amazon S3

    Correct Answer
    C. Amazon FSx
    Explanation
    Amazon FSx is the correct answer because it is an AWS managed service that provides fully managed Windows file servers that are accessible using the Server Message Block (SMB) protocol. It is designed for migrating Windows-based applications that require file storage, making it suitable for the company's file server workload migration. Amazon EBS and Amazon S3 are not specifically designed for SMB protocol access, while Amazon EC2 is a virtual server and does not provide a fully managed file server solution.
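The FSx choice above can be sketched as the parameters for boto3's `create_file_system`: an FSx for Windows File Server file system exposes SMB shares natively. Sizes, subnet ID, and deployment type are hypothetical.

```python
# Sketch of an Amazon FSx for Windows File Server file system, which
# serves file shares over SMB. All values are hypothetical placeholders;
# in practice this dict would feed boto3: fsx_client.create_file_system(**fsx_params).
fsx_params = {
    "FileSystemType": "WINDOWS",        # FSx for Windows File Server = SMB access
    "StorageCapacity": 300,             # GiB, hypothetical
    "SubnetIds": ["subnet-0example"],   # hypothetical subnet
    "WindowsConfiguration": {
        "ThroughputCapacity": 32,       # MB/s, hypothetical
        "DeploymentType": "MULTI_AZ_1", # high availability across AZs
    },
}
print(fsx_params["FileSystemType"])  # WINDOWS
```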

Quiz Review Timeline

  • Current Version
  • Mar 21, 2023
    Quiz Edited by
    ProProfs Editorial Team
  • Aug 13, 2020
    Quiz Created by
    Siva Neelam