AWS SAA - Exam B

Exam B: 50 questions + 15 questions from Exam C (questions 1-16, page 135)


Questions and Answers
  • 1. 

    A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes. Which AWS storage and database architecture meets the requirements of the application?

    • A.

      Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.

    • B.

      Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

    • C.

      Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

    • D.

      Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

    Correct Answer
    A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
    Explanation
    The correct answer is the first option. This architecture meets the requirements of the application by storing read-only data in S3 and copying it to the root volume of the web servers at boot time. The app servers share state using DynamoDB and IP unicast. The database uses RDS with multi-AZ deployment and read replicas for scalability. Additionally, the web servers, app servers, and database are backed up weekly to Glacier using snapshots, ensuring data protection.
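
    As a rough sketch of the boot-time copy step described above, a user-data script on each web server could pull the read-only content down from S3 with boto3. The bucket name, key prefix, and local path below are hypothetical.

      import os
      import boto3

      s3 = boto3.client("s3")
      bucket = "example-web-content"      # hypothetical bucket holding the read-only data
      prefix = "static/"                  # hypothetical key prefix
      dest_root = "/var/www/html"         # local directory on the instance root volume

      # Walk every object under the prefix and copy it to the local file system.
      paginator = s3.get_paginator("list_objects_v2")
      for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
          for obj in page.get("Contents", []):
              key = obj["Key"]
              local_path = os.path.join(dest_root, key)
              os.makedirs(os.path.dirname(local_path), exist_ok=True)
              s3.download_file(bucket, key, local_path)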

  • 2. 

    Your customer wishes to deploy an enterprise application to AWS which will consist of several web servers, several application servers and a small (50GB) Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?

    • A.

      A. Backup RDS using automated daily DB backups Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore

    • B.

      B. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file level restore.

    • C.

      C. Backup RDS using automated daily DB backups Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore

    • D.

      D. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.

    Correct Answer
    A. A. Backup RDS using automated daily DB backups Backup the EC2 instances using AMIs and supplement with file-level backup to S3 using traditional enterprise backup software to provide file level restore
    Explanation
    Option A is the correct answer because it utilizes automated daily DB backups for the RDS Oracle database, ensuring that the database can be recovered in case of any issues. Additionally, it suggests backing up the EC2 instances using AMIs, which allows for whole server and whole disk restores. The option also includes file-level backup to S3 using traditional enterprise backup software, enabling individual file restores with a recovery time of no more than two hours. This backup architecture meets all the requirements specified by the customer.
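
    A minimal sketch of the two AWS-side backup mechanisms named in this answer, assuming boto3 and hypothetical resource identifiers: automated daily RDS backups are enabled by setting a backup retention period, and an AMI is created from each EC2 instance for whole-server restores.

      import boto3

      rds = boto3.client("rds")
      ec2 = boto3.client("ec2")

      # Enable automated daily backups on the RDS Oracle instance (kept for 7 days).
      rds.modify_db_instance(
          DBInstanceIdentifier="erp-oracle-db",        # hypothetical instance identifier
          BackupRetentionPeriod=7,
          PreferredBackupWindow="03:00-04:00",
          ApplyImmediately=True,
      )

      # Create an AMI of an application server for whole-server / whole-disk restore.
      image = ec2.create_image(
          InstanceId="i-0123456789abcdef0",            # hypothetical instance ID
          Name="app-server-weekly-backup",
          NoReboot=True,
      )
      print("AMI created:", image["ImageId"])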

  • 3. 

    A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?

    • A.

      A. Use an Amazon Elastic Map Reduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.  

    • B.

      B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.

    • C.

      C. Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.

    • D.

      D. Provision an RDS read-replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline

    Correct Answer
    B. B. Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
    Explanation
    By modifying the application to write to an Amazon SQS queue and developing a worker process to flush the queue to the on-premises database, the load on the on-premises database resources can be reduced. This approach decouples the application from the database, allowing for asynchronous processing and reducing the volume of writes directly to the database. Using Amazon SQS also provides scalability and fault tolerance. This solution is cost-effective as it leverages existing AWS services without the need for additional infrastructure or data synchronization mechanisms.
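
    A minimal sketch of such a worker process, assuming boto3; the queue URL and the write_to_mainframe helper are hypothetical placeholders for the on-premises database call.

      import boto3

      sqs = boto3.client("sqs")
      queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/write-buffer"  # hypothetical

      def write_to_mainframe(record):
          """Placeholder for the call that applies the write to the on-premises database."""
          pass

      while True:
          # Long-poll the queue for up to 10 buffered writes at a time.
          resp = sqs.receive_message(QueueUrl=queue_url,
                                     MaxNumberOfMessages=10,
                                     WaitTimeSeconds=20)
          for msg in resp.get("Messages", []):
              write_to_mainframe(msg["Body"])
              # Delete only after the write succeeds so failed writes are retried.
              sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])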

  • 4. 

    Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored to the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3?

    • A.

      A. Use an EC2 Instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the GameState S3 bucket that communicates with the mobile app via web services.

    • B.

      B. Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation.

    • C.

      C. Use Login with Amazon allowing users to sign in with an Amazon account providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket.

    • D.

      D. Use an IAM user with access credentials assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket for distribution with the mobile app.

    Correct Answer
    B. B. Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation.
    Explanation
    The best approach for storing data to DynamoDB and S3 in this scenario is to use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation. This approach ensures that users can securely log in to the game using their existing social media account and that their progress data is stored in the Game state S3 bucket. By using temporary security credentials, the access to the DynamoDB table and S3 bucket is limited and can be easily managed, providing an extra layer of security for the stored data.
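
    As an illustration of what this flow looks like, the sketch below exchanges a social-provider identity token for temporary credentials via STS AssumeRoleWithWebIdentity and then uses them against DynamoDB. The role ARN, token, and item values are hypothetical; in practice the mobile SDK or Amazon Cognito usually performs this exchange on the app's behalf.

      import boto3

      sts = boto3.client("sts")

      # Token returned by the social identity provider after the user logs in (hypothetical).
      provider_token = "eyJraWQiOi..."

      resp = sts.assume_role_with_web_identity(
          RoleArn="arn:aws:iam::123456789012:role/GameMobileAppRole",  # hypothetical role
          RoleSessionName="player-session",
          WebIdentityToken=provider_token,
          DurationSeconds=3600,
      )
      creds = resp["Credentials"]

      # Use the temporary credentials to write the player's score to DynamoDB.
      dynamodb = boto3.client(
          "dynamodb",
          aws_access_key_id=creds["AccessKeyId"],
          aws_secret_access_key=creds["SecretAccessKey"],
          aws_session_token=creds["SessionToken"],
      )
      dynamodb.put_item(
          TableName="ScoreData",
          Item={"PlayerId": {"S": "player-42"}, "Score": {"N": "1337"}},
      )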

  • 5. 

    Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?

    • A.

      A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.

    • B.

      B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.

    • C.

      C. Amazon ElastiCache to store the writes until the writes are committed to the database.

    • D.

      D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput

    Correct Answer
    B. B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
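
    To illustrate the buffering idea behind the correct answer, the application's write path can simply enqueue each write and let a separate worker drain the queue into the database at a rate it can sustain. A minimal producer sketch, assuming boto3 and a hypothetical queue URL:

      import json
      import boto3

      sqs = boto3.client("sqs")
      queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/donation-writes"  # hypothetical

      def record_donation(donation):
          # Enqueue the write; SQS durably stores it until a worker commits it to the DB.
          sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(donation))

      record_donation({"donor": "alice@example.com", "amount_usd": 25})
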
  • 6. 

    You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution?

    • A.

      A. Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of each of the 6 EBS volumes to 1TB

    • B.

      B. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.

    • C.

      C. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput.

    • D.

      D. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.

    • E.

      E. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.

    Correct Answer
    E. E. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.
    Explanation
    The problem is that the total random IOPS measured at the instance level does not increase even after adding additional EBS volumes. The valid solution is to change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume. This is because the standard EBS instance root volume limits the total IOPS rate, and by changing it to a Provisioned IOPS volume, the total random I/O performance of the instance can be increased.
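
    For reference, the sketch below shows how one of the 500 GB, 4,000-IOPS volumes from the question could be provisioned and attached with boto3; the Availability Zone, instance ID, and device name are hypothetical.

      import boto3

      ec2 = boto3.client("ec2")

      # Create a 500 GB Provisioned IOPS (io1) volume with 4,000 IOPS.
      volume = ec2.create_volume(
          AvailabilityZone="us-east-1a",
          Size=500,
          VolumeType="io1",
          Iops=4000,
      )

      # Wait until the volume is available, then attach it to the EBS-optimized instance.
      ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
      ec2.attach_volume(
          VolumeId=volume["VolumeId"],
          InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
          Device="/dev/sdf",
      )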

  • 7. 

    You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months; each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

    • A.

      A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance

    • B.

      B. Ingest data into a DynamoDB table and move old data to a Redshift cluster

    • C.

      C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage

    • D.

      D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

    Correct Answer
    C. C. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
    Explanation
    Option C suggests replacing the current RDS instance with a 6 node Redshift cluster with 96TB of storage. This setup will meet the requirements of supporting at least 100K sensors and storing sensor data for at least two years. Redshift is a highly scalable data warehousing solution that can handle large amounts of data and provide fast query performance. With 96TB of storage, it can accommodate the storage needs for the increased number of sensors and the requirement to store data for two years. Additionally, the use of a Redshift cluster allows for further scaling if needed in the future.
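
    A minimal sketch of provisioning such a cluster with boto3; the identifier, node type, and credentials are hypothetical, and the node type/count would be sized to reach roughly 96TB of storage.

      import boto3

      redshift = boto3.client("redshift")

      redshift.create_cluster(
          ClusterIdentifier="sensor-analytics",            # hypothetical cluster name
          ClusterType="multi-node",
          NodeType="ds2.xlarge",                           # dense-storage nodes; choice is illustrative
          NumberOfNodes=6,
          MasterUsername="admin",
          MasterUserPassword="Replace-With-Strong-Pass1",  # placeholder only
      )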

  • 8. 

    Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?

    • A.

      A. Utilize S3 to collect the inbound sensor data analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift Cluster.

    • B.

      B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.

    • C.

      C. Utilize SQS to collect the inbound sensor data analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.

    • D.

      D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.

    Correct Answer
    B. B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
    Explanation
    Option B is the correct answer because it utilizes Amazon Kinesis, which is a real-time data streaming service, to collect the inbound sensor data. It also allows for analysis of the data using Kinesis clients, ensuring real-time analytics. The results of the analysis are then saved to a Redshift cluster using EMR, which provides a highly durable, elastic, and parallel processing capability. This architecture meets all the requirements stated in the question.
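
    As a sketch of the ingestion side of this answer, each collar reading can be pushed into a Kinesis stream with a partition key (here the collar ID) so records from the same device stay ordered within a shard. The stream name and payload are hypothetical.

      import json
      import boto3

      kinesis = boto3.client("kinesis")

      reading = {
          "collar_id": "collar-0042",
          "heart_rate": 84,
          "activity": "walking",
      }

      # Put one JSON record onto the stream every couple of seconds.
      kinesis.put_record(
          StreamName="pet-biometrics",            # hypothetical stream name
          Data=json.dumps(reading).encode("utf-8"),
          PartitionKey=reading["collar_id"],
      )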

  • 9. 

    You need a persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minutes timeframe. Each traced call can be either active or terminated. An external application needs to know each minute the list of currently active calls, which are usually a few calls/second. But once per month there is a periodic peak up to 1,000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible?

    • A.

      A. Use RDS Multi-AZ with two tables, one for "Active calls" and one for "Terminated calls". In this way the "Active calls" table is always small and effective to access.

    • B.

      B. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.

    • C.

      C. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can be equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.

    • D.

      D. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.

    Correct Answer
    B. B. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
    Explanation
    Using DynamoDB with a "Calls" table and a Global Secondary Index on the "IsActive" attribute, which is present for active calls only, is the best fit for this scenario. Because the attribute exists only on active items, the index is sparse: it contains just the handful of currently active calls, so the per-minute query for active calls stays small and fast. DynamoDB's provisioned throughput can also be raised for the monthly 1,000 calls/second peak and lowered afterwards without downtime, which aligns with the 24/7 availability requirement and the priority of keeping costs low.
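
    A minimal sketch of the table and sparse index described above, assuming boto3; names and capacity figures are illustrative. Because "IsActive" is written only on active calls (and removed when a call terminates), the index holds just the active items.

      import boto3

      dynamodb = boto3.client("dynamodb")

      dynamodb.create_table(
          TableName="Calls",
          AttributeDefinitions=[
              {"AttributeName": "CallId", "AttributeType": "S"},
              {"AttributeName": "IsActive", "AttributeType": "S"},
          ],
          KeySchema=[{"AttributeName": "CallId", "KeyType": "HASH"}],
          ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
          GlobalSecondaryIndexes=[
              {
                  "IndexName": "ActiveCalls",
                  # Sparse index: only items that carry the IsActive attribute appear here.
                  "KeySchema": [{"AttributeName": "IsActive", "KeyType": "HASH"}],
                  "Projection": {"ProjectionType": "ALL"},
                  "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
              }
          ],
      )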

  • 10. 

    A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend?

    • A.

      A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to sub-directories within the bucket via use of the 'username' policy variable.

    • B.

      B. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client Create a bucket for each customer with a Bucket Policy that permits access only to that one customer.

    • C.

      C. Create an auto-scaling group of FTP servers with a scaling policy to automatically scale-in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of ftp users from S3 as part of the user Data startup script on each Instance.

    • D.

      D. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a Bucket Policy that permits access only to that one customer.

    Correct Answer
    A. A. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to sub-directories within the bucket via use of the 'username' policy variable.
    Explanation
    The recommended AWS architecture is to ask customers to use an S3 client instead of an FTP client. A single S3 bucket should be created, and an IAM user should be created for each customer. These IAM users should be grouped together and given an IAM policy that allows access to sub-directories within the bucket using the 'username' Policy variable. This architecture ensures customer privacy by maintaining separate sub-directories for each customer while also minimizing costs by using a single S3 bucket.
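
    A sketch of what such a group policy could look like, attached with boto3; the bucket and group names are hypothetical. The ${aws:username} policy variable scopes each IAM user to the prefix matching their own user name.

      import json
      import boto3

      iam = boto3.client("iam")
      bucket = "design-co-customer-files"   # hypothetical bucket name

      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  # Let each user list only their own "folder" in the shared bucket.
                  "Effect": "Allow",
                  "Action": "s3:ListBucket",
                  "Resource": f"arn:aws:s3:::{bucket}",
                  "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
              },
              {
                  # Let each user read and write objects only under their own prefix.
                  "Effect": "Allow",
                  "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                  "Resource": f"arn:aws:s3:::{bucket}/${{aws:username}}/*",
              },
          ],
      }

      iam.put_group_policy(
          GroupName="Customers",            # hypothetical group containing the per-customer users
          PolicyName="PerCustomerPrefixAccess",
          PolicyDocument=json.dumps(policy),
      )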

  • 11. 

    Amazon EC2 provides virtual computing environments known as _____.

    • A.

      A. instances

    • B.

      B. volumes

    • C.

      C. microsystems

    • D.

      D. servers

    Correct Answer
    A. A. instances
    Explanation
    Amazon EC2 provides virtual computing environments known as instances. Instances are virtual servers in the cloud that can be used to run applications and services. They are customizable and can be easily scaled up or down based on the needs of the user. Instances are the fundamental building blocks of Amazon EC2 and are used to deploy various types of applications and workloads.

  • 12. 

    You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose two.)

    • A.

      A. Route 53 Record Sets

    • B.

      B. IAM Roles

    • C.

      C. Elastic IP Addresses (EIP)

    • D.

      D. EC2 Key Pairs

    • E.

      E. Launch configurations

    • F.

      F. Security Groups

    Correct Answer(s)
    A. A. Route 53 Record Sets
    B. B. IAM Roles
    Explanation
    Route 53 Record Sets and IAM Roles do not need to be recreated in the second region because they are globally available resources. Route 53 Record Sets can be accessed from any region and IAM Roles can be used across regions. The other resources mentioned (Elastic IP Addresses, EC2 Key Pairs, Launch configurations, and Security Groups) are region-specific and would need to be recreated in the second region for disaster recovery purposes.

  • 13. 

    Your company runs a customer facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs), which architecture provides high availability?

    • A.

      A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.

    • B.

      B. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer) and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB and one RDS (Relational Database Service) Instance deployed with read replicas in the two other AZs.

    • C.

      C. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

    • D.

      D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

    Correct Answer
    D. D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
    Explanation
    Option D provides high availability because it deploys the web tier and application tier across all three availability zones (AZs) with two EC2 instances in each AZ. If one AZ goes down, each tier still has 4 of its 6 servers (about 67%), which satisfies the 65% minimum capacity requirement, whereas the two-AZ designs in options A and C would drop to 50% on an AZ failure. The use of an Auto Scaling Group and Elastic Load Balancer helps distribute the traffic evenly and scale the instances based on demand, and the Multi-AZ RDS deployment ensures that the database fails over automatically to a standby in another AZ for data redundancy and fault tolerance.

  • 14. 

    Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why?

    • A.

      A. Yes, you should deploy two Memcached ElastiCache Clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.

    • B.

      B. No, if the cache node fails you can always get the same data from the DB without having any availability impact.

    • C.

      C. No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact.

    • D.

      D. Yes, you should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.

    Correct Answer
    A. A. Yes, you should deploy two Memcached ElastiCache Clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.
    Explanation
    Deploying two Memcached ElastiCache Clusters in different AZs ensures high availability in case of a cache node failure. If the cache node fails, the RDS instance will not be able to handle the increased load on its own. By having two cache clusters in different AZs, the workload can be distributed and the application can continue to function without any availability impact.

  • 15. 

    You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: the VM's single 10GB VMDK is almost full; the virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized; and it is currently running on a highly customized Windows VM within a VMware environment, and you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?

    • A.

      A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2.

    • B.

      B. Use Import/Export to import the VM as an EBS snapshot and attach to EC2.

    • C.

      C. Use S3 to create a backup of the VM and restore the data into EC2.

    • D.

      D. Use the ec2-bundle-instance API to import an image of the VM into EC2.

    Correct Answer
    A. A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2.
    Explanation
    The correct answer is A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2. This option allows for the migration of the legacy web application to AWS by using the EC2 VM Import Connector for vCenter. This tool enables the import of the virtual machine (VM) into EC2, ensuring a quick migration process. By utilizing this method, the application can be moved to AWS while meeting the business continuity requirements, including the RTO and RPO objectives.

  • 16. 

    An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements?

    • A.

      A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a "Lastupdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.

    • B.

      B. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.

    • C.

      C. Use AWS data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region.

    • D.

      D. Also send each write into an SQS queue in the second region; use an auto-scaling group behind the SQS queue to replay the writes in the second region.

    Correct Answer
    A. A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a "Lastupdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
    Explanation
    The correct answer is A because it suggests using AWS Data Pipeline to schedule a cross-region copy of DynamoDB once a day. By creating a "Lastupdated" attribute in the DynamoDB table to represent the timestamp of the last update, it can be used as a filter to synchronize only the modified elements. This solution meets the requirements of a 2-hour Recovery Time Objective and a 24-hour Recovery Point Objective, while also minimizing changes to the existing web application and controlling the throughput of DynamoDB used for data synchronization.

  • 17. 

    Refer to the architecture diagram above of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost effective and efficient manner?

    • A.

      A. Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.

    • B.

      B. Implement fault tolerance against EC2 instance failure, since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.

    • C.

      C. Implement message passing between EC2 instances within a batch by exchanging messages through SQS.

    • D.

      D. Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.

    • E.

      E. Handle high priority jobs before lower priority jobs by assigning a priority metadata field to SQS messages.

    Correct Answer
    D. D. Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.
    Explanation
    The given architecture diagram shows the use of Simple Queue Service (SQS) to set up a message queue between EC2 instances for batch processing. CloudWatch monitors the number of job requests and an Auto Scaling group adds or deletes batch servers based on CloudWatch alarms. This allows for automatic coordination of the number of EC2 instances with the number of job requests, improving cost effectiveness. By dynamically scaling the number of batch servers based on the workload, resources are efficiently utilized and costs are optimized.
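
    As a sketch of the scaling trigger this architecture relies on, a CloudWatch alarm on the queue depth can invoke an Auto Scaling policy when jobs back up; the queue name, threshold, and policy ARN are hypothetical.

      import boto3

      cloudwatch = boto3.client("cloudwatch")

      # Hypothetical ARN of a scale-out policy already attached to the Auto Scaling group.
      scale_out_policy_arn = "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:EXAMPLE"

      cloudwatch.put_metric_alarm(
          AlarmName="batch-queue-backlog-high",
          Namespace="AWS/SQS",
          MetricName="ApproximateNumberOfMessagesVisible",
          Dimensions=[{"Name": "QueueName", "Value": "batch-jobs"}],
          Statistic="Average",
          Period=300,
          EvaluationPeriods=1,
          Threshold=100,
          ComparisonOperator="GreaterThanOrEqualToThreshold",
          AlarmActions=[scale_out_policy_arn],
      )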

  • 18. 

    Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO strongly agrees with moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?

    • A.

      A. Create an EBS backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, AutoScaling, and ELB resources to support deploying the application across Multiple- Availability-Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

    • B.

      B. Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

    • C.

      C. Create an EBS backed private AMI which includes a fresh install of your application. Setup a script in your data center to backup the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload.

    • D.

      D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.

    Correct Answer
    A. A. Create an EBS backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, AutoScaling, and ELB resources to support deploying the application across Multiple- Availability-Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
    Explanation
    The correct answer is A because it addresses the requirements given by the CIO. By creating an EBS backed private AMI, the application can be easily deployed in AWS. The CloudFormation template ensures that the application is deployed across multiple availability zones, improving availability and reducing the risk of infrastructure failures. Asynchronously replicating transactions from the on-premises database to a database instance in AWS ensures a Recovery Point Objective (RPO) of 1 hour or less. The secure VPN connection ensures the data is transferred securely. Overall, this solution meets the specified RTO and RPO targets while minimizing costs.

  • 19. 

    An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?

    • A.

      A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.

    • B.

      B. Use synchronous database master-slave replication between two availability zones.

    • C.

      C. Take hourly DB backups to EC2 instance store volumes with transaction logs stored in S3 every 5 minutes.

    • D.

      D. Take 15-minute DB backups stored in Glacier with transaction logs stored in S3 every 5 minutes.

    Correct Answer
    A. A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
    Explanation
    The correct answer is A because taking hourly database backups to S3 ensures that the RPO is met by having backups available every hour. Additionally, storing transaction logs in S3 every 5 minutes ensures that the RPO is met by having the most recent transaction logs available in case of data corruption. This strategy allows for a recovery time objective of less than 3 hours, as the customer can restore the most recent backup and replay the transaction logs to recover the data.

  • 20. 

    Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

    • A.

      A. Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status; use one of the Elastic Beanstalk instances to send emails to customers.

    • B.

      B. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.

    • C.

      C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.

    • D.

      D. Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.

    Correct Answer
    C. C. Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
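
    The reliable-email piece of the chosen answer maps to a simple SES call from the decider or an activity worker; a minimal sketch with boto3, using hypothetical addresses (the sender must be a verified SES identity):

      import boto3

      ses = boto3.client("ses")

      ses.send_email(
          Source="orders@example.com",                  # must be verified in SES
          Destination={"ToAddresses": ["customer@example.com"]},
          Message={
              "Subject": {"Data": "Your order status has changed"},
              "Body": {"Text": {"Data": "Your gadget has passed quality control and shipped."}},
          },
      )
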
  • 21. 

    You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you configure weighted record sets associated with two web servers in separate Availability Zones per region. Running a DR test you notice that when you disable all web servers in one of the regions Route53 does not automatically direct all users to the other region. What could be happening? (Choose two.)

    • A.

      A. Latency resource record sets cannot be used in combination with weighted resource record sets.

    • B.

      B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.

    • C.

      C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.

    • D.

      D. One of the two working web servers in the other region did not pass its HTTP health check.

    • E.

      E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.

    Correct Answer(s)
    B. B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
    E. E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.
    Explanation
    When using Route53 Latency-Based Routing, it is important to set up HTTP health checks for the weighted resource record sets associated with the disabled web servers. This ensures that Route53 can monitor the health of the servers and direct traffic away from any servers that are not functioning properly. Additionally, it is necessary to set "Evaluate Target Health" to "Yes" on the latency alias resource record set in the region where the servers are disabled. This allows Route53 to consider the health of the servers when making routing decisions. If these steps are not taken, Route53 will not automatically direct users to the other region when all servers in one region are disabled.
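
    For reference, a latency alias record with "Evaluate Target Health" enabled can be created (or updated) as below; the hosted zone IDs and ELB DNS name are hypothetical, and an equivalent record would exist for each region.

      import boto3

      route53 = boto3.client("route53")

      route53.change_resource_record_sets(
          HostedZoneId="Z0HYPOTHETICALZONE",                # hosted zone for example.com (hypothetical)
          ChangeBatch={
              "Changes": [
                  {
                      "Action": "UPSERT",
                      "ResourceRecordSet": {
                          "Name": "example.com.",
                          "Type": "A",
                          "SetIdentifier": "us-west-2",     # one latency record per region
                          "Region": "us-west-2",
                          "AliasTarget": {
                              "HostedZoneId": "Z0ELBZONEID",  # the ELB's hosted zone ID (hypothetical)
                              "DNSName": "web-elb-123.us-west-2.elb.amazonaws.com.",
                              "EvaluateTargetHealth": True,   # the setting called out in answer E
                          },
                      },
                  }
              ]
          },
      )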

  • 22. 

    Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection. In addition to running your application in multiple regions, which option will support this application's requirements?

    • A.

      A. Serve user content from S3 and CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table.

    • B.

      B. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3 and CloudFront with dynamic content and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.

    • C.

      C. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront, and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers propagating DynamoDB updates.

    • D.

      D. Serve user content from S3 and CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.

    Correct Answer
    A. A. Serve user content from S3 and CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table.
    Explanation
    The correct answer is A because it suggests serving user content from S3 and CloudFront, which provides high availability and low latency. Using Route53 latency-based routing between ELBs in each region ensures that requests are directed to the region with the lowest latency. Retrieving user preferences from a local DynamoDB table in each region allows for quick access and reduces latency. Leveraging SQS to capture changes to user preferences with SQS workers ensures that updates are propagated efficiently to each table. This design meets the requirements of a highly available and latency-sensitive application.

  • 23. 

    Your system recently experienced downtime. During the troubleshooting process, you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to: launch, start, stop, and terminate development resources; and launch and start production instances.

    • A.

      A. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.

    • B.

      B. Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.

    • C.

      C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances

    • D.

      D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

    Correct Answer
    B. B. Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
    Explanation
    Option B suggests leveraging resource-based tagging along with an IAM user to prevent specific users from terminating production EC2 resources. By using resource-based tagging, specific EC2 instances can be tagged as "production" and only users with the necessary permissions can terminate those instances. This strategy allows the administrator to still have the ability to launch and start production instances while preventing them from mistakenly terminating them.
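
    A sketch of the tag-based approach, assuming boto3, a hypothetical user name, and a hypothetical "environment=production" tag on the production instances: the explicit Deny blocks terminating (and stopping) production-tagged instances while the broader statement keeps everything else allowed.

      import json
      import boto3

      iam = boto3.client("iam")

      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  # Broad EC2 access so the administrator can manage development resources
                  # and launch/start production instances.
                  "Effect": "Allow",
                  "Action": "ec2:*",
                  "Resource": "*",
              },
              {
                  # An explicit deny always wins: block stop/terminate on production-tagged instances.
                  "Effect": "Deny",
                  "Action": ["ec2:TerminateInstances", "ec2:StopInstances"],
                  "Resource": "*",
                  "Condition": {
                      "StringEquals": {"ec2:ResourceTag/environment": "production"}
                  },
              },
          ],
      }

      iam.put_user_policy(
          UserName="new-admin",                     # hypothetical IAM user
          PolicyName="DenyProductionTermination",
          PolicyDocument=json.dumps(policy),
      )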

  • 24. 

    A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end, however the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation? (Choose two.)

    • A.

      A. Add a route to the route table with an IPsec VPN connection as the target.

    • B.

      B. Enable route propagation to the virtual private gateway (VGW).

    • C.

      C. Enable route propagation to the customer gateway (CGW).

    • D.

      D. Modify the route table of all Instances using the 'route' command.

    • E.

      E. Modify the Instances VPC subnet route table by adding a route back to the customer's on-premises environment.

    Correct Answer(s)
    B. B. Enable route propagation to the virtual private gateway (VGW).
    E. E. Modify the Instances VPC subnet route table by adding a route back to the customer's on-premises environment.
    Explanation
    Enabling route propagation to the virtual private gateway (VGW) allows the VGW to advertise routes to the customer's on-premises environment, which would enable connectivity between EC2 instances and servers in the datacenter. Modifying the Instances VPC subnet route table by adding a route back to the customer's on-premises environment ensures that traffic from the EC2 instances is correctly routed back to the customer's datacenter. These two solutions together address the issue of the customer being unable to connect from EC2 instances to servers in its datacenter.
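
    Both fixes can be expressed in a couple of boto3 calls; the gateway, route table, and on-premises CIDR below are hypothetical. Enabling propagation lets the routes advertised over Direct Connect appear in the subnet route table automatically, while the static route is the manual alternative.

      import boto3

      ec2 = boto3.client("ec2")

      # Option B: let the subnet route table learn the on-premises routes from the VGW.
      ec2.enable_vgw_route_propagation(
          GatewayId="vgw-0abc1234def567890",        # virtual private gateway (hypothetical)
          RouteTableId="rtb-0abc1234def567890",     # route table of the instances' subnet
      )

      # Option E: or add an explicit static route back to the on-premises network.
      ec2.create_route(
          RouteTableId="rtb-0abc1234def567890",
          DestinationCidrBlock="10.20.0.0/16",      # hypothetical on-premises CIDR
          GatewayId="vgw-0abc1234def567890",
      )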

  • 25. 

    Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a DirectConnect connection and would like to start using the new connection. After configuring DirectConnect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?

    • A.

      A. Delete your existing VPN connection to avoid routing loops, configure your DirectConnect router with the appropriate settings, and verify network traffic is leveraging DirectConnect.

    • B.

      B. Configure your DirectConnect router with a higher BGP priority than your VPN router, verify network traffic is leveraging DirectConnect, and then delete your existing VPN connection.

    • C.

      C. Update your VPC route tables to point to the DirectConnect connection, configure your DirectConnect router with the appropriate settings, verify network traffic is leveraging DirectConnect, and then delete the VPN connection.

    • D.

      D. Configure your DirectConnect router, update your VPC route tables to point to the DirectConnect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the DirectConnect connection.

    Correct Answer
    C. C. Update your VPC route tables to point to the DirectConnect connection, configure your DirectConnect router with the appropriate settings, verify network traffic is leveraging DirectConnect, and then delete the VPN connection.
    Explanation
    The most seamless transition for the users would be to first update the VPC route tables to point to the DirectConnect connection. This ensures that the network traffic is directed towards the DirectConnect connection. Then, the DirectConnect router should be configured with the appropriate settings to establish the connection. After verifying that the network traffic is indeed leveraging the DirectConnect connection, the existing VPN connection can be safely deleted. This approach ensures a smooth transition from the VPN connection to the DirectConnect connection without causing any disruptions to the users.

  • 26. 

    A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses is allowed at a time and can be added through an API. How should they architect their solution?

    • A.

      A. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.

    • B.

      B. Whitelist the VPC Internet Gateway Public IP and route payment requests through the Internet Gateway.

    • C.

      C. Whitelist the ELB IP addresses and route payment requests from the Application servers through the ELB.

    • D.

      D. Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instances public IP address to the payment validation whitelist API.

    Correct Answer
    A. A. Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.
  • 27. 

    You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements?

    • A.

      A. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.

    • B.

      B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.

    • C.

      C. Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in your VPC.

    • D.

      D. Configure two routing tables: one that has a default route via the Internet gateway and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.

    Correct Answer
    B. B. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
    Explanation
    The correct answer is B because it suggests configuring a single routing table with a default route via the internet gateway, which allows users to access the application instances from the internet. Additionally, it recommends propagating specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router, ensuring connectivity between the on-premises network and the VPC. Finally, it suggests associating the routing table with all VPC subnets, ensuring consistent routing across the VPC. This design meets the requirements of allowing access from both the internet and the on-premises network.
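
    A sketch of the chosen design in boto3 terms, with hypothetical IDs: one route table carrying a default route to the Internet gateway, propagation enabled so the specific on-premises routes learned over Direct Connect are injected automatically, and the table associated with each subnet.

      import boto3

      ec2 = boto3.client("ec2")
      route_table_id = "rtb-0abc1234def567890"      # the single routing table (hypothetical)

      # Default route to the Internet via the Internet gateway.
      ec2.create_route(
          RouteTableId=route_table_id,
          DestinationCidrBlock="0.0.0.0/0",
          GatewayId="igw-0abc1234def567890",
      )

      # Specific on-premises routes arrive via BGP on the Direct Connect link; propagate
      # them from the virtual private gateway into this route table.
      ec2.enable_vgw_route_propagation(
          GatewayId="vgw-0abc1234def567890",
          RouteTableId=route_table_id,
      )

      # Associate the routing table with every subnet in the VPC.
      for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:   # hypothetical subnet IDs
          ec2.associate_route_table(RouteTableId=route_table_id, SubnetId=subnet_id)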

  • 28. 

    You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?

    • A.

      A. Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.

    • B.

      B. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.

    • C.

      C. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS.

    • D.

      D. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.

    Correct Answer
    C. C. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS.
    Explanation
    The correct way to configure AWS Direct Connect for access to services such as Amazon S3 is to create a public interface on the Direct Connect link. Then, redistribute BGP routes into the existing routing infrastructure and advertise specific routes for your network to AWS. This allows for the use of AWS public service endpoints like Amazon S3 while still directing other internet traffic through the existing link to the Internet Service Provider.
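    As a rough illustration, a public virtual interface can be provisioned with boto3's Direct Connect client roughly as shown below; the connection ID, VLAN, ASN, peer addresses, and advertised prefix are all hypothetical and would come from your actual Direct Connect order and your own public IP allocations.

```python
import boto3

dx = boto3.client("directconnect")

# All identifiers and addresses below are illustrative placeholders.
resp = dx.create_public_virtual_interface(
    connectionId="dxcon-xxxxxxxx",
    newPublicVirtualInterface={
        "virtualInterfaceName": "public-vif-s3",
        "vlan": 101,
        "asn": 65000,                              # your BGP ASN
        "amazonAddress": "203.0.113.1/30",
        "customerAddress": "203.0.113.2/30",
        "addressFamily": "ipv4",
        # Advertise only the specific public prefixes you own to AWS over this VIF
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],
    },
)
print(resp["virtualInterfaceState"])               # e.g. 'verifying' or 'pending'
```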


  • 29. 

    You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers, and one NAT instance, for a total of seven EC2 instances. The web, application, and database servers are deployed across two Availability Zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load; unfortunately, some of these new instances fail to launch. Which of the following could be the root cause? (Choose two.)

    • A.

      A. AWS reserves the first and the last private IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances

    • B.

      B. The Internet Gateway (IGW) of your VPC has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches

    • C.

      C. The ELB has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches

    • D.

      D. AWS reserves one IP address in each subnet's CIDR block for Route53 so you do not have enough addresses left to launch all of the new EC2 instances

    • E.

      E. AWS reserves the first four and the last IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances

    Correct Answer(s)
    C. C. The ELB has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches
    E. E. AWS reserves the first four and the last IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances
    Explanation
    The root cause of the issue could be that the ELB has scaled up, adding more instances to handle the traffic spike. This increase in instances reduces the number of available private IP addresses for new instance launches. Additionally, AWS reserves the first four and the last IP address in each subnet's CIDR block, further limiting the number of available addresses for launching new EC2 instances.
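    The address arithmetic behind answer E is easy to check. The quick sketch below (pure Python, with the ELB node count as an assumption) shows why a /28 runs out of room once the ELB scales and each tier tries to double.

```python
import ipaddress

block = ipaddress.ip_network("10.0.0.0/28")
total = block.num_addresses           # 16 addresses in a /28
reserved = 5                          # network, VPC router, DNS, reserved-for-future-use, broadcast
usable = total - reserved             # 11 usable private IPs

in_use = 7                            # 2 web + 2 app + 2 DB + 1 NAT instance
elb_nodes = 2                         # assumption: the ELB consumes at least one IP per AZ it spans
remaining = usable - in_use - elb_nodes

print(total, usable, remaining)       # 16 11 2 -> doubling every tier (7 more instances) cannot fit
```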


  • 30. 

    You've been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC; the configuration is as follows: You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet. Application and database servers cannot have direct access to the Internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

    • A.

      A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance.

    • B.

      B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.

    • C.

      C. Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d.

    • D.

      D. Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.

    Correct Answer
    A. A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance.
    Explanation
    The correct configuration is to create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance. This configuration allows the web servers to have direct access to the internet while ensuring that the application and database servers do not have direct access. The NAT instance acts as a gateway for the private servers to retrieve updates from the internet, and the route from the route table ensures that the traffic is directed to the NAT instance.
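    A minimal boto3 sketch of that route change is shown below. The NAT instance ID is a hypothetical placeholder and the route table ID is taken from the question; disabling the source/destination check is also required for any NAT instance to forward traffic.

```python
import boto3

ec2 = boto3.client("ec2")

PRIVATE_ROUTE_TABLE = "rtb-238bc44b"       # route table used by the private subnet
NAT_INSTANCE_ID = "i-0123456789abcdef0"    # hypothetical NAT instance in the public subnet

# A NAT instance must have the source/destination check disabled
ec2.modify_instance_attribute(InstanceId=NAT_INSTANCE_ID,
                              SourceDestCheck={"Value": False})

# Default route from the private route table through the NAT instance
ec2.create_route(RouteTableId=PRIVATE_ROUTE_TABLE,
                 DestinationCidrBlock="0.0.0.0/0",
                 InstanceId=NAT_INSTANCE_ID)
```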


  • 31. 

    You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? (Choose two.)

    • A.

      A. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address.

    • B.

      B. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your Web servers. Configure a Route53 CNAME record to your CloudFront distribution.

    • C.

      C. Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.

    • D.

      D. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs with health checks and DNS failover.

    • E.

      E. Configure ELB with an EIP. Place all your Web servers behind ELB. Configure a Route53 A record that points to the EIP.

    Correct Answer(s)
    C. C. Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.
    D. D. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs with health checks and DNS failover.
    Explanation
    The correct answers are C and D.

    Option C suggests placing all web servers behind an ELB (Elastic Load Balancer) and configuring a Route53 CNAME record to point to the ELB DNS name. This ensures that the web servers are highly available and can handle traffic effectively.

    Option D suggests assigning Elastic IP addresses (EIPs) to all web servers and configuring a Route53 record set with all EIPs, along with health checks and DNS failover. This also ensures high availability by distributing traffic across multiple servers and monitoring their health.

    Both options provide solutions for achieving a highly available architecture for the web servers while ensuring Internet connectivity.
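    For option D, one way to express "all EIPs with health checks and DNS failover" is a primary/secondary failover record pair, sketched below with boto3; the hosted zone ID, health check IDs, EIPs, and domain name are hypothetical placeholders.

```python
import boto3

r53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"                      # hypothetical hosted zone

# Hypothetical (EIP, health check ID) pairs for the two web servers
targets = [("203.0.113.10", "11111111-aaaa-bbbb-cccc-000000000001"),
           ("203.0.113.11", "11111111-aaaa-bbbb-cccc-000000000002")]

changes = []
for idx, (eip, health_check_id) in enumerate(targets):
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": f"web-{idx}",
            "Failover": "PRIMARY" if idx == 0 else "SECONDARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": eip}],
            "HealthCheckId": health_check_id,          # record is only served while healthy
        },
    })

r53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "DNS failover across EIPs", "Changes": changes},
)
```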


  • 32. 

    You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose three.)

    • A.

      A. An AWS Direct Connect link between the VPC and the network housing the internal services.

    • B.

      B. An Internet Gateway to allow a VPN connection.

    • C.

      C. An Elastic IP address on the VPC instance

    • D.

      D. An IP address space that does not conflict with the one on-premises

    • E.

      E. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies' IP addresses

    • F.

      F. A VM Import of the current virtual machine

    Correct Answer(s)
    A. A. An AWS Direct Connect link between the VPC and the network housing the internal services.
    D. D. An IP address space that does not conflict with the one on-premises
    F. F. A VM Import of the current virtual machine
    Explanation
    The correct answers are A, D, and F. An AWS Direct Connect link between the VPC and the network housing the internal services is needed to establish a secure and dedicated connection between the VPC and the on-premises services. An IP address space that does not conflict with the one on-premises is necessary to ensure that there are no IP address conflicts between the VPC and the on-premises network. A VM Import of the current virtual machine is required to migrate the legacy application from the virtual machine in the datacenter to the VPC. These three options together will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured.


  • 33. 

    You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?

    • A.

      A. File a change request to implement Alias Resource support in the application. Use Route 53 Alias Resource Record to distribute load on two application servers in different AZs.

    • B.

      B. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.

    • C.

      C. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, two application servers in different AZs.

    • D.

      D. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.

    Correct Answer
    D. D. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
    Explanation
    The correct answer is D because it suggests implementing Proxy Protocol support in the application. By using an ELB with a TCP Listener and Proxy Protocol enabled, the load can be distributed on two application servers in different Availability Zones (AZs). This setup allows the application servers to know the IP address of the clients, which is necessary for proper functioning. Additionally, using multiple AZs provides high availability and scalability.
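    A minimal sketch of enabling Proxy Protocol on a Classic ELB with boto3 is shown below; the load balancer name and backend port are hypothetical. With the policy attached, the client's original IP address is prepended to the TCP stream, so the application can keep reading it from the socket.

```python
import boto3

elb = boto3.client("elb")    # Classic ELB supports TCP listeners plus Proxy Protocol

LB_NAME = "legacy-app-lb"    # hypothetical load balancer name
BACKEND_PORT = 3000          # hypothetical application port

# Create a policy of type ProxyProtocolPolicyType and enable it
elb.create_load_balancer_policy(
    LoadBalancerName=LB_NAME,
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# Attach the policy to the backend port so the client IP reaches the application servers
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName=LB_NAME,
    InstancePort=BACKEND_PORT,
    PolicyNames=["EnableProxyProtocol"],
)
```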


  • 34. 

    A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?

    • A.

      A. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 instances, and configure with Auto Scaling and an Elastic Load Balancer.

    • B.

      B. Model the environment using CloudFormation; use an EC2 instance running an Apache webserver and an open-source search application; stripe multiple standard EBS volumes together to store the JPEGs and search index.

    • C.

      C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.

    • D.

      D. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.

    • E.

      E. Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances, and use Route53 with DNS round-robin.

    Correct Answer
    C. C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
    Explanation
    The most appropriate option is C because it utilizes S3 with standard redundancy to store and serve the scanned files, which ensures durability and availability. CloudSearch is used for query processing, providing an efficient search functionality. Elastic Beanstalk is used to host the website across multiple availability zones, improving availability and scalability. This architecture meets the organization's requirements for cost efficiency, availability, and durability.


  • 35. 

    A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose two.)

    • A.

      A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.

    • B.

      B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.

    • C.

      C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.

    • D.

      D. The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials; the application can use the IAM temporary credentials to access the appropriate S3 bucket.

    • E.

      E. The application authenticates against the IAM Security Token Service using the LDAP credentials; the application uses those temporary AWS security credentials to access the appropriate S3 bucket.

    Correct Answer(s)
    B. B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
    C. C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
    Explanation
    The correct answers are B and C.

    Option B suggests that the application authenticates against LDAP and retrieves the name of an IAM role associated with the user. It then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials obtained to access the appropriate S3 bucket.

    Option C suggests developing an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application can then call the identity broker to obtain IAM federated user credentials with access to the appropriate S3 bucket.

    Both options involve authenticating against LDAP and using IAM Security Token Service to obtain temporary credentials or federated user credentials for accessing the S3 bucket specific to each user.
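    A simplified sketch of the broker step in option B is shown below: after LDAP authentication succeeds (not shown), the broker assumes the role mapped to the user and attaches a session policy that narrows access to that user's S3 prefix. The bucket name, role mapping, and helper function are hypothetical.

```python
import json
import boto3

sts = boto3.client("sts")

def credentials_for(username: str, role_arn: str):
    """Hypothetical broker step: called only after the user has passed LDAP authentication."""
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::corp-user-data/{username}/*",  # per-user keyspace
        }],
    }
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=username,
        Policy=json.dumps(session_policy),   # session policy further restricts the role's permissions
        DurationSeconds=3600,
    )
    return resp["Credentials"]               # temporary AccessKeyId / SecretAccessKey / SessionToken
```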


  • 36. 

    You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported platforms are Windows, macOS, iOS, and Android. Separate sticky session and SSL certificate setups are required for the different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup?

    • A.

      A. Set up a hybrid architecture to handle session state and SSL certificates on-premises, and separate EC2 instance groups running web applications for the different platform types running in a VPC.

    • B.

      B. Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.

    • C.

      C. Set up two ELBs. The first ELB handles SSL certificates for all platforms and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.

    • D.

      D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

    Correct Answer
    D. D. Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.
    Explanation
    The most cost-effective and performance-efficient architecture setup would be to assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application. Each ELB would be dedicated to a specific platform type, allowing for separate session stickiness and SSL termination to be handled at the ELBs. This setup ensures that the load is distributed among multiple instances, while also providing the necessary security and session management for each platform type.


  • 37. 

    Your company has an on-premises multi-tier PHP web application which recently experienced downtime due to a large burst in web traffic following a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking to find ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic. The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server running a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?

    • A.

      A. Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to failover to the S3 hosted website.

    • B.

      B. Hybrid environment: Create an AMI, which can be used to launch web servers in EC2. Create an Auto Scaling group, which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.

    • C.

      C. Offload traffic from the on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in the cache.

    • D.

      D. Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and setup replication between the RDS instance and on-premises MySQL server to migrate the database.

    Correct Answer
    C. C. Offload traffic from the on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in the cache.
    Explanation
    Option C suggests setting up a CloudFront distribution to offload traffic from the on-premises environment. CloudFront can cache objects from a custom origin, reducing the load on the on-premises servers. By customizing the object cache behavior and setting a TTL (Time-to-Live), objects can be stored in the cache for a specified period, further reducing the load on the servers. This approach helps improve the application's ability to handle unexpected increases in traffic by distributing the load and reducing the strain on the on-premises infrastructure.
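    As a rough sketch, the caching part of option C boils down to a CloudFront distribution whose custom origin is the existing on-premises site and whose default cache behavior carries the chosen TTLs. The fragment below is a Python dict mirroring part of the DistributionConfig that would be passed to CloudFront; the origin domain and TTL values are hypothetical, and other required fields are omitted for brevity.

```python
# Partial sketch of a CloudFront DistributionConfig; not a complete, deployable configuration.
distribution_config_fragment = {
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "onprem-origin",
        "DomainName": "origin.example.com",        # hypothetical on-premises origin
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "match-viewer",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "onprem-origin",
        "ViewerProtocolPolicy": "allow-all",
        "MinTTL": 0,
        "DefaultTTL": 300,                         # cache objects for 5 minutes by default
        "MaxTTL": 3600,
    },
}
```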


  • 38. 

    Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS, rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data-rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA, across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?

    • A.

      A. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of G2 instances in a placement group.

    • B.

      B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group.

    • C.

      C. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

    • D.

      D. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

    Correct Answer
    B. B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group.
  • 39. 

    You're running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?

    • A.

      A. Use Storage Gateway and configure it to use Gateway Cached volumes.

    • B.

      B. Configure your backup software to use S3 as the target for your data backups.

    • C.

      C. Configure your backup software to use Glacier as the target for your data backups.

    • D.

      D. Use Storage Gateway and configure it to use Gateway Stored volumes.

    Correct Answer
    D. D. Use Storage Gateway and configure it to use Gateway Stored volumes.
  • 40. 

    You require the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?

    • A.

      A. Create smaller files on Amazon S3.

    • B.

      B. Add additional cc2.8xlarge instances by introducing a task group.

    • C.

      C. Use smaller instances that have higher aggregate I/O performance.

    • D.

      D. Create fewer, larger files on Amazon S3.

    Correct Answer
    C. C. Use smaller instances that have higher aggregate I/O performance.
    Explanation
    Using smaller instances that have higher aggregate I/O performance would be the most cost-efficient way to reduce the runtime of the job. Since the cc2.8xlarge instances have mostly idle CPUs, using smaller instances with higher I/O performance would utilize the available resources more effectively and reduce the overall runtime of the job. This would result in cost savings as the job would be completed faster without the need for additional instances or unnecessary file management.


  • 41. 

    Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?

    • A.

      A. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

    • B.

      B. Use reduced redundancy storage (RRS) for PDF and .csv data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.

    • C.

      C. Use reduced redundancy storage (RRS) for PDF and .csv data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

    • D.

      D. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift

    Correct Answer
    D. D. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift
  • 42. 

    You are the new IT architect in a company that operates a mobile sleep-tracking application. When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America.

    • A.

      A. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose two.)

    • B.

      B. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.

    • C.

      C. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.

    • D.

      D. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.

    • E.

      E. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.

    • F.

      F. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.

    Correct Answer(s)
    A. A. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose two.)
    C. C. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
    Explanation
    Option A is recommended because creating a new DynamoDB table each day and dropping the previous day's table can help reduce storage costs. Option C is also recommended because accessing DynamoDB directly instead of storing JSON files on S3 can reduce storage costs and improve performance. Option B is not recommended as it does not address cost optimization. Option D and E are not relevant to cost optimization. Option F is not recommended as it suggests replacing both DynamoDB and S3 with Redshift, which may not be necessary and can potentially increase costs.


  • 43. 

    Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise, and if it is required you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

    • A.

      A. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

    • B.

      B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

    • C.

      C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

    • D.

      D. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.

    Correct Answer
    C. C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
    Explanation
    The correct answer is C because it suggests using Elastic Transcoder to transcode the high-resolution MP4 videos to HLS format, which is required for the company-provided tablets. The videos are then stored in S3, which can be cost-effective and scalable. Lifecycle Management is used to archive the original files to Glacier after a few days, ensuring long-term storage without incurring high costs. Finally, CloudFront is used to serve the HLS transcoded videos from S3, providing global distribution and high availability. This architecture meets the requirements of cost-efficiency, high availability, and quality video delivery.
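    A rough boto3 sketch of the two moving parts in option C is shown below; the bucket name, key prefixes, pipeline ID, and preset ID are hypothetical placeholders (an HLS system preset ID would normally be looked up with list_presets()).

```python
import boto3

s3 = boto3.client("s3")
transcoder = boto3.client("elastictranscoder")

BUCKET = "training-videos-example"          # hypothetical bucket name
PIPELINE_ID = "example-pipeline-id"         # hypothetical Elastic Transcoder pipeline
HLS_PRESET_ID = "example-hls-preset-id"     # look up a real HLS preset via transcoder.list_presets()

# Archive the original high-resolution uploads to Glacier after a few days
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-originals",
        "Filter": {"Prefix": "originals/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
    }]},
)

# Kick off an HLS transcode for a newly uploaded MP4; the output lands back in S3 for CloudFront
transcoder.create_job(
    PipelineId=PIPELINE_ID,
    Input={"Key": "originals/2024-01-course.mp4"},
    OutputKeyPrefix="hls/2024-01-course/",
    Outputs=[{"Key": "video", "PresetId": HLS_PRESET_ID, "SegmentDuration": "10"}],
)
```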


  • 44. 

    You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?

    • A.

      A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic with a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.

    • B.

      B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.

    • C.

      C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.

    • D.

      D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.

    Correct Answer
    C. C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
  • 45. 

    An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require that any outside access to their environment must conform to the principle of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?

    • A.

      A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.

    • B.

      B. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.

    • C.

      C. Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.

    • D.

      D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.

    Correct Answer
    C. C. Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
    Explanation
    Option C is the correct answer because it allows the SaaS provider's account to assume the IAM role, ensuring that the credentials used by the SaaS vendor cannot be used by any other third party. Additionally, by assigning a policy to the IAM role that allows only the actions required by the SaaS application, the enterprise's internal security policies of least privilege are met. This solution provides the necessary access to the SaaS application while maintaining security controls. Options A and B do not meet the requirement of ensuring that the credentials cannot be used by any other third party. Option D is not suitable as it is specific to EC2 instances and does not address the requirement of the SaaS application.
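    A minimal boto3 sketch of option C is shown below. The SaaS vendor's account ID and the external ID are hypothetical; the external ID condition is the usual control that stops the same role from being assumed on behalf of any other third party (the confused-deputy problem).

```python
import json
import boto3

iam = boto3.client("iam")

SAAS_ACCOUNT_ID = "111122223333"    # hypothetical SaaS vendor account
EXTERNAL_ID = "saas-unique-token"   # hypothetical value shared only with the vendor

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SAAS_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam.create_role(RoleName="SaaSDiscoveryRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Least privilege: the SaaS app only needs to describe EC2 resources
iam.put_role_policy(
    RoleName="SaaSDiscoveryRole",
    PolicyName="Ec2DescribeOnly",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": ["ec2:Describe*"],
                       "Resource": "*"}],
    }),
)
```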


  • 46. 

    You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider?

    • A.

      A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.

    • B.

      B. Implement security groups and configure outbound rules to only permit traffic to software depots.

    • C.

      C. Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only.

    • D.

      D. Implement network access control lists for all specific destinations, with an implicit deny as a rule.

    Correct Answer
    A. A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
    Explanation
    Option A suggests configuring a web proxy server in the VPC and enforcing URL-based rules for outbound access while removing default routes. This option allows VPC instances to access software depots and distributions on the Internet for product updates while explicitly denying any other outbound connections to hosts on the Internet. By using URL-based rules, only specific URLs for the depots and distributions will be allowed, providing a more secure data leak prevention solution. Removing default routes ensures that all outbound traffic goes through the web proxy server, allowing for better control and monitoring of the connections.


  • 47. 

    An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instances access to the DynamoDB tables without exposing API credentials?

    • A.

      A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile.

    • B.

      B. Use the Parameters section in the CloudFormation template to have the user input access and secret keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table.

    • C.

      C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.

    • D.

      D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the access and secret keys, and pass them to the application instance through user data.

    Correct Answer
    C. C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.
    Explanation
    Option C is the correct answer because it suggests creating an Identity and Access Management (IAM) Role with the necessary permissions to access the DynamoDB tables. By referencing this Role in the instance profile property of the application instance, the application instance can access the DynamoDB tables without exposing API credentials. This approach ensures secure access to the DynamoDB tables while maintaining the principle of least privilege.
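    The pattern in option C looks roughly like the template fragment below, expressed here as a Python dict for readability; the AMI ID is a placeholder, and the DynamoDB table resource (AppTable) is assumed to be defined elsewhere in the same template.

```python
import json

# Minimal CloudFormation fragment: role -> instance profile -> referenced from the instance,
# so no access keys are ever embedded in the template or on the instance.
template_fragment = {
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{"Effect": "Allow",
                                   "Principal": {"Service": "ec2.amazonaws.com"},
                                   "Action": "sts:AssumeRole"}],
                },
                "Policies": [{
                    "PolicyName": "DynamoAccess",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{"Effect": "Allow",
                                       "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                                  "dynamodb:Query", "dynamodb:UpdateItem"],
                                       # AppTable is assumed to be defined elsewhere in the template
                                       "Resource": {"Fn::GetAtt": ["AppTable", "Arn"]}}],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",                        # hypothetical AMI
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
            },
        },
    },
}

print(json.dumps(template_fragment, indent=2))
```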


  • 48. 

    An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances. The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance ID. In addition, X.509 certificates must be signed by the customer's key management service in order to be trusted for authentication. Which of the following configurations will support these requirements?

    • A.

      A. Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances' bootstrap get the certificate from Amazon S3 upon first boot.

    • B.

      B. Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signature request with the instance's assigned instance ID to the key management service for signature.

    • C.

      C. Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the Key management service generate a signed certificate and send it directly to the newly launched instance.

    • D.

      D. Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature that contains the specific instance ID.

    Correct Answer
    A. A. Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances' bootstrap get the certificate from Amazon S3 upon first boot.
  • 49. 

    Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members?

    • A.

      A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.

    • B.

      B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.

    • C.

      C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

    • D.

      D. Use your on-premises SAML2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.

    Correct Answer
    C. C. Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
    Explanation
    Option C is the correct answer because it suggests using an on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint. This means that the NOC members can use their existing credentials from the on-premises IDP to access the AWS Management Console without creating new IAM users or signing in again. This allows for seamless access and administration of Amazon EC2 instances as needed.
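    Console SSO itself goes through the SAML sign-in endpoint rather than code, but the same federation can be exercised programmatically, which is a handy way to verify the IdP-to-role mapping. The sketch below uses hypothetical role and provider ARNs and a placeholder assertion produced by the on-premises IdP.

```python
import boto3

sts = boto3.client("sts")

# Placeholder: the base64-encoded SAML response produced by the on-premises SAML 2.0 IdP
saml_assertion = "<base64-encoded-SAML-response>"

resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/NOC-EC2-Admin",          # hypothetical role
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorpIdP",  # hypothetical IdP entry in IAM
    SAMLAssertion=saml_assertion,
)
creds = resp["Credentials"]          # temporary credentials tied to the federated NOC member
print(creds["Expiration"])
```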


  • 50. 

    You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure? (Choose two.)

    • A.

      A. Configure ELB with TCP listeners on TCP/443, and place the web servers behind it.

    • B.

      B. Configure your web servers with EIPs. Place the web servers in a Route53 record set and configure health checks against all web servers.

    • C.

      C. Configure ELB with HTTPS listeners, and place the Web servers behind it.

    • D.

      D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.

    Correct Answer(s)
    A. A. Configure ELB with TCP listeners on TCP/443, and place the web servers behind it.
    B. B. Configure your web servers with EIPs. Place the web servers in a Route53 record set and configure health checks against all web servers.
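    The key idea behind answer A is that a TCP/443 listener makes the ELB a pass-through, so the TLS handshake, including the client certificate exchange, terminates on the web servers themselves. A minimal boto3 sketch is below; the load balancer name and subnet IDs are hypothetical.

```python
import boto3

elb = boto3.client("elb")    # Classic ELB: a TCP pass-through keeps the TLS session end-to-end

# With a TCP/443 listener the load balancer never terminates TLS, so the client
# certificate handshake happens directly against the web servers behind it.
elb.create_load_balancer(
    LoadBalancerName="client-cert-web",                 # hypothetical name
    Listeners=[{"Protocol": "TCP", "LoadBalancerPort": 443,
                "InstanceProtocol": "TCP", "InstancePort": 443}],
    Subnets=["subnet-aaaaaaaa", "subnet-bbbbbbbb"],     # hypothetical subnets in two AZs
)
```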

