Create a private subnet for the Amazon EC2 instances and a public subnet for the Amazon RDS cluster.
Create a private subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster.
Create a public subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster.
Create a public subnet for the Amazon EC2 instances and a public subnet for the Amazon RDS cluster.
Migrate the data on the Amazon EBS volume to an SSD-backed volume.
Change the EC2 instance type to one with EC2 instance store volumes.
Migrate the data on the EBS volume to provisioned IOPS SSD (io1).
Change the EC2 instance type to one with burstable performance.
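For reference, a minimal boto3 sketch of the io1 migration option above; with Elastic Volumes the type can be changed in place, and the volume ID and IOPS figure here are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Change the volume type in place; no data copy is required.
# Volume ID and provisioned IOPS are illustrative values.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",
    VolumeType="io1",
    Iops=8000,
)
```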
Modify the Redshift cluster and configure cross-region snapshots to the other region.
Modify the Redshift cluster to take snapshots of the Amazon EBS volumes each day, sharing those snapshots with the other region.
Modify the Redshift cluster and configure the backup and specify the Amazon S3 bucket in the other region.
Modify the Redshift cluster to use AWS Snowball in export mode with data delivered to the other region.
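A minimal boto3 sketch of the cross-region snapshot option above; the cluster identifier, destination Region, and retention period are placeholder assumptions:

```python
import boto3

redshift = boto3.client("redshift")

# Automated snapshots of the cluster are then copied to the
# destination Region. All values are illustrative.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="eu-west-1",
    RetentionPeriod=7,  # days to keep copied snapshots
)
```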
Purchase Reserved Instances to run all containers. Use Auto Scaling groups to schedule jobs.
Host a container management service on Spot Instances. Use Reserved Instances to run Docker containers.
Use Amazon ECS orchestration and Auto Scaling groups: one with Reserved Instances, one with Spot Instances.
Use Amazon ECS to manage container orchestration. Purchase Reserved Instances to run all batch workloads at the same time.
Store the AWS Access Key ID/Secret Access Key combination in software comments.
Assign an IAM user to the Amazon EC2 instance.
Assign an IAM role to the Amazon EC2 instance.
Enable multi-factor authentication for the AWS root account.
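A minimal boto3 sketch of the IAM-role option above; the instance profile name and instance ID are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Attach an instance profile (which wraps the IAM role) to a running
# instance; the role's temporary credentials are then served through
# instance metadata instead of hard-coded keys. Values are placeholders.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-dynamodb-role"},
    InstanceId="i-0123456789abcdef0",
)
```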
Amazon Aurora
Amazon Redshift
Amazon DynamoDB
Amazon RDS MySQL
Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3.
Create an Auto Scaling group of Amazon EC2 servers behind ELBs to write the data into Amazon RDS.
Create an Amazon SQS queue, and have the machines write to the queue.
Create an Amazon EC2 server farm behind an ELB to store the data in Amazon EBS Cold HDD volumes.
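As an illustration of the Firehose option above, a minimal boto3 sketch; the delivery stream name and record shape are assumptions:

```python
import json
import boto3

firehose = boto3.client("firehose")

# Each reading goes to a Firehose delivery stream, which buffers
# and delivers batches to S3. Stream name and payload are placeholders.
record = {"sensor_id": "m-42", "temperature_c": 71.3}
firehose.put_record(
    DeliveryStreamName="sensor-data-to-s3",
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
```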
Create a network ACL on the web server’s subnet, and allow HTTPS inbound and MySQL outbound. Place both database and web servers on the same subnet.
Open an HTTPS port on the security group for web servers and set the source to 0.0.0.0/0. Open the MySQL port on the database security group and attach it to the MySQL instance. Set the source to Web Server Security Group.
Create a network ACL on the web server’s subnet, and allow HTTPS inbound, and specify the source as 0.0.0.0/0. Create a network ACL on a database subnet, allow MySQL port inbound for web servers, and deny all outbound traffic.
Open the MySQL port on the security group for web servers and set the source to 0.0.0.0/0. Open the HTTPS port on the database security group and attach it to the MySQL instance. Set the source to Web Server Security Group.
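A minimal boto3 sketch of the security-group option above (HTTPS open to the world on the web tier, MySQL restricted to the web-tier security group); all group IDs are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Web tier: HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0web1111111111111",  # placeholder web-tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Database tier: MySQL allowed only from the web-tier security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db22222222222222",  # placeholder DB-tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web1111111111111"}],
    }],
)
```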
Create a read replica of the database.
Provision a new RDS instance as a secondary master.
Configure the database to be in multiple regions.
Increase the number of provisioned storage IOPS.
Amazon EC2
Amazon API Gateway
AWS Elastic Beanstalk
Amazon EC2 Container Service
Change the Auto Scaling group's scale-out event to scale based on network utilization.
Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30 AM every morning.
Use Reserved Instances to ensure the system has reserved the right amount of capacity for the scale-up events.
Permanently keep the steady-state number of instances needed at 9:00 AM to guarantee available resources, but leverage Spot Instances.
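A minimal boto3 sketch of the scheduled-action option above; the group name, sizes, and cron expression are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the morning spike. The recurrence is a cron
# expression evaluated in UTC, so adjust for your time zone.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",          # placeholder group name
    ScheduledActionName="pre-morning-scale-out",
    Recurrence="30 8 * * *",                 # every day at 08:30
    MinSize=4,
    DesiredCapacity=8,
)
```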
Deploy two instances in each of three Availability Zones.
Deploy two instances in each of two Availability Zones.
Deploy four instances in each of two Availability Zones.
Deploy one instance in each of three Availability Zones.
Upload directly to S3 using a pre-signed URL.
Upload to a second bucket, and have a Lambda event copy the image to the primary bucket.
Upload to a separate Auto Scaling group of servers behind an ELB Classic Load Balancer, and have them write to the Amazon S3 bucket.
Expand the web server fleet with Spot Instances to provide the resources to handle the images.
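A minimal boto3 sketch of the pre-signed-URL option above; the bucket and key names are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# The application hands this URL to the browser, which PUTs the image
# straight to S3 with no web-server hop. Bucket/key are placeholders.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "photo-uploads", "Key": "incoming/cat.jpg"},
    ExpiresIn=300,  # URL valid for 5 minutes
)
print(url)  # e.g. curl -X PUT --upload-file cat.jpg "<url>"
```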
Store data in a filesystem backed by Amazon Elastic File System (EFS).
Store data in Amazon S3 and use a third-party solution to expose Amazon S3 as a filesystem to the database server.
Store data in Amazon DynamoDB and emulate relational database semantics.
Stripe data across multiple Amazon EBS volumes using RAID 0.
Amazon Redshift
Amazon DynamoDB
Amazon RDS MySQL
Amazon Aurora
Randomize a key name prefix.
Store the event data in separate buckets.
Randomize the key name suffix.
Use Amazon S3 Transfer Acceleration.
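A small Python sketch of the key-prefix option above, reflecting the older S3 partitioning guidance this question is based on; the key layout is an assumption:

```python
import secrets

def randomized_key(event_id: str) -> str:
    # A short random hex prefix spreads keys across S3 partitions,
    # per the pre-2018 S3 performance guidance this question reflects.
    return f"{secrets.token_hex(2)}/events/{event_id}.json"

print(randomized_key("2020-02-24-0001"))  # e.g. '9f1c/events/...'
```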
Auto Scaling group
AWS CloudTrail
ELB Classic Load Balancer
Amazon DynamoDB
Amazon ElastiCache
Amazon CloudFront with on-premises servers as the origin
ELB Application Load Balancer
Amazon Route 53 latency-based routing
Amazon EFS to store and serve static files
Amazon DynamoDB
Amazon Aurora MySQL
Amazon RDS MySQL
Amazon Redshift
Use an Amazon Redshift database. Copy the production database into Redshift and allow the team to query it.
Use an Amazon RDS read replica of the production database and allow the team to query against it.
Use multiple Amazon EC2 instances running replicas of the production database, placed behind a load balancer.
Use an Amazon DynamoDB table to store a copy of the data.
Migrate the database to MySQL.
Use Amazon Redshift to analyze the queries.
Integrate Amazon ElastiCache into the application.
Use a Lambda-triggered request to the backend database.
Amazon SNS
AWS Lambda with sequential dispatch
A FIFO queue in Amazon SQS
A standard queue in Amazon SQS
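A minimal boto3 sketch of the FIFO-queue option above; the queue and message-group names are placeholder assumptions:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues preserve ordering and deduplicate; the queue name
# must end in ".fifo". All names here are illustrative.
queue = sqs.create_queue(
    QueueName="transactions.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"op": "debit", "amount": 100}',
    MessageGroupId="account-42",  # ordering is guaranteed per group
)
```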
Move some Amazon EC2 instances to a subnet in a different Availability Zone.
Move the website to Amazon S3.
Change the ELB to an Application Load Balancer.
Move some Amazon EC2 instances to a subnet in the same Availability Zone.
An egress-only internet gateway
A NAT gateway
A custom NAT instance
A VPC endpoint
Scheduled Reserved Instances
Convertible Reserved Instances
Standard Reserved Instances
Spot Instances
VPC peering connection
NAT gateway
VPC endpoint
AWS Direct Connect
Using security groups that reference the security groups of the other application
Using security groups that reference the application servers' IP addresses
Using Network Access Control Lists to allow/deny traffic based on application IP addresses
Migrating the applications to separate subnets from each other
Default Amazon CloudWatch metrics.
Custom Amazon CloudWatch metrics.
Amazon Inspector resource monitoring.
Default monitoring of Amazon EC2 instances.
Configure the database security group to allow database traffic from the application server IP addresses.
Configure the database security group to allow database traffic from the application server security group.
Configure the database subnet network ACL to deny all inbound non-database traffic from the application-tier subnet.
Configure the database subnet network ACL to allow inbound database traffic from the application-tier subnet.
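A minimal boto3 sketch of the network ACL option above; the ACL ID and application-tier subnet CIDR are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic into the database subnet only from the
# application-tier subnet. ID and CIDR values are illustrative.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",               # TCP
    RuleAction="allow",
    Egress=False,               # inbound rule
    CidrBlock="10.0.1.0/24",    # application-tier subnet
    PortRange={"From": 3306, "To": 3306},
)
```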
Use Amazon CloudWatch Events to invoke an AWS Lambda function that can launch On-Demand Instances.
Regularly store data from the application on Amazon DynamoDB. Increase the maximum number of instances in the AWS Auto Scaling group.
Manually place a bid for additional Spot Instances at a higher price in the same AWS Region and Availability Zone.
Ensure that the launch configuration references the latest Amazon Machine Image for the application.
Amazon Kinesis Data Firehose
Amazon SQS
Amazon Redshift
Amazon SNS
Amazon DynamoDB
Encrypt the files on the client side and store the files on Amazon Glacier, then decrypt the reports on the client side.
Move the files to Amazon ElastiCache and provide a username and password for downloading the reports.
Specify the use of AWS KMS server-side encryption at the time of an object creation on Amazon S3.
Store the files on Amazon S3 and use the application to generate S3 pre-signed URLs to users.
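A minimal boto3 sketch combining the two S3-based options above (SSE-KMS at object creation, pre-signed URLs for download); the bucket, key, and report bytes are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Store the report encrypted with KMS at write time...
report = b"%PDF-1.4 ..."  # placeholder report bytes
s3.put_object(
    Bucket="financial-reports",
    Key="2020/q1/summary.pdf",
    Body=report,
    ServerSideEncryption="aws:kms",
)

# ...and hand each user a short-lived download link.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "financial-reports", "Key": "2020/q1/summary.pdf"},
    ExpiresIn=600,  # link valid for 10 minutes
)
```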
Use Amazon SQS.
Deploy Multi-AZ RDS MySQL.
Configure Amazon RDS with additional read replicas.
Migrate from MySQL to RDS Microsoft SQL Server.
Use ephemeral volumes to store the log files.
Use a scheduled Amazon CloudWatch Event to take regular Amazon EBS snapshots.
Use an Amazon CloudWatch agent to push the logs to Amazon CloudWatch Logs.
Use AWS CloudTrail to pull the logs from the Amazon EC2 instances.
Create a private subnet for the Amazon EC2 instances and a public subnet for the Amazon RDS cluster.
Create a private subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster.
Create a public subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster.
Create a public subnet for the Amazon EC2 instances and a public subnet for the Amazon RDS cluster.
Use Amazon Cognito Identity with SMS-based MFA.
Edit AWS IAM policies to require MFA for all users.
Federate IAM against corporate AD that requires MFA.
Use Amazon API Gateway and require SSE for photos.
The instance with the oldest launch configuration.
The instance in the Availability Zone that has the most instances.
The instance closest to the next billing hour.
The oldest instance in the group.
Amazon RDS
Amazon DynamoDB
Amazon Redshift
AWS Data Pipeline
Store an access key on the Amazon EC2 instance with rights to the DynamoDB table.
Attach an IAM user to the Amazon EC2 instance.
Create an IAM role with permissions to write to the DynamoDB table.
Attach an IAM role to the Amazon EC2 instance.
Attach an IAM policy to the Amazon EC2 instance.
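A minimal boto3 sketch of the role-based option above; the role name, table ARN, and account ID are placeholder assumptions:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy lets EC2 assume the role. All names are illustrative.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="orders-writer",
                AssumeRolePolicyDocument=json.dumps(trust))

# Least-privilege inline policy: write access to one table only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}
iam.put_role_policy(RoleName="orders-writer",
                    PolicyName="orders-table-write",
                    PolicyDocument=json.dumps(policy))
```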
Amazon EC2 instance storage
Amazon EBS General Purpose SSD (gp2) storage
Amazon S3
Amazon EBS Provisioned IOPS SSD (io1) storage
Create a read replica of the primary database and deploy it in a different AWS Region.
Enable Multi-AZ to create a standby database in a different Availability Zone.
Enable Multi-AZ to create a standby database in a different AWS Region.
Create a read replica of the primary database and deploy it in a different Availability Zone.
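A minimal boto3 sketch of the cross-Region read-replica option above; identifiers and Regions are placeholder assumptions, and note that a cross-Region source must be given as a full ARN:

```python
import boto3

# Run the call in the destination Region. All values are illustrative.
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="appdb-replica-west",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:appdb-primary"
    ),
)
```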
The application is reading parts of objects from Amazon S3 using a range header.
The application is reading objects from Amazon S3 using parallel object requests.
The application is updating records by writing new objects with unique keys.
The application is updating records by overwriting existing objects with the same keys.
AWS Snowball storage for the legacy application until the application can be re-architected.
AWS Storage Gateway in cached mode for the legacy application storage to write data to Amazon S3.
AWS Storage Gateway in stored mode for the legacy application storage to write data to Amazon S3.
An Amazon S3 volume mounted on the legacy application server locally using the File Gateway service.
Configure a NAT gateway in a public subnet and route all traffic to Amazon Kinesis through the NAT gateway.
Configure a gateway VPC endpoint for Kinesis and route all traffic to Kinesis through the gateway VPC endpoint.
Configure an interface VPC endpoint for Kinesis and route all traffic to Kinesis through the interface VPC endpoint.
Configure an AWS Direct Connect private virtual interface for Kinesis and route all traffic to Kinesis through the virtual interface.
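A minimal boto3 sketch of the interface-endpoint option above; the VPC, subnet, and security-group IDs are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# An interface endpoint places Kinesis ENIs inside the private
# subnets, so traffic never leaves the VPC. IDs are illustrative.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    SecurityGroupIds=["sg-0ccc3333"],
    PrivateDnsEnabled=True,  # default Kinesis DNS name resolves privately
)
```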
Amazon API Gateway and AWS Lambda
Elastic Load Balancing with Auto Scaling groups and Amazon EC2
Amazon API Gateway and Amazon EC2
Amazon CloudFront and AWS Lambda
The Spot Instance request type must be one-time.
The Spot Instance request type must be persistent.
The root volume must be an Amazon EBS volume.
The root volume must be an instance store volume.
The launch configuration is changed.
Use Amazon ElastiCache to provide a caching layer
Use Amazon DynamoDB Accelerator (DAX) to provide a caching layer
Obtain Reserved Capacity for Amazon DynamoDB to manage the increased number of queries