Question 1 of 65
You have a customer-facing application running on multiple M3 instances across two AZs. These instances belong to an Auto Scaling group configured to scale out as load increases. Reviewing your CloudWatch metrics, you notice that at specific times every day the Auto Scaling group runs many more instances than usual. Despite this, one of your customers complains that the application is very slow to respond during those same periods. The application reads from and writes to a DynamoDB table provisioned with 400 Write Capacity Units and 400 Read Capacity Units. The primary key is the company ID, and the table stores roughly 20 TB of data. Which solution would solve the issue in a scalable and cost-effective manner?
A. Use AWS Data Pipeline to migrate your DynamoDB table to a new DynamoDB table with a different primary key that evenly distributes the dataset across partitions.
B. Add a caching layer in front of the web application with ElastiCache (Memcached or Redis).
C. DynamoDB is not a good fit for this use case. Instead, create a data pipeline to move the data from DynamoDB to Amazon RDS, which is better suited to this workload.
D. Double the number of Read and Write Capacity Units; the DynamoDB table is being throttled when customers from the same company all use it at the same time.
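The scenario in option A describes the classic hot-partition fix: because provisioned throughput is divided across partitions, a key like company ID concentrates one tenant's traffic on a single partition. One common redesign is write sharding, where a deterministic suffix spreads a company's items across several partition key values. A minimal sketch of that idea (the shard count, key format, and function names are illustrative assumptions, not part of the question):

```python
import hashlib

# Hypothetical shard count; more shards spread load across more partitions.
NUM_SHARDS = 10

def sharded_partition_key(company_id: str, record_id: str) -> str:
    """Build a partition key like 'ACME#3' so one company's items land
    on up to NUM_SHARDS partitions instead of a single hot one.

    The suffix is derived from a hash of the record ID, so the same
    record always maps to the same shard (reads stay deterministic).
    """
    digest = hashlib.md5(record_id.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % NUM_SHARDS
    return f"{company_id}#{shard}"
```

To read back all of a company's items, an application would query each of the `NUM_SHARDS` key values (`ACME#0` through `ACME#9`) and merge the results, trading a little read fan-out for evenly distributed write throughput.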