Browse DBS Questions

Study all 100 questions at your own pace with detailed explanations

Total: 100 questions · Page 7 of 10
Question 61 of 100

Your company releases new features with high frequency while demanding high application availability. As part of the application’s A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real-time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?

A. Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
B. Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
C. Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
D. Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.
E. Store the logs locally on each instance and then have an Amazon Kinesis stream pull the logs for live analysis.
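For context on the Kinesis-based approach in option C, here is a minimal boto3 sketch of how an instance might ship a log line to a stream for near-real-time consumers; the stream name, region, and record shape are illustrative assumptions, not part of the question.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def ship_log_line(instance_id: str, line: str) -> None:
    """Push one application log line into a Kinesis stream (hypothetical name)."""
    record = {"instance_id": instance_id, "message": line}
    kinesis.put_record(
        StreamName="app-logs",             # hypothetical stream name
        Data=json.dumps(record).encode(),  # Kinesis expects bytes
        PartitionKey=instance_id,          # spread instances across shards
    )
```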
Question 62 of 100

Your company is developing a next-generation pet collar that collects biometric information to help families promote healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health-trending information back to pet owners and veterinarians via a web portal. Management has tasked you with architecting the collection platform and ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data, ensure that processing of the biometric data is highly durable, elastic, and parallel, and persist the results of the analytic processing for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?

A. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.
B. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.
C. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
D. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.
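To picture what collar-to-Kinesis ingestion could look like in practice, below is a hedged boto3 sketch that batches biometric payloads into one PutRecords call; the stream name, payload shape, and `collar_id` field are assumptions for illustration only.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def push_readings(readings: list[dict]) -> int:
    """Batch JSON biometric payloads into a single PutRecords call."""
    entries = [
        {
            "Data": json.dumps(r).encode(),
            "PartitionKey": r["collar_id"],  # one shard key per collar
        }
        for r in readings
    ]
    resp = kinesis.put_records(StreamName="pet-biometrics", Records=entries)
    return resp["FailedRecordCount"]  # real code would retry failed entries
```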
Question 63 of 100

A social media customer has data from different data sources including RDS running MySQL, Redshift, and Hive on EMR. To support better analysis, the customer needs to be able to analyze data from different data sources and to combine the results. What is the most cost-effective solution to meet these requirements?

A. Load all the data from the different databases/warehouses to S3. Use the Redshift COPY command to copy the data to Redshift for analysis.
B. Install Presto on the EMR cluster where Hive sits. Configure the MySQL and PostgreSQL connectors to select from the different data sources in a single query.
C. Spin up an Elasticsearch cluster. Load data from all three data sources and use Kibana to analyze.
D. Write a program running on a separate EC2 instance to run queries against the three different systems. Aggregate the results after getting the responses from all three systems.
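As a sketch of the Presto approach described in option B, the snippet below uses the presto-python-client (`prestodb`) package to run a single query across a Hive catalog and a MySQL catalog; the host, port, catalog, and table names are illustrative assumptions.

```python
import prestodb

# Connect to the Presto coordinator (Presto on EMR listens on 8889 by default).
conn = prestodb.dbapi.connect(
    host="emr-master.internal",  # hypothetical EMR master node
    port=8889,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()

# One query spanning Hive on EMR and MySQL on RDS via Presto connectors.
cur.execute("""
    SELECT h.user_id, h.event_count, m.signup_date
    FROM hive.default.events h
    JOIN mysql.app.users m ON m.id = h.user_id
""")
rows = cur.fetchall()
```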
Question 64 of 100

Management has requested a comparison of total sales performance in the five North American regions in January. They're hoping to determine how to allocate a budget to regions based on performance in that single period. What sort of visualization do you use in Amazon QuickSight?

A. Bar chart
B. Line chart
C. Stacked area chart
D. Histogram
Question 65 of 100

A new client is requesting a tool that will provide fast query performance for enterprise reporting and business intelligence workloads, particularly those involving extremely complex SQL with multiple joins and sub-queries. They also want the ability to give analysts access to a central system through traditional SQL clients that allow them to explore and familiarize themselves with the data. What solution do you initially recommend they investigate?

B. Redshift
C. Athena
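Because Amazon Redshift speaks the PostgreSQL wire protocol, analysts can connect with ordinary SQL clients or drivers. A minimal sketch with psycopg2, where the cluster endpoint, database, credentials, and table are placeholders:

```python
import psycopg2

# Connect to a Redshift cluster endpoint (placeholder values throughout).
conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,            # Redshift's default port
    dbname="analytics",
    user="analyst",
    password="...",       # prefer IAM credentials or Secrets Manager in practice
)

with conn.cursor() as cur:
    cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region;")
    for region, total in cur.fetchall():
        print(region, total)
```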
Question 66 of 100 (Multiple Choice)

A company stores data in an S3 bucket. Some of the data contains sensitive information. They need to ensure that the bucket complies with PCI DSS (Payment Card Industry Data Security Standard) compliance standards. Which of the following should be implemented to fulfill this requirement? (Select TWO)

A. Enable server-side encryption (SSE) for the bucket.
B. Enable versioning for the bucket.
C. Ensure that access to the bucket is only given to one IAM role.
D. Ensure that objects in the bucket are requested only via HTTPS.
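For illustration, here is a boto3 sketch of the two controls described in options A and D: default server-side encryption on the bucket and a bucket policy that denies requests not made over HTTPS. The bucket name is a placeholder.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-pci-bucket"  # hypothetical bucket name

# Default server-side encryption for all new objects (option A).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Deny any request that is not made over HTTPS (option D).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```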
Question 67 of 100

Your company sells consumer devices and needs to record the first activation of all sold devices. Devices are not considered activated until the information is written to a persistent database. Activation data is very important for your company and must be analyzed daily with a MapReduce job. The execution time of the data analysis process must be less than three hours per day. Devices are usually sold evenly during the year, but when a new device model is released there is a predictable peak in activations; that is, for a few days there are 10 or even 100 times more activations than on an average day. Which of the following databases and analysis frameworks would you implement to best optimize costs and performance for this workload?

A. Amazon RDS and Amazon Elastic MapReduce with Spot instances.
B. Amazon DynamoDB and Amazon Elastic MapReduce with Spot instances.
C. Amazon RDS and Amazon Elastic MapReduce with Reserved instances.
D. Amazon DynamoDB and Amazon Elastic MapReduce with Reserved instances.
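As a sketch of running the daily MapReduce job on Spot capacity, the boto3 call below launches an EMR cluster whose core nodes are requested from the Spot market; the cluster name, release label, and instance types are illustrative assumptions.

```python
import boto3

emr = boto3.client("emr")

# Transient cluster: master on demand, core nodes from the Spot market.
response = emr.run_job_flow(
    Name="daily-activation-analysis",   # hypothetical cluster name
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1,
             "Market": "ON_DEMAND"},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 4,
             "Market": "SPOT"},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the job finishes
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```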
Question 68 of 100

Your company is storing millions of sensitive transactions across thousands of 100-GB files that must be encrypted in transit and at rest. Analysts concurrently depend on subsets of those files, which can consume up to 5 TB of space, to generate simulations used to steer business decisions. You are required to design an AWS solution that can cost-effectively accommodate the long-term storage and the in-flight subsets of data.

A. Use Amazon Simple Storage Service (S3) with server-side encryption, and run simulations on subsets in ephemeral drives on Amazon EC2.
B. Use Amazon S3 with server-side encryption, and run simulations on subsets in-memory on Amazon EC2.
C. Use HDFS on Amazon EMR, and run simulations on subsets in ephemeral drives on Amazon EC2.
D. Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2).
E. Store the full data set in encrypted Amazon Elastic Block Store (EBS) volumes, and regularly capture snapshots that can be cloned to EC2 workstations.
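For reference, a small boto3 sketch of uploading a file with server-side encryption requested per object; boto3 talks to S3 over HTTPS by default, which covers encryption in transit. The bucket, key, and file name are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Request server-side encryption at rest for this object.
with open("transactions-0001.dat", "rb") as f:
    s3.put_object(
        Bucket="example-transaction-archive",       # hypothetical bucket
        Key="transactions/transactions-0001.dat",
        Body=f,
        ServerSideEncryption="AES256",              # or "aws:kms" with a CMK
    )
```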
Question 69 of 100

An administrator needs to design the event log storage architecture for events from mobile devices. The event data will be processed by an Amazon EMR cluster daily for aggregated reporting and analytics before being archived. How should the administrator recommend storing the log data?

A. Create an Amazon S3 bucket and write log data into folders by device. Execute the EMR job on the device folders.
B. Create an Amazon DynamoDB table partitioned on device and sorted on date, and write the log data to the table. Execute the EMR job on the Amazon DynamoDB table.
C. Create an Amazon S3 bucket and write data into folders by day. Execute the EMR job on the daily folder.
D. Create an Amazon DynamoDB table partitioned on EventID, and write the log data to the table. Execute the EMR job on the table.
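To make the day-based layout concrete, here is a short sketch that writes each event under a date prefix so a daily EMR job can read a single prefix; the bucket name and key scheme are assumptions for illustration.

```python
from datetime import datetime, timezone
import json
import boto3

s3 = boto3.client("s3")

def write_event(event: dict) -> None:
    """Store each event under s3://example-event-logs/dt=YYYY-MM-DD/ so the
    daily EMR job only has to scan one prefix (names are illustrative)."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    key = f"dt={day}/{event['event_id']}.json"
    s3.put_object(
        Bucket="example-event-logs",
        Key=key,
        Body=json.dumps(event).encode(),
    )
```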
Question 70 of 100

Your company uses DynamoDB to support its mobile application and S3 to host the images and other documents shared between users. DynamoDB has a table with 60 partitions and is being heavily accessed by users. The queries run by users do not fully use each partition's provisioned throughput; however, occasionally a heavy load of queries flows in over a period of less than 3 minutes, and at times many background tasks are running as well. How can DynamoDB be configured to handle this workload?

A. Using burst capacity effectively
B. Using adaptive capacity
C. Designing partition keys to distribute the workload evenly
D. Using write sharding to distribute workloads evenly
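Burst and adaptive capacity are applied by DynamoDB automatically rather than switched on through an API call, so the most you can script is observation. A hedged sketch that surfaces consumed capacity per query, useful for checking whether short spikes stay within accumulated burst capacity; the table and key names are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Ask DynamoDB to report how much capacity this query consumed.
resp = dynamodb.query(
    TableName="example-user-activity",          # hypothetical table
    KeyConditionExpression="user_id = :u",
    ExpressionAttributeValues={":u": {"S": "user-123"}},
    ReturnConsumedCapacity="TOTAL",
)
print(resp["ConsumedCapacity"])  # e.g. {'TableName': ..., 'CapacityUnits': ...}
```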
Showing 61-70 of 100 questions