Browse DBS Questions

Study all 100 questions at your own pace with detailed explanations

Total: 100 questions | Page 4 of 10
Question 31 of 100

An online retailer is using Amazon DynamoDB to store data related to customer transactions. The items in the table contain several string attributes describing the transaction as well as a JSON attribute containing the shopping cart and other details corresponding to the transaction. The average item size is approximately 250 KB, most of which is associated with the JSON attribute. The average customer generates approximately 3 GB of data per month. Customers access the table to display their transaction history and review transaction details as needed. Ninety percent of the queries against the table are executed when building the transaction history view, with the other 10% retrieving transaction details. The table is partitioned on CustomerID and sorted on transaction date. The client has very high read capacity provisioned for the table and experiences very even utilization, but complains about the cost of Amazon DynamoDB compared to other NoSQL solutions. Which strategy will reduce the cost associated with the client's read queries while not degrading quality?

A. Modify all database calls to use eventually consistent reads and advise customers that transaction history may be one second out-of-date.
B. Change the primary table to partition on TransactionID, create a GSI partitioned on customer and sorted on date, project the small attributes into the GSI, and then query the GSI for summary data and the primary table for JSON details.
C. Vertically partition the table, store base attributes on the primary table, and create a foreign key reference to a secondary table containing the JSON data. Query the primary table for summary data and the secondary table for JSON details.
D. Create an LSI sorted on date, project the JSON attribute into the index, and then query the primary table for summary data and the LSI for JSON details.
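
For reference, a minimal boto3 sketch of the read pattern the question describes: querying by CustomerID, sorted by transaction date, while projecting only the small summary attributes. The table and attribute names here are hypothetical.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CustomerTransactions")  # hypothetical table name

# Transaction history view: fetch only the small summary attributes,
# newest transactions first, for a single customer.
response = table.query(
    KeyConditionExpression=Key("CustomerID").eq("C-1001"),
    ProjectionExpression="CustomerID, TransactionDate, TransactionID, OrderTotal",
    ScanIndexForward=False,  # descending by the sort key (transaction date)
)
for item in response["Items"]:
    print(item)
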
Question 32 of 100

Your client needs to load a 600 GB file into a Redshift cluster from S3, using the Redshift COPY command. The file has several known (and potentially some unknown) issues that will probably cause the load process to fail. How should the client most efficiently detect load errors without needing to perform cleanup if the load process fails?

A. Split the 600 GB file into smaller 25 GB chunks and load each separately.
B. Compress the input file before running COPY.
C. Write a script to delete the data from the tables in case of errors.
D. Use the COPY command with the NOLOAD parameter.
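
For reference, a rough sketch of issuing the COPY the question describes through the Redshift Data API. The cluster, database, table, bucket, and role names below are placeholders.

import boto3

redshift_data = boto3.client("redshift-data")

# Load the order file from S3 into a Redshift table via COPY.
copy_sql = """
    COPY retail.orders
    FROM 's3://example-bucket/orders/orders.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    CSV;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="retail",
    DbUser="loader",
    Sql=copy_sql,
)
print(response["Id"])  # statement id; poll with describe_statement for status
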
Question 33 of 100

A company that manufactures and sells smart air conditioning units also offers add-on services so that customers can see real-time dashboards in a mobile application or a web browser. Each unit sends its sensor information in JSON format every two seconds for processing and analysis. The company also needs to consume this data to predict possible equipment problems before they occur. A few thousand pre-purchased units will be delivered in the next couple of months. The company expects high market growth in the next year and needs to handle a massive amount of data and scale without interruption. Which ingestion solution should the company use?

A. Write sensor data records to Amazon Kinesis Streams. Process the data using KCL applications for the end-consumer dashboard and anomaly detection workflows.
B. Batch sensor data to Amazon Simple Storage Service (S3) every 15 minutes. Flow the data downstream to the end-consumer dashboard and to the anomaly detection application.
C. Write sensor data records to Amazon Kinesis Firehose with Amazon Simple Storage Service (S3) as the destination. Consume the data with a KCL application for the end-consumer dashboard and anomaly detection.
D. Write sensor data records to Amazon Relational Database Service (RDS). Build both the end-consumer dashboard and anomaly detection application on top of Amazon RDS.
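
For reference, a minimal sketch of a producer putting one JSON sensor reading into a Kinesis stream; the stream name and payload fields are hypothetical.

import json
import boto3

kinesis = boto3.client("kinesis")

# One JSON reading per unit every two seconds; the unit id is used as the
# partition key so records from the same unit stay ordered within a shard.
reading = {"unit_id": "ac-unit-42", "temp_c": 21.5, "ts": "2024-01-01T00:00:00Z"}

kinesis.put_record(
    StreamName="sensor-readings",  # hypothetical stream name
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["unit_id"],
)
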
Question 34 of 100

A web application is using Amazon Kinesis Streams for clickstream data that may not be consumed for up to 12 hours. As a security requirement, how can the data be secured at rest within Kinesis Streams?

A. Enable SSL connections to Kinesis
B. Use Amazon Kinesis Consumer Library
C. Encrypt the data once it is at rest with a Lambda function
D. Enable server-side encryption in Kinesis Streams
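
For reference, a sketch of how a stream's at-rest encryption setting can be inspected and changed with boto3; the stream name and key alias are placeholders.

import boto3

kinesis = boto3.client("kinesis")

# Inspect the current at-rest encryption setting of the stream.
description = kinesis.describe_stream(StreamName="clickstream")["StreamDescription"]
print(description.get("EncryptionType"))  # 'NONE' or 'KMS'

# Stream-level server-side encryption is configured with a KMS key.
kinesis.start_stream_encryption(
    StreamName="clickstream",
    EncryptionType="KMS",
    KeyId="alias/aws/kinesis",
)
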
Question 35 of 100

You're launching a test Elasticsearch cluster with the Amazon Elasticsearch Service, and you'd like to restrict access to only your office desktop computer, which you occasionally share with an intern so she can get more experience interacting with Elasticsearch. What's the easiest way to do this?

A. Create a username and password combination to allow you to sign into the cluster.
B. Create an SSH key and add that to the accepted keys of the Elasticsearch cluster. Then store that SSH key on your desktop and use it to sign in.
C. Create an IAM user and role that allows access to the Elasticsearch cluster.
D. Create an IP-based resource policy on the Elasticsearch cluster that allows access to requests coming from the IP of the machine.
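
For reference, a sketch of an IP-restricted resource policy document of the kind that can be attached to an Amazon Elasticsearch Service domain. The account id, region, domain name, and IP address are placeholders.

import json

# Resource policy allowing requests to the domain only from one source IP.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:123456789012:domain/test-domain/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.10/32"}},
        }
    ],
}
print(json.dumps(access_policy, indent=2))
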
Question 36 of 100

Your application development team is building a solution with two applications. The security team wants the logs from the two applications to be captured in two different places, because one of the applications produces logs with sensitive data. How can you meet the requirements with the least risk and effort?

A. Aggregate logs into one file, then use Amazon CloudWatch Logs and then design two CloudWatch metric filters to filter sensitive data from the logs.
B. Use Amazon CloudWatch Logs to capture all logs, write an AWS Lambda function that parses the log file, and move sensitive data to a different log.
C. Add logic to the application that saves sensitive data logs on the Amazon EC2 instances' local storage, and write a batch script that logs into the EC2 instances and moves sensitive logs to a secure location.
D. Use Amazon CloudWatch Logs with two log groups, one for each application, and use an AWS IAM policy to control access to the log groups as required.
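
For reference, a sketch of separating the two applications' logs into distinct CloudWatch log groups and scoping IAM read access to one of them; the group names, account id, and region are placeholders.

import boto3

logs = boto3.client("logs")

# One log group per application (names are hypothetical).
logs.create_log_group(logGroupName="/app/orders-service")
logs.create_log_group(logGroupName="/app/payments-service-sensitive")

# An IAM policy document granting read access to the non-sensitive group only.
read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:GetLogEvents", "logs:FilterLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/app/orders-service:*",
        }
    ],
}
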
Question 37 of 100

There are thousands of text files on Amazon S3. The total size of the files is 1 PB. The files contain retail order information for the past 2 years. A data engineer needs to run multiple interactive queries to manipulate the data. The data engineer has AWS access to spin up an Amazon EMR cluster, and needs to use an application on the cluster to process this data and return the results in an interactive time frame. Which application on the cluster should the data engineer use?

A. Oozie
B. Apache Pig with Tachyon
C. Apache Hive
D. Presto
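
For reference, a rough sketch of launching an EMR cluster with boto3 and installing a query engine through the Applications list; the cluster name, sizes, release label, and the application chosen here are placeholders for illustration only.

import boto3

emr = boto3.client("emr")

# Launch a small, long-running EMR cluster with a SQL query engine installed.
response = emr.run_job_flow(
    Name="retail-orders-analysis",
    ReleaseLabel="emr-5.36.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],  # illustrative choice
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
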
Question 38 of 100 (Multiple Choice)

A company hosts a web application on AWS that uses an RDS instance to store critical data. As part of a security audit, hardening of the RDS instance was recommended. Which actions would help achieve this? (Select TWO)

A. Use Secure Socket Layer (SSL) connections with DB instances
B. Use AWS CloudTrail to track all the SSH access to the RDS instance
C. Use AWS Inspector to apply patches to the RDS instance
D. Use RDS encryption to secure the RDS instances and snapshots at rest
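
For reference, a sketch of creating an RDS instance with encryption at rest enabled via boto3; the identifier, credentials, instance class, and KMS key are placeholders.

import boto3

rds = boto3.client("rds")

# Create a DB instance with storage encryption enabled at creation time.
# Clients would additionally connect over SSL using the RDS CA bundle.
rds.create_db_instance(
    DBInstanceIdentifier="webapp-db",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-password-123",
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",
)
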
Question 39 of 100

A data engineer chooses Amazon DynamoDB as a data store for a regulated application. This application must be submitted to regulators for review. The data engineer needs to provide a control framework that lists the security controls, from the process for adding new users down to the physical controls of the data center, including items such as security guards and cameras. How should this control mapping be achieved using AWS?

A. Request AWS third-party audit reports and/or the AWS quality addendum and map the AWS responsibilities to the controls that must be provided.
B. Request data center Temporary Auditor access to an AWS data center to verify the control mapping.
C. Request relevant SLAs and security guidelines for Amazon DynamoDB and define these guidelines within the application's architecture to map to the control framework.
D. Request Amazon DynamoDB system architecture designs to determine how to map the AWS responsibilities to the controls that must be provided.
Question 40 of 100

You need to filter and transform incoming messages from a smart sensor you have connected to AWS. Once messages are received, you need to store them as time-series data in DynamoDB. Which AWS service can you use?

A. IoT Device Shadow Service
B. Redshift
C. Kinesis
D. IoT Rules Engine
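
For reference, a sketch of an IoT topic rule that filters and transforms incoming sensor messages with IoT SQL and writes the result to a DynamoDB table; the rule name, topic filter, fields, table name, and role ARN are placeholders.

import boto3

iot = boto3.client("iot")

# Filter/transform messages from a hypothetical topic and put each result
# as an item into a DynamoDB table.
iot.create_topic_rule(
    ruleName="sensor_to_dynamodb",
    topicRulePayload={
        "sql": "SELECT temperature, humidity, timestamp() AS ts "
               "FROM 'sensors/+/telemetry' WHERE temperature > 0",
        "ruleDisabled": False,
        "actions": [
            {
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-dynamodb-role",
                    "putItem": {"tableName": "SensorTimeSeries"},
                }
            }
        ],
    },
)
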
Showing 31-40 of 100 questions