Browse DBS Questions

Study all 100 questions at your own pace with detailed explanations

Total: 100 questions | Page 6 of 10
Question 51 of 100

A solutions architect works for a logistics organization that ships packages from thousands of suppliers to end customers. The architect is building a platform where suppliers can view the status of one or more of their shipments. Each supplier can have multiple roles that allow access only to specific fields in the resulting information. Which strategy allows the appropriate level of access control and requires the LEAST amount of management work?

A. Send the tracking data to Amazon Kinesis Streams. Use AWS Lambda to store the data in an Amazon DynamoDB table. Generate temporary AWS credentials for the suppliers’ users with AWS STS, specifying fine-grained security policies to limit access only to their applicable data.
B. Send the tracking data to Amazon Kinesis Firehose. Use Amazon S3 notifications and AWS Lambda to prepare files in Amazon S3 with appropriate data for each supplier’s roles. Generate temporary AWS credentials for the suppliers’ users with AWS STS. Limit access to the appropriate files through security policies.
C. Send the tracking data to Amazon Kinesis Streams. Use Amazon EMR with Spark Streaming to store the data in HBase. Create one table per supplier. Use HBase Kerberos integration with the suppliers’ users. Use HBase ACL-based security to limit access for the roles to their specific table and columns.
D. Send the tracking data to Amazon Kinesis Firehose. Store the data in an Amazon Redshift cluster. Create views for the suppliers’ users and roles. Allow suppliers access to the Amazon Redshift cluster using a user limited to the applicable view.
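For reference, options A and B both issue temporary credentials with AWS STS and scope them with a security policy. The sketch below (Python with boto3) shows one way a session policy could restrict a caller to the DynamoDB items whose partition key matches their supplier ID; the role ARN, table name, and supplier ID are illustrative placeholders, not values from the question.

```python
import json
import boto3

# Hypothetical sketch: temporary credentials scoped to one supplier's
# shipment items in a DynamoDB table. All ARNs and names are placeholders.
supplier_id = "supplier-1234"

session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:Query", "dynamodb:GetItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Shipments",
        "Condition": {
            # Only items whose partition (leading) key equals this supplier's ID
            "ForAllValues:StringEquals": {"dynamodb:LeadingKeys": [supplier_id]}
        },
    }],
}

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/SupplierReadRole",  # assumed role
    RoleSessionName=supplier_id,
    Policy=json.dumps(session_policy),  # session policy narrows the role's permissions
)["Credentials"]
```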
Question 52 of 100

A utility company is building an application that stores data coming from more than 10,000 sensors. Each sensor has a unique ID and sends a datapoint (approximately 1 KB) every 10 minutes throughout the day. Each datapoint contains the information coming from the sensor as well as a timestamp. The company would like to query the information coming from a particular sensor for the past week very rapidly, and wants to delete all data that is older than 4 weeks. Using Amazon DynamoDB for its scalability and speed, how do you implement this in the most cost-effective way?

A. One table, with a primary key that is the sensor ID and a sort key that is the timestamp
B. One table, with a primary key that is the concatenation of the sensor ID and timestamp
C. One table for each week, with a primary key that is the concatenation of the sensor ID and timestamp
D. One table for each week, with a primary key that is the sensor ID and a sort key that is the timestamp
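For context, several options key the table on the sensor ID with the timestamp as a sort key. A minimal sketch of querying the last week of datapoints for a single sensor under that schema is shown below; the table and attribute names are assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical sketch: query one sensor's datapoints for the past week,
# assuming a table keyed on sensor_id (partition) and timestamp (sort).
table = boto3.resource("dynamodb").Table("SensorData")  # assumed table name

one_week_ago = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()

response = table.query(
    KeyConditionExpression=Key("sensor_id").eq("sensor-0042")
    & Key("timestamp").gte(one_week_ago)
)
datapoints = response["Items"]
```

When comparing the options for cost, also consider how data older than four weeks would be removed under each design (item-by-item deletes versus dropping a whole table).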
Question 53 of 100

You need to provide customers with rich visualizations that allow you to easily connect multiple disparate data sources in S3, Redshift, and several CSV files. Which tool should you use that requires the least setup?

A. Hue on EMR
B. Redshift
C. QuickSight
D. Elasticsearch
Question 54 of 100

You need to create a recommendation engine for your e-commerce website that sells over 300 items. The items never change, and new users need to be presented with the list of all 300 items in order of their predicted interest. Which option do you use to accomplish this?

A. Mahout
B. Spark/Spark MLlib
C. Amazon Machine Learning
D. RDS MySQL
Question 55 of 100

A web application emits multiple types of events to Amazon Kinesis Streams for operational reporting. Critical events must be captured immediately before processing can continue, but informational events do not need to delay processing. What is the most appropriate solution to record these different types of events?

A. Log all events using the Kinesis Producer Library.
B. Log critical events using the Kinesis Producer Library, and log informational events using the PutRecords API method.
C. Log critical events using the PutRecords API method, and log informational events using the Kinesis Producer Library.
D. Log all events using the PutRecords API method.
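As background, the options contrast the Kinesis Producer Library, which buffers, batches, and retries records asynchronously, with the lower-level PutRecords API, which is a synchronous call. A minimal, hypothetical sketch of logging events with PutRecords follows; the stream name and payload are placeholders.

```python
import json
import boto3

# Hypothetical sketch: a synchronous PutRecords call. The caller blocks until
# Kinesis acknowledges the batch, unlike the KPL's asynchronous, buffered writes.
kinesis = boto3.client("kinesis")

events = [{"type": "critical", "detail": "payment-failed"}]  # placeholder payload

response = kinesis.put_records(
    StreamName="operational-events",  # assumed stream name
    Records=[
        {"Data": json.dumps(e).encode("utf-8"), "PartitionKey": e["type"]}
        for e in events
    ],
)
# A non-zero FailedRecordCount means some records must be retried by the caller.
print(response["FailedRecordCount"])
```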
Question 56 of 100

You have to identify potentially fraudulent credit card transactions using Amazon Machine Learning. You have been given historical labeled data that you can use to create your model. You will also need the ability to tune the model you pick. Which model type should you use?

A. Clustering
B. Regression
C. Binary
D. Cannot be done using Amazon Machine Learning
Question 57 of 100

You've been asked by the VP of People to showcase the current breakdown of headcount for each department within your organization. Which chart do you select to make it easy to compare the departments?

A. Line chart
B. Column chart
C. Pie chart
D. Scatter plot
Question 58 of 100 (Multiple Choice)

An online gaming company uses DynamoDB to store user activity logs and is experiencing throttled writes on the company’s DynamoDB table. The company is NOT consuming close to the provisioned capacity. The table contains a large number of items and is partitioned on user and sorted by date. The table is 200GB and is currently provisioned at 10K WCU and 20K RCU. Which two additional pieces of information are required to determine the cause of the throttling? (Choose two.)

A. The structure of any GSIs that have been defined on the table
B. CloudWatch data showing consumed and provisioned write capacity when writes are being throttled
C. Application-level metrics showing the average item size and peak update rates for each attribute
D. The structure of any LSIs that have been defined on the table
E. The maximum historical WCU and RCU for the table
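For reference, option B refers to the per-table metrics DynamoDB publishes to CloudWatch. A minimal sketch of pulling consumed write capacity for the table over a throttling window is shown below; the table name and time range are assumptions.

```python
from datetime import datetime, timezone

import boto3

# Hypothetical sketch: fetch consumed write capacity for a DynamoDB table over a
# window when throttling was observed. Table name and times are placeholders.
cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedWriteCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "user-activity-logs"}],
    StartTime=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    EndTime=datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc),
    Period=60,           # one-minute granularity
    Statistics=["Sum"],  # sum per period; divide by the period for average WCU/s
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"] / 60)
```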
Question 59 of 100

A Redshift data warehouse has different user teams that need to query the same table with very different query types. These user teams are experiencing poor performance. Which action improves performance for the user teams in this situation?

A. Create custom table views.
B. Add interleaved sort keys per team.
C. Maintain team-specific copies of the table.
D. Add support for workload management queue hopping.
Question 60 of 100

A data engineer needs to collect data from multiple Amazon Redshift clusters within a business and consolidate the data into a single central data warehouse. Data must be encrypted at all times while at rest or in flight. What is the most scalable way to build this data collection process?

A. Run an ETL process that connects to the source clusters using SSL to issue a SELECT query for new data, and then write to the target data warehouse using an INSERT command over another SSL-secured connection.
B. Use an AWS KMS data key to run an UNLOAD ENCRYPTED command that stores the data in an unencrypted S3 bucket; run a COPY command to move the data into the target cluster.
C. Run an UNLOAD command that stores the data in an S3 bucket encrypted with an AWS KMS data key; run a COPY command to move the data into the target cluster.
D. Connect to the source cluster over an SSL client connection, and write data records to Amazon Kinesis Firehose to load into your target data warehouse.
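As background, options B and C both stage the data in Amazon S3 with UNLOAD and COPY; they differ in where encryption is applied. The sketch below (Python with psycopg2, which is an assumption, as are all hosts, bucket names, roles, keys, and credentials) shows what a client-side-encrypted UNLOAD and the matching COPY could look like.

```python
import psycopg2

# Hypothetical sketch: client-side-encrypted UNLOAD from a source cluster and the
# matching COPY into the target cluster. Every host, bucket, role, key, and
# credential below is a placeholder, not a value from the question.
MASTER_KEY = "<base64-encoded-256-bit-symmetric-key>"  # e.g. plaintext of a KMS data key
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftS3Access"

unload_sql = f"""
    UNLOAD ('SELECT * FROM activity')
    TO 's3://central-dw-staging/activity_'
    IAM_ROLE '{IAM_ROLE}'
    MASTER_SYMMETRIC_KEY '{MASTER_KEY}'
    ENCRYPTED;
"""

copy_sql = f"""
    COPY activity FROM 's3://central-dw-staging/activity_'
    IAM_ROLE '{IAM_ROLE}'
    MASTER_SYMMETRIC_KEY '{MASTER_KEY}'
    ENCRYPTED;
"""

def run(host, sql):
    # sslmode='require' keeps the client connection encrypted in flight.
    conn = psycopg2.connect(host=host, port=5439, dbname="dw",
                            user="etl_user", password="<password>",
                            sslmode="require")
    conn.autocommit = True
    conn.cursor().execute(sql)
    conn.close()

run("source-cluster.example.com", unload_sql)  # files land in S3 client-side encrypted
run("target-cluster.example.com", copy_sql)    # target cluster decrypts with the same key
```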
Showing 51-60 of 100 questions