Browse SAP Questions
Study all 100 questions at your own pace with detailed explanations
Total: 100 questions | Page 8 of 10
Question 71 of 100
You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce.
B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers.
C. Write click events directly to Amazon Redshift and then analyze with SQL.
D. Publish web clicks by session to an Amazon SQS queue, periodically drain these events to Amazon RDS, and analyze with SQL.
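The pattern in option B can be sketched minimally. This is an illustrative sketch, not part of the question: the stream name, field names, and helper functions are hypothetical, and `kinesis` would be a `boto3` Kinesis client. The key idea is partitioning by session ID, so a Kinesis worker sees every click of one session in order.

```python
import json

def click_event(session_id, page, ad_id=None):
    """Serialize one click event for Kinesis. Using the session ID as the
    PartitionKey keeps a session's clicks on the same shard, so a worker
    can reconstruct the click sequence for behavioral analysis."""
    data = json.dumps({"session": session_id, "page": page, "ad": ad_id})
    return {"Data": data.encode(), "PartitionKey": session_id}

def push_click(kinesis, stream_name, session_id, page, ad_id=None):
    # kinesis would be e.g. boto3.client("kinesis"); stream_name is hypothetical.
    kinesis.put_record(StreamName=stream_name,
                       **click_event(session_id, page, ad_id))
```

A worker consuming the stream can then react to each session's click path in near real time, which is what makes on-the-fly layout changes possible.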
💡 Try to answer first, then click "Show Answer" to see the correct answer and explanation
Question 72 of 100
A public archives organization is about to move a pilot application they are running on AWS into production. You have been hired to analyze their application architecture and give cost-saving recommendations. The application displays scanned historical documents. Each document is split into individual image tiles at multiple zoom levels to improve responsiveness and ease of use for the end users. At maximum zoom level the average document will be 8000 x 6000 pixels in size, split into multiple 40px x 40px image tiles. The tiles are batch processed by Amazon Elastic Compute Cloud (EC2) instances and put into an Amazon Simple Storage Service (S3) bucket. A browser-based JavaScript viewer fetches tiles from the Amazon S3 bucket and displays them to users as they zoom and pan around each document. The average storage size of all zoom levels for a document is approximately 30MB of JPEG tiles. Originals of each document are archived in Amazon Glacier. The company expects to process and host over 500,000 scanned documents in the first year. What are your recommendations? Choose 3 answers.
A. Deploy an Amazon CloudFront distribution in front of the Amazon S3 tiles bucket.
B. Increase the size (width/height) of the individual tiles at the maximum zoom level.
C. Decrease the size (width/height) of the individual tiles at the maximum zoom level.
D. Store the maximum zoom level in the low-cost Amazon S3 Glacier option and only retrieve the most frequently accessed tiles as they are requested by users.
E. Use Amazon S3 Reduced Redundancy Storage for each zoom level.
F. Use Amazon S3 Standard Storage for each zoom level.
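The impact of tile size (options B and C) can be sized with a little arithmetic. S3 charges per object and per request, so the object count at maximum zoom dominates the cost picture; the 80px alternative below is illustrative, not from the question.

```python
import math

def tile_count(doc_width_px, doc_height_px, tile_px):
    """Number of square tiles needed to cover one document at one zoom level."""
    return math.ceil(doc_width_px / tile_px) * math.ceil(doc_height_px / tile_px)

# With 40px tiles, the maximum zoom level alone is 200 x 150 = 30,000 objects
# per document, i.e. 15 billion S3 objects across 500,000 documents.
assert tile_count(8000, 6000, 40) == 30_000

# Doubling the tile edge to 80px (illustrative) quarters the object count:
assert tile_count(8000, 6000, 80) == 7_500
```

Because object count falls with the square of the tile edge, even a modest increase in tile size cuts request and per-object overhead dramatically.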
Question 73 of 100
A company currently has a highly available web application running in production. The application's web front-end utilizes an Elastic Load Balancer and Auto Scaling across 3 Availability Zones. During peak load, your web servers operate at 90% utilization and leverage a combination of heavy utilization Reserved Instances for steady-state load and On-Demand and Spot Instances for peak load. You are tasked with designing a cost-effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load. Which option provides the most cost-effective, highly available architectural design for this application?
A. Increase Auto Scaling capacity and scaling thresholds to allow the web front-end to cost-effectively scale across all Availability Zones, lowering aggregate utilization levels so that an Availability Zone can fail during peak load without affecting the application's availability.
B. Continue to run your web front-end at 90% utilization, but purchase an appropriate number of Reserved Instances in each Availability Zone to cover the loss of any of the other Availability Zones during peak load.
C. Continue to run your web front-end at 90% utilization, but leverage a high bid price strategy to cover the loss of any of the other Availability Zones during peak load.
D. Increase use of Spot Instances to cost-effectively scale the web front-end across all Availability Zones, lowering aggregate utilization levels so that an Availability Zone can fail during peak load without affecting the application's availability.
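The reasoning in options A and D can be made concrete with a standard N-1 capacity calculation (a sketch; the 90% ceiling comes from the question, the formula is generic sizing arithmetic): if one of N zones fails, the surviving N-1 zones must absorb the full load without exceeding the per-zone ceiling.

```python
def max_safe_aggregate_utilization(zones, per_zone_ceiling=0.90):
    """Highest aggregate utilization that still lets one zone fail:
    the surviving (zones - 1) zones must absorb all traffic while
    staying at or below the per-zone ceiling."""
    return per_zone_ceiling * (zones - 1) / zones

# With 3 AZs and a 90% ceiling, aggregate peak utilization must stay
# at or below 60% for a single-AZ failure to be survivable:
assert abs(max_safe_aggregate_utilization(3) - 0.60) < 1e-9
```

Running at 90% aggregate utilization, as in options B and C, leaves no headroom for the remaining zones to absorb a failed zone's traffic at peak.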
Question 74 of 100
Your web site is hosted on 10 EC2 instances in 5 regions around the globe, with 2 instances per region. How could you configure your site to maintain availability with minimum downtime if one of the 5 regions were to lose network connectivity for an extended period of time?
A. Create an Elastic Load Balancer to place in front of the EC2 instances. Set an appropriate health check on each ELB.
B. Establish VPN connections between the instances in each region. Rely on BGP to fail over in the case of a region-wide connectivity outage.
C. Create a Route 53 Latency Based Routing record set that resolves to an Elastic Load Balancer in each region. Set an appropriate health check on each ELB.
D. Create a Route 53 Latency Based Routing record set that resolves to Elastic Load Balancers in each region and has the Evaluate Target Health flag set to true.
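The record described in option D can be sketched as the change batch entry a Route 53 `ChangeResourceRecordSets` call would take. This is an illustrative builder, not part of the question; the domain, ELB DNS name, and hosted zone ID passed to it would be your own values.

```python
def latency_alias_change(domain, region, elb_dns_name, elb_hosted_zone_id):
    """Build one latency-based alias record (one per region) with
    EvaluateTargetHealth enabled, so a region whose ELB fails its health
    checks is removed from DNS resolution automatically."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": region,           # distinguishes the per-region records
            "Region": region,                  # latency-based routing key
            "AliasTarget": {
                "DNSName": elb_dns_name,
                "HostedZoneId": elb_hosted_zone_id,  # the ELB's own zone ID
                "EvaluateTargetHealth": True,        # the flag option D hinges on
            },
        },
    }
```

With one such record per region, Route 53 answers with the lowest-latency healthy region, and a region-wide outage simply drops that region out of resolution.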
Question 75 of 100
A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve the performance issues and automate the process as much as possible?
A. Replace RDS with Redshift for the batch analysis and use SNS to notify the on-premises system to update the dashboard.
B. Replace RDS with Redshift for the batch analysis and use SQS to send a message to the on-premises system to update the dashboard.
C. Create an RDS Read Replica for the batch analysis and use SNS to notify the on-premises system to update the dashboard.
D. Create an RDS Read Replica for the batch analysis and use SQS to send a message to the on-premises system to update the dashboard.
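The notification half of options A and C can be sketched as the arguments an SNS `Publish` call would take once the batch completes. The topic ARN, subject, and message wording here are hypothetical; `sns` would be a `boto3` SNS client, and the on-premises system would subscribe to the topic (e.g. over HTTPS) in place of the manual email.

```python
def dashboard_notification(batch_id, topic_arn):
    """Arguments for an SNS Publish call fired at the end of the batch job,
    replacing the manually sent email that triggers the dashboard refresh."""
    return {
        "TopicArn": topic_arn,  # hypothetical topic the on-prem system subscribes to
        "Subject": "Data warehouse updated",
        "Message": f"Batch {batch_id} finished loading; refresh the dashboard.",
    }

# At the end of the batch: sns.publish(**dashboard_notification(batch_id, topic_arn))
```

SNS pushes to the subscriber, whereas SQS requires the consumer to poll, which matters here because the on-premises system cannot be modified to poll a queue.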
Question 76 of 100
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers)
A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
C. Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
D. The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security Token Service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.
E. The application authenticates against the IAM Security Token Service using the LDAP credentials; the application uses those temporary AWS security credentials to access the appropriate S3 bucket.
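The "per-user keyspace" scoping that several options mention is typically done by passing an inline policy to STS when minting temporary credentials. A minimal sketch, assuming a bucket laid out as `bucket/<username>/...` (the bucket name and layout are hypothetical):

```python
import json

def user_scoped_policy(bucket, username):
    """Inline policy an identity broker could pass to STS (GetFederationToken
    or AssumeRole) so the temporary credentials can only reach the
    authenticated user's own S3 key prefix."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{username}/*",
        }],
    })

# The broker (never the end user) holds long-term credentials and would call:
#   sts.get_federation_token(Name=username, Policy=user_scoped_policy(bucket, username))
```

Note the direction of the flow: the broker authenticates the user against LDAP first, then calls STS; LDAP credentials are never sent to AWS.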
Question 77 of 100
You are building a website that will retrieve and display highly sensitive information to users. The amount of traffic the site will receive is known and not expected to fluctuate. The site will leverage SSL to protect the communication between the clients and the web servers. Due to the nature of the site you are very concerned about the security of your SSL private key and want to ensure that the key cannot be accidentally or intentionally moved outside your environment. Additionally, while the data the site will display is stored on an encrypted EBS volume, you are also concerned that the web servers’ logs might contain some sensitive information; therefore, the logs must be stored so that they can only be decrypted by employees of your company. Which of these architectures meets all of the requirements?
A. Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
B. Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer and configure your web servers to retrieve the private key from a private Amazon S3 bucket on boot. Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.
C. Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.
D. Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
Question 78 of 100
Which of the following are techniques to help mitigate DDoS attacks on your AWS architecture? Choose 3 answers from the options below.
A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.
B. Use Dedicated Instances to ensure that each instance has the maximum performance possible.
C. Use an Amazon CloudFront distribution for both static and dynamic content.
D. Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers.
E. Add Amazon CloudWatch alarms to look for high Network In and CPU utilization.
F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
Question 79 of 100
A DynamoDB table used for a busy online store is currently set to 3,000 RCU and 1,000 WCU and is experiencing performance issues during sale periods. The table has a good partition and sort key architecture for the read and write workloads. You have identified that the read and write workloads during a sale period increase by 50x and 2x respectively. The sale periods are important to the business and occur twice per month. The business has asked for solutions to accommodate the extra load, and while they are willing to spend extra, the solution should be the most economical. What should you suggest? (Select THREE)
A. Increase the RCU to 150,000 on the table during the sale periods and reduce it afterwards.
B. Integrate DAX with the online store application.
C. Use SQS for read caching against the DynamoDB table.
D. Increase the WCU to 2,000 on the table during busy periods and reduce it afterwards.
E. Use SQS for write buffering against the DynamoDB table.
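The capacity figures in options A and D follow directly from the multipliers stated in the question; a quick arithmetic sketch:

```python
def sale_capacity(base_rcu, base_wcu, read_mult, write_mult):
    """Provisioned throughput needed when read/write load multiplies."""
    return base_rcu * read_mult, base_wcu * write_mult

# 50x reads and 2x writes on the 3,000 RCU / 1,000 WCU table:
assert sale_capacity(3000, 1000, 50, 2) == (150_000, 2_000)
```

For context, DAX (option B) is DynamoDB's managed in-memory read cache, which is why it is relevant to a surge that is overwhelmingly read-heavy.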
Question 80 of 100
An organization has a requirement to store 10 TB of scanned files. They are required to have a search application in place to search through the scanned files. Which of the options below is ideal for implementing the storage and search facility?
A. Use S3 with Reduced Redundancy Storage to store and serve the scanned files. Install a commercial search application on EC2 instances and configure it with Auto Scaling and an Elastic Load Balancer.
B. Model the environment using CloudFormation. Use an EC2 instance running an Apache web server and an open-source search application; stripe multiple standard EBS volumes together to store the scanned files with a search index.
C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing and use Elastic Beanstalk to host the website across multiple Availability Zones.
D. Use a single-AZ RDS MySQL instance to store the search index for the scanned files and use an EC2 instance with a custom application to search based on the index.