Browse SAP Questions

Study all 100 questions at your own pace with detailed explanations

Total: 100 questions · Page 5 of 10
Question 41 of 100

A company is running a production Redshift cluster for a client. The client has an RTO of one hour and an RPO of one day. While configuring the initial cluster, which configuration would best meet the client's recovery needs for this specific Redshift cluster? Choose the correct answer:

A. Create the cluster configuration and enable Redshift replication from the cluster running in the primary region to the cluster running in the secondary region. In the event of a disaster, change the DNS endpoint to the secondary cluster's leader node.
B. Enable automatic snapshots on the cluster in the production region FROM the disaster recovery region so snapshots are available in the disaster recovery region and can be launched in the event of a disaster.
C. Enable automatic snapshots on the Redshift cluster. In the event of a disaster, when failover to the backup region is needed, manually copy the snapshot from the primary region to the secondary region.
D. Enable automatic snapshots and configure automatic snapshot copy from the current production cluster to the disaster recovery region.
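For context, option D refers to Redshift's built-in cross-region snapshot copy, which can be enabled with a single API call. A minimal boto3 sketch, with hypothetical cluster identifier and regions:

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Once enabled, automated snapshots taken in the primary region are
    # copied to the destination region automatically.
    redshift.enable_snapshot_copy(
        ClusterIdentifier="prod-cluster",   # hypothetical name
        DestinationRegion="us-west-2",
        RetentionPeriod=1,  # keep copied snapshots 1 day, matching a 1-day RPO
    )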
Question 42 of 100

An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?

A. Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
B. Use synchronous database master-slave replication between two Availability Zones.
C. Take hourly DB backups to EC2 instance store volumes, with transaction logs stored in S3 every 5 minutes.
D. Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.
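The pattern in option A (periodic full backups plus frequent transaction-log shipping to S3) is what makes recovery to a point in time before the corruption possible. A minimal sketch of the log-shipping side, assuming a hypothetical bucket name and a scheduler that invokes it every 5 minutes:

    import time
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "erp-dr-backups"  # hypothetical bucket

    def ship_transaction_log(log_path: str) -> None:
        """Upload one transaction-log segment, keyed by timestamp, so the
        database can be restored from the last hourly backup and replayed
        up to (but not past) the moment of corruption."""
        key = f"txlogs/{int(time.time())}.log"
        with open(log_path, "rb") as f:
            s3.put_object(Bucket=BUCKET, Key=key, Body=f)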
Question 43 of 100

Your company wants to perform A/B testing on a new website feature for 20 percent of its users. The website uses CloudFront for whole-site delivery, with some content cached for up to 24 hours. How do you enable this testing for the required proportion of users while minimizing performance impact?

A. Configure the web servers to handle two domain names. The feature is switched on or off depending on which domain name is used for a request. Configure a CloudFront origin for each domain name, and configure the CloudFront distribution to use one origin for 20 percent of users and the other origin for the other 80 percent.
B. Configure the CloudFront distribution to forward a cookie specific to this feature. For requests where the cookie is not set, the web servers set its value to "on" for 20 percent of responses and "off" for 80 percent. For requests where the cookie is set, the web servers use its value to determine whether the feature should be on or off for the response.
C. Create a second stack of web servers that host the website with the feature on. Using Amazon Route 53, create two resource record sets with the same name: one with a weighting of "1" and a value of this new stack; the other with a weighting of "4" and a value of the existing stack. Use the resource record sets' name as the CloudFront distribution's origin.
D. Invalidate all of the CloudFront distribution's cache items that the feature affects. On future requests, the web servers create responses with the feature on for 20 percent of users, and off for 80 percent. The web servers set "Cache-Control: no-cache" on all of these responses.
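Option B hinges on CloudFront forwarding a single feature cookie to the origin, so the "on" and "off" variants are cached separately under different cache keys. A minimal sketch of the origin-side assignment logic, assuming a hypothetical cookie name:

    import random

    COOKIE_NAME = "feature_x"  # hypothetical feature cookie

    def feature_flag(request_cookies: dict) -> tuple[str, bool]:
        """Return (cookie_value, newly_assigned). Returning visitors keep
        their bucket; new visitors land in "on" with 20% probability."""
        if COOKIE_NAME in request_cookies:
            return request_cookies[COOKIE_NAME], False
        value = "on" if random.random() < 0.2 else "off"
        return value, True

The response would then include a Set-Cookie header whenever newly_assigned is true, keeping each user's experience consistent across cached requests.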
Question 44 of 100

You have been told by your security officer that you need to give a presentation on encryption of data at rest on AWS to 50 of your co-workers. You feel you understand this extremely well for data stored on S3, so you aren't too concerned, but you begin to panic a little when you realize you probably also need to cover encryption of data stored in your databases, namely Amazon RDS. Regarding Amazon RDS encryption, which of the following statements is most accurate? Choose the correct answer:

A. Encryption cannot be enabled on RDS instances unless the keys are not managed by KMS.
B. Encryption can be enabled on RDS instances to encrypt the underlying storage, and this will by default also encrypt snapshots as they are created. However, some additional configuration needs to be made on the client side for this to work.
C. Encryption can be enabled on RDS instances to encrypt the underlying storage, but you cannot encrypt snapshots as they are created.
D. Encryption can be enabled on RDS instances to encrypt the underlying storage, and this will by default also encrypt snapshots as they are created. No additional configuration needs to be made on the client side for this to work.
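The feature the options describe is storage encryption chosen when the instance is created; snapshots of an encrypted instance are encrypted with the same key. A minimal boto3 sketch with placeholder identifiers, not a production configuration:

    import boto3

    rds = boto3.client("rds")

    # StorageEncrypted must be set at creation time; it cannot be toggled
    # on an existing unencrypted instance.
    rds.create_db_instance(
        DBInstanceIdentifier="demo-db",        # hypothetical name
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-123",    # placeholder
        AllocatedStorage=20,
        StorageEncrypted=True,
        # KmsKeyId may name a customer managed key; omitted here to use
        # the default aws/rds key.
    )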
Question 45 of 100

Your customer needs to create an application to allow contractors to upload videos to Amazon Simple Storage Service (S3) so they can be transcoded into a different format. She creates AWS Identity and Access Management (IAM) users for her application developers, and in just one week they have the application hosted on a fleet of Amazon Elastic Compute Cloud (EC2) instances. An IAM role with the following policy is attached to the instances. As expected, a contractor who authenticates to the application is given a pre-signed URL that points to the location for video upload. However, contractors are reporting that they cannot upload their videos. Which of the following are valid reasons for this behavior?

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "*"
      }]
    }

Choose 2 answers

A. The IAM role does not explicitly grant permission to upload the object.
B. The contractors' accounts have not been granted "write" access to the S3 bucket.
C. The application is not using valid security credentials to generate the pre-signed URL.
D. The developers do not have access to upload objects to the S3 bucket.
E. The S3 bucket still has the associated default permissions.
F. The pre-signed URL has expired.
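For context, a pre-signed URL inherits the permissions of the credentials that signed it and stops working once its expiry elapses, which is why invalid credentials and expiry are classic failure modes. A minimal boto3 sketch with hypothetical bucket and key names:

    import boto3

    # On EC2 this picks up the instance role's credentials automatically.
    s3 = boto3.client("s3")

    # The URL only works while the signing credentials are valid and
    # until ExpiresIn elapses.
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "video-uploads", "Key": "contractor/clip.mp4"},
        ExpiresIn=900,  # 15 minutes
    )
    print(url)  # the contractor PUTs the video file to this URL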
Question 46 of 100

You are integrating two subsystems (front-end and back-end) with an HTTP interface into one large system. These subsystems don't store any state internally; all state is stored in an Amazon DynamoDB table. You have launched each of the two subsystems from a separate AMI. Black-box testing has shown that these servers have stopped running and are issuing malformed requests that do not meet HTTP specifications. Your developers have discovered and fixed this issue, and you need to deploy the fix to the two subsystems as soon as possible without service disruption. Which are the most effective options for deploying the fixes? Choose 2 answers

A. Use AWS OpsWorks auto healing for both the front-end and back-end instance pairs.
B. Use Elastic Load Balancing in front of the front-end subsystems and Auto Scaling to keep the specified number of instances.
C. Use Elastic Load Balancing in front of the back-end subsystems and Auto Scaling to keep the specified number of instances.
D. Use Amazon CloudFront, which accesses the front-end servers on origin fetch.
E. Use Amazon Simple Queue Service (SQS) between the front-end and back-end subsystems.
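Options B and C describe the same pattern applied to each tier: a load balancer in front of an Auto Scaling group, so instances built from a patched AMI can replace faulty ones without downtime. A minimal boto3 sketch for one tier; the group name, launch template, target group ARN, and subnets are all hypothetical:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch template assumed to reference the patched AMI.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="frontend-asg",
        LaunchTemplate={"LaunchTemplateName": "frontend-fixed", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/frontend/0123456789abcdef"],
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
    )

Because the subsystems are stateless (all state lives in DynamoDB), old instances can be terminated and replaced at will while the load balancer keeps serving traffic.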
Question 47 of 100

You've created a mobile application that serves data stored in an Amazon DynamoDB table. Your primary concerns are the scalability of the application and the ability to handle millions of visitors and data requests. As part of your application, the customer needs access to the data located in the DynamoDB table. Given the application requirements, what would be the best method for designing the application? Choose the correct answer from the options below:

A. Configure an on-premises AD server utilizing SAML 2.0 to manage the application users inside the on-premises AD server, and write code that authenticates against the AD server. Grant a role assigned to the STS token to allow the end user to access the required data in the DynamoDB table.
B. Let the users sign in to the app using a third-party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWith API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in JavaScript and host the JavaScript interface in an S3 bucket.
C. Let the users sign in to the app using a third-party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in a server-side language using the AWS SDK and host the application in an S3 bucket for scalability.
D. Let the users sign in to the app using a third-party identity provider such as Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to assume the role containing the proper permissions to communicate with the DynamoDB table. Write the application in JavaScript and host the JavaScript interface in an S3 bucket.
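Options C and D hinge on the STS AssumeRoleWithWebIdentity call, which exchanges a token from a web identity provider for temporary AWS credentials. A minimal boto3 sketch; the role ARN and token are placeholders:

    import boto3

    sts = boto3.client("sts")

    # Token obtained from the identity provider (Amazon/Google/Facebook).
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/mobile-dynamodb-read",  # hypothetical
        RoleSessionName="app-user-session",
        WebIdentityToken="<token from the provider>",
    )
    creds = resp["Credentials"]

    # Temporary credentials scoped to the role's DynamoDB permissions.
    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )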
Question 48 of 100

A disaster recovery meeting has concluded and you have been placed in charge. The decision has been made, based on RTO/RPO and cost considerations, to use the Pilot Light technique for disaster recovery. What preparatory steps should you take before implementing Pilot Light?

A. Set up Amazon EC2 instances to replicate or mirror data. Create and maintain AMIs of key servers where fast recovery is required. Regularly run these servers, test them, and apply any software updates and configuration changes. Consider automating the provisioning of AWS resources.
B. Set up Amazon EC2 instances to replicate or mirror data. Ensure that you have all supporting custom software packages available in AWS. Create and maintain AMIs of key servers where fast recovery is required. Regularly run these servers, test them, and apply any software updates and configuration changes. Consider automating the provisioning of AWS resources.
C. Set up Amazon EC2 instances to replicate or mirror data. Ensure that you have all supporting custom software packages available in AWS. Create and maintain AMIs of key servers where fast recovery is required. Regularly run these servers, test them, and apply any software updates and configuration changes. Consider automating the provisioning of AWS resources. Gain authorization from AWS.
D. Set up Amazon EC2 instances to replicate or mirror data. Ensure that you have all supporting custom software packages available in AWS. Create and maintain AMIs of key servers where fast recovery is required. Consider automating the provisioning of AWS resources.
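A step every option shares is creating and maintaining AMIs of key servers so they can be launched quickly in the recovery region. A minimal boto3 sketch of baking an AMI from a running instance, with a hypothetical instance ID:

    import boto3
    from datetime import date

    ec2 = boto3.client("ec2")

    # Re-run after each patch cycle so the pilot-light AMI stays current.
    ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # hypothetical key server
        Name=f"pilot-light-app-{date.today().isoformat()}",
        Description="Pilot light image of the application server",
        NoReboot=True,  # avoid disrupting the running server
    )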
Question 49 of 100

A read-only news reporting site with a combined web and application tier and a database tier receives large and unpredictable traffic demands, and must be able to respond to these traffic fluctuations automatically. Which AWS services should be used to meet these requirements?

A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.
B. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.
C. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS.
D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached, in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS.
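Two building blocks recur across the options: shared state in ElastiCache Memcached (so web/app instances stay stateless) and read scaling via RDS read replicas. A minimal boto3 sketch of provisioning both, with hypothetical identifiers:

    import boto3

    # Memcached cluster that the stateless web/app instances share.
    elasticache = boto3.client("elasticache")
    elasticache.create_cache_cluster(
        CacheClusterId="news-sessions",   # hypothetical
        Engine="memcached",
        CacheNodeType="cache.t3.micro",
        NumCacheNodes=2,
    )

    # Read replica to absorb the read-only site's query traffic.
    rds = boto3.client("rds")
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="news-db-replica-1",
        SourceDBInstanceIdentifier="news-db",
    )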
Question 50 of 100

You've configured an AWS VPC and an EC2 instance running MongoDB with an internal IP address of 10.0.2.1. To simplify failover and connectivity to the instance, you create an internal Route 53 A record called mongodb.example.com. You have a VPN connection from on-premises to your VPC and are attempting to connect an on-premises VMware instance to mongodb.example.com, but the DNS name will not resolve. Given the current design, why is the internal DNS record not resolving on-premises? Choose the correct answer:

A. The VPN is not configured to use BGP dynamic routing, and a static route is not configured from the on-premises subnet to the VPC subnet with the MongoDB server.
B. Route 53 internal DNS records only work if the DNS request originates from within the VPC.
C. A public Route 53 resource record was created using the private IP address instead of an internal DNS record.
D. The on-premises VM instance needs an /etc/resolv.conf entry pointing to the Route 53 internal DNS server.
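Option B points at how Route 53 private hosted zones work: their records are answered only through the resolver of the associated VPC, so an on-premises client querying its own DNS gets nothing by default. A minimal boto3 sketch of creating such a zone, with placeholder region and VPC ID:

    import boto3
    import uuid

    route53 = boto3.client("route53")

    # Supplying a VPC makes this a private hosted zone; its records are
    # resolvable only via the associated VPC's DNS resolver.
    route53.create_hosted_zone(
        Name="example.com",
        CallerReference=str(uuid.uuid4()),  # must be unique per request
        VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
        HostedZoneConfig={"Comment": "internal zone", "PrivateZone": True},
    )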
Showing 41-50 of 100 questions