Browse SAP Questions

Study all 100 questions at your own pace with detailed explanations

Total: 100 questions | Page 10 of 10
Question 91 of 100

In reviewing the Auto Scaling events for your application, you notice that your application is scaling up and down multiple times in the same hour. What design choices could you make to optimize for cost while preserving elasticity? Choose 2 answers.

A. Modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale-down policy.
B. Modify the Auto Scaling group termination policy to terminate the oldest instance first.
C. Modify the Auto Scaling policy to use scheduled scaling actions.
D. Modify the Auto Scaling group cooldown timers.
E. Modify the Auto Scaling group termination policy to terminate the newest instance first.
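A hedged sketch (not part of the original question) of the two tuning knobs the alarm period and cooldown options refer to, using boto3. The group name, alarm name, and threshold values are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Lengthen the group's default cooldown so a new scaling activity waits for
# the previous one to take effect before another is allowed.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    DefaultCooldown=600,              # seconds; assumed value, tune to the workload
)

# Widen the evaluation window on the scale-down alarm so brief dips in load
# do not immediately terminate instances.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-down",   # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                       # 5-minute periods instead of 1-minute
    EvaluationPeriods=3,              # require sustained low load before scaling down
    Threshold=30.0,
    ComparisonOperator="LessThanThreshold",
)
```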
Question 92 of 100

You have two different groups using Redshift to analyze data in a petabyte-scale data warehouse. Each query issued by the first group takes approximately 1-2 hours to analyze the data, while the second group's queries take only 5-10 minutes. You don't want the second group's queries to wait until the first group's queries are finished, and you need to design a solution so that this does not happen. Which of the following would be the best and cheapest solution to deploy to solve this dilemma?

A. Create a read replica of Redshift and run the second team's queries on the read replica.
B. Create two separate workload management groups and assign them to the respective groups.
C. Pause the long queries when necessary and resume them when there are no queries happening.
D. Start another Redshift cluster from a snapshot for the second team if the current Redshift cluster is busy processing long queries.
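A hedged sketch (not part of the original question) of how two workload management (WLM) queues might be defined with boto3 so short queries are not stuck behind long-running ones; the parameter group name and user group names are hypothetical.

```python
import json

import boto3

redshift = boto3.client("redshift")

# Two WLM queues plus the default queue: long-running analytics queries are
# routed by user group into one queue, short queries into another.
wlm_config = [
    {"user_group": ["analysts-long"], "query_concurrency": 2},
    {"user_group": ["analysts-short"], "query_concurrency": 10},
    {"query_concurrency": 5},  # default queue for everything else
]

redshift.modify_cluster_parameter_group(
    ParameterGroupName="custom-wlm",  # hypothetical parameter group
    Parameters=[
        {
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
            "ApplyType": "static",  # static WLM changes take effect after a reboot
        }
    ],
)
```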
Question 93 of 100

You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?

A. Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.
B. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.
C. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise specific routes for your network to AWS.
D. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.
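For illustration only, a hedged boto3 sketch of provisioning a public virtual interface on an existing Direct Connect connection; the connection ID, VLAN, ASN, peer addresses, and advertised prefix are hypothetical placeholders.

```python
import boto3

dx = boto3.client("directconnect")

# A public virtual interface makes AWS public endpoints such as Amazon S3
# reachable over the Direct Connect link via BGP.
dx.create_public_virtual_interface(
    connectionId="dxcon-example",  # hypothetical connection ID
    newPublicVirtualInterface={
        "virtualInterfaceName": "public-vif-s3",
        "vlan": 101,
        "asn": 65000,                          # your on-premises BGP ASN
        "amazonAddress": "198.51.100.1/30",
        "customerAddress": "198.51.100.2/30",
        "addressFamily": "ipv4",
        "routeFilterPrefixes": [{"cidr": "203.0.113.0/24"}],  # public prefix you advertise
    },
)
```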
Question 94 of 100 (Multiple Choice)

Your organization is planning to move a high-performance data analytics application, purchased from a third-party vendor, to AWS. Currently, the application runs behind an on-premises load balancer, and all data is stored in a very large shared file system for low latency and high throughput. Management wants minimal disruption to the existing service and a stepwise migration that allows easy rollback. How can the organization plan its migration? (Select THREE)

A. Save all the data on S3 and use it as shared storage; use an Application Load Balancer with EC2 instances to share the processing load.
B. Create RAID 1 storage using EBS and run the application on EC2 with application-level load balancers to share the processing load.
C. Use a VPN or Direct Connect to create a link between your company premises and the AWS regional data center.
D. Use VPC peering to create a link between your company premises and the AWS regional data center.
E. Create an EFS file system with provisioned throughput and share the storage between your on-premises instances and EC2 instances.
F. Set up a Route 53 record to distribute the load between the on-premises and AWS load balancers with a weighted routing policy.
G. Set up CloudFront to distribute the load between the on-premises and AWS load balancers with a weighted routing policy.
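A hedged sketch of the weighted Route 53 records mentioned in the options, splitting traffic between an on-premises load balancer and an AWS load balancer during a stepwise migration; the hosted zone ID, record name, weights, and targets are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Two weighted records with the same name: 80% of lookups resolve to the
# on-premises load balancer, 20% to the AWS one. Shifting the weights moves
# traffic gradually and allows rollback.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "analytics.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "on-premises",
                    "Weight": 80,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "lb.onprem.example.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "analytics.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "aws",
                    "Weight": 20,
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "my-alb-123.us-east-1.elb.amazonaws.com"}
                    ],
                },
            },
        ]
    },
)
```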
Question 95 of 100

The company you work for has a huge amount of infrastructure built on AWS. However, there have been some concerns recently about the security of this infrastructure, and an external auditor has been given the task of running a thorough check of all of your company's AWS assets. The auditor will be in the USA while your company's infrastructure resides in the Asia Pacific (Sydney) region on AWS. Initially, he needs to check all of your VPC assets, specifically security groups and NACLs. You have been assigned the task of providing the auditor with a login to be able to do this. Which of the following would be the best and most secure solution to provide the auditor with, so he can begin his initial investigations? Choose the correct answer.

A. Create an IAM user tied to an administrator role. Also provide an additional level of security with MFA.
B. Create an IAM user with full VPC access, but set a condition that will not allow him to modify anything if the request comes from any IP other than his own.
C. Give him root access to your AWS infrastructure; because he is an auditor, he will need access to every service.
D. Create an IAM user who will have read-only access to your AWS VPC infrastructure and provide the auditor with those credentials.
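A hedged boto3 sketch of a read-only IAM user like the one described in the options; the user name is hypothetical, and console or access-key credentials would still need to be issued separately.

```python
import boto3

iam = boto3.client("iam")

# Create the auditor's user and attach the AWS managed ReadOnlyAccess policy,
# which grants describe/list/get permissions but no ability to modify resources.
iam.create_user(UserName="external-auditor")  # hypothetical user name
iam.attach_user_policy(
    UserName="external-auditor",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```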
Question 96 of 100

A research scientist is planning a one-time launch of an Elastic MapReduce cluster and is encouraged by her manager to minimize costs. The cluster is designed to ingest 200TB of genomics data with a total of 100 Amazon EC2 instances and is expected to run for around four hours. The resulting data set must be stored temporarily until it is archived into an Amazon RDS Oracle instance. Which option will help save the most money while meeting the requirements?

A. Store ingest and output files in Amazon S3. Deploy On-Demand Instances for the master and core nodes and Spot Instances for the task nodes.
B. Optimize by deploying a combination of On-Demand, Reserved Instance, and Spot pricing models for the master, core, and task nodes. Store ingest and output files in Amazon S3 with a lifecycle policy that archives them to Amazon Glacier.
C. Store the ingest files in Amazon S3 RRS and store the output files in S3. Deploy Reserved Instances for the master and core nodes and On-Demand Instances for the task nodes.
D. Deploy On-Demand master, core, and task nodes and store ingest and output files in Amazon S3 RRS.
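A hedged boto3 sketch of an EMR cluster that uses On-Demand master and core nodes with Spot task nodes and S3 for ingest and output; the release label, instance types, counts, bid price, and bucket are hypothetical placeholders.

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="genomics-one-time",
    ReleaseLabel="emr-6.15.0",  # hypothetical release label
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 20},
            # Task nodes carry no HDFS data, so losing a Spot instance does not
            # risk the job's stored data.
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 79, "BidPrice": "0.10"},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the one-time job ends
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://example-bucket/emr-logs/",  # hypothetical bucket
)
```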
Question 97 of 100

You are designing a system that needs, at minimum, 8 m4.large instances operating to service traffic. When designing for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the loss of a full Availability Zone. How should you distribute the servers to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZs a through f, inclusive.

A. 3 servers in each of AZs a through d, inclusive.
B. 8 servers in each of AZs a and b.
C. 2 servers in each of AZs a through e, inclusive.
D. 4 servers in each of AZs a through c, inclusive.
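A short worked example (not from the original question) of the capacity arithmetic: to keep at least 8 instances running after losing one Availability Zone, each AZ must hold ceil(8 / (AZ count - 1)) instances.

```python
import math

MIN_RUNNING = 8  # instances that must survive the loss of one AZ

for az_count in (2, 3, 4, 5):
    per_az = math.ceil(MIN_RUNNING / (az_count - 1))
    total = per_az * az_count
    print(f"{az_count} AZs -> {per_az} per AZ, {total} instances total")

# 2 AZs -> 8 per AZ, 16 total
# 3 AZs -> 4 per AZ, 12 total
# 4 AZs -> 3 per AZ, 12 total
# 5 AZs -> 2 per AZ, 10 total (the smallest fleet among the options listed)
```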
Question 98 of 100

An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and an application tier and will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instances access to the DynamoDB tables without exposing API credentials?

A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role with the application instances by referencing an instance profile.
B. Use the Parameters section in the CloudFormation template to have the user input the Access and Secret Keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table.
C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.
D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret Keys, and pass them to the application instance through user data.
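A hedged sketch of the CloudFormation resources the role-and-instance-profile options describe, written here as a Python dict for brevity; the logical names, table name, AMI ID, and instance type are hypothetical.

```python
import json

template = {
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                # Allow EC2 instances to assume the role.
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "Policies": [{
                    "PolicyName": "dynamodb-access",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                       "dynamodb:Query", "dynamodb:UpdateItem"],
                            "Resource": {"Fn::Sub":
                                "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/AppTable"},
                        }],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI
                "InstanceType": "t3.micro",
                # The instance profile is what delivers temporary role
                # credentials to the instance, so no keys are embedded.
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
            },
        },
    },
}

print(json.dumps(template, indent=2))
```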
Question 99 of 100

A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together, will support the autonomy and control of the divisions while enabling corporate IT to maintain governance and cost oversight? Choose 2 answers.

A. Use AWS Consolidated Billing and disable AWS root account access for the child accounts.
B. Enable IAM cross-account access for all corporate IT administrators in each child account.
C. Create separate VPCs for each division within the corporate IT AWS account.
D. Use AWS Consolidated Billing to link the divisions' accounts to a parent corporate account.
E. Write all child accounts' AWS CloudTrail and Amazon CloudWatch logs to each child account's Amazon S3 'Log' bucket.
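A hedged boto3 sketch using AWS Organizations, which now provides consolidated billing, to link division accounts under a corporate payer account; the account e-mail addresses are hypothetical.

```python
import boto3

org = boto3.client("organizations")

# Create the organization from the corporate payer account with the
# consolidated-billing feature set, then invite each division's account.
# Billing rolls up to the parent while each division keeps administrative
# control of its own account.
org.create_organization(FeatureSet="CONSOLIDATED_BILLING")

for division_email in ("div-a@example.com", "div-b@example.com"):  # hypothetical
    org.invite_account_to_organization(
        Target={"Type": "EMAIL", "Id": division_email}
    )
```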
Question 100 of 100

You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success, and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance.
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster.
C. Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage.
D. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS.
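A hedged boto3 sketch of a DynamoDB ingest table keyed by sensor ID and timestamp, along the lines of option B; the table and attribute names are hypothetical, and moving aged data into Redshift for year-over-year analysis would be a separate export step.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# An on-demand table absorbs the write throughput from a large sensor fleet
# without pre-provisioned IOPS; each item is one reading per sensor per minute.
dynamodb.create_table(
    TableName="SensorReadings",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "SensorId", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "SensorId", "KeyType": "HASH"},
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",  # scales with the sensor fleet
)
```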
Showing 91-100 of 100 questions