- (Topic 4)
A company has a new mobile app. Users anywhere in the world can see local news on topics they choose, and they can also post photos and videos from inside the app.
Users most often access content within the first minutes after it is posted. New content quickly replaces older content, and the older content then disappears. Because the news is local, users consume 90% of the content within the AWS Region where it is uploaded.
Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
Correct Answer:
B
The most suitable solution for optimizing the user experience by providing the lowest latency for content uploads is to upload and store content in Amazon S3 and use S3 Transfer Acceleration for the uploads. This solution will enable the company to leverage the AWS global network and edge locations to speed up the data transfer between the users and the S3 buckets.
Amazon S3 is a storage service that provides scalable, durable, and highly available object storage for any type of data. Amazon S3 allows users to store and retrieve data from anywhere on the web, and offers various features such as encryption, versioning, lifecycle management, and replication.
S3 Transfer Acceleration is a feature of Amazon S3 that helps users transfer data to and from S3 buckets more quickly. S3 Transfer Acceleration works by using optimized network paths over Amazon's backbone network and edge locations to accelerate data transfer speeds. Users can enable S3 Transfer Acceleration for their buckets and use a distinct accelerate endpoint to access them, such as bucketname.s3-accelerate.amazonaws.com.
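As an illustration, the following is a minimal boto3 sketch of enabling Transfer Acceleration on a bucket and then uploading through the accelerate endpoint; the bucket and object names are hypothetical:

```python
import boto3
from botocore.config import Config

BUCKET = "example-media-uploads"  # hypothetical bucket name

s3 = boto3.client("s3")

# One-time setup: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured to use the accelerate endpoint routes uploads
# through the nearest edge location and over the AWS backbone network.
s3_accel = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)
s3_accel.upload_file("video.mp4", BUCKET, "uploads/video.mp4")
```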
The other options are not correct because they either do not provide the lowest latency or are not suitable for the use case.
Uploading and storing content in Amazon S3 and using Amazon CloudFront for the uploads is not correct because CloudFront is designed for optimizing downloads, not uploads. Amazon CloudFront is a content delivery network (CDN) that distributes content globally with low latency and high transfer speeds by caching the content at edge locations around the world, so that users can access it quickly from anywhere.
Uploading content to Amazon EC2 instances in the Region that is closest to the user and then copying the data to Amazon S3 is not correct because it adds unnecessary complexity and cost to the process. Amazon EC2 is a compute service that provides scalable and secure virtual servers in the cloud; users can launch, stop, or terminate EC2 instances as needed and choose from various instance types, operating systems, and configurations.
Uploading and storing content in Amazon S3 in the Region that is closest to the user and using multiple Amazon CloudFront distributions is not correct because it is neither cost-effective nor efficient for this use case. Creating a CloudFront distribution for each Region would incur additional charges and management overhead, and it is unnecessary because 90% of the content is consumed within the same Region where it is uploaded.
References:
✑ What Is Amazon Simple Storage Service? - Amazon Simple Storage Service
✑ Amazon S3 Transfer Acceleration - Amazon Simple Storage Service
✑ What Is Amazon CloudFront? - Amazon CloudFront
✑ What Is Amazon EC2? - Amazon Elastic Compute Cloud
- (Topic 4)
A company has a multi-tier payment processing application that is based on virtual machines (VMs). The communication between the tiers occurs asynchronously through a third-party middleware solution that guarantees exactly-once delivery.
The company needs a solution that requires the least amount of infrastructure management. The solution must guarantee exactly-once delivery for application messaging.
Which combination of actions will meet these requirements? (Select TWO.)
Correct Answer:
AD
This solution meets the requirements because it requires the least amount of infrastructure management and guarantees exactly-once delivery for application messaging. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You only pay for the compute time you consume. Lambda scales automatically with the size of your workload. Amazon SQS FIFO queues are designed to ensure that messages are processed exactly once, in the exact order that they are sent. FIFO queues have high availability and deliver messages in a strict first-in, first-out order. You can use Amazon SQS to decouple and scale microservices, distributed systems, and serverless applications. References: AWS Lambda, Amazon SQS FIFO queues
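As an illustration, the following is a minimal boto3 sketch of sending a payment message to an SQS FIFO queue; the queue URL and message fields are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue; FIFO queue names must end in ".fifo".
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo"

# MessageDeduplicationId lets SQS discard retransmitted duplicates within
# the 5-minute deduplication interval; MessageGroupId preserves strict
# ordering per group. Together these provide the exactly-once processing
# semantics of FIFO queues.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"paymentId": "p-1001", "amount": 42.50}',
    MessageGroupId="payments",
    MessageDeduplicationId="p-1001",
)
```

On the consuming side, the FIFO queue can be configured as an event source for a Lambda function, so no servers or third-party middleware need to be managed.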
- (Topic 3)
A financial company hosts a web application on AWS. The application uses an Amazon API Gateway Regional API endpoint to give users the ability to retrieve current stock prices. The company's security team has noticed an increase in the number of API requests. The security team is concerned that HTTP flood attacks might take the application offline.
A solutions architect must design a solution to protect the application from this type of attack.
Which solution meets these requirements with the LEAST operational overhead?
Correct Answer:
B
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html
A rate-based rule in AWS WAF tracks the rate of requests from each originating IP address over a rolling time window and automatically blocks requests from any IP address that exceeds the configured threshold. Associating a web ACL that contains a rate-based rule with the API Gateway stage protects the application from HTTP flood attacks with minimal operational overhead, because AWS WAF handles the tracking and blocking automatically.
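As a sketch, and assuming a hypothetical web ACL name and API Gateway stage ARN, a rate-based rule could be created and associated with the Regional API by using boto3:

```python
import boto3

# REGIONAL scope because the application uses a Regional API Gateway endpoint.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Block any source IP that exceeds 1,000 requests in the evaluation window.
web_acl = wafv2.create_web_acl(
    Name="stock-api-flood-protection",  # hypothetical name
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Statement": {
                "RateBasedStatement": {"Limit": 1000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIp",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "StockApiFloodProtection",
    },
)

# Associate the web ACL with the API Gateway stage (ARN is hypothetical).
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
)
```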
- (Topic 4)
A company uses multiple vendors to distribute digital assets that are stored in Amazon S3 buckets. The company wants to ensure that its vendor AWS accounts have the minimum access that is needed to download objects in these S3 buckets.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer:
C
A cross-account IAM role is a way to grant users from one AWS account access to resources in another AWS account. The cross-account IAM role can have a read-only access policy attached to it, which allows the vendors to download objects from the S3 buckets without being able to modify or delete them. The cross-account IAM role also reduces the operational overhead of managing multiple IAM users and policies in each account. The cross-account IAM role meets all the requirements, while the other options do not; a minimal sketch of the setup follows the references below. References:
✑ https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html
✑ https://aws.amazon.com/blogs/storage/setting-up-cross-account-amazon-s3-access-with-s3-access-points/
✑ https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
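The following is a minimal boto3 sketch of the cross-account setup; the account ID, role name, bucket name, and object key are hypothetical:

```python
import json
import boto3

# --- In the company's (bucket-owning) account: create the role ---
iam = boto3.client("iam")

# Trust policy: only the vendor's AWS account (hypothetical ID) may assume
# the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="VendorAssetDownload",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy: read-only access to the asset bucket, nothing more.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-digital-assets",    # hypothetical bucket
            "arn:aws:s3:::example-digital-assets/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="VendorAssetDownload",
    PolicyName="ReadOnlyAssetAccess",
    PolicyDocument=json.dumps(read_only_policy),
)

# --- In the vendor's account: assume the role and download an object ---
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=role["Role"]["Arn"],
    RoleSessionName="asset-download",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.download_file("example-digital-assets", "assets/logo.png", "logo.png")
```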
- (Topic 3)
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes it to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to Kinesis Data Streams.
What should a solutions architect do to resolve this issue?
Correct Answer:
A
The data retention period of a Kinesis data stream is the time period from when a record is added to when it is no longer accessible. The default retention period for a Kinesis data stream is 24 hours, which can be extended up to 8,760 hours (365 days). The data retention period can be updated by using the AWS Management Console, the AWS CLI, or the Kinesis Data Streams API.
In this scenario, the application consumes the data only every other day, so records older than the 24-hour default expire before they are ever read, which is why Amazon S3 does not receive all the data. To resolve the issue, the solutions architect should update the Kinesis Data Streams default settings by increasing the retention period to at least 48 hours, so that every record remains available until the next consumption run. This way, the company can ensure that S3 receives all the data that the application sends to Kinesis Data Streams.
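As an illustration, the following is a minimal boto3 sketch of raising the retention period above the 48-hour consumption interval; the stream name is hypothetical:

```python
import boto3

kinesis = boto3.client("kinesis")

STREAM = "app-clickstream"  # hypothetical stream name

# The default retention period is 24 hours; raising it to 72 hours keeps
# records available past the every-other-day consumer's 48-hour interval.
kinesis.increase_stream_retention_period(
    StreamName=STREAM,
    RetentionPeriodHours=72,
)

# Confirm the change took effect.
summary = kinesis.describe_stream_summary(StreamName=STREAM)
print(summary["StreamDescriptionSummary"]["RetentionPeriodHours"])
```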