Module 10 - Developing With Messaging Services

Contents
Module 10: Developing with Messaging Services

© 2023, Amazon Web Services, Inc. or its affiliates. All rights reserved.
AWS Training and Certification Module 10: Developing with Messaging Services
Section 1: Introduction
Module overview

Sections:
1. Introduction

Demonstration:
• Working with Amazon Messaging Services

Knowledge check

Finally, you will complete a knowledge check to test your understanding of key concepts covered in this module.
Currently, updating the coffee inventory on the website is a manual process. Sofía
would like to find an easier way to keep the inventory updated. She considers setting
up a messaging system to automatically receive and process inventory updates.
Mateo, a café regular and AWS consultant, suggested using the Amazon Simple
Notification Service (Amazon SNS) to receive messages from the suppliers and an
Amazon Simple Queue Service (Amazon SQS) queue to store the messages until they
are processed.
The diagram on this slide gives an overview of the application that you will build
through the labs in this course. The highlighted portions are relevant to this module.
As highlighted in the diagram, you will use Amazon SQS and Amazon SNS to set up a
system to receive, queue, and send the coffee inventory information.
Synchronous processing
In the example of the coffee shop, this is similar to ordering a coffee at the counter
and getting the cup of coffee immediately before you walk away. This action is
complete before the next person orders.
In the example diagrammed on the slide, the client makes a request to service A.
Service A then calls service B. Service A waits for service B to complete its actions and
respond to service A. That response then goes back to the client.
In this model, strong interdependency exists between the producer and the
consumer. This interdependency creates a tightly coupled system. The disadvantage
of a tightly coupled system is that it is not fault tolerant. This means that if any
component of the system fails, then the entire system will fail. If the consumer fails
while processing a message, then the producer will be forced to wait until that
message gets processed (assuming that the message is not lost). Additionally, if new
consumer instances are launched to recover from a failure or to keep up with an
increased workload, the producer must be explicitly made aware of the new
consumer instances. In this scenario, the producer is tightly coupled with the
consumer, and the coupling is prone to brittleness.
Asynchronous processing
In the coffee shop example, this is similar to placing an order that is delivered later.
You get your order number, and then the next person in line can order without
waiting for your order to be fulfilled. Someone brings you the order when it is ready.
A problem with completing your order does not prevent the person behind you in line
from ordering.
In the example diagrammed on the slide, when the client makes a request to service
A, service A surfaces the event to its consumers (in this example, service B) and
moves on.
The disadvantage is that service B does not have a channel to pass back results to
service A. You need to write your applications to get the results and deal with errors
asynchronously. Although you might be used to writing code that has explicit
coupling, the coupling is not actually needed for the type of processing you are doing.
The producer does not need to wait for the first message to be read. Instead, the producer can
continue to generate messages whether the consumer is processing them or not. The producer is
not impacted if the consumer fails to process the message. This means much less interdependency
between the producer and the consumer, which makes the system more loosely coupled with
greater fault tolerance.
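The loose coupling described above can be sketched with a plain in-memory queue standing in for a managed messaging service (the names produce and consume are illustrative, not SQS API calls):

```python
import queue

# In-memory stand-in for a managed message queue.
buffer = queue.Queue()

def produce(n):
    # The producer enqueues messages and moves on; it never waits
    # for a consumer to acknowledge or process them.
    for i in range(n):
        buffer.put(f"message-{i}")

def consume():
    # The consumer drains messages at its own pace, independently
    # of whether the producer is still running (or has failed).
    out = []
    while not buffer.empty():
        out.append(buffer.get())
    return out

produce(5)            # producer finishes without any consumer present
received = consume()  # consumer processes later, on its own schedule
```

The producer's success does not depend on the consumer being available at the same moment, which is the essence of the loose coupling described above.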
In this module, you'll learn about three types of messaging services that you can use
for asynchronous processing:
• Message queues
• Publish/subscribe (pub/sub) messaging
• Data streams
The type of messaging service that you choose depends on your use case. The next
few slides provide an overview of these three types of messaging services. In each
type, producers send requests (messages) to a service and do not wait for a result.
The messaging service provides an interim location from which consumers get
messages. This provides an asynchronous, decoupled design between your
components.
You'll learn about three fully managed services that you can use to implement each
type of messaging service: Amazon Simple Queue Service (Amazon SQS) for message
queues, Amazon Simple Notification Service (Amazon SNS) for pub/sub messaging,
and Amazon Kinesis Data Streams for data streams.
Unlike queue consumers, subscribers do not need to check for messages in a pub/sub
model. Messages that are published to a topic are pushed out to all topic subscribers.
With pub/sub messaging, you can easily notify large numbers of subscribers about
events. For example, you might publish a message to an "alerts" topic when a certain
type of error occurs, and multiple consumers might take different actions on that
message.
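The fan-out behavior of a pub/sub topic can be sketched in a few lines (a minimal in-memory model, not the Amazon SNS API):

```python
# Minimal pub/sub sketch: publishing to a topic pushes the message to
# every subscriber; subscribers never poll for messages.
class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Fan-out: each subscriber receives its own copy of the message.
        for callback in self.subscribers:
            callback(message)

alerts = Topic()
emailed, paged = [], []
alerts.subscribe(emailed.append)  # one subscriber might send an email
alerts.subscribe(paged.append)    # another might page an operator
alerts.publish("disk-full")       # both subscribers receive the alert
```

Each subscriber acts on the same published message independently, as in the "alerts" topic example above.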
Aggregate and analyze large numbers of messages
Data streams are similar to queues in that they provide a temporary repository for
producer messages, and consumers poll the stream to check for new messages.
Streams are different from queues in that the rate of messages is typically much
higher. Rather than processing each individual message on the queue, streams are
typically used to analyze high volumes of data in near-real time.
Another distinction from queues is that multiple consumers might process the same
messages and do entirely different actions with them. Consumers all view the same
data (similar to looking through the same window) and act on the messages for their
own purposes. Consumers do not delete messages from a stream.
Streams are helpful for use cases where you need to handle rapid and continuous
data intake and aggregation. For example, you might use a stream to process IT
infrastructure log data, social media data, market data feeds, or web clickstream data.
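The "same window" behavior can be sketched as a stream where each consumer keeps its own read position and reading never removes records (an illustrative model, not the Kinesis API):

```python
# Sketch of a data stream: all consumers see the same records; each
# tracks its own offset, and reading never deletes anything.
stream = ["click-1", "click-2", "click-3"]

class StreamConsumer:
    def __init__(self):
        self.offset = 0  # per-consumer position in the stream

    def poll(self):
        records = stream[self.offset:]
        self.offset = len(stream)
        return records

analytics = StreamConsumer()
archiver = StreamConsumer()
a = analytics.poll()  # both consumers read the same records...
b = archiver.poll()   # ...and the stream itself is unchanged
```

Both consumers can act on identical data for entirely different purposes, which is the key distinction from a queue.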
The following are the key takeaways from this section of the module:
• An asynchronous design reduces interdependencies and can improve
responsiveness to the client.
• Messaging services are good for creating asynchronous designs and include
queues, pub/sub messaging, and streams.
• AWS provides fully managed messaging services including Amazon SQS, Amazon
SNS, and Kinesis Data Streams.
Amazon SQS

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that eliminates the complexity and overhead associated with managing and operating message queues.
• Limits administrative overhead
• Reliably delivers messages at very high volume and throughput
• Scales dynamically
• Can keep data secure
• Offers standard and First-In-First-Out (FIFO) options
Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing
service. With Amazon SQS, you can decouple and scale microservices, distributed
systems, and serverless applications. The service eliminates the complexity and
overhead associated with managing and operating message-oriented services, so that
you can focus on the business logic of your application.
You can use Amazon SQS to transmit any volume of data at any level of throughput,
without losing messages or requiring other services to be available. With Amazon
SQS, you can decouple application components so that they run and fail
independently, which increases the overall fault tolerance of the system. Multiple
copies of every message are stored redundantly across multiple Availability Zones so
that they are available when they are needed.
Amazon SQS uses the AWS Cloud to dynamically scale based on demand. Amazon
SQS scales elastically with your application so that you don’t have to worry about
capacity planning and pre-provisioning. The number of messages per queue is not
limited.
You can use Amazon SQS to exchange sensitive data between applications using
server-side encryption (SSE) to encrypt each message body. Amazon SQS SSE
integration with AWS Key Management Service (AWS KMS) provides the ability to
centrally manage the keys that protect Amazon SQS messages along with keys that
protect your other AWS resources. AWS KMS logs every use of your encryption keys
to AWS CloudTrail to help meet your regulatory and compliance needs.
Amazon SQS offers both standard and First-In-First-Out (FIFO) queues to support
different processing priorities.
You can use standard message queues in many scenarios, as long as your application
can process messages that arrive more than once and out of order. For example, you
can use a standard queue for the following:
• Decouple live user requests from intensive background work: Enable users to
upload media while resizing or encoding it.
• Allocate tasks to multiple worker nodes: Process a high number of credit card
validation requests.
• Batch messages for future processing: Schedule multiple entries to be added to a
database that are not order dependent and for which idempotency is not an issue.
You use FIFO queues for applications when the order of operations and events is critical, or in cases
where duplicates can't be tolerated. For example, you can use a FIFO queue for the following:
• Bank transactions: Ensure that a deposit is recorded before a bank withdrawal happens.
• Credit card transactions: Ensure that a credit card transaction is not processed more than once.
• Course enrollment: Prevent a student from enrolling in a course before they register for an
account.
You can integrate Amazon SQS with other AWS services to make applications more
reliable and scalable.
Note that the Auto Scaling group scales based on the size of the SQS queue. The
processing servers keep working until the queue is empty and then wait for new
messages to appear. If the EC2 instances become unavailable, the Lambda function can
keep putting messages on the queue when an upload is made. When the Auto Scaling
group is ready to start consuming messages again, it polls the queue for messages.
You can perform these actions with the SendMessage, ReceiveMessage, and
DeleteMessage API calls, respectively, either programmatically or from the Amazon
SQS console. A variety of configuration options affect cost, message processing, and
message retention.
{
    "MD5OfMessageBody": "51b0a325...39163aa0",
    "MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE"
}
Example Amazon SQS response with MD5 hash of body and MessageId
The slide illustrates how you might use the AWS Command Line Interface (AWS CLI)
to call the SendMessage operation.
After you send a message, the Amazon SQS response includes a Message-Digest
algorithm 5 (MD5) digest of the message body string, which you can use to verify that
Amazon SQS received the message correctly. The response also includes a system-
assigned MessageId, which is useful for identifying messages. Additional elements
might be included in the response depending on the message parameters and the
type of queue.
{
    "Messages": [
        {
            "Body": "My first message.",
            "ReceiptHandle": "AQEBzbVv...fqNzFw==",
            "MD5OfBody": "51b0a325...39163aa0",
            "MessageId": "d6790f8d-d575-4f01-bc51-40122EXAMPLE",
            "Attributes": {
                ...
            }
        }
    ]
}
Example Amazon SQS response with messages. Each message has a ReceiptHandle.
You can retrieve one or more messages (up to 10) from the queue by using the
ReceiveMessage operation. You set the number of messages that you want to
retrieve by using the MaxNumberOfMessages parameter.
Before the visibility timeout expires, the consumer is expected to process AND then
delete the message from the queue. Amazon SQS doesn't automatically delete the
message because, as a distributed system, there's no guarantee that the consumer
actually received and handled the message (for example, because of a connectivity
issue or because of an issue in the consumer application).
For the duration of the visibility timeout, Amazon SQS temporarily stops returning the
message as part of any ReceiveMessage requests.
The visibility timeout defaults to 30 seconds. The minimum is 0 seconds, and the
maximum is 12 hours. You can use the VisibilityTimeout parameter on the
ReceiveMessage operation to modify the visibility timeout for a ReceiveMessage
request.
If your consumer doesn't delete the message before the visibility timeout expires, the
message becomes visible to other consumers, and Amazon SQS flags the message as
being returned to the queue. Amazon SQS increments its counter each time the
message becomes visible on the queue. By default, Amazon SQS will increment the
counter and make the message available to consumers until the retention period for
that message expires. You can modify this behavior using a dead-letter queue with a
redrive policy.
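The timing described above can be sketched with a toy in-memory queue (an illustrative model of the behavior, not the Amazon SQS implementation):

```python
# Sketch of visibility-timeout behavior: a received message is hidden
# for the timeout, then becomes visible again if it was not deleted.
class ToyQueue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = {}         # message_id -> body
        self.invisible_until = {}  # message_id -> timestamp

    def receive(self, now):
        for mid, body in self.messages.items():
            if self.invisible_until.get(mid, 0) <= now:
                # Returned messages become invisible for the timeout.
                self.invisible_until[mid] = now + self.visibility_timeout
                return mid, body
        return None  # nothing currently visible

    def delete(self, mid):
        self.messages.pop(mid, None)

q = ToyQueue(visibility_timeout=30)
q.messages["m1"] = "hello"
first = q.receive(now=0)    # returned, then hidden until t=30
hidden = q.receive(now=10)  # still invisible: nothing returned
back = q.receive(now=31)    # timeout expired: visible again
```

Calling q.delete("m1") before t=30 would prevent the message from ever reappearing, which is exactly what a well-behaved consumer does.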
For messages that cannot be processed (and deleted by the consumer), configure a
dead-letter queue in Amazon SQS. Dead-letter queues can help you troubleshoot
incorrect transmission operations and stop your queue from continually trying to
process a corrupted message.
The dead-letter queue can receive messages from the source queue after the
maximum number of processing attempts (maxReceiveCount) has been reached. The
maxReceiveCount is set as part of the source queue's redrive policy. Messages can be
sent to and received from a dead-letter queue just like any other SQS queue. You can
create a dead-letter queue from the Amazon SQS API or the Amazon SQS console.
To configure the dead-letter queue from the console, first create a queue to be used
as your dead-letter queue. Then, on the source queue, choose to configure a redrive
policy. As part of the redrive policy, select the queue to use as the dead-letter queue,
and set the maximum receives option to tell Amazon SQS how many processing
attempts to make before sending the message to the dead-letter queue.
In the example on the slide, the source queue is configured with a redrive policy whose
maxReceiveCount equals two. Because of the policy, message A is moved to the
dead-letter queue after the second consumer fails to process the message. Rather
than making the message visible to consumers for a third time, Amazon SQS moves
message A to the dead-letter queue, and the message is no longer available in the
source queue.
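The redrive behavior can be sketched as follows (an illustrative model of the policy, not the SQS implementation):

```python
# Sketch of a redrive policy: after maxReceiveCount failed processing
# attempts, the message moves from the source queue to the DLQ.
def redrive(message, process, max_receive_count, dead_letter_queue):
    for attempt in range(max_receive_count):
        try:
            process(message)
            return "processed"
        except Exception:
            continue  # message becomes visible again for another attempt
    dead_letter_queue.append(message)
    return "dead-lettered"

dlq = []

def always_fails(message):
    # Stand-in for a consumer that cannot handle a corrupted message.
    raise ValueError("corrupted message")

outcome = redrive("message A", always_fails,
                  max_receive_count=2, dead_letter_queue=dlq)
```

After the second failure, "message A" sits in the dead-letter queue for separate troubleshooting rather than blocking the source queue.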
For more information, see the following resources in the Amazon SQS Developer Guide:
• Amazon SQS Dead-Letter Queues:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-
letter-queues.html.
• Configuring a Dead-Letter Queue (Console):
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-
configure-dead-letter-queue.html.
As noted earlier, consumers use a polling model to retrieve messages from an SQS
queue. By default, when you make a ReceiveMessage API call, Amazon SQS performs
short polling, in which it samples a subset of SQS servers (based on a weighted
random distribution).
Amazon SQS immediately returns only the messages that are on the sampled servers.
In the example, short polling returns messages A, B, C, and D, but it does not return
message E. Short polling occurs when the WaitTimeSeconds parameter of a
ReceiveMessage call is set to 0.
If the number of messages in the queue is small (fewer than 1,000), you will most
likely get fewer messages than you requested per ReceiveMessage call. If the number
of messages in the queue is extremely small, you might not receive any messages
from a particular ReceiveMessage call. If you keep requesting ReceiveMessage,
Amazon SQS will sample all of the servers and you will eventually receive all of your
messages.
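The sampling behavior can be sketched like this (a simplified model of server sampling, not how Amazon SQS is actually implemented internally):

```python
import random

# Sketch of short polling: only a random subset of servers is sampled,
# so some messages (here, E) can be missed on a given call.
servers = {
    "server-1": ["A", "B"],
    "server-2": ["C", "D"],
    "server-3": ["E"],
}

def short_poll(sample_size, seed):
    rng = random.Random(seed)
    sampled = rng.sample(sorted(servers), sample_size)
    messages = []
    for name in sampled:
        messages.extend(servers[name])
    return messages

partial = short_poll(sample_size=2, seed=1)  # may miss some messages
full = short_poll(sample_size=3, seed=0)     # sampling all servers finds all
```

Repeated calls with different samples eventually cover every server, which is why you eventually receive all messages if you keep polling.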
By contrast, with long polling, Amazon SQS queries all of the servers and waits until a
message is available in the queue before it sends a response. Long polling helps
reduce the cost of using Amazon SQS by eliminating the number of empty responses
(when no messages are available for a ReceiveMessage request) and false empty
responses (when messages are available but aren't included in a response).
https://sqs.us-east-1.amazonaws.com/123456789012/testQueue/
?Action=ReceiveMessage
&WaitTimeSeconds=10            (use long polling with a wait time of 10 seconds)
&MaxNumberOfMessages=5         (retrieve up to 5 messages at a time)
&VisibilityTimeout=15          (make these messages invisible to other consumers for 15 seconds)
&AttributeName=All;
&Expires=2019-04-18T22%3A52%3A43PST
&Version=2012-11-05
&AUTPARAMS
The slide provides an example of a query to retrieve messages from a queue named
testQueue by using the ReceiveMessage API. The query includes the following
parameters to modify the default behaviors:
• WaitTimeSeconds is set to 10, which indicates that long polling is enabled.
• MaxNumberOfMessages is set to 5, which indicates the maximum number of
messages to return in a single call.
• VisibilityTimeout is set to 15, which indicates that the message will be invisible to
other consumers to process for 15 seconds.
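These parameters map directly onto a query string; a sketch using the standard library (the queue URL here is the example endpoint from the slide, and no authentication is included):

```python
from urllib.parse import urlencode

# Build the ReceiveMessage query string from the parameters above.
params = {
    "Action": "ReceiveMessage",
    "WaitTimeSeconds": 10,     # enable long polling (wait up to 10 s)
    "MaxNumberOfMessages": 5,  # return up to 5 messages per call
    "VisibilityTimeout": 15,   # hide returned messages for 15 seconds
    "AttributeName": "All",
    "Version": "2012-11-05",
}
url = ("https://sqs.us-east-1.amazonaws.com/123456789012/testQueue/?"
       + urlencode(params))
```

In practice you would let an AWS SDK or the AWS CLI build and sign this request for you; the sketch only shows how the parameters compose.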
DeleteMessage operation
To prevent a message from being received and processed again when the visibility
timeout expires, the consumer must delete the message. You can use the
DeleteMessage operation to delete a specific message from a specific queue, or you
can use DeleteMessageBatch to delete up to 10 messages. To select the message to
delete, use the receipt handle of the message. To use the DeleteMessageBatch
operation, provide the list of receipt handles to be deleted.
Amazon SQS automatically deletes messages that have been in a queue longer than
the queue’s configured message retention period. The default message retention
period is 4 days. However, you can set the message retention period to a value from
60 seconds to 1,209,600 seconds (14 days) by using the SetQueueAttributes action.
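Deletion by receipt handle can be sketched as follows (the handles here are made-up placeholders, and the dict is a stand-in for the queue):

```python
# Sketch of deleting messages by receipt handle, as DeleteMessage and
# DeleteMessageBatch do. Handles are illustrative placeholders.
toy_queue = {
    "handle-1": "first message",
    "handle-2": "second message",
    "handle-3": "third message",
}

def delete_message(receipt_handle):
    # Delete one message, identified by its receipt handle.
    toy_queue.pop(receipt_handle, None)

def delete_message_batch(receipt_handles):
    # Batch deletes accept up to 10 receipt handles per call.
    for handle in receipt_handles[:10]:
        delete_message(handle)

delete_message("handle-1")
delete_message_batch(["handle-2", "handle-3"])
```

Note that the receipt handle (not the MessageId) identifies a particular receipt of the message, which is why it is the key used for deletion.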
The following are the key takeaways from this section of the module:
• Developers use the SendMessage, ReceiveMessage, and DeleteMessage API
operations to add, retrieve, and delete queue messages.
• Amazon SQS makes a retrieved message invisible on the queue for the duration of
the visibility timeout.
• Amazon SQS can send failed records to a dead-letter queue for separate
processing.
• Short polling samples Amazon SQS servers for messages, while long polling queries
all servers.
Standard queues are the default queue type. Standard queues support a nearly
unlimited number of transactions per second (TPS) per action.
Because of the queue's highly distributed architecture (which allows this high
throughput), you need to account for the following two potential conditions related
to message delivery to your consumer(s).
1. Messaging order is not guaranteed. Standard queues provide best-effort
ordering. This ensures that messages are generally delivered in the same order as
they are sent, but some might be delivered out of order. If you use a standard
queue and have a target application that needs to process messages in the order
received, you need to include logic in your consumer application to reorder
messages before further processing; for example, using the timestamp of each
message to compare and order them.
2. More than one copy of a message might be delivered. Standard queues support
at-least-once delivery, not exactly-once delivery. To avoid processing the same
message more than once, your application code needs to check for duplicates and
only process a message the first time it is received. For example, you might add
code that checks your database for a unique value in the incoming event and
ignores it if it already exists in the database. The messageId is a unique identifier that
Amazon SQS generates. If you want to verify the uniqueness of a record on the
SQS queue, you could use the messageId attribute. Another scenario might be
that you actually need to verify the uniqueness of the payload. For example, if an
upstream producer sent the same record to Amazon SQS twice, each event would
have a unique messageID in Amazon SQS but would be the same payload from
your application’s perspective. You can avoid processing the same payload twice
using the md5OfBody attribute in the Amazon SQS event. This field represents a
unique hash of the payload.
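The payload-level deduplication described here can be sketched with a small consumer that tracks the MD5 of each body it has processed (an illustrative pattern; in production the seen set would live in a database or cache):

```python
import hashlib

# Sketch of at-least-once dedup: skip a payload whose MD5 has already
# been processed, even if it arrives under a different message ID.
seen_digests = set()
processed = []

def handle(message_body):
    digest = hashlib.md5(message_body.encode("utf-8")).hexdigest()
    if digest in seen_digests:
        return False  # duplicate payload: ignore it
    seen_digests.add(digest)
    processed.append(message_body)
    return True

first = handle("order-42 shipped")   # new payload: processed
second = handle("order-42 shipped")  # duplicate delivery: skipped
```

This mirrors using the md5OfBody attribute from the event rather than computing the digest yourself; either way, the payload is processed only once.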
To avoid having to write code to handle message ordering and deduplication where
order is critical and duplicates can't be tolerated, use a First-In-First-Out (FIFO) queue.
FIFO queues are designed to provide exactly-once processing, which helps you to
avoid sending duplicates to a queue. FIFO queues also guarantee message ordering.
FIFO queues support message groups, which allow multiple ordered message groups
within a single queue.
FIFO queues do have a more limited transaction per second (TPS) throughput than
standard queues, so only choose a FIFO queue if you need the features of a FIFO
queue.
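Message groups can be sketched as follows: order is preserved within each group, while different groups remain independent (an illustrative model, not the FIFO queue implementation):

```python
from collections import defaultdict

# Sketch of FIFO message groups: per-group order is preserved, and
# separate groups can be processed independently.
incoming = [
    ("account-A", "deposit"),
    ("account-B", "register"),
    ("account-A", "withdraw"),
    ("account-B", "enroll"),
]

groups = defaultdict(list)
for group_id, body in incoming:
    groups[group_id].append(body)  # arrival order kept within each group
```

Here a deposit is always seen before the matching withdrawal, and registration before enrollment, without either account's messages blocking the other's.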
• CreateQueue: Creates an SQS queue, including queue attributes. If you don't provide a value for an attribute, the queue is created with the default value for the attribute.
• GetQueueAttributes: Retrieves attributes of a queue, including those that help you estimate the resources required to process queue messages; for example, the approximate number of messages available to retrieve and how many are currently invisible.
You can use the AWS Management Console to create and configure your SQS queues,
and you might find it helpful to use the console to send test messages to your queue.
You can also create and configure queues using API operations, and set queue
parameters similar to some of the message parameters that you just learned about.
For example, you can set the WaitTimeSeconds on the queue itself rather than an
individual ReceiveMessage request.
• Set the message retention period to 3 days
• Set the maximum receive count to 1,000
This AWS CLI example creates a queue named MyQueue with the parameters in the
create-queue.json file.
The create-queue.json file specifies a redrive policy for failed records. The policy
sends failures to a dead-letter queue called MyDeadLetterQueue after 1,000
attempts (maxReceiveCount).
The parameters file also sets the message retention period for the source queue to
259,200 seconds or 3 days. If a message remains on the source queue for more than
3 days, it will be discarded regardless of the maxReceiveCount.
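A create-queue.json matching these values might look like the following sketch (the dead-letter queue ARN and account ID are placeholders, not the values from the lab):

```json
{
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
  "MessageRetentionPeriod": "259200"
}
```

Note that RedrivePolicy is itself a JSON string embedded inside the attributes document, which is why its quotes are escaped.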
• For identity and access management: Access to Amazon SQS requires credentials
that AWS can use to authenticate your requests. These credentials must have
permissions to access AWS resources, such as SQS queues and messages. Use AWS
Identity and Access Management (IAM) policies and Amazon SQS policies. Amazon
SQS has its own resource-based permissions system that uses policies that are
written in the same language that is used for IAM policies. Thus, you can achieve
similar results with Amazon SQS policies and IAM policies, and you might use them
in combination. An example is presented on the next slide. The main difference is
that you MUST use SQS policies to grant permissions to other AWS accounts.
For more information about using IAM and Amazon SQS policies, see the following
sections of the Amazon SQS Developer Guide:
• Overview of Managing Access in Amazon SQS:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDevelo
perGuide/sqs-overview-of-managing-access.html
• Using Custom Policies with the Amazon SQS Access Policy Language:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDevelo
perGuide/sqs-creating-custom-policies.html
• For data encryption: Encrypt your data using server-side encryption (SSE). With
SSE, you can transmit sensitive data in encrypted queues. SSE protects the
contents of messages in SQS queues by using keys that are managed by AWS KMS.
When you enable SSE on an SQS queue, Amazon SQS will encrypt an incoming
message before storing it on the queue and unencrypt it upon delivery to the
consumer. Note that the encryption occurs at a point in time and applies to
messages arriving after that point. For example, if messages 1–10 are on the
queue when you enable SSE, message 11 will be encrypted, but messages 1–10
will not be.
For more information, see the Encryption at Rest section of the Amazon SQS Developer Guide at
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-
side-encryption.html.
• For internetwork traffic privacy: If you use Amazon Virtual Private Cloud (Amazon VPC) to host
your AWS resources, you can establish a connection between your VPC and Amazon SQS. You can
use this connection to send messages to your SQS queues without crossing the public internet.
For more information, see the Internetwork Traffic Privacy section of the Amazon SQS Developer
Guide at https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-
internetwork-traffic-privacy.html.
For more information about Amazon SQS security, see the Amazon SQS Developer Guide at
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-
security.html.
IAM policy attached to IAM user Bob:
    Allow
    Actions: ReceiveMessage, SendMessage
    Resource: arn:aws:sqs:us-east-1:80398EXAMPLE:MyQueue

Amazon SQS policy attached to MyQueue:
    Allow who: Bob
    Actions: ReceiveMessage, SendMessage
    Resource: arn:aws:sqs:us-east-1:80398EXAMPLE:MyQueue
In this example, the IAM policy and Amazon SQS policy grant equivalent access to a
user named Bob who is in the account where the queue exists.
The IAM policy grants the permissions to use the Amazon SQS ReceiveMessage and
SendMessage actions for the queue called MyQueue in the AWS account. The policy
is attached to Bob.
The Amazon SQS policy also gives permissions to Bob to use the ReceiveMessage and
SendMessage actions for the same queue.
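For illustration, the Amazon SQS policy described above could be expressed as the following policy document. The helper function and account details mirror the example on the slide and are assumptions.

```python
import json

def my_queue_policy(account_id='80398EXAMPLE', user='Bob'):
    """The Amazon SQS queue policy from the example as a policy document."""
    return {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': f'arn:aws:iam::{account_id}:user/{user}'},
            'Action': ['sqs:ReceiveMessage', 'sqs:SendMessage'],
            'Resource': f'arn:aws:sqs:us-east-1:{account_id}:MyQueue',
        }],
    }

# The policy would be attached to the queue with something like:
# sqs.set_queue_attributes(QueueUrl=queue_url,
#                          Attributes={'Policy': json.dumps(my_queue_policy())})
```

Note the key difference from the IAM policy: the queue policy names a Principal, because it is attached to the resource rather than to the user.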
The following are the key takeaways from this section of the module:
• Standard queues provide nearly unlimited throughput but don't preserve message
order and might include duplicate messages.
• FIFO queues preserve order and provide exactly-once delivery but have more
limited throughput.
• Queue security includes managing access to queue resources, protecting data
stored in the queue, and limiting exposure of messages to the internet.
Amazon Simple Notification Service (Amazon SNS)

Fully managed messaging service with pub/sub functionality for many-to-many messaging between distributed systems, microservices, and event-driven applications
• Offloads message filtering logic from your subscriber systems and message routing logic from your publisher systems
• Reliably delivers messages and provides failure management features (retries, dead-letter queues)
• Scales dynamically
• Offers a FIFO topics option, which works with FIFO SQS queues to preserve message order
Amazon SNS includes substantial retry policies to limit the potential for delivery
failures and provides a dead-letter queue option for messages that continue to fail.
Amazon SNS also offers a First-In-First-Out (FIFO) topics option. You can use this
option with FIFO SQS queues to ensure message ordering without writing a lot of
custom code to manage message ordering and deduplication.
An SNS topic is a logical access point, which acts as a communication channel. Instead
of including a specific destination address in each message, a publisher sends a
message to the topic. Amazon SNS matches the topic to a list of subscribers for that
topic. The service then delivers the message to each subscriber.
Each topic has a unique name, which identifies the Amazon SNS endpoint for
publishers to post messages and subscribers to register for notifications.
A publisher can send messages to topics that they have created or to topics they have
permissions to publish to. When you create an SNS topic, you can control access to it
by defining policies that determine which publishers and subscribers can
communicate with the topic.
To receive messages that are published to a topic, you must subscribe an endpoint to
the topic. When you subscribe an endpoint to a topic, the endpoint begins to receive
messages published to the associated topic. Subscribers receive all messages that are
published to the topics that they subscribe to, and all subscribers to a topic receive
the same messages. Amazon SNS formats the message based on each endpoint type.
For more information about using Amazon SNS for application-to-application (A2A)
messaging, see the Amazon SNS Developer Guide at
https://docs.aws.amazon.com/sns/latest/dg/sns-system-to-system-messaging.html.
When a message must be processed by more than one consumer, you can combine
pub/sub messaging with a message queue in a fanout design pattern. In a fanout
pattern, you use an SNS topic to receive a message that is then pushed out to
multiple subscribers for parallel processing.
In the example on the slide, SQS queues subscribe to the "new-order" topic. When a
message is pushed from that topic, each queue gets an identical message but
performs its own asynchronous processing. In this example, one queue hands off
messages to an order fulfillment application, and the other queue interacts with a
data warehousing application.
You could also use fanout to replicate data that is sent to your production
environment to your development environment. For example, you could subscribe
yet another queue to the same topic for new incoming orders. Then, by attaching this
new queue to your development environment, you could continue to improve and
test your application by using data received from your production environment. In
this example, the DEV queue is connected to a Lambda function that anonymizes data
before writing it to an Amazon DynamoDB table in the development environment.
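The fanout behavior described above can be sketched with a minimal in-memory model. The Topic class and queue names here are illustrative stand-ins, not an AWS API.

```python
from collections import deque

class Topic:
    """Minimal stand-in for an SNS topic in the fanout pattern."""
    def __init__(self):
        self._queues = []

    def subscribe(self, queue):
        self._queues.append(queue)

    def publish(self, message):
        # Every subscribed queue receives its own identical copy.
        for q in self._queues:
            q.append(dict(message))

new_order = Topic()
fulfillment, warehouse, dev = deque(), deque(), deque()
for q in (fulfillment, warehouse, dev):
    new_order.subscribe(q)
new_order.publish({'order_id': 1251, 'item': 'espresso beans'})
```

Each queue then drains its copy at its own pace, which is what allows the fulfillment, warehousing, and development consumers to process in parallel without affecting one another.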
This slide provides an example of how you can use fanout for image processing.
When a user uploads images to an S3 bucket for image processing, a message about
the event is published to an SNS topic. Multiple SQS queues are subscribed to that
topic. When the SQS queues receive the message, they each invoke a Lambda
function with the payload of the published message. The Lambda functions process
the images (for example, generate thumbnail images, size images for mobile
applications, or size images for web applications) and send the results to another S3
bucket.
For more information about fanout and other common Amazon SNS scenarios, see
the Amazon SNS Developer Guide at https://docs.aws.amazon.com/sns/latest/dg/sns-
common-scenarios.html.
When a publisher sends a message to a topic, Amazon SNS returns a message ID and
then attempts to deliver the message to all subscriber endpoints.
Amazon SNS defines a delivery policy for each delivery protocol. The delivery policy
defines how Amazon SNS retries the delivery of messages when server-side errors
occur (when the system that hosts the subscribed endpoint becomes unavailable).
When the delivery policy is exhausted, Amazon SNS stops retrying the delivery and
discards the message—unless a dead-letter queue is attached to the subscription.
For more information about delivery protocols and policies for each endpoint type,
see the Amazon SNS Developer Guide at
https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html.
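As a sketch, a dead-letter queue can be attached to a subscription by setting its RedrivePolicy attribute. The helper function and ARNs below are illustrative assumptions.

```python
import json

def redrive_policy(dlq_arn):
    """Value for the RedrivePolicy subscription attribute."""
    return json.dumps({'deadLetterTargetArn': dlq_arn})

# Applied to a subscription with something like:
# sns.set_subscription_attributes(
#     SubscriptionArn=subscription_arn,
#     AttributeName='RedrivePolicy',
#     AttributeValue=redrive_policy('arn:aws:sqs:us-east-1:123456789012:my-dlq'))
```

With this in place, messages that exhaust the delivery policy are moved to the dead-letter queue instead of being discarded.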
Common Amazon SNS API actions:
• Subscribe: Prepares to subscribe an endpoint. Inputs: subscriber's endpoint, protocol, ARN of topic.
• ConfirmSubscription: Verifies an endpoint owner's intent to receive messages. Inputs: ARN of topic, token sent to endpoint.
• Publish: Sends a message to all of a topic's subscribed endpoints. Inputs: ARN of topic, message, message attributes (optional), message structure: json (optional), subject (optional). Returns: message ID.
• DeleteTopic: Deletes a topic and all of its subscriptions. Input: ARN of topic.
Developers should be familiar with the following common API calls for Amazon SNS:
• CreateTopic: Creates a topic where notifications can be published. If a requester
already owns a topic with the specified name, that topic's Amazon Resource Name
(ARN) is returned without creating a new topic.
• Subscribe: Prepares to subscribe to an endpoint by sending a confirmation
message to the endpoint. If the service was able to create a subscription
immediately (without requiring endpoint owner confirmation), the response of the
Subscribe request includes the ARN of the subscription. To actually create a
subscription, the endpoint owner must call the ConfirmSubscription action with
the token from the confirmation message.
• ConfirmSubscription: Verifies an endpoint owner's intent to receive messages by
validating the token that was sent to the endpoint by an earlier Subscribe action. If
the token is valid, the action creates a new subscription and returns its ARN.
• Publish: Sends a message to all of a topic's subscribed endpoints. When a message
ID is returned, the message has been saved, and Amazon SNS will attempt to
deliver it to the topic's subscribers.
• DeleteTopic: Deletes a topic and all of its subscriptions. Deleting a topic might
prevent some messages that were previously sent to the topic from being
delivered to subscribers.
By default, an SNS topic subscriber receives every message that is published to the
topic. To receive a subset of the messages, a subscriber must assign a filter policy to
the topic subscription. A filter policy is a simple JSON object that contains attributes
that define which messages the subscriber receives. When you publish a message to
a topic, Amazon SNS compares the message attributes to the attributes in the filter
policy for each of the topic's subscriptions. If any of the attributes match, Amazon
SNS sends the message to the subscriber. Otherwise, Amazon SNS skips the
subscriber without sending the message. If a subscription doesn't have a filter policy,
the subscription receives every message that is published to its topic.
In this example, an online buyer visits a website, cancels one order, and places
another order. These messages are published to the SNS topic Shopping Events. The
Payment SQS queue subscriber has a filter policy applied so that it receives only the
messages about the orders. The Lambda function subscriber has a filter policy so that
it receives only the messages indicating that the product page was visited.
For more information, see Amazon SNS Message Filtering in the Amazon SNS
Developer Guide at https://docs.aws.amazon.com/sns/latest/dg/sns-message-
filtering.html.
topic_arn = sns.create_topic(
    Name='ShoppingEvents'  # SNS topic
)['TopicArn']

sns.set_subscription_attributes(
    SubscriptionArn=search_engine_subscription_arn,
    AttributeName='FilterPolicy',
    # Attribute in filter policy
    AttributeValue='{"event_type": ["product_page_visited"]}'
)
This slide highlights an example of a topic with a filter policy and example attributes
that might be used for filtering messages.
sns.publish(
    TopicArn=topic_arn,
    Subject='Product Visited #1251',
    Message=message,
    MessageAttributes={
        # Message attribute that matches the attribute in the filter policy
        'event_type': {
            'DataType': 'String',
            'StringValue': 'product_page_visited'
        }
    }
)
When you publish a message to a topic, Amazon SNS compares the message
attributes to the attributes in the filter policy for each of the topic's subscriptions. If
any of the attributes match, Amazon SNS sends the message to the subscriber.
In the ecommerce example, the message attribute event_type with the value
product_page_visited matches only the filter policy that is associated with the search
engine subscription. Therefore, only the Lambda function that is subscribed to the
SNS topic is notified about this navigation event, and the Payment SQS queue is not
notified.
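A simplified local model of this matching logic is sketched below. It handles string equality only; real SNS filter policies also support operators such as anything-but, prefix, and numeric matching. The policy and attribute names are taken from the example.

```python
def matches(filter_policy, message_attributes):
    """True when every attribute named in the policy is present on the
    message with one of the allowed values (string matching only)."""
    for name, allowed in filter_policy.items():
        attr = message_attributes.get(name, {})
        if attr.get('StringValue') not in allowed:
            return False
    return True

search_policy = {'event_type': ['product_page_visited']}
payment_policy = {'event_type': ['order_placed', 'order_cancelled']}
visit = {'event_type': {'DataType': 'String',
                        'StringValue': 'product_page_visited'}}
```

Running `matches` against both policies shows why only the search engine subscription receives the navigation event.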
You have detailed control over which endpoints a topic allows, who is able to publish
to a topic, and under what conditions:
• For identity and access management: Amazon SNS is integrated with IAM so that
you can specify which Amazon SNS actions a user in your AWS account can
perform with your Amazon SNS resources. You can specify a particular topic in the
policy. For example, when you create an IAM policy, you can use variables that give
certain users permissions to use the Publish action with specific topics in your AWS
account. You can use an IAM policy to restrict your users' access to Amazon SNS
actions and topics. An IAM policy can restrict access only to users within your AWS
account, not to other AWS accounts. You can use an Amazon SNS policy with a
particular topic to restrict who can work with that topic (for example, who can
publish messages to it and who can subscribe to it). Amazon SNS policies can give
access to other AWS accounts or to users within your own AWS account.
• For data encryption: Encrypt your data using server-side encryption (SSE). SSE
protects the contents of messages in SNS topics by using keys that are managed in
AWS KMS.
• For internetwork traffic privacy: If you use Amazon VPC to host your AWS
resources, you can establish a private connection between your VPC and Amazon
SNS. With this connection, you can publish messages to your SNS topics without
sending them through the public internet.
For more information about Amazon SNS security, see the Amazon SNS Developer
Guide at https://docs.aws.amazon.com/sns/latest/dg/sns-security.html.
Demonstration:
Working with
Amazon Messaging
Services
There is a video demonstration available for this topic. You can find this video within
the module 7 section of the course with the title: Demo Working with Amazon
Messaging Services.
If you are unable to locate this video demonstration, please reach out to your
educator for assistance.
The following are the key takeaways from this section of the module:
• Developers publish messages to SNS topics, and subscribers to that topic get
copies of all published messages.
• With attribute filtering, subscribers receive only relevant messages.
• Subscriber endpoints include both application-to-application (A2A) and
application-to-person (A2P) types.
• Amazon SNS security includes managing access to SNS resources, protecting data
sent to topics, and limiting exposure of messages to the internet.
Amazon Kinesis Data Streams is a fully managed data streaming service. Kinesis Data
Streams can continuously capture gigabytes of data per second from hundreds of
thousands of sources such as website clickstreams, database event streams, financial
transactions, social media feeds, IT logs, and location-tracking events. The data
collected is available in milliseconds to enable real-time analytics use cases.
Kinesis Data Streams reduces the overhead of building a streaming application with
tools including the AWS SDK, the Kinesis Client Library (KCL), connectors, and agents.
Kinesis Data Streams also has built-in integrations to AWS Lambda, Amazon Kinesis
Data Analytics, Amazon Kinesis Data Firehose, and AWS Glue Schema Registry to
simplify setting up a consumer to read records from the stream.
Use cases: log and event data collection, real-time analytics, mobile data capture, and gaming data feeds.
A producer collects data and puts it onto the stream; for example, a web server that
sends log data to a Kinesis data stream is a producer. You can build producers for
Kinesis Data Streams by using the Kinesis Producer Library (KPL) or the AWS SDK for
Java.

Kinesis Data Streams stores a unit of data as a data record. A data stream represents
a group of data records. The data stream ingests a large amount of data in real time,
durably stores the data, and makes the data available for consumption. The data
records in a data stream are distributed into shards. After a record is added to the
stream, the record is available for a specified retention period, which you can set per
stream. The Kinesis Data Streams service adds shards to scale horizontally.

Each shard holds a uniquely identified sequence of data records, and each data record
has a sequence number that Kinesis assigns. A data record also includes a partition
key and a data blob. The partition key is used to group data by shard within a stream,
and the sequence number is unique per partition key within its shard. When you
create a stream, you specify the number of shards for the stream. The total capacity
of a stream is the sum of the capacities of its shards. For writes, a shard can support
up to 1,000 records per second or up to a maximum of 1 MB per second. For reads, a
shard can support up to 5 transactions per second or up to a maximum of 2 MB per
second.

A consumer is an application that polls the data stream and processes the data from a
Kinesis data stream. You can have multiple consumers on a data stream. When you
have multiple consumers, they share the read throughput of the stream among them.
An option called enhanced fan-out gives each consumer its own allotment of
read throughput.
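The mapping from partition key to shard can be sketched as follows: Kinesis takes the MD5 hash of the partition key as a 128-bit integer and routes the record to the shard whose hash key range contains it. A two-shard stream with evenly split ranges is assumed here for illustration.

```python
import hashlib

NUM_SHARDS = 2
HASH_SPACE = 2 ** 128  # MD5 produces a 128-bit hash key

def shard_for(partition_key):
    """Route a partition key to a shard index by its MD5 hash-key range."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * NUM_SHARDS // HASH_SPACE
```

Because the hash is deterministic, all records with the same partition key land on the same shard, which is what preserves per-key ordering.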
Note that, unlike queue consumers, stream consumers do not delete records from the stream.
Instead, each consumer must maintain a pointer of where they are on the stream.
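A toy model of this difference (illustrative classes, not the Kinesis API): records stay on the stream, and each consumer advances its own checkpoint.

```python
class Stream:
    """Records persist for the retention period; reads do not remove them."""
    def __init__(self):
        self.records = []

    def put(self, record):
        self.records.append(record)

class Consumer:
    def __init__(self, stream):
        self.stream = stream
        self.checkpoint = 0  # next position this consumer will read

    def poll(self, limit=10):
        batch = self.stream.records[self.checkpoint:self.checkpoint + limit]
        self.checkpoint += len(batch)
        return batch

stream = Stream()
for i in range(3):
    stream.put({'seq': i})
a, b = Consumer(stream), Consumer(stream)
```

Contrast this with an SQS queue, where a processed message is deleted and therefore seen by only one consumer.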
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into
data lakes, data stores, and analytics services. The service can capture, transform,
and deliver streaming data to Amazon S3, Amazon Redshift, Amazon ES, generic HTTP
endpoints, and service providers such as Datadog, New Relic, MongoDB, and Splunk.
With Kinesis Data Firehose, you can automatically convert the incoming data to open,
standards-based formats such as Apache Parquet and Apache ORC before the
data is delivered.
With Kinesis Data Firehose, you don't need to write consumer applications or manage
shards. You configure your data producers to send data to Kinesis Data Firehose, and
the service automatically delivers the data to a destination that you specify.
With Kinesis Data Analytics, you can do real-time analysis by using SQL queries before
persisting the data. The service is designed for near-real-time queries, and you can
aggregate data across a sliding window.
With Kinesis Data Analytics, you write SQL statements or applications, and then
upload them into a Kinesis Data Analytics application to perform analysis on the data
in the stream. Kinesis Data Analytics also supports using a Lambda function to
preprocess the data before your SQL runs.
Kinesis Data Analytics applications can enrich data by using reference sources,
aggregate data over time, or use machine learning to find data anomalies. Then you
can write the analysis results to another Kinesis data stream, a Kinesis Data Firehose
delivery stream, or a Lambda function.
In this example architecture, customer interaction logs from multiple applications and
microservices running across Amazon EC2, Amazon Elastic Container Service (Amazon
ECS), and Lambda are all producers that add records to a Kinesis data stream.
Consumers of the stream include Kinesis Data Firehose and Kinesis Data Analytics.
Kinesis Data Firehose delivers the streaming data to two storage destinations:
Amazon ES and Amazon S3. The application owners can use Amazon ES for
operational insights into their applications. Business users can use Amazon Athena or
a similar query tool to interact with the data stored on Amazon S3 without building a
complex Extract, Transform, and Load (ETL) processor.
At the same time, Kinesis Data Analytics consumes the stream and generates metrics,
percentiles, and derived values. These values are then emitted to Amazon
CloudWatch where they are part of an overall monitoring solution.
The Kinesis API provides actions related to managing the stream itself including
creating or deleting a stream, or modifying the number of shards on the stream. The
API also provides actions that producers use to write records to the stream
(PutRecord) and that consumers use to read from the stream (GetRecords).
You might run some of the basic commands from the AWS CLI when learning about
streams or when testing. You can manage some stream tasks and AWS integrations
from the AWS Management Console. However, you will need other tools to build
custom producer or consumer applications. The Kinesis Producer Library (KPL) and
Kinesis Client Library (KCL) were built for this purpose.
Get the starting position to read the stream, then use the output value to get records
This slide depicts a few examples of commands that you might use from the AWS CLI
to create and do a basic test on a stream.
After you create the stream, you can use the describe-stream-summary command to
verify that the stream is in an "active" state before you try to use the stream.
To get records off the stream, first use the get-shard-iterator command to get an
identifier for where to start processing stream records. Then, use the get-records
command with the shard-iterator value to get records off the stream.
To build your producer and consumer applications, you would typically use the KPL
and KCL, or possibly the AWS SDK.
Both the KPL and KCL abstract stream management tasks so that you can focus on
application logic. If you need to build custom producer and consumer applications,
use the KPL and KCL unless your use case has a specific need that the libraries cannot
meet.
The KPL is a highly configurable library that helps you write to a Kinesis data stream.
The library acts as an intermediary between your producer application code and the
Kinesis Data Streams API actions. The KPL performs the following primary tasks:
• Writes to one or more Kinesis data streams with an automatic and configurable
retry mechanism
• Collects records and uses PutRecords to write multiple records to multiple shards
per request
• Aggregates user records to increase payload size and improve throughput
• Integrates seamlessly with the KCL to deaggregate batched records on the
consumer
• Submits CloudWatch metrics on your behalf to provide visibility into producer
performance
The KCL helps you consume and process data from a Kinesis data stream by taking
care of many of the complex tasks that are associated with distributed computing.
These include load balancing across multiple consumer application instances,
responding to consumer application instance failures, checkpointing processed
records, and reacting to resharding.
You can build producer and consumer applications by using the AWS SDK to interact
directly with the Kinesis Data Streams API. However, for most use cases, use the KPL
or KCL to remove the complexities of handling all of the subtasks of stream
processing so that you can focus on your application's business logic.
As noted earlier in this module, although both queues and streams are messaging
services that use polling to retrieve messages from an interim store, queues and
streams are suited to different types of data patterns, and each process the data a bit
differently.
The following are the key takeaways from this section of the module:
• With Kinesis Data Streams, you can ingest, buffer, and process streaming data in
real time.
• Kinesis Data Firehose and Kinesis Data Analytics were designed to simplify
common data streaming use cases.
• Producers put records on to a stream where the records are stored in shards, and
consumers get records off the stream for processing.
• The Kinesis Producer Library (KPL) and Kinesis Client Library (KCL) abstract stream
interactions for developers.
Lab 10.1:
Implementing a
Messaging System
Using Amazon SNS
and Amazon SQS
You will now complete Lab 10.1: Implementing a Messaging System Using Amazon
SNS and Amazon SQS.
Lab: Tasks
1. Preparing the development environment
2. Configuring the Amazon SQS dead-letter queue
3. Configuring the Amazon SQS queue
4. Configuring the Amazon SNS topic
5. Linking Amazon SQS and Amazon SNS
6. Testing message publishing
7. Configuring the application to poll the queue
The diagram summarizes what you will have built after you complete the lab.
***For accessibility: Inventory updates are sent with an SNS topic to an SQS queue.
Data then flows to a dead-letter queue or an Aurora Serverless database. End
description.
~ 90 minutes
Lab debrief:
Key takeaways
After you complete the lab, your educator might choose to lead a conversation about
the key takeaways.
Module wrap-up
Module 10: Developing with Messaging Services
It's now time to review the module and wrap up with a knowledge check and a
discussion of a practice certification exam question.
Module summary
In summary, in this module, you learned how to do the following:
• Illustrate how messaging services, including queues, pub/sub messaging, and
streams, support asynchronous processing
• Describe Amazon SQS
• Send messages to an SQS queue
• Describe Amazon SNS
• Subscribe an SQS queue to an SNS topic
• Describe how Kinesis can be used for real-time analytics
• The order
It is important to fully understand the scenario and question being asked before even
reading the answer choices. Find the keywords in this scenario and question that will
help you find the correct answer.
Choice Response
D Configure Amazon ECS with a cluster of EC2 instances that run Docker containers.
Now that we have bolded the keywords in this scenario, let us look at the answers.
Choice Response
D Configure Amazon ECS with a cluster of EC2 instances that run Docker containers.
Look at the answer choices and rule them out based on the keywords that were
previously highlighted.
Kinesis Data Streams does maintain record order when processing records on a
stream, but the description of individual transaction processing implies that you need
a queue. To ensure in-order processing, you need a FIFO queue instead of a standard
queue.
Thank you