
AWS Application Integration & Architectural Best Practices

Module 7
AWS Application Integration

Application integration is the process of getting independently built software systems to work together without manual intervention.

Modern application design encourages the flexible exchange of data between applications for increased efficiency, modularity, and reusability.

Application integration allows your developers to build applications that reuse existing services and systems. This way, they can do more with less coding.

It also facilitates automation, as applications can communicate with each other for complex enterprise workflows.
What are the benefits of application integration?

Increases productivity
People are naturally more productive when they don’t need to
switch between different applications. Integrating data and
functionality from other apps allows users to do more tasks in one
application, removing the need for context-switching.

Supports data integration

One of the biggest barriers to efficiency is the data silos (repositories of data controlled by a single department) that exist across many different applications in all types of systems. It can be extremely difficult to combine data from disparate components in an enterprise data architecture.
What are the benefits of application integration?

Enhances customer appeal

Many end users expect applications and services to interoperate with one another.

You can integrate popular applications with your own, such as adding email or social media login methods. You can then meet the usability expectations of a larger group and increase your customer base.

Lowers development costs

To build software, developers use libraries and frameworks that perform complex functions, so they don't have to write that code themselves.
Amazon SQS (Simple Queue Service)

SQS is a web service that lets applications quickly and reliably queue messages. A message is generated by one component of an application to be consumed by another component.

The queue is therefore a temporary repository for messages that are awaiting processing.

Once a message is processed, it is deleted from the queue. SQS essentially adds messages to a queue, and consumers then pick those messages up from the queue.

A queue is a place where you can store your messages until they are extracted or they expire.
How does AWS SQS work?

SQS offers asynchronous, message-based communication between two services. Without SQS, communication between two services relies on direct API calls; with SQS, the event producer notifies the consumer asynchronously.
How does AWS SQS work?

Amazon Simple Queue Service (SQS) lets you send, store, and receive messages between software components at any volume, without losing messages and without requiring other services to be available. Amazon SQS is, in essence, a distributed queue system.

Messages in the queue are typically in JSON format, and the queue is a holding pool of messages with a limit of 256 KB per message. Publishers publish messages into the queue, and consumers process and remove them.
Amazon SQS architecture

Distributed queues
There are three main parts in a distributed messaging system: the
components of your distributed system, your queue (distributed on
Amazon SQS servers), and the messages in the queue.
Amazon SQS architecture

In the following scenario, your system has several producers (components that send messages to the queue) and consumers (components that receive messages from the queue).

The queue (which holds messages A through E) redundantly stores the messages across multiple Amazon SQS servers.
Message lifecycle

 A producer (component 1) sends message A to a queue, and the message is distributed across the Amazon SQS servers redundantly.

 When a consumer (component 2) is ready to process messages, it consumes messages from the queue, and message A is returned. While message A is being processed, it remains in the queue and isn't returned to subsequent receive requests for the duration of the visibility timeout.

 The consumer (component 2) deletes message A from the queue to prevent the message from being received and processed again when the visibility timeout expires.
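The lifecycle above maps onto three SQS API calls: send, receive, and delete. Below is a minimal sketch using boto3, the AWS SDK for Python; the queue URL, message body, and timeout values are illustrative assumptions, not part of the original slides.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder queue

# Producer: send message A to the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer: receive the message; it stays in the queue but is hidden
# from other consumers for the visibility timeout (30 seconds here).
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           VisibilityTimeout=30, WaitTimeSeconds=10)

for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    # Delete the message so it is not redelivered after the
    # visibility timeout expires.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Long polling (WaitTimeSeconds) reduces empty responses, and the visibility timeout gives the consumer a window to finish its work before the message becomes visible to other consumers again.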
Amazon SNS (Simple Notification Service)

Amazon Simple Notification Service (SNS) allows communication between distributed applications and services. It can broadcast messages to multiple subscribers in real time.

This is because it operates on a publish-subscribe model, in which a publisher sends messages and subscribers receive them.

You can use it for various purposes, such as pushing critical notifications to end users or updating multiple components of a microservices architecture simultaneously.
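As a sketch of this publish-subscribe model, the boto3 snippet below creates a topic, adds a subscriber, and broadcasts one message; the topic name and email address are hypothetical placeholders.

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Publisher side: create a topic (idempotent) and broadcast a message.
topic = sns.create_topic(Name="order-events")  # hypothetical topic name
topic_arn = topic["TopicArn"]

# Subscriber side: every confirmed subscription receives each message.
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="ops@example.com")  # placeholder address

sns.publish(TopicArn=topic_arn,
            Subject="Order shipped",
            Message='{"order_id": 42, "status": "shipped"}')
```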
Amazon SWF (Simple Workflow Service)

Amazon Simple Workflow Service (Amazon SWF) is a task-based API that makes it easy to coordinate work across distributed application components.

Here, a task represents a logical unit of work performed by the application. SWF manages inter-task dependencies, scheduling, and concurrency according to the logical flow of the application.

With SWF, you have full control over task implementation and can coordinate tasks without thinking about the underlying complexities; tracking and maintaining their state is no longer a worry. Workers can be implemented to perform the tasks.
Amazon SWF (Simple Workflow Service)

Tasks can be long-running or short-lived, may time out, and may require restarts.

Amazon SWF stores the tasks and assigns them to workers when required. It also maintains their state and tracks their progress until completion.

Amazon SWF supports a variety of application requirements and is suitable for a wide range of use cases, including web application back-ends, analytics pipelines, and more.
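As a rough sketch of how an application hands work to SWF, the boto3 snippet below starts a workflow execution; the domain, workflow type, and task list names are hypothetical and would have to be registered with SWF beforehand.

```python
import boto3

swf = boto3.client("swf", region_name="us-east-1")

# Kick off a workflow execution; SWF stores the tasks, assigns them to
# workers, and tracks state until completion. The domain, workflow type,
# and task list below are hypothetical and must be registered first.
swf.start_workflow_execution(
    domain="orders",                                  # hypothetical domain
    workflowId="order-42",                            # caller-chosen ID
    workflowType={"name": "ProcessOrder", "version": "1.0"},
    taskList={"name": "order-task-list"},
    input='{"order_id": 42}',
    executionStartToCloseTimeout="3600",              # seconds, as a string
    taskStartToCloseTimeout="300",
    childPolicy="TERMINATE",
)
```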
Amazon SWF (Simple Workflow Service)

The primary function of Amazon SWF is to control the workflow of your application. It serves as a coordination center for your application's many components. Its most important functions are:

 It maintains the application's state.

 It supervises the execution and progress of workflows.

 It holds and dispatches tasks.

 It defines which tasks each of your application hosts will carry out.
AWS Step Functions

AWS Step Functions is a serverless orchestration service that lets developers create and manage multi-step application workflows in the cloud.

By using the service's drag-and-drop visual editor, teams can easily assemble individual microservices into unified workflows.

At each step of a given workflow, Step Functions manages input, output, error handling, and retries, so that developers can focus on higher-value business logic for their applications.
How AWS Step Functions Works

AWS Step Functions consists of the following main components:

State Machine
In computer science, a state machine is defined as a type of
computational device that is able to store various status values and
update them based on inputs.

AWS Step Functions builds upon this very concept and uses the
term state machine to refer to an application workflow.

Developers can build a state machine in Step Functions with JSON files by using the Amazon States Language.
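As an illustrative sketch, the snippet below writes a small Amazon States Language definition (a Wait state followed by a Lambda-backed Task state) and registers it with boto3; the state machine name, Lambda ARN, and IAM role ARN are placeholders.

```python
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# A minimal Amazon States Language definition: wait five seconds,
# then invoke a Lambda function as a service task.
definition = """
{
  "StartAt": "WaitABit",
  "States": {
    "WaitABit": {"Type": "Wait", "Seconds": 5, "Next": "ProcessOrder"},
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
      "End": true
    }
  }
}
"""

# Register the workflow; the IAM role ARN is a placeholder.
sfn.create_state_machine(
    name="order-workflow",
    definition=definition,
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",
)
```

Running the workflow is then a single start_execution call with the state machine's ARN and an input document.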
How AWS Step Functions Works

State
A state represents a step in your workflow. States can perform a
variety of functions:

 Perform work in the state machine (Task state; see more information below)
 Choose between different paths in a workflow (Choice state)
 Stop the workflow with failure or success (a Fail or Succeed state)
 Pass output or some fixed data to another state (Pass state)
 Pause the workflow for a specified amount of time (Wait state)
 Begin parallel branches of execution (Parallel state)
 Repeat execution for each item of input (Map state)
How AWS Step Functions Works

Task State
A task state (typically just referred to as a task) within your state
machine is used to complete a single unit of work. Tasks can be
used to call the API actions of over two hundred Amazon and AWS
services. Two types of tasks can be included in your workflows:

 Activity tasks
 Service tasks
How AWS Step Functions Works

Activity tasks

Activity tasks let you connect a step in your workflow to a batch of code that is running elsewhere.

This external batch of code, called an activity worker, polls Step Functions for work, asynchronously completes the work using your code, and returns results.

Activity tasks are common in asynchronous workflows in which some human intervention is required (to verify a user account, for example).
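A minimal activity-worker loop might look like the following boto3 sketch; the activity ARN is a placeholder and the actual verification logic is elided.

```python
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")
activity_arn = "arn:aws:states:us-east-1:123456789012:activity:verify-account"  # placeholder

while True:
    # Long-poll Step Functions for work assigned to this activity.
    task = sfn.get_activity_task(activityArn=activity_arn, workerName="worker-1")
    token = task.get("taskToken")
    if not token:
        continue  # poll timed out with no work; poll again

    # ... run your own code on task["input"] here ...

    # Report the result so the workflow can move to the next state.
    sfn.send_task_success(taskToken=token, output='{"verified": true}')
```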
How AWS Step Functions Works

Service tasks

Service tasks let you connect steps in your workflow to specific AWS services.

Step Functions sends requests to other services, waits for the task to complete, and then continues to the next step in the workflow.

They are well suited to automated steps, such as executing a Lambda function.
AWS Well-Architected Framework

AWS Well-Architected Framework is a tool that uses best practices to find improvements for your applications in the cloud. It helps you in five areas:

Operational excellence

Security

Reliability

Performance efficiency

Cost optimization

Those areas are also called the five pillars of the AWS Well-Architected Framework.
AWS Well-Architected Framework

Operational Excellence Pillar

The operational excellence pillar is the capacity to manage and monitor systems.

It improves supporting processes and procedures.

It includes:

 Making small and reversible changes

 Predicting system disruptions

 Performing operations as code

 Keeping documentation up to date


AWS Well-Architected Framework

Security Pillar

 The security pillar consists of protecting systems and data.

 Well-Architected Framework applies security at all levels.

 It protects both stored and in-transit data.

 When possible, best security practices are automatically applied.


AWS Well-Architected Framework

Reliability Pillar

 The reliability pillar is the ability to minimize disruptions of the system.

 It obtains computing resources as needed.

 It entails boosting system availability.

 It automatically recovers the system from disruptions.


AWS Well-Architected Framework

Performance Efficiency Pillar

 The performance efficiency pillar is the capacity to use computing resources efficiently.

 It maintains that efficiency as demand changes.


AWS Well-Architected Framework

Cost Optimization Pillar

 The cost optimization pillar helps you run your cloud services at the lowest price points.

Cost optimization involves operations such as:

 Analyzing your costs

 Using managed services

 Making sure you only pay for what you use


Design Principles for AWS Cloud Architectures

Organize teams around business outcomes: The ability of a team to achieve business outcomes comes from leadership vision, effective operations, and a business-aligned operating model. Leadership should be fully invested and committed to a CloudOps transformation, with a suitable cloud operating model that incentivizes teams to operate in the most efficient way and meet business outcomes.

Implement observability for actionable insights: Establish key performance indicators (KPIs) and leverage observability telemetry to make informed decisions and take prompt action when business outcomes are at risk.
Design Principles for AWS Cloud Architectures

Safely automate where possible: You can define your entire workload and its operations (applications, infrastructure, configuration, and procedures) as code, and update it. You can then automate your workload's operations by initiating them in response to events.

In the cloud, you can employ automation safely by configuring guardrails, including rate control, error thresholds, and approvals. Through effective automation, you can achieve consistent responses to events, limit human error, and reduce operator toil.
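To make "workload as code" concrete, the boto3 sketch below creates a CloudFormation stack from a minimal template that declares one SQS queue; the stack and queue names are hypothetical.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Workload defined as code: a minimal CloudFormation template that
# declares one SQS queue. Stack and queue names are placeholders.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OrderQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: order-queue
"""

cfn.create_stack(StackName="order-stack", TemplateBody=template)
```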
Design Principles for AWS Cloud Architectures

Make frequent, small, reversible changes: Design workloads that are scalable and loosely coupled to permit components to be updated regularly.

Automated deployment techniques, together with smaller, incremental changes, reduce the blast radius and allow for faster reversal when failures occur.

This increases confidence to deliver beneficial changes to your workload while maintaining quality and adapting quickly to changes in market conditions.
Design Principles for AWS Cloud Architectures

Refine operations procedures frequently: As you use operations procedures, look for opportunities to improve them. Hold regular reviews and validate that all procedures are effective and that teams are familiar with them. Where gaps are identified, update procedures accordingly.

Anticipate failure: Maximize operational success by driving failure scenarios to understand the workload's risk profile and its impact on your business outcomes. Test the effectiveness of your procedures and your team's response against these simulated failures.
Design Principles for AWS Cloud Architectures

Implement a strong identity foundation: Implement the principle of least privilege and enforce separation of duties with appropriate authorization for each interaction with your AWS resources.

Maintain traceability: Monitor, alert, and audit actions and changes to your environment in real time. Integrate log and metric collection with systems to automatically investigate and take action.

Apply security at all layers: Apply a defense-in-depth approach with multiple security controls. Apply them to all layers (for example, edge of network, VPC, load balancing, every instance and compute service, operating system, application, and code).
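As a small illustration of least privilege, the boto3 sketch below creates an IAM policy that permits exactly one action on exactly one resource; the policy name and queue ARN are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Least privilege: this policy allows only sqs:SendMessage on a single
# queue. The policy name and queue ARN are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue",
    }],
}

iam.create_policy(
    PolicyName="send-to-order-queue-only",
    PolicyDocument=json.dumps(policy_document),
)
```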
Scalability and Elasticity

Cloud elasticity refers to the ability to scale computing resources in the cloud up or down based on actual demand. This ability to adapt to increased (or decreased) usage allows you to provide resources when needed and avoid costs when they are not.

Scalability is the ability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged in various ways. A scalable solution can be scaled up by adding processing power, storage capacity, and bandwidth.

A cloud can increase or decrease its resource capacity dynamically. With scalability, there is no need to provision new hardware, install operating systems and software, or make any other changes to the running system. Cloud scalability allows a cloud operator to grow or shrink computing resources as needed.
AWS Scalability

AWS Auto Scaling is a feature of AWS that allows you to scale your EC2 instances automatically, based on a series of triggers. This can be especially useful if you have an application that requires a lot of resources at peak times and fewer during off-peak hours.

Use a scalable, load-balanced cluster. This approach allows for the distribution of workloads across multiple servers, which can help to increase scalability.

Enable detailed monitoring. Detailed monitoring allows for the collection of CloudWatch metric data at a one-minute frequency, which can help to ensure a faster response to load changes.
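As one hedged example of such a trigger, the boto3 sketch below attaches a target-tracking scaling policy to an Auto Scaling group; the group and policy names are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target-tracking policy: Auto Scaling adds or removes EC2 instances
# to keep average CPU utilization near 50%. The group name is a placeholder.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```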
AWS cloud elasticity

Design for horizontal scaling: One of the most significant advantages of cloud computing is the ability to scale your application using a distributed architecture that can be easily replicated across multiple instances.

Use Elastic Load Balancing: ELB can automatically detect unhealthy instances and redirect traffic to healthy ones. It distributes incoming traffic across multiple instances of your application, helping to ensure that no single instance becomes overloaded.

Use Amazon CloudWatch: CloudWatch allows you to monitor the performance of your application and the resources it uses. You can set up alarms to trigger Auto Scaling actions based on metrics such as CPU utilization, network traffic, or custom metrics.
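For instance, the boto3 sketch below creates a CloudWatch alarm that fires when average EC2 CPU utilization stays above 70% for two consecutive minutes; the alarm name, group name, and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average EC2 CPU utilization exceeds 70% for two
# consecutive one-minute periods; on alarm, notify an SNS topic
# (placeholder ARN), which could in turn drive a scaling action.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```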
High Availability and Fault Tolerance

High availability means a system will almost always maintain uptime, albeit sometimes in a degraded state. In AWS terms, a system has high availability when it has 99.999% uptime, also known as "five nines." To put that in perspective, the system would be down for a mere five minutes and fifteen seconds a year.

High availability is architected by removing single points of failure through system redundancy. For instance, if you had five computers connected to one server, that server would be a single point of failure. If the server room floods and the server is destroyed, you're out of luck.

To mitigate this situation, a backup server can be switched on in case of emergencies. This adds redundancy to remove a single point of failure. So, high availability removes single points of failure by adding redundancy.
High Availability and Fault Tolerance

Fault tolerance means that a system will almost always maintain uptime, and users will not notice any differences during a primary system outage.

At a bare minimum, multiple servers would have to be load-balanced, databases would have to be replicated, and availability would need to span multiple regions.

Without a cloud provider, all of this would need to be maintained by the company itself. Not only would the company have to foot the bill for all the hardware and the expertise to run it, but it would also have to follow esoteric IT security standards.