Unit 3 Enriching The Cloud Security With Computing Technology

This document outlines the syllabus for Unit III of a course on cloud security and computing technology at MIT School of Computing, Pune. It covers key concepts of IaaS, PaaS, and SaaS, detailing their definitions, benefits, and use cases, as well as AWS compute services like EC2, Lambda, and Elastic Beanstalk. The unit aims to equip students with knowledge on deploying and securing cloud services, understanding compliance, and optimizing costs.


MIT Art Design and Technology University

MIT School of Computing, Pune


21BTCS002-Cloud Foundations
Class -T.Y. (SEM-II), <CORE,AIA,BCT,CSF>

Unit - III
ENRICHING THE CLOUD SECURITY WITH COMPUTING
TECHNOLOGY

Prof. XYZ

AY 2023-2024 SEM-I
Unit III - Syllabus

Unit III – ENRICHING THE CLOUD SECURITY WITH COMPUTING TECHNOLOGY (09 hours)
• IaaS, PaaS, SaaS; understanding compute services; deploying and scaling services using AWS Elastic Beanstalk; shared responsibility model; securing accounts and ensuring compliance; scaling and load balancing your data architecture; server monitoring using cloud logs; use case.
Unit objectives
After completing this module, students should be able to:
• Demonstrate why to use Amazon Elastic Compute Cloud (Amazon EC2)
• Identify Amazon EC2 cost optimization elements
• Demonstrate when to use AWS Elastic Beanstalk
• Demonstrate when to use AWS Lambda
• Identify how to run containerized applications in a cluster of managed servers
• Recognize the shared responsibility model
• Recognize IAM users, groups, and roles
• Describe different types of security credentials in IAM
• Identify the steps to securing a new AWS account
• Explore IAM users and groups
• Recognize how to secure AWS data
• Recognize AWS compliance programs
SECTION 1: IaaS, PaaS, SaaS
INFRASTRUCTURE-AS-A-SERVICE (IaaS)
● IaaS provides access to fundamental resources such as physical machines,
virtual machines, virtual storage, etc.
● Apart from these resources, IaaS also offers:
○ Virtual machine disk storage
○ Virtual local area network (VLANs)
○ Load balancers
○ IP addresses
○ Software bundles
● All of the above resources are made available to the end user via server
virtualization. Customers access these resources as if they owned them.
● Infrastructure as a service or IaaS refers to cloud computing infrastructure in terms of
servers and storage, operating systems and network.
● Instead of buying these, the client will buy the resources as an on-demand service.
● The public cloud refers to infrastructure which comprises shared resources that are
deployed on self-service basis across the Internet, while private cloud refers to
infrastructure which offers resources through a private network.
● Some providers even offer a combination of both these networks to produce a hybrid
cloud.
● In IaaS, resources get distributed as services and it allows dynamic scaling. IaaS has
variable costs because it follows a utility pricing model.
● You can use the IaaS when demands are changing, or for new businesses which do
not have much capital for investing in hardware, or when an organization is growing
very fast and scaling the hardware is challenging.
● IaaS is also beneficial when organizations face pressures to reduce capital costs and
shift to operational costs and also for specific business needs or short-term
infrastructure needs.
● IaaS should not be used where regulatory compliance makes outsourcing data
storage difficult, or where on-site infrastructure can already cater to an
organization’s needs.
Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the Internet
in a cost-effective manner. Some of the key benefits of IaaS are listed below:
● Full control of the computing resources through administrative access to VMs.
● Flexible and efficient renting of computer hardware.
● Portability, interoperability with legacy applications.
PLATFORM-AS-A-SERVICE (PaaS)
● PaaS provides the runtime environment for applications, development and
deployment tools, etc.
● Platform-as-a-Service offers the runtime environment for applications.
● It also offers development and deployment tools required to develop
applications.
● PaaS offers point-and-click tools that enable non-developers to create web
applications.
● Google App Engine and Force.com are examples of PaaS vendors.
● Developers may log on to these websites and use the built-in APIs to create
web-based applications.
● But the disadvantage of using PaaS is that the developer is locked in to a
particular vendor.
● For example, an application written in Python against Google's API and
using Google App Engine is likely to work only in that environment.
PaaS refers to a computing model which allows for the fast and easy creation of applications without
buying or maintaining software and infrastructure for them. Unlike SaaS which is software delivered
across the web, PaaS is the platform for creation of such software. It refers to:
• Services which develop, test, implement and maintain applications in an integrated development
setting.
• Web-based interface creation tools for creating, altering, testing and deploying different user
interfaces.
• Tools for handling billing
• Built-in scalability of software that includes failover and load balancing
• Integration with databases and web services through common standards
• Multitenant architecture where multiple users are using the same applications.
• Supporting development team collaborations; some PaaS solutions have project planning tools.
PaaS is mainly used when many developers are working on one project and when outside parties
have to communicate with development processes. It is useful for those that have existing data
sources and want to build applications that leverage that data. PaaS should not be used where
the application must be portable, or when proprietary languages can affect the development process.
Benefits
● Lower administrative overhead
Customer need not bother about the administration because it is the responsibility
of cloud provider.
● Lower total cost of ownership
Customer need not purchase expensive hardware, servers, power, and data
storage.
● Scalable solutions
It is very easy to scale the resources up or down automatically, based on their
demand.
● More current system software
It is the responsibility of the cloud provider to maintain software versions and
patch installations.
SOFTWARE-AS-A-SERVICE (SaaS)
● The Software-as-a-Service (SaaS) model allows providers to deliver software
applications as a service to end users. It refers to software that is deployed
on a hosted service and is accessible via the Internet. Several SaaS
applications are listed below:
○ Billing and invoicing system
○ Customer Relationship Management (CRM) applications
○ Help desk applications
○ Human Resource (HR) solutions
● Some SaaS applications are not customizable, such as the Microsoft Office
suite. But SaaS provides Application Programming Interfaces (APIs), which
allow developers to build customized applications.
Software as a Service (SaaS) is software deployed over the web; a SaaS application can be licensed by
a vendor to clients as an on-demand service. This is made possible through a subscription under a
pay-as-you-use model, or free of cost where revenue can be generated from channels like
advertisements. SaaS offers Internet access to commercial software and is managed from a central point. It is
software offered through a “one-to-many” model, and users do not have to worry about patches
and software upgrades. When businesses want to shift their operations to the cloud, they need to understand
which applications should be shifted. For instance:
• “Vanilla” offerings wherein solutions are largely undifferentiated; for example, emails where competitors often
use the same software as this basic technology is needed to conduct business but does not on its own provide
a competitive advantage.
• Applications which demand Internet access like sales management software.
• Applications which involve interplay between the outside world and an organization like software for an email
newsletter campaign.
• Software which sees frequent demand spikes like tax software or billing software
• Software which is needed for a short term like collaboration software for some project.
SaaS should not be deployed for applications which need fast processing of real-time data, for applications
where laws do not allow the data to be hosted externally, or for applications having an on-site solution that can
meet the organization’s needs.
Benefits
Using SaaS has proved to be beneficial in terms of scalability, efficiency, and
performance. Some of the benefits are listed below:
● Modest software tools
● Efficient use of software licenses
● Centralized management and data
● Platform responsibilities managed by provider
● Multitenant solutions
IaaS vs PaaS vs SaaS
The three main layers of the cloud computing stack differ from one another in many
ways, but they are chiefly differentiated by control and cost.
● With SaaS you give up most control over the applications, because control of not
only the applications but also the OS, storage, and networking shifts to your
vendor. Hence, if you are the owner of a small enterprise, SaaS is often the most
suitable cloud technology stack.
● With PaaS you control your applications and data more than the vendor does; the
vendor is responsible for managing the OS, runtime, etc. PaaS is therefore
attractive when it comes to cost. It is most suitable for enterprises that build
applications but do not want to keep employees engaged in networking or
running servers.
● IaaS gives you control over both the applications and the infrastructure, while the
vendor spends on the physical servers, networking, and storage. It is somewhat
costlier than the other layers of the cloud stack.
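The control split described above can be summarized in a short sketch. This is an illustrative simplification (the layer names and the exact split vary by provider), not an authoritative taxonomy:

```python
# Illustrative only: a simplified map of which stack layers the customer
# manages under each service model. Layer names are generic assumptions.
STACK_LAYERS = ["application", "data", "runtime", "os", "virtualization",
                "servers", "storage", "networking"]

CUSTOMER_MANAGED = {
    "IaaS": {"application", "data", "runtime", "os"},
    "PaaS": {"application", "data"},
    "SaaS": set(),  # the vendor manages the full stack
}

def managed_by(model: str, layer: str) -> str:
    """Return 'customer' or 'vendor' for a given layer under a model."""
    return "customer" if layer in CUSTOMER_MANAGED[model] else "vendor"
```

Reading the sketch top to bottom matches the bullets above: the further down the stack the vendor's responsibility reaches, the less control (and administration) is left to the customer.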
SECTION 2: UNDERSTANDING COMPUTE
SERVICES
What are Compute Services?
● Compute services are also known as Infrastructure-as-a-Service (IaaS).
● Compute platforms, such as AWS Compute, supply virtual server instances, storage, and APIs that let users migrate workloads to a virtual machine.
● Users have allocated compute power and can start, stop, access, and configure their compute resources as desired.

https://aws.amazon.com/what-is/compute/
Terminology

Instance = One running virtual machine.
Instance Type = Hardware configuration: cores, memory, disk.
Instance Store Volume = Temporary disk associated with an instance.
Image (AMI) = Stored bits which can be turned into instances.
Key Pair = Credentials used to access a VM from the command line.
Region = Geographic location; affects price, laws, network locality.
Availability Zone = Subdivision of a region that is fault-independent.
What is an API? (Content beyond syllabus)
API stands for Application Programming Interface.
● In the context of APIs, the word Application refers to any software with a
distinct function.
● Interface can be thought of as a contract of service between two applications.

An example of an API

● Developers can also use web APIs to create new capabilities for their
applications.
● An example of this is the Google Maps API.
● It helps business owners share their company's location in an application so
customers can find them.
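The "contract of service" idea can be sketched in code. Everything below is hypothetical: `MapsAPI`, `FakeMapsAPI`, and the coordinates are invented for illustration; a real client would call a web service such as the Google Maps API over HTTP.

```python
from abc import ABC, abstractmethod

# The abstract class plays the role of the API contract: callers depend
# only on the methods it declares, not on any one implementation.
class MapsAPI(ABC):
    @abstractmethod
    def geocode(self, address: str) -> tuple:
        """Return (latitude, longitude) for a street address."""

class FakeMapsAPI(MapsAPI):
    """A stand-in implementation; a real one would call a web service."""
    def geocode(self, address: str) -> tuple:
        known = {"MIT ADT University, Pune": (18.49, 74.02)}  # made-up values
        return known.get(address, (0.0, 0.0))
```

Any code written against `MapsAPI` keeps working if the fake is later swapped for a real web client, which is exactly what the interface-as-contract framing promises.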
AWS compute services
Amazon Web Services (AWS) offers many compute services. This module will
discuss the highlighted services:

● Amazon EC2
● Amazon EC2 Auto Scaling
● Amazon Elastic Container Registry (Amazon ECR)
● Amazon Elastic Container Service (Amazon ECS)
● VMware Cloud on AWS
● AWS Elastic Beanstalk
● AWS Lambda
● Amazon Elastic Kubernetes Service (Amazon EKS)
● Amazon Lightsail
● AWS Batch
● AWS Fargate
● AWS Outposts
● AWS Serverless Application Repository
Categorizing compute services

Amazon EC2
• Key concepts: Infrastructure as a service (IaaS); instance-based; virtual machines.
• Characteristics: Provision virtual machines that you can manage as you choose.
• Ease of use: A familiar concept to many IT professionals.

AWS Lambda
• Key concepts: Serverless computing; function-based; low-cost.
• Characteristics: Write and deploy code that runs on a schedule or that can be triggered by events; use when possible (architect for the cloud).
• Ease of use: A relatively new concept for many IT staff members, but easy to use after you learn how.

Amazon ECS, Amazon EKS, AWS Fargate, Amazon ECR
• Key concepts: Container-based computing; instance-based.
• Characteristics: Spin up and run jobs more quickly.
• Ease of use: AWS Fargate reduces administrative overhead, but you can use options that give you more control.

AWS Elastic Beanstalk
• Key concepts: Platform as a service (PaaS); for web applications.
• Characteristics: Focus on your code (building your application); can easily tie into other services (databases, Domain Name System (DNS), etc.).
• Ease of use: Fast and easy to get started.
What is a container? (Content beyond syllabus)

Before software is released, it must be tested, packaged, and installed. Software deployment refers to the
process of preparing an application for running on a computer system or a device.

● Docker is a tool used by developers for deploying software.


● It provides a standard way to package an application’s code and run it on any system.
● It combines software code and its dependencies inside a container.

● Containers (built from Docker images) can then run on any platform via a Docker engine.
● Amazon Elastic Container Service (ECS) is a highly scalable, high performance container management
service that supports Docker containers and allows you to easily run applications on a managed cluster of
Amazon EC2 instances.
● This ensures quick, reliable, and consistent deployments, regardless of the environment.

A hospital booking application: an example of Docker


● For example, a hospital wants to make an appointment booking application.
● The end users may use the app on Android, iOS, a Windows machine, a MacBook, or via the hospital’s website.
● If the code were deployed separately on each platform, it would be challenging to maintain.
● Instead, Docker could be used to create a single universal container of the booking application.
● This container can run everywhere, including on computing platforms like AWS.
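The packaging idea above can be made concrete with a Dockerfile. The one below is purely hypothetical: the base image, file names, port, and start command are assumptions for a small Python web app, not taken from any real hospital application.

```dockerfile
# Hypothetical Dockerfile for the booking application described above.
FROM python:3.11-slim          # base image (assumption)
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # install the app's dependencies
COPY . .
EXPOSE 8080                    # port the app listens on (assumption)
CMD ["python", "app.py"]       # entry point (assumption)
```

Building this file (`docker build`) produces one image that runs identically on a developer laptop, an on-premises server, or an Amazon ECS cluster, which is the "single universal container" the example describes.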
Choosing the optimal compute service
• The optimal compute service or services that you use will depend on your use case.
• Some aspects to consider:
  • What is your application design?
  • What are your usage patterns?
  • Which configuration settings will you want to manage?
• Selecting the wrong compute solution for an architecture can lead to lower performance efficiency.
• A good starting place: understand the available compute options.
How to choose between different AWS Compute Services?

Choosing the best AWS infrastructure depends on your application requirements, lifecycle,
code size, demand, and computing needs.

Take a look at these three examples:

● If you want to deploy a selection of on-demand instances offering a wide array of different
performance benefits within your AWS environment, you would use
Amazon Elastic Compute Cloud (EC2).

● If you want to run Docker-enabled applications packaged as containers across a cluster of


EC2 instances, you could use Amazon Elastic Container Service (Amazon ECS).

● If you want to run your own code using only milliseconds of compute resource in response
to event-driven triggers in a serverless environment, you could use AWS Lambda.
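The three examples above can be condensed into a rough decision sketch. The use-case keys and the fallback string below are invented for illustration; a real choice depends on many more factors (cost, operations model, latency, team skills).

```python
# A toy decision helper mirroring the three examples in the text.
def suggest_compute_service(use_case: str) -> str:
    rules = {
        "virtual_machines": "Amazon EC2",          # on-demand instances
        "docker_containers": "Amazon ECS",         # containers on a cluster
        "event_driven_functions": "AWS Lambda",    # serverless, per-event code
    }
    return rules.get(use_case, "review the full AWS compute portfolio")
```

For anything outside the three canonical cases, the helper deliberately refuses to guess, which matches the advice to start from the full list of compute options.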
What are the benefits of AWS compute services?

Right compute for your workloads


● Amazon EC2 (Amazon Elastic Compute Cloud) offers granular control for managing application infrastructure with the
choice of processors, storage, and networking.

● Amazon Elastic Container Services (Amazon ECS) offer choice and flexibility to run containers.

Built-in security
● AWS offers significantly more security, compliance, and governance services, and key features than the next largest
cloud provider.
● The AWS Nitro System has security built in at the chip level to continuously monitor, protect, and verify the instance
hardware.

Cost optimization
● With AWS compute you pay only for the instance or resource you need, for as long as you use it, without requiring long-term contracts or complex licensing.

Flexibility
● AWS provides multiple ways to build, deploy, and get applications to market quickly. For example, Amazon Lightsail is an
easy-to-use service that offers you everything you need to build an application or website.
AWS compute services
● AWS EC2 provides various instance types with different configurations of CPU, memory, storage, and networking
resources so a user can tailor their compute resources to the needs of their application.

● Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove
EC2 instances according to conditions you define.

● EC2 Image Builder simplifies the building, testing, and deployment of VMs and container images for use on AWS or
on-premises

● Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS.

● Amazon Linux 2023 (AL2023) is our new Linux-based operating system for AWS that is designed to provide a secure,
stable, high-performance environment to develop and run your cloud applications.

● AWS App Runner is a fully managed service that makes it easy for developers to quickly deploy containerized web
applications and APIs, at scale and with no prior infrastructure experience required.
AWS compute services
● AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing
jobs on AWS.

● AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with
Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker

● AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.

● AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume—there
is no charge when your code is not running.

● AWS Serverless Application Repository enables you to quickly deploy code samples, components, and complete applications for
common use cases such as web and mobile back-ends, event and data processing, logging, monitoring, Internet of Things (IoT),
and more.

● AWS Outposts bring native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or
on-premises facility.

● You can use the same APIs, the same tools, the same hardware, and the same functionality across on-premises and the cloud to
deliver a truly consistent hybrid experience
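The AWS Lambda entry above ("run code without provisioning or managing servers") can be illustrated with a minimal handler. The event fields are assumptions for this example; the `handler(event, context)` signature follows the standard Python Lambda convention, and the function is runnable locally.

```python
# A minimal AWS Lambda-style handler. In Lambda, `event` carries the
# trigger payload and `context` holds runtime metadata; this sketch
# only uses `event`, so it defaults `context` for local testing.
def handler(event, context=None):
    name = event.get("name", "world")   # "name" is a made-up event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Deployed to Lambda, this function would run only when triggered (for example by an API Gateway request), and you would be billed only for the milliseconds it executes.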
Amazon EC2 overview
• Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud.
• Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.
• You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
• Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Description

• The full form of Amazon EC2 is Amazon Elastic Compute Cloud. Amazon
EC2 is one of the most used and most basic services in Amazon so it
makes sense to start with EC2 when you are new to AWS.
• To put it very simply, EC2 is a machine with an operating system and
hardware components of your choice. The difference is that it is totally
virtualized: you can run multiple virtual computers on a single piece of
physical hardware.
• Elastic Compute Cloud (EC2) is one of the integral parts of the
AWS ecosystem. EC2 enables on-demand, scalable computing capacity in
the AWS cloud.
• Amazon EC2 instances eliminate the up-front investment for hardware, and
there is no need to maintain any rented hardware. It enables you to build
and deploy applications without managing physical servers.
Features

• Virtual computing environments, known as instances


• Preconfigured templates for your instances, known as Amazon Machine
Images (AMIs), that package the bits you need for your server (including
the operating system and additional software)
• Various configurations of CPU, memory, storage, and networking capacity
for your instances, known as instance types
• Secure login information for your instances using key pairs (AWS stores
the public key, and you store the private key in a secure place)
• Storage volumes for temporary data that's deleted when you stop,
hibernate, or terminate your instance, known as instance store volumes
• Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS),
known as Amazon EBS volumes
• Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known
as Regions and Availability Zones.
• A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your
instances using security groups
• Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses.
• Metadata, known as tags, that you can create and assign to your Amazon EC2 resources
• Virtual networks you can create that are logically isolated from the rest of the AWS Cloud, and that
you can optionally connect to your own network, known as virtual private clouds (VPCs).
Block Diagram
Amazon EC2 storage options
• Amazon Elastic Block Store (Amazon EBS) –
• Durable, block-level storage volumes.
• You can stop the instance and start it again, and the data will still be there.
• Amazon EC2 Instance Store –
• Ephemeral storage is provided on disks that are attached to the host computer where the EC2 instance is running.
• If the instance stops, data stored here is deleted.
• Other options for storage (not for the root volume) –
• Mount an Amazon Elastic File System (Amazon EFS) file system.
• Connect to Amazon Simple Storage Service (Amazon S3).
Amazon EC2 instance lifecycle
(Stop, Stop-Hibernate, and Start apply only to instances backed by Amazon EBS.)

• Launch (from an AMI): pending → running
• Reboot: running → rebooting → running
• Stop or Stop-Hibernate: running → stopping → stopped
• Start: stopped → pending → running
• Terminate: running or stopped → shutting-down → terminated
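The lifecycle above can be modeled as a small state machine. This is a simplified sketch: real EC2 has more states and constraints than the transitions listed here.

```python
# Simplified EC2 lifecycle transitions (state, action) -> next state.
# Action names are invented labels for the events in the diagram above.
TRANSITIONS = {
    ("pending", "launch_complete"): "running",
    ("running", "reboot"): "rebooting",
    ("rebooting", "reboot_complete"): "running",
    ("running", "stop"): "stopping",            # EBS-backed instances only
    ("stopping", "stop_complete"): "stopped",
    ("stopped", "start"): "pending",
    ("running", "terminate"): "shutting-down",
    ("stopped", "terminate"): "shutting-down",
    ("shutting-down", "shutdown_complete"): "terminated",
}

def next_state(state: str, action: str) -> str:
    """Apply an action; invalid actions leave the state unchanged."""
    return TRANSITIONS.get((state, action), state)
```

Tracing `pending → running → stopping → stopped → pending` through the table reproduces the stop/start path in the diagram, including the detail that a restarted instance passes through `pending` again.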
Consider using an Elastic IP address
• Rebooting an instance will not change any IP addresses or DNS hostnames.
• When an instance is stopped and then started again:
  • The public IPv4 address and external DNS hostname will change.
  • The private IPv4 address and internal DNS hostname do not change.
• If you require a persistent public IP address, associate an Elastic IP address with the instance.
• Elastic IP address characteristics:
  • Can be associated with instances in the Region as needed.
  • Remains allocated to your account until you choose to release it.
Amazon EC2 pricing models

On-Demand Instances
• Pay by the hour.
• No long-term commitments.
• Eligible for the AWS Free Tier.

Reserved Instances
• Full, partial, or no upfront payment for the instance you reserve.
• Discount on the hourly charge for that instance.
• 1-year or 3-year term.

Scheduled Reserved Instances
• Purchase a capacity reservation that is always available on a recurring schedule you specify.
• 1-year term.

Spot Instances
• Instances run as long as they are available and your bid is above the Spot Instance price.
• They can be interrupted by AWS with a 2-minute notification.
• Interruption options include terminated, stopped, or hibernated.
• Prices can be significantly less expensive compared to On-Demand Instances.
• Good choice when you have flexibility in when your applications can run.

Dedicated Hosts
• A physical server with EC2 instance capacity fully dedicated to your use.

Dedicated Instances
• Instances that run in a VPC on hardware that is dedicated to a single customer.

Per-second billing is available for On-Demand Instances, Reserved Instances, and Spot Instances that run Amazon Linux or Ubuntu.
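The cost trade-offs above can be illustrated with toy numbers. All rates below are made-up assumptions, not actual AWS prices; the point is only the shape of the comparison.

```python
# Toy monthly-cost comparison across pricing models (rates are invented).
def monthly_cost(hourly_rate: float, hours: float,
                 upfront_monthly: float = 0.0) -> float:
    """Hourly usage charge plus any amortized upfront payment."""
    return round(hourly_rate * hours + upfront_monthly, 2)

on_demand = monthly_cost(0.10, 730)   # full rate, no commitment
reserved  = monthly_cost(0.06, 730)   # discounted rate for a 1- or 3-year term
spot      = monthly_cost(0.03, 500)   # cheapest, but may be interrupted
```

With these invented rates, Reserved comes out cheaper than On-Demand for steady 24/7 usage, while Spot is cheapest of all but suits only workloads that tolerate interruption, matching the guidance in the table.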
SECTION 3: Deploying and scaling services using AWS Elastic Beanstalk
Beanstalk

Jack, a poor country boy, trades the family cow for a handful of magic
beans, which grow into a massive, towering beanstalk reaching up into
the clouds. Jack climbs the beanstalk and finds himself in the castle of an
unfriendly giant. The giant senses Jack's presence and wants to kill him.

Outwitting the giant, Jack is able to retrieve many goods once stolen from his family, including a bag of gold, an enchanted
goose that lays golden eggs and a magic golden harp that plays and sings by itself. Jack then escapes by chopping down the
beanstalk. The giant, who is pursuing him, falls to his death, and Jack and his family prosper.
Deploying and scaling services using AWS Elastic Beanstalk

● AWS Elastic Beanstalk is a cloud deployment and provisioning service that


automates the process of getting applications set up on the Amazon Web
Services (AWS) infrastructure.
● Deploying and scaling services using AWS Elastic Beanstalk is a
straightforward process that allows you to quickly deploy, manage, and scale
applications in the AWS cloud.
● Elastic Beanstalk abstracts the underlying infrastructure, automates
deployment, and provides a platform for deploying various types of
applications, including web applications, APIs, and worker services.
Steps:
Here's an overview of the steps involved in deploying and scaling services using AWS Elastic
Beanstalk:
Step 1: Create an Elastic Beanstalk Environment: Start by creating an Elastic Beanstalk
environment for your application. The environment represents the infrastructure and resources that
will host your application. You can choose the application platform, such as Node.js, Python, Java,
or .NET, based on your application's requirements.
Step 2: Package and Upload Your Application: Package your application code, configurations, and
dependencies into a ZIP file or a Docker container image. You can use tools like AWS CLI or AWS
Management Console to upload your application to Elastic Beanstalk.
Step 3: Configure Environment Settings: Configure various environment settings, such as the
instance type, auto-scaling settings, security groups, environment variables, and database
connections. These settings define how your application will be deployed and scaled.
Step 4: Deploy Your Application: Once the environment is configured, initiate the deployment
process to deploy your application. Elastic Beanstalk will automatically provision the necessary
resources, such as EC2 instances, load balancers, and databases, based on your configuration.
Step 5: Monitor and Test Your Application: Elastic Beanstalk provides built-in monitoring and logging
capabilities. You can use AWS CloudWatch to monitor application health, performance metrics, and logs. It's
essential to regularly monitor and test your application to ensure it's running correctly and meeting your
performance requirements.

Step 6: Scaling Your Application: Elastic Beanstalk simplifies the scaling process by automatically handling
load changes. You can configure auto-scaling settings based on metrics like CPU utilization or request count.
Elastic Beanstalk will automatically scale the number of instances up or down to handle the load efficiently.

Step 7: Application Updates: As you make changes or release new versions of your application, Elastic
Beanstalk allows you to perform rolling updates to minimize downtime. You can either manually trigger the
update or configure automatic deployments when new versions are available.

Step 8: Monitoring and Optimizing Costs: Continuously monitor and optimize your environment to control
costs. Analyze your application's resource usage, adjust instance types, and optimize auto-scaling settings to
match the actual workload. Utilize AWS Cost Explorer and other cost management tools to gain insights into
your expenses and identify potential cost-saving opportunities.

By following these steps, you can effectively deploy, manage, and scale your applications using AWS Elastic
Beanstalk. Elastic Beanstalk abstracts the underlying infrastructure complexities, allowing you to focus on
your application's development and functionality while taking advantage of AWS's scalable and reliable
infrastructure.
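The auto-scaling configuration mentioned in Steps 3 and 6 can be expressed as Elastic Beanstalk option settings. The snippet below is a hedged example: the instance counts and CPU thresholds are arbitrary illustrative values, written in the JSON option-settings format that can be passed to an environment (for example via the AWS CLI's `--option-settings file://options.json`).

```json
[
  {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
  {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "6"},
  {"Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "CPUUtilization"},
  {"Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Percent"},
  {"Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "70"},
  {"Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "20"}
]
```

With settings like these, Elastic Beanstalk keeps between two and six instances running and adds or removes instances as average CPU utilization crosses the upper and lower thresholds, which is the automatic scaling behavior Step 6 describes.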
SECTION 4: AWS SHARED
RESPONSIBILITY MODEL
Shared Responsibility Model - AWS

● The shared responsibility model is a security framework that outlines the


division of security responsibilities between AWS (the cloud service provider)
and its customers.
● It clarifies which security aspects are the responsibility of AWS and which
aspects the customers are responsible for.

AWS shared responsibility model


Shared Responsibility Model - AWS
AWS's Responsibilities:
a. Infrastructure Security: AWS is responsible for the security of the underlying cloud
infrastructure, including the physical security of data centers, networking, software and
hardware.
b. Managed Services: AWS manages the security of its managed services such as Amazon
RDS, Amazon S3, and Amazon DynamoDB, including patching, availability, and data durability.
c. Compliance and Auditing: AWS ensures that its infrastructure and services comply with
various industry-specific and global security standards and undergoes regular audits to
validate security controls.
d. Global Security Architecture: AWS designs and maintains a secure global infrastructure,
implementing security best practices to protect customer data.
Shared Responsibility Model - AWS
Customer's Responsibilities:
a. Application Security: Customers are responsible for securing their applications,
including the operating system, network, and application-level security.
b. Data Security: Customers are responsible for classifying and protecting their data,
including encryption, access controls, and data lifecycle management.
c. Identity and Access Management: Customers are responsible for managing user
access, credentials, and permissions for their AWS accounts and services.
d. Network Security: Customers are responsible for configuring and managing security
groups, network access control lists (ACLs), and other network security measures.
e. Patch Management: Customers are responsible for maintaining the security of their
operating systems, databases, and applications by applying patches and updates.
SECTION 5: SECURING
ACCOUNTS AND ENSURING
COMPLIANCE
Securing accounts and ensuring compliance
Securing accounts and ensuring compliance in cloud computing is of utmost importance
to protect data, maintain privacy, and meet regulatory requirements. Here are key
practices for securing accounts and achieving compliance in cloud computing:
1. Strong Identity and Access Management (IAM):
a. Implement robust authentication mechanisms, such as multi-factor authentication
(MFA), to ensure only authorized individuals can access accounts and resources.
b. Apply the principle of least privilege, granting users the minimum required
permissions to perform their tasks.
c. Regularly review and update IAM policies and user access to maintain the
principle of least privilege.
2. Data Encryption:
a. Encrypt sensitive data both at rest and in transit using encryption mechanisms
provided by the cloud service provider (CSP) or third-party encryption solutions.
b. Manage encryption keys securely and separately from the encrypted data.
3. Network Security:
a. Utilize virtual private cloud (VPC) or similar network isolation mechanisms to
create private network segments and control inbound and outbound traffic.
b. Set up network access control lists (ACLs) and security groups to control and
monitor network traffic.
c. Employ secure protocols (e.g., SSL/TLS) for communication and secure APIs
using authentication and authorization mechanisms.
4. Security Monitoring and Incident Response:
a. Implement robust logging and monitoring mechanisms to detect and respond to
security incidents promptly.
b. Utilize cloud-native security tools and services, such as AWS CloudTrail and
AWS Config, to monitor and track user activities and changes to resources.
c. Establish an incident response plan to handle security incidents effectively and
minimize their impact.
5. Compliance and Governance:
a. Understand the specific compliance requirements applicable to your industry
and ensure your cloud environment adheres to these standards.
b. Regularly assess and audit your cloud infrastructure and applications to ensure
compliance with security and privacy regulations.
c. Leverage CSP-provided compliance certifications and reports to demonstrate
adherence to industry standards and regulations.
6. Security Education and Awareness:
a. Train employees on secure cloud usage practices, such as password hygiene,
phishing awareness, and secure data handling.
b. Promote a culture of security and encourage employees to report any security
incidents or potential vulnerabilities.
7. Regular Security Assessments:
a. Perform regular vulnerability assessments and penetration testing to identify
and address potential security weaknesses.
b. Engage third-party security experts to conduct independent audits and
assessments of your cloud environment security posture.
8. Vendor Management:
a. Establish clear security requirements and responsibilities in contracts and
service level agreements (SLAs) with CSPs.
b. Regularly review the security practices and compliance certifications of your
CSP to ensure they align with your requirements.
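Practice 1a above recommends multi-factor authentication. Most MFA apps (including virtual MFA devices for AWS accounts) generate time-based one-time passwords per RFC 6238. The sketch below shows how such a code is derived from a shared secret and the current time; the secret value in the test is the RFC's published test vector, not a real credential.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    t = int(time.time()) if for_time is None else int(for_time)
    counter = t // step  # number of 30-second steps since the epoch
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on both the secret and the clock, a stolen password alone is not enough to authenticate, which is the point of MFA.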
AWS Identity and Access Management (IAM)
• IAM allows you to control access to compute, storage, database, and application services in
the AWS Cloud. It handles authentication, and lets you specify and enforce authorization
policies so that you can control which users can access which services.
• Use IAM to manage access to AWS resources:
• A resource is an entity in an AWS account that you can work with.
• Example resources: an Amazon EC2 instance or an Amazon S3 bucket.
• Example: control who can terminate Amazon EC2 instances.
• Define fine-grained access rights:
• Who can access the resource
• Which resources can be accessed, and what the user can do to the resource
• How resources can be accessed
• IAM is a no-cost AWS account feature.
IAM: Essential components
• IAM user: A person or application that can authenticate with an AWS account.
• IAM group: A collection of IAM users that are granted identical authorization.
• IAM policy: The document that defines which resources can be accessed and the level of
access to each resource.
• IAM role: A useful mechanism to grant a set of permissions for making AWS service requests.

IAM policies
• An IAM policy is a document that defines permissions.
• Enables fine-grained access control.
• Two types of policies: identity-based and resource-based.
• Identity-based policies:
• Attach a policy to any IAM entity: an IAM user, an IAM group, or an IAM role.
• Policies specify which actions the entity may perform and which actions it may not perform.
• A single policy can be attached to multiple entities.
• A single entity can have multiple policies attached to it.
• Resource-based policies:
• Attached to a resource (such as an S3 bucket).
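To make the idea concrete, here is a hypothetical identity-based policy, built as the JSON document IAM expects. The bucket name and actions are illustrative; a real policy would name your own resources.

```python
import json

# Hypothetical identity-based policy: read-only access to one S3 bucket.
# It could be attached to an IAM user, group, or role.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # the bucket itself
                "arn:aws:s3:::example-bucket/*",    # objects in the bucket
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Note that nothing in the document names a user: identity-based policies get their subject from the entity they are attached to.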

Resource-based policies
• Identity-based policies are attached to a user, group, or role.
• Resource-based policies are attached to a resource (not to a user, group, or role), such as
an S3 bucket.
• Characteristics of resource-based policies:
• The policy specifies who has access to the resource and what actions they can perform on it.
• The policies are inline only, not managed.
• Resource-based policies are supported only by some AWS services.
• Example: an identity-based policy grants the IAM user MaryMajor permission to list and read
objects in the photos bucket; equivalently, a resource-based policy defined inline on the
bucket can grant MaryMajor the same list and read access.
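The MaryMajor example can be written out as a hypothetical resource-based bucket policy. Unlike the identity-based variant, it must include a `Principal` element naming who is granted access; the account ID below is a placeholder.

```python
import json

# Hypothetical resource-based policy, attached inline to the "photos" bucket.
# The Principal element names the grantee -- identity-based policies have none.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/MaryMajor"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::photos",
                "arn:aws:s3:::photos/*",
            ],
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```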

IAM permissions
How IAM determines permissions:
1. Is the permission explicitly denied? If yes, the result is Deny.
2. If not, is the permission explicitly allowed? If yes, the result is Allow.
3. If not, the result is Deny (implicit deny).
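The decision flow above can be sketched as a small function. This is a simplified model: real IAM evaluation also considers policy conditions, permissions boundaries, and service control policies.

```python
def evaluate(statements) -> str:
    """Simplified IAM decision logic: an explicit Deny always wins,
    then an explicit Allow; otherwise the request is implicitly denied."""
    effects = {s.get("Effect") for s in statements}
    if "Deny" in effects:
        return "Deny"    # explicit deny overrides everything
    if "Allow" in effects:
        return "Allow"   # explicit allow, and no deny anywhere
    return "Deny"        # implicit deny: no statement matched
```

The default-deny posture is why a brand-new IAM user can do nothing until a policy explicitly allows an action.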
SECTION 6: SECURING ACCOUNTS
AWS Key Management Service (AWS KMS)
AWS KMS features:
• Enables you to create and manage encryption keys.
• Enables you to control the use of encryption across AWS services and in your applications.
• Integrates with AWS CloudTrail to log all key usage.
• Uses hardware security modules (HSMs) that are validated under Federal Information
Processing Standards (FIPS) 140-2 to protect keys.
Amazon Cognito
Amazon Cognito features:
• Adds user sign-up, sign-in, and access control to your web and mobile applications.
• Scales to millions of users.
• Supports sign-in with social identity providers, such as Facebook, Google, and Amazon;
and enterprise identity providers, such as Microsoft Active Directory via Security Assertion
Markup Language (SAML) 2.0.
AWS Shield
AWS Shield features:
• Is a managed distributed denial of service (DDoS) protection service.
• Safeguards applications running on AWS.
• Provides always-on detection and automatic inline mitigations.
• AWS Shield Standard is enabled at no additional cost; AWS Shield Advanced is an optional
paid service.
• Use it to minimize application downtime and latency.
SECTION 7: SECURING DATA ON AWS
Encryption of data at rest
• Encryption encodes data with a secret key, which makes it unreadable.
• Only those who have the secret key can decode the data.
• AWS KMS can manage your secret keys.
• AWS supports encryption of data at rest.
• Data at rest = data stored physically (on disk or on tape).
• You can encrypt data stored in any service that is supported by AWS KMS, including:
• Amazon S3
• Amazon EBS
• Amazon Elastic File System (Amazon EFS)
• Amazon RDS managed databases
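The "only those who have the secret key can decode the data" idea can be demonstrated with a deliberately toy symmetric cipher. This XOR scheme is for illustration only; real at-rest encryption uses AES-256 with keys managed by a service such as AWS KMS.

```python
import secrets


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a key at least as long
    as the data. Illustrative only -- never use this in production."""
    assert len(key) >= len(data), "key must cover all of the data"
    return bytes(b ^ k for b, k in zip(data, key))


key = secrets.token_bytes(32)                 # the secret key
ciphertext = xor_cipher(b"customer record", key)
plaintext = xor_cipher(ciphertext, key)       # the same key decrypts
```

Anyone holding only `ciphertext` sees random-looking bytes; applying the same key a second time recovers the original data.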
Encryption of data in transit
• Encryption of data in transit protects data moving across a network.
• Transport Layer Security (TLS), formerly SSL, is an open standard protocol.
• AWS Certificate Manager provides a way to manage, deploy, and renew TLS or SSL certificates.
• Secure HTTP (HTTPS) creates a secure tunnel, using TLS or SSL for the bidirectional
exchange of data.
• AWS services support encryption of data in transit. Two examples:
• TLS-encrypted data traffic between Amazon EC2 and Amazon EFS within the AWS Cloud.
• TLS or SSL encrypted traffic between AWS Storage Gateway in a corporate data center and
Amazon S3 in the AWS Cloud.
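On the client side, enforcing TLS comes down to configuring a secure context before any data is exchanged. The sketch below uses Python's standard library; it only builds and inspects the context, without opening a connection.

```python
import ssl

# Client-side TLS configuration: validate the server certificate,
# verify the hostname, and refuse protocol versions older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables certificate checking:
assert context.verify_mode == ssl.CERT_REQUIRED
```

A socket wrapped with this context (e.g. via `context.wrap_socket(...)`) would refuse to complete a handshake with an unverified peer or a downgraded protocol.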
Securing Amazon S3 buckets and objects
• Newly created S3 buckets and objects are private and protected by default.
• When use cases require sharing data objects on Amazon S3, it is essential to manage and
control data access.
• Grant permissions that follow the principle of least privilege, and consider using Amazon
S3 encryption.
• Tools and options for controlling access to S3 data include:
• Amazon S3 Block Public Access feature: simple to use.
• IAM policies: a good option when the user can authenticate using IAM.
• Bucket policies.
• Access control lists (ACLs): a legacy access control mechanism.
• AWS Trusted Advisor bucket permission check: a free feature.
SECTION 8: SCALE AND LOAD BALANCE YOUR DATA ARCHITECTURE
Scaling
Scaling in cloud computing refers to the ability to adjust the resources, such as
compute, storage, or network capacity, to meet the changing demands of an
application or workload. Cloud computing provides flexible and scalable
infrastructure that allows organizations to scale their resources up or down as
needed. There are two primary types of scaling in cloud computing:
1. Vertical Scaling (Scaling Up)
2. Horizontal Scaling (Scaling Out)
Vertical Scaling (Scaling Up):
● Vertical scaling involves increasing the capacity of individual resources, such
as upgrading to a higher-capacity server or adding more memory to a virtual
machine.
● In this approach, the size or capacity of a single resource is increased to
handle higher workloads or to meet performance requirements.
● Vertical scaling is suitable for workloads that can be managed by a single,
more powerful resource.
● Cloud providers typically offer options for vertical scaling, allowing users to
resize their instances, virtual machines, or databases to accommodate
increased demand.
● This can be done manually or automatically through auto-scaling policies
based on predefined metrics like CPU utilization or memory usage.
Horizontal Scaling (Scaling Out):
● Horizontal scaling involves adding more instances, nodes, or servers to
distribute the workload across multiple resources.
● Instead of increasing the capacity of a single resource, horizontal scaling
adds additional resources to handle increased demand.
● This approach is suitable for workloads that can be divided into smaller,
independent units or can be processed in parallel.
● Cloud providers offer services and features that support horizontal scaling,
such as auto-scaling groups or managed database services that can
automatically provision and distribute resources based on workload patterns
and predefined rules.
● Load balancing mechanisms are often used to distribute incoming traffic or
workload across multiple instances or containers, ensuring efficient resource
utilization and high availability.
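A horizontal auto-scaling rule of the kind described above can be sketched as a target-tracking calculation: size the fleet so that average CPU utilization moves back toward a target, clamped to a minimum and maximum size. The thresholds here are illustrative defaults, not values from any specific provider.

```python
import math


def desired_capacity(current: int, avg_cpu: float, target: float = 60.0,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Target-tracking style scaling: pick the fleet size that would bring
    average CPU back to the target, clamped to [min_size, max_size]."""
    desired = math.ceil(current * avg_cpu / target)
    return max(min_size, min(max_size, desired))
```

For example, a fleet of 4 instances averaging 90% CPU against a 60% target scales out to 6 instances, while the same fleet at 30% CPU scales in to 2.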
Load Balancing
● Load balancing in cloud computing is a technique used to distribute incoming
network traffic or workload across multiple resources to ensure optimal
performance, high availability, and efficient resource utilization.
● It helps evenly distribute the workload and prevents any single resource from
being overwhelmed.
● Load balancing can be implemented at various levels, including network load
balancing, load balancing across virtual machines, or load balancing for
application services.
● Load balancing distributes incoming traffic or workload across multiple
resources to ensure optimal performance, scalability, and fault tolerance.
● Cloud providers offer load balancing services, such as AWS Elastic Load
Balancer, Google Cloud Load Balancer, or Azure Load Balancer, that
automatically distribute traffic across instances or containers based on
various algorithms (e.g., round-robin, least connections).
● Load balancers can be configured to perform health checks on instances and
route traffic only to healthy resources.
● By evenly distributing workload across resources, load balancing ensures
efficient resource utilization, minimizes response times, and enhances the
overall user experience.
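The round-robin algorithm with health checks mentioned above can be sketched in a few lines. This is a conceptual model, not how a managed service like AWS Elastic Load Balancer is implemented.

```python
from itertools import cycle


class RoundRobinBalancer:
    """Distribute requests across targets in turn, skipping any target
    that has failed its health check."""

    def __init__(self, targets):
        self.targets = list(targets)
        self.healthy = set(self.targets)
        self._ring = cycle(self.targets)

    def mark_unhealthy(self, target):
        self.healthy.discard(target)

    def next_target(self):
        # Walk the ring at most once; route only to healthy targets.
        for _ in range(len(self.targets)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy targets available")
```

Swapping the selection logic (e.g. tracking open connections per target) turns this into a least-connections balancer instead.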
SECTION 9: SERVER MONITORING USING CLOUD LOGS
Server monitoring using cloud logs
Server monitoring using cloud logs is an effective approach to gain visibility into
the health, performance, and security of your server infrastructure in the cloud.
Cloud logs provide a centralized and scalable solution for collecting, storing, and
analyzing server logs, allowing you to monitor and troubleshoot issues efficiently.
Steps for using cloud logs for server monitoring:
Step 1: Log Collection: Configure your server instances to send logs to a centralized log management
system provided by the cloud platform. For example, AWS offers Amazon CloudWatch Logs, Google Cloud
provides Cloud Logging, and Azure offers Azure Monitor Logs. You can also use third-party log management
tools that integrate with cloud platforms.
Step 2: Log Aggregation: Set up log aggregation to consolidate logs from multiple servers into a central
repository. This simplifies log analysis and troubleshooting by providing a unified view of logs across your
server infrastructure. Aggregated logs can include system logs, application logs, security logs, and custom
logs.
Step 3: Log Storage and Retention: Cloud log management services offer scalable storage and retention
options for logs. Determine the appropriate retention period based on compliance requirements and the need
for historical analysis. Cloud platforms typically provide options to archive logs for long-term storage and
compliance purposes.
Step 4: Log Search and Analysis: Use log query languages or search capabilities provided by the log
management service to search, filter, and analyze logs. This allows you to identify patterns, anomalies,
errors, or performance issues. You can define custom log metrics, create alerts based on log events, and
build dashboards for visualization.
Step 5: Real-time Monitoring and Alerting: Set up real-time monitoring and alerting
based on predefined log-based metrics or conditions. Configure alerts to notify you when
specific log events or patterns occur. This helps you proactively identify and respond to
critical server issues, security breaches, or performance bottlenecks.
Step 6: Log-based Troubleshooting: When troubleshooting server issues, use log data to
analyze events leading up to the problem. Correlate logs from different servers or
components to understand the root cause of an issue. Log data provides valuable insights
into system behavior, errors, and interactions between server components.
Step 7: Security Monitoring: Monitor server logs for security-related events, such as
authentication failures, unauthorized access attempts, or suspicious activities. Use log
analysis techniques, including anomaly detection and pattern matching, to identify potential
security breaches and respond in a timely manner.
Step 8: Compliance and Auditing: Log management is crucial for meeting compliance
requirements and facilitating audits. Use log data to demonstrate adherence to security
policies, regulations, and industry standards. Log retention, access controls, and audit trails
help maintain a secure and auditable server environment.
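Step 7's security monitoring can be sketched as a small log-analysis routine: scan auth-style log lines for failed logins and flag any source IP that exceeds a threshold. The log format and threshold are illustrative; a real deployment would run such rules in the log management service itself (e.g. as metric filters and alarms).

```python
import re
from collections import Counter

# Matches sshd-style failure lines; group 2 captures the source IP.
AUTH_FAILURE = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")


def failed_logins_by_ip(log_lines, threshold: int = 3) -> dict:
    """Return {source_ip: failure_count} for IPs at or above the threshold."""
    counts = Counter()
    for line in log_lines:
        match = AUTH_FAILURE.search(line)
        if match:
            counts[match.group(2)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}


logs = [
    "Jan 10 10:01:01 web1 sshd[101]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "Jan 10 10:01:04 web1 sshd[101]: Failed password for root from 10.0.0.5 port 22 ssh2",
    "Jan 10 10:01:09 web1 sshd[101]: Failed password for invalid user admin from 10.0.0.5 port 22 ssh2",
    "Jan 10 10:02:30 web1 sshd[102]: Failed password for alice from 10.0.0.6 port 22 ssh2",
]
suspects = failed_logins_by_ip(logs)
```

Here only `10.0.0.5` crosses the threshold, so it alone would trigger an alert.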
THANK YOU