Cloud Computing Unit-I, II
Cloud computing is a general term for the delivery of hosted computing services and IT
resources over the internet with pay-as-you-go pricing. Users can obtain technology services
such as processing power, storage and databases from a cloud provider, eliminating the need
for purchasing, operating and maintaining on-premises physical data centers and servers.
A cloud can be private, public or a hybrid. A public cloud sells services to anyone on the
internet. A private cloud is a proprietary network or a data center that supplies hosted services
to a limited number of people, with certain access and permissions settings. A hybrid cloud
offers a mixed computing environment where data and resources can be shared between both
public and private clouds. Regardless of the type, the goal of cloud computing is to provide
easy, scalable access to computing resources and IT services.
Cloud services can be classified into three general service delivery categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).
PaaS does not require users to manage the underlying infrastructure (the network, servers, operating systems or storage) but gives them control over the deployed applications. This allows organizations to focus on deploying and managing their applications, freeing them from the responsibility of software maintenance, planning and resource procurement.
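To make the PaaS division of responsibility concrete, here is a minimal sketch of the only artifact a PaaS user typically manages: the application code itself. Flask is used purely as an illustrative framework; nothing in the text prescribes it, and the platform, not the user, supplies the servers, operating system and runtime.

```python
# app.py - the application code is all the PaaS user deploys and manages.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Business logic lives here; provisioning, patching and scaling
    # of the underlying servers is the platform's responsibility.
    return "Hello from a platform-managed runtime!"

if __name__ == "__main__":
    app.run()  # run locally; on a PaaS the platform starts and scales the app
```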
Cost-effectiveness
Cloud computing lets you offload some or all of the expense and effort of purchasing,
installing, configuring and managing mainframe computers and other on-premises
infrastructure. You pay only for cloud-based infrastructure and other computing resources as
you use them.
Unlimited scalability
Cloud computing provides elasticity and self-service provisioning, so instead of
purchasing excess capacity that sits unused during slow periods, you can scale capacity up
and down in response to spikes and dips in traffic. You can also use your cloud provider’s
global network to spread your applications closer to users worldwide.
Cloud computing can either be classified based on the deployment model or the type of
service. Based on the specific deployment model, we can classify cloud as public, private,
and hybrid cloud. At the same time, it can be classified as infrastructure-as-a-service (IaaS),
platform-as-a-service (PaaS), and software-as-a-service (SaaS) based on the service the cloud
model offers.
Types of Cloud Computing
Private cloud:
In a private cloud, the computing services are offered over a private IT network for
the dedicated use of a single organization. Also termed internal, enterprise, or corporate
cloud, a private cloud is usually managed via internal resources and is not accessible to
anyone outside the organization. Private cloud computing provides all the benefits of a public
cloud, such as self-service, scalability, and elasticity, along with additional control, security,
and customization.
Private clouds provide a higher level of security through company firewalls and internal
hosting to ensure that an organization’s sensitive data is not accessible to third-party
providers. The drawback of private cloud, however, is that the organization becomes
responsible for all the management and maintenance of the data centers, which can prove to
be quite resource-intensive.
Public cloud
Public cloud refers to computing services offered by third-party providers over the
internet. Unlike private cloud, the services on public cloud are available to anyone who wants
to use or purchase them. These services could be free or sold on-demand, where users only
have to pay per usage for the CPU cycles, storage, or bandwidth they consume.
Public clouds can help businesses save on purchasing, managing, and maintaining on-
premises infrastructure since the cloud service provider is responsible for managing the
system. They also offer scalable RAM and flexible bandwidth, making it easier for businesses to scale to meet their needs.
Hybrid cloud
Hybrid cloud uses a combination of public and private cloud features. The “best of
both worlds” cloud model allows a shift of workloads between private and public clouds as
the computing and cost requirements change. When the demand for computing and
processing fluctuates, a hybrid cloud allows businesses to scale their on-premises infrastructure up to the public cloud to handle the overflow while ensuring that no third-party data centers have access to their data.
In a hybrid cloud model, companies only pay for the resources they use temporarily instead
of purchasing and maintaining resources that may not be used for an extended period. In
short, a hybrid cloud offers many of the benefits of a public cloud while limiting its security risks.
WHAT IS VIRTUALIZATION?
Virtualization primarily refers to sharing all hardware resources while running several
operating systems on a single machine. Additionally, it aids in providing a pool of IT
resources that we may share for mutually beneficial business outcomes. Cloud computing is
built on the virtualization technique, making it possible to use actual computer hardware
more effectively.
Through software, virtualization can divide the hardware components of a single
computer, such as its processors, memory, storage, and other components, into several virtual
computers, also known as virtual machines (VMs). Despite only using a small percentage of
the underlying computer hardware, each virtual machine (VM) runs its own operating system (OS)
and functions as a separate computer. As a result, virtualization allows for a more significant
return on an organization’s hardware investment and more effective use of physical computer
systems.
Nowadays, virtualization is standard practice in enterprise IT architecture. The economics of cloud computing are likewise based on this technology: because virtualization enables cloud providers to deliver services using their existing physical computer hardware, cloud users can buy only the computing resources they require at the time they need them, and they can scale those resources affordably as their workloads increase.
A cloud should exhibit all five of the following characteristics:
On-demand self-service: A customer can unilaterally provision computing resources such as server time and network storage.
Broad network access: Various clients and devices can access the network’s capabilities.
Resource pooling: The provider's computing resources are pooled using a multi-tenant model to serve a large number of customers.
Rapid elasticity: Through software, users can increase or decrease capacity.
Measured service: Resources are managed and optimized automatically, with usage metered to show who is using what and how much.
A major use of virtualization technology is to supply cloud users with standard, up-to-date versions of applications: when a new version of an application is released, the provider can make it available to all users centrally, something that would be far more expensive to do on individual physical machines.
1. Application Virtualization: Traditionally, an application runs directly on the local operating system and uses its hardware resources; essentially, you run the application on top of your own computer. Application virtualization packages the application and separates it from the underlying operating system. This lets a user remotely access an application from a server that stores all personal information and other application characteristics, while the application still appears to run on the local workstation over the internet.
2. Network Virtualization: Network virtualization provides the ability to create and provision virtual networks (logical switches, routers, firewalls, load balancers, VPNs and workload security) in days or even weeks. It allows multiple virtual networks, each with separate control and data planes, to co-exist on top of a single physical network, and these networks can be managed by separate parties that remain isolated from one another.
Components of Cloud Computing Architecture
Following are the components of cloud computing architecture:
1. Frontend: The frontend refers to the client side of the cloud computing system. It contains all the user interfaces and applications the client uses to access cloud services and resources, for example a web browser used to reach the cloud platform.
2. Backend: The backend refers to the cloud itself, as operated by the service provider. It holds and manages the resources, provides security mechanisms, and includes large-scale storage, virtual applications, virtual machines, traffic control mechanisms, deployment models and so on.
3. Service: A service in the backend refers to the three major types of cloud-based services (SaaS, PaaS and IaaS) and governs which type of service a user accesses.
4. Runtime cloud: The runtime cloud in the backend provides the execution and runtime platform/environment for virtual machines.
5. Storage: Storage in the backend provides a flexible, scalable storage service and management of stored data (a brief usage sketch follows this list).
6. Internet: The internet connection acts as the medium, or bridge, between frontend and backend, establishing the interaction and communication between them.
7. Database: The database component provides databases for storing structured data, such as SQL and NoSQL databases. Examples of database services include Amazon RDS, Microsoft Azure SQL Database and Google Cloud SQL.
8. Analytics: Analytics is the backend service that provides analytics capabilities for data in the cloud, such as warehousing, business intelligence and machine learning.
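As a brief illustration of the storage component above, the sketch below uploads and retrieves an object from a cloud object store. AWS S3 and the boto3 SDK are used only as one familiar example; the bucket and file names are hypothetical, and any provider's storage service could stand in.

```python
# Minimal object-storage sketch (assumes AWS credentials are configured
# and the boto3 package is installed; names below are hypothetical).
import boto3

s3 = boto3.client("s3")

# Upload a local file into the backend storage component...
s3.upload_file("report.csv", "example-bucket", "reports/report.csv")

# ...and download it again elsewhere, e.g. from another frontend client.
s3.download_file("example-bucket", "reports/report.csv", "report_copy.csv")
```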
Benefits of Cloud Computing Architecture
Makes the overall cloud computing system simpler.
Helps meet data processing requirements.
Helps provide high security.
Makes the system more modular.
Results in better disaster recovery.
Gives good user accessibility.
Reduces IT operating costs.
Provides high-level reliability.
Improves scalability.
Application availability
Application availability can be affected as much by system infrastructure as by software architecture, so to reach the desired level of availability it is important to take a holistic view of uptime.
It is therefore important to work with your hosting provider to ensure you get the
application availability and uptime assurances you need.
Uptime is the measure of the availability of a component or system and in the context of
cloud hosting is usually used to indicate the availability of the data centre facilities being
used. However there are two major components to the uptime equation: the application as
well as the infrastructure. Both should be optimised to prevent downtime. There are also
points of overlap, where the benefits of high availability measures applied to the
infrastructure level can only be realised if complementary steps are taken within the software
architecture. It is important to understand that other factors can affect availability in addition
to your uptime guarantee.
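To make the idea of an uptime guarantee concrete, the short calculation below (illustrative only; the percentages are not taken from the text) converts an availability figure into the downtime it allows per year.

```python
def downtime_hours_per_year(uptime_percent: float) -> float:
    """How many hours of downtime an uptime guarantee permits each year."""
    hours_per_year = 365 * 24  # 8,760 hours
    return hours_per_year * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {downtime_hours_per_year(sla):.2f} hours of downtime per year")
# 99.0% -> 87.60 h, 99.9% -> 8.76 h, 99.99% -> 0.88 h
```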
The cost
Although the ‘holy grail’ is zero downtime, this is almost impossible to achieve within
acceptable financial and environmental constraints. When planning for high application
availability, there are choices and, ultimately, some compromises to be made. The fact is that
you need to decide what amount of downtime is acceptable to your organisation. How
quickly do you need your systems and applications to recover in the event of an IT failure
and what amount of data loss can you cope with? Understanding your recovery time objective
(RTO) and recovery point objective (RPO) will help you make the right choices.
Highly available infrastructure
When tackling the infrastructure aspects of your application’s availability it all begins with
the data centre tier upon which your cloud infrastructure is built. There are four tiers of data
centres offering different levels of uptime. Consider what tier best suits your availability
requirements.
Single points of failure can take many forms and might not necessarily be under your direct
control. Device level redundancy is the ideal to offset the risk of any application outage
arising from a single component’s failure, although again this depends very much on your
attitude to how much downtime you can tolerate and how quickly your hosting provider can
replace broken components.
Scalability
An application needs to be able to cope with increases in demand. The ability to scale out (add
new servers/instances) or scale up (add new resources to a server) will determine your
uptime. Good capacity planning within your business will allow you to determine what
scalability you need within the infrastructure and therefore what you pay for.
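As an illustration of how scale-out decisions can be automated, here is a naive target-tracking rule in the spirit of the capacity planning described above. The thresholds, limits and metric are hypothetical; real cloud providers expose their own autoscaling policies.

```python
def desired_instance_count(current_instances: int, avg_cpu_percent: float,
                           target_cpu_percent: float = 60.0,
                           min_instances: int = 2, max_instances: int = 20) -> int:
    """Scale out when average CPU is above the target, scale in when below."""
    if avg_cpu_percent <= 0:
        return min_instances
    desired = round(current_instances * avg_cpu_percent / target_cpu_percent)
    return max(min_instances, min(max_instances, desired))

print(desired_instance_count(4, 90))   # demand spike  -> 6 instances
print(desired_instance_count(4, 30))   # quiet period  -> 2 instances
```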
Application architecture
However, if your application has not been optimised to use redundant or scalable resources, none of the above will improve your uptime and availability. Making the application stateless is the ideal, as sketched below, though this can be difficult and you may need to look at achieving uptime through other measures.
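The contrast below sketches what "making it stateless" means in practice: session state is kept in a shared external store rather than in the memory of one server, so any instance can serve any request. Redis and the host name used are only assumptions for illustration.

```python
# Stateful (harder to scale): sessions live in this one process's memory,
# so requests must always return to the same server.
local_sessions: dict[str, str] = {}

# Stateless (easier to scale): state lives in an external, shared store,
# so any instance behind the load balancer can handle the next request.
import redis  # assumed dependency; any shared store would do

store = redis.Redis(host="cache.internal", port=6379)  # hypothetical host

def handle_request(session_id: str, payload: str) -> None:
    # The instance itself keeps nothing between requests.
    store.hset(session_id, "last_payload", payload)
```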
Application performance
Assessing the different factors that affect application availability is no mean feat. By
working with your hosting provider you can achieve a better understanding of what can be
done to achieve maximum uptime.
To keep tabs on your cloud resources and usage, you need cloud performance metrics.
Similar to the check engine light on your car, cloud performance metrics let you know when
your cloud performance needs a tune up. There are several cloud performance metrics that
you can use to get a holistic view of your entire cloud deployment model.
Latency
Latency describes the speed at which operations can be executed on your cloud platform. In a
perfect world, your cloud platform should support processing speeds at levels seen with on-
premises servers. Rarely is that the case though due to throttling performed by cloud service
providers. Since the public cloud is a shared environment, cloud service providers may limit
speeds for the benefit of all users.
Resource Availability
Resource availability lets you know if your cloud instances are running as expected, or if
cloud platform requests are hanging in the balance. High availability is the goal of every
business operating on the cloud. High availability means that your applications are always
available whenever and wherever your customers and internal users need them most.
Capacity
Capacity is the ability of your cloud platform to provide adequate storage that best suits your
business needs. Capacity directly impacts the ability of the cloud platform to process
requests. Higher available capacity is often correlated with higher cloud performance.
Now that you have a better understanding of the various metrics used to determine cloud
performance, how exactly do you test it? There are a number of tests you can perform:
o Load test – measures the performance of the application under both normal and peak
conditions
o Stress test – measures the performance of the application under extreme conditions
outside of the bounds of normal operation
o Browser test – confirms that the application works as intended when accessed
through different web browsers
o Latency test – measures the amount of time it takes to move data from one point to
another within the network (a simple sketch follows this list)
o Targeted infrastructure test – isolates and measures a single component of the
application to test its performance
o Failover test – confirms the application’s ability to automatically provide extra
resources and move to a back-up system in the event of server or system failure
o Capacity test – measures how many users the application can handle before
performance suffers
o Soak test – measures the performance of the application under a high load for an
extended period.
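As a simple example of the latency test above, the sketch below times a handful of HTTP round trips using only the Python standard library. The URL is a placeholder; a real test would target your own cloud endpoint and use a larger sample.

```python
import time
import urllib.request

def average_latency_ms(url: str, samples: int = 10) -> float:
    """Average round-trip time, in milliseconds, for simple GET requests."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

print(f"{average_latency_ms('https://example.com/'):.1f} ms")  # placeholder URL
```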
There are many ways to achieve the performance that you want on the cloud. Note that you
may have to resort to a combination of techniques to achieve desired performance.
Select the Right Cloud Data Platform
There are a variety of cloud service providers to choose from, including Microsoft
Azure, Google Cloud Platform and Amazon Web Services (AWS). Each provider offers a
slightly different set of features on the cloud: certain platforms offer better storage capacity, while others are optimized to handle the processing demands of mission-critical workloads. It is important to perform in-depth research and develop an
effective cloud strategy before selecting a cloud service provider.
Right Sizing
Right sizing refers to scaling your cloud computing resources automatically to match
spikes in customer demand. Customer traffic typically ebbs and flows depending on the time
of year, like the holiday shopping season for retail. As much as you need your cloud resources to scale up during peaks, you also need them to scale back down just as quickly; otherwise you will be left paying an unusually high cloud bill for idle resources that you no longer use.
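A rough, purely illustrative calculation shows why scaling back down matters; the rate and counts below are assumptions, not figures from the text.

```python
hourly_rate = 0.10        # USD per instance-hour (hypothetical)
idle_instances = 8        # capacity left running after the seasonal peak
idle_hours = 24 * 30      # roughly one month of idle time

wasted_spend = idle_instances * hourly_rate * idle_hours
print(f"Spend on idle capacity: ${wasted_spend:.2f}")  # $576.00
```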
Cloud security is the set of control-based security measures and protective technology designed to protect resources stored online from leakage, theft and data loss. Protection covers cloud infrastructure, applications and data against threats. Security applications are typically delivered as software, following the same SaaS (Software as a Service) model.
A firewall is a central part of the cloud security architecture. It protects the network and the perimeter of end-users, and it also protects traffic between the various applications stored in the cloud.
Access control protects data by allowing us to set access lists for various assets. For example, you can grant specific employees access to an application while restricting others. A sound rule is that employees should only be able to access the resources they require for their work; maintaining strict access control keeps essential documents from being stolen by malicious insiders or hackers.
Data protection methods include virtual private networks (VPNs), encryption and masking. A VPN allows remote employees to connect to the corporate network and accommodates tablets and smartphones for remote access. Data masking maintains the data's integrity by keeping identifiable information private; a medical company, for example, can share masked data without violating HIPAA rules.
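A tiny sketch of data masking as described above: digits in identifiable fields are replaced while the record's shape is preserved, so the data stays usable for testing or analytics. The record format is invented for illustration.

```python
import re

def mask_identifiers(record: str) -> str:
    """Replace every digit with 'X', keeping the record's structure intact."""
    return re.sub(r"\d", "X", record)

print(mask_identifiers("Patient 4821-77, phone 555-0143"))
# -> "Patient XXXX-XX, phone XXX-XXXX"
```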
Threat intelligence spots security threats and ranks them in order of importance, which helps protect mission-critical assets. Disaster recovery is also vital to security because it helps recover data that has been lost or stolen. Understanding how cloud security operates helps you find ways in which it can benefit your business.
Ransomware is malware that hijacks a system's data and demands a financial ransom. Companies are reluctant to pay the ransom, yet they want their data back.
Identity and access management (IAM): IAM services and tools allow
administrators to centrally manage and control who has access to specific cloud-based and
on-premises resources. IAM can enable you to actively monitor and restrict how users
interact with services, allowing you to enforce your policies across your entire organization.
Data loss prevention (DLP): DLP can help you gain visibility into the data you store
and process by providing capabilities to automatically discover, classify, and de-identify
regulated cloud data.
Security information and event management (SIEM): SIEM solutions combine
security information and security event management to offer automated monitoring,
detection, and incident response to threats in your cloud environments. Using AI and ML
technologies, SIEM tools allow you to examine and analyze log data generated across your
applications and network devices—and act quickly if a potential threat is detected.
Public key infrastructure (PKI): PKI is the framework used to manage secure,
encrypted information exchange using digital certificates. PKI solutions typically provide
authentication services for applications and verify that data remains uncompromised and
confidential through transport. Cloud-based PKI services allow organizations to manage and
deploy digital certificates used for user, device, and service authentication.
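To show how PKI supports authentication and confidentiality in transport, the sketch below opens a TLS connection and inspects the server's certificate using Python's standard library. The host name is a placeholder, and certificate management itself would normally be handled by a cloud PKI service as described above.

```python
import socket
import ssl

hostname = "example.com"  # placeholder host
context = ssl.create_default_context()  # trusts the system's CA certificates

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()  # certificate issued via a PKI
        print("Subject:", cert["subject"])
        print("Valid until:", cert["notAfter"])
```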
However, most organizations will likely face specific cloud security challenges, including:
Lack of visibility
Cloud-based resources run on infrastructure that is located outside your corporate network
and owned by a third party. As a result, traditional network visibility tools are not suitable for
cloud environments, making it difficult for you to gain oversight into all your cloud assets,
how they are being accessed, and who has access to them.
Misconfigurations
Misconfigured cloud security settings are one of the leading causes of data breaches
in cloud environments. Cloud-based services are made to enable easy access and data
sharing, but many organizations may not have a full understanding of how to secure cloud
infrastructure. This can lead to misconfigurations, such as leaving default passwords in place,
failing to activate data encryption, or mismanaging permission controls.
Access management
Cloud deployments can be accessed directly using the public internet, which enables
convenient access from any location or device. At the same time, it also means that attackers
can more easily gain unauthorized access to resources using compromised credentials or improper access controls.
Dynamic workloads
Cloud resources can be provisioned and dynamically scaled up or down based on your
workload needs. However, many legacy security tools are unable to enforce policies in
flexible environments with constantly changing and ephemeral workloads that can be added
or removed in a matter of seconds.
Compliance
The cloud adds another layer of regulatory and internal compliance requirements that
you can violate even if you don’t experience a security breach. Managing compliance in the
cloud is an overwhelming and continuous process. Unlike an on-premises data center where
you have complete control over your data and how it is accessed, it is much harder for
companies to consistently identify all cloud assets and controls, map them to relevant
requirements, and properly document everything.
The goal of cloud DR is virtually identical to traditional DR: to protect valuable business
resources and ensure protected resources can be accessed and recovered to continue normal
business operations.
Importance of cloud DR
DR is a central element of any business continuity (BC) strategy. It entails replicating
data and applications from a company's primary infrastructure to a backup infrastructure,
usually situated in a distant geographical location.
Before the advent of cloud connectivity and self-service technologies, traditional DR options
were limited to local DR and second-site implementations. Local DR didn't always protect
against disasters such as fires, floods and earthquakes. A second site -- off-site DR --
provided far better protection against physical disasters, but implementing and maintaining a
second data center imposed significant business costs.
The following reasons highlight the importance of cloud storage and disaster recovery:
Cloud DR ensures business continuity in the event of natural disasters and cyber
attacks, which can disrupt business operations and result in data loss.
With a cloud disaster recovery strategy, critical data and applications can be backed
up to a cloud-based server. This enables quick data recovery for businesses in the wake
of an event, thus reducing downtime and minimizing the effects of the outage.
There are three fundamental components of a cloud-based disaster recovery plan: analysis,
implementation and testing.
Analysis. Any DR plan starts with a detailed risk assessment and analysis, which basically
examines the current IT infrastructure and workflows, and then considers the potential
disasters that a business is likely to face. The goal is to identify potential vulnerabilities and
disasters -- everything from intrusion vulnerabilities and theft to earthquakes and floods --
and then evaluate whether the IT infrastructure is up to those challenges.
An analysis can help organizations identify the business functions and IT elements
that are most critical and predict the potential financial effects of a disaster event. Analysis
can also help determine RPOs and RTOs for infrastructure and workloads. Based on these
determinations, a business can make more informed choices about which workloads to
protect, how those workloads should be protected and where more investment is needed to
achieve those goals.
Implementation. The goal here is to determine how to address a given disaster, should it occur, and the
plan is matched with the implementation of technologies and services built to handle the
specific circumstances. In this case, the plan includes cloud-based technologies and services.
Testing. Any DR plan must be tested and updated regularly to ensure IT staff are proficient
at implementing the appropriate response and recovery successfully and in a timely manner,
and that recovery takes place within an acceptable time frame for the business. Testing can
reveal gaps or inconsistencies in the implementation, enabling organizations to correct and
update the DR plan before a real disaster strikes.
Approaches to cloud DR
The following are the three main approaches to cloud disaster recovery:
Cold DR typically involves storage of data or virtual machine (VM) images. These
resources generally aren't usable without additional work such as downloading the stored
data or loading the image into a VM. Cold DR is usually the simplest approach -- often
just data storage -- and the least expensive approach, but it takes the longest to recover,
leaving the business with the longest downtime in a disaster.
Warm DR is generally a standby approach where duplicate data and applications are
placed with a cloud DR provider and kept up to date with data and applications in the
primary data center. But the duplicate resources aren't doing any processing. When
disaster strikes, the warm DR can be brought online to resume operations from the DR
provider -- often a matter of starting a VM and redirecting IP addresses and traffic to the
DR resources. Recovery can be quite short, but it still imposes some downtime for the protected workloads (a small sketch of this pattern follows this list).
Hot DR is typically a live parallel deployment of data and workloads running
together in tandem. That is, both the primary data center and the DR site use the same
workload and data running in synchronization -- both sites sharing part of the overall
application traffic. When disaster strikes one site, the remaining site continues without
disruption to handle the work. Users are ideally unaware of the disruption. Hot DR has
no downtime, but it can be the most expensive and complicated approach.
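The warm-standby approach above can be pictured with a small health-check-and-redirect sketch. The endpoints are hypothetical, and in a real deployment the redirection would be done by DNS or a load balancer rather than application code.

```python
import urllib.request

PRIMARY = "https://app.primary.example.com/health"  # hypothetical endpoints
DR_SITE = "https://app.dr.example.com/health"

def healthy(url: str) -> bool:
    """Consider a site healthy if its health endpoint answers with HTTP 200."""
    try:
        return urllib.request.urlopen(url, timeout=2).status == 200
    except OSError:
        return False

# Route traffic to the standby only when the primary stops answering.
active_site = PRIMARY if healthy(PRIMARY) else DR_SITE
print("Routing traffic to:", active_site)
```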
Next-Gen Cloud Computing
1. Unikernels – These are specialized operating systems that offer enhanced security, fine-grained optimization and the small footprint required for microservices. They are built with library OS technology and can be customized for different programs and hardware. A unikernel takes the form of an executable image that runs natively on a specific hypervisor, with no extra supporting OS required. The library OS at its core is simply a collection of libraries providing the essential capabilities of an operating system. For example, MirageOS is a library operating system that builds unikernels for networking purposes across a variety of cloud and mobile environments. Another example is the Rumprun unikernel, which consists of thousands of lines of code, runs POSIX applications directly on raw hardware, and also supports cloud hypervisors such as Xen and KVM.
2. Software Defined Networking – The meaning of this term differs between providers and users. In general, SDN is a key component of data-center automation. It provides efficient methods for managing virtualization, saving the extra cost of hardware implementation. Data-center managers can control every aspect of the data center and upgrade hardware as requirements change. Because digitization brings many challenges to maintaining stability in the market, automated software becomes important: these automated tools remove the complications of managing such activities by hand and help organizations improve their cloud data security by reducing human error.