
CLOUD COMPUTING

Define cloud computing:


Cloud computing means storing and accessing data and programs on remote servers
that are hosted on the internet instead of on the computer’s hard drive or local server.
Cloud computing is also referred to as Internet-based computing. The stored data can be
files, images, documents, or any other storable content.
Some operations which can be performed with cloud computing are –
Storage, backup, and recovery of data
Delivery of software on demand
Development of new applications and services
Streaming videos and audio

Cloud Computing Architecture:


Cloud computing technology is used by both small and large organizations to store
and access information over an internet connection.
Cloud computing architecture is a combination of service-oriented architecture and event-
driven architecture.
Cloud computing architecture is divided into the following two parts -
✓ Front End
✓ Back End
The diagram below shows the architecture of cloud computing -
Front End
The front end is used by the client. It contains client-side interfaces and applications that are
required to access the cloud computing platforms. The front end includes web browsers
(such as Chrome, Firefox, Internet Explorer, etc.), thin & fat clients, tablets, and mobile
devices.
Back End
The back end is used by the service provider. It manages all the resources that are required
to provide cloud computing services. It includes a huge amount of data storage, security
mechanisms, virtual machines, deployment models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
There are the following components of cloud computing architecture -
1. Client Infrastructure
Client Infrastructure is a front-end component. It provides a GUI (Graphical User Interface) to
interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
A cloud service manages which type of service you access according to the client’s
requirement.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS) – It is also known as cloud application services. Most SaaS
applications run directly in the web browser, which means we do not need to download and
install them. Some important examples of SaaS are given below –
Example: Google Apps, Salesforce, Dropbox, Slack, Hubspot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar
to SaaS, but the difference is that PaaS provides a platform for software creation, whereas with
SaaS, we can access software over the internet without the need for any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It
provides fundamental computing infrastructure, while the client remains responsible for
managing application data, middleware, and runtime environments.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge
amount of storage capacity in the cloud to store and manage data.
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud
infrastructure includes hardware and software components such as servers, storage, network
devices, virtualization software, and other storage resources that are needed to support the
cloud computing model.
7. Management
Management is used to manage components such as the application, service, runtime cloud,
storage, and infrastructure, to handle security issues in the back end, and to establish
coordination between them.
8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and
communicate with each other.

Cloud Service Models:

SOFTWARE AS A SERVICE:
SaaS is also known as "On-Demand Software". It is a software distribution model in which
services are hosted by a cloud service provider. These services are available to end-users over
the internet, so end-users do not need to install any software on their devices to access
them.
Platform as a Service:
PaaS provides a runtime environment. It allows programmers to easily create, test, run, and
deploy web applications. You can purchase these applications from a cloud service provider
on a pay-per-use basis and access them over an Internet connection. In PaaS, back-end
scalability is managed by the cloud service provider, so end-users do not need to worry about
managing the infrastructure.
PaaS includes infrastructure and platform to support the web application life cycle.
Infrastructure as a Service:
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the cloud
computing platform. It allows customers to outsource their IT infrastructure, such as servers,
networking, processing, storage, virtual machines, and other resources. Customers access
these resources over the Internet using a pay-per-use model. In traditional hosting services,
IT infrastructure was rented out for a specific period of time, with pre-determined hardware
configuration. The client paid for the configuration and time, regardless of the actual use. With
the help of the IaaS cloud computing platform layer, clients can dynamically scale the
configuration to meet changing requirements and are billed only for the services actually
used. IaaS cloud computing platform layer eliminates the need for every organization to
maintain the IT infrastructure. IaaS is offered in three models: public, private, and hybrid
cloud.
Types of cloud deployment
Public cloud:
Public Cloud provides a shared platform that is accessible to the general public through an
Internet connection.
The public cloud operates on a pay-per-use model and is administered by a third party, i.e.,
the cloud service provider.
In the Public cloud, the same storage is being used by multiple users at the same time.
Public cloud is owned, managed, and operated by businesses, universities, government
organizations, or a combination of them.
Amazon Elastic Compute Cloud (EC2), Microsoft Azure, IBM's Blue Cloud, Sun Cloud, and
Google Cloud are examples of the public cloud.
Private cloud:
Private cloud is also known as an internal cloud or corporate cloud.
Private cloud provides computing services to a private internal network (within the
organization) and selected users instead of the general public.
Private cloud provides a high level of security and privacy to data through firewalls and
internal hosting. It also ensures that operational and sensitive data are not accessible to third-
party providers.
HP Data Centers, Microsoft, Elastra private cloud, and Ubuntu are examples of private
clouds.
Hybrid cloud:
Hybrid cloud is a combination of public and private clouds.
Hybrid cloud = public cloud + private cloud
The main aim of combining these clouds (public and private) is to create a unified, automated,
and well-managed computing environment.
In the hybrid cloud, non-critical activities are performed by the public cloud and critical
activities are performed by the private cloud.
Hybrid clouds are mainly used in finance, healthcare, and universities.
The best hybrid cloud provider companies are Amazon, Microsoft, Google, Cisco, and NetApp.
Community cloud:
Community cloud is a cloud infrastructure that allows systems and services to be accessible
to a group of several organizations in order to share information. It is owned, managed, and
operated by one or more organizations in the community, a third party, or a combination of
them.

Challenges of cloud computing:


Cloud computing, an emergent technology, poses many challenges in different aspects of
data and information handling. Some of these are shown in the following diagram:
Security and Privacy
Security and privacy of information is the biggest challenge to cloud computing. Security and
privacy issues can be mitigated by employing encryption, security hardware, and security
applications.
Portability
Another challenge is that applications should be easily migratable from one cloud provider to
another; there must be no vendor lock-in. However, this is not yet fully possible because each
cloud provider uses different standard languages for its platform.
Interoperability
Interoperability means an application on one platform should be able to incorporate services
from other platforms. It is made possible via web services, but developing such web services is
very complex.
Computing Performance
Data-intensive applications on the cloud require high network bandwidth, which results in
high cost. Low bandwidth does not meet the desired computing performance of a cloud
application.
Reliability and Availability
It is necessary for cloud systems to be reliable and robust because most businesses are
now becoming dependent on services provided by third parties.
Multiple Cloud Management
Companies have started to invest in multiple public clouds, multiple private clouds or a
combination of both called the hybrid cloud. This has grown rapidly in recent times. So it has
become important to list the challenges faced by such organizations and find solutions to grow
with the trend.
Cost Management
Cloud computing enables you to access application software over a fast internet connection
and lets you save on investing in costly computer hardware, software, management, and
maintenance. This makes it affordable. But what is challenging and expensive is tuning the
organization’s needs on the third-party platform.
Lack of expertise
With the increasing workload on cloud technologies and continuously improving cloud tools,
management has become difficult. There has been a consistent demand for a trained
workforce who can deal with cloud computing tools and services. Hence, firms need to train
their IT staff to minimize this challenge.

Advantages of cloud computing:


Back-up and restore data
Once data is stored in the cloud, it is easier to back up and restore that data using the
cloud.
Improved collaboration
Cloud applications improve collaboration by allowing groups of people to quickly and easily
share information in the cloud via shared storage.
Excellent accessibility
The cloud allows us to quickly and easily access stored information anytime, anywhere in the
world, using an internet connection. An internet cloud infrastructure increases organizational
productivity and efficiency by ensuring that our data is always accessible.
Low maintenance cost
Cloud computing reduces both hardware and software maintenance costs for organizations.
Mobility
Cloud computing allows us to easily access all cloud data via mobile.
Services in the pay-per-use model
Cloud computing offers Application Programming Interfaces (APIs) that let users access
services on the cloud and pay charges according to service usage.
Unlimited storage capacity
Cloud offers us a huge amount of storing capacity for storing our important data such as
documents, images, audio, video, etc. in one place.
Data security
Data security is one of the biggest advantages of cloud computing. Cloud offers many
advanced features related to security and ensures that data is securely stored and handled.

Characteristics of cloud computing:


On-demand self-service: Cloud computing services do not require any human
administrators; users themselves are able to provision, monitor, and manage computing
resources as needed.
Broad network access: Computing services are generally provided over standard
networks to heterogeneous devices.
Rapid elasticity: Computing services should have IT resources that are able to scale out
and in quickly, on an as-needed basis. Whenever the user requires a service it is provided,
and it is scaled back in as soon as the requirement is over.
Resource pooling: The IT resources (e.g., networks, servers, storage, applications, and
services) are shared across multiple applications and tenants in an uncommitted
manner. Multiple clients are served from the same physical resource.
Measured service: Resource utilization is tracked for each application and tenant; this
provides both the user and the resource provider with an account of what has been used.
This is done for various reasons, such as monitoring, billing, and effective use of resources.
Multi-tenancy: Cloud computing providers can support multiple tenants (users or
organizations) on a single set of shared resources.
Virtualization: Cloud computing providers use virtualization technology to abstract
underlying hardware resources and present them as logical resources to users.
Resilient computing: Cloud computing services are typically designed with redundancy and
fault tolerance in mind, which ensures high availability and reliability.
Flexible pricing models: Cloud providers offer a variety of pricing models, including pay-
per-use, subscription-based, and spot pricing, allowing users to choose the option that best
suits their needs.
Security: Cloud providers invest heavily in security measures to protect their users’ data
and ensure the privacy of sensitive information.
Automation: Cloud computing services are often highly automated, allowing users to
deploy and manage resources with minimal manual intervention.
Sustainability: Cloud providers are increasingly focused on sustainable practices, such as
energy-efficient data centers and the use of renewable energy sources, to reduce their
environmental impact.
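The "measured service" and "flexible pricing" characteristics above can be sketched as a simple metering calculation. The resource names, rates, and usage figures below are illustrative assumptions, not any provider's real pricing:

```python
# Illustrative pay-per-use metering: each tenant's resource usage is
# tracked and billed at a per-unit rate (all rates here are made up).
RATES = {"cpu_hours": 0.05, "storage_gb_month": 0.02, "network_gb": 0.01}

def monthly_bill(usage: dict) -> float:
    """Sum usage * rate for every metered resource, rounded to cents."""
    return round(sum(RATES[res] * qty for res, qty in usage.items()), 2)

tenant_usage = {"cpu_hours": 720, "storage_gb_month": 500, "network_gb": 100}
print(monthly_bill(tenant_usage))  # 720*0.05 + 500*0.02 + 100*0.01 = 47.0
```

The same metering record serves both billing and capacity monitoring, which is why "measured service" underpins the pay-per-use models described later in these notes.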

Cloud Computing vs Virtualization

1. Cloud computing is used to provide pooled, automated resources that can be accessed on demand, while virtualization is used to make various simulated environments from a physical hardware system.
2. Cloud computing setup is tedious and complicated, while virtualization setup is simple in comparison.
3. Cloud computing is highly scalable, while virtualization is less scalable than cloud computing.
4. Cloud computing is very flexible, while virtualization is less flexible than cloud computing.
5. For disaster recovery, cloud computing relies on multiple machines, while virtualization relies on a single peripheral device.
6. In cloud computing, the workload is stateless; in virtualization, the workload is stateful.
7. The total cost of cloud computing is higher than that of virtualization.
8. Cloud computing requires much dedicated hardware, while in virtualization a single dedicated hardware unit can do a great job.
9. Cloud computing provides unlimited storage space, while in virtualization storage space depends on physical server capacity.
10. Cloud computing is of two types: public cloud and private cloud. Virtualization is of two types: hardware virtualization and application virtualization.
11. In cloud computing, configuration is image-based; in virtualization, configuration is template-based.
12. In cloud computing, we utilize the entire server capacity and the servers are consolidated; in virtualization, the entire servers are on demand.
13. In cloud computing, pricing follows the pay-as-you-go model and consumption is the metric on which billing is done; in virtualization, pricing is totally dependent on infrastructure costs.

Virtualization:
Virtualization is a technique for separating a service from the underlying physical delivery
of that service. It is the process of creating a virtual version of something, such as computer
hardware. It was initially developed during the mainframe era. It involves using specialized
software to create a virtual or software-created version of a computing resource rather than
the actual version of the same resource. With the help of virtualization, multiple operating
systems and applications can run on the same machine and hardware at the same
time, increasing the utilization and flexibility of hardware.
In other words, one of the main cost-effective, hardware-reducing, and energy-saving
techniques used by cloud providers is Virtualization. Virtualization allows sharing of a single
physical instance of a resource or an application among multiple customers and
organizations at one time.
Benefits of Virtualization
More flexible and efficient allocation of resources.
Enhanced development productivity.
Lower cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay-per-use of the IT infrastructure on demand.
Ability to run multiple operating systems.

Characteristics of Virtualization
Increased Security: The ability to control the execution of a guest program in a completely
transparent manner opens new possibilities for delivering a secure, controlled execution
environment. All the operations of the guest programs are generally performed against the
virtual machine, which then translates and applies them to the host programs.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the
most relevant features.
Sharing: Virtualization allows the creation of a separate computing environment within the
same host.
Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, the opposite process, in which a group of separate
hosts is tied together and represented as a single virtual host.

Types of Virtualization:
Application Virtualization: Application virtualization helps a user gain remote access to
an application from a server. The server stores all personal information and other
characteristics of the application, yet the application can still run on a local workstation
through the internet. Technologies that use application virtualization are hosted applications
and packaged applications.
Storage Virtualization: Storage virtualization is an array of servers that are managed by a
virtual storage system. The servers aren’t aware of exactly where their data is stored and
instead function more like worker bees in a hive. It allows storage from multiple sources to
be managed and utilized as a single repository. Storage virtualization software maintains
smooth operations, consistent performance, and a continuous suite of advanced functions
despite changes, breakdowns, and differences in the underlying equipment.
Server Virtualization: This is a kind of virtualization in which the masking of server
resources takes place. Here, the central server (physical server) is divided into multiple
different virtual servers by changing the identity number and processors. Each system
can run its own operating system in an isolated manner, while each sub-server knows the
identity of the central server. This increases performance and reduces operating cost by
deploying main server resources into sub-server resources. It is beneficial for virtual
migration, reducing energy consumption, reducing infrastructure costs, etc.
Data Virtualization: This is the kind of virtualization in which data is collected from
various sources and managed in a single place, without exposing technical details such as
how the data is collected, stored, and formatted. The data is then arranged logically so that
its virtual view can be accessed remotely by interested stakeholders and users through
various cloud services. Many big companies provide such services, such as Oracle, IBM,
AtScale, CData, etc.

Hyper-V vs VMware

Management tool: Hyper-V has a dedicated management tool, while VMware boasts a reliable management tool.
File system: Hyper-V's ReFS (Resilient File System) for storage deployment is complex and challenging to manage, while VMware's Virtual Machine File System (VMFS) has enviable clustering capability and is much simpler.
Snapshots: Hyper-V has better snapshot capability and can efficiently run snapshots while still in production, with persistent checkpoints and migration capabilities, allowing 64 images per VM. VMware has snapshot technology (32 snapshots per VM) that allows point-in-time copies of VMs to prevent data loss.
Memory management: Hyper-V has a simpler and more efficient memory management system, using a single technique called Dynamic Memory to boost RAM usage in VMs. VMware has a more complex and less efficient memory management system that relies on various techniques such as oversubscription, page sharing, and memory compression to ensure optimal RAM usage in VMs.
Operating systems: Hyper-V supports only Windows and just a few more operating systems, such as FreeBSD and Linux. VMware supports more operating systems than Hyper-V, including macOS, Linux, Unix, and Windows.
Capacity: Hyper-V accommodates more physical memory and virtual CPUs per host and per VM. VMware can handle more logical and virtual CPUs per host.
Security: Hyper-V has extensive security protocols, such as Active Directory, that manage overall security concerns. VMware implements data encryption during storage and motion but has a less extensive security suite compared to Hyper-V.
Pricing: Hyper-V pricing is based on the number of cores per host. VMware pricing is per processor.

Interoperability:
Interoperability is defined as the capacity of at least two systems or applications to exchange
data and make use of it. Cloud interoperability, in turn, is the extent to which one
cloud service can connect with another by exchanging data according to an agreed method to
obtain results.
The two crucial components in cloud interoperability are usability and connectivity, which
are further divided into multiple layers:
1. Behaviour
2. Policy
3. Semantic
4. Syntactic
5. Transport
Portability:

It is the process of transferring data or an application from one framework to another while
keeping it executable or usable. Portability can be separated into two types: cloud data
portability and cloud application portability.
Cloud data portability –
It is the capability of moving information from one cloud service to another without
needing to re-enter the data.
Cloud application portability –
It is the capability of moving an application from one cloud service to another or between a
client’s environment and a cloud service.
Service interoperability:

Refers to the ability of various cloud services to interact and complement each other in a
standardized way.
Security interoperability:
Ensures that security mechanisms and protocols are interoperable across different cloud
providers.

Definition of clouds for enterprise:


Anything as a Service (XaaS) is a cloud computing term for the extensive variety of services
and applications emerging for users to access on demand over the internet.
The term XaaS refers to the delivery of anything as a service. In this cloud computing model,
products, tools, and technologies are delivered to users as services over a network.

Storage as a Service:
Storage as a service (STaaS) is a managed service in which the provider supplies the customer
with access to a data storage platform. The service can be delivered on premises from
infrastructure that is dedicated to a single customer, or it can be delivered from the public
cloud as a shared service that's purchased by subscription and is billed according to one or
more usage metrics.
STaaS customers access individual storage services through standard system interface
protocols or application program interfaces (APIs).
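The idea of accessing storage through a simple service interface, with usage metered for billing, can be sketched with a toy in-memory object store. The class and method names below are hypothetical and only illustrate the pattern; real STaaS offerings expose similar put/get operations through REST APIs:

```python
# Minimal sketch of a metered object store, in the style an STaaS provider
# might expose through an API. All names here are illustrative, not a real SDK.
class MeteredObjectStore:
    def __init__(self):
        self._objects = {}  # key -> raw bytes

    def put(self, key: str, data: bytes) -> None:
        """Store an object under a key (an upload in a real service)."""
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        """Retrieve a stored object (a download in a real service)."""
        return self._objects[key]

    def used_bytes(self) -> int:
        """Total stored bytes: the usage metric a subscription bill is based on."""
        return sum(len(v) for v in self._objects.values())

store = MeteredObjectStore()
store.put("reports/q1.txt", b"quarterly report")
print(store.used_bytes())  # 16 bytes stored, the basis for usage billing
```

The point of the sketch is the separation of concerns: the customer sees only keys and objects, while the provider tracks consumption behind the interface.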
Storage as a service was originally seen as a cost-effective way for small and mid-size
businesses that lacked the technical personnel and capital budget to implement and maintain
their own storage infrastructure.
Advantages of STaaS
Key advantages to STaaS in the enterprise include the following:
Storage costs. Personnel, hardware and physical storage space expenses are reduced.
Disaster recovery. Having multiple copies of data stored in different locations can better
enable disaster recovery measures.
Scalability. With most public cloud services, users only pay for the resources that they use.
Syncing. Files can be automatically synced across multiple devices.
Security. Security can be both an advantage and a disadvantage, as security methods may
change per vendor. Data tends to be encrypted during transmission and while at rest.
Database as a Service (DBaaS):
Like SaaS, PaaS, and IaaS, DBaaS (also known as Managed Database Service) is a cloud
computing service. It allows users involved in database activities to access and use a cloud
database system without purchasing it.
DBaaS and cloud databases come under Software as a Service (SaaS), whose demand is
growing fast.
Simply put, Database as a Service (DBaaS) is self-service, on-demand database
consumption coupled with automation of operations. As cloud computing services
are pay-per-use, DBaaS is based on the same payment structure: you pay only for what you
use. DBaaS provides the same functions as standard traditional and relational database
models, so by using it, organizations can avoid database configuration, management,
upgrades, and security overhead.
Key Characteristics of DBaaS:
A fully managed database service helps to set up, manage, and administer your database in
the cloud and also provides services for hardware provisioning and backup.
DBaaS makes databases effortlessly available to database consumers from various
backgrounds and levels of IT expertise.
Provides on-demand services.
Based on the resources available, it delivers a versatile database platform that tailors itself to
the environment’s current needs.
A team of experts at your disposal, continuously monitoring the databases.
Automates database administration and monitoring.
Leverages existing servers and storage.

Advantages of DBaaS:
It is the responsibility of the DBaaS provider to manage and maintain the database hardware
and software.
The hefty power bills for ventilation and cooling to keep the servers running are
eliminated.
An organization that subscribes to DBaaS is free from hiring database developers or
constructing a database system in-house.
By making use of the latest automation, easy scale-outs of clouds are possible
at low cost and in less time.
The human resources needed to manage the upkeep of the system are eliminated.
Since DBaaS is hosted off-site, the organization is free from the hassles of power or network
failure.
Process as a Service:
"Process as a Service" is a cloud computing model that provides a platform allowing
customers to develop, run, and manage business processes without the complexity of building
and maintaining the underlying infrastructure. It is an evolution of traditional business
process outsourcing and aims to provide a more flexible and scalable solution.
Information as a Service:
"Information as a Service" is a concept that refers to providing access to specific
information or data as a service over the internet. In this model, organizations can leverage
external sources to obtain the information they need without having to maintain the data
internally. It is part of the broader trend of providing various IT resources and capabilities
as services in the cloud.
Integration as a Service:
Integration as a Service is a cloud-based service model that provides capabilities for
integrating different systems, applications, and data sources within an organization or
between organizations. It enables seamless connectivity and data exchange, allowing
disparate systems to communicate and share information effectively.
In cloud computing, these services focus on simplifying and accelerating the integration
process by offering pre-built connectors, tools, and infrastructure components that facilitate
data and application integration.
Testing as a Service:
Companies use the outsourcing approach known as “Testing as a Service”, in short “TaaS”, to
test their products prior to deployment. The application is tested to find flaws in simulated
real-world environments. Testing solutions are provided by a third-party service provider
with testing knowledge rather than internal employees of the organization.
Over conventional testing environments, TaaS has been shown to have substantial
advantages. TaaS is a highly scalable approach, which is its main advantage. Small businesses
and corporations don’t have to worry about finding empty space for servers or other
infrastructure because it is a cloud-based delivery strategy.
Types of TaaS

• Cloud Testing as a Service: The TaaS provider checks all cloud services used by the organization.
• Functional Testing as a Service: TaaS functional testing may include UI/GUI testing,
regression, integration, and automated User Acceptance Testing (UAT), though these are not
necessarily part of functional testing.
• Load Testing as a Service: TaaS tests the software against its estimated usage volume.

• Performance Testing as a Service: Multiple users access the application at the same
time. TaaS mimics a real-world user environment by creating virtual users and replaying the load.

• Quality Assurance Testing as a Service: The vendor ensures the product meets the
company’s requirements.

• Security Testing as a Service: TaaS scans applications and websites for any vulnerability,
checking for malware and virus attacks.

• Penetration Testing as a Service: The TaaS vendor tests the company’s security posture
against cyber threats by performing mock attacks.
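A load test of the kind a TaaS vendor runs can be sketched by driving a target operation with many simulated virtual users and collecting latency statistics. The target function here is a stand-in doing local computation, not a real service under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def target_operation(user_id: int) -> float:
    """Stand-in for one request to the system under test; returns its latency."""
    start = time.perf_counter()
    sum(i * i for i in range(10_000))  # simulated work in place of a real request
    return time.perf_counter() - start

def load_test(virtual_users: int) -> dict:
    """Fire one request per virtual user concurrently and summarize latencies."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        latencies = list(pool.map(target_operation, range(virtual_users)))
    return {"requests": len(latencies),
            "max_latency_s": max(latencies),
            "avg_latency_s": sum(latencies) / len(latencies)}

report = load_test(virtual_users=20)
print(report["requests"])  # 20
```

A real TaaS platform scales the same pattern to thousands of virtual users spread across cloud regions, which is exactly why it is delivered from the cloud rather than from in-house hardware.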
Scaling a Cloud Infrastructure - Capacity Planning:
Scaling a cloud infrastructure involves adjusting the capacity of your cloud resources to meet
the changing demands of your applications and services. Capacity planning is a critical aspect
of this process, as it helps you determine the right amount of resources needed to handle
current and future workloads efficiently. Here are key steps and considerations for capacity
planning when scaling a cloud infrastructure:

Understand Workload Characteristics:

Analyze your application's usage patterns, such as peak times and periods of low activity.
Identify resource-intensive tasks and their impact on different components of your
infrastructure.

Monitor and Collect Data:

Use monitoring tools to collect data on various aspects of your infrastructure, including CPU
usage, memory utilization, storage, and network traffic.
Gather historical performance data to identify trends and patterns.

Define Key Metrics:

Establish key performance indicators (KPIs) that align with your application's goals and user
expectations.
Metrics may include response time, throughput, and error rates.

Set Performance Targets:

Define acceptable performance levels for your application.


Consider user experience and service level agreements (SLAs) when setting performance
targets.

Plan for Scalability:

Choose scalable cloud services that can dynamically adjust resources based on demand.
Consider horizontal scaling (adding more instances) and vertical scaling (increasing the size of
existing instances) based on your application's requirements.
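Horizontal scaling reduces to a capacity calculation: given expected demand and per-instance capacity, compute how many instances to run, with spare headroom. The request rates and headroom figure below are illustrative assumptions:

```python
import math

def instances_needed(expected_rps: float, per_instance_rps: float,
                     headroom: float = 0.2) -> int:
    """Instances required to serve expected_rps with 20% spare capacity."""
    required = expected_rps * (1 + headroom) / per_instance_rps
    return max(1, math.ceil(required))  # always keep at least one instance

# e.g. 900 req/s expected, 100 req/s per instance -> ceil(10.8) = 11 instances
print(instances_needed(expected_rps=900, per_instance_rps=100))  # 11
```

Vertical scaling would instead raise per_instance_rps (a bigger instance) and leave the count alone; the same arithmetic shows which option covers the demand.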

Auto Scaling Policies:

Implement auto-scaling policies to automatically adjust the number of resources based on
predefined criteria, such as CPU utilization or network traffic.
Configure scaling policies to add or remove instances as needed.
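A threshold-based policy of the kind described can be sketched as a pure decision function. The thresholds and instance bounds are illustrative, not recommendations:

```python
def autoscale(current_instances: int, cpu_utilization: float,
              scale_out_at: float = 0.75, scale_in_at: float = 0.25,
              min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the new instance count for the observed CPU utilization."""
    if cpu_utilization > scale_out_at and current_instances < max_instances:
        return current_instances + 1      # add an instance under heavy load
    if cpu_utilization < scale_in_at and current_instances > min_instances:
        return current_instances - 1      # remove an instance when idle
    return current_instances              # utilization is in the target band

print(autoscale(3, 0.90))  # 4: scale out
print(autoscale(3, 0.10))  # 2: scale in
print(autoscale(3, 0.50))  # 3: no change
```

Real auto-scalers add cooldown periods and step sizes on top of this core rule so that the fleet does not oscillate between scaling out and in.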

Cost Analysis:

Evaluate the cost implications of scaling your infrastructure.


Consider reserved instances for predictable workloads and spot instances for cost savings
during periods of low demand.

Test and Validate:

Conduct load testing to simulate various levels of user activity and assess the performance of
your scaled infrastructure.
Use testing environments to validate the effectiveness of your scaling strategies.

Capacity Reservations:

Consider reserving capacity for critical resources to ensure availability during peak times.
Leverage reserved instances for cost-effective capacity planning.

Continuous Monitoring and Optimization:

Implement continuous monitoring to track the performance of your infrastructure in real-time.
Regularly review and optimize your capacity planning based on changing usage patterns and
business requirements.

Plan for Failure:

Design your infrastructure to be resilient, considering redundancy and failover mechanisms.


Be prepared for unexpected spikes in demand or sudden resource failures.

Stay Informed About Cloud Services:

Keep abreast of new features and services offered by your cloud provider that may enhance
your capacity planning strategies.
Explore managed services that can simplify certain aspects of your infrastructure
management.

Scaling in Cloud Computing


Cloud scalability in cloud computing refers to increasing or decreasing IT resources as needed
to meet changing demand. Scalability is one of the hallmarks of the cloud and the primary
driver of its explosive popularity with businesses.
Data storage capacity, processing power, and networking can all be increased by using
existing cloud computing infrastructure. Scaling can be done quickly and easily, usually
without any disruption or downtime.
Third-party cloud providers already have the entire infrastructure in place; in the past, when
scaling up with on-premises physical infrastructure, the process could take weeks or months
and require exorbitant expense.
This is one of the most popular and beneficial features of cloud computing: businesses can
scale up or down to meet demand depending on the season, projects, growth, etc.
By implementing cloud scalability, you enable your resources to grow as your traffic or
organization grows, and vice versa. If your business needs more data storage capacity or
processing power, you'll want a system that scales easily and quickly. Cloud computing
solutions can do just that, which is why the market has grown so much. Using existing cloud
infrastructure, third-party cloud vendors can scale with minimal disruption. There are a few
main ways to scale in the cloud.
Types of scaling
Vertical Scalability (Scale-up)
Horizontal Scalability (Scale-out)
Diagonal Scalability
Vertical Scaling
To understand vertical scaling, imagine a 20-story hotel. There are innumerable rooms inside
this hotel from where the guests keep coming and going. Often there are spaces available, as
not all rooms are filled at once. People can move easily as there is space for them. As long as
the capacity of this hotel is not exceeded, no problem. This is vertical scaling.
With computing, you can add or subtract resources, including memory or storage, within the
server, as long as the resources do not exceed the capacity of the machine. Although it has its
limitations, it is a way to improve your server and avoid latency and extra management. Like
in the hotel example, resources can come and go easily and quickly, as long as there is room
for them.
Horizontal Scaling
Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars travel
smoothly in each direction without major traffic problems. But then the area around the
highway develops - new buildings are built, and traffic increases. Very soon, this two-lane
highway is filled with cars, and accidents become common. Two lanes are no longer enough.
To avoid these issues, more lanes are added, and an overpass is constructed. Although it takes
a long time, it solves the problem.
Horizontal scaling refers to adding more servers to your network, rather than simply adding
resources like with vertical scaling. This method tends to take more time and is more complex,
but it allows you to connect servers together, handle traffic efficiently and execute concurrent
workloads.
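The sizing question behind horizontal scaling — how many identical servers are needed for the expected load — reduces to simple arithmetic. The capacity figures and 20% headroom below are illustrative assumptions, not benchmarks.

```python
# Horizontal scaling sizing sketch: instead of making one server
# bigger, estimate how many identical servers the peak load needs.
# Per-server capacity and headroom values are assumptions.
import math

def servers_needed(peak_rps, rps_per_server, headroom=0.2):
    """Instances required at peak traffic, keeping spare headroom."""
    usable_per_server = rps_per_server * (1 - headroom)
    return math.ceil(peak_rps / usable_per_server)

# 5000 req/s peak, 800 req/s per server, 20% headroom:
# 5000 / 640 = 7.81 -> 8 servers.
print(servers_needed(peak_rps=5000, rps_per_server=800))  # -> 8
```

The `ceil` matters: with discrete servers you always round up, which is also why horizontal scaling wastes some capacity at low utilization and auto-scaling is used to reclaim it.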

Diagonal Scaling
Diagonal scaling is a mixture of horizontal and vertical scalability, where resources are added
both vertically and horizontally. It allows the most efficient infrastructure scaling: you grow
within your existing server until you hit its capacity, then clone that server as necessary and
continue the process, allowing you to handle a large number of requests and a lot of traffic
concurrently.
Cloud Platforms in Industry:
Amazon Web Services (AWS) –
AWS provides a wide range of cloud IaaS services, from virtual compute, storage,
and networking to complete computing stacks. AWS is best known for its on-demand compute and
storage services, named Elastic Compute Cloud (EC2) and Simple Storage Service
(S3). EC2 offers customizable virtual hardware to the end user, which can be used as the base
infrastructure for deploying computing systems on the cloud. It is possible to choose from a large
variety of virtual hardware configurations, including GPU and cluster instances. EC2 instances are
deployed either through the AWS console, a comprehensive Web portal for accessing AWS services, or
through the web services API, available for several programming languages. EC2 also offers the
capability of saving a running instance as an image, thus allowing users to create their own templates
for deploying systems. S3 stores these templates and delivers persistent storage on demand. S3 is
organized into buckets, which contain objects that are stored in binary form and can be enriched with
attributes. End users can store objects of any size, from basic files to full disk images, and have them
retrievable from anywhere. In addition to EC2 and S3, a wide range of services can be leveraged to
build virtual computing systems, including networking support, caching systems, DNS, database
support, and others.
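The bucket/object organization described above can be pictured with a toy in-memory model. This is a conceptual sketch only, not the real S3 API (which would be accessed through an SDK such as boto3); the class and method names here are invented for illustration.

```python
# Toy in-memory model of S3's bucket/object organization: buckets
# hold named objects stored as bytes, and each object can carry
# arbitrary metadata attributes. Conceptual sketch, not the S3 API.

class Bucket:
    def __init__(self, name):
        self.name = name
        self._objects = {}  # object key -> (data bytes, metadata dict)

    def put_object(self, key, data, **metadata):
        """Store binary data under a key, with optional attributes."""
        self._objects[key] = (bytes(data), dict(metadata))

    def get_object(self, key):
        """Retrieve the (data, metadata) pair for a key."""
        return self._objects[key]

backups = Bucket("my-backups")
backups.put_object("images/disk1.img", b"\x00" * 16,
                   content_type="application/octet-stream")
data, meta = backups.get_object("images/disk1.img")
print(len(data), meta["content_type"])
```

The flat key namespace per bucket, binary payloads of arbitrary size, and attached attributes are the properties the text highlights; durability, access control, and regional replication are what the real service adds on top.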

Google App Engine –


Google App Engine is a scalable runtime environment mostly dedicated to executing
web applications. These take advantage of Google's large computing infrastructure to
dynamically scale as demand varies. App Engine offers both a secure execution
environment and a collection of services that simplify the development of scalable and high-
performance Web applications. These services include in-memory caching, a scalable data
store, job queues, messaging, and cron tasks. Developers can build and test
applications on their own systems by using the App Engine SDK, which replicates the
production runtime environment and helps test and profile applications. Once development
is complete, developers can easily move their applications to App Engine, set quotas to
contain the costs generated, and make them available to the world. Currently, the supported
programming languages are Python, Java, and Go.
Microsoft Azure –
Microsoft Azure is a cloud operating system and a platform on which users can develop
applications in the cloud. Generally, a scalable runtime environment for web applications
and distributed applications is provided. Applications in Azure are organized around the
concept of roles, which identify a distribution unit for applications and embody the
application's logic. Azure provides a set of additional services that complement application
execution, such as support for storage, networking, caching, content delivery, and others.
Hadoop –
Apache Hadoop is an open-source framework that is appropriate for processing large data
sets on commodity hardware. Hadoop is an implementation of MapReduce, an application
programming model developed by Google. This model provides two fundamental
operations for data processing: map and reduce. Yahoo! is the sponsor of the Apache
Hadoop project and has put considerable effort into transforming the project into an
enterprise-ready cloud computing platform for data processing. Hadoop is an integral part
of the Yahoo! cloud infrastructure and supports many of the corporation's business
processes. Yahoo! has managed the world's largest Hadoop cluster, which is also
available to academic institutions.
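The two fundamental MapReduce operations can be sketched in plain Python: `map` emits (key, value) pairs, the framework groups the pairs by key (the "shuffle"), and `reduce` combines each group. Hadoop runs these same two user-supplied functions at cluster scale; this single-process word count only illustrates the model.

```python
# Word count in the MapReduce style, run in a single process.
# Hadoop distributes exactly these map and reduce steps over a cluster.
from collections import defaultdict

def map_phase(document):
    # Map: emit (word, 1) for every word in the input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(key, values):
    # Reduce: combine all counts emitted for one word.
    return key, sum(values)

documents = ["big data on commodity hardware", "big clusters big jobs"]

# Shuffle: group every emitted value by its key.
groups = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts["big"])  # -> 3
```

Because map calls are independent and reduce only sees one key's values at a time, both phases parallelize naturally, which is what makes the model suitable for commodity-hardware clusters.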

Cloud Hypervisor
The key enabling technology is hypervisor virtualization. In its simplest form, a hypervisor is
specialized firmware or software, or both, installed on a single piece of hardware that allows
you to host multiple virtual machines. This allows physical hardware to be shared across
multiple virtual machines. The computer on which the hypervisor runs one or more virtual
machines is called the host machine.
Virtual machines are called guest machines. The hypervisor allows the physical host machine
to run various guest machines. It helps to get maximum benefit from computing resources
such as memory, network bandwidth and CPU cycles.
Types of Hypervisors in Cloud Computing
There are two main types of hypervisors in cloud computing.
Type I Hypervisor
A Type I hypervisor operates directly on the host's hardware to monitor the hardware and
guest virtual machines, and is referred to as bare metal. Typically, it does not require an
operating system to be installed ahead of time; instead, it is installed directly on the
hardware. This type of hypervisor is powerful and requires a lot of expertise to function well.
In addition, Type I hypervisors are more complex and have specific hardware requirements to
run adequately. Because of this, they are mostly chosen for IT operations and data-center
computing.
Examples of Type I hypervisors include Xen, Oracle VM Server for SPARC, Oracle VM Server for
x86, Microsoft Hyper-V, and VMware ESX/ESXi.
Type II Hypervisor
A Type II hypervisor is also called a hosted hypervisor because it is installed on an existing
operating system. Hosted hypervisors are less capable of running complex virtual workloads;
people use them for basic development, testing, and simulation.
If a security flaw is found inside the host OS, it can potentially compromise all running virtual
machines. This is why Type II hypervisors are not used for data-center computing; they are
designed for end-user systems where security is less of a concern. For example, developers
can use a Type II hypervisor to launch virtual machines to test software products prior to their
release.

Q . Explain the Hardware and software stack of the private cloud.


The hardware and software stack of a private cloud encompasses the infrastructure and
software components that work together to deliver cloud computing services within a
dedicated environment. Here's an overview of the key elements in both the hardware and
software stacks of a private cloud:

Hardware Stack:
Servers:

The foundational hardware components in a private cloud are servers. These servers host
virtual machines (VMs) or containers that run the applications and services.

Storage:

Storage infrastructure includes devices such as hard disk drives (HDDs), solid-state drives
(SSDs), and network-attached storage (NAS). Storage is used for storing virtual machine
images, application data, and other relevant information.

Networking Equipment:

Networking hardware, including routers, switches, and firewalls, is crucial for connecting
servers and storage devices. It enables communication within the private cloud and may
include security measures to protect against unauthorized access.

Hypervisors:

Hypervisors, also known as Virtual Machine Monitors (VMMs), are essential for creating and
managing virtual machines on physical servers. They allow multiple VMs to run on a single
physical server, optimizing resource utilization.

Load Balancers:

Load balancers distribute network traffic evenly across multiple servers to ensure efficient
resource utilization and prevent any single server from becoming a bottleneck. This helps in
improving the performance and availability of applications.
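The even-distribution behaviour of a load balancer can be illustrated with the simplest scheduling strategy, round robin. This is a toy sketch of the idea: real appliances and cloud load balancers also perform health checks, weighting, and session affinity, none of which are modelled here.

```python
# Minimal round-robin balancer: each incoming request is routed to
# the next server in a fixed rotation, spreading load evenly.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._ring = cycle(servers)  # endless rotation over the pool

    def route(self):
        """Return the server that should receive the next request."""
        return next(self._ring)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
order = [lb.route() for _ in range(4)]
print(order)  # -> ['app-1', 'app-2', 'app-3', 'app-1']
```

With identical servers and similar request costs, round robin keeps any single server from becoming a bottleneck, which is exactly the property the hardware stack relies on.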
Backup and Recovery Systems:
Private clouds need robust backup and recovery systems to safeguard data and applications
against loss or corruption. This may involve regular backups, snapshot technologies, and
disaster recovery solutions.

Monitoring and Management Tools:

Hardware monitoring tools are used to track the health and performance of physical
components. These tools provide insights into resource utilization, potential issues, and
overall system health.

Software Stack:
Virtualization Software:

Virtualization software, such as VMware vSphere, Microsoft Hyper-V, or KVM (Kernel-based
Virtual Machine), enables the creation and management of virtual machines. It abstracts
physical hardware, allowing multiple VMs to run on a single physical server.

Orchestration and Automation:

Orchestration tools like OpenStack, Kubernetes, or Microsoft Azure Stack are used to
automate and manage the deployment, scaling, and maintenance of applications and
services. They help streamline complex workflows in the private cloud environment.

Operating Systems:

Virtual machines in the private cloud run on operating systems. These could be various flavors
of Linux (e.g., CentOS, Ubuntu) or Windows Server, depending on the application and
organizational preferences.

Cloud Management Platform:

A cloud management platform provides a unified interface for administrators to manage and
monitor the private cloud environment. Examples include OpenStack Horizon, Microsoft
System Center, and VMware vRealize Suite.

Security Software:

Security software, including firewalls, intrusion detection and prevention systems, antivirus
solutions, and encryption tools, is essential to protect the private cloud infrastructure and
data from security threats.

Networking Software:

Software-defined networking (SDN) solutions, such as Open vSwitch, enable the
programmability and automation of network configurations within the private cloud. This
enhances flexibility and adaptability.

Database Management Systems:


Private clouds often host databases to store and manage application data. Database
management systems (DBMS) like MySQL, PostgreSQL, or Microsoft SQL Server are commonly
used.

Application Middleware:

Middleware components, such as web servers, application servers, and messaging systems,
facilitate communication and integration between different software applications and services
within the private cloud.

Logging and Monitoring Tools:

Logging and monitoring tools (e.g., ELK Stack, Prometheus, Grafana) help administrators track
performance metrics, troubleshoot issues, and maintain the health of the private cloud
infrastructure.

Q. State and explain the different examples of Cloud-computing offerings, which include various
vendors available and their service types.

Cloud computing offerings encompass a variety of services provided by different
vendors. These services can be broadly categorized into three main types:
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a
Service (SaaS). Here are examples of each type, along with some key vendors in the
cloud computing space:

1. Infrastructure as a Service (IaaS):

• IaaS provides virtualized computing resources over the internet. It includes
virtual machines, storage, and networking.

Examples and Vendors:

• Amazon Web Services (AWS): Offers IaaS services through Amazon Elastic
Compute Cloud (EC2) for virtual servers, Amazon Simple Storage Service (S3)
for storage, and Amazon Virtual Private Cloud (VPC) for networking.
• Microsoft Azure: Provides IaaS solutions like Azure Virtual Machines and
Azure Blob Storage.
• Google Cloud Platform (GCP): Offers IaaS components such as Compute
Engine (virtual machines) and Google Cloud Storage.

2. Platform as a Service (PaaS):

• PaaS provides a platform that allows customers to develop, run, and manage
applications without dealing with the complexity of infrastructure.
Examples and Vendors:

• Heroku: A cloud platform that enables developers to build, deploy, and scale
applications easily.
• Google App Engine: A fully managed platform for building and deploying
applications.
• Microsoft Azure App Service: Offers a platform for building, deploying, and
scaling web apps.

3. Software as a Service (SaaS):

• SaaS delivers software applications over the internet, eliminating the need for
users to install, maintain, and update the software locally.

Examples and Vendors:

• Salesforce: Provides a cloud-based customer relationship management (CRM)
platform.
• Microsoft 365 (formerly Office 365): Offers a suite of productivity
applications like Word, Excel, and PowerPoint, delivered as a service.
• Google Workspace (formerly G Suite): Includes cloud-based productivity
tools like Gmail, Google Docs, and Google Drive.

Additional Cloud Service Offerings:

1. Database as a Service (DBaaS):


• Amazon RDS (Relational Database Service): A managed relational
database service by AWS.
• Azure SQL Database: Microsoft's fully managed relational database
service.
2. Functions as a Service (FaaS):
• AWS Lambda: Allows running code without provisioning or managing
servers.
• Azure Functions: Microsoft's serverless computing service.
3. Container as a Service (CaaS):
• Google Kubernetes Engine (GKE): A managed Kubernetes service on
GCP.
• Azure Kubernetes Service (AKS): Microsoft's managed Kubernetes
offering.
4. Desktop as a Service (DaaS):
• Amazon WorkSpaces: Cloud-based desktop service by AWS.
• Windows Virtual Desktop (WVD): Microsoft's virtual desktop
infrastructure service.
5. Security as a Service:
• Cisco Umbrella: A cloud-delivered security service for protecting users
from threats.
• Zscaler: Provides cloud-based security solutions for network and
internet security.
6. Blockchain as a Service (BaaS):
• IBM Blockchain Platform: Offers a fully managed blockchain service.
• Azure Blockchain Service: Microsoft's blockchain offering on Azure.

Q. Explain the virtualization technique of Xen architecture and guest OS management.

Xen is an open-source virtualization platform that provides a hypervisor or Virtual
Machine Monitor (VMM) for running multiple virtual machines (VMs) on a single
physical machine. Xen is known for its performance, scalability, and flexibility. It
supports both paravirtualization and hardware-assisted virtualization, allowing the
virtual machines to run with near-native performance. Here's an overview of the Xen
architecture and guest OS management:

Xen Architecture:

1. Hypervisor (Xen):
• Xen is the core hypervisor that runs directly on the hardware and
manages the virtualization of resources.
• It is responsible for creating and managing virtual machines and
providing them with access to the physical hardware resources.
2. Domain 0 (Dom0):
• Dom0 is a privileged domain that runs a modified Linux kernel and
serves as the management domain for the Xen hypervisor.
• It has direct access to physical hardware and performs administrative
tasks, such as creating and configuring other VMs.
3. Domain U (DomU):
• DomU represents unprivileged domains that run guest operating
systems.
• Multiple DomU instances can run simultaneously, each with its own
guest OS.

Guest OS Management:

1. Paravirtualization:
• Xen initially introduced paravirtualization, which involves modifying the
guest operating system to be aware of the hypervisor.
• Modified or paravirtualized guest OSes, known as DomU in Xen, can
communicate with the Xen hypervisor directly, improving performance
by reducing the need for virtualization overhead.
2. Hardware-Assisted Virtualization (HVM):
• Xen also supports hardware-assisted virtualization, allowing it to run
unmodified guest operating systems.
• For HVM, Xen utilizes hardware virtualization extensions such as Intel
VT-x and AMD-V to improve the performance of virtualization.
3. Virtual Machine Configuration:
• Each VM is configured with a set of parameters, including the amount
of memory, virtual CPUs, disk storage, and network interfaces.
• Configuration files define these parameters, and administrators can
modify them to adjust the resources allocated to each VM.
4. Device Model (QEMU):
• Xen uses a device model, often based on QEMU (Quick Emulator), to
provide emulated devices for VMs.
• QEMU acts as a hardware emulator, allowing VMs to access virtualized
devices even if the underlying physical hardware does not support
direct passthrough.
5. Inter-Domain Communication:
• Xen facilitates communication between VMs and between VMs and
Dom0 using a mechanism known as XenStore. XenStore is a shared
repository for configuration and status information.
6. Xen Management Tools:
• Xen provides command-line tools and graphical interfaces for
managing VMs and the overall virtualized environment.
• Tools like xm (deprecated) and xl are used for VM lifecycle
management, including creation, suspension, migration, and deletion.
7. Live Migration:
• Xen supports live migration, allowing VMs to be moved from one
physical host to another without downtime. This is useful for load
balancing, resource optimization, and maintenance.
8. Resource Isolation:
• Xen provides resource isolation for VMs, ensuring that the performance
of one VM does not significantly impact the performance of others.
• This includes CPU and memory isolation, as well as network and disk
I/O isolation.

Aneka in Cloud Computing


Aneka includes an extensible set of APIs associated with programming models like
MapReduce.

These APIs support different cloud models: private, public, and hybrid clouds.

Manjrasoft focuses on creating innovative software technologies to simplify the
development and deployment of private or public cloud applications. Its product, Aneka,
plays the role of an application platform as a service for cloud computing.

o Aneka structure:
o Aneka is a software platform for developing cloud computing applications.
o In Aneka, cloud applications are executed.
o Aneka is a pure PaaS solution for cloud computing.
o Aneka is a cloud middleware product.
o Aneka can be deployed over a network of computers, a multicore server, a data
center, a virtual cloud infrastructure, or a combination thereof.

Aneka container services can be classified into three major categories:

o Fabric Services
o Foundation Services
o Application Services

1. Fabric Services:


Fabric Services define the lowest level of the software stack that represents the Aneka
container. They provide access to the resource-provisioning subsystem and the monitoring
features implemented in Aneka.

2. Foundation Services:

Foundation Services are among the core services of the Aneka middleware. They are
concerned with the logical management of the distributed system built on top of the
infrastructure and provide ancillary services for delivering applications.

3. Application Services:
Application services manage the execution of applications and constitute a layer that
varies according to the specific programming model used to develop distributed
applications on top of Aneka.

There are two major components in Aneka's technology:

The SDK (Software Development Kit) includes the Application Programming
Interface (API) and the tools needed for the rapid development of applications. The Aneka
API supports three popular cloud programming models: Tasks, Threads, and MapReduce;

And

A runtime engine and platform for managing the deployment and execution of
applications on a private or public cloud.

One of the notable features of the Aneka PaaS is its support for provisioning private cloud
resources, from desktops and clusters to virtual data centers using VMware and Citrix
XenServer, and public cloud resources such as Windows Azure, Amazon EC2,
and the GoGrid cloud service.

Aneka's potential as a Platform as a Service has been successfully harnessed by its
users and customers in several areas, including engineering, life sciences,
education, and business intelligence.

Architecture of Aneka
Aneka is a platform and framework for developing distributed applications on the
Cloud. It uses desktop PCs on-demand and CPU cycles in addition to a heterogeneous
network of servers or datacenters. Aneka provides a rich set of APIs for developers to
transparently exploit such resources and express the business logic of applications
using preferred programming abstractions.

System administrators can leverage a collection of tools to monitor and control the
deployed infrastructure. It can be a public cloud available to anyone via the Internet or
a private cloud formed by nodes with restricted access.

An Aneka-based computing cloud is a collection of physical and virtualized resources
connected via a network, either the Internet or a private intranet. Each resource hosts
an instance of the Aneka container, which represents the runtime environment where
distributed applications are executed. The container provides the basic management
features of a single node and leverages all the other functions from its hosted
services.
Services are divided into fabric, foundation, and execution services. Foundation
services identify the core system of the Aneka middleware, which provides a set of
infrastructure features enabling Aneka containers to perform specialized tasks.
Fabric services interact directly with nodes through the Platform Abstraction Layer
(PAL) and perform hardware profiling and dynamic resource provisioning. Execution
services deal directly with scheduling and executing applications in the Cloud.

One of the key features of Aneka is its ability to provide a variety of ways to express
distributed applications by offering different programming models; execution services
are mostly concerned with providing the middleware with the implementation of these
models. Additional services, such as persistence and security, are transversal to the whole
stack of services hosted by the container.

At the application level, a set of different components and tools are provided to


o Simplify the development of applications (SDKs),


o Port existing applications to the Cloud, and
o Monitor and manage Aneka clouds.

An Aneka-based cloud is formed by interconnected resources that are dynamically
adjusted according to user needs, using resource virtualization or additional CPU cycles
from desktop machines. If the deployment identifies a private cloud, all resources are
in-house, for example, within the enterprise.

This deployment is enhanced by connecting publicly available on-demand resources


or by interacting with several other public clouds that provide computing resources
connected over the Internet.
