Cloud Computing
The term cloud refers to a network or the Internet; in other words, the cloud is something present at a remote location. Cloud services can be delivered over public or private networks, i.e., WAN, LAN, or VPN.
Basic Concepts
Certain services and models work behind the scenes to make cloud computing feasible and accessible to end users. The working models for cloud computing are:
Deployment Models
Service Models
1.1. Essential characteristics
The following figure shows that cloud computing is composed of five essential characteristics, three service models, and four deployment models:
Let’s look a bit closer at each of the characteristics, service models, and deployment
models in the next sections.
Five essential characteristics of cloud computing
The essential characteristics can be elaborated as follows:
• On-demand self-service: Users can provision cloud computing resources without requiring human interaction, mostly through a web-based self-service portal (management console).
• Broad network access: Cloud computing resources are accessible over the network, supporting heterogeneous client platforms such as mobile devices and workstations.
• Resource pooling: Multiple customers are served from the same physical resources, which are securely separated at a logical level.
• Rapid elasticity: Resources are provisioned and released on demand and/or automatically, based on triggers or parameters. This ensures that your application has exactly the capacity it needs at any point in time.
• Measured service: Resource usage is monitored, measured, and reported (billed) transparently based on utilization. In short: pay for what you use.
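The "measured service" characteristic can be sketched in a few lines of code. The resource names and unit rates below are illustrative assumptions, not any real provider's price list:

```python
# Hypothetical pay-per-use billing sketch. The resource names and unit
# rates are illustrative, not any real provider's price list.
RATES = {"vm_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

def monthly_bill(usage):
    """Sum metered usage multiplied by its unit rate, then round to cents."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

usage = {"vm_hours": 720, "storage_gb_month": 100, "egress_gb": 50}
print(monthly_bill(usage))  # 36.0 + 2.0 + 4.5 = 42.5
```

The key point is that the bill tracks metered consumption, rather than a fixed fee for owned capacity.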
2. Evolution of Cloud Computing:
Cloud computing is all about renting computing services. The idea first emerged in the 1950s. Five technologies played a vital role in making cloud computing what it is today: distributed systems and their peripherals, virtualization, Web 2.0, service orientation, and utility computing.
Distributed Systems:
A distributed system is a composition of multiple independent systems, all of which appear to users as a single entity. The purpose of distributed systems is to share resources and to use them effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independence of failures. The main problem with these systems was that all of them had to be present at the same geographical location. To solve this problem, distributed computing led to three further types of computing: mainframe computing, cluster computing, and grid computing.
Mainframe computing:
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are responsible for handling large-scale workloads with massive input/output operations. Even today they are used for bulk-processing tasks such as online transaction processing. These systems have almost no downtime and high fault tolerance. After distributed computing, they increased the processing capability of systems, but they were very expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.
Cluster computing:
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in the cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems while being equally capable of heavy computation, and new nodes could easily be added to the cluster when required. Thus the problem of cost was solved to some extent, but the problem of geographical restrictions persisted. To solve it, the concept of grid computing was introduced.
Grid computing:
In the 1990s, the concept of grid computing was introduced: different systems, placed at entirely different geographical locations, were connected via the Internet. These systems belonged to different organizations, so the grid consisted of heterogeneous (different) nodes. Although grid computing solved some problems, new ones emerged as the distance between nodes increased; the main problem encountered was the low availability of high-bandwidth connectivity, along with other network-related issues. Cloud computing is therefore often referred to as the "successor of grid computing".
Web 2.0:
Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages; it also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter. Needless to say, social media is possible largely because of this technology. It gained major popularity in 2004.
Service orientation:
Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable (updatable) applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which includes the SLA (Service Level Agreement), and Software as a Service (SaaS).
Utility computing:
Utility computing is a computing model that defines service-provisioning techniques for compute services along with other major services, such as storage and infrastructure, which are provisioned on a pay-per-use basis.
3. Business Cases
This section presents the business case for cloud computing, to empower more organizations to see the financial advantages that augment the technological benefits of migrating to the cloud. It delves further into the different types of cloud and the associated costs.
It’s important to understand that within the three types of cloud, there are advantages and
disadvantages to each depending on their application, the type of business outcome
sought, the level of security and privacy needed and overall control. There are also
deployments of combinations of all or some of the above, but we’ll discuss this in another
series as it’s a huge topic.
In building a business case, let's look at each of three cases: the infrastructure business case, the applications business case, and the talent (staffing) business case.
Infrastructure Case
When is this the correct approach? Usually, there is some compelling event to initiate a
move to the cloud, such as a compute upgrade due to increased demands from users or
applications, end-of-life of data center facility assets or a facility move where everything
needs to be built again. The initial and most significant savings are usually found when
abandoning infrastructure in favor of the cloud, with infrastructure savings being the most
significant part of the business case in terms of cost savings. The reason is that in-house IT is typically under-utilized: when infrastructure purchases are being considered, not all applications that will be deployed on it are known, so a margin is added for this uncertain capacity requirement. Additionally, over-deployment results from companies configuring infrastructure for peak loads.
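The under-utilization argument above can be made concrete with a back-of-envelope sketch. All of the numbers here are illustrative assumptions, not benchmarks:

```python
# Illustrative only: a fleet sized for peak load vs. paying for average use.
peak_load_servers = 100       # in-house fleet sized for peak demand
avg_utilization = 0.30        # typical average utilization of that fleet
cost_per_server_month = 300   # assumed equal unit cost on-prem and in cloud

on_prem_cost = peak_load_servers * cost_per_server_month
# Elastic cloud capacity: pay only for average demand, not for peak headroom
cloud_cost = round(peak_load_servers * avg_utilization * cost_per_server_month)

print(on_prem_cost, cloud_cost)  # 30000 9000
```

Under these assumptions the cloud bill is 30% of the on-premises bill; a real business case would also count staff, facilities, and migration costs.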
Applications case
What are the compelling events that drive a move of applications to the cloud? With the
three main types of cloud there are a number of options of what to do with applications,
but this requires a look at what the main drivers are. These include a major forklift upgrade of an in-house custom application, a shift to a new application requiring higher availability and performance, and a scarcity of maintenance resources such as talent and quality control.
Staffing case
In most businesses, regardless of industry, people are among the highest costs, and IT staffing is no different. The cloud presents an opportunity here: it liberates staff dedicated to maintenance, to keeping infrastructure running, and to customizing and editing application code. While all of these roles and responsibilities are important, the people in them can be better deployed to create unique differentiators in applications that set the business apart from the competition and provide more strategic value for customers.
Ultimately this is an opportunity cost and needs to be included in the analysis. The other costs are more easily counted: permanent and temporary staff, their offices and facilities, and the total burden of benefits, recruitment, retention, risk, training and development, and retirement. Add to this the cost of technology and other local jurisdictional costs associated with labour and employment, such as termination costs, professional advice, holidays, and competitive considerations driving perks to continually attract and retain top talent. All of those costs should be identified in the business case.
4. Cloud-Based Services / Service Types:
Cloud Computing can be defined as the practice of using a network of remote servers
hosted on the Internet to store, manage, and process data, rather than a local server or a
personal computer. Companies offering such kinds of cloud computing services are
called cloud providers and typically charge for cloud computing services based on usage.
Types of Cloud Computing:
Most cloud computing services fall into one of four broad categories:
1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
4. Anything as a service (XaaS)
The first three are sometimes called the cloud computing stack because they are built on top of one another. Knowing what they are and how they differ makes it easier to accomplish your goals.
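One common way to picture this stack is by who manages each layer. The layer names below are a simplification for illustration; real responsibility matrices vary by provider:

```python
# Illustrative split of the stack by who manages each layer.
LAYERS = ["networking", "storage", "servers", "virtualization",
          "operating_system", "runtime", "application"]

PROVIDER_MANAGED = {
    "IaaS": LAYERS[:4],   # provider runs hardware and the hypervisor
    "PaaS": LAYERS[:6],   # provider also runs the OS and runtime
    "SaaS": LAYERS[:7],   # provider runs everything, up to the app itself
}

def customer_managed(model):
    """Layers left for the customer under a given service model."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGED[model]]

print(customer_managed("PaaS"))  # ['application']
```

Moving down the list from SaaS to IaaS, the customer takes on more of the stack in exchange for more control.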
Software as a Service (SaaS):
SaaS delivers complete applications over the internet; users access them through a web browser while the provider hosts and manages the entire underlying stack.
Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web browser without needing to download and install any software. This reduces the time spent on installation and configuration and can reduce the issues that get in the way of software deployment.
3. Accessibility: App data can be accessed from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a SaaS
provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
Companies providing Software as a Service include Cloud9 Analytics, Salesforce.com, CloudSwitch, Microsoft Office 365, Eloqua, Dropbox, and CloudTran.
Platform as a Service:
PaaS is a category of cloud computing that provides a platform and environment to allow
developers to build applications and services over the internet. PaaS services are hosted in
the cloud and accessed by users simply via their web browser.
A PaaS cloud provides a full software stack. Each customer can write or load applications into this environment, with the provider responsible for expanding or contracting all elements to adapt to the changing requirements of the users. Where a high degree of availability or service is required, this provides an impressive advantage for businesses looking to react and scale rapidly.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result,
PaaS frees users from having to install in-house hardware and software to develop or run a
new application. Thus, the development and deployment of the application take place independently of the hardware.
The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage, but has control over the deployed
applications and possibly configuration settings for the application-hosting environment.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other IT
services, which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus eliminating
the expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus, the
overall development of the application can be more effective.
Companies providing Platform as a Service include Amazon Web Services, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.
Infrastructure as a Service
IaaS provides virtualized computing resources over the internet: servers, storage, and networking. The provider manages the physical infrastructure, while the consumer installs and manages operating systems, middleware, and applications on top of it. Examples include Amazon EC2, Google Compute Engine, and Microsoft Azure virtual machines.
Anything as a Service
Most cloud service providers nowadays offer anything as a service (XaaS), which is a compilation of all of the above services plus some additional ones.
Advantages of XaaS: Because it is a combined service, it has all the advantages of every type of cloud service.
5. Cloud Deployment Models
The cloud deployment model identifies the specific type of cloud environment based on ownership, scale, and access, as well as the cloud's nature and purpose. The location of the servers you're utilizing and who controls them are defined by the deployment model. It specifies what your cloud infrastructure will look like, what you can change, and whether you will be given services or will have to create everything yourself. Relationships between the infrastructure and your users are also defined by the cloud deployment type.
Different types of cloud computing deployment models are:
1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud
5. Multi-cloud
Let us discuss them one by one:
1. Public Cloud
The public cloud makes it possible for anybody to access systems and services, and it may be less secure because it is open to everyone. In the public cloud, cloud infrastructure services are provided over the internet to the general public or to major industry groups. The infrastructure in this model is owned by the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting that allows customers and users to easily access systems and services; service providers supply services to a variety of customers. In this arrangement, storage, backup, and retrieval services are given for free, as a subscription, or on a per-use basis. Examples: Google App Engine, Amazon Web Services, etc.
Advantages of the public cloud model:
Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee,
making it excellent for enterprises that require immediate access to resources.
No setup cost: The entire infrastructure is provided and maintained by the cloud service provider, so there is no need to set up any hardware.
Infrastructure Management is not required: Using the public cloud does not necessitate
infrastructure management.
No maintenance: The maintenance work is done by the service provider (Not users).
Dynamic Scalability: To fulfill your company’s needs, on-demand resources are accessible.
2. Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment model. It is a one-to-one environment for a single customer, with no need to share hardware with anyone else. The distinction between private and public cloud lies in how the hardware is handled. The private cloud is also called the "internal cloud"; it refers to the ability to access systems and services within a given boundary or organization. The cloud platform is implemented in a secure environment protected by powerful firewalls, under the supervision of an organization's IT department. The private cloud gives greater flexibility and control over cloud resources.
Advantages of the private cloud model:
Better Control: You are the sole owner of the infrastructure. You gain complete command over service integration, IT operations, policies, and user behavior.
Data Security and Privacy: It’s suitable for storing corporate information to which only
authorized staff have access. By segmenting resources within the same infrastructure, improved
access and security can be achieved.
Supports Legacy Systems: This approach is designed to work with legacy systems that are
unable to access the public cloud.
Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor
its solution to meet its specific needs.
3. Hybrid cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a safe
environment while taking advantage of the public cloud’s cost savings. Organizations can move
data and applications between different clouds using a combination of two or more cloud
deployment methods, depending on their needs.
Advantages of the hybrid cloud model:
Flexibility and control: Businesses gain more flexibility to design personalized solutions that meet their particular needs.
Cost: Because public clouds provide for scalability, you’ll only be responsible for paying for
the extra capacity if you require it.
Security: Because data is properly separated, the chances of data theft by attackers are
considerably reduced.
4. Community cloud
The community cloud allows systems and services to be accessible to a group of organizations. It is a distributed system created by integrating the services of different clouds to address the specific needs of a community, industry, or business. The infrastructure of the community cloud is shared between organizations that have common concerns or tasks. It is generally managed by a third party or by one or more of the organizations in the community.
Advantages of the community cloud model:
Cost Effective: It is cost-effective because the cloud is shared by multiple organizations or
communities.
Security: The community cloud provides better security than the public cloud, since access is restricted to community members.
Shared resources: It allows you to share resources, infrastructure, etc. with multiple
organizations.
Collaboration and data sharing: It is suitable for both collaboration and data sharing.
5. Multi-cloud
Under this paradigm, as the name implies, an organization employs multiple cloud providers at the same time. It is similar to the hybrid cloud deployment approach, which combines public and private cloud resources; instead of merging private and public clouds, however, multi-cloud uses many public clouds. Although public cloud providers offer numerous tools to improve the reliability of their services, mishaps still occur, and it is quite rare that two distinct clouds would have an incident at the same moment. As a result, multi-cloud deployment further improves the high availability of your services.
Advantages of a multi-cloud model:
Flexibility: You can mix and match the best features of each cloud provider's services to suit the demands of your apps, workloads, and business by choosing different cloud providers.
Reduced Latency: To reduce latency and improve user experience, you can choose cloud
regions and zones that are close to your clients.
High availability of service: It’s quite rare that two distinct clouds would have an incident at
the same moment. So, the multi-cloud deployment improves the high availability of your
services.
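The availability claim can be quantified under the stated assumption that the clouds fail independently:

```python
# Sketch: combined availability of a service that is up if ANY cloud is up.
# Assumes independent failures, as the text's premise suggests.
def combined_availability(availabilities):
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1.0 - a)  # probability every cloud is down at once
    return 1.0 - p_all_down

# Two clouds at 99.9% each -> roughly 99.9999% combined
print(round(combined_availability([0.999, 0.999]), 6))  # 0.999999
```

In practice failures are not perfectly independent (shared DNS, shared regions, correlated demand spikes), so real gains are smaller than this idealized figure.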
SUMMARY OF DEPLOYMENT MODELS
Deployment models define the type of access to the cloud, i.e., how the cloud is located and who may access it. A cloud can have any of four types of access: public, private, hybrid, and community.
Public Cloud
The public cloud allows systems and services to be easily accessible to the general public.
Public cloud may be less secure because of its openness.
Private Cloud
The private cloud allows systems and services to be accessible within an organization. It is
more secured because of its private nature.
Community Cloud
The community cloud allows systems and services to be accessible to a group of organizations that share common concerns.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public
cloud.
6. Virtual Machines and Bare-Metal Servers:
A Virtual Machine (VM) is a compute resource that uses software instead of a physical
computer to run programs and deploy apps. One or more virtual “guest” machines run
on a physical “host” machine. Each virtual machine runs its own operating
system and functions separately from the other VMs, even when they are
all running on the same host. This means that, for example, a macOS virtual machine can run on a physical PC.
Virtual machine technology is used for many use cases across on-premises and cloud environments. More recently, public cloud services have used virtual machines to provide virtual application resources to multiple users at once, for even more cost-efficient and flexible compute.
The two types of virtual machines
Users can choose from two different types of virtual machines, process VMs and system VMs. A process VM runs a single process or application in a platform-independent environment (the Java Virtual Machine is a common example), while a system VM fully virtualizes a machine and runs a complete operating system.
The hypervisor, also known as a virtual machine monitor (VMM), manages these
VMs as they run alongside each other. It separates VMs from each other logically, assigning
each its own slice of the underlying computing power, memory, and storage. This prevents
the VMs from interfering with each other; so if, for example, one OS suffers a crash or a
security compromise, the others survive.
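A toy sketch of the hypervisor's bookkeeping role described above; the class and numbers are invented for illustration, not a real hypervisor API:

```python
# Toy model of the hypervisor's bookkeeping: each VM gets a fixed slice of
# host CPU and memory, and a request that would oversubscribe is refused.
class Hypervisor:
    def __init__(self, cpus, mem_gb):
        self.free_cpus, self.free_mem = cpus, mem_gb
        self.vms = {}

    def start_vm(self, name, cpus, mem_gb):
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            return False  # isolation: no VM can take what others hold
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = (cpus, mem_gb)
        return True

host = Hypervisor(cpus=16, mem_gb=64)
print(host.start_vm("web", 4, 8))    # True
print(host.start_vm("db", 8, 32))    # True
print(host.start_vm("huge", 8, 32))  # False: only 4 CPUs remain free
```

Real hypervisors also time-share and overcommit resources, but the core idea of allocating each VM its own bounded slice is the same.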
Advantages of virtual machines
Agility and speed: Spinning up a VM is relatively easy and quick, and much simpler than provisioning an entire new environment for your developers. Virtualization makes the process of running dev-test scenarios a lot quicker.
Lowered downtime: VMs are portable and easy to move from one hypervisor to another on a different machine, which makes them a great solution for backup in the event the host goes down unexpectedly.
Scalability: VMs allow you to scale your apps more easily by adding more physical or virtual servers to distribute the workload across multiple VMs. As a result you can increase the availability and performance of your apps.
Security benefits: Because each virtual machine runs its own operating system, running apps of questionable security inside a guest OS protects your host operating system. VMs also allow for better security forensics and are often used to safely study computer viruses, isolating the viruses to avoid risking the host computer.
Disadvantages of virtual machines
While virtual machines have several advantages over physical machines, there are also some potential disadvantages: running multiple full operating systems on one host consumes significant CPU, memory, and storage; performance is less consistent than on dedicated hardware, because VMs share the underlying physical resources; and a VM is slower to start and heavier to move than a container.
Bare-metal server:
A bare-metal server is a physical computer server used by a single consumer, or tenant, only. Each server offered for rental is a distinct physical piece of hardware that is a functional server on its own; bare-metal servers are not virtual servers running on shared hardware.
A bare-metal server, also called a dedicated server, is a form of cloud service in which the user rents a physical machine from a provider; the machine is not shared with any other tenants.
Because users get complete control over the physical machine with a bare metal (or
dedicated) server, they have the flexibility to choose their own operating system. A bare
metal server helps avoid the noisy neighbor challenges of shared infrastructure and allows
users to finely tune hardware and software for specific data-intensive workloads.
Taken together, bare metal servers have an important role in the infrastructure mix for
many companies due to their unique combination of performance and control.
What's the difference between a bare metal server and a virtual server?
Today, available compute options for cloud services go beyond just bare metal servers and
cloud servers. Containers are becoming a default infrastructure choice for many cloud-
native applications. Platform-as-a-service (PaaS) offerings have an important niche of the
applications market for developers who don't want to manage an OS or runtime
environment. And serverless computing is emerging as the model of choice for cloud
purists.
But, when evaluating bare metal servers, users still gravitate toward the comparison to
virtual servers. For most companies, the criteria for choice are application specific or
workload specific. It's extremely common for a company to use a mix of bare metal servers
along with virtualized resources across their cloud environment.
Virtual servers are the more common model of cloud computing because they offer
greater resource density, faster provisioning times, and the ability to scale up and down
quickly as needs dictate. But bare metal servers are the right fit for a few primary use cases
that take advantage of the combination of attributes. These attributes are dedicated
resources, greater processing power, and more consistent disk and network I/O
performance.
7. What is container technology?
Container technology gets its name from the shipping industry. Rather than coming up with a unique way to ship each product, goods are placed into standard steel shipping containers, which are designed to be picked up by the crane on the dock and to fit into the ship built to accommodate the container's standard size. In short, by standardizing the process and keeping the items together, the container can be moved as a unit, and it costs less to do it this way.
Computer container technology is analogous. Have you ever had a program run perfectly on one machine, but turn into a clunky mess when it is moved to the next? This can occur when migrating software from a developer's PC to a test server, or from a physical server in a company data center to a cloud server. Issues arise when moving software because of differences between machine environments, such as the installed OS, SSL libraries, storage, security, and network topology.
Containers are a form of operating system virtualization. A single container might be used
to run anything from a small microservice or software process to a larger application. Inside
a container are all the necessary executables, binary code, libraries, and configuration files.
Compared to server or machine virtualization approaches, however, containers do not
contain operating system images. This makes them more lightweight and portable, with
significantly less overhead. In larger application deployments, multiple containers may be
deployed as one or more container clusters.
Benefits of containers
Containers are a streamlined way to build, test, deploy, and redeploy applications on
multiple environments from a developer’s local laptop to an on-premises data center and
even the cloud. Benefits of containers include:
Less overhead
Containers require fewer system resources than traditional or hardware virtual machine environments because they do not include operating system images.
Increased portability
Applications running in containers can be deployed easily to multiple different
operating systems and hardware platforms.
More consistent operation
DevOps teams know applications in containers will run the same, regardless of where
they are deployed.
Greater efficiency
Containers allow applications to be more rapidly deployed, patched, or scaled.
Better application development
Containers support agile and DevOps efforts to accelerate development, test, and
production cycles.
How containers work
Containers hold the components necessary to run desired software. These components
include files, environment variables, dependencies and libraries. The host OS constrains
the container's access to physical resources, such as CPU, storage and memory, so a single
container cannot consume all of a host's physical resources.
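The resource constraint described above can be sketched as follows. This toy class only mimics the idea of a per-container limit; real hosts enforce it with kernel mechanisms such as Linux cgroups:

```python
# Toy model of a per-container memory limit. Real hosts enforce this with
# kernel mechanisms (e.g., Linux cgroups); this class only mimics the idea.
class Container:
    def __init__(self, name, mem_limit_mb):
        self.name, self.limit_mb, self.used_mb = name, mem_limit_mb, 0

    def allocate(self, mb):
        if self.used_mb + mb > self.limit_mb:
            return False  # the host constrains the container's share
        self.used_mb += mb
        return True

api = Container("api", mem_limit_mb=256)
print(api.allocate(200))  # True
print(api.allocate(100))  # False: 300 MB would exceed the 256 MB limit
```

Because each container is capped this way, one runaway process cannot starve the other containers on the same host.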
What is containerization?
Containerization is the packaging of software code with just the operating system (OS)
libraries and dependencies required to run the code to create a single lightweight
executable—called a container—that runs consistently on any infrastructure.
Containerization allows developers to create and deploy applications faster and more securely. With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors: for example, when a developer transfers code from a desktop computer to a virtual machine (VM), or from a Linux to a Windows operating system. Containerization eliminates this problem by
bundling the application code together with the related configuration files, libraries, and
dependencies required for it to run. This single package of software or “container” is
abstracted away from the host operating system, and hence, it stands alone and becomes
portable—able to run across any platform or cloud, free of issues.
Instead of virtualizing the underlying hardware, containers virtualize the operating system
(typically Linux) so each individual container contains only the application and its libraries
and dependencies. The absence of the guest OS is why containers are so lightweight and,
thus, fast and portable.
8. CDN (content delivery network):
A CDN (content delivery network), also called a content distribution network, is a group of
geographically distributed and interconnected servers. They provide cached internet
content from a network location closest to a user to speed up its delivery.
The primary goal of a CDN is to improve web performance by reducing the time
needed to send content and rich media to users.
A CDN allows for the quick transfer of the assets needed for loading Internet content, including HTML pages, JavaScript files, stylesheets, images, and videos. The popularity of CDN services continues to grow, and today the majority of web traffic is served through CDNs, including traffic from major sites like Facebook, Netflix, and Amazon.
A properly configured CDN may also help protect websites against some common
malicious attacks, such as Distributed Denial of Service (DDOS) attacks.
CDN providers place servers at Internet exchange points (IXPs), the primary locations where different Internet providers connect in order to provide each other access to traffic originating on their different networks. By having a connection to these high-speed and highly interconnected locations, a CDN provider is able to reduce costs and transit times for high-speed data delivery.
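Two core CDN ideas, directing users to a nearby edge and serving repeat requests from cache, can be sketched as follows. The city names and latencies are made-up illustrative data:

```python
# Two CDN ideas in miniature: pick the nearest edge, then serve repeat
# requests from its cache. Cities and latencies are made-up data.
EDGE_LATENCY_MS = {"frankfurt": 12, "virginia": 95, "singapore": 180}

def nearest_edge(latencies):
    """Choose the edge location with the lowest measured latency."""
    return min(latencies, key=latencies.get)

cache = {}

def fetch(url, origin_fetch):
    """Serve from the edge cache; go to the origin only on a miss."""
    if url not in cache:
        cache[url] = origin_fetch(url)  # first request pays the origin trip
    return cache[url]                   # later requests are served locally

print(nearest_edge(EDGE_LATENCY_MS))  # frankfurt
```

Real CDNs select edges with DNS or anycast routing and use cache-expiry headers, but the payoff is the same: most requests never travel to the origin.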
Efficiency. CDNs improve webpage loading times and reduce bounce rates. Both advantages keep users from abandoning a slow-loading site or e-commerce application.
Security. CDNs enhance security with services such as DDoS mitigation and web application firewalls.
Availability. CDNs can handle more traffic and avoid network failures better than the origin server, increasing content availability.
Optimization. These networks provide a diverse mix of performance and web content optimization services that complement cached site content.
Resource and cost savings. CDNs reduce bandwidth consumption and costs.
What are examples of CDN platforms?
There are many CDNs available with a variety of features. Some CDN providers, like Cloudflare and Verizon, market their platforms as CDNs with added services, like DDoS protection or web application firewalls (WAFs). Other providers, such as ArvanCloud, offer CDN services as one of several broader cloud services, such as cloud security and managed domain name system (DNS).
9. What Is Cloud Storage? Definition:
Cloud storage allows users to store digital data files on virtual servers.
Cloud storage is defined as a data deposit model in which digital information such as documents, photos, videos, and other forms of media is stored on virtual or cloud servers hosted by third parties. It allows you to transfer data to an offsite storage system and access it whenever needed. This section delves into the basics of cloud storage.
What Is Cloud Storage?
Cloud storage is a cloud computing model that allows users to save important data or
media files on remote, third-party servers. Users can access these servers at any time over
the internet. Also known as utility storage, cloud storage is maintained and operated by a
cloud-based service provider.
From greater accessibility to data backup, cloud storage offers a host of benefits, the
most notable being large storage capacity and minimal costs. Cloud storage is delivered
on demand and eliminates the need to purchase and manage your own data storage
infrastructure. With "anytime, anywhere" data access, it gives you agility, global scale,
and durability.
How Cloud Storage Works:
Cloud storage works as a virtual data center. It offers end users and applications virtual
storage infrastructure that can be scaled to the application’s requirements. It generally
operates via a web-based API implemented remotely through its interaction with in-house
cloud storage infrastructure.
Cloud storage includes at least one data server to which a user can connect via the
internet. The user sends files to the data server, which then copies the data to multiple
servers, either manually or in an automated manner. The stored data can then be
accessed via a web-based interface. To ensure the constant availability of data, cloud
storage systems involve large numbers of data servers. Therefore, if a server requires
maintenance or fails, the user can be assured that the data has been replicated elsewhere
to remain available.
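The redundancy described above can be sketched as a toy model: every stored object is copied to several data servers, so the failure of any one server does not make the data unavailable. The class and method names here are invented for illustration; real providers use far more sophisticated replication schemes.

```python
# Toy model of redundant cloud storage: every object is replicated to
# several data servers, so one server failing does not lose data.
# (Illustrative sketch only, not any real provider's API.)

class ToyCloudStorage:
    def __init__(self, num_servers=3):
        # each "data server" is just a dict mapping object name -> bytes
        self.servers = [{} for _ in range(num_servers)]

    def put(self, name, data):
        # replicate the object to every server
        for server in self.servers:
            server[name] = data

    def fail_server(self, index):
        # simulate a server going down for maintenance or failure
        self.servers[index].clear()

    def get(self, name):
        # any surviving replica can serve the read
        for server in self.servers:
            if name in server:
                return server[name]
        raise KeyError(name)

storage = ToyCloudStorage(num_servers=3)
storage.put("photo.jpg", b"image bytes")
storage.fail_server(0)                           # one server fails...
print(storage.get("photo.jpg") == b"image bytes")  # ...data is still available -> True
```

In a real system the replicas live in separate failure domains (racks, data centers, regions), but the availability argument is the same as in this sketch.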
Private cloud storage is also known as enterprise or internal cloud storage. In this case,
data is stored on the company or organization's intranet and protected by the
company's own firewall. Private cloud storage is a good option for companies that run
their own data centers and want to manage data privacy in-house. A major advantage of
saving data on a private cloud is that it offers complete control to the user. On the other
hand, a major drawback of private cloud storage is the cost and effort of maintenance
and updates, since the responsibility for managing the storage lies with the host
company.
What Is a Web Service?
A web service is a standardized method for propagating messages between client and
server applications on the World Wide Web. A web service is a software module that aims
to accomplish a specific set of tasks. In cloud computing, web services can be found and
invoked over a network.
The web service would be able to provide the functionality to the client that invoked the
web service.
A web service is a set of open protocols and standards that allow data exchange between
different applications or systems. Web services can be used by software programs written
in different programming languages and running on different platforms to exchange data
over computer networks such as the Internet. In the same way, web services can support
inter-process communication on a single computer.
Any software, application, or cloud technology that uses standardized web protocols
(HTTP or HTTPS) to connect, interoperate, and exchange data messages over the Internet,
usually in XML (Extensible Markup Language), is considered a web service.
Web services allow programs developed in different languages to communicate as client
and server by exchanging data over the network. A client invokes a web service by
submitting an XML request, to which the service responds with an XML response.
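As a sketch of this request/response exchange, the following uses Python's standard `xml.etree.ElementTree` to build an XML request and parse a canned XML response. The element names (`GetPriceRequest`, `Price`, and so on) are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical price-lookup exchange: the client sends an XML request
# naming a product, and the service replies with an XML response.
request = ET.Element("GetPriceRequest")
ET.SubElement(request, "ProductName").text = "Widget"
request_xml = ET.tostring(request, encoding="unicode")
print(request_xml)
# <GetPriceRequest><ProductName>Widget</ProductName></GetPriceRequest>

# A response the service might send back:
response_xml = "<GetPriceResponse><Price>19.99</Price></GetPriceResponse>"
price = ET.fromstring(response_xml).findtext("Price")
print(price)   # 19.99
```

Because both sides only agree on the XML vocabulary, the client and the service can be written in entirely different languages.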
SOAP constrains only the structure of the XML document, not its content. The great thing
about web services and SOAP is that everything is sent over HTTP, the standard web
protocol.
Every SOAP document requires a root element known as the Envelope element. In an XML
document, the root element is the first element.
The envelope is divided into two parts. The header comes first, followed by the body.
The header contains routing data, that is, information directing the XML document to the
client it should be sent to. The actual message is carried in the body.
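A minimal SOAP 1.1 envelope with this header/body layout can be built with Python's standard library. The envelope namespace is the standard SOAP 1.1 one, but the payload element is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Minimal SOAP 1.1 envelope: the root <Envelope> holds an optional
# <Header> (routing/metadata) followed by the <Body> (the actual message).
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# The payload element is hypothetical; real payloads follow the
# vocabulary published by the service being called.
ET.SubElement(body, "GetPriceRequest").text = "Widget"

print(ET.tostring(envelope, encoding="unicode"))
```

Serializing this produces a `<soap:Envelope>` document with the header first and the body second, exactly the layout described above.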
These requests are made using remote procedure calls. Calls to the methods
hosted by a web service are known as Remote Procedure Calls (RPCs).
Example: Flipkart provides a web service that displays the prices of items offered on
Flipkart.com. The front end or presentation layer can be written in .NET or Java, but the
web service can be invoked from either programming language.
The most important part of web service design is XML, the data exchanged between the
client and the server. XML (Extensible Markup Language) is a simple, intermediate language
understood by various programming languages. Like HTML, it is a markup language, but it
describes data rather than presentation.
As a result, when programs communicate with each other, they use XML. It forms a
common platform for applications written in different programming languages to
communicate with each other.
Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between
applications. The data is sent using standard HTTP. A SOAP message is data sent from a
web service to an application. An XML document is all that is contained in a SOAP message.
The client application that calls the web service can be built in any programming language
as the content is written in XML.
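Since a SOAP message is just XML carried in the body of a standard HTTP POST, the call can be sketched with Python's `urllib`. The endpoint URL and payload below are hypothetical, and the request is constructed but not actually sent:

```python
import urllib.request

# A SOAP message is XML sent over plain HTTP. The envelope below reuses
# the hypothetical GetPriceRequest payload from earlier.
soap_message = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetPriceRequest>Widget</GetPriceRequest></soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://example.com/priceService",        # hypothetical endpoint
    data=soap_message.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    method="POST",
)
print(req.get_method(), req.full_url)
# Calling urllib.request.urlopen(req) would transmit the message and
# return the service's XML response.
```

Because nothing beyond HTTP and XML is required, any platform with an HTTP client and an XML parser can act as a SOAP client.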
(a) XML-based: A web service's information representation and data transport layers
employ XML. There is no need for networking, operating system, or platform bindings
when using XML. At the middle level, web service-based applications are highly interactive.
(b) Loosely coupled: A consumer of a web service is not tied directly to that service. The
web service interface may change over time without affecting the client's ability to
interact with the service. A tightly coupled system means that the client and server logic
are inextricably linked, so if one interface changes, the other must be updated. A loosely
coupled architecture makes software systems more manageable and simplifies
integration between different systems.
(c) Synchronous or asynchronous: Synchronous clients receive their result immediately
when the service completes, while asynchronous clients retrieve their result later. The
ability to support loosely coupled systems requires asynchronous capability.
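The difference can be sketched in Python, with a slow function standing in for the remote service: the synchronous caller blocks until the result is ready, while the asynchronous caller submits the call and collects the result later.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_service(query):
    # stand-in for a remote web service call
    time.sleep(0.1)
    return f"result for {query}"

# Synchronous client: blocks until the service completes.
print(slow_service("a"))            # waits, then prints "result for a"

# Asynchronous client: submits the call and collects the result later.
with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_service, "b")
    # ...the client is free to do other work here...
    print(future.result())          # fetches the result when needed
```

Real web service toolkits expose the same two styles: a blocking call, and a non-blocking call that hands back a future or invokes a callback.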
(d) Coarse-grained: Object-oriented systems, such as Java, make their services available
through individual methods, and an individual method is too fine an operation to be
useful at the enterprise level. Building a Java application from the ground up requires the
development of several fine-grained methods, which are then combined into a
coarse-grained service that is consumed by the client or another service.
Businesses, and the interfaces they expose, should be coarse-grained. Web services
provide an easy way to define coarse-grained services that have access to substantial
business logic.
(e) Supports remote procedure calls: Consumers can use XML-based protocols to call
procedures, functions, and methods on remote objects through web services. A web
service must support the input and output framework of the remote system.
Over the years, Enterprise JavaBeans (EJBs) and .NET components have become more
prevalent in architectural and enterprise deployments. Several RPC techniques are used
to both allocate and access them.
A web service can support RPC by providing services of its own, similar to a traditional
component, or by translating incoming invocations into an EJB or .NET component
invocation.
(f) Supports document exchange: One of the most attractive features of XML is its generic
way of representing both data and complex documents.