Unit 2 CC


The internet as a platform and Software as a service

2.1 Internet technology and web-enabled applications
2.2 Web application servers
2.3 Internet of services
2.4 Emergence of software as a service
2.5 Successful SaaS architectures
2.6 Dev 2.0 platforms
2.7 Cloud computing
ROOTS OF CLOUD COMPUTING

The origins of cloud computing can be traced to the development of several technologies, particularly in hardware (virtualization, multi-core chips), the Internet (Web services, service-oriented architectures, Web 2.0), distributed computing (clusters, grids), and systems management (autonomic computing, data center automation). In other words, cloud computing emerged from the convergence of a number of distinct technologies. Figure 1.1 shows the technological domains whose advances contributed to the creation of cloud computing, which has become increasingly popular in recent years.

Some of these technologies were dismissed as marketing hype in their early stages, but they later attracted substantial attention from academia and were endorsed by major industry players. That attention triggered a process of specification and standardization which ultimately led the concept to mature and gain broad acceptance. The emergence of cloud computing in its current form is closely tied to the growth and maturation of these technologies. This section therefore examines them in more detail, in order to give a clearer and more complete picture of the cloud ecosystem as a whole.

THE TRANSITION FROM CENTRAL PROCESSING UNITS TO CLOUD STORAGE

The field of information technology is going through a transition: organizations are moving away from generating computing power in-house and toward computing resources that are delivered by utilities and made accessible as web services over the Internet. This trend is reminiscent of an event that took place more than a century ago, when manufacturers who had been producing their own electric power realized that it was more cost-effective simply to plug their equipment into the newly formed electric power grid.

Computing delivered as a utility can be defined as the "on demand delivery of infrastructure, applications, and business processes in a security-rich, shared, scalable, and standards-based computer environment over the Internet for a fee." The term was coined by IBM, and cloud computing is the model through which this form of computing is delivered.

FIGURE 1.1. Convergence of various advances leading to the advent of cloud computing.

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya et al.

This paradigm is attractive not only to the users of information technology services but also to the companies that provide them. Customers can reduce IT-related out-of-pocket expenses by purchasing reasonably priced services from third-party suppliers instead of making heavy investments in IT infrastructure and staff. The "on-demand" component of the model also gives customers the flexibility to adapt their use of information technology to rapidly growing or unexpectedly high computing demands.

Suppliers of information technology services, in turn, can lower their operating costs, and therefore improve their financial performance, by building hardware and software infrastructures that serve many users and deliver a range of solutions. The resulting efficiency leads to a faster return on investment (ROI) and a lower total cost of ownership (TCO). A number of initiatives, using a broad range of technologies, have attempted to bring this idea of utility computing closer to reality, each in its own way.

In the 1970s, companies that offered common data-processing tasks, such as automated payroll, operated time-shared mainframes as utilities; ADP (Automatic Data Processing) is one example. These time-shared mainframes could support many applications, and their capacity was regularly approached and sometimes exceeded. In fact, mainframes were expected to run at very high utilization rates, because they were relatively expensive and the cost had to be justified by effective use of the resource. The arrival of microprocessors that were both more efficient and more cost-effective brought the mainframe era to an end, and IT data centers shifted their infrastructure to collections of commodity servers.

Apart from its clear advantages, this new model inevitably led to workloads being isolated on dedicated servers, mainly because of incompatibilities between software stacks and operating systems. In addition, because reliable computer networks were scarce at the time, the IT infrastructure had to be located close to where it would be used. Together, these factors have prevented true utility computing from taking place on modern computer systems. Despite being so widely available, the servers and desktop PCs of a modern organization are often underutilized.

This underutilization arises because the IT infrastructure is sized to accommodate peaks in demand. The situation is analogous to the early power-generating stations that supplied energy to individual commercial enterprises. In the early stages of electricity generation, electric current could not travel long distances without considerable voltage losses, so consumers were tied to nearby generators. Later, new paradigms emerged, culminating in transmission systems able to make power available hundreds of kilometers from where it is generated.

In a similar vein, the spread of fast fiber-optic networks and, in recent years, new technologies for sharing computing power over very wide distances have fueled the growth of cloud computing. These developments make it possible to deliver computing services to organizations with the same speed and dependability they enjoy from their in-house equipment, with a third party supplying those services remotely on its own infrastructure. Thanks to economies of scale and high utilization, service providers can offer computing at a fraction of what it would cost a typical firm to build its own computing capacity, and the very high level of interest in these services has helped turn this possibility into reality.

SOA, WEB SERVICES, WEB 2.0, AND MASHUPS

The development of open standards for Web services (WS) has been a key contributor to the progress made in software integration in recent years. By serving as the glue that binds applications together, web services can integrate applications running on different platforms, allow information held inside one program to be exchanged with others, and make internally used applications available over the Internet. Over many years of effort, a comprehensive WS software stack has been built and standardized.

This has produced a wide variety of technologies for describing, composing, and orchestrating services, packaging and transporting messages between services, publishing and discovering services, representing quality of service (QoS) parameters, and securing access to services. Because WS standards provide a uniform approach to service delivery and are built on top of pervasive existing technologies such as HTTP and XML, they are natural candidates for implementing a service-oriented architecture (SOA), and over the last several years they have become increasingly widespread.

The purpose of a service-oriented architecture (SOA) is to address the requirements of distributed computing in a way that is protocol-independent, standards-based, and loosely coupled. In an SOA, software resources are packaged as "services": self-contained, well-defined modules that carry out typical business operations and can be used independently of one another. These modules do not depend on the state or context of other services, even though they may be combined with them. Both a service's interface and its documentation are expressed in a standardized definition language, and the interface is published. Thanks to Web services, sophisticated services can now be offered to users on demand and in a manner consistent with the standards they have adopted.

Although some Web services are published to support end-user applications, their real strength lies in the fact that their interfaces can be consumed by other services, which allows a far wider variety of applications to be supported. An enterprise application that follows the SOA paradigm is a collection of services that together execute complex business logic.
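To make the notion of a self-contained service concrete, the sketch below exposes a single well-defined business operation over HTTP. It is illustrative only: the Flask framework, the route, and the payroll data are assumptions, not part of any particular SOA product.

```python
# A minimal, hypothetical "service" in the SOA sense: a self-contained
# module exposing one well-defined business operation over HTTP.
# Flask, the route, and the data below are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory "payroll" data; a real service would own its data store.
SALARIES = {"alice": 5000, "bob": 4200}

@app.route("/payroll/<employee>", methods=["GET"])
def get_monthly_salary(employee):
    """Return the monthly salary for an employee, or 404 if unknown."""
    if employee not in SALARIES:
        return jsonify({"error": "unknown employee"}), 404
    return jsonify({"employee": employee, "monthly_salary": SALARIES[employee]})

if __name__ == "__main__":
    app.run(port=5000)
```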

The idea of gluing services together initially took hold on business-facing websites, but it quickly spread to consumer-facing sites, particularly after the introduction of Web 2.0. On the consumer side of the Internet, information and services can be aggregated programmatically, allowing them to act as the building blocks of more complex compositions known as "service mashups." Service providers expose their application programming interfaces (APIs) to the public using standard protocols such as SOAP and REST.

Amazon, del.icio.us, Facebook, and Google are just a few of the companies that provide such APIs. As a result, an idea for a fully working web application can be put into action by simply gluing pieces together with a few lines of code. In the realm of software delivered as a service (SaaS), cloud applications can likewise be built as compositions of services obtained from the same provider or from different sources.
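The following sketch shows this "gluing" idea: two independently provided REST services are queried and their responses combined in a handful of lines of client code. The endpoint URLs and JSON fields are hypothetical placeholders rather than real provider APIs.

```python
# A sketch of a "service mashup": two independent REST services are
# queried and their results combined in a few lines of client code.
# The URLs and JSON fields below are hypothetical placeholders.
import requests

def weather_for_store(store_id):
    # Hypothetical store-locator service returning name and coordinates.
    store = requests.get(f"https://example.com/stores/{store_id}").json()
    # Hypothetical weather service keyed by those coordinates.
    weather = requests.get(
        "https://example.org/weather",
        params={"lat": store["lat"], "lon": store["lon"]},
    ).json()
    # The mashup step: merge both responses into one result.
    return {"store": store["name"], "forecast": weather["summary"]}

if __name__ == "__main__":
    print(weather_for_store("42"))
```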

User authentication, payroll administration, and calendaring are examples of building blocks that can be reused and incorporated into a business solution. This is especially helpful when no single ready-made system covers all of the required features. A wide variety of such solutions and components are now publicly available in marketplaces around the world; Programmable Web, for instance, is a public directory that currently lists hundreds of APIs and mashups.

APIs such as Google Maps, Flickr, YouTube, Amazon eCommerce, and Twitter, when combined, open up a wide range of possible solutions, from locating shops that sell video games to building weather maps. In a somewhat different fashion, Salesforce.com's AppExchange makes it easier for users to share solutions developed by third parties on top of Salesforce.com components, and this facility is available to the Salesforce.com user community.

GRID COMPUTING

Grid computing not only permits the aggregation of resources from a wide range of sources but also provides transparent access to them. Most production grids, such as TeraGrid and EGEE, have as their primary goal the sharing of computing and storage resources distributed across many administrative domains. Their main purpose is to speed up a broad range of scientific applications, including climate modeling, drug design, and protein analysis.

Building standardized protocols supported by web services has been an essential part of turning the grid vision into a working reality. These protocols allow resources in different geographic locations to be "discovered, accessed, allocated, monitored, accounted for, billed for, etc., and in general managed as a single virtual system." The Open Grid Services Architecture (OGSA) was created to meet this need for standardization; it defines a set of core capabilities and behaviors that address key concerns in grid systems.

The Globus Toolkit is a middleware package that implements a broad set of commonly used Grid services and has supported the deployment of several service-oriented Grid infrastructures and applications. Users who want to interact with service grids can draw on a toolbox that includes grid brokers, which automate the interaction with the many different middleware layers and enforce policies to meet quality-of-service requirements, reducing the amount of time users spend dealing with the software directly.

The creation of standardized protocols for the many kinds of computing activity that take place on grids has, at least in principle, paved the way for offering on-demand computing services over the Internet. Nevertheless, ensuring quality of service in grids has historically been a difficult task, and grids were notoriously complex systems. Grids have not gained widespread adoption in many environments because they do not provide adequate performance isolation, a shortcoming that is particularly problematic where resources are already at a premium or where users are resistant to change. The activities of one user or virtual organization (VO) can unintentionally affect how other users of the same platform perceive its performance.

This interference can occur whenever users share the same platform for their projects. As a result, ensuring quality of service and guaranteeing execution time became a concern, especially for time-sensitive applications. Another persistent source of frustration with grids is the availability of resources under varying software configurations, which may include different operating systems, libraries, compilers, runtime environments, and so on. User applications, meanwhile, would often run only in environments that had been painstakingly customized for them.

As a direct consequence, a portability barrier has traditionally existed on the vast majority of grid systems. This barrier discourages users from adopting grids as utility computing environments, because applications and data cannot be moved easily from one grid to another. After considerable study, virtualization has been identified as the appropriate answer to these long-standing frustrations of grid users.

One example of this is the problem of hosting several distinct software environments on a single physical platform. In this context, several research projects, such as Globus Virtual Workspaces, sought to extend grids with an additional layer that virtualizes computation, storage, and network resources, a significant step in the direction indicated above.

UTILITY COMPUTING

As large grids grew in popularity and use, their installations ran into new problems caused by sudden surges in demand for resources and by strategic, sometimes aggressive, user behavior, and these problems persisted for a long time. Early grid resource management methods did not guarantee a fair and balanced distribution of the grid's available resources across its many components, and traditional metrics such as throughput, waiting time, and slowdown failed to capture the increasingly sophisticated expectations of users.

There were no practical incentives for users to be flexible about their resource requirements or job deadlines, and no mechanisms to assist users who had urgent work to complete. In a computing environment designed as a utility, the "utility" that users derive from the work they run can be given a numerical value. This value may be constant or may vary over time, reflecting various QoS constraints (such as deadline, importance, and satisfaction), and it represents the amount of money a user is prepared to pay a service provider for having their needs met.
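As a minimal sketch of such a time-varying utility value, the toy function below keeps a job's value constant until its deadline and then lets it decay linearly; the decay rule and the parameter names are assumptions made purely for illustration.

```python
# A toy utility function of the kind described above: the value a user
# attaches to a job stays constant until its deadline and then decays.
# The linear decay and the parameter names are illustrative assumptions.
def job_utility(base_value, deadline, completion_time, penalty_per_hour=10.0):
    """Return the utility (in currency units) of finishing a job at
    completion_time, given its deadline (both expressed in hours)."""
    if completion_time <= deadline:
        return base_value
    lateness = completion_time - deadline
    return max(0.0, base_value - penalty_per_hour * lateness)

# A job worth 100 units, due at hour 24:
print(job_utility(100.0, deadline=24, completion_time=20))  # 100.0 (on time)
print(job_utility(100.0, deadline=24, completion_time=30))  # 40.0 (6 hours late)
```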

Service providers, in turn, try to increase their own efficiency, which, depending on the circumstances, may be directly proportional to the money they make. When a shared system is treated as a marketplace, users compete for resources on the basis of the perceived utility or value of their jobs, and this competition arises whenever there are not enough resources to go around. Providers can then prioritize high-yield user workloads, that is, those with the greatest profit per unit of resource. An in-depth analysis of several computing paradigms provides a comparison of the utility computing environments discussed here, along with a wealth of related information.

HARDWARE VIRTUALIZATION

The infrastructure behind cloud computing services typically consists of large data centers, each of which may hold thousands of individual computers. Such data centers are built to serve a very large user base and a comprehensive selection of applications. For this reason, hardware virtualization can be seen as an excellent way to overcome most of the operational challenges involved in building and running data centers, since it allows a single physical machine to present the appearance of many independent machines.

The idea of "virtualizing" the resources of a computer system, such as its processors, memory, and I/O devices, has been familiar for decades; the concept dates back to the 1970s. Because of its potential to improve the sharing and utilization of computer systems, the technique has seen widespread adoption in recent years. Hardware virtualization makes it possible to install and run multiple operating systems and software stacks on a single piece of physical hardware.

As shown in Figure 1.2, a software layer called the virtual machine monitor (VMM), also known as a hypervisor, mediates between the underlying physical hardware and the guest operating systems running on top of it. The VMM assigns each guest operating system a virtual machine (VM), that is, a set of virtual platform interfaces tailored to that guest's requirements.

The rise in popularity of server virtualization can be attributed to the development of a range of innovative technologies, including multicore central processing units (CPUs), paravirtualization, hardware-assisted virtualization, and live migration of virtual machines. The advantages of virtualization (greater dependability, better manageability, and improved resource sharing and utilization) have long been recognized. More recently, as virtualization has been adopted across a wide spectrum of server and client systems, researchers and practitioners have focused on three fundamental capabilities surrounding virtualization. These capabilities are:

FIGURE 1.2. A hardware-virtualized server hosting three virtual machines, each running a distinct operating system and user-level software stack.

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya et al.

Workload management in a virtualized system covers workload isolation, workload consolidation, and workload migration.

Workload isolation is achieved because all program instructions are fully confined inside a virtual machine (VM), which leads to improved security. Reliability also improves, since software failures inside one VM remain isolated from those occurring in the others. Better control over performance follows as well, because the execution of one VM should not affect the performance of another.

Workload consolidation, the placement of several unrelated workloads onto a single physical platform, improves overall system utilization and gets the most out of the available resources. It is also used to overcome software and hardware incompatibilities that may arise from upgrades, taking advantage of the fact that older and newer operating systems can run simultaneously on the same machine.

Workload migration, also referred to as application mobility, aims to simplify tasks such as load balancing, hardware maintenance, and disaster recovery by moving workloads from one location to another. To achieve this, the current running state of a guest operating system is encapsulated inside a VM, which allows it to be suspended, fully serialized, migrated to a different platform, and resumed immediately or saved for restoration at a later time. A VM's state consists of a full disk or partition image, an image of its random access memory (RAM), and its configuration files; together these three items make up the state.

Many different VMM platforms are available, and each serves as the foundation for a different kind of utility or cloud computing system. The following sections give an overview of three of the most widely used virtual machine technologies: KVM, VMware, and Xen.
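As a brief illustration of how such hypervisors can be driven programmatically, the sketch below queries a local VMM through the libvirt Python bindings, which offer a uniform API over KVM, Xen, and other platforms. It assumes that libvirt-python is installed and that a hypervisor answers at the given connection URI.

```python
# A sketch of querying a hypervisor through libvirt, which offers a
# uniform API over VMMs such as KVM and Xen. Assumes the libvirt Python
# bindings are installed and a hypervisor answers at the given URI.
import libvirt

def list_guests(uri="qemu:///system"):
    conn = libvirt.openReadOnly(uri)          # read-only connection to the VMM
    try:
        for dom in conn.listAllDomains():     # every defined guest VM
            status = "running" if dom.isActive() else "inactive"
            print(dom.name(), status)
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()
```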

VIRTUAL APPLIANCES AND THE OPEN VIRTUALIZATION FORMAT

A "virtual appliance" is an application packaged together with the environment it needs in order to run. That environment may include an operating system, libraries, compilers, databases, application containers, and so on. Packaging application environments as virtual appliances makes software customization, configuration, and patching more portable and simpler to carry out.

Most often an appliance takes the shape of a virtual machine disk image, accompanied by a specification of the hardware it requires, and it can be installed readily on a hypervisor. Online marketplaces already exist where pre-built appliances, loaded with popular operating systems and useful software combinations, can be bought or obtained free of charge. The VMware virtual appliance marketplace, for example, lets customers deploy appliances either on VMware hypervisors or on the public clouds operated by partners [30], and Amazon lets developers monetize their use of the Amazon Elastic Compute Cloud (EC2) by distributing customized Amazon Machine Images (AMIs).

Interoperability problems can arise when many hypervisors exist, each supporting a different and mutually incompatible virtual machine image format. Amazon, for instance, created its own image format, the Amazon Machine Image (AMI), which quickly gained traction on the Amazon EC2 public cloud and is now extremely popular. Other virtualization products, such as Citrix XenServer, the Linux distributions that ship with KVM, Microsoft Hyper-V, and VMware ESX, each use formats of their own.

The Open Virtualization Format (OVF) is a standard developed by a group of companies, most notably VMware, IBM, Citrix, Cisco, Microsoft, Dell, and HP, with the aim of simplifying the packaging and distribution of software to be run on virtual machines. It sets out to be "open, secure, portable, efficient, and extensible." An OVF package consists of a file, or set of files, describing the VM hardware characteristics (such as memory, network cards, and disks), operating-system details, startup and shutdown actions, the virtual disks themselves, and other metadata containing information about the product and its licensing.
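Because an OVF descriptor is an XML document, its contents can be inspected with ordinary tooling. The sketch below uses Python's standard XML library to print a few descriptor elements; real descriptors use DMTF namespaces and a much richer schema, and the file name here is a placeholder.

```python
# A sketch of inspecting an OVF descriptor (an XML file) with Python's
# standard library. Real descriptors use DMTF namespaces and a richer
# schema; the file name and the simplified handling are assumptions.
import xml.etree.ElementTree as ET

def summarize_ovf(path="appliance.ovf"):
    root = ET.parse(path).getroot()
    for elem in root.iter():
        # Drop the XML namespace so we can match on local tag names only.
        local_tag = elem.tag.split("}")[-1]
        if local_tag in ("File", "Disk", "VirtualSystem", "OperatingSystemSection"):
            print(local_tag, dict(elem.attrib))

if __name__ == "__main__":
    summarize_ovf()
```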

OVF can also handle complex packages made up of several VMs, multi-tier applications being a good example, which makes it a very versatile format. Because of this versatility, OVF has attracted a growing number of extensions relevant to cloud computing and data centers. Mathews and coworkers, for example, proposed virtual machine contracts (VMCs) as an extension of the features OVF provides.

A virtual machine contract (VMC) is a way of communicating and managing the complex expectations that virtual machines have of their runtime environment, and vice versa. For example, a cloud customer can use a VMC to specify the minimum and maximum amounts of a resource that a VM needs in order to operate effectively; in a similar way, a cloud provider can impose resource limits as a means of controlling resource consumption and the associated costs.

AUTONOMIC COMPUTING

The ever-increasing complexity of computing systems has motivated research into autonomic computing, which seeks to reduce the amount of human involvement required to operate systems and thereby improve their effectiveness. In other words, systems should manage themselves, with humans providing guidance only at higher levels. Autonomic, or self-managing, systems rely on monitoring probes and gauges (sensors), on an adaptation engine (an autonomic manager) that computes optimizations based on the monitoring data, and on effectors that apply the resulting changes to the system.

The term "self-managing systems" is often used interchangeably with the term
"autonomic systems." The capacity of a system to self-configure, self-optimize, self-
heal, and self-protect itself is the defining characteristic of an autonomous system.
These are the four components that are essential to the functioning of autonomic
systems. These characteristics are added to the definition of autonomic systems by
IBM's Autonomic Computing Initiative so that the definition of autonomic systems
may provide a more accurate description of autonomic systems. "Monitor Analyze Plan
Execute—Knowledge" is the name of the reference model that IBM has suggested for
the autonomic control loops of autonomic managers. MAP-K is the abbreviation for
this model's full name. This paradigm was developed by IBM, which is responsible for
the project.
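The toy loop below renders the MAPE-K cycle in a few lines: a sensor is monitored, the reading is analyzed against a target held in the knowledge base, a plan is chosen, and an effector applies it. The thresholds, the random "sensor," and the scaling actions are assumptions for illustration only.

```python
# A toy MAPE-K (Monitor, Analyze, Plan, Execute over shared Knowledge)
# control loop of the kind an autonomic manager runs. The thresholds,
# the random "sensor," and the actions are illustrative assumptions.
import random

KNOWLEDGE = {"cpu_target": 0.70, "vm_count": 2}   # shared knowledge base

def monitor():
    # Sensor: a real manager would read probes and gauges here.
    return {"cpu": random.uniform(0.2, 1.0)}

def analyze(metrics):
    return metrics["cpu"] - KNOWLEDGE["cpu_target"]

def plan(deviation):
    if deviation > 0.10:
        return "scale_out"
    if deviation < -0.30 and KNOWLEDGE["vm_count"] > 1:
        return "scale_in"
    return "no_op"

def execute(action):
    # Effector: here it only adjusts the knowledge base.
    if action == "scale_out":
        KNOWLEDGE["vm_count"] += 1
    elif action == "scale_in":
        KNOWLEDGE["vm_count"] -= 1
    print(action, KNOWLEDGE)

if __name__ == "__main__":
    for _ in range(5):
        execute(plan(analyze(monitor())))
```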

Cloud computing providers are responsible for managing their vast data centers effectively and efficiently. Viewed in this light, the ideas of autonomic computing can inspire software for data center automation, which may perform tasks such as managing the service levels of running applications, managing data center capacity, performing proactive disaster recovery, and automating VM provisioning.

LAYERS AND TYPES OF CLOUDS

The many cloud computing services now available can be organized into three main classes: infrastructure as a service, platform as a service, and software as a service. The criterion used to place a provider's capabilities into one of these categories is the level of abstraction at which those capabilities are offered. Figure 1.3 depicts the layered organization of the cloud stack, from the physical infrastructure up to the applications.

These abstraction levels can also be viewed as a layered architecture in which the services of a higher layer can be composed from the services of the layer beneath it. The reference model of Buyya and colleagues explains the role that each layer plays in an integrated architecture. Core middleware manages the physical resources and the VMs deployed on top of them, and it also provides the features needed to offer multi-tenant, pay-as-you-go services, such as accounting and billing.

Cloud development environments are built on top of these infrastructure services to provide facilities for application development and deployment. At this level, a variety of programming models, libraries, application programming interfaces (APIs), and mashup editors make it possible to build a wide range of business, Web, and scientific applications. Once deployed in the cloud, these applications can be consumed by end users.

INFRASTRUCTURE AS A SERVICE

Offering virtualized resources (computation, storage, and communication) on demand is known as Infrastructure as a Service (IaaS). A cloud infrastructure allows on-demand provisioning of servers running several choices of operating systems and a software stack customized to the user's needs. The infrastructure services layer is the foundation on which the other layers of a cloud computing system are built and represents the most fundamental part of the stack.

Amazon's Elastic Compute Cloud (EC2) exemplifies IaaS: it offers virtual machines with a software stack that can be customized in much the same way as an ordinary physical server would be.

FIGURE 1.3. The cloud computing stack.

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya et al.

Infrastructure as a service is the core product that Amazon Web Services sells. Users can perform a wide variety of operations on a server: starting and stopping it, customizing it by installing software packages, attaching virtual disks to it, and configuring access permissions and firewall rules.
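A hedged sketch of these basic operations using boto3, the AWS SDK for Python, is shown below. The AMI ID and region are placeholders, and a real account with credentials configured would be required for the calls to succeed.

```python
# A sketch of the IaaS operations described above (start, inspect, and
# release a server) using boto3, the AWS SDK for Python. The AMI ID and
# region are placeholders; real values depend on the account being used.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch one small virtual machine from a (placeholder) machine image.
instance = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)[0]

instance.wait_until_running()
instance.reload()                       # refresh attributes such as the DNS name
print("Launched", instance.id, "at", instance.public_dns_name)

# ...use the server, then release it so that billing stops.
instance.terminate()
```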

PLATFORM AS A SERVICE

In addition to infrastructure-oriented clouds that provide raw computing and storage capabilities, another approach is to offer a higher level of abstraction that makes the cloud easy to program. This approach is known as Platform as a Service (PaaS). A cloud platform offers an environment on which developers create and deploy applications without necessarily needing to know how many processors or how much memory those applications will use.

In addition, a variety of programming models and specialized services (for example, data access, authentication, and payment processing) are offered as building blocks for new applications. Google App Engine is a good illustration of Platform as a Service: it provides a scalable environment for developing and hosting web applications. Applications are expected to be written in specific programming languages, such as Python or Java, and to use the service's own proprietary structured object data store. Its building blocks include an in-memory object cache (memcache), an instant messaging service (XMPP), an image manipulation service, and integration with the Google Accounts authentication service.
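The sketch below shows what a request handler in the legacy App Engine Python runtime might look like, using the memcache building block mentioned above. The module names follow the old webapp2-based runtime, and the cached value is invented for illustration.

```python
# A sketch of a request handler in the legacy Google App Engine Python
# runtime, using the memcache building block mentioned above. Module
# names follow the old webapp2-based runtime; the cached value is invented.
from google.appengine.api import memcache
import webapp2

def expensive_lookup(key):
    # Placeholder for a query against the datastore or another service.
    return "value-for-" + key

class MainPage(webapp2.RequestHandler):
    def get(self):
        data = memcache.get("greeting")
        if data is None:
            data = expensive_lookup("greeting")
            memcache.set("greeting", data, time=60)   # cache for 60 seconds
        self.response.write(data)

app = webapp2.WSGIApplication([("/", MainPage)])
```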

SOFTWARE AS A SERVICE

Applications sit at the top of the cloud stack. Services provided by this layer can be accessed by end users through web portals designed around their needs. As a consequence, users are increasingly moving away from locally installed computer programs and toward online software services that offer the same functionality. Traditional desktop applications such as word processing and spreadsheet software are now being delivered as web-based services that can be used from anywhere. Software as a Service (SaaS) is a model of software delivery that relieves customers of the burden of keeping their software up to date.

"Software as a service" is shortened to "SaaS," while the full phrase is "software as a


service." At the same time, the procedure of application construction and testing is
made easier for providers. Because it employs the software as a service, or SaaS,
technique, Salesforce.com is in a position to provide business productivity solutions

26 | P a g e
(CRM). These tools are entirely hosted on servers that are owned and maintained by
Salesforce.com, and they are accessible through their website. Customers now have the
ability to modify their software to meet their specific requirements and to make use of
it at any time of day or night that is most suitable for them.

DEPLOYMENT MODELS

Although cloud computing emerged largely from the construction of public computing utilities, a number of deployment models that differ in physical location and distribution have since come into use. These options allow cloud computing to be applied in a wide range of settings. Accordingly, a cloud can be classified as public, private, community, or hybrid based on its deployment model, as shown in Figure 1.4, regardless of the type of service the cloud offers.

FIGURE 1.4. Types of clouds based on deployment models.

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya et al.

Armbrust et al. propose the following definitions of public cloud and private cloud, respectively: a "cloud made available in a pay-as-you-go manner to the general public" and the "internal data center of a business or other organization, not made available to the general public."

MAINFRAME ARCHITECTURE

The history of corporate computing can be traced back to the introduction of 'third-generation' computers in the 1960s, beginning with the IBM System/360 'mainframe' and its descendants, such as the IBM z-series family, which are still in use today. These computers were built with integrated circuits rather than vacuum tubes.

Until the 1980s, punched cards were the most common input medium for mainframes and teleprinters the most common output device; both were eventually supplanted by cathode ray tube (CRT) terminals. The 'mainframe' architecture illustrated in Figure 1.5 became prevalent after 1980: the screens of a terminal-based user interface were displayed using the 'virtual telecommunications access method' (VTAM), managed by the mainframe server, to allow data to be entered and viewed.

FIGURE 1.5. Mainframe architecture

Source: Enterprise Cloud Computing, by Gautam Shroff

Communication between the terminals and the mainframe used the 'systems network architecture' (SNA) protocol rather than the TCP/IP protocol in common use today.

Although the CPU power of these mainframes was low compared with today's standards, their I/O bandwidth was (and still is) generous relative to their CPU power, and they also had a fairly large memory capacity. For this reason, mainframe applications were built in a batch style so as to limit the amount of CPU time spent on data entry or retrieval. Data was written to disk as soon as it was captured and then processed later by scheduled background programs, a simple arrangement in sharp contrast to the intricate business logic now executed during 'live' transactions on the web.

In fact, for many years the shift from a batch model to an online one was regarded as a major revolution in IT architecture, and substantial system migration efforts were undertaken to achieve it; the reason is easy to see. With batch processing, money deposited into a bank account would normally not be reflected in the balance until the next business day, once the 'end of day' batch jobs had completed. Furthermore, if erroneous data was submitted, a long chain of corrective measures had to be triggered, in contrast to the immediate data validations we are accustomed to today.
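The toy script below mimics that 'end of day' batch posting: deposits accumulated during the day are applied to account balances only when the job runs. The file names and record layout are invented for the example.

```python
# A toy rendering of the batch model: deposits collected during the day
# are applied to account balances only when the "end of day" job runs.
# The file names and the two-column record layout are invented.
import csv

def end_of_day(balances_file="balances.csv", deposits_file="deposits.csv"):
    balances = {}
    with open(balances_file) as f:
        for account, amount in csv.reader(f):
            balances[account] = float(amount)

    # Apply the whole day's deposits in one pass, long after they occurred.
    with open(deposits_file) as f:
        for account, amount in csv.reader(f):
            balances[account] = balances.get(account, 0.0) + float(amount)

    with open(balances_file, "w", newline="") as f:
        writer = csv.writer(f)
        for account, amount in balances.items():
            writer.writerow([account, f"{amount:.2f}"])

if __name__ == "__main__":
    end_of_day()
```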

In early mainframe architectures, which remained in use until the mid to late 1980s, application data was stored in hierarchical or networked database systems, such as IBM's hierarchical IMS database and the IDMS network database from Computer Associates. The relational database management system (RDBMS) paradigm appeared in the 1970s as both a publication and a working prototype, but it did not make its first appearance in the commercial market until the early 1980s, when IBM released SQL/DS for the VM/CMS operating system.

Relational databases did not become widely used, however, until the mid-1980s, when IBM's DB2 became available on the mainframe and Oracle's implementation appeared on the emerging Unix platform. We shall see later how some concepts from those early database systems are now resurfacing in the form of new non-relational cloud database models. The mainframe storage subsystem, the 'virtual storage access method' (VSAM), provided built-in support for a variety of file access and indexing mechanisms, along with record-level locking that allowed concurrent users to share data.

Early data storage was typically file-based or used networked and hierarchical databases, which rarely offered concurrency control beyond the most basic form of locking. The need for transaction control, that is, maintaining the integrity of a logical unit of work made up of several updates, motivated the development of 'transaction-processing monitors' (TP monitors), such as CICS (customer information control system). CICS leveraged the facilities of the VSAM layer and introduced commit and rollback protocols to support atomic transactions in a multi-user environment.

CICS is still used with DB2 relational databases on IBM z-series mainframes in some installations. At the same time, the insatiable need for speed has kept 'direct access' techniques in use, in which the application logic itself is responsible for transaction control. The TPF system used in the aviation industry is one example; it is quite likely the fastest application-embedded transaction processing monitor available today.

Mainframes were also the first systems to use virtual machine technology on a large scale, a technology that is now an essential part of the infrastructure underpinning cloud computing. Mainframes running the VM family of 'hypervisors' could run numerous 'guest' operating systems independently of one another, including MVS (widely used through the 1990s), z/OS, and more recently even Linux. Moreover, both the operating systems and the virtual machine environments running on mainframes featured high levels of automation.

In many ways this automation was similar to what is now being done in cloud environments, but on a far larger scale: hardware fault tolerance included the automatic migration of workloads if CPUs or memory units failed, and the MVS operating system pioneered software fault-tolerance, or 'recovery,' features. The mainframe design also included capabilities for fine-grained resource measurement, monitoring, and problem diagnosis, capabilities that are once again becoming vital for cloud computing systems.

CLIENT-SERVER ARCHITECTURE

Significant advances in microprocessor technology during the 1980s led to the widespread use of personal computers in homes and workplaces. Around the same time, minicomputers such as the VAX series and RISC-based machines became commercially available; both could run the Unix operating system and supported the C programming language. It now became viable to move some data processing tasks off expensive mainframes onto desktop CPUs, which were claimed to be more powerful and certainly cost less. Desktop computers, the same machines that were beginning to be used for word processing and spreadsheets with the newly created PC-based office productivity tools, could now also access corporate data.

This was a welcome development for businesses, since terminals had a well-deserved reputation for being difficult to operate and were often confined to special 'data processing rooms.' In addition, relational databases such as Oracle became available on minicomputers, allowing them to overtake the modest level of acceptance that DB2 had achieved in the mainframe world, in part because relational databases were portable. Finally, networking over the TCP/IP protocol rapidly became a standard, which meant that data could be transferred across networks of personal computers and minicomputers.

Organizational data processing moved quickly to exploit these new technologies, leading to the client-server architectures illustrated in Figure 1.6. The minicomputer-based 'forms' approach to data processing was the first to gain wide popularity. Initially it used terminals to access server-side functionality written in C, a design choice that mimicked the mainframe architecture; the character-oriented 'CUIs' of the past were then replaced by graphical 'GUIs' provided by PC-based forms applications. This 'forms'-based GUI approach was the first incarnation of the 'client-server' architecture.

The 'forms' approach was subsequently superseded by the more general client-server
architecture, in which the bulk of the processing logic executes inside a client program
running on a personal computer rather than on a server.

FIGURE 1.6. Client-server architectures

Source: Enterprise Cloud Computing by Gautam Shroff
Because the client program runs much like any conventional PC application, the
client-server architecture shown in Figure 1.6 is also referred to as a 'fat-client' (or
'thick-client') architecture. The client program makes direct SQL calls to the relational
database, using networking protocols such as SQL*Net, over a local area network (or
even a wide area network) running TCP/IP. Most of the business logic resides in the
client application code, although for performance reasons a portion of it may also be
written within the database itself in the form of 'stored procedures.'
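
As an illustration of the 'fat-client' style, the sketch below shows how a client program
might query the database directly over the network. It uses JDBC rather than the older
client libraries of that era, and the connection URL, table and column names are
hypothetical; it is meant only to convey the pattern of business logic sitting in the client
and issuing SQL straight at the database tier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FatClientReport {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; a real deployment of the time would use
        // the database vendor's networking layer configured for the LAN or WAN.
        String url = "jdbc:oracle:thin:@dbhost:1521:SALES";
        try (Connection con = DriverManager.getConnection(url, "appuser", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT customer_name, balance FROM accounts WHERE balance > ?")) {
            ps.setInt(1, 10000);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Business logic (formatting, validation, totals) runs here,
                    // inside the client program -- the hallmark of a fat client.
                    System.out.println(rs.getString(1) + " : " + rs.getDouble(2));
                }
            }
        }
    }
}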

The client-server architecture exploded in popularity: maintaining mainframe systems
that had been evolving for more than a decade was becoming increasingly difficult, and
the client-server approach offered an innovative and, at first glance, less costly way to
recreate these applications for the new world of desktop computers and smaller Unix
servers. In addition, the processing power of desktop machines could be used to run
validations and other logic, making truly 'online' systems possible, a huge advance for
a world that had previously depended on batch processing. Finally, graphical user
interfaces made it feasible to build far more sophisticated user interfaces, adding to the
feeling of having been "redeemed" from the domain of mainframe computers.

The client-server revolution of the early to mid 1990s was largely responsible for the
development and commercial success of several application software products. An
example is SAP R/3, the client-server version of SAP's ERP software for automating
core manufacturing processes, whose functionality was eventually extended to other
areas of business operations. Supply chain management (SCM), introduced to the
market by i2, and customer relationship management (CRM), introduced by Siebel,
followed suit in popularity.

With these technologies it seemed conceivable to replace substantial amounts of the
functionality running on mainframes with client-server systems at a fraction of the cost.
However, as its use spread beyond small workgroup applications into the core systems
of large organizations, the client-server design rapidly began to exhibit its limitations:
because the processing logic on the 'client' accessed the database layer directly,
client-server programs typically made several calls to the server while processing a
single screen.

Each of these requests was relatively heavy compared with the terminal-based
paradigm, in which only the initial input and the final result of a computation were
transmitted. (Recall that CICS and IMS even supported 'changed-data only' modes of
terminal interaction, in which only the bytes actually modified by a user were sent over
the network.) Such 'frugal' network architectures made it feasible for terminals situated
all over the globe to connect to a centralized mainframe, even though network
bandwidths were far lower than they are today.

Therefore, although the client-server paradigm worked well over a local area network
(LAN), it became fraught with problems when client-server systems began to be
deployed across wide area networks (WANs) joining offices situated all over the globe.
As a direct result, a vast number of businesses were forced to set up regional data
centers, each replicating the same corporate application but running on the data specific
to its particular location. This structure by itself led to inefficiencies in managing global
software upgrades, quite apart from the additional problems created by the need to
upgrade the 'client' programs installed on each individual desktop workstation.

Further, it became increasingly clear over time that application maintenance costs rose
sharply when user interface code and business logic were intermixed, as was almost
always the case with 'fat' client-side applications. Lastly, and most importantly in the
long term, the client-server paradigm could not scale: organizations such as banks and
stock exchanges, where very high transaction volumes were the norm, simply could not
be handled by the client-server model. This was its most critical shortcoming. As a
consequence, the mainframe remained the only choice for delivering very high
throughput while maintaining good performance in business operations.

The client-server era left us a legacy of unfavorable lessons: the perils of distributing
processing and data, the challenges of managing upgrades across a broad range of
instances and versions, and the need for a computing architecture that can be scaled up
as required. As we shall see in the coming chapters, many of these challenges continue
to recur as wider use of the new cloud computing models is envisioned, and they cannot
be overlooked.

3-TIER ARCHITECTURES WITH TP MONITORS

What were the factors that caused client-server systems to fail to scale for high-volume
transaction processing? It was not that the central processing units (CPUs) were less
powerful than mainframes; by the late 1990s the raw processing power of RISC CPUs
had already exceeded that of mainframes. Rather, unlike the mainframe, client-server
architectures had no virtual machine layer or job control mechanisms to govern user
access to limited resources such as CPU and disk.

FIGURE 1.6. Client-server fails

As a consequence, as illustrated in Figure 1.6, 10,000 client workstations would
eventually end up requiring 10,000 processes and database connections, together with
a proportional amount of RAM, open files and other resources, finally causing the
server to crash. (The numbers in the figure reflect a late-1990s view of computing,
when a server with 500 megabytes of RAM was considered large.)

FIGURE 1.7. 3-tier architecture scales

Source: Enterprise Cloud Computing by Gautam Shroff

Although customers now have access to terabytes of RAM on servers, the fundamental
issue remains unchanged. To address the problem on midrange database servers,
transaction-processing monitors were reinvented. (Recall that the very earliest TP
monitors, such as CICS, were developed to run on mainframes.) These TP monitors
were the earliest instances of a kind of software known as 'middleware', which sat
between clients and a database server to manage access to scarce server resources,
primarily by queuing client requests.

The term "middleware" itself dates from the 1980s. As shown in Figure 1.7, restricting
the number of concurrent requests to a fixed amount, say fifty, allowed the server to
handle a massive aggregate load, while the clients paid only a small price in response
time as their requests waited in the TP monitor queues. With careful configuration of
the middleware, the typical waiting time could be kept shorter than the time the server
needed to actually process a request.

As a result, the overall degradation in response time was tolerable and often not even
noticeable. In the TP monitor architecture, the requests being queued were called
'services'; these included database operations as well as business logic, and were
implemented as a collection of Unix processes, each publishing a number of such
services, usually in the form of remote procedure calls. With the development of
service-based applications, some of the gaps in the traditional client-server model of
application development began to be filled.
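
A minimal sketch of the queuing idea behind a TP monitor is shown below: a fixed pool
of worker threads (fifty, matching the example above) drains a queue of incoming
requests, so the back end never sees more than fifty concurrent calls no matter how
many clients are connected. The class and method names are illustrative only and are
not taken from any actual TP monitor product.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MiniTpMonitor {
    // At most 50 requests are serviced concurrently; the rest wait in the
    // executor's internal queue, just as requests waited in TP monitor queues.
    private final ExecutorService workers = Executors.newFixedThreadPool(50);

    /** Submit a queued unit of work, i.e. a published 'service'. */
    public void submit(Runnable service) {
        workers.submit(service);
    }

    public static void main(String[] args) {
        MiniTpMonitor monitor = new MiniTpMonitor();
        for (int i = 0; i < 10_000; i++) {        // 10,000 clients...
            int requestId = i;
            monitor.submit(() -> {
                // ...but only 50 of these bodies ever run at the same time,
                // so the back-end database sees a bounded load.
                System.out.println("processed request " + requestId);
            });
        }
        monitor.workers.shutdown();
    }
}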

While the client applications concentrated primarily on user interface management and
behavior, the services encoded the business logic. Such applications turned out to be
far easier to maintain than 'fat-client' programs, in which user interface and business
logic were jumbled together in the same code. As shown in Figure 1.8, the TP monitor
idea evolved during the early 1990s into what is now known as the 3-tier architectural
paradigm.

In this model the client layer, the business layer and the data layer are clearly separated,
and typically also reside on different machines. Further, this architecture allowed the
data layer to remain on mainframes wherever it was important to interface with older
computer systems, with mainframe-based transaction-processing monitors such as
CICS used to publish 'data only' services to the business logic located in the middle
tier.

FIGURE 1.8. 3-tier TP monitor architecture

Source: Enterprise Cloud Computing by Gautam Shroff

It is worth noting that terminal-based 'forms' architectures, as well as those GUI-based
client-server systems in which the business logic was confined to stored procedures in
the database layer, are also essentially '3-tier' systems. Architecturally, however, they
belong more to the client-server category than the 3-tier category, since there is no
middle tier performing the request queuing that a TP monitor provides.

The 3-tier architecture saw a relatively small but significant improvement with the
introduction of 'object-based' access to services, replacing flat remote procedure calls.
This improvement came with the adoption of object-oriented distributed
communication technologies such as CORBA, in which the client application could
invoke 'methods' on 'distributed objects' hosting the services running on the server.

As a result, it was no longer necessary to write application-specific message handling
merely to pass arguments to services and retrieve their responses. Once Java became
widely used for developing client-side applications, its built-in support for 'serializing'
objects made functionally equivalent features available directly within the language,
so client-side programs could be developed rapidly.
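
The built-in serialization mentioned here can be illustrated with a short sketch: any class
implementing java.io.Serializable can be written to and read back from a byte stream,
which is what made it easy to ship argument objects between client programs and
remote services. The OrderRequest class below is purely hypothetical.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // A hypothetical argument object for a remote service call.
    static class OrderRequest implements Serializable {
        String item;
        int quantity;
        OrderRequest(String item, int quantity) { this.item = item; this.quantity = quantity; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize the object to bytes, as a client might before sending it
        // over the network to a server-side service...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new OrderRequest("widget", 3));
        out.flush();

        // ...and deserialize it on the other side.
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        OrderRequest received = (OrderRequest) in.readObject();
        System.out.println(received.item + " x " + received.quantity);
    }
}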

We highlight these capabilities because, in the internet architectures that followed,
much of the complexity of web-based systems, and of 'web services' in particular, has
centered on techniques that essentially reproduce them in a user-friendly and efficient
manner. Two important lessons carried over from the 3-tier architecture are (a) keeping
a clear separation between the user interface and the business logic, and (b) handling
large transaction volumes through load balancing based on request queuing.

Both of these have become essential elements of corporate software architecture, and
their significance has remained unchanged through the rise of internet-based
architectures and into the blooming age of cloud computing. The 3-tier model did not
go away; rather, it matured into an essential component of web-based computing, built
on internet standards rather than the proprietary technologies that were prevalent during
the era of the TP monitor.

We have now considered the mainframe, client-server and 3-tier architectures, and
Table 1.1 compares the aspects of each that we consider most important, with emphasis
on the lessons that carry over into software architecture for public cloud computing. In
the next chapter we study the origins of the internet, its effect on the construction of
business information systems, and its transformation from a communication network
into a platform for computing.

CHAPTER 2

INTERNET TECHNOLOGY AND WEB-ENABLED APPLICATIONS

The World Wide Web Consortium (W3C) is now responsible for defining the standards
for both the HyperText Transfer Protocol (HTTP) and the HyperText Markup Language
(HTML). Both of these technologies are fundamental to the functioning of
internet-based applications. Web browsers such as Internet Explorer, together with web
servers running software such as HTTPD (the HyperText Transfer Protocol Daemon),
implement these standards and make it possible for content to be published on the
internet and accessed by users worldwide.

Figure 2.1 Internet technology and web-enabled applications

Source: Enterprise Cloud Computing by Gautam Shroff

Additional technologies, such as XML and SOAP, also carry significant weight and
will be discussed in the following chapters. Here we cover the basic aspects of these
underlying technologies, which are crucial for understanding both internet-based
enterprise applications and cloud computing.

As shown on the left-hand side of Figure 2.1, a web server is a process, such as the
Apache HTTPD daemon discussed below, that receives HTTP requests from clients,
typically web browsers. Requests are placed in a queue as soon as they are received,
where they wait until a request handler thread can be assigned to them. The server then
sends back a response containing the requested data, which may be retrieved directly
from a path in the file system or computed by a server program executed specifically
to respond to the request.

The web server is able to launch server programs and communicate with them through
a protocol known as CGI (the common gateway interface): the web server passes
arguments to the applications and receives their output, such as data retrieved from a
database. The browser client's only responsibility is to interpret the HTML returned by
the server and present it to the end user.
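
To make the CGI mechanism concrete, here is a minimal sketch of a CGI-style program
written in Java (real CGI scripts of the era were more often written in C or Perl, so the
language choice here is purely illustrative). The web server passes the request's query
string through the QUERY_STRING environment variable and captures whatever the
program writes to standard output, which must begin with an HTTP header block.

public class HelloCgi {
    public static void main(String[] args) {
        // The web server supplies request data via environment variables.
        String query = System.getenv("QUERY_STRING");

        // A CGI response starts with headers, then a blank line, then the body.
        System.out.println("Content-Type: text/html");
        System.out.println();
        System.out.println("<html><body>");
        System.out.println("<p>You asked for: " + (query == null ? "(nothing)" : query) + "</p>");
        System.out.println("</body></html>");
    }
}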

The widespread use of web servers owes much to the development of the open source
HTTPD web server (written in C) and the establishment of the Apache community to
support it. A group of corporate IT practitioners led by Brian Behlendorf began this
effort out of concern that the HTTP protocol would become fragmented if different
vendors kept adding their own proprietary extensions to the standard. The Apache web
server is also widely regarded as the first large-scale use of open source software within
enterprise IT departments.

In the early years of the World Wide Web, through the 1990s, when many referred to
it as the "Information Superhighway," the HTTP protocol, together with the data input
forms available in HTML, offered the possibility of building browser-based
('web-enabled') interfaces to legacy systems. The process of building such interfaces
was referred to as "web enablement." This proved particularly useful for gaining access
to mainframe applications, which in the past had been accessible only through
specialized terminals but could now be reached from standard desktop PCs.

The practice of writing software that emulates a terminal session in order to interface
with a mainframe application is referred to as "screen scraping." As shown on the
right-hand side of Figure 2.1, such a program then communicates with a web server via
the CGI protocol to relay the results of the emulated session. Using this technique, users
within the business could more easily access both mainframe and TP monitor or
CUI-forms-based applications, and information stored in legacy systems could be
published directly onto the then nascent world wide web. A further advantage was that
the browser acted as a "universal client program," which removed the burdensome
work of propagating updates to user desktops.

Recall that the client-server architecture was particularly weak in exactly this respect.
Because the internet protocols were simple to deploy and worked well over wide area
networks, web enablement also made it easy to give operations in different geographic
locations access to applications running in data centers, regardless of where those data
centers were physically situated. Unfortunately, client-server systems were also the
most difficult to web-enable, since a considerable portion of (or perhaps all of) their
business logic resided in the 'fat' client-side programs installed on user desktops.

This logic could not simply be moved to the server side, where it could have been
managed far more easily; as a result, the functionality of client-server systems had to
be rewritten, through an appropriate redesign, before it could be made available over
the web. Another fundamental limitation of early web-based applications was that
interactive user interface behavior could not be implemented in plain HTML.

This was not a serious constraint when web-enabling the terminal-based interfaces of
mainframes, but it did mean a loss of functionality for client-server and 3-tier
applications that provided richer, more interactive user interfaces through client-side
code. (In recent years new technologies such as AJAX have enabled 'rich internet
applications' in the browser, considerably reducing the impact of this constraint.) In the
latter half of the 1990s there was an explosion of web-enablement initiatives, driven by
the many potential benefits of a web-based interface. These projects linked legacy
applications to the internet through a variety of ad hoc approaches, each tailored to the
circumstances of the particular project rather than to any industry norm, and this in turn
gave rise to a new set of problems.

WEB APPLICATION SERVERS

In the web-enabled application architecture described above, processing, such as
accessing a database, was carried out by scripts or programs executed by the web
server, outside the web server process itself, using CGI for inter-process
communication. This was an expensive overhead: each time one of these 'CGI scripts'
was invoked, a separate operating-system process had to be launched afresh. This
inefficiency motivated the FastCGI solution.

FastCGI allowed the web server to communicate with another server-side process that
remained continuously running, thereby eliminating the per-request start-up cost. It was
also possible to dynamically link application C code into the web server itself (as with
the mod_c module of the Apache HTTPD), but this latter technique was never widely
adopted and was rarely used in practice.

The invention and broad adoption of the Java programming language, designed to be
portable across machine architectures and featuring an efficient interpreted execution
model, made it feasible to execute application functionality inside the web-server
process itself. This led to a new kind of architecture that came to be known as the
"application server," shown on the left side of Figure 2.2: in addition to serving HTTP
requests from files or CGI scripts, requests could also be handled by multi-threaded
execution environments, called 'containers', running inside the web server.

For example, the 'servlet' container, first implemented in the pure-Java Apache Tomcat
server, made it possible for Java code to run in a multi-threaded fashion as 'servlets'
while remaining within the server process. Using these threads, the container managed
load balancing across incoming requests in a manner comparable to TP monitors, and
it also maintained a shared pool of database connections across requests.
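
A minimal servlet, of the kind a container such as Tomcat would run on one of its
request-handling threads, might look like the sketch below. It uses the standard
javax.servlet API; the greeting logic and the request parameter are of course just
placeholders.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // The container invokes doGet() on a pooled thread for each request,
        // so many requests are served concurrently inside one server process.
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        String name = request.getParameter("name");
        out.println("<html><body><h1>Hello, " + (name == null ? "world" : name)
                + "</h1></body></html>");
    }
}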

Therefore, in contrast to the client-server model, the application-server architecture
enjoyed the benefits of a three-tier architecture, including the ability to handle larger
workloads. Moreover, because Tomcat was a pure Java implementation of the HTTP
protocol, it could run without recompilation on any platform that supported a Java
virtual machine (JVM).

FIGURE 2.2. Web application server

Source: Enterprise Cloud Computing by Gautam Shroff

This portability was another factor in Tomcat's overall success. It is important not to
confuse the HTTPD web server with the Tomcat server, even though both handle HTTP
requests: HTTPD is written in C and acts purely as a web server, whereas Tomcat
incorporates a servlet container, is written in Java, and is therefore an application server
rather than a web server.

Because of the way the HTTP protocol works, the code controlling user interface
behavior also had to respond to HTTP requests, since the HTML returned by the server
determines what is displayed in the browser; this user interface code therefore had to
be written as servlet code as well. Recall that one of the most significant problems with
client-server systems was precisely that they mixed user interface code with business
logic code, leading to programs that were difficult to maintain.

The 'Java Server Pages' (JSP) technology, also introduced in Tomcat, made it possible
to encode user interface functionality directly as Java code embedded in HTML. As
shown on the right-hand side of Figure 2.2, such 'JSP' files are converted into servlets
through dynamic compilation. JSPs made it possible to cleanly separate the user
interface from the business logic implemented in the application code, which improved
the maintainability of the program.
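
For comparison with the servlet above, the same greeting written as a JSP (HTML with
embedded Java, which the container compiles into a servlet on first use) might look like
the following sketch; the file name hello.jsp and the request parameter are illustrative.

<%-- hello.jsp : compiled by the container into a servlet on first request --%>
<html>
  <body>
    <% String name = request.getParameter("name"); %>
    <h1>Hello, <%= (name == null ? "world" : name) %></h1>
  </body>
</html>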

However, although servlet containers implemented some of the load balancing
capabilities of TP monitors, they could not yet scale to very large transaction volumes,
and in 1999 Sun Microsystems established the 'Java 2 Enterprise Edition' (J2EE)
standard to fill this gap. The 'Enterprise Java Beans' (EJB) application execution
container was one of the innovations introduced with this standard. Application code
packaged as EJBs can be deployed, via the EJB container, into processes separate from
the core web application server, opening the door to distributed multiprocessor
execution and thus potentially significant improvements in overall performance.
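
A hedged sketch of what application code packaged for an EJB container can look like
(using the later, annotation-based EJB 3 style rather than the original J2EE 1.2
deployment descriptors) is shown below; the bean name and method are hypothetical.

import javax.ejb.Stateless;

// The container manages this bean's lifecycle, pooling, transactions and
// security, and may host it in a process separate from the web tier.
@Stateless
public class OrderServiceBean {

    /** Hypothetical business method exposed to other tiers. */
    public double priceOrder(String item, int quantity) {
        double unitPrice = 9.99;   // in reality this would come from the data tier
        return unitPrice * quantity;
    }
}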

The terms "security," "transactions," "improved control over database connection


pooling," and "Java-based connectors to older system architectures" are all examples
of the kind of services that fall under this category. It is important to keep in mind that
Tomcat was the very first "web application server," as we discussed earlier; however,
popular vernacular sometimes makes the error of incorrectly referring to this as a web
server, reserving the word application server solely for situations in which an EJB
container is offered. It is important to keep in mind that Tomcat was the very first "web
application server," as we discussed earlier. It is essential to bear in mind that Tomcat
was the very first "web application server," as was covered in an earlier section of this
article.

FIGURE 2.3. Web application server technology stacks

Source: Enterprise Cloud Computing by Gautam Shroff

Our discussion so far has focused on the Java platform and its web and application
servers. Figure 2.3 also depicts the rival family of servers that Microsoft was creating
at the same time, shown to the right of the J2EE stack. Microsoft developed a combined
web and application server known as "Internet Information Server" (IIS), which runs
only on the Windows operating system. In contrast to the J2EE stack, however, it
supported a broad range of programming languages, including not only C and C++ but
also Microsoft-specific languages such as C# ('C sharp') and VB (Visual Basic).

In this case the COM environment that Microsoft had built into the Windows operating
system served as the application container, allowing several processes to run
concurrently and interact with one another. The most recent versions of this stack are
collectively known as the .NET framework. To a large degree, the application server
achieved its primary goal of carrying out large-scale commercial transactions within a
purely web-oriented architecture.

All high-performance online applications make use of horizontal scaling, distributing
requests across very large clusters of application servers, often referred to as "server
farms." Installing, load balancing and generally administering such large-scale
distributed environments, which may involve hundreds or even thousands of servers
spread across a number of different locations, has become a significant challenge in its
own right.
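
The core idea of horizontal scaling can be sketched in a few lines: a front-end load
balancer keeps a list of application servers and hands each incoming request to the next
one in turn (real load balancers also track health, sessions and weights, none of which
is shown). The host names and the simple round-robin policy here are illustrative only.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinBalancer {
    private final List<String> servers;           // the 'server farm'
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    /** Pick the server that should handle the next request. */
    public String choose() {
        int index = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("app01.example.com", "app02.example.com", "app03.example.com"));
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + lb.choose());
        }
    }
}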

Much of the fault tolerance and manageability that was built into mainframes has been
lost along the way, increasing administration costs and reducing the organization's
ability to adapt quickly to changing circumstances. These data centers have effectively
evolved into massive "IT plants," comparable in sophistication to advanced nuclear
power plants. The complexity of these environments is one of the primary motivations
for the recent surge of interest in cloud computing architectures, which aim to address
such challenges in a scalable and largely automated manner.

INTERNET OF SERVICES

Once applications began to be web-enabled, it seemed only natural to provide the
general public with access to some of their functionality. For instance, web-based
access to back-end programs allowed end-users to track the status of courier packages,
request price quotes for services, and check their own bank balances. Soon afterwards,
the introduction of secure payment mechanisms meant that consumers could also place
orders and make payments online.

The next step was programmatic access to the same applications that could already be
reached over the internet, a natural consequence of users having extensive web-based
access to them through a browser interface. In the most basic sense, a piece of software
can impersonate a browser without the web-enabled interface being any the wiser; but
besides being laborious, this technique was (and still is) open to abuse and undesirable
activity, such as denial-of-service attacks. Web services were developed, in the first
instance, to meet this need for legitimate programmatic access. Although we will cover
web services in more detail later, the primary emphasis of this section is on their
historical development.

"interoperable machine-to-machine interaction over HTTP" is what the World Wide


Web Consortium (W3C) refers to when it talks about a "web service." The markup

50 | P a g e
language known as SGML (standardized general markup language), which was first
developed by IBM in the 1960s and was widely used in the realm of mainframes for
the purpose of report writing, was the ancestor of the HTML format for the purpose of
exchanging data over the internet. SGML is a successor of IBM's GML, which was
developed in the 1960s and was used extensively in the world of mainframes for the
purpose of report production. SGML was created by IBM. HTML was highly popular,
but due to the fact that its syntax is not 'well-formed,' it was not the greatest option for
usage in machine-to-machine interactions due to the constraints it had in this field. For
instance, it is not necessary to "close" a statement, such as the one in HTML, by adding
a matching "body" tag and a "body" tag that ends with a "body" tag.

SGML, on the other hand, was well structured but too complex to be convenient. In
1997 the World Wide Web Consortium (W3C) introduced the Extensible Markup
Language (XML), a streamlined version of SGML that also gave users the ability to
construct well-formed HTML (XHTML), and encouraged browser developers to
support XML in addition to HTML.

The World Wide Web also gave rise to a standardized naming and locating scheme, the
Uniform Resource Identifier (URI), a string of characters that can identify resources or
even abstractions. The familiar Uniform Resource Locator (URL), or web address,
which identifies a physical location on the internet or within a network, is one kind of
URI. Together with the use of XML as a basis for interoperable message formats, this
paved the way for formal web service standards. The XML-RPC standard was
established to emulate remote procedure calls over HTTP, with the data transmitted in
an XML-based format.

XML-RPC, like traditional RPC, restricts itself to simple data types such as integers
and strings. To support complex, hierarchical and object-oriented data, the SOAP
protocol was developed. In this protocol the Web Services Description Language
(WSDL), an XML format, describes the structure of the messages exchanged as input
and output parameters of published "services." These messages are then packaged as
SOAP messages, another XML format, and transmitted over HTTP. Using SOAP
(originally the Simple Object Access Protocol), programs can invoke web services
published by other programs across the internet.
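
Under the hood, a SOAP call is simply an HTTP POST whose body is an XML envelope
described by the service's WSDL. The sketch below sends such an envelope using only
the standard java.net classes; the endpoint URL, SOAPAction header value and message
contents are entirely hypothetical, and real clients would normally be generated from
the WSDL rather than hand-written.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SoapCallSketch {
    public static void main(String[] args) throws Exception {
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "  <soap:Body>"
          + "    <getQuote xmlns=\"http://example.com/stock\"><symbol>ACME</symbol></getQuote>"
          + "  </soap:Body>"
          + "</soap:Envelope>";

        HttpURLConnection con = (HttpURLConnection)
                new URL("http://example.com/services/StockQuote").openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        con.setRequestProperty("SOAPAction", "\"http://example.com/stock/getQuote\"");

        try (OutputStream out = con.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }
        // The HTTP response body would contain the SOAP response envelope.
        System.out.println("HTTP status: " + con.getResponseCode());
    }
}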

Around the same time that web services standards were being developed and put to
pioneering use by companies such as FedEx, Amazon and eBay (for placing and
tracking orders and shipments via web services), a storm was brewing inside the data
centers of large enterprises over how to integrate a proliferating suite of applications
and architectures, ranging from mainframe, client-server and TP monitor technologies
to the emerging web-service-based systems. Traditional approaches to integration
focused on the laborious task of painstakingly discovering and publishing the
functionality contained within each individual business system.

These functions had to be made externally accessible so that they could be used by
programs outside the system. Because of semantic differences in the way various
systems treated similar data, integration became an application in its own right. For
instance, in a human resources (HR) system the term "employee" may include retirees,
whereas in other systems, such as payroll, it usually does not.

The application server architecture that was emerging at the time was seen as the ideal
vehicle for building such integration layers, because it allowed older systems to be
accessed in a uniform manner. Software vendors built integration products on top of
application servers and marketed them as "enterprise service buses."

These products alleviated some of the integration challenges described above. As it
became apparent that the SOAP protocol was proving useful for integrating the
applications of different organizations (B2B integration), integration middleware also
began to incorporate and promote SOAP and XML-based integration layers inside the
enterprise data center. The term "service oriented architecture" (SOA) began to attract
a great deal of attention, most commonly as a label for the use of SOAP and XML for
application integration, and it continues to do so today.

In some cases the use of standards, such as data models expressed in XML, drove the
resolution of semantic integration issues between application data. In the great majority
of cases, however, this critical aspect of the integration problem was obscured by the
details of the new technology. The expected benefits of service-oriented architecture
(SOA) were greater application interoperability and lower running costs: if application
systems were packaged as bundles of published services, it would be far simpler to
adapt their use to the changing needs of a business over time.

That promise has yet to be fully realized; so far, a great deal of effort has been invested
in SOA projects that are overly technology-focused, with little return to show for it.
Meanwhile, the world of internet-delivered services did not seem entirely satisfied with
standardizing on SOAP-based interfaces. In 2000, support was added in JavaScript for
a protocol with the unremarkable name of XMLHttpRequest.

JavaScript code executing inside the browser was already commonly used to provide
dynamic user interface behavior within HTML pages, for example simple field
validations. Using the XMLHttpRequest API, however, such 'in-browser' code can also
make HTTP requests of its own, and these requests may be directed at servers other
than the one that served the main HTML page being displayed. Google was the first to
make significant use of this capability, both to provide rich interactive behavior in
Gmail and, more importantly, to 'publish' its Google Maps service as a 'mashup.'

As a result, anyone could publish an HTML page containing a little JavaScript code,
supplied by Google, to display a Google Map; this code would internally call Google's
servers to obtain the data relevant to the map in question. A novel approach to
application integration was thus born.

In this approach the focus of integration shifted from server-to-server communication
to integration taking place within the client. This class of user interfaces came to be
called AJAX, short for "Asynchronous JavaScript and XML." AJAX-based mashups
have also, in a sense, democratized application integration, giving rise to rich JavaScript
applications that let users 'mash up' services of their choice to build their own
personalized web pages.

Figure 2.4 shows the architectures of web services and mashups, both of which can be
used for integration over the internet, side by side. A mashup provides the user with a
richer client-side interface (such as a Google Map) built in JavaScript, which
communicates with the server over HTTP using asynchronous requests issued via
XMLHttpRequest. Web services, by contrast, are typically server-to-server: application
server code uses HTTP and SOAP to access published services residing on another
server.
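
On the provider's side, the service behind a mashup is often just an HTTP endpoint that
returns data (XML in early AJAX applications, JSON today) for the in-browser
JavaScript to render. A hedged server-side sketch, again using the javax.servlet API
with made-up paths and data, is shown below; the browser-side XMLHttpRequest code
that would call it is not shown.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint, e.g. mapped to /mapdata, that an AJAX client calls.
public class MapDataServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/xml");
        PrintWriter out = response.getWriter();
        // Return a small XML fragment for the in-browser code to parse and display.
        out.println("<markers>");
        out.println("  <marker lat=\"12.97\" lng=\"77.59\" label=\"Head office\"/>");
        out.println("</markers>");
    }
}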

Figure 2.4. Internet of services

Source: Enterprise Cloud Computing by Gautam Shroff

On the provider's side, this enables forms of access that are not only more efficient but
also distinctive to that specific provider. We will examine these mechanisms in more
depth later, along with an alternative to the SOAP protocol known as REST
(representational state transfer), an architectural style built directly on HTTP that is fast
becoming the protocol of choice for remote data access, particularly in the context of
cloud computing. During the early stages of formulating the web services
specifications, a significant amount of emphasis was placed on UDDI (universal
description, discovery and integration).

The idea was that businesses would register the web services they published in a public
UDDI registry, enabling software programs to discover the services they needed
automatically and dynamically. UDDI was also promoted as a panacea for application
integration within enterprises, a technical solution that would eliminate semantic
inconsistencies between application services. Since the application integration problem
requires human intervention to resolve semantic discrepancies, it is very unlikely that
it can be solved by so automated an approach.

In each of these scenarios, the optimism was misplaced. Web services technology has
certainly contributed to the expansion of machine-to-machine communication over the
internet, particularly when the term is used in its broader sense, encompassing mashups
and REST interfaces. At the same time, the vision of a universal UDDI-based "service
broker" overlooks an important point: effective cooperation between customers and
suppliers, and the fulfillment of contractual obligations, always involves human
decision-making rather than automated decision-making alone.

Such contractual issues need to be factored into the software as a service and cloud
computing paradigms, both of which place increasing emphasis on the human element.
As we shall see, this is also an important lesson for the future development of cloud
computing. Many accounts paint a picture of cloud computing comparable to the power
grid, in which most participants remain anonymous, an analogy made possible in part
by the many layers of organizational separation and contractual obligation between
production and consumption; we investigate these issues in more detail in the next
section.

In recent years the concept of cloud computing has been the subject of substantial
debate among IT professionals and academics alike, with intense discussion on blogs
and websites and within a wide variety of academic projects. A number of
entrepreneurial ventures have been launched to help organizations harness the cloud
and migrate onto it, despite the many concerns, obstacles, benefits and constraints
involved, and despite the lack of a comprehensive understanding of what cloud
computing can achieve. On one hand there were the large cloud computing IT providers
such as Google, Amazon and Microsoft, who had begun offering cloud computing
services on what appeared to be a demonstration and trial basis, even though this was
never formally stated.

On the other hand were smaller cloud computing IT providers, such as Rackspace, that
had begun providing cloud computing services of their own. They would charge clients
fees that in certain cases reflected pricing approaches highly appealing to
customers, strategies designed to attract more of them. Cloud computing itself was
shown to be a reality, and the "techno-commercial disruptive business model" was in
fact generating a larger return on investment (ROI) for a company than more
conventional investments in information technology.

On the other side, these first cloud computing service offerings arrived far too
rapidly. The businesses that offered them struggled to address valid concerns about
distributed systems and business models, and they had to contend with a diverse set
of unanswered technical and research questions. All of these factors pointed to the
fact that cloud computing services had not yet reached their full potential in terms
of maturity.

In recent years, several attempts have been made to formulate a definition of the
term "cloud computing", and the majority of these efforts have not succeeded. The
task has proven more difficult than first expected, particularly in light of the
rapid pace of technological progress and the more recent formulations of business
models for cloud services. The following, in our view, best describes its meaning:
"It is a techno-business disruptive model that utilizes distributed large-scale data
centers that can either be private or public or hybrid offering customers a scalable
virtualized infrastructure or an abstracted set of services qualified by
service-level agreements (SLAs) and charged only by the abstracted IT resources
consumed."

The vast majority of businesses in today's market operate their own data centers to
store and process information, and information technology serves as the primary
support system for organizations of every size, even the smallest. Large
organizations almost always have data centers positioned in a broad variety of
locations around the world, assembled from many generations of hardware and software
marketed and sold over time by a wide range of IT companies in different countries.
The overwhelming majority of these data centers, in both the United States and
Europe, have a capacity greater than the peak loads they experience, which enables
them to accommodate a wider range of workload demands than they otherwise could.

Load fluctuation will be large if the company's activities take place in a sector
subject to seasonal or cyclical shifts. What is often found, therefore, is that the
available capacity of information technology resources is far greater than the
typical demand, partly because information technology continues to advance at a
rapid pace. This adds weight to the observation that a considerable amount of the
capacity now available is left underutilized. A significant number of data center
management teams have been continually reinventing their management methods and the
technologies they deploy in order to wring out the very last usable computing
resource cycle through appropriate programming, system configurations, service-level
agreements, and system administration.

The ability to offload growing demand from existing information technology
installations onto the cloud is one reason why cloud computing is appealing to
businesses: they pay only for the resources they actually use, and they are freed
from the responsibility of operations and maintenance.

THE PROMISE OF THE CLOUD

The overwhelming majority of customers who use cloud computing services provided by
some of the world's largest and most advanced data centers are not concerned with
the intricacy of the systems running in the background or how they operate,
particularly given the great range of operating systems and programs deployed on
them. What stands out to these customers is the clarity, coherence, and
user-friendliness of the cloud computing service abstractions. By using cloud
computing to meet any additional, cyclical information technology needs, small and
medium-sized businesses have been able to generate massive and significant cost
savings.

A number of success stories of this kind have been documented and analyzed on
websites across the internet. The economics of using cloud computing services to
meet the seasonal IT loads of organizations, together with the associated
trade-offs, has been the object of much study among IT managers and technology
architects; this economics is widely referred to as "cloudonomics", a term of
relatively recent coinage.

FIGURE 2.5. The promise of the cloud computing services

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya

The promise of the cloud, both on the commercial front (the alluring cloudonomics)
and on the technical front, has significantly assisted CxOs in spinning out numerous
non-mission-critical IT demands from the ambit of their captive conventional data
centers and placing them with the appropriate cloud service, as illustrated in
Figure 2.5. In the realm of information technology, these requirements generally
have some elements in common: they were often centered on the web, they mirrored
seasonal demands on IT, they were amenable to parallel batch processing, and, being
non-mission-critical, they did not have particularly stringent requirements for data
security.

Many small and medium-sized enterprises, along with some larger ones, have since
developed uses for cloud computing that are far more ambitious than those of such
cautious users. A number of new businesses have started their information technology
operations by relying solely on cloud services, and the majority of them have found
this tactic highly effective, yielding an extraordinary return on investment. These
experiences have in turn led a large number of major organizations to undertake
pilot programs for making use of the cloud. SAP, which many of the most successful
firms in the world rely on to manage their internal operations, is one example.

SAP has been conducting tests to determine whether it is possible to operate its
product suite, including SAP Business One and SAP NetWeaver, on Amazon cloud
technology. According to projections made by Gartner, Forrester, and a number of
other market research organizations, by the year 2012 a significant proportion of
the most prosperous businesses in the world would have moved the majority of their
information technology requirements onto services housed in the cloud, in many
countries. This would serve as evidence of how ubiquitous the advantages and
benefits obtainable through cloud computing had become.

FIGURE 2.6. The cloud computing service offering and deployment models

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya

In point of fact, the cloud's capacity to deliver on its promise has produced a
considerable domino effect. Figure 2.6 shows the models used for the deployment of
the various service offerings provided by the cloud. Because it is so simple to use,
cloud computing has emerged as a fascinating option for the chief financial officers
and chief technology officers of companies, which is one of the primary reasons for
its popularity in recent years. The success of this endeavor may be credited, at
least partially, to the major data center service providers, now more often known as
cloud service vendors, for whom the scope of their commercial operations was the key
element behind this level of success.

The primary participants have been Google, Amazon, and Microsoft, together with the
open-source Hadoop platform created within the Apache ecosystem, and a few other
corporations. As can be seen in Figure 2.6, the cloud service solutions that these
companies make accessible to their customers can generally be broken down into three
primary streams, referred to as Infrastructure as a Service (IaaS), Platform as a
Service (PaaS), and Software as a Service (SaaS). Programmers favored PaaS options
such as Google App Engine (for Java/Python programming) or Microsoft Azure (for .NET
programming), whereas IT managers and system administrators favored IaaS options
such as those provided by Amazon for the majority of their virtualized IT
requirements.

Users of enterprise-level software, for their part, came to the realization that
they had been utilizing the cloud all along, because the particular software package
they relied on was offered as a service; to put it another way, it was a SaaS
offering. This was the case regardless of whether or not they had consciously chosen
the cloud to store their information. In the field of software as a service,
Salesforce.com exemplified the finest practices to be found on the web.

Viewed from a technological perspective, IaaS offerings have so far been the most
successful and most widely adopted kind of cloud solution, partly because customers
are given the option to move their data and workloads into the cloud when they sign
up for the service. PaaS, on the other hand, has a significant amount of potential
that has not yet been realized: each and every one of the brand-new application
development projects being carried out on the cloud follows the PaaS paradigm.
Businesses experienced an enormous effect as a direct consequence of adopting
infrastructure as a service (IaaS) and platform as a service (PaaS) in the form of
cloud-based services, consumed in a manner analogous to software as a service
(SaaS).

The vast majority of users are unaware that the cloud is involved in the majority of
their online activities. This holds true whether the user is conducting a search
(for example, on Google, Yahoo, or Bing), sending email (for example, on Gmail,
Yahoo Mail, or Hotmail), or participating in social networking (for example, on
Facebook, Twitter, or Orkut). The deployment and consumption of cloud applications
can be modeled at three distinct levels: the public cloud offerings from cloud
vendors, the private cloud initiatives within large enterprises, and the hybrid
cloud initiatives that leverage both the public cloud and the private cloud or
managed-service data centers. The services centered on infrastructure as a service
(IaaS) supply abstracted, virtualized, and scalable hardware; processing power,
storage capacity, and bandwidth are all examples of what may be included in this
tier.

For instance, as one can see from its pricing tariffs for the year 2009, Amazon
provided a total of six different tiers of abstracted Elastic Compute Cloud (EC2)
server capacity: the "small instance", "large instance", "extra-large instance",
"high-CPU instance", "high-CPU medium instance", and "high-CPU extra-large
instance". Each of them comes with a defined amount of random access memory (RAM)
and storage space, in addition to performance guarantees and support for bandwidth.
The provision of support for programming platforms is the core emphasis of the PaaS
solutions; the runtimes of these platforms make implicit use of the cloud services
provided by the various cloud service providers. PaaS technologies are notably
vendor-locked, yet a considerable number of companies have already started creating
brand-new applications with their help.
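
The following is a minimal sketch, not taken from the source, of how such abstracted
IaaS capacity can be requested programmatically; it assumes the Python boto3 library
for Amazon EC2, and the AMI identifier and region shown are placeholders (the
instance type name simply mirrors the "small instance" tier mentioned above).

```python
# Requesting an abstracted compute tier from an IaaS provider (illustrative).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-00000000",   # hypothetical machine image identifier
    InstanceType="m1.small",  # corresponds to the "small instance" tier
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)
```

The point of the sketch is simply that the unit of consumption is an abstracted
capacity tier rather than a physical machine; the provider decides where and on what
hardware the instance actually runs.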

PaaS-based applications, on the other hand, can outperform IaaS-based services
because they make use of the inherent cloud support made available by the
programming platform. To accomplish the primary objective of the SaaS-on-cloud
service offerings, which is to simplify the use of extensive software packages, it
is necessary to take full advantage of what cloud computing provides. The
overwhelming majority of individuals who use these programs are unaware of the cloud
support integrated into them; indeed, most of them (if not all) could not care less
about it.

In point of fact, a significant number of the functions normally included in such a
software package quietly rely on the cloud computing platform running behind the
scenes. Users of Gmail, for instance, rarely give a second thought to the amount of
storage space being used, to whether an email needs to be deleted, or to where it is
being preserved. These details are invariably handled by the underlying cloud, where
storage can be readily expanded, even though the majority of users neither know nor
care which system provides it or where it is housed.

CHALLENGES IN THE CLOUD

Although the cloud service offerings present a simplified view of information
technology in the case of infrastructure as a service (IaaS), a simplified view of
programming in the case of platform as a service (PaaS), and a simplified view of
resource utilization in the case of software as a service (SaaS), the underlying
systems-level support challenges are enormous and extremely complex. These
challenges are the consequence of the need to present a picture of computing that is
consistently uniform and robustly simplified, even though the underlying systems are
prone to failure, heterogeneous, resource-hogging, and exposed to major security
vulnerabilities.

As can be seen in Figure 2.7, the qualities normally desired of distributed systems
appear very similar to the promise the cloud bears, indicating that the two are
reasonably equivalent. Invariably, irrespective of whether one is using IaaS, PaaS,
or SaaS cloud services, one is offered features that create the notion of flawless
network reliability, of "instant" or "zero" network latency, and perhaps of
"infinite" bandwidth, and so on.

These claims are often backed up by a wide variety of marketing buzzwords. Resilient
distributed systems, however, are constructed while keeping in mind that there are
certain well-known fallacies that must be avoided as much as possible throughout
design, development, and deployment. Cloud computing therefore serves the apparently
contradictory objectives of portraying an idealized picture of the services it
delivers while at the same time ensuring that the underlying systems are managed in
a manner compatible with the requirements imposed by the real world.

Figure 2.7. ‘Under the hood’ challenges of the cloud computing services
implementations.

Source: Cloud Computing: Principles and Paradigms, by Rajkumar Buyya

Meeting those real-world requirements falls to the administrators of the underlying
systems. In point of fact, there are a variety of obstacles that need to be cleared
away before cloud computing services can be used successfully in any capacity; a few
of them are listed in Figure 2.7. Issues connected with protection and security are
among the most important of these concerns, and the Cloud Security Alliance is
currently making substantial efforts to find solutions to a large number of them.

BROAD APPROACHES TO MIGRATING INTO THE CLOUD

Because cloud computing is a "techno-business disruptive model" and sits at the top
of Gartner's list of the top 10 key technologies to watch in 2010, going into the
cloud is poised to become a large-scale endeavor in a variety of organizations. The
term "cloudonomics" refers to the economic rationale for using the cloud and is
absolutely necessary to the viability of cloud computing in commercial settings.
What kind of financial investment is required, both in the short term and in the
long term, for an organization to transfer its business to the cloud?

The use of cloud computing removes the need for any initial expenditure on hardware,
so the only costs that need to be covered are the continuing operating expenses. To
what degree, however, does this satisfy all of the strategic criteria for the
information technology used in enterprises? Is there a fair probability that the
total cost of ownership (TCO) will be significantly lower than the costs associated
with operating one's own private data center? When planning brand-new information
technology projects for an organization, decision-makers, IT managers, and software
architects are presented with a range of challenging conundrums of this kind.

WHY MIGRATE?

Justification for migrating an enterprise application to the cloud can be
established for a variety of reasons, including financial, commercial, and business
considerations in addition to technical ones. The majority of these activities
emerge as initiatives within an organization's adoption of cloud technologies, which
ultimately results in the integration of business applications currently operating
out of captive data centers with the new ones established on the cloud. The
implementation or adoption of cloud computing services is one example of a use case
that can benefit from migration.

The process of moving an application to the cloud may be carried out via any one of
a number of different approaches. Either the application is clean and independent,
in which case it can be run as is; or some degree of code needs to be modified and
adapted; or the design (and therefore the code) needs to be migrated into the
environment of a cloud computing service; or the migration reworks the core
architecture for a cloud service setting, resulting in what is effectively a new
application. Alternatively, the application may be transferred in its current state
while the manner in which it is used is migrated, which still requires some
adaptation and adjustment. To put it another way, migration may take place at any
one of five levels: application, code, design, architecture, or usage.

With appropriate simplification, the migration of an enterprise application can be
summarized as follows:

Let P denote the application that was being run in the captive local data center
prior to the migration, A the portion of the application that is migrated into a
(hybrid) cloud, B the portion of the application that continues to run in the
captive local data center, and C the portion of the application that has been
optimized for the cloud. If an enterprise application cannot be transferred in its
entirety, it is conceivable that some components of the application continue to run
in the captive local data center while the other components are migrated into the
cloud; this is precisely an example of hybrid cloud usage. Once the application has
been completely transferred to the cloud, this condition becomes irrelevant and no
longer applies.
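
Using the symbols just defined, the decomposition can be written compactly as below;
this is a schematic reconstruction of the relation the text describes, not an
equation quoted from the source.

```latex
% P: original application in the captive data center
% A: part migrated to the (hybrid) cloud, B: part left on-premise
% C: the cloud-optimized form of the migrated part
\[
  P \;\longrightarrow\; A + B \;\longrightarrow\; C + B
\]
% When the application is migrated in its entirety, B is empty and all of P
% ends up running on the cloud in its optimized form C.
```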

The migration of the business application P may take place at any one of these five
levels, namely application, code, design, architecture, or usage, and it is also
feasible for the migration to take place at several of these levels simultaneously;
it is not necessary for every component to be relocated. When we combine this with
the type of cloud computing service offering being used, whether the IaaS model, the
PaaS model, or the SaaS model, we have a wide variety of migration use cases, and
migration architects need to carefully evaluate and prepare for each of them in
order to be successful.

To provide a simplified picture of this, the following use-case counts apply to the
migration scenarios: there are a total of thirty different use-case scenarios that
might apply when migrating to a solution provided by a cloud-based
infrastructure-as-a-service provider, and twenty different use-case scenarios that
might be considered when transitioning to a PaaS system. Usage migration is the sole
migration that happens when switching to a SaaS service; there is no simultaneous
migration of the business application itself. This is analogous to transitioning
away from an ERP system that was already in place on-premises toward SAP made
available through the cloud.

Even though considerable solutions are of course available for each of these
migration use cases, organizations have streamlined their migration best practices
to account for the great majority of the cases that commonly arise. In point of
fact, the migration sector is highly dependent on individualized and personalized
best practices, and a significant portion of these recommendations are directed at
the level of the individual constituents of a business application, one such
instance being the relocation of application servers or corporate databases.
Cloudonomics: migration to the cloud is almost always motivated by a desire to
reduce both information technology capital expenditure (capex) and operational
expenditure (opex).

Utilization of the cloud comes with a number of advantages, which may be classified
as either short-term or long-term depending on the perspective taken. One advantage
that can be realized in the short term is the balancing of seasonal and highly
variable IT loads through opportunistic migration, while long-term benefits come
from using the cloud's capabilities to the greatest extent possible. As of the year
2009, a significant number of the difficulties and constraints faced by cloud
computing services still needed to be overcome before they could be used in a manner
that is both long-term and sustainable, because cloud computing was still in its
infancy.

A statement of when a migration may be economically viable or tenable lies at the
foundation of the cloudonomics model devised by Armbrust and his colleagues: if the
typical costs of using an enterprise application on the cloud are much lower than
the costs of using it in one's captive data center, and if the cost of migration
does not add to the pressure on ROI, then there is a strong argument for moving onto
the cloud.
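
As a rough illustration of this tenability test, the comparison might be sketched as
below; the function name, the figures, and the simple fixed-horizon payback rule are
all assumptions for the sake of the example, not anything prescribed by the source.

```python
# A toy cloudonomics check: is a migration economically tenable?
def migration_is_tenable(captive_monthly_cost: float,
                         cloud_monthly_cost: float,
                         one_time_migration_cost: float,
                         horizon_months: int = 36) -> bool:
    """True if cloud usage plus the one-time migration cost undercuts the
    captive data center cost over the chosen planning horizon."""
    captive_total = captive_monthly_cost * horizon_months
    cloud_total = cloud_monthly_cost * horizon_months + one_time_migration_cost
    return cloud_total < captive_total

# Example: $40k/month captive vs. $25k/month on the cloud, $200k to migrate.
print(migration_is_tenable(40_000, 25_000, 200_000))  # True over 36 months
```

A fuller model would also account for licensing, SLA penalties, and the possibility
that the provider's tariffs change, which is exactly the point made in the next
paragraph.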

In addition to these costs, other aspects play an important role in determining the
cloudonomics of migration, including the difficulties connected with licensing
(perhaps for portions of the business application), SLA compliance, and the pricing
of the different cloud service offerings. By and large, the vast majority of cloud
service providers offer a selection of pricing models for various combinations of
elastic compute, elastic storage, and elastic bandwidth. There is no getting around
the fact that these pricing regimes may themselves change, so the cloudonomics
supporting a migration needs to be able to adjust to price changes in a suitable and
relevant way.

DECIDING ON THE CLOUD MIGRATION

In point of fact, several proofs of concept and prototypes of the business
application are studied on the cloud in order to support an informed decision about
migrating into it. Following the migration, the return on investment (ROI) should
remain positive across a wide spectrum of price fluctuations. In order to arrive at
a decision on whether to migrate, it is necessary either to have a complete
understanding of the factors that are driving the migration or to take the pragmatic
approach of consulting a group of knowledgeable specialists.

Either of these approaches can lead to a conclusion. In the second case, the
application of wideband Delphi techniques to the decision-making process is
analogous to the approach that underpins the estimation of software projects. The
strategy we implement is as follows: a questionnaire is sent to a specific group of
individuals selected for their knowledge of both technological and commercial
matters. The survey is made up of many different categories of significant
questions, all of which relate in some way to the effect that migrating the business
application would have on the information technology (IT) of the organization.

Suppose that there are M such classes of questions. Within the comprehensive
questionnaire, each category of questions is given a certain relative weightage.
Suppose further that, out of the M distinct classes, the largest contains a maximum
of N questions. The weightage-based decision process can then be modeled as a
weightage matrix, as shown by the following relation:

B_l <= sum over i = 1..M and j = 1..N of (w_i * c_ij * x_ij) <= B_h

where B_l is the lower weightage threshold, B_h is the higher weightage threshold,
w_i is the relative weightage of question class i, c_ij is the specific constant
allotted to question j of class i, and x_ij is a percentage between 0 and 1
indicating the degree to which the answer to that question is relevant and
applicable. Because only one class of questions contains all N questions, the
variables corresponding to the missing questions in the other classes are assigned a
value of null, so they contribute nothing to the sum.

A lower and a higher threshold have been established in order to exclude any cases
in which migration is not essential. A reduced version of this method, using a
balanced scorecard as the foundation for decision making, may be supplied to
clients; Dargha's treatment of cloud computing adoption offers an instance of this
approach.
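
The weighted scoring described above lends itself to a very small computational
sketch. In the snippet below the class weights, per-question constants, relevance
fractions, and the two thresholds are all invented for illustration; only the shape
of the calculation follows the description in the text.

```python
# Weightage-based migration decision (illustrative values throughout).
weights = {"technical": 0.4, "business": 0.35, "financial": 0.25}   # w_i

# Per class: (constant c_ij, relevance x_ij in [0, 1]); classes with fewer
# than N questions simply contribute nothing for the missing entries.
answers = {
    "technical": [(10, 0.8), (8, 0.6), (6, 0.9)],
    "business":  [(9, 0.7), (7, 0.5)],
    "financial": [(10, 0.9)],
}

B_LOW, B_HIGH = 10.0, 25.0   # lower and higher weightage thresholds

score = sum(weights[cls] * c * x
            for cls, questions in answers.items()
            for c, x in questions)

print(f"weighted score = {score:.2f}")           # 12.96 with these values
print("migrate" if B_LOW <= score <= B_HIGH else "do not migrate")
```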

70 | P a g e
SOFTWARE AS A SERVICE AND CLOUD COMPUTING

EMERGENCE OF SOFTWARE AS A SERVICE

Even in the earliest phases of the internet, it became plainly clear that separate
software products could be bundled together and marketed as "application services"
run on a remote server; this was apparent even before it became commonplace to
execute applications remotely over the internet. Application service providers
(ASPs), such as Corio, were among the pioneers of this model. These ASPs offered
hosted versions of software packages that were, for the most part, identical to
those that businesses operated within their own data centers. Under this model,
customers accessed the software via the internet, while the software itself was
stored and administered entirely within the ASP's data centers.

A number of factors caused this first group of application service providers (ASPs)
to fail. To begin with, there simply was not enough internet bandwidth available at
the time. Second, because the vast majority of the most popular software products,
such as ERP and CRM systems, were client-server applications, ASPs resorted to the
relatively inefficient practice of "screen scraping" in order to enable remote
access to these programs.

They did this by utilizing methods for remote desktop sharing, such as Citrix
MetaFrame, which essentially communicated screen-buffer images in both directions
over the network. Even more critically, the early application service providers
simply were not built to deliver considerable cost savings in comparison with the
older model: they were required to pay licensing rates equivalent to those charged
by the suppliers of packaged software, yet they were unable to add significant value
by exploiting economies of scale.

One prominent early exception was Intuit, which differentiated itself from its
competitors by hosting its desktop accounting program QuickBooks. This was followed
by the rapid ascent of Salesforce.com and the hosted customer relationship
management (CRM) solution it supplied. These successes may be attributed, at least
in part, to the fact that the applications in question were entirely web-based and
were designed from the outset to be shared by a number of different customers.

These accomplishments resulted in a new generation of hosted applications, each
constructed from the ground up on web-based architectures. In order to separate
these applications from the earlier ASPs built on more conventional architectural
principles, the term "software as a service" (SaaS) was coined for them. In
comparison with the conventional practice of having software deployed and managed by
an organization's own information technology department, these SaaS solutions
provide the customer with three significant benefits. To begin with, there was no
requirement for the company's information technology team to be involved: business
customers could subscribe to these services through the internet using nothing more
than a credit card.

Users who were disappointed with the frequently time-consuming and drawn-out process
of engaging their corporate IT departments, as well as with lengthy project lead
times, delays, mismatched demands, and other issues, found this a pleasant respite.
Second, users discovered that, once they had the necessary authorization
credentials, they had a degree of control over the "instances" of the SaaS products
they were using. Customers of Salesforce.com, for example, were able to add custom
fields, develop new forms, and design workflows, all from within the same
browser-based interface. By contrast, installing equivalent changes in large CRM
systems like Siebel was a procedure that had to be carried out by corporate IT and
was subject to the regular delays associated with projects of that kind.

In this way, business users could effectively build a bespoke CRM for themselves,
once again without assistance from the organization's IT team. Third, there was no
need for customers to be concerned about the delivery of product upgrades. Users of
the Salesforce.com solution would not experience any disruption in service while the
company upgraded it; instead, they would simply find that the application they were
utilizing had improved in terms of the capabilities it offered.

In addition, these upgrades appeared to take place significantly more frequently
than for programs maintained by the company's own IT department, with upgrades
arriving in weeks rather than many months or years. In point of fact, the meteoric
rise of Salesforce.com, which began in 1999, contributed directly to the eventual
collapse of Siebel, at the time the most successful CRM provider, which was
ultimately driven to sell itself to Oracle, its most significant competitor. In the
context of customer relationship management, the data involved entails client lists
as well as contact information, both of which are obviously very sensitive and
significant pieces of information for an organization.

It is vital to point out that these beneficial characteristics of the SaaS model
were seen as valuable enough to outweigh the fact that, when utilizing SaaS, user
information is stored in data centers managed by the SaaS provider. Because this is
such a crucial consideration, it must be kept in mind at all times. In many
different types of enterprise, the sales process is typically the most independent
function and the one with the lowest degree of integration with the product delivery
process, so it should not have come as a surprise that the early success of the SaaS
model involved a tool for managing relationships with customers. Once orders have
been made by sales and entered into the system, the core ERP systems can be used to
drive delivery, financial processing, and analytics.

Because of this, the fact that sales data was stored 'outside the system' made it a
great deal simpler to manage than it would have been otherwise. Subsequent SaaS
companies eventually came to the conclusion that they needed to be selective in the
kinds of products they made available. Applications that, for example, manage human
resources or provide assistance to clients have been met with a great deal of
positive reception as SaaS, whereas supply chain functions and fundamental ERP
activities such as payroll and corporate financial accounting have found
significantly less success in a SaaS model.

This is because those functions are more complex and require more customization;
such endeavors fall within the scope of enterprise resource planning (ERP). In
conclusion, analytics is a fast-evolving field that shows promise as a prospective
market for the expansion of software as a service (SaaS) models, in particular with
regard to cloud computing.

SUCCESSFUL SAAS ARCHITECTURES

Both Intuit and Salesforce.com developed their hosted solutions from the bottom up,
in contrast to the earlier generation of application service providers (ASPs). In
their operations they relied only on web-based architectures and took advantage of
the opportunity to reduce operating expenses offered by "multi-tenancy" and
"configurability". As we will see in the next section, the incorporation of these
architectural features makes it possible for such solutions to give consistent
financial benefits in contrast with conventional on-premise software, a benefit made
attainable as a direct result of the fact that these solutions were developed with
the cloud in mind from the very beginning.

Figure 3.1 illustrates the fundamental elements that constitute a successful SaaS
architecture; the SaaS model cannot function properly without these constituent
parts. When one analyzes the costs associated with managing a software product in
the conventional manner, that is, with the software installed on the customer's
premises, a sizeable portion of those costs is allotted to managing the various
versions of the product and providing support for updates to a number of different
customers.

One direct consequence is that updates are performed less often and on a bigger
scale, which leads to an unstable product and necessitates the distribution and
management of interim 'patches'. In addition, the unpredictability of the product
often requires enhancements to be implemented on a more extensive scale. There are
further costs associated with the customer carrying out the necessary processes:
obtaining and installing updates, testing them, and performing any locally specific
tweaks or integrations that may be necessary.

FIGURE 3.1. SaaS architecture

Source: Enterprise Cloud Computing, by Gautam Shroff

When seen from the perspective of the provider, the "multi-tenant" design helps to
cut the costs of shipping updates by an order of magnitude, while on the client side
a hosted software-as-a-service model reduces these costs to nearly nothing. With
multi-tenancy, the hosted SaaS application serves all of its customers from a single
code base; nevertheless, it makes certain that the information shown to each user is
exclusive to that particular customer. To do this, access is granted to an
appropriate data partition based on the identity of the person who is currently
signed in and the customer organization to which that user belongs.
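
A minimal sketch of this kind of tenant-scoped data access is shown below; the table
layout, tenant identifiers, and helper function are invented for illustration and
are not drawn from the source.

```python
# Multi-tenancy in miniature: one code base, one store, every query scoped
# to the tenant (customer organization) of the signed-in user.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (tenant_id TEXT, name TEXT, value REAL)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("acme", "Lead A", 1200.0), ("acme", "Lead B", 800.0),
     ("globex", "Lead C", 5000.0)],
)

def accounts_for_tenant(session_tenant_id: str):
    """Return only the rows in the partition belonging to the caller's
    tenant; the shared code never exposes another tenant's data."""
    cur = conn.execute(
        "SELECT name, value FROM accounts WHERE tenant_id = ?",
        (session_tenant_id,),
    )
    return cur.fetchall()

print(accounts_for_tenant("acme"))    # [('Lead A', 1200.0), ('Lead B', 800.0)]
print(accounts_for_tenant("globex"))  # [('Lead C', 5000.0)]
```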

The next chapter goes into the specifics of how multi-tenancy works and how it can
be used in a real-world environment. When a new version of a software as a service
(SaaS) product is released, it acts as a single production update for the whole of
the company's varied customer base. These customers are, in essence, upgraded
whether they ask to be or not, and the vast majority of the time they are not even
aware of it. Because the costs associated with rolling out a new version of the
product are very low, it is not necessary to keep many distinct iterations of the
product in production at the same time. As a direct result, upgrades may be
implemented more often, with a smaller total footprint, and with a greater chance of
being reliable.

When compared with in-house computer systems, current software as a service (SaaS)
offerings thus enjoy a significant cost advantage due to multi-tenancy, which also
allows for more flexibility and scalability. When a company runs its operations
using conventional, on-premise software, the organization's information technology
department is accountable for installing, maintaining, and integrating every
software product update, and it is also in charge of integrating the software with
the many other IT systems. It is a well-established fact that the costs associated
with maintaining software are often two to three times higher than the expenses
linked with creating it, and the data presented below provides empirical support for
this assertion.

This is due to a number of the factors mentioned before, including the management of
several versions. Another factor contributing to the rise in system maintenance
expenses is the complexity of today's multi-tiered systems, which call for a variety
of technologies at each layer and will be discussed in more detail in the next
chapter. Consequently, building software on-premises specifically for an
organization is an endeavor that is both time-consuming and expensive, and
customizing packaged software solutions requires considerable revisions that may
frequently be just as costly as the original development. The software as a service
(SaaS) platform built by Salesforce.com, by contrast, allows end customers to alter
the features and functionality of their view of the product in line with their own
specifications, in order to meet their own needs.

This was an important innovation, since it considerably cut down the amount of time
required to set up a CRM system that would really be helpful. In addition, corporate
clients were able to modify the platform to better suit their requirements and start
making use of it right away, without first having to acquire clearance from the
information technology departments of the firms they worked for. Figure 3.1 shows
that customer-specific modifications were not saved as lines of code but rather as
an additional kind of data. Metadata, meaning information "about the application
functionality", is the term most suitably used in this context. This metadata is
interpreted at run time by the code of the SaaS application, and this code is the
same for every client (or 'tenant').

As an immediate consequence, the program offers and executes individualized
functionality for each and every client. We will go into the specifics of the
several technological approaches that may be used to accomplish this goal later.
Because Salesforce.com gave end users the chance to make many enhancements
themselves, as stated above, customers were able to avoid the expensive costs
associated with conventional software development, costs that almost always come
with a significant price tag. In addition, as we have seen, the use of multi-tenancy
enabled the provider to drastically cut down its own internal costs, a portion of
which savings was also passed on to the client in the form of lower prices.
Alongside the benefits of quick deployment and independence from the information
technology (IT) departments of their own organizations, customers therefore saw, on
average, significant cost savings.

DEV 2.0 PLATFORMS

Salesforce.com quickly expanded its ability to render functionality by interpreting
metadata so that it could cover a great deal more application features. This opened
the door to the creation of applications that were mostly independent from one
another yet made use of the same hosted SaaS platform. After some time, a scripting
language was created, and these features were exposed as an independent hosted
development platform in their own right. Today this is known as Force.com, and it
carries an identity distinct from the CRM product.

At the same time, a number of start-up organizations such as Coghead and Zoho, in
addition to certain large firms such as TCS, launched their own attempts to develop
interpretive platforms on which programs might be written "over the web", perhaps by
end users. We refer to these kinds of platforms as "Dev 2.0" platforms, since their
primary objective is to include end users in the process of application development.
This is analogous to the manner in which Web 2.0 technologies, such as social
networking and blogs, have made it possible for end users to contribute content to
the internet.

FIGURE 3.2. Dev 2.0 architecture

Source: Enterprise Cloud Computing, by Gautam Shroff

The general configuration of the architecture of a Dev 2.0 platform is shown in
Figure 3.2. A piece of software known as an "application player" is tasked with
rendering an application's functionality in accordance with the metadata that has
been supplied, in much the same way that a media player displays a video file or a
word processor displays a document. Users of a Dev 2.0 platform have the
opportunity, in a manner similar to working in a word processor, to make changes to
the features and capabilities of an application while it is being "played", usually
through a WYSIWYG interface.
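
A toy illustration of the idea follows; the metadata schema, field names, and the
play function are invented for the sake of the example and do not come from any
particular Dev 2.0 product.

```python
# Toy "application player": rather than hard-coding a form, the player
# interprets metadata that describes the form. Editing the metadata changes
# the application, with no code changes or redeployment.
form_metadata = {
    "title": "Customer Enquiry",
    "fields": [
        {"name": "customer_name", "label": "Customer name", "type": "text"},
        {"name": "region",        "label": "Region",        "type": "choice",
         "options": ["North", "South", "East", "West"]},
        {"name": "priority",      "label": "Priority",      "type": "number"},
    ],
}

def play(form: dict) -> None:
    """Render the form described by the metadata as plain text."""
    print(form["title"])
    print("-" * len(form["title"]))
    for field in form["fields"]:
        if field["type"] == "choice":
            print(f'{field["label"]}: [{" | ".join(field["options"])}]')
        else:
            print(f'{field["label"]}: ({field["type"]})')

play(form_metadata)
# Adding a field to form_metadata immediately changes what the "application"
# looks like the next time it is played.
```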

Just as WYSIWYG word processors supplanted earlier typesetting languages that needed
compilation before formatted output could be seen, the goal of Dev 2.0 is to replace
application-specific code with metadata that can be viewed (and updated) in a web
application player. And just as it is still advisable to use a typesetting language
(in this instance, LaTeX) when working on long and complex publications such as this
book, Dev 2.0 is most beneficial when working on smaller programs as opposed to
really large ones. We will talk about a variety of Dev 2.0 platforms, as well as
their applications, restrictions, and potential next steps, and take you step by
step through the process of designing Dev 2.0 platforms and all the stages required.

At this point in time, the technology that underpins Dev 2.0 platforms is still very
much in its infancy. In the not too distant future, it is likely that even programs
designed from the ground up will have the power to interpret metadata. As a
consequence, it is feasible that the combination of Dev 2.0 and cloud computing will
lead to the development of new paradigms for the design of business system
architecture, and there is some evidence that this combination has already started
to take shape. We are going to spend some time reviewing the history of cloud
computing, and then we will circle back around to this fundamental question.

COMPUTING IN THE CLOUD

The term "software as a service," abbreviated as "SaaS," refers to the process of making
pre-packaged software available to users through the internet. In contrast, cloud

86 | P a g e
computing makes a more basic level of infrastructure and tools available via the internet
in data centers that are managed by a cloud provider. These data centers house the cloud
provider's servers. These data centers are operated by cloud companies. Another term
for "software as a service" is "software as a service." "Software as a service" It is of the
utmost importance to bear in mind that hosting services, such as those that are used for
websites, have been there for a number of years concurrently with the rise of the
internet. However, on their own, these services do not meet the criteria to be classified
as "cloud" services.

In order to understand the other factors responsible for the recent rise of interest
in cloud computing, it is necessary to begin by tracing the development of cloud
computing as it was implemented by the industry's forerunners, namely Amazon and
Google. SaaS providers like Salesforce.com can develop such systems using the web
application server architectures covered in the chapter that came before this one,
and that is precisely how such SaaS systems are built.

Because these offerings were frequently aimed at small or medium-sized
organizations, the number of end users for each tenant application was often on the
lower end of the range. Since each server serviced a large number of tenants, it was
very simple to add more servers as the number of tenants expanded. Unless there were
extraordinary circumstances, such as working with a limited number of large clients
each with a few thousand users, a typical application server design was adequate;
such an architecture distributes the load among a number of servers in exactly the
same way as it would for an on-premise deployment. As a direct result, the scale of
usage was strikingly similar to that of conventional business software.

As Amazon transitioned from being an online bookstore to a hub for online purchasing, the company encountered a whole new set of challenges. The solutions it devised were not only highly inventive but also reusable, and they ultimately gave birth to a whole new industry centered on cloud computing; Amazon was the first company to provide its services via a platform hosted in the cloud. Let us begin with how complex Amazon's software actually is. Displaying a single page that highlights a book draws on a range of services provided by very sophisticated applications, including platforms for rating and reviewing products as well as recommender and collaborative filtering systems.

Next, because seasonal retail business has both highs and lows throughout the year, Amazon needed to consistently monitor load and automatically provision more capacity based on demand from consumers. As it eventually became a store catering to a "long tail" of highly unique items, it also realized that it needed to assist its suppliers with some basic parts of information technology, since many of those vendors depended on a small number of personal computers to meet their computing requirements.

Take, for example, the idea of a virtual machine, which originated with the development of mainframe computers and was first put into practice during that period. In recent years, virtual machine technologies have been created (or, more precisely, adapted) for well-known contemporary hardware architectures such as the Intel x86 family. Examples of these technologies include virtualization software and hypervisors such as VMware, Xen, and KVM; the latter two may be acquired at no cost.

Through the use of virtualization, many logical operating systems are able to share the resources of a single physical machine. This is made feasible with the aid of a hypervisor, which simulates the underlying hardware model and runs the several operating systems that are installed as guests, making it possible to run multiple operating systems simultaneously on a single piece of hardware. This is comparable, at a high level, to multi-tenant software as a service (SaaS), in which a single application player 'runs' a variety of distinct meta-data configurations for a number of different tenants. In the next chapter, we go further into the principle of virtualization.
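To make the guest model just described concrete, here is a minimal sketch using the libvirt Python bindings, one common way of driving KVM or Xen hypervisors programmatically; the connection URI is the standard local QEMU/KVM one, but the guest name is a hypothetical placeholder and the snippet assumes a hypervisor with at least one predefined guest.

import libvirt

# Connect to the local hypervisor (QEMU/KVM here; a Xen URI would also work).
conn = libvirt.open("qemu:///system")

# Each "domain" is one guest operating system sharing the physical machine.
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "stopped")

# Start a previously defined guest; the hypervisor multiplexes it onto the
# same hardware as the other guests.
guest = conn.lookupByName("guest-vm-01")  # hypothetical guest name
if not guest.isActive():
    guest.create()

conn.close()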

Amazon made extensive use of virtualization in order to automatically and dynamically provision machines for the programs that were most in demand as load fluctuated during the year, something that many other large companies now do as well but that Amazon did first. The high degree of automation it established in this process allowed it to conceive of and launch S3 (the Simple Storage Service) and EC2 (the Elastic Compute Cloud), which let clients rent storage and computation capacity on Amazon servers.

This was first envisioned as a tool for Amazon's business partners and suppliers; as an experiment, however, it was decided to make the resource available to the general public. Because of its immense success, Amazon found itself, entirely unintentionally, an early pioneer in the field of cloud computing. There are key differences between the Amazon cloud and traditional hosting providers: (a) the degree of automation made available to end users, as web services, to control the number of virtual instances running at any point in time; (b) the ability for users to package and save their own configurations of virtual machines (as Amazon Machine Images, or AMIs); and (c) charging per hour of actual usage, as opposed to the monthly or yearly charges for traditional hosting.
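As a hedged illustration of points (a) to (c), the sketch below uses boto3, a present-day AWS SDK that is not part of the original discussion; the AMI id, region, and instance type are placeholders rather than real resources.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# (b) launch an instance from a saved machine image (AMI).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI id
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# (a) the same web-service interface controls how many instances run at any time.
ec2.terminate_instances(InstanceIds=[instance_id])

# (c) billing is per unit of actual usage, so charges stop accruing once the
#     instance has been terminated.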

In addition, Amazon made some of the system software tools it had built for itself accessible to customers, such as Amazon SQS (the Simple Queue Service) and SimpleDB (a non-relational database). These allowed a huge number of cloud users to construct complicated applications without having to depend on the installation and setup of conventional middleware and database technologies.

Google, on the other hand, needed orders of magnitude more computational power than even the largest enterprise could muster, because of the scale required to support large-scale indexing of the web, an enormous volume of searches, and machine-learning-based targeting of advertisements across that volume. Google is said to have more than a million servers running at any one time, whereas it is rather rare for even the large banks of today to have tens of thousands, let alone hundreds of thousands, of servers.

While working to resolve the issues it was having with its computing infrastructure, Google came up with a range of innovative approaches to programming models for large-scale distributed processing, and these have since developed into standards for the industry. One of them is the MapReduce model, which allows a series of tasks to be performed on a very large data collection to be split up and executed in parallel over a very large number of computers. This was made practical by their 'big table' concept of a data store, a non-relational database distributed across a very wide collection of physical storage with built-in redundancy and fault tolerance.
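The following minimal Python sketch illustrates the MapReduce programming model in the abstract, using a word count split into a map phase, a shuffle, and a reduce phase; it shows the model only, not Google's implementation, which distributes exactly these phases over thousands of machines.

from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) pairs for one input document; documents can be mapped in parallel."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(mapped_pairs):
    """Group intermediate values by key, as the framework does between the two phases."""
    grouped = defaultdict(list)
    for key, value in mapped_pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Aggregate all counts emitted for a single word; keys can be reduced in parallel."""
    return key, sum(values)

if __name__ == "__main__":
    documents = ["the cloud scales", "the cloud elastically scales"]
    intermediate = [pair for doc in documents for pair in map_phase(doc)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
    print(counts)  # {'the': 2, 'cloud': 2, 'scales': 2, 'elastically': 1}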

Finally, in order to handle the enormous volume of search requests effectively, Google had to construct a very scalable application server architecture that was conceived from the start as running in parallel. As a consequence, when Google launched its cloud solution, now known as the Google App Engine, it seemed very distinct from the cloud that Amazon offers.

Users write code with the assistance of development libraries (at first in Python, but now also in Java) and deploy it to Google's App Engine, where the code 'runs' whenever a web request is sent to the appropriate URL. In addition, the application grows itself automatically on top of the large-scale distributed execution architecture that App Engine provides. Users did not need to do anything out of the ordinary to make their application expand from supporting very few to many millions of requests per day (except for paying for larger load volumes, with a basic level of up to 500,000 hits a month being free!).

Scalability was accomplished without asking users to take any unusual actions. The Google Datastore, which operates in a way comparable to a non-relational distributed database, was employed to provide data services. If the application code was constructed so that it exploited the available resources appropriately, massively parallel distribution and querying of that data was automatic. With its automated load balancing and distributed execution of code, the Google App Engine is a good example of an entirely new application design.
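A hedged sketch of what such an application looked like on the first-generation Python runtime is given below; webapp2 and ndb were the stock request-handling and Datastore libraries of that runtime, and the handler and model names here are purely illustrative.

import webapp2
from google.appengine.ext import ndb

class Visit(ndb.Model):
    # One Datastore entity per request served; the Datastore shards and
    # replicates these entities transparently.
    path = ndb.StringProperty()

class MainPage(webapp2.RequestHandler):
    def get(self):
        Visit(path=self.request.path).put()   # write to the distributed Datastore
        count = Visit.query().count()         # query runs against the same store
        self.response.write("Visits so far: %d" % count)

# App Engine routes matching URLs to this WSGI application and scales the
# number of serving instances automatically as request volume grows.
app = webapp2.WSGIApplication([("/", MainPage)])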

The Java version of the App Engine utilizes a JVM that was custom-made by Google to fit this paradigm at the most basic level, and since it is common knowledge that vanilla Python does not scale well to a high number of CPUs on a normal machine, it is very plausible that Google's Python interpreter incorporates improvements of a similar sort. Because Google has not made any information on the structure of its distributed execution architecture publicly available, however, all we can do is hypothesize about it.

Figure 3.3 shows a head-to-head comparison of the Amazon cloud and the Google cloud. In contrast to the Google App Engine, which constructs a parallel architecture automatically for its users, Amazon customers are expected to resolve concerns about the scalability of the system themselves by actively constructing such an architecture using the web services made available to them. In return, they are granted permission to install any software of their choosing on virtual machines running either Linux or Windows, to which they are given root access. Users of the App Engine platform, on the other hand, are required to create any new apps with the help of the App Engine Software Development Kit (SDK).

FIGURE 3.3. Cloud models

Source: Enterprise Cloud Computing by Gautam Shroff

A further corporation that has joined the cloud computing sector is Microsoft, with its Azure platform. Azure is somewhat comparable to what Google offers: rather than providing clients with access to raw virtual machines, it offers them a software platform on which they are able to build applications. Perhaps the most significant aspect of this cloud computing platform is that Azure was constructed on top of Microsoft's existing, well-established, and successful programming stack. A more recent addition to the platform gives Azure users the ability to accurately characterize the runtime properties of an application, which grants the customer some level of indirect control over the deployment of the program, a major step forward.

The Amazon model is an illustration of infrastructure as a service (IaaS), while the Google and Microsoft models are demonstrations of 'platform as a service' (PaaS) offerings. We will go even further and more deeply into each of these cloud models shortly. Now, let us circle back to the issue we were discussing before, Dev 2.0, and analyze the potential consequences, both benefits and drawbacks, that the marriage of Dev 2.0 with cloud computing may have for major enterprises.

DEV 2.0 IN THE CLOUD FOR ENTERPRISES

The ramifications of the IaaS cloud model for software development processes in IT firms might be significant, particularly in the context of outsourcing and internationally dispersed development:

1. The problem of 'control' of server infrastructure is frequently a barrier: historically, a server is controlled by either the business that is considered the 'client' or the organization that is considered the 'services provider.' In the cloud, control may be 'shared' and transferred at whim, which enables efficient distributed development independent of physical location. With geographically dispersed teams, employing the cloud also ensures that no group is treated differently simply because it is physically closer to the underlying server infrastructure.
2. The process of procurement and provisioning in the cloud can be time bound and is orders of magnitude quicker. This can considerably speed up project execution, user testing, and the assessment of production faults, among other things.
3. 'True' copies of the real production environment may be dynamically provisioned in the cloud for a short duration, allowing for early performance testing that would otherwise be impossible due to prohibitive cost.

IaaS cloud architecture offers a high degree of control, which is one of the reasons it is likely to find applications in large organizations sooner than PaaS cloud architecture does; it also offers more flexibility, and it is argued to be more secure. PaaS clouds, by comparison, provide less control.

On the other hand, whether there are overall economic benefits to using cloud computing depends primarily on the computational profile that an organization requires, and since cloud computing is a relatively new phenomenon, the question of a net economic gain is still being debated. It is not immediately evident why organizations cannot replicate these models internally (after all, Amazon was a retail firm first, and a cloud provider only afterwards), nor is it immediately clear that reproducing them would necessarily be worth the effort for all enterprises.

We go deeper into this subject, and provide a more comprehensive answer, in the section on the economics of cloud computing. The proliferation of cloud computing platforms not only poses a question but also presents an opportunity to reassess the "success" of the SaaS business model. In spite of its broad usage in some application areas and marketplaces, "control over data" has always been the primary barrier preventing larger organizations from adopting SaaS in greater numbers.

However, if SaaS providers and their customers use the same cloud infrastructure, it is possible for SaaS applications to utilize storage and databases that are 'owned' by the customers. This would neither have a negative impact on performance nor result in the loss of any of the advantages associated with the SaaS model. It would also make it possible to communicate with the internal information technology systems of the end customer in a manner that was previously not practicable, which would make the whole process much smoother.

In a similar vein, hosted Dev 2.0 platforms may be used on 'customer-owned' data housed in the cloud rather than on a proprietary database controlled by the Dev 2.0 platform provider, which allows more flexibility and customization for the end user. In addition, it is possible to use a number of independent Dev 2.0 platforms at the same time, whether from the same platform provider or from different ones, all sharing the same client-owned data in the cloud: one tool might be used for forms-based operations, for instance, and another for analytics, or a single tool might handle both. We investigate some of these possibilities and discuss one Dev 2.0 platform, TCS InstantApps, that can operate on data belonging to the user even when it is kept in the cloud.

Last but not least, the newly developed, highly scalable and distributed PaaS platforms such as App Engine and Azure create a new opportunity for businesses to undertake large-scale analytical tasks that are currently prohibitive. Consider, for instance, a retail chain that would reap benefits from regularly running a highly compute-intensive computation on enormous quantities of data (such as point-of-sale data), but does not do so at the moment because it would have to invest in thousands of servers that would typically sit unused.

Applications built on scalable distributed cloud platforms, on the other hand, would scale automatically, drawing on a broad range of hardware resources only when they were required, which frees up a significant amount of capacity within the system. At other times the same applications could function with a much smaller set of resources while still having access to the massive data collection, perhaps updating it whenever new information was obtained.

As was described in, analytics programs would need to be upgraded in order to take advantage of the underlying non-relational yet highly distributed data stores supplied by these models. What kind of future awaits the corporate information technology environment if it simultaneously consists of conventional systems, internet services, and in-house corporate clouds? And what does it mean for the future of software development using model-based interpreters once those interpreters start working in cloud-hosted environments? Our theory is that, as the complexity of IT infrastructure continues to increase, all businesses, regardless of size, will eventually be forced either to adopt cloud computing and Dev 2.0 technologies on their own or to make use of similar publicly accessible services in order to keep up their current levels of productivity.

They will not be able to stay ahead of the competition in any other manner. They will choose how to distribute their data across the "private" clouds located on their internal networks and the "public" clouds located on external networks, basing their choices on the priorities that are most important to them. In order to carry out any activities on this data, they will need a combination of conventional applications and SaaS applications, which they will construct using conventional tools and procedures but also, increasingly, Dev 2.0 platforms. Because we believe "Dev 2.0 in the cloud" to be a potentially paradigm-shifting development for business applications and corporate information technology, one of our goals is to investigate such likely future possibilities. In the next chapters of this book, in addition to discussing the specific technical characteristics of cloud computing and other linked technologies, we will also speak about the requirements of corporate architecture.

THE ONSET OF THE KNOWLEDGE ERA

The paradigm of cloud computing, which is fast becoming the industry standard, is in reality the consequence of the creative aggregation of a number of online and business technologies that have been tested and proved, in addition to those that have the potential to be lucrative in the future.

The cloud is not a novel concept in terms of the underlying ideas that drive it; nevertheless, its deployment has shaken the whole information and communication technology (ICT) sector. The concepts revolving around the cloud have gradually and visibly affected the information technology industry, as well as the business sector, with respect to a variety of essential areas.

Cloud computing has made available a number of innovative approaches to deployment, distribution, consumption, and pricing, while service orientation promotes an approach to application design that is far less difficult. The considerable contribution of the much-discussed and much-deliberated cloud is the rapid implementation and spread of dynamic, convergent, adaptive, on-demand, and online compute infrastructures, which are the key requirement for future information technology; it is hard to place too much emphasis on the relevance of this aspect. Clouds, moreover, guarantee that the bulk of the non-functional requirements (Quality of Service, or QoS, attributes) are met.

These requirements include accessibility and usability on a global scale, affordability, on-demand scalability and elasticity, and energy efficiency, among other things, together with high levels of availability and performance; this is a refreshing departure from the conventional approach. As a result of the remarkable properties of cloud infrastructures, which from this point forward will simply be referred to as clouds, the vast majority of businesses all over the world, whether small, medium, or large, are gradually migrating their information technology (IT) offerings, such as business services and applications, to clouds.

Because of this change, it will be feasible for companies to expand their sphere of influence farther and more profoundly, and to grow both their supply of applications and the use of those applications. Product companies are migrating their platforms, databases, and middleware to environments hosted in the cloud because they have realized that the cloud model gives a major advantage over the competition. Cloud infrastructure providers are now building cloud centers to host a diverse range of information and communications technology (ICT) services and platforms, which are used by individuals, business owners, and organizations in a variety of geographic locations throughout the globe.

Cloud service providers, also known as CSPs, are very active participants in the testing of innovative cloud technologies and in their widespread adoption. At present, commercial and technical services alike are being placed in the clouds so that they may be made accessible, over the communication infrastructure supplied by the internet, to customers, clients, and consumers situated in any part of the world. For instance, security as a service is a well-known cloud-hosted security offering that enables users of any connected device to subscribe to its safeguards and get updates remotely.

Users are only required to pay for the exact amount of service, or the period of time, that they actually use, and there is no recurring monthly payment associated with the purchase. In a nutshell, the local, on-premises execution of programs is gradually losing ground to the online, remote, hosted, on-demand, and off-premises execution of such programs. The most prominent market research surveys suggest that the cloud movement is quickly picking up momentum as a consequence of the unparalleled promotion, articulation, and acceptance of cloud-based ideas.

In addition to the modernization of legacy programs and the installation of their updated and enhanced versions in clouds, new applications are being developed and deployed on clouds in order to provide services that are cost-effective and accessible to millions of users all over the world at the same time. In light of this, it should not come as a surprise that, behind the scenes, the rapidly developing area of cloud computing is home to a number of significant and strategic adjustments.

All of these factors point to the addition of a new component to the integration scenario, and they also serve as a forecast of its arrival. Up to this point, business data and applications have been brought together within the context of the corporate intranet by using a wide variety of standards-compliant integration platforms, brokers, engines, and containers. Integration between businesses, also known as B2B integration, is handled via specialized data formats, message templates, and networks, in addition to the Internet itself. When businesses want to improve the products and services they provide, one strategy they often use is to consistently extend their operations into new geographic locations throughout the globe.

They may accomplish this goal in one of two ways: either by cultivating one-of-a-kind relationships with their existing business partners or by buying new businesses headquartered in other parts of the world. Cloud computing platforms are quickly becoming the platform of choice, and as a result the migration of business applications to them is increasing; at the same time, concerns about the integrity of the data force organizations to continue storing the great majority of their sensitive information on corporate servers. The addition of the cloud to the landscape only makes the duty of integration more extensive and the process of integration more complicated. Because of this, it makes perfect sense to shift integration middleware to cloud environments in order to facilitate the simplification and standardization of enterprise-to-enterprise (E2E), enterprise-to-cloud (E2C), and cloud-to-cloud (C2C) integrations.

Having started its run as the most critical component in the success of organizations, information technology (IT) is swiftly becoming a crucial ingredient and a facilitator in every part of human life. Miniaturization, virtualization, federation, composition, and collaboration are a few examples of the people-centered and forward-thinking new technologies that are now evolving. These technologies are being tested, developed, and established with the intention of making information technology intelligent, simple, flexible, and sensitive to the situational needs of users, in both professional and personal settings; people's levels of comfort, care, convenience, and choice will all significantly improve as a result.

Federation, composition, and collaboration are all connected to one another in some way. New computing paradigms, such as grid, on-demand, service, and cloud computing, are rapidly spreading and expanding, and as a result they are becoming more influential and perceptive. During the era of the monolithic mainframe, a single huge and centralized computer system was used to carry out millions of activities in order to deliver responses to thousands of users (one-to-many). Today, everyone has their own computing machine (one-to-one), and tomorrow there will be a multitude of smart objects and electronic devices (nomadic, wearable, portable, implantable, etc.) that will seamlessly and spontaneously coexist, corroborate, correlate, and coordinate with one another, dynamically and dexterously, in order to understand the intentions of one or more users.

The ability to carry out computational activities anytime and anywhere has a propensity to grow into the ability to carry out such operations continually and everywhere. "Ambient intelligence" (AmI) is the most recent term to emerge in recent years; it was created to describe technologies that combine ambient sensing, networking, perception, decision-making, and actuation. There has also been a significant increase in the production of technologies that combine several distinct modes of communication in an effort to make human interaction more pleasant and more fruitful.

Dynamic, virtualized, and autonomic infrastructures; flexible, integrated, and lean processes; constructive and contributive building blocks (such as services, models, composites, agents, aspects, and so on); slim and sleek devices and appliances; smart objects empowered by invisible tags and stickers; natural interfaces; and ad hoc, situational networking capabilities all combine adaptively to accomplish the grandiose goals of the upcoming days and decades of ambient computing. To put it more simply, the mission is to improve the quality of every aspect of our lives on our planet by fostering the development and widespread use of information technology, and the most advanced use of available technology is what is required to achieve this goal. As shown by the development of service-oriented ideas and the software as a service (SaaS) model, software engineering seems to be moving in the right direction.

Clouds have made a significant contribution to the development of contemporary technology, which is largely responsible for the so-called "information age" that we live in today. Technologies are brought together in real time to form a dynamic cluster that contributes considerably and enormously to all of the current, developing, and even extravagant demands that people have; bringing these technologies together has been described as "clustering."

THE EVOLUTION OF SAAS

The Software as a Service (SaaS) concept is gaining ground as a consequence of the inherent capabilities and possibilities it has. Executives, entrepreneurs, and end users are all delighted with the success of the SaaS paradigm as it continues to expand and flourish, a success that can be measured in terms of the tactical and strategic applications it offers. As it has progressed, this model has shown a wide range of positive and advantageous advancements, and a continuous process of preparation is taking place in which more and more activities and resources are getting ready to be delivered as a service. Industry professionals and enthusiasts alike have praised the cloud as the best prospective infrastructure choice for effective service delivery, and they are all in agreement that it is poised to shake up the whole IT community.

Cloud computing offers a broad range of opportunities for creative and remarkable problem-solving applications that may be utilized to address a number of information technology challenges. Only a very limited list of services is now being delivered via cloud computing, but in the not too distant future a great deal more mission-critical software will be installed and utilized there. In a word, cloud computing is going to eradicate all sorts of inflexibility in information technology and bring in an expanding number of advances that will get today's IT ready for a future of sustainable prosperity.

IT as a Service (ITaaS) is the most cutting-edge and efficient method of delivery now accessible in the dynamic world of information technology. Every single information technology resource, activity, and infrastructure is now being viewed and imagined as a service, which sets the tone for the significant unfolding of the anticipated age of service. This shift in perception and imagination has come about because the concepts of service orientation have enjoyed a rapid and interesting surge in popularity. In today's world, the design and development of systems centers on the creation of complex collections of cutting-edge services that are still in the process of being developed, and because services are enabled on infrastructures, those infrastructures can now actively participate in joint projects and activities.

In the same spirit, the much-maligned delivery aspect has also gone through various alterations, and at this point the whole world has settled on the environmentally friendly paradigm known as 'IT as a service (ITaaS).' The pervasiveness of the Internet brings this point home in a very clear and compelling way, and we are confronted with an overwhelming variety of implementation strategies and technologies to choose from. As stated before, clouds are the most evident and viable infrastructure for the deployment of ITaaS. Another component with a large effect is the consumption-based metering and invoicing capability, which has now reached a mature level. Even HP has acknowledged this growing trend, which it refers to as "everything as a service."

IaaS, which here stands for "integration as a service," refers to the growing and specialized capability of cloud computing to address the requirements of corporate integration. It is becoming normal practice to host business applications in cloud environments in order to take advantage of the many financial and technological benefits; on the other hand, chiefly because of security concerns, countless applications and data sources remain stationed locally. The current subject of discussion is therefore how to create a link, invisible to the user, that allows applications hosted on the cloud and those installed locally to work together. IaaS gets around these challenges by skillfully using the tried-and-true business-to-business (B2B) integration technology as the value-added bridge between on-premises business applications and SaaS solutions, which allows IaaS to provide a seamless transition for its customers.

Because they have traditionally been used to automate commercial operations between manufacturers and their trade partners, business-to-business (B2B) systems are in a position to drive the newly developed paradigm of on-demand integration. In addition to securely linking internal and external programs, which is vital, they also allow application-to-application connections, enabling applications to communicate with one another. B2B platforms offer the capacity to encrypt information to ensure its security while it is being sent over a public network; they can also handle massive amounts of data, transfer files in batches, convert between otherwise incompatible file formats, and guarantee data delivery across numerous businesses. In contrast, classic EAI systems were developed with the exclusive intention of facilitating the exchange of data within an organization.

IaaS simply replicates this tried-and-true paradigm of communication and cooperation in order to provide a reliable and long-lasting connection that allows traditional and cloud-based systems to exchange data seamlessly across the underlying architecture of the web. The implementation of a hub-and-spoke (H&S) design further simplifies the installation and helps to keep the client side from being overwhelmed with an excessive amount of processing work, since the workload is distributed across several nodes in the network. The hub, situated inside the cloud center run by the SaaS provider, carries out labor-intensive processes such as the reformatting of data, while a spoke unit situated at each user site usually carries out the duties associated with basic data transit.

When all of these components are in place, software as a service (SaaS) firms are able to provide integration services using the same subscription- and usage-based pricing model that they use for their primary products. The process of moving various kinds of standardized and centralized services to the cloud is becoming more common these days. Because resources are becoming increasingly distributed and decentralized, an infrastructure consisting of a number of different components is necessary in order to link and utilize these resources to accomplish a range of goals. Clouds, being web-based infrastructures, are the perfect fit for hosting scores of unified and utility-like platforms that take care of all the various sorts of brokering needs across connected and dispersed ICT systems.

THE CHALLENGES OF THE SaaS PARADIGM

The SaaS and cloud computing ideas, like any other emerging form of technology, are subject to a variety of restrictions. The applicability of these technologies to various settings and circumstances is being carefully investigated, and the perplexing and difficult problems that exist across several layers and levels are being studied. The main points of view are summarized below. The absence or weakness of the following characteristics inhibits the widespread adoption of cloud computing:

1. Controllability
2. Visibility and flexibility
3. Security and privacy
4. High performance and availability
5. Integration and composition
6. Standards

In order to solve the issues and shortcomings that have been identified, a range of possible solutions is now being investigated. The bulk of these inefficiencies and drawbacks are being addressed by the most recent cloud delivery models, which include community clouds, hybrid clouds, and private clouds; these are the solutions that are advised. Even so, as several recent weblog entries have insightfully observed, there is still a significant distance to go. A number of companies are devoting their resources to finding a solution to this issue. One of them is Boomi (http://www.dell.com/), which has created a series of high-quality white papers that expand on the issues encountered by organizations that are either contemplating or trying to embrace the usage of third-party public clouds for the hosting of their services and applications.

Challenges Presented by Integration. Despite the fact that they provide incredible value in terms of the features and functions they offer in comparison to their cost, integration with SaaS applications may be complicated by a number of challenges that are specific to this kind of software. The first issue is that the great majority of software as a service (SaaS) applications are point solutions, which means they only support a specific kind of business activity. As a direct result, businesses that do not have a plan for synchronizing data across their many different lines of business are at a huge disadvantage when it comes to maintaining accurate data, producing precise projections, and automating essential business activities. Cloud computing relies heavily on the real-time exchange of data and capabilities as one of its core components.

APIs are not enough on their own. Because it is difficult to integrate their services, a significant number of SaaS providers have established application programming interfaces, also known as APIs. Due to the continual changes and updates that are made to an API, however, obtaining access to data via an API and keeping that data current takes a significant amount of code in addition to regular maintenance. Moreover, despite the development of web services, there is very little standardization or consensus over the form or syntax of the APIs used by SaaS providers. Because each SaaS application is different, the IT department has to commit an excessive amount of time and resources to building and maintaining a one-of-a-kind means of communication for the API of every SaaS application installed inside the company.
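The sketch below illustrates the kind of per-provider "means of communication" described above: two hypothetical SaaS APIs with different URL shapes, authentication schemes, and payload fields, each wrapped in a small adapter that has to be revisited whenever the corresponding API changes. All endpoints and field names are invented for illustration.

import requests

class CrmAdapter:
    """Adapter for a hypothetical CRM SaaS exposing a token-authenticated JSON API."""
    def __init__(self, token):
        self.base = "https://crm.example.com/api/v2"
        self.headers = {"Authorization": "Bearer " + token}

    def fetch_contacts(self):
        resp = requests.get(self.base + "/contacts", headers=self.headers)
        resp.raise_for_status()
        return [{"name": c["full_name"], "email": c["email"]}
                for c in resp.json()["items"]]

class BillingAdapter:
    """Adapter for a hypothetical billing SaaS with a different API shape."""
    def __init__(self, api_key):
        self.base = "https://billing.example.com/v1"
        self.params = {"api_key": api_key}

    def fetch_contacts(self):
        resp = requests.get(self.base + "/customers", params=self.params)
        resp.raise_for_status()
        return [{"name": c["name"], "email": c["contact_email"]}
                for c in resp.json()]

# A synchronization job can now treat both providers uniformly, but every API
# change on either side still means updating the corresponding adapter.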

Security for the transmission of data. The providers of software as a service (SaaS) go to great pains to ensure that the data of their clients will remain secure while it is being kept in a hosted environment. New challenges arise, however, when data is sent from on-premise systems or applications situated behind a firewall to SaaS applications hosted outside the client's data center, and the chosen integration solution has to address them. It is of the utmost importance that the integration solution be able to synchronize data in both directions, from the SaaS to the on-premise environment, without the need to open the firewall. Best-of-breed integration providers are able to offer this capability because they use the same security that applies when a user behind a firewall manually enters data into a web browser, so the user is still afforded the same privacy and protection as before.
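One common way to satisfy the "no firewall opening" requirement is an on-premise agent that polls the SaaS provider over outbound HTTPS, the same path a browser behind the firewall would use. The sketch below assumes a hypothetical change-feed endpoint and response format; it is intended only to show the direction of the connections.

import time
import requests

SAAS_CHANGES_URL = "https://app.example-saas.com/api/changes"  # hypothetical endpoint

def apply_to_local_database(record):
    # Stand-in for the on-premise side of the sync (e.g. an upsert into a local DB).
    print("upserting", record["id"])

def poll_once(since_token, session):
    """Pull changes made in the SaaS since the last sync and apply them locally."""
    resp = session.get(SAAS_CHANGES_URL, params={"since": since_token}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    for record in payload["records"]:
        apply_to_local_database(record)
    return payload["next_token"]

if __name__ == "__main__":
    token = ""
    with requests.Session() as session:
        while True:                 # long-lived agent running behind the firewall
            token = poll_once(token, session)
            time.sleep(60)          # only outbound connections are ever made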

Compatibility between SaaS applications and on-premise business software packages is the minimum requirement for any relocated program to deliver the promised advantage for organizations and customers, and this is true for every application regardless of where it is located. Because SaaS applications were not initially developed with the need for interoperability in mind, the process of integration has grown considerably more difficult as a consequence, and there are extra challenges and impediments to be conquered when routing communications between on-demand applications and on-premise resources. Message, data, and protocol translations need to take place either at the endpoints or at the middleware layer in order to break down the barrier that is stopping the participants from engaging in deliberate collaboration and spontaneous information exchange with one another.

This barrier exists because the participants cannot communicate with one another in their native formats. A variety of integration technologies and methods is therefore needed to help smooth out the integration difficulty produced by the fact that applications and data are unique, dispersed, and decentralized. Reflective middleware is a fundamental requirement for the provision of an enterprise-wide, real-time, and synchronized view of information for the benefit of executives, decision-makers, and users at both the strategic and the tactical level. It is also vital to protect the data's integrity, confidentiality, quality, and value when diverse services and applications are networked and pushed to collaborate with one another.

The Effects That Clouds Have. Cloud computing has made a spectacular debut in recent times, and as a result the horizon as well as the boundary of corporate applications, events, and data have been stretched on the infrastructure front. Business applications, platform development environments, and other similar resources are being moved to elastic cloud infrastructures that are online and can be accessed whenever necessary; to be more precise, preparations are being made to migrate applications and services to highly scalable and accessible environments hosted in the cloud.

This is occurring for a number of reasons, some of which are associated with business, some with technology, some with economics, and others with the environment. The direct consequence is that integration strategies and middleware solutions need to take clouds into account when establishing expanded and integrated processes and views, and this is a need that must be satisfied immediately. As a result, there is a pressing need for adaptive integration engines that are able to automatically and effortlessly combine internal applications with cloud services. Integration is being pushed even further, to the level of the growing Internet, and this serves as a genuine litmus test for both system architects and integrators.

In order for the SaaS model to reach the degree of success that was first envisioned for it, the everlasting integration problem has to be correctly handled. Interoperability between SaaS and non-SaaS solutions remains the fundamental need, despite the fact that integration is leading to composite systems and services that are focused on the company and the people using them. In order for companies to effectively design strategies that lead to improved success and value, as well as to the fulfillment of the elusive goal of customer delight, there must be no restrictions placed on the free flow of information.

Integration has proven to be a substantial challenge for a variety of businesses, including those in the Fortune 500, companies that specialize in system integration, and growing industry behemoths. The availability, affordability, and appropriateness of cloud-sponsored, cutting-edge infrastructures for application deployment and delivery are expanding the scope, size, and scale of integration, and this otherwise helpful extension is putting integration architects, experts, and advisors in deeper difficulty.

APPROACHING THE SaaS INTEGRATION ENIGMA

The provision of Integration as a Service (IaaS) refers to the process of moving the functionality of a traditional enterprise application integration (EAI) hub or enterprise service bus (ESB) into the cloud. This is done in order to facilitate the smooth transfer of data between corporate applications and SaaS services. Users subscribe to IaaS in the same way that they would subscribe to any other SaaS product; the process is completely standardized. Middleware hosted in the cloud is the next logical progression in the development of traditional middleware solutions: cloud-based middleware is provided to users in the form of a service that they may access. Because there is a diverse range of integration requirements and scenarios, there is also a diverse range of middleware technologies and solutions.

Examples of this diversity include message queues that are compatible with the JMS standard, as well as integration backbones such as EAI, ESB, EII, EDB, and CEP. In order to enhance performance, clusters, fabrics, grids, and federations of hubs, brokers, and buses are now being used. For the purpose of integrating services, the backbone is referred to as the enterprise service bus (ESB); for the purpose of integrating data, it is referred to as the enterprise data bus (EDB). In addition, there is a kind of software known as message-oriented middleware (MOM), sometimes referred to as message brokers, which is used to integrate previously isolated applications by passing messages between them and picking them up.

Complex event processing (CEP) engines are those that take in a stream of events coming from a variety of different sources, analyze those events in real time in order to extract the information they contain, and then pick and activate one or more applications based on that knowledge. As a consequence, a more lightweight form of connectivity and integration takes place between the applications that initiate the connection and the applications that receive it. The orchestration of services and the choreography of their delivery are what make the integration of processes feasible. While CEP connects systems that have only a tenuous connection to one another, service interaction via an ESB brings together completely unrelated systems.

In addition to providing data services, mashups are also capable of executing and supplying composite services, data, and views. Capable integration modules and guidelines are therefore being created at each layer or tier of the company's information technology stack, in preparation for the eagerly awaited deployment of dynamic integration. It is unavoidable that integration software will transition to cloud hosting as the number of individuals utilizing cloud services continues to increase at an exponential pace. The Amazon Simple Queue Service (SQS) is a messaging service that is hosted in the cloud and makes it simple for applications to interact with one another via the use of queues. When a well-known on-premise service is turned into a cloud service, as happens with SQS, it may be challenging to appreciate the changes that take place as a result of the transition.

Even so, there are several caveats. Because SQS stores copies of each message on multiple servers, an application reading from a queue is not guaranteed to see every message on a particular read request. Nor does SQS guarantee that messages will be delivered in the order in which they were sent. These simplifications allow Amazon to make SQS highly scalable, but they also mean that developers must use SQS differently from the way they would use an on-premise message queuing infrastructure.
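As a hedged illustration, the sketch below talks to SQS through the boto3 client (it assumes boto3 is installed and AWS credentials and a region are already configured; the queue name and message fields are invented). It also shows the consequence just described: because delivery is at-least-once and unordered, the consumer is written to tolerate duplicates rather than assuming on-premise queue semantics.

```python
# Minimal SQS sketch with boto3: send a message, then consume idempotently.
# Queue name and payload fields are invented for the example.
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders-demo")["QueueUrl"]

sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"order_id": 42, "status": "NEW"}))

# SQS is at-least-once and unordered, so the consumer tracks what it has
# already processed and simply skips duplicates.
seen_orders = set()
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=10,
                           WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    body = json.loads(msg["Body"])
    if body["order_id"] not in seen_orders:
        seen_orders.add(body["order_id"])
        print("processing order", body["order_id"])
    # Deleting the message acknowledges it; otherwise it reappears later.
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=msg["ReceiptHandle"])
```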

Cloud infrastructure is of little use without software as a service (SaaS) applications running on top of it. In a similar vein, SaaS applications cannot provide much value if users do not have access to the crucial business data that is often kept in a number of different corporate systems. For cloud applications to deliver maximum value, they must therefore offer a simple way to import or load data from external sources, to export or replicate their data for reporting and analysis, and to keep their data in sync with applications installed on premises. This demonstrates how vitally important the subject of SaaS integration actually is.

In one of his white papers, David Linthicum asserts that addressing SaaS-to-enterprise integration is really a matter of making informed and logical choices. The key factors behind those choices are the integration techniques used to exploit architectural patterns, the physical location of the integration engine, and, last but not least, the enabling technology. As the software as a service (SaaS) business grows rapidly, more and more software components are being moved to off-premise SaaS platforms to take advantage of what those platforms offer. The resulting requirement for integration between remote cloud platforms and the on-premise enterprise platforms that hold customer and corporate data, so that the data remains consistent, accurate, and secure, has therefore caught the serious attention of SaaS providers and captured their imagination.

Why is SaaS integration so difficult? The white paper illustrates the point with a medium-sized paper manufacturer that has just become a customer of the customer relationship management platform provided by Salesforce.com. The company currently uses an on-premise proprietary system, backed by an Oracle database, to keep track of its inventory and sales data.

The Salesforce.com platform brings a huge amount of value to the organization in managing customer relationships and sales. On the other hand, the data stored inside the Salesforce.com system (customer data, for instance) partially duplicates the data housed in the on-premise legacy system.

This is because Salesforce.com is a cloud-based service while the legacy system lives on premises. The "as is" condition is therefore murky and suffers from a broad range of costly inefficiencies. One of them is the need to enter and maintain data in two separate places, which ultimately drives up costs for the firm. Dual operation of this kind also inevitably degrades data quality, an additional and unavoidable source of dissatisfaction. In particular, data integrity problems naturally arise when data is updated through separate channels and there is no active synchronization between the SaaS and on-premise systems.

As a consequence, the data may contain inaccuracies and inconsistencies. Once the "to be" state is understood and defined, data synchronization technology emerges as the best fit between the source, Salesforce.com, and the destination, the existing Oracle-based legacy system. Such technology automatically mediates the differences between the two systems, including application semantics, security, interfaces, protocols, and native data formats. Data entered into the CRM system would then also reside in the legacy system, alongside other operational data such as inventory and items sold, and vice versa.

The end result is that all of the information in the legacy systems and in the SaaS-delivered systems is kept fully synchronized in both directions, so any problems with data quality or integrity that existed before the "to be" stage are completely resolved.

This makes it feasible to save several thousand dollars each month and to realize a quick return on investment (ROI) from the integration technology being adopted.
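A rough sketch of how one-way synchronization from Salesforce.com to an on-premise Oracle table might look is given below. It assumes the simple_salesforce and cx_Oracle client libraries; the credentials, the CUSTOMERS table, and its columns are invented for illustration, and a production solution would add change tracking, error handling, and bidirectional conflict resolution.

```python
# Sketch: pull Account records from the SaaS source and upsert them into an
# invented on-premise Oracle CUSTOMERS table, mediating field-name differences.
from simple_salesforce import Salesforce
import cx_Oracle

sf = Salesforce(username="user@example.com", password="secret",
                security_token="token")                   # hypothetical login
conn = cx_Oracle.connect("inventory_app", "secret", "dbhost/ORCLPDB1")
cur = conn.cursor()

# Read customer records from the source system.
records = sf.query("SELECT Id, Name, Phone FROM Account")["records"]

# Upsert into the destination system (MERGE = insert-or-update in Oracle).
merge_sql = """
    MERGE INTO customers c
    USING (SELECT :sfid AS sf_id, :name AS cust_name, :phone AS phone
           FROM dual) src
    ON (c.sf_id = src.sf_id)
    WHEN MATCHED THEN UPDATE SET c.cust_name = src.cust_name,
                                 c.phone = src.phone
    WHEN NOT MATCHED THEN INSERT (sf_id, cust_name, phone)
         VALUES (src.sf_id, src.cust_name, src.phone)
"""
for rec in records:
    cur.execute(merge_sql, {"sfid": rec["Id"],
                            "name": rec["Name"],
                            "phone": rec["Phone"]})
conn.commit()
```

In practice a job like this would run on a schedule or be driven by change events, and the reverse direction (legacy to CRM) would be handled symmetrically.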

Integration, which brings a sense of order to the chaos created by many disparate systems, networks, and services, has been a primary focus of study and research for students and academics for many years. The term "integration" refers to the process of bringing together a broad variety of separate components: different technologies, tools, standards, recommended procedures, patterns, metrics, and platforms. Integration is not a simple task, especially because escaping this tangled situation is a key obstacle that has to be overcome before moving forward.

The process of integration is genuinely complicated, and the web of application and data silos makes it harder still; as a consequence, adopting a best-in-class scheme that allows for flexible, forward-looking integration is often stressed. To discover an integration strategy that suits our requirements, the first step is to acquire a grasp of the distinct features and principles that govern SaaS services. Applications hosted in the cloud have the following limiting characteristics:

 The ever-evolving, dynamic character of the software as a service (SaaS) interface
 The ever-evolving, dynamic nature of the metadata that is unique to a SaaS provider such as Salesforce.com
 Managing assets that are located outside the perimeter of the network
 Dealing with the enormous volumes of data that need to be transferred between cloud-based and on-premise systems on a regular basis
 Ensuring that data quality and integrity are preserved

In light of the rapid accumulation of software as a service (SaaS) offerings on cloud infrastructures, it is imperative to reflect on the challenges that clouds present and to propose tried and tested solutions. If integration already causes problems at the local level, integration in the cloud will almost certainly be more challenging. The most likely reasons are as follows:

 New integration scenarios
 Access to the cloud may be limited
 Dynamic resources
 Performance

Dynamic Resources. The resources made available by the cloud are virtualized, and the cloud's primary emphasis is on the delivery of services; in other words, everything is conveyed and offered as a service to anyone who is interested. Because of the dynamism that now pervades the whole cloud ecosystem, both application versions and the underlying infrastructure are subject to continual change, and these factors inevitably affect the integration model. In other words, the cloud is where the integration of tightly coupled systems fails and breaks down. The low-level interfaces should therefore adhere to the Representational State Transfer (REST) architectural style, a simple style built on the standard methods that the HTTP protocol offers, and it is very clear that this is the road to take.
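A minimal sketch of such a REST-style, loosely coupled interaction is shown below, using Python's requests library against a hypothetical SaaS endpoint; the URL and JSON fields are invented for illustration.

```python
# REST-style integration sketch: the only contract is standard HTTP verbs
# and a resource URL, not a rigid, tightly coupled RPC interface.
# The endpoint and fields below are hypothetical.
import requests

base = "https://api.example-saas.com/v1"

# Read the current representation of a resource.
resp = requests.get(f"{base}/customers/42", timeout=10)
customer = resp.json()

# Modify it and write it back with a standard HTTP method.
customer["phone"] = "+1-555-0100"
requests.put(f"{base}/customers/42", json=customer, timeout=10)
```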

Performance. Clouds provide elasticity for resources and scalability for the programs that run on them. On the other hand, we cannot control the network distances between the many components that make up the cloud. In the great majority of integration scenarios the limiting factor is not bandwidth but round-trip latency, which cannot be avoided. Because each integration interaction with the cloud takes longer to complete, performance is almost guaranteed to suffer.
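The sketch below, against the same hypothetical endpoint as before, shows one practical response to this constraint: because every call pays the round-trip cost, batching many small requests into one larger request usually matters far more than raw bandwidth. The batch parameter is an assumption about the hypothetical API, not a real service.

```python
# Compare many small round trips with one batched request to a hypothetical
# endpoint; the URLs, parameters, and item counts are invented.
import time
import requests

base = "https://api.example-saas.com/v1"

start = time.perf_counter()
for i in range(50):                              # 50 separate round trips
    requests.get(f"{base}/items/{i}", timeout=10)
per_call = time.perf_counter() - start

start = time.perf_counter()
requests.get(f"{base}/items",                    # one round trip, same data
             params={"ids": ",".join(map(str, range(50)))},
             timeout=10)
batched = time.perf_counter() - start

print(f"50 calls: {per_call:.2f}s, batched: {batched:.2f}s")
```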

NEW INTEGRATION SCENARIOS

Before we could leverage the cloud model, we had to painstakingly stitch and link all of our local systems together. Now we must also connect local applications to the cloud and connect cloud applications to each other, which adds new permutations to the already intricate matrix of integration channels as the transition to a cloud-based paradigm approaches. Even the most basic scenario will require some mix of local and remote integration, since it is quite unlikely that everything will move to a cloud model all at once. It is also conceivable that some of our applications may never be exposed to the outside world, whether because of legislative constraints such as HIPAA and GLBA or because of more general security concerns. All of this points to the fact that integration necessarily involves crossing boundaries in one form or another.

Cloud Integration Scenarios. The three primary integration scenarios that we have identified are discussed below.

Within the bounds of a Public Cloud (figure 3.4). Two completely independent applications are hosted by the same cloud. Cloud integration middleware, also known as cloud-based enterprise service bus (ESB) or internet service bus (ISB) software, makes it feasible for these applications to communicate with one another in an unobstructed and uncomplicated way. A plausible variation on this scenario is that the two applications are owned by two distinct companies; even though they may occupy the same physical server, each runs on its own independent virtual machine.

Figure 3.4 within a public cloud.

Source: Cloud Computing: Principles and Paradigms, Rajkumar Buyya et al.

Across Homogeneous Clouds (figure 3.5). The applications to be integrated are hosted on cloud infrastructures dispersed around the globe. The integration middleware itself may live in any of three cloud environments: cloud 1, cloud 2, or a cloud of its own. The ISB carries out both the data transformation and the protocol transformation that are required. The process is, to a greater or lesser degree, analogous to the approach used to integrate enterprise business applications.

Figure 3.5 Across homogeneous cloud

Source: Cloud Computing: Principles and Paradigms, Rajkumar Buyya et al.

Figure 3.6 Across heterogeneous clouds

Source: Cloud Computing: Principles and Paradigms, Rajkumar Buyya et al.

Across Heterogeneous Clouds (figure 3.6). One application runs in a public cloud while the other runs in a private cloud.

As noted earlier, this is the scenario that predominates in cloud integration today. In other words, businesses are subscribing to popular on-demand enterprise software from well-known providers such as Salesforce.com and Ramco Systems (http://www.ramco.com/), covering customer relationship management (CRM), enterprise resource planning (ERP), and a variety of similar services. The first two scenarios become more likely once a number of commercial clouds are available and cloud services are used extensively. When that happens, service integration and composition will grow into a crucial and prominent component of global computing.

THE INTEGRATION METHODOLOGIES

Excluding custom integration through hand-coding, there are three approaches to cloud integration:

 Traditional Enterprise Integration Tools can be empowered with special connectors to access Cloud-located Applications— For IT organizations that have already made significant investments in integration suites to meet their application integration requirements, this is the most probable path. As the need to access and integrate cloud applications grows, specialized drivers, connectors, and adapters are being developed and added to existing integration platforms to provide bidirectional interaction with the participating cloud services. As noted earlier, well-known and pioneering enterprise integration methods and platforms such as EAI/ESB are being suitably enabled, configured, and customized to access and exploit the expanding variety of cloud applications as well. Integration appliances are also becoming more popular in the market as a means of achieving improved performance.

 Traditional Enterprise Integration Tools are hosted in the Cloud— This strategy is quite similar to the previous one; the only significant difference is that the integration software suite is now hosted on a third-party cloud infrastructure, so the organization does not need to acquire and administer the hardware or install the integration software. This option is a good fit for IT companies that outsource integration projects to IT service organizations and systems integrators who have the expertise and resources to build and deploy integrated systems. With this strategy, corporate IT departments need not worry about the up-front investment in high-end computing machines and integration packages, or about maintaining them, while systems integrators can concentrate solely on their core capabilities: the design, development, testing, and deployment of integrated systems. It is an excellent match for cloud-to-cloud (C2C) integration, although a secure VPN tunnel is required to access on-premise business data. Informatica PowerCenter Cloud Edition on Amazon EC2 is a good example of a hosted integration solution.

 Integration-as-a-Service (IaaS) or On-Demand Integration Offerings— These are SaaS applications designed to deliver integration services securely over the Internet, capable of connecting cloud applications with on-premise systems, cloud applications with one another, and on-premise applications with each other. This strategy is an excellent match for businesses that place a high priority on ease of use, ease of maintenance, and speed of deployment, and that are working with a limited budget. It appeals to businesses of all sizes, from major corporations with departmental application deployments to small and medium-sized businesses, and it also suits companies that want their SaaS administrator or business analyst, rather than dedicated integration specialists, to be the primary resource for managing and maintaining their integration efforts. Informatica On-Demand Integration Services is a good example of this kind of service.

In a nutshell, the integration requirements can be realized using any one of the following methods and middleware products:

1. A hosted and extended Internet service bus (also known as a cloud integration bus)
2. Online message queues, brokers, and hubs
3. Wizard-driven, configuration-based integration platforms (niche integration solutions)
4. Service portfolio integration approaches
5. Appliance-based integration (either standalone or hosted)

As cloud computing has proliferated, the scope of integration work has grown substantially, and more and more people are searching for reliable and dependable solutions and services to accelerate and simplify the whole process of integration.

Integrity and Reliability of Integration Products and Solutions. Connectivity, semantic mediation, data mediation, integrity, security, and governance are among the essential characteristics of integration platforms and backbones, distilled from experience with previous integration projects.

 Connectivity means that the integration engine can interact with both the source system and the destination system through their respective native interfaces. This entails leveraging whatever interface each system offers, whether a standards-based interface such as web services or an older, proprietary one. The systems being linked remain responsible for internalizing information after it has been processed by the integration engine and for externalizing the appropriate information to be shared with other systems.

 Semantic Mediation means the capacity to account for the differences between the application semantics of two or more systems. In the context of information systems, "semantics" refers to how information is understood, processed, and represented. When two separate, distributed systems are connected, the variations in their respective semantics, which are unique to each system, must be reconciled.

 Data Mediation transforms data from a source data format into a destination data format. Data mediation, also known as data transformation, is the act of converting data from its native format on the source system into a format suitable for the destination system, and it is often performed in conjunction with semantic mediation; a small sketch appears after this list.

 Data Migration refers to the process of moving information from one kind of storage medium or format to another. It involves mapping the data stored in an older system to the data model of a newer system, and this mapping process often makes use of data extraction and data loading technologies.

 Data Security means the capacity to ensure that information retrieved from source systems is placed in target systems in a safe manner. The integration technique has to make use of the native security mechanisms of both the source and destination systems, arbitrate any discrepancies between them, and offer a means by which information can be transferred securely between the linked systems.
 Data Integrity means that the data are complete and consistent. Integrity must therefore be ensured whenever data is mapped and maintained through integration activities such as the synchronization of data between on-premise and SaaS-based systems.
 Governance refers to the procedures and technologies that surround a system or set of systems and regulate how those systems are accessed and used. From the integration viewpoint, governance means controlling changes to fundamental information resources such as the semantics, structure, and interfaces of data.

These are the prominent qualities to analyze carefully and critically when selecting cloud / SaaS integration providers.
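As a small, hypothetical illustration of the data mediation characteristic referenced in the list above, the sketch below reshapes a record from an invented source format into an invented destination format, handling field renames and a date-format change.

```python
# Data mediation sketch: convert one record from the source system's native
# format into the destination system's format. All field names and formats
# are made up for the example.
from datetime import datetime

def mediate(source_record: dict) -> dict:
    """Map a source-format record to the destination format."""
    return {
        "CUST_NAME": source_record["customerName"].upper(),
        "PHONE_NO": source_record["phone"].replace("-", ""),
        # Source uses ISO dates; destination expects DD-MON-YYYY.
        "CREATED": datetime.fromisoformat(source_record["createdAt"])
                           .strftime("%d-%b-%Y").upper(),
    }

print(mediate({"customerName": "Acme Paper",
               "phone": "555-0100",
               "createdAt": "2011-04-05"}))
```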

 Data Integration Engineering Lifecycle. An efficient data integration lifecycle is essential because most corporate data is still maintained on locally situated, on-premise servers and storage equipment. According to David Linthicum, a well-known integration specialist, the most important steps are understanding, defining, designing, implementing, and testing.

 Understanding the problem domain means defining the metadata that is native to both the source system (say, Salesforce.com) and the destination system (say, an on-premise inventory system). Once this is done, there is a comprehensive understanding of the semantics of both systems; if additional systems are to be integrated, the same procedure must be carried out for each of them.

 Definition refers to taking the information identified during the previous phase and defining it at a high level: what the information represents, who owns it, and what its physical properties are. This goes beyond the raw metadata and contributes to a deeper understanding of the data being worked with, ensuring that the integration proceeds in the desired direction from here on.

 Design the integration solution around the flow of data from one location to another, accounting for the changes in semantics along the way. This is accomplished through an underlying data transformation and mediation layer that maps the source schema to the target schema. The design spells out how data is to be extracted from one or more systems, transformed so that it appears native, and then updated in the target system or systems; visual mapping technology is increasingly used for this task. Security and governance must also be considered and reflected in the design of the data integration solution.

 Implementation refers to actually putting the data integration solution into action within the chosen technology. This involves connecting the source and destination systems, implementing the integration processes created in the previous phase, and completing whatever further work is needed to get the data integration solution up and running.

 Testing means making sure that the integration is designed and implemented correctly and that the data synchronizes properly across all of the participating systems. This entails placing known test data in the source system and monitoring how it flows into the target system, and it requires examining not only the overall performance, durability, security, modifiability, and sustainability of the integrated systems but also confirming that the data mediation mechanisms operate as intended; a tiny sketch of such a check follows this list.
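Below is a tiny, hypothetical sketch of that testing step: known test records are pushed through a made-up mapping function and the result is compared against the values expected in the target system.

```python
# Testing sketch: verify that known source test data maps to the expected
# target representation. The mapping function and records are hypothetical.
def map_record(src: dict) -> dict:
    return {"CUST_NAME": src["name"].upper(), "SF_ID": src["id"]}

test_source = [{"id": "001A", "name": "Acme Paper"},
               {"id": "001B", "name": "Beta Mills"}]
expected_target = [{"CUST_NAME": "ACME PAPER", "SF_ID": "001A"},
                   {"CUST_NAME": "BETA MILLS", "SF_ID": "001B"}]

assert [map_record(r) for r in test_source] == expected_target
print("mapping verified for", len(test_source), "test records")
```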

SAAS INTEGRATION PRODUCTS AND PLATFORMS

Cloud-centric integration solutions are now being developed and tested to demonstrate their potential for integrating enterprise software with cloud-based systems. Integration has long been the most challenging task because of the complexity induced by heterogeneity and multiplicity. With the recent rise and broad acceptance of cloud computing as a paradigm shift, every piece of information and communications technology (ICT) is being turned into a collection of services delivered over the open Internet. So that any and all integration needs, arising anywhere on the planet, can be addressed simply, inexpensively, and promptly, standards-compliant integration suites are themselves being transformed into services. The need for data integration solutions has grown significantly faster than the need for service- or message-based application integration.

Such products are therefore receiving a great deal of attention. Application and service integration, however, will become more necessary as the industry matures, and interoperability will undoubtedly take center stage as the main issue. Since clouds are positioned as the next-generation infrastructure for developing, deploying, and delivering swarms of ambient, creative, adaptive, and flexible services, composition and collaboration will become essential to their broad use; indeed, the mainstream adoption of clouds will depend heavily on them. Interoperability across clouds, as seen in figure 3.6, is the most crucial requirement for the development of cloud peers, clusters, fabrics, and grids. Cloud computing and software as a service (SaaS) have together produced substantial advances in business and IT strategy: while the Internet remains the main communication backbone, SaaS applications are increasingly hosted on cloud computing infrastructures.

The recent availability of these infrastructures and game-changing ideas has proved to be an extraordinary benefit in the midst of global financial crisis and uncertainty, helping organizations achieve the stated goal of doing "more with less." Applications are being moved carefully to clouds and then made accessible as services dispersed across the Internet to user agents and individuals through the widely used web browsers. Having already sparked a great deal of discussion about newer business, pricing, delivery, and accessibility models, this unprecedented adoption is expected to inspire a stream of further innovations.

Cloud services will become both ubiquitous and genuinely beneficial. The finished solution will combine on-demand information technology with value-added business transformation, augmentation, and optimization. Amid all the enthusiasm and promise, however, certain limiting constraints must be carefully considered and dealt with in order to create a larger environment for intelligent collaboration. Integration is one of these problems, which is why experts are now outlining a number of different approaches, and why the development of integration platforms, patterns, processes, and best practices for their particular sectors is a top priority for product makers, consulting organizations, and service providers.

This topic can be approached from both a broad and a narrow (or specialty) perspective. The studies and recommendations being made for pure SaaS middleware as well as for standalone middleware solutions are based on the "as-is" situation and the "to-be" goal. An internet service bus (ISB), also referred to as an internet-scale enterprise service bus, is now being heralded as the next breakthrough in the cloud space, because the business and technical use cases for cloud middleware suites keep evolving and expanding. In this chapter we have articulated the need for an innovative, future-oriented ISB that can accelerate and simplify the integration of various clouds (public, private, and hybrid).

