
Web Services in Cloud Computing

The Internet is the worldwide connectivity of hundreds of thousands of computers belonging to many different networks.

A web service is a standardized method for exchanging messages between client and server
applications on the World Wide Web. A web service is a software module that aims to
accomplish a specific set of tasks. In cloud computing, web services can be discovered and
invoked over a network.

The web service provides its functionality to the client that invoked it.

A web service is a set of open protocols and standards that allow data exchange between
different applications or systems. Web services can be used by software programs written in
different programming languages and running on different platforms to exchange data over
computer networks such as the Internet. In the same way, they can be used for inter-process
communication on a single computer.

Any software, application, or cloud technology that uses standardized web protocols (HTTP or
HTTPS) to connect, interoperate, and exchange data messages over the Internet, usually in XML
(Extensible Markup Language), is considered a web service.

Web services allow programs developed in different languages to communicate as client and
server by exchanging data over the network. A client invokes a web service by submitting an
XML request, to which the service responds with an XML response.
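As a rough illustration in Python (standard library only; the endpoint URL and the XML payload are hypothetical, standing in for a real service):

    import urllib.request

    # Hypothetical service endpoint; a real web service publishes its own URL.
    url = "http://example.com/priceservice"

    # The XML request the client submits to invoke the service.
    request_xml = b"<getPrice><item>book</item></getPrice>"

    req = urllib.request.Request(
        url,
        data=request_xml,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )

    # The service answers with an XML response, e.g. <price>9.99</price>.
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))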

Web services functions:

o It can be accessed via the Internet or an intranet network.

o It uses a standardized XML messaging protocol.

o It is independent of any operating system or programming language.

o It is self-describing via the XML standard.

o It can be discovered through a simple location mechanism.

Web Service Components


XML and HTTP form the most fundamental web service platform. All typical web services use the
following components:

1. SOAP (Simple Object Access Protocol)

SOAP stands for "Simple Object Access Protocol". It is a transport-independent messaging
protocol. SOAP is built on sending XML data in the form of SOAP messages; an XML document
is attached to each message.

Only the structure of an XML document, not its content, follows a pattern. The great thing about
web services and SOAP is that everything is sent over HTTP, the standard web protocol.

Every SOAP document requires a root element known as the Envelope element. In an XML
document, the root element is the first element.

The "envelope" is divided into two halves. The header comes first, followed by the body.
Routing data, or information that directs the XML document to which client it should be sent, is
contained in the header. The real message will be in the body.
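A minimal sketch of this structure (the namespace is the standard SOAP 1.1 envelope namespace; the routing comment and the body payload shown are illustrative, not from the text):

    # A SOAP message is an XML document whose root element is the Envelope,
    # split into a Header (routing data) and a Body (the real message).
    soap_message = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <!-- routing data: directs the document to the right recipient -->
      </soap:Header>
      <soap:Body>
        <!-- the real message -->
        <getPrice><item>book</item></getPrice>
      </soap:Body>
    </soap:Envelope>"""
    print(soap_message)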

2. UDDI (Universal Description, Discovery, and Integration)

UDDI is a standard for describing, publishing, and discovering online service providers. It provides
a specification that helps in hosting the data about web services. UDDI provides a repository
where WSDL files can be hosted, so that a client application can search the WSDL file to learn
about the various actions provided by the web service. As a result, the client application has
full access to UDDI, which acts as the database for all WSDL files.

The UDDI registry keeps the information needed to find online services, much as a telephone
directory contains the name, address, and phone number of a certain person, so that client
applications can find where a service is.
3. WSDL (Web Services Description Language)

The client invoking the web service must know the location of the web service; if a
web service cannot be found, it cannot be used. Second, the client application must understand
what the web service does in order to invoke the correct web service. This is accomplished with
WSDL, the Web Services Description Language. A WSDL file is another XML-based file that
describes what a web service does to a client application. Using the WSDL document, the client
application understands where the web service is located and how to access it.
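As a sketch of how a client consumes a WSDL in practice, the third-party Python library zeep can read the WSDL and expose the service's operations as ordinary method calls (the WSDL URL and the GetPrice operation are hypothetical):

    from zeep import Client  # third-party SOAP client: pip install zeep

    # The WSDL document tells the client where the service lives and how to call it.
    client = Client("http://example.com/priceservice?wsdl")

    # Operations described in the WSDL become callable methods.
    result = client.service.GetPrice(item="book")
    print(result)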

How does a web service work?

In simplified form, a web service functions as follows: the client uses requests to send a sequence
of web service calls to the server hosting the actual web service.

These requests are made using remote procedure calls; calls to the methods hosted by
the respective web service are known as Remote Procedure Calls (RPC). Example: Flipkart
provides a web service that displays the prices of items offered on Flipkart.com. The front end or
presentation layer can be written in .NET or Java, but the web service can be communicated with
from either programming language.
XML, the format of the data exchanged between the client and the server, is the most important
part of web service design. XML (Extensible Markup Language) is a simple, intermediate
language understood by various programming languages; it is a counterpart of HTML.

As a result, when programs communicate with each other, they use XML. It forms a common
platform for applications written in different programming languages to communicate with each
other.

Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between
applications, and the data is sent over standard HTTP. The data a web service sends to an
application is a SOAP message, and a SOAP message is nothing more than an XML document.
Because the content is written in XML, the client application that calls the web service can be
built in any programming language.

Features of Web Service


Web services have the following characteristics:

(a) XML-based: A web service's information representation and data transport layers employ
XML. Using XML eliminates any networking, operating system, or platform binding, and web
service-based applications are highly interoperable at their core level.

(b) Loosely Coupled: The consumer of a web service is not tied directly to that service provider.
The interface of a web service provider may change over time without affecting the consumer's
ability to interact with the service provider. A tightly coupled system, by contrast, means that the
client and server logic are inextricably linked: if one interface changes, the other must be
updated.

A loosely coupled architecture makes software systems more manageable and allows simpler
integration between different systems.

(c) Ability to be synchronous or asynchronous: Synchronicity refers to the binding of the client
to the execution of the service. In synchronous invocation, the client is blocked and must wait for
the service to complete its operation before continuing. Asynchronous operations allow a client
to initiate a task and continue with other tasks while the service runs.

Synchronous clients receive their result immediately when the service completes, whereas
asynchronous clients retrieve their result at a later point. Asynchronous capability is a key factor
in enabling loosely coupled systems.
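A small sketch of the difference using Python's standard library (slow_service stands in for a remote web service call; the delay is arbitrary):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def slow_service(item):
        time.sleep(2)  # stand-in for a slow remote web service call
        return f"price of {item}: 9.99"

    # Synchronous invocation: the client blocks until the service completes.
    print(slow_service("book"))

    # Asynchronous invocation: the client initiates the call, keeps working,
    # and retrieves the result at a later point.
    with ThreadPoolExecutor() as pool:
        future = pool.submit(slow_service, "book")
        print("doing other work while the service runs...")
        print(future.result())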

(d) Coarse-Grained: Object-oriented systems, such as Java, make their services available through
individual methods. At the corporate level, an individual method is too fine an operation to be
useful. Building a Java application from the ground up requires the development of several
fine-grained methods, which are then combined into a coarse-grained service that is consumed
by a client or another service.

Business services should be coarse-grained, as should the interfaces they expose. Building web
services is an easy way to define coarse-grained services that have access to substantial business
logic.

(e) Supports remote procedure calls: Consumers can use XML-based protocols to call
procedures, functions, and methods on remote objects through web services. A web service must
support the input and output framework of the remote system.

Enterprise-wide component development, with Enterprise JavaBeans (EJBs) and .NET
components, has become more prevalent in architectural and enterprise deployments over the
years. Several RPC techniques are used to both allocate and access them.

A web service can support RPC by providing services of its own, similar to a traditional
component, or by translating incoming invocations into an invocation of an EJB or .NET component.

(f) Supports document exchange: One of the most attractive features of XML is its generic way
of representing not only data but also complex documents, and web services support the
transparent exchange of such documents.
App Engine
Azure Platform
Platform as a service (PaaS) is a deployment and development environment in the cloud that can
deliver everything from simple cloud-based apps to complex, cloud-enabled applications. PaaS is
designed to support the complete web application lifecycle of building, testing, deploying,
managing, and updating.

PaaS includes a complete infrastructure of servers, storage, and networking, as well as middleware
and development tools like business intelligence (BI) services, database management systems, etc.
PaaS offers a complete platform on which clients can host their applications without worrying
about maintaining the servers and their operating systems. However, the user of the PaaS service
should monitor the deployed application in order to decide whether to scale it up or down
depending on the traffic the application receives.

Figure 2 Source: Microsoft


The PaaS backbone utilizes virtualization techniques, where the virtual machine is independent
of the actual hardware that hosts it.

Azure Cloud Services has two main components: the application files, such as the source code and
DLLs, and the configuration file. Together these two spin up a combination of Worker Roles and
Web Roles. With Cloud Services, Azure handles all the hard work of the operating systems on
your behalf, so that the full focus is on building a quality application for the end users.

The Web Role is an Azure VM that is preconfigured as a web server running IIS (Internet
Information Services), which automatically loads the developed application when the virtual
machine boots up. This creates the public endpoint for the application, usually in the form of a
website, though it could be an API or similar.

The Worker Role runs alongside the Web Role and performs the computing functions needed for
the smooth operation of your application. The Web Role accepts the user's input and queues it up
as an action for the Worker Role to process later, as in the sketch below. This keeps the Web Role
more productive and responsive.
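A rough sketch of this hand-off using the Azure Storage Queue SDK for Python (the connection string, queue name, and message are placeholders; a queue between the two roles is one common way to implement the pattern, not the only one):

    from azure.storage.queue import QueueClient  # pip install azure-storage-queue

    queue = QueueClient.from_connection_string(
        conn_str="<storage-account-connection-string>",  # placeholder
        queue_name="tasks",
    )

    # Web Role side: accept the user's input and queue it for later processing.
    queue.send_message("resize-image:photo42.png")

    # Worker Role side: pull queued actions and process them in the background.
    for msg in queue.receive_messages():
        print("processing", msg.content)
        queue.delete_message(msg)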

Azure PaaS services


Azure offers five main services of Platform as a Service, in which multiple service types host a
custom application or business logic for specific use cases:

1. Web apps

These are an abstraction of a web server such as IIS or Tomcat that runs applications written
mostly in Java, Python, .NET, PHP, Node.js, etc. They are simple to set up and provide a variety
of benefits, a key one being that they are available 99.9% of the time.
2. Mobile apps

The back ends of mobile apps can easily be hosted on Azure PaaS using the SDKs available
for all major mobile operating systems: iOS, Android, Windows, etc. Mobile apps enable the
unique ability of offline sync, so users can use the app even while offline and sync their data back
when they come online again. Another major benefit is push notifications, which allow custom
notifications to be sent to all targeted application users.

3. Logic apps

Logic apps do not host an application; rather, they orchestrate business logic to automate a
business process. They are initiated by a trigger when a predefined business condition is met.

4. Functions

Function apps can perform multiple tasks within the same application. They host smaller
applications, such as microservices and background jobs, that only run for short periods.
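A minimal sketch using the Azure Functions Python programming model (the HTTP trigger and the greeting logic are illustrative):

    import azure.functions as func

    # A small, short-lived piece of logic: runs on each HTTP trigger and exits.
    def main(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!")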

5. Web jobs

These are part of a service that runs within an app service on web apps or mobile apps. They are
similar to Functions but do not require any coding to set up.

Where PaaS is used

PaaS is often used in business organizations in the following scenarios:

Development Framework

PaaS offers application developers the ability to create applications using built-in software
components of PaaS, such as scalability, multi-tenancy, and high availability, which greatly
reduces the amount of coding the developers must do, making the development life cycle
significantly shorter.

Analytics/Business intelligence (BI)

The additional intelligence tools of PaaS allow organizations to mine and analyze both user
behavioral data and application data, predicting outcomes to improve product design and
business decisions and to increase the return on investment by analyzing insights and
application usage patterns.

Along with the scenarios mentioned earlier, PaaS includes additional services, such as security
and workflow scheduling, that give users a stable PaaS platform and enhance the hosted
applications. It adds new capabilities without the need for additional staff with the specific skills
to implement these features.

Aneka
Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing. It acts as a
framework for building customized applications and deploying them on either public or private
Clouds. One of the key features of Aneka is its support for provisioning resources on different
public Cloud providers such as Amazon EC2, Windows Azure, and GoGrid. Here we present the
Aneka platform and its integration with one of the public Cloud infrastructures, Windows Azure,
which enables the use of the Windows Azure Compute Service as a resource provider for Aneka
PaaS. The integration of the two platforms allows users to leverage the power of the Windows
Azure Platform for Aneka Cloud Computing, employing a large number of compute instances to
run their applications in parallel. Furthermore, customers of the Windows Azure platform can
benefit from the integration with Aneka PaaS by embracing the advanced features of Aneka in
terms of multiple programming models, scheduling and management services, application
execution services, accounting and pricing services, and dynamic provisioning services. Finally,
in addition to the Windows Azure Platform, we illustrate the integration of Aneka PaaS with
other public Cloud platforms such as Amazon EC2 and GoGrid, and with virtual machine
management platforms such as XenServer. The support for provisioning resources on Windows
Azure once again demonstrates the adaptability, extensibility, and flexibility of Aneka.

Open challenges
1. Security issues

Security risks of cloud computing became the top concern in 2018, as 77% of respondents stated
in the referenced survey. For the longest time, the lack of resources/expertise was the number one
voiced cloud challenge; in 2018, however, security inched ahead.
We already mentioned the hot debate around data security in our business intelligence
trends 2019 article, and security has indeed been a primary, and valid, concern from the start of
cloud computing technology: you are unable to see the exact location where your data is stored
or processed. This increases the cloud computing risks that can arise during the implementation
or management of the cloud. Headlines highlighting data breaches, compromised credentials,
broken authentication, hacked interfaces and APIs, and account hijacking haven't helped alleviate
concerns. All of this makes trusting sensitive and proprietary data to a third party hard to stomach
for some, highlighting the challenges of cloud computing. Luckily, as cloud providers and users
mature, security capabilities are constantly improving. To ensure your organization's privacy and
security are intact, verify that the SaaS provider has secure user identity management,
authentication, and access control mechanisms in place. Also, check which database privacy and
security laws they are subject to.

While you are auditing a provider’s security and privacy laws, make sure to also confirm the
third biggest issue is taken care of: compliance. Your organization needs to be able to comply
with regulations and standards, no matter where your data is stored. Speaking of storage, also
ensure the provider has strict data recovery policies in place.

The security risks of cloud computing have become a reality for every organization, be it small
or large. That’s why it is important to implement a secure BI cloud tool that can leverage proper
security measures.

2. Cost management and containment

The next part of our cloud computing risks list involves costs. For the most part, cloud computing
can save businesses money. In the cloud, an organization can easily ramp up its processing
capabilities without making large investments in new hardware. Businesses can instead access
extra processing power through pay-as-you-go models from public cloud providers. However,
the on-demand and scalable nature of cloud computing services sometimes makes it difficult to
define and predict quantities and costs.
Luckily, there are several ways to keep cloud costs in check: for example, optimizing costs by
conducting better financial analytics and reporting, automating policies for governance, or
keeping the management reporting practice on course, so that these cloud computing issues can
be reduced.

3. Lack of resources/expertise

One of the cloud challenges companies and enterprises are facing today is lack of resources
and/or expertise. Organizations are increasingly placing more workloads in the cloud while cloud
technologies continue to rapidly advance. Due to these factors, organizations are having a tough
time keeping up with the tools. Also, the need for expertise continues to grow. These challenges
can be minimized through additional training of IT and development staff. A strong CIO
championing cloud adoption also helps. As Cloud Engineer Drew Firment puts it:

“The success of cloud adoption and migrations comes down to your people — and the
investments you make in a talent transformation program. Until you focus on the #1 bottleneck
to the flow of cloud adoption, improvements made anywhere else are an illusion.”

SME (small and medium-sized) organizations may find adding cloud specialists to their IT teams
to be prohibitively costly. Luckily, many common tasks performed by these specialists can be
automated. To this end companies are turning to DevOps tools, like Chef and Puppet, to perform
tasks like monitoring usage patterns of resources and automated backups at predefined time
periods. These tools also help optimize the cloud for cost, governance, and security.

4. Governance/Control

There are many challenges facing cloud computing, and governance/control sits at number 4.
Proper IT governance should ensure IT assets are implemented and used according to agreed-
upon policies and procedures; ensure that these assets are properly controlled and maintained,
and ensure that these assets are supporting your organization’s strategy and business goals.

In today's cloud-based world, IT does not always have full control over the provisioning, de-
provisioning, and operations of infrastructure. This has increased the difficulty for IT to provide
the governance, compliance, risk and data quality management required. To mitigate the various
risks and uncertainties in transitioning to the cloud, IT must adapt its traditional IT governance
and control processes to include the cloud. To this effect, the role of central IT teams in the cloud
has been evolving over the last few years. Along with business units, central IT is increasingly
playing a role in selecting, brokering, and governing cloud services. On top of this, third-party
cloud computing/management providers are progressively providing governance support and
best practices.

5. Compliance

One of the risks organizations face in cloud computing today is compliance, an issue for anyone
using backup services or cloud storage. Every time a company moves data from internal storage
to the cloud, it must comply with industry regulations and laws. For
example, healthcare organizations in the USA have to comply with HIPAA (Health Insurance
Portability and Accountability Act of 1996), public retail companies have to comply with SOX
(Sarbanes-Oxley Act of 2002) and PCI DSS (Payment Card Industry Data Security Standard).

Depending on the industry and requirements, every organization must ensure these standards are
respected and carried out.

This is one of the many challenges facing cloud computing, and although the procedure can take
a certain amount of time, the data must be properly stored.

Cloud customers need to look for vendors that can provide compliance and check if they are
regulated by the standards they need. Some vendors offer certified compliance, but in some
cases, additional input is needed on both sides to ensure proper compliance regulations.

6. Managing multiple clouds

Challenges facing cloud computing haven’t just been concentrated in one, single cloud.
The state of multi-cloud has grown exponentially in recent years. Companies are shifting or
combining public and private clouds and, as mentioned earlier, tech giants like Alibaba and
Amazon are leading the way.

In the referred survey, 81 percent of enterprises have a multi-cloud strategy. Enterprises with a
hybrid strategy (combining public and private clouds) fell from 58 percent in 2017 to 51 percent
in 2018, while organizations with a strategy of multiple public clouds or multiple private clouds
grew slightly.

While organizations leverage an average of almost 5 clouds, it is evident that the use of the cloud
will continue to grow. That’s why it is important to answer the main questions organizations are
facing today: what are the challenges for cloud computing and how to overcome them?

7. Performance

When a business moves to the cloud, it becomes dependent on its service providers, and the next
prominent challenges of moving to cloud computing expand on this partnership. This partnership
often provides businesses with innovative technologies they wouldn't otherwise be able to
access. On the other hand, the performance of the organization's BI and other cloud-based
systems is tied to the performance of the cloud provider: when your provider is down, you are
also down.

This isn't uncommon; over the past couple of years all the big cloud players have experienced
outages. Make sure your provider has the right processes in place and that they will alert you if
there is ever an issue.

For the data-driven decision making process, real-time data is imperative for organizations.
Being able to access data stored in the cloud in real time is one of the essential capabilities an
organization has to consider while selecting the right partner.

With an inherent lack of control that comes with cloud computing, companies may run into real-
time monitoring issues. Make sure your SaaS provider has real-time monitoring policies in place
to help mitigate these issues.

8. Building a private cloud

Although building a private cloud isn’t a top priority for many organizations, for those who are
likely to implement such a solution, it quickly becomes one of the main challenges facing cloud
computing – private solutions should be carefully addressed.

Creating an internal or private cloud brings a significant benefit: having all the data in-house.
But IT managers and departments will need to build and glue it all together by themselves, which
can make it one of the most difficult challenges of moving to cloud computing.

It is also important to keep in mind the steps needed to ensure the smooth operation of the
cloud:
 Automating as many manual tasks as possible (which would require an inventory management
system)
 Orchestrating tasks, ensuring that each of them is executed in the right order.

As this article stated: the cloud software layer has to grab an IP address, set up a virtual local
area network (VLAN), put the server in the load balancing queue, put the server in the firewall
rule set for the IP address, load the correct version of RHEL, patch the server software when
needed and place the server into the nightly backup queue.
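A toy sketch of that orchestration in Python (every step here is a hypothetical stub; the point is only that each task runs, in the right order, for every server):

    # Hypothetical provisioning steps, in the order the article describes.
    def grab_ip_address(h): print(h, "IP allocated")
    def setup_vlan(h): print(h, "VLAN configured")
    def add_to_load_balancer(h): print(h, "in load balancing queue")
    def add_firewall_rule(h): print(h, "firewall rule set")
    def install_os(h): print(h, "OS image loaded")
    def patch_software(h): print(h, "software patched")
    def schedule_backup(h): print(h, "in nightly backup queue")

    def provision_server(hostname):
        steps = [grab_ip_address, setup_vlan, add_to_load_balancer,
                 add_firewall_rule, install_os, patch_software, schedule_backup]
        for step in steps:  # orchestration: each step in the right order
            step(hostname)

    provision_server("node01")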

That being said, it is obvious that developing a private cloud is no easy task, but nevertheless,
some organizations still manage it and plan to do so in the coming years.

9. Segmented usage and adoption

Most organizations did not have a robust cloud adoption strategy in place when they started to
move to the cloud. Instead, ad-hoc strategies sprouted, fueled by several factors. One of them was
the speed of cloud adoption. Another was the staggered expiration of data center
contracts/equipment, which led to intermittent cloud migration. Finally, there were also
individual development teams using the public cloud for specific applications or projects. These
bootstrapped environments have fostered full integration and maturation issues, including:

 Isolated cloud projects lacking shared standards


 Ad hoc security configurations
 Lack of cross-team shared resources and learnings

In fact, a recent survey by IDC of 6,159 executives found that just 3% of respondents define their
cloud strategies as "optimized". Luckily, centralized IT, strong governance and control policies,
and some heavy lifting can bring usage, adoption, and cloud computing strategies in line.

Nearly half of decision makers believe that their IT workforce is not completely prepared to
address cloud computing industry challenges and to manage their cloud resources over the
next 5 years. Since businesses are adopting the cloud strategy more often than ever, it is evident
that the workforce should keep up and carefully address the potential issues.
10. Migration

One of the main cloud computing industry challenges of recent years concerns migration: the
process of moving an application to the cloud. And although deploying a new application in the
cloud is a straightforward process, moving an existing application to a cloud environment raises
many cloud challenges.

A recent survey conducted by Velostrata showed that over 95% of companies are currently
migrating their applications to the cloud, and over half of them find it more difficult than
expected, with projects running over budget and behind deadline.

What challenges are faced during migration to the cloud? Most commonly cited were:

 Extensive troubleshooting
 Security challenges
 Slow data migrations
 Migration agents
 Cutover complexity
 Application downtime

Another, older survey paints a vivid picture of the migration to the cloud: IT professionals stated
they would rather "get a root canal, dig a ditch, or do their own taxes" than address challenges in
cloud computing regarding the deployment process.

Scientific applications
With the popularity of Cloud Computing, running complex scientific applications has become
more accessible to the research community, which can obtain on-demand compute resources in
minutes instead of waiting in queues for compute jobs and experiencing peak-demand
bottlenecks. Cloud Computing offers great potential for scientific applications, but Clouds have
been designed for running business and web applications, whose resource requirements differ
from those of communication-intensive, tightly coupled scientific applications, which typically
require low-latency, high-bandwidth interconnects and parallel file systems to achieve best
performance. Most commercial Clouds use commodity networking and storage devices, which
are suitable for effectively hosting loosely coupled scientific applications that frequently require
large amounts of computation with modest data requirements and infrequent communication
among tasks. Several studies have shown that Cloud Computing is a viable platform for running
loosely coupled scientific applications and workflow applications composed of loosely coupled
parallel applications, consisting of a set of computational tasks linked via data and control
dependencies. Questions about new capabilities, challenges, and performance behavior have not
been conclusively answered and need to be addressed for each type of scientific application in
the Cloud Computing environment. The rest of the paper is organized as follows. Section 2
presents the Cloud Computing paradigm, deployment models relevant to scientific applications,
and HPC Cloud solutions. Section 3 describes scientific computing applications. In Section 4, we
present Cloud Computing benefits and challenges for scientific applications. Section 5 discusses
future trends in running scientific applications in the Cloud Computing environment. We
conclude the paper with a summary in Section 6.

Scientific computing applications


Scientific applications in the Cloud Computing environment, based on their resource
requirements, can be classified into tightly coupled and loosely coupled scientific applications.

Tightly coupled applications

Supercomputers and clusters have traditionally executed tightly coupled applications within a
particular machine over low-latency interconnects, which make it possible to share data very
rapidly between a large number of processors working on the same problem, with the message
passing interface (MPI) used to achieve inter-process communication. These systems are
typically optimized to maximize the number of operations per second. Tightly coupled
applications are common classes of scientific HPC applications that require a low-latency,
high-bandwidth network because frequent communication is necessary. Examples include
domain decomposition solvers, linear algebra, FFTs, N-body systems, etc.

Loosely coupled applications

Grids have been the preferred platform for more loosely coupled applications that are managed
and executed through workflow systems. In contrast to HPC (tightly coupled applications), the
loosely coupled applications are known to make up high throughput computing (HTC). HTC is a
computing paradigm that focuses on the efficient execution of a large number of loosely coupled
tasks. Tasks can be executed on clusters or using grid technologies because of their low parallel
communication requirements. HTC systems are optimized to maximize throughput over a long
period of time (jobs per month or year). These applications are also called embarrassingly or
pleasingly parallel applications; they can be parallelized with minimal effort and, together with
MapReduce-based frameworks, are good candidates for running in a Cloud Computing
environment with commodity interconnects. Many scientific applications fall into this category:
a few examples are Monte Carlo simulations, which involve large numbers of compute cycles
with a relatively small data set; BLAST searches; many image processing applications such as
ray tracing; and parametric studies.
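As a tiny illustration of such a pleasingly parallel workload, the sketch below (Python standard library; the sample counts are arbitrary) estimates pi by Monte Carlo, splitting the work into independent tasks that need essentially no communication:

    import random
    from multiprocessing import Pool

    def count_hits(samples):
        # Independent task: many compute cycles, tiny data, no communication.
        hits = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        tasks = [250_000] * 8      # eight independent chunks of work
        with Pool() as pool:       # local cores here; cloud instances in practice
            total = sum(pool.map(count_hits, tasks))
        print("pi ~", 4 * total / sum(tasks))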

CLOUD COMPUTING BENEFITS AND CHALLENGES FOR SCIENTIFIC APPLICATIONS

Cloud Computing benefits

Cloud Computing offers scientific applications a range of benefits, including cost advantages for
some types of scientific applications, the ability to rapidly provision new clusters and instantly
access them, elasticity for instantly adding and removing resources, configurability with root
access, and the sharing of data, results, methods, and resources between collaborating partners.

Cost advantages

When evaluating options for Cloud-based scientific applications, costs are often a major
consideration. With Cloud Computing, users can eliminate the cost and complexity of procuring,
configuring, operating, managing, and maintaining their own cluster infrastructure, with low,
pay-per-use pricing for actual resource usage. Gupta et al. indicated that small and medium scale
scientific applications with modest communication and data requirements can be cost effective
on Cloud resources. The Magellan report showed that large scale, tightly coupled scientific
applications can be more cost effective on dedicated, optimized clusters than on virtualized
Clouds using commodity networks. Other works have studied the cost or benefit of using Cloud
technologies versus the cost of owning a datacenter infrastructure, including a detailed
comparison between physical and virtual HPC clusters from the point of view of TCO,
considering energetic, management, and infrastructural issues.

With the advantage of spot instances, users can optimize and keep Cloud Computing costs as low
as possible. Cloud providers like Amazon have started to establish spot markets on which they
sell excess capacity, including for scientific computing. Spot Instances enable users to bid for
unused Amazon capacity. Instances are charged the Spot Price, which is set by Amazon and
fluctuates periodically depending on the supply of and demand for Spot Instance capacity. To
use Spot Instances, users place a Spot Instance request, specifying the instance type, the desired
region, the number of Spot Instances they want to run, and the maximum price they are willing
to pay per instance hour. However, there is no guarantee of continuous operation, because a
virtual machine is stopped if the market price exceeds the maximum bid.
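A rough sketch of placing such a Spot Instance request with boto3, the AWS SDK for Python (the region, AMI ID, instance type, and bid shown are placeholders, not values from the text):

    import boto3  # pip install boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request Spot Instances: instance type, count, and the maximum price
    # per instance hour the user is willing to pay.
    response = ec2.request_spot_instances(
        SpotPrice="0.05",                        # maximum bid, USD per hour
        InstanceCount=4,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            "InstanceType": "c5.large",
        },
    )
    print(response["SpotInstanceRequests"][0]["State"])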
Besides Amazon spot instances, there are specific market services like Spot Cloud. Spot Cloud is
an easy to use, structured Cloud capacity marketplace where service providers can sell their
excess computing capacity to a wide array of buyers and resellers. Spot Cloud has implemented
an environment to buy and sell computing capacity globally, based on price, location, and
quality, on a fast and secure platform. The Spot Cloud platform provides an easy way for Cloud
providers, datacenters, etc. to maximize revenue from their unused capacity. For users, it
provides an easy way to discover and access targeted premium or commodity compute capacity,
and to choose the most efficient Cloud Computing resources suited to their application and
budget.
Instant access

Users gain the ability to rapidly provision new clusters, access compute resources, and configure
them in minutes, instead of spending hours or days waiting in queues, or months waiting for the
initial procurement of a new cluster. Thereby, the time to get the job done improves even if the
performance of a Cloud Computing cluster is lower than that of a traditional dedicated cluster.
These resources can be released when they are no longer needed, and they are offered within the
context of a Service Level Agreement (SLA), which ensures the Quality of Service (QoS). To
achieve maximum efficiency and scalability, users can manage resources through simple APIs or
management tools and automate workflows.

Elasticity

In a traditional cluster infrastructure it is difficult to adequately size a system, so resources will
be idle or inadequate for application requirements. With Cloud Computing elasticity, users can
instantly add and remove resources to meet their application requirements. A similar option is
Cloud bursting, where private Cloud users can burst into a public Cloud to get more resources,
or to accelerate results during peak load on a local dedicated cluster. Bright Computing has
introduced a Cloud bursting component to its Bright Cluster Manager platform to easily create
new clusters in the Cloud, or add Cloud-based resources on the fly to existing private
infrastructure running HPC computations.

Configurability

On Clouds, in contrast to traditional dedicated clusters, users have root access (IaaS) and can
customize their cluster instances with specific libraries, compilers, and applications, running the
operating system of their choice, even with different parallel file systems and disk configurations
according to the needs of the scientific application, or they can use a pre-built environment.

Sharing and collaboration

In the Cloud, users can create a common space to share data, results, and methods, and even
extend HPC resources to community partners without their own HPC infrastructure. Amazon
provides a centralized repository of public data sets that can be seamlessly integrated into
Cloud-based applications. AWS hosts the public data sets at no charge for the community; users
pay only for the compute and storage they use for their own applications.

Business and Consumer applications


Cloud computing has become one of the newer buzzwords in business circles for small and
medium-sized businesses. While not all small businesses have started using cloud computing,
many find the cloud offers them additional services or storage space for their projects or files.
Most companies access the cloud to enhance their ability to share files internally with employees
or externally with customers; however, there are many good reasons to add cloud computing to
your company's resource list if you haven't already done so.

1. Infrastructure as a Service

With cloud computing, you can offer your clients the use of your infrastructure to host their
cloud services. Another option is to sell third-party infrastructure for creating websites to
promote clients’ products and services.

2. Platform as a Service

Alternatively, you can offer your platform to clients through cloud computing. Most online
platforms run on cloud computing today, because they are available to anyone with a connection.
Companies are rapidly moving to the cloud to eliminate the need for computer hardware in their
own offices, decreasing their IT costs.

3. File Storage
File storage is a common use for cloud computing. You can store pretty much any type of file in
the cloud, and if you need to limit access to your files, private cloud services can be made highly
secure. Cloud storage is essentially limited only by the maximum storage available from your
provider.

4. Data Backup

Data backup is another common use of cloud computing. While you can back up to your
computer or a drive, either of these can be physically damaged in a storm, flood, or fire. The
cloud offers you a place to back up data away from your location, keeping it safe in a secure
environment. If needed, you can also share these backup files with other members of your team.

5. Disaster Recovery

In the event of a disaster, you can recover your files, programs and data from the cloud as long as
you have a computer and an internet connection. Cloud computing is a good way to safeguard
your important business information for recovery later on.

6. Increasing Collaboration

Collaboration within your company and with other companies has become a global concern.
Cloud computing makes collaboration easy no matter where you and your collaborators are
physically located because you can all access the project files on a shared cloud space. This type
of collaboration can be made private until you are ready to bring your finished work to public
attention. Even if you are working from home with other people from your company, the cloud
offers more opportunities to work 24/7 with your partners without any restrictions based on
available resources.

7. Testing New Projects

Tech companies often use their company’s private cloud to test new programs or processes
before they see the light of day. An engineer can easily set up a test program on the cloud, add
data sets and then run the data through the program to find any leftover problems, before
pronouncing it ready to send to a client. Once the test is done, the cloud space is relinquished
back to the pool.
