Cloud Computing Final
Distributed Systems:
A distributed system is a composition of multiple independent systems that appear to users as a single entity. The purpose of distributed systems is to share resources and to use them effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independent failures. The main problem with these systems was that all the machines had to be present at the same geographical location. To address this, distributed computing evolved into three further types of computing: mainframe computing, cluster computing, and grid computing.
Mainframe computing:
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They handle large volumes of data and massive input-output operations, and even today they are used for bulk-processing tasks such as online transactions. These systems have almost no downtime and high fault tolerance. After distributed computing, mainframes increased the processing capability of systems, but they were very expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.
Cluster computing:
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in a cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems yet equally capable of high computation, and new nodes could easily be added to the cluster when required. The problem of cost was thus solved to some extent, but the problem of geographical restriction persisted. To solve this, the concept of grid computing was introduced.
Grid computing:
In the 1990s, the concept of grid computing was introduced: systems placed at entirely different geographical locations were all connected via the Internet. These systems belonged to different organizations, so the grid consisted of heterogeneous nodes. Although it solved some problems, new ones emerged as the distance between the nodes increased, chiefly the low availability of high-bandwidth connectivity along with other network-related issues. Cloud computing is therefore often referred to as the "successor of grid computing".
Virtualization:
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware that allows the user to run multiple instances simultaneously on that hardware. It is a key technology used in cloud computing and the base on which major cloud computing services such as Amazon EC2 and VMware vCloud are built. Hardware virtualization is still one of the most common types of virtualization.
Web 2.0:
Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages, and it also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter. Needless to say, social media is possible only because of this technology. It gained major popularity in 2004.
Service orientation:
It acts as a reference model for cloud computing. It supports low-cost, flexible, and
evolvable applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which also includes the Service Level Agreement (SLA), and Software as a Service (SaaS).
Utility computing:
Utility computing is a model that defines techniques for provisioning services such as compute, storage, and infrastructure on a pay-per-use basis.
1. Resource pooling
Resource pooling is one of the essential characteristics of Cloud Computing. Resource pooling means that a cloud service provider can share resources among several clients, providing each of them with a different set of services as per their requirements. It is a multi-client strategy that can be applied to data storage, processing, and bandwidth services. The process of allocating resources in real time does not conflict with the client's experience.
2. On-demand service
It is one of the significant and essential features of Cloud Computing. It enables the client to constantly monitor server uptime, capabilities, and allotted network storage. It is a fundamental characteristic of Cloud Computing, and the client can likewise control the computing capabilities as per their needs.
3. Easy maintenance
This is one of the best cloud characteristics. The servers are effortlessly maintained, and downtime remains low or sometimes even zero. Cloud-powered resources undergo frequent updates to optimize their capabilities and potential; the updates are more compatible with devices and perform faster than previous versions.
4. Scalability and rapid elasticity
A key characteristic and benefit of cloud computing is its rapid scalability. This cloud
characteristic enables cost-effective running of workloads that require a vast number of servers
but only for a short period. Many clients have such workloads, which can be run very cost-
effectively because of the rapid scalability of Cloud Computing.
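As a rough illustration of the scaling decision a cloud platform automates, the sketch below (in Python, with made-up thresholds and a hypothetical helper name, not any provider's actual API) adds a server when average CPU load is high and removes one when it is low:

# Illustrative rapid-elasticity policy; thresholds and figures are made up.
def desired_server_count(current_servers: int, avg_cpu_percent: float,
                         scale_out_at: float = 70.0, scale_in_at: float = 25.0,
                         min_servers: int = 1, max_servers: int = 100) -> int:
    """Return how many servers should run for the observed load."""
    if avg_cpu_percent > scale_out_at:
        current_servers += 1      # add capacity for a short-lived spike
    elif avg_cpu_percent < scale_in_at:
        current_servers -= 1      # release capacity to stop paying for it
    return max(min_servers, min(max_servers, current_servers))

print(desired_server_count(4, 85.0))  # scale out -> 5
print(desired_server_count(5, 10.0))  # scale in  -> 4

In a real public cloud the same policy would be expressed through the provider's auto-scaling service rather than hand-written code.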
5. Economical
This cloud characteristic helps in reducing the IT expenditure of organizations. In Cloud Computing, the client pays only for the space or resources actually used. There are no hidden or additional charges to be paid. The service is economical, and more often than not some space is allotted for free.
6. Measured and reporting service
Reporting services are one of the many cloud characteristics that make it the best choice for
organizations. Measuring & reporting service is helpful for both cloud providers and their
clients. It enables both the provider and the client to monitor and report what services have
been used and for what purpose. This helps in monitoring billing and ensuring the optimum
usage of resources.
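As a small sketch of how measured service feeds billing, the Python snippet below aggregates metered usage and multiplies it by per-unit rates; the resources, rates, and quantities are purely illustrative:

# Toy metering/billing example; rates and usage figures are illustrative only.
RATES = {"vm_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}
usage = {"vm_hours": 720, "storage_gb_month": 500, "egress_gb": 40}

def monthly_bill(usage: dict, rates: dict) -> float:
    """Sum each metered quantity multiplied by its unit rate."""
    return sum(quantity * rates[resource] for resource, quantity in usage.items())

print(f"Total: ${monthly_bill(usage, RATES):.2f}")  # Total: $49.60

Real providers expose the same idea through their billing and usage-report services.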
7. Security
Data security is one of the best characteristics of Cloud Computing. Cloud services create a
copy of the data that is stored to prevent any form of data loss. If one server loses the data by
any chance, the copy is restored from another server. This feature comes in handy when several users work on a particular file in real time and the file suddenly gets corrupted.
8. Automation
Automation is an essential characteristic of cloud computing. The ability of cloud computing
to automatically install, configure, and maintain a cloud service is known as automation in
cloud computing. In simple terms, it is the process of making the most of technology and
reducing manual effort. However, achieving automation in the cloud ecosystem is not easy: it requires the installation and deployment of virtual machines, servers, and large amounts of storage, and once deployed these resources require constant maintenance as well.
9. Resilience
Resilience in cloud computing means the ability of the service to quickly recover from any
disruption. A cloud’s resilience is measured by how fast its servers, databases, and network
system restarts and recovers from any kind of harm or damage. Availability is another major
characteristic of cloud computing. Since cloud services can be accessed remotely, there is no
geographic restriction or limitation when it comes to utilizing cloud resources.
10. Large network access
A big part of the cloud's appeal is its ubiquity. The client can access cloud data, or transfer data to the cloud, from any place with just a device and an internet connection. These capabilities are available everywhere and are reached over the internet. Cloud providers maintain this broad network access by monitoring and guaranteeing measurements that reflect how clients access cloud resources and data: latency, access time, data throughput, etc.
Types of cloud
Public Cloud: Cloud resources that are owned and operated by a third-party cloud service provider are termed a public cloud. It delivers computing resources such as servers, software, and storage over the internet.
Private Cloud: Cloud computing resources that are used exclusively inside a single business or organization are termed a private cloud. A private cloud may be physically located in the company's on-site datacentre or hosted by a third-party service provider.
Hybrid Cloud: This is the combination of public and private clouds, bound together by technology that allows data and applications to be shared between them. A hybrid cloud gives a business flexibility and more deployment options.
Cloud services
1. Infrastructure as a Service (IaaS): In IaaS, we rent IT infrastructure such as servers and virtual machines (VMs), storage, networks, and operating systems from a cloud service vendor. We can create a VM running Windows or Linux and install anything we want on it. Using IaaS, we do not need to care about the hardware or the virtualization software, but we do have to manage everything else. IaaS gives maximum flexibility, but we still need to put more effort into maintenance (a short provisioning sketch follows this list).
2. Platform as a Service (PaaS): This service provides an on-demand environment for
developing, testing, delivering, and managing software applications. The developer is
responsible for the application, and the PaaS vendor provides the ability to deploy and run
it. Using PaaS, flexibility is reduced, but management of the environment is taken care of by the cloud vendors.
3. Software as a Service (SaaS): It provides centrally hosted and managed software services to end users. It delivers software over the internet, on demand, and typically on a subscription basis, e.g., Microsoft OneDrive, Dropbox, WordPress, Office 365, and Amazon Kindle. SaaS is used to minimize operational cost to the maximum extent.
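To make the IaaS model above concrete, the following sketch rents a virtual machine through the AWS boto3 SDK; the AMI ID, key pair name, and region are placeholders, not real values, and other providers expose equivalent calls:

# Hedged IaaS sketch: launching a VM with the AWS boto3 SDK.
# The AMI ID, key pair name, and region below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",           # size of the rented VM
    KeyName="my-key-pair",             # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)

Everything above the hypervisor (operating system patches, software, data) remains the customer's responsibility, which is the maintenance effort mentioned in the IaaS description.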
Benefits of cloud computing
Cloud Computing has numerous advantages. Some of them are listed below -
One can access applications as utilities over the Internet.
One can manipulate and configure applications online at any time.
It does not require installing any software to access or manipulate cloud applications.
Cloud Computing offers online development and deployment tools and a programming runtime environment through the PaaS model.
Cloud resources are available over the network in a manner that provides platform-independent access to any type of client.
Cloud Computing offers on-demand self-service. The resources can be used without interaction with the cloud service provider.
Cloud Computing is highly cost-effective because it operates at high efficiency with optimum utilization. It just requires an Internet connection.
Cloud Computing offers load balancing that makes it more reliable.
Challenges of cloud computing
Although cloud Computing is a promising innovation with various benefits in the world of
computing, it comes with risks.
Security and Privacy
It is the biggest concern about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to cloud service providers.
Although cloud computing vendors ensure highly secure, password-protected accounts, any sign of a security breach may result in loss of customers and business.
Lock In
It is very difficult for the customers to switch from one Cloud Service Provider (CSP) to
another. It results in dependency on a particular CSP for service.
Isolation Failure
This risk involves the failure of the isolation mechanisms that separate storage, memory, and routing between different tenants.
Management Interface Compromise
In the case of a public cloud provider, the customer management interfaces are accessible through the Internet, which poses an increased risk.
Insecure or Incomplete Data Deletion
It is possible that data requested for deletion may not actually get deleted. This happens for either of the following reasons:
Extra copies of the data are stored but are not available at the time of deletion.
The disk to be destroyed also stores data of multiple tenants, so it cannot simply be destroyed.
Cloud service providers provide various applications in the field of art, business, data storage
and backup services, education, entertainment, management, social networking, etc. The most
widely used cloud computing applications are given below -
1. Art Applications
Cloud computing offers various art applications for quickly and easily designing attractive cards, booklets, and images. Some of the most commonly used cloud art applications are given below:
Moo
Moo is one of the best cloud art applications. It is used for designing and printing
business cards, postcards, and mini cards.
Vistaprint allows us to easily design various printed marketing products such as business cards, postcards, booklets, and wedding invitation cards.
Adobe Creative Cloud is made for designers, artists, filmmakers, and other creative professionals. It is a suite of apps that includes the Photoshop image-editing program, Illustrator, InDesign, Typekit, Dreamweaver, XD, and Audition.
2. Business Applications
Business applications are based on cloud service providers. Today, every organization requires cloud business applications to grow its business, and the cloud also ensures that business applications are available to users 24*7.
MailChimp
Salesforce
Salesforce platform provides tools for sales, service, marketing, e-commerce, and more.
It also provides a cloud development platform.
Chatter
Chatter helps us to share important information about the organization in real time.
Bitrix24
Paypal
Paypal offers the simplest and easiest online payment mode using a secure internet account. Paypal accepts payments through debit cards, credit cards, and also from Paypal account holders.
Slack
Slack stands for Searchable Log of all Conversation and Knowledge. It provides
a user-friendly interface that helps us to create public and private channels for
communication.
Quickbooks
3. Data Storage and Backup Applications
Cloud computing allows us to store information (data, files, images, audio, and video) in the cloud and access this information over an internet connection. As the cloud provider is responsible for security, it also offers various backup and recovery applications for retrieving lost data.
A list of data storage and backup applications in the cloud are given below -
Box.com
Mozy
Mozy provides powerful online backup solutions for our personal and business data. It automatically schedules backups each day at a specific time.
Joukuu
Joukuu provides the simplest way to share and track cloud-based backup files. Many users use Joukuu to search files and folders and to collaborate on documents.
Google G Suite
Google G Suite is one of the best cloud storage and backup applications. It includes Google Calendar, Docs, Forms, Google+, and Hangouts, as well as cloud storage and tools for managing cloud apps. The most popular app in Google G Suite is Gmail, which offers free email services to users.
4. Education Applications
Cloud computing has become very popular in the education sector. It offers various online distance-learning platforms and student information portals to students. The advantages of using the cloud in the field of education are strong virtual classroom environments, ease of accessibility, secure data storage, scalability, greater reach for students, and minimal hardware requirements for the applications.
Google Apps for Education is the most widely used platform for free web-based email,
calendar, documents, and collaborative study.
Chromebook for Education is one of Google's most important projects. It is designed to enhance innovation in education.
It allows educators to quickly implement the latest technology solutions into the
classroom and make it available to their students.
AWS in Education
5. Entertainment Applications
Entertainment industries use a multi-cloud strategy to interact with the target audience.
Cloud computing offers various entertainment applications such as online games and video
conferencing.
Online games
Today, cloud gaming has become one of the most important entertainment media. It offers various online games that run remotely from the cloud. Well-known cloud gaming services include Shadow, GeForce Now, Vortex, Project xCloud, and PlayStation Now.
Video conferencing apps provide a simple and instant connected experience. They allow us to communicate with our business partners, friends, and relatives using cloud-based video conferencing. The benefits of video conferencing are that it reduces cost, increases efficiency, and removes interoperability issues.
6. Management Applications
Cloud computing offers various cloud management tools which help admins to manage all
types of cloud activities, such as resource deployment, data integration, and disaster recovery.
These management tools also provide administrative control over the platforms, applications,
and infrastructure.
Toggl
Toggl helps users track the time allocated to a particular project.
Evernote
Evernote allows you to sync and save your recorded notes, typed notes, and other notes
in one convenient place. It is available in both free and paid versions and runs on platforms such as Windows, macOS, Android, iOS, browsers, and Unix.
Outright
Outright is used by management users for accounting purposes. It helps track income, expenses, profits, and losses in a real-time environment.
GoToMeeting
GoToMeeting provides video conferencing and online meeting apps, which allow you to start a meeting with your business partners anytime, anywhere, using a mobile phone or tablet. Using the GoToMeeting app, you can perform management-related tasks such as joining meetings in seconds, viewing presentations on a shared screen, and getting alerts for upcoming meetings.
7. Social Applications
Social cloud applications allow a large number of users to connect with each other using social networking applications such as Facebook, Twitter, LinkedIn, etc.
Facebook is a social networking website that allows active users to share files, photos, videos, status updates, and more with their friends, relatives, and business partners using the cloud storage system. On Facebook, we always get notifications when our friends like or comment on our posts.
Twitter
Yammer
Yammer is the best team collaboration tool that allows a team of employees to chat,
share images, documents, and videos.
Cloud Storage
Cloud storage is a cloud computing model that stores data on the Internet through a cloud
computing provider who manages and operates data storage as a service. It’s delivered on
demand with just-in-time capacity and costs, and eliminates buying and managing your own
data storage infrastructure. This gives you agility, global scale and durability, with “anytime,
anywhere” data access.
Pros
Off-site management: Your cloud provider assumes responsibility for maintaining and
protecting the stored data. This frees your staff from tasks associated with storage, such
as procurement, installation, administration, and maintenance. As such, your staff can
focus on other priorities.
Quick implementation: Using a cloud service accelerates the process of setting up and
adding to your storage capabilities. With cloud storage, you can provision the service and
start using it within hours or days, depending on how much capacity is involved.
Cost-effective: As mentioned, you pay for the capacity you use. This allows your
organization to treat cloud storage costs as an ongoing operating expense instead of a
capital expense with the associated upfront investments and tax implications.
Scalability: Growth constraints are one of the most severe limitations of on-premise
storage. With cloud storage, you can scale up as much as you need. Capacity is virtually
unlimited.
Business continuity: Storing data offsite supports business continuity in the event that a
natural disaster or terrorist attack cuts access to your premises.
Cons
Security: Security concerns are common with cloud-based services. Cloud storage
providers try to secure their infrastructure with up-to-date technologies and practices, but
occasional breaches have occurred, creating discomfort with users.
Administrative control: Being able to view your data, access it, and move it at will is
another common concern with cloud resources. Offloading maintenance and management
to a third party offers advantages but also can limit your control over your data.
Latency: Delays in data transmission to and from the cloud can occur as a result of traffic
congestion, especially when you use shared public internet connections. However,
companies can minimize latency by increasing connection bandwidth.
Regulatory compliance: Certain industries, such as healthcare and finance, have to
comply with strict data privacy and archival regulations, which may prevent companies
from using cloud storage for certain types of files, such as medical and investment
records. If you can, choose a cloud storage provider that supports compliance with any
industry regulations impacting your business.
There are three types of cloud data storage: object storage, file storage, and block storage.
1. Object Storage - Applications developed in the cloud often take advantage of object storage's vast scalability and metadata characteristics. Object storage solutions like Amazon Simple Storage Service (S3) are ideal for building modern applications from scratch that require scale and flexibility, and can also be used to import existing data stores for analytics, backup, or archive (a short usage sketch follows this list).
2. File Storage - Some applications need to access shared files and require a file system. This
type of storage is often supported with a Network Attached Storage (NAS) server. File
storage solutions like Amazon Elastic File System (EFS) are ideal for use cases like large
content repositories, development environments, media stores, or user home directories.
3. Block Storage - Other enterprise applications like databases or ERP systems often require
dedicated, low latency storage for each host. This is analogous to direct-attached storage
(DAS) or a Storage Area Network (SAN). Block-based cloud storage solutions like Amazon
Elastic Block Store (EBS) are provisioned with each virtual server and offer the ultra-low latency required for high-performance workloads.
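To make the object-storage case concrete, the sketch below stores and retrieves an object in Amazon S3 via the boto3 SDK; the bucket name and key are placeholders, and the bucket would have to exist in your account:

# Hedged object-storage sketch using Amazon S3 via boto3.
# "my-example-bucket" is a placeholder; the bucket must already exist.
import boto3

s3 = boto3.client("s3")

# Store an object under a key (plus optional metadata), not a disk path.
s3.put_object(
    Bucket="my-example-bucket",
    Key="backups/2024/report.csv",
    Body=b"id,value\n1,42\n",
    Metadata={"source": "nightly-backup"},
)

# Retrieve it later by bucket + key from anywhere with network access.
obj = s3.get_object(Bucket="my-example-bucket", Key="backups/2024/report.csv")
print(obj["Body"].read().decode())

File and block storage are consumed differently: a file system such as EFS is mounted over the network, while a block volume such as EBS appears to the virtual server as a raw disk device.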
2. Data security: The cloud offers many advanced security features that help ensure data is securely stored and handled. Cloud storage providers implement baseline protections for their platforms and the data they process, such as authentication, access control, and encryption.
4. Mobility: Cloud computing allows mobile access to corporate data via smartphones and
devices, which is a great way to ensure that no one is ever left out of the loop. Staff with
busy schedules, or who live a long way away from the corporate office, can use this
feature to keep instantly up-to-date with clients and co-workers.
5. Disaster recovery: Data loss is a major concern for all organizations, along with data
security. Storing your data in the cloud guarantees that data is always available, even if
your equipment like laptops or PCs, is damaged. Cloud-based services provide quick data
recovery for all kinds of emergency scenarios.
6. Control: The cloud gives you complete visibility and control over your data. You can easily decide which users have what level of access to which data.
7. Market reach: Developing in the cloud enables users to get their applications to market
quickly.
7. Resilience: The infrastructure provides resilience, meaning the services are resilient and the infrastructure is protected on all sides, so IT operations will not be easily affected.
Cloud Adoption
3. Adoption: During the adoption phase, IT leaders should develop risk mitigation strategies. They should also have an expert understanding of their servers, software, and data stores to support future iteration and scaling of their strategy.
4. Optimization: By meeting regularly with their executive team, IT departments can discuss
lessons learned in their cloud computing strategy and create new and improved solutions
for further processes and tasks.
Security aspects of cloud adoption
Businesses that are transitioning to the cloud naturally have concerns about the safety of
sensitive company and customer data. To ensure information is not lost or compromised
through data breaches or account hijacking, the following cloud security considerations are
necessary:
1. Use secure interfaces and APIs: Enterprises should take care to ensure software user
interfaces (UIs) and application programming interfaces (APIs) are updated – and safe.
Consistent management and monitoring of reputable tools will help protect against
malicious and unforeseen breaches and errors.
2. Prevent system vulnerabilities: Program bugs allow hackers to take control of cloud
systems or steal data. Keeping track of system updates and quickly identifying
vulnerabilities can help eliminate this risk.
3. Create training programs and disaster plans: Natural disasters, accidental deletion, and
insufficient due diligence in adopting cloud technologies can lead to data loss and malicious
attacks. Companies of all sizes should create a cloud computing roadmap and employee
training program to mitigate these issues.
Unit-2
Cloud Computing Architecture
Cloud Reference Model
To achieve the potential of cloud computing, there is a need to have a standard cloud reference
model for the software architects, software engineers, security experts and businesses, since it
provides a fundamental reference point for the development of cloud computing. The Cloud
Reference Model brings order to this cloud landscape.
There are three types of cloud reference models:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
1. Infrastructure as a Service (IaaS)
Characteristics
Virtual machines with pre-installed software.
Virtual machines with pre-installed operating systems such as Windows, Linux, and
Solaris.
On-demand availability of resources.
Allows storing copies of particular data at different locations.
The computing resources can be easily scaled up and down.
Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the Internet in a cost-
effective manner. Some of the key benefits of IaaS are listed below:
3. Portability, interoperability with legacy applications
It is possible to maintain interoperability with legacy applications and to move workloads between IaaS clouds. For example, network applications such as a web server or e-mail server that normally run on customer-owned server hardware can also run from VMs in an IaaS cloud.
Issues
Because IaaS lets the customer run legacy software in the provider's infrastructure, it exposes the customer to all of the security vulnerabilities of such legacy software.
The VM can become out of date with respect to security updates, because IaaS allows the customer to operate virtual machines in running, suspended, and off states. The provider can update such VMs automatically, but this mechanism is hard and complex.
The customer uses virtual machines that in turn use the common disk resources provided by the cloud provider. When the customer releases a resource, the cloud provider must ensure that the next customer to rent the resource does not observe data residue from the previous customer.
Major IaaS Vendors and Products
There are many examples of IaaS vendors and products. IaaS products offered by the three
largest public cloud service providers -- Amazon Web Services (AWS), Google, and
Microsoft -- include the following:
AWS offers storage services such as Simple Storage Service (S3) and Glacier, as well as
compute services, including its Elastic Compute Cloud (EC2).
Google Cloud Platform (GCP) offers storage and compute services through Google
Compute Engine.
Microsoft Azure Virtual Machines offers cloud virtualization for many different cloud
computing purposes.
2. Platform as a Service (PaaS)
Platform-as-a-Service offers the runtime environment for applications. It also offers
development and deployment tools required to develop applications. PaaS has a feature
of point-and-click tools that enables non-developers to create web applications.
Google App Engine and Force.com are examples of PaaS vendors. A developer may log on to these websites and use the built-in APIs to create web-based applications.
The disadvantage of using PaaS is that the developer may become locked in to a particular vendor. For example, an application written in Python against Google's APIs and using Google App Engine is likely to work only in that environment.
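As a minimal sketch of what a developer supplies to such a platform, assuming a Python runtime and the Flask micro-framework (as used on Google App Engine's standard Python environment), the application below is just a request handler; the platform provisions, runs, and scales the servers behind it:

# Minimal PaaS-style application sketch (Flask). The developer writes only
# this code; the platform supplies the runtime, servers, and scaling.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Local test run; in production the platform starts the app itself.
    app.run(host="127.0.0.1", port=8080)

The lock-in risk mentioned above appears when such code also calls platform-specific services (for example a proprietary datastore API) that have no drop-in equivalent on another PaaS.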
Characteristics
Here are the characteristics of PaaS service model:
1. PaaS offers a browser-based development environment. It allows the developer to create a database and edit the application code either via an Application Programming Interface or via point-and-click tools.
2. PaaS provides built-in security, scalability, and web service interfaces.
3. PaaS provides built-in tools for defining workflow, approval processes, and business
rules.
4. It is easy to integrate PaaS with other applications on the same platform.
5. PaaS also provides web service interfaces that allow us to connect to applications outside the platform.
Types of PaaS include application delivery-only environments and stand-alone development environments. The stand-alone PaaS works as an independent entity for a specific function; it does not include licensing or technical dependencies on specific SaaS applications.
Open PaaS offers open-source software that helps a PaaS provider run applications.
Benefits
Following are the benefits of PaaS model:
The customer need not bother about administration because it is the responsibility of the cloud provider.
The customer need not purchase expensive hardware, servers, power, or data storage.
Scalable solutions
It is very easy to scale the resources up or down automatically, based on their demand.
More current system software
It is the responsibility of the cloud provider to maintain software versions and patch
installations.
Issues
Although standard languages are used, the implementations of platform services may vary. For example, the file, queue, or hash table interfaces of one platform may differ from those of another, making it difficult to transfer workloads from one platform to another.
PaaS applications are event-oriented, which poses resource constraints on them, i.e., they have to answer a request within a given interval of time.
Since PaaS applications are dependent on network, they must explicitly use cryptography
and manage security exposures.
3. Software as a Service (SaaS)
The Software-as-a-Service (SaaS) model provides software applications as a service to end users. It refers to software that is deployed on a hosted service and is accessible via the Internet. Several characteristics of SaaS applications are listed below:
They can be scaled up or down on demand.
They are automatically upgraded and updated.
SaaS offers a shared data model; therefore, multiple users can share a single instance of the infrastructure. It is not required to hard-code functionality for individual users.
All users run the same version of the software.
Benefits
Using SaaS has proved to be beneficial in terms of scalability, efficiency and performance.
Some of the benefits are listed below:
SaaS application deployment requires little or no client-side software installation, which results in the following benefits:
The customer can have a single license for multiple computers running at different locations, which reduces the licensing cost. Also, there is no requirement for license servers because the software runs in the provider's infrastructure.
The cloud provider stores data centrally. However, the cloud providers may store data
in a decentralized manner for the sake of redundancy and reliability.
Multitenant solutions
Issues
If the customer visits a malicious website and the browser becomes infected, subsequent access to the SaaS application might compromise the customer's data. To avoid such risks, the customer can use multiple browsers and dedicate a specific browser to accessing SaaS applications, or can use a virtual desktop while accessing the SaaS applications.
Network dependence
The SaaS application can be delivered only when the network is continuously available. The network should also be reliable, but network reliability cannot be guaranteed either by the cloud provider or by the customer.
Transferring workloads from one SaaS cloud to another is not easy because workflows, business logic, user interfaces, and support scripts can be provider-specific.
Cloud deployment models
Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: public, private, hybrid, and community.
1. Public cloud
Public Cloud provides a shared platform that is accessible to the general public through
an Internet connection.
The public cloud operates on a pay-per-use model and is administered by a third party, i.e., the cloud service provider.
In the Public cloud, the same storage is being used by multiple users at the same time.
Public cloud is owned, managed, and operated by businesses, universities, government
organizations, or a combination of them.
Amazon Elastic Compute Cloud (EC2), Microsoft Azure, IBM's Blue Cloud, Sun Cloud,
and Google Cloud are examples of the public cloud.
Advantages of Public Cloud
1. Low Cost
Public cloud has a lower cost than private or hybrid cloud, as it shares the same resources among a large number of consumers.
2. Location Independent
Public cloud is location independent because its services are offered through the internet.
3. Save Time
In the public cloud, the cloud service provider is responsible for managing and maintaining the data centers in which data is stored, so cloud users save the time they would otherwise spend establishing connectivity, deploying new products, releasing product updates, and configuring and assembling servers. Organizations can easily buy public cloud services on the internet and have them deployed and configured remotely through the cloud service provider within a few hours.
5. Business Agility
Public cloud provides an ability to elastically re-size computer resources based on the
organization's requirements.
6. Scalability and reliability
Public cloud offers scalable (easy to add and remove) and reliable (24*7 available) services to users at an affordable cost.
Disadvantages of Public Cloud
1. Low Security
2. Performance
In the public cloud, performance depends upon the speed of internet connectivity.
3. Less customizable
2. Private cloud
Private cloud allows systems and services to be accessible within a single organization. The private cloud is operated only within that organization, although it may be managed internally by the organization itself or by a third party.
Advantages of Private cloud
1. More Control
Private clouds provide more control over resources and hardware than public clouds because they are accessed only by selected users.
Security & privacy are one of the big advantages of cloud computing. Private cloud
improved the security level as compared to the public cloud.
3. Improved performance
Private cloud offers better performance with improved speed and space capacity.
Disadvantages of Private Cloud
1. High Cost
The cost is higher than for a public cloud because setting up and maintaining hardware resources is costly.
2. Restricted Area of Operation
As we know, the private cloud is accessible only within the organization, so its area of operation is limited.
3. Limited scalability
Private clouds are scaled only within the capacity of internal hosted resources.
4. Skilled people
3. Hybrid Cloud
Hybrid Cloud is a mixture of public and private cloud. Non-critical activities are performed
using public cloud while the critical activities are performed using private cloud.
The main aim of combining these clouds (public and private) is to create a unified, automated, and well-managed computing environment.
A hybrid cloud is mainly used in finance, healthcare, and universities.
The best hybrid cloud provider companies are Amazon, Microsoft, Google,
Cisco, and NetApp.
Advantages of Hybrid Cloud
It provides flexible resources because of the public cloud and secure resources because of the private cloud.
2. Cost effective
Hybrid cloud costs less than the private cloud. It helps organizations to save costs for both
infrastructure and application support.
3. Security
Hybrid cloud is secure because critical activities are performed by the private cloud.
4. Risk Management
Hybrid cloud provides an excellent way for companies to manage the risk.
Disadvantages of Hybrid Cloud
1. Networking Issues
In a hybrid cloud, networking becomes complex because of the interaction between the private and the public cloud.
2. Infrastructure Compatibility
Infrastructure compatibility is a major issue in a hybrid cloud. With dual levels of infrastructure, a private cloud that the company controls and a public cloud that it does not, there is a possibility that they end up running as separate stacks.
3. Reliability
4. Community cloud
Advantages of Community Cloud
1. Cost Effective
Community cloud is cost effective because the whole cloud is shared among several organizations or a community.
2. Flexible and Scalable
The community cloud is flexible and scalable because it is compatible with every user. It allows users to modify documents as per their needs and requirements.
3. Security
Community cloud is more secure than the public cloud but less secure than the private cloud.
4. Sharing infrastructure
Community cloud allows us to share cloud resources, infrastructure, and other capabilities
among various organizations.
SOA allows users to combine a large number of facilities from existing services to form
applications.
SOA encompasses a set of design principles that structure system development and
provide means for integrating components into a coherent and decentralized system.
SOA based computing packages functionalities into a set of interoperable services, which
can be integrated into different software systems belonging to separate business domains.
Services might aggregate information and data retrieved from other services, or create workflows of services, to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
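A toy sketch of orchestration, with hypothetical service URLs: one coordinating function calls two independent services over HTTP and combines their replies into the answer a consumer asked for.

# Toy service-orchestration sketch; the service URLs are hypothetical.
import requests

CUSTOMER_SERVICE = "https://example.com/api/customers"
ORDER_SERVICE = "https://example.com/api/orders"

def customer_summary(customer_id: str) -> dict:
    """Orchestrate two services to satisfy a single consumer request."""
    customer = requests.get(f"{CUSTOMER_SERVICE}/{customer_id}").json()
    orders = requests.get(ORDER_SERVICE, params={"customer": customer_id}).json()
    return {"customer": customer, "open_orders": len(orders)}

In choreography, by contrast, the services would react to each other's events directly, with no single coordinator such as customer_summary() in control.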
Principles of SOA:
Advantages of SOA:
Service reusability: In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
Platform independent: SOA allows making a complex application by combining
services picked from different sources, independent of the platform.
Availability: SOA facilities are easily available to anyone on request.
Reliability: SOA applications are more reliable because it is easier to debug small services than huge code bases.
Scalability: Services can run on different servers within an environment, this increases
scalability.
Disadvantages of SOA:
our data. On one side we need to decide whether to trust the provider itself; on the other side,
specific regulations can simply prevail over the agreement the provider is willing to establish
with us concerning the privacy of the information managed on our behalf. Moreover, cloud
services delivered to the end user can be the result of a complex stack of services that are
obtained by third parties via the primary cloud service provider. In this case there is a chain of
responsibilities in terms of service delivery that can introduce more vulnerability for the secure
management of data, the enforcement of privacy rules, and the trust given to the service
provider. In particular, when a violation of privacy or illegal access to sensitive information is
detected, it could become difficult to identify who is liable for such violations. The challenges
in this area are, then, mostly concerned with devising secure and trustable systems from
different perspectives: technical, social, and legal.
Unit-3
Cloud Virtualization technology
Overview of virtualization techniques
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".
In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource when demanded.
Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the cloud, such as applications, but with the help of virtualization users actually share the underlying infrastructure.
The main use of virtualization technology is to provide applications in their standard versions to cloud users; when the next version of an application is released, the cloud provider has to deliver that latest version to its users, which would be impractical and expensive to do without virtualization.
Types of virtualization
1. Data virtualization
2. Hardware virtualization
3. Software virtualization
4. Server virtualization
5. Storage virtualization
6. Operating system virtualization
1. Data virtualization
Data virtualization is the process of retrieving data from various sources without knowing its type or the physical location where it is stored. It collects heterogeneous data from different sources and allows data users across the organization to access this data according to their work requirements. This heterogeneous data can be accessed through any application, such as web portals, web services, e-commerce, Software as a Service (SaaS), and mobile applications.
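A toy sketch of the idea in Python, with made-up dataset names and file paths: a single access layer hides whether each dataset physically lives in a SQLite database or a CSV file, so callers ask for data by logical name only.

# Toy data-virtualization layer; dataset names and paths are made up.
import csv
import sqlite3

def _from_sqlite(path: str, query: str) -> list:
    with sqlite3.connect(path) as conn:
        return conn.execute(query).fetchall()

def _from_csv(path: str) -> list:
    with open(path, newline="") as f:
        return list(csv.reader(f))

# Logical name -> physical source, invisible to the data consumer.
CATALOG = {
    "sales": lambda: _from_sqlite("warehouse.db", "SELECT * FROM sales"),
    "customers": lambda: _from_csv("exports/customers.csv"),
}

def get_dataset(name: str) -> list:
    """Fetch a dataset by logical name, whatever its physical location."""
    return CATALOG[name]()

A real data-virtualization product adds query federation, caching, and security on top of this basic indirection.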
Advantages of data virtualization
It allows users to access the data without worrying about where it resides on the memory.
It offers better customer satisfaction, retention, and revenue growth.
It provides various security mechanisms that allow users to safely store their personal and professional information.
It reduces costs by removing data replication.
It provides a user-friendly interface to develop customized views.
It provides various simple and fast deployment resources.
It increases business user efficiency by providing data in real-time.
It is used to perform tasks such as data integration, business integration, Service-Oriented
Architecture (SOA) data services, and enterprise search.
Analyze performance
Data virtualization is used to analyze the performance of the organization compared to
previous years.
Data Management
Data virtualization provides a secure centralized layer to search, discover, and govern the
unified data and its relationships.
Industries that use Data Virtualization
Finance
In the field of finance, DV is used to improve trade reconciliation, empower data democracy, address data complexity, and manage fixed-income risk.
Government
In the government sector, DV is used for protecting the environment.
Healthcare
Data virtualization plays a very important role in the field of healthcare. In healthcare,
DV helps to improve patient care, drive new product innovation, accelerate M&A synergies, and provide more efficient claims analysis.
Manufacturing
In the manufacturing industry, data virtualization is used to optimize global supply chains, optimize factories, and improve IT asset utilization.
2. Hardware virtualization
Previously, there was a one-to-one relationship between physical servers and operating systems, and only low-capacity CPU, memory, and networking resources were available. Under this model the costs of doing business kept increasing: the physical space, the amount of power, and the hardware required meant that costs were adding up.
The hypervisor manages and shares the physical resources of the hardware between the guest operating systems and the host operating system. The physical resources are abstracted into standard formats regardless of the hardware platform, and the abstracted hardware is presented as if it were actual hardware, so the virtualized operating system treats these resources as physical entities.
When the virtual machine software, virtual machine manager (VMM), or hypervisor software is installed directly on the hardware system, this is known as hardware virtualization. The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.
After virtualizing the hardware system we can install different operating systems on it and run different applications on those operating systems. Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling a physical server.
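For a concrete taste of what sits on top of a hypervisor, the sketch below uses the libvirt Python binding to ask a local QEMU/KVM hypervisor which guest machines it is hosting; it assumes libvirt-python is installed and a local hypervisor is running.

# Hedged sketch: querying a local hypervisor with the libvirt Python binding.
# Assumes libvirt-python is installed and a QEMU/KVM hypervisor is available.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # connect to the hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        # Each domain is a guest OS sharing the same physical hardware.
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()

Management layers such as Amazon EC2 or VMware vCloud expose this kind of control at data-center scale.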
Advantages of Hardware Virtualization
The main benefits of hardware virtualization are more efficient resource utilization, lower
overall costs as well as increased uptime and IT flexibility.
Increased IT Flexibility:
Hardware virtualization enables quick deployment of server resources in a managed and consistent way. As a result, IT can adapt quickly and provide the business with the resources it needs in good time.
3. Software Virtualization
Managing applications and their distribution is a typical task for IT departments. The installation mechanism differs from application to application, and some programs require certain helper applications or frameworks that may conflict with existing applications.
Software virtualization abstracts the software installation procedure and creates virtual software installations.
Virtualized software is an application that is "installed" into its own self-contained unit.
Advantages of Software Virtualization
Client Deployments Become Easier:
By copying a file to a workstation or linking a file on the network, we can easily install the virtual software.
Easy to manage:
Managing updates becomes a simpler task: you update in one place and deploy the updated virtual application to all clients.
Software Migration:
Without software virtualization, moving from one software platform to another takes a long time to deploy and impacts end-user systems. With a virtualized software environment, migration becomes easier.
4. Server virtualization
Server Virtualization is the process of dividing a physical server into several virtual servers,
called virtual private servers. Each virtual private server can run independently.
The concept of server virtualization is widely used in IT infrastructure to minimize costs by increasing the utilization of existing resources.
1. Hypervisor
The hypervisor is mainly used to perform tasks such as allocating physical hardware resources (CPU, RAM, etc.) to several smaller independent virtual machines, called "guests", on the host machine.
2. Full Virtualization
Full Virtualization uses a hypervisor to directly communicate with the CPU and physical
server. It provides the best isolation and security mechanism to the virtual machines.
The biggest disadvantage of using hypervisor in full virtualization is that a hypervisor has its
own processing needs, so it can slow down the application and server performance.
3. Para Virtualization
Para virtualization is quite similar to full virtualization. Its advantages are that it is easier to use, offers enhanced performance, and does not require emulation overhead. Xen and UML primarily use para virtualization.
The difference between full and para virtualization is that in para virtualization the hypervisor does not need as much processing power to manage the guest OS.
Advantages of Server Virtualization
Independent Restart
In server virtualization, each virtual server can be restarted independently without affecting the working of the other virtual servers.
Low Cost
Server Virtualization can divide a single server into multiple virtual private servers, so
it reduces the cost of hardware components.
Disaster Recovery<
Disaster Recovery is one of the best advantages of Server Virtualization. In Server
Virtualization, data can easily and quickly move from one server to another and these
data can be stored and retrieved from anywhere.
Faster deployment of resources
Server virtualization allows us to deploy our resources in a simpler and faster way.
Security
It allows users to store their sensitive data inside the data centers.
Disadvantages of Server Virtualization
The biggest disadvantage of server virtualization is that when the server goes offline, all the websites hosted by that server also go down.
It is difficult to accurately measure the performance of virtualized environments.
It consumes a huge amount of RAM.
It is difficult to set up and maintain.
Some core applications and databases do not support virtualization.
It requires extra hardware resources.
It allows organizations to make efficient use of resources.
It reduces redundancy without purchasing additional hardware components.
5. Storage virtualization
Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Operating systems and applications can access the disks directly and write to them by themselves. The controllers configure the local storage in RAID groups and present the storage to the operating system according to the configuration. However, the storage is abstracted, and the controller determines how to write the data or retrieve the requested data for the operating system.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need to
understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested blocks
at LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented to the operating system over the Ethernet network. NAS presents the storage as file operations (like NFS). SAN technologies present the storage as block-level storage (like Fibre Channel). SAN technologies receive the operating instructions as if the storage were a locally attached device.
1. Data is stored in the more convenient locations away from the specific host. In the case
of a host failure, the data is not compromised necessarily.
2. The storage devices can perform advanced functions like replication, deduplication, and disaster recovery functionality.
3. By doing abstraction of the storage level, IT operations become more flexible in how
storage is provided, partitioned, and protected.
How does OS Virtualization work?
Components needed for using OS Virtualization in the infrastructure are given below:
The first component is the OS Virtualization server. This server is the central point in the OS Virtualization infrastructure. It manages the streaming of the information on the virtual disks for the client and also determines which client will be connected to which virtual disk (this information is stored in a database). The server can host the storage for the virtual disks locally, or it can be connected to the virtual disks via a SAN (Storage Area Network). In high-availability environments there can be several OS Virtualization servers to provide redundancy and load balancing. The server also ensures that each client is unique within the infrastructure.
Secondly, there is a client which contacts the server to get connected to the virtual disk and asks for the components stored on the virtual disk that are needed to run the operating system.
The available supporting components are a database for storing the configuration and settings of the server, a streaming service for the virtual disk content, an (optional) TFTP service, and an (also optional) PXE boot service for connecting the client to the OS Virtualization servers.
As already mentioned, the virtual disk contains an image of a physical disk that reflects the configuration and settings of the systems which will use that virtual disk. When the virtual disk is created, it needs to be assigned to the client that will use it for starting. The connection between the client and the disk is made through the administrative tool and saved within the database.
First we start the machine and set up the connection with the OS Virtualization server. Most products offer several possible methods to connect with the server. One of the most popular methods is a PXE service, but a boot-strap program is also used a lot (because of the disadvantages of the PXE service). Either way, each method initializes the network interface card (NIC), receives a (DHCP-based) IP address, and establishes a connection to the server.
When the connection is established between the client and the server, the server looks into its database to check whether the client is known or unknown and which virtual disk is assigned to the client. When more than one virtual disk is assigned, a boot menu is displayed on the client side. If only one disk is assigned, that disk is connected to the client, as mentioned in step number 3.
After the desired virtual disk is selected by the client, that virtual disk is connected through the OS Virtualization server. At the back end, the OS Virtualization server makes sure that the client is unique (for example, computer name and identifier) within the infrastructure.
As soon as the disk is connected, the server starts streaming the content of the virtual disk. The software knows which parts are necessary for starting the operating system smoothly, so these parts are streamed first. The information streamed to the system needs to be stored somewhere (i.e., cached); most products offer several ways to cache it, for example on the client hard disk or on a disk of the OS Virtualization server.
5) Additional Streaming:
After the first part has been streamed, the operating system starts to run as expected. Additional virtual disk data is streamed when it is required for running or starting a function called by the user (for example, starting an application available within the virtual disk).
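A simplified sketch of the server-side decision described in the steps above, with the database reduced to a plain dictionary and all names made up: look the client up, find its assigned virtual disks, and either connect the single disk or present a boot menu.

# Simplified sketch of the OS Virtualization server's connection logic.
# The "database" is a plain dict and all identifiers are illustrative.
ASSIGNMENTS = {
    "client-001": ["win10-standard.vhd"],
    "client-002": ["win10-standard.vhd", "engineering-linux.vhd"],
}

def connect_client(client_id: str) -> str:
    """Return the virtual disk a booting client should stream from."""
    disks = ASSIGNMENTS.get(client_id)
    if not disks:
        raise LookupError(f"unknown client: {client_id}")
    if len(disks) == 1:
        return disks[0]                  # single assignment: connect directly
    # More than one disk assigned: show the client a boot menu.
    for i, disk in enumerate(disks, 1):
        print(f"{i}) {disk}")
    choice = int(input("Select a virtual disk: "))
    return disks[choice - 1]

A real product would also handle caching and the actual block streaming, which this sketch leaves out.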
Virtualization is not that easy to implement. A computer runs an OS that is configured for its particular hardware, and running a different OS on the same hardware is not exactly feasible. To tackle this, there is the hypervisor: it acts as a bridge between the virtual OS and the hardware to enable the smooth functioning of the instance.
There are five levels of virtualization that are most commonly used in the industry.
1. Instruction Set Architecture Level (ISA)
In ISA, virtualization works through an ISA emulation. This is helpful to run heaps of legacy
code which was originally written for different hardware configurations. These codes can be
run on the virtual machine through an ISA. A binary code that might need additional layers to
run can now run on an x86 machine or with some tweaking, even on x64 machines. ISA helps
make this a hardware-agnostic virtual machine. The basic emulation, though, requires an
interpreter. This interpreter interprets the source code and converts it to a hardware readable
format for processing.
4. Library Level
OS system calls are lengthy and cumbersome, which is why applications opt for APIs from user-level libraries. Most of the APIs provided by systems are rather well documented; hence, library-level virtualization is preferred in such scenarios. Library-interfacing virtualization is made possible by API hooks, which control the communication link from the system to the applications. Some tools available today, such as vCUDA and WINE, have successfully demonstrated this technique. (Hardware abstraction level virtualization, by contrast, is what Xen hypervisors use to run Linux and other operating systems on x86-based machines.)
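A toy illustration of the API-hook idea, using Python monkey-patching purely as an analogy: the caller keeps using the same library interface while the call is intercepted and redirected. Real systems such as WINE or vCUDA do this at the native-library level rather than in Python.

# Toy API hook: intercept a library call while the caller's code is unchanged.
import time

_original_sleep = time.sleep    # keep a handle on the real implementation

def hooked_sleep(seconds: float) -> None:
    """Log the call, then delegate to the original implementation."""
    print(f"[hook] sleep({seconds}) intercepted")
    _original_sleep(seconds)

time.sleep = hooked_sleep       # install the hook

time.sleep(0.1)                 # application code still calls the same API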
5. Application level
Application-level virtualization comes in handy when you wish to virtualize only an application; it does not virtualize an entire platform or environment. On an operating system, applications work as one process, so this is also known as process-level virtualization. It is generally useful when running virtual machines for high-level languages. Here, the virtualization layer sits as an application program on top of the operating system, and the application runs on top of the virtualization layer. Programs written in high-level languages and compiled for an application-level virtual machine can run fluently here.
Virtualization benefits
Technology is always at risk of crashing at the wrong time. Businesses can tolerate a few glitches, but if your developer is working on an important application that needs to be finished immediately, the last thing you want is a system crash.
To counter this risk, virtualization lets you resume the same work on another device. Store all your backup data through virtualization on cloud services or virtual networks and get easy access to it from any device. Apart from that, there are usually two servers working side by side keeping all your data accessible. If one faces a problem, the other is always available to avoid any interruption.
You can easily transfer data from physical storage to a virtual server, and vice versa. Administrators don't have to waste time digging out hard drives to find data. With a dedicated server and storage, it's quite easy to locate the required files and transfer them in no time. You'll realize virtualization's real worth when you have to transfer data over a long distance. You also have the choice of provisioning virtual disk space; if you don't need much space, you can opt for a thin-provisioned virtual disk.
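As an illustration of what thin provisioning means at the file level, the sketch below (assuming a Unix-like filesystem) creates a virtual disk image whose logical size is 1 GiB while it occupies almost no physical space until data is written.

# Sketch of thin provisioning: create a "1 GiB" virtual disk image as a sparse
# file, so physical blocks are only allocated when data is actually written.
# (On most Unix filesystems truncate produces a sparse file; behaviour can
# differ by OS and filesystem.)

import os

DISK = "thin_disk.img"
LOGICAL_SIZE = 1 * 1024**3            # 1 GiB advertised to the guest

with open(DISK, "wb") as f:
    f.truncate(LOGICAL_SIZE)          # sets the logical size without writing data

st = os.stat(DISK)
print("logical size :", st.st_size)                   # ~1073741824 bytes
print("space on disk:", st.st_blocks * 512, "bytes")  # far smaller until blocks are written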
Security is a major aspect IT professionals have to focus on. With virtual firewalls, access to your data can be restricted at much lower cost compared to traditional methods. Through virtualization, a virtual switch protects your data and applications from harmful malware, viruses, and other cyber threats.
Network virtualization also provides firewalling to create segments within the system. Storing virtualized servers on cloud services saves you from the risk of having your data lost or corrupted. Cloud services are also encrypted with strong protocols that protect your data from various other threats.
So it's a good idea to virtualize all your storage and create a backup on a server stored on cloud services. To ensure you do this correctly, it's preferable to first go through a cloud computing online course to avoid making errors.
4. Smoother IT Operations
Virtual networks help IT professionals become efficient and agile at work. These networks are easy to operate and faster to provision, reducing the effort and time required to work on them. Before virtual networks were introduced, it could take days or weeks for technical workers to install and maintain devices and software on physical servers.
Apart from operations, virtualization has also benefited IT support teams in solving technical problems in physical systems. As all the data is available on a virtual server, technicians don't have to waste time recovering it from crashed or corrupted devices. Learn all the skills behind virtualization with cloud training online, and become a successful technician.
5. Cost-Effective Strategy
Virtualization is a great way to reduce operational costs. With all the data stored on virtual
servers or clouds, there’s hardly a need for physical systems or hardware, thus allowing
businesses to witness a vast reduction in wastage, electricity bills, and maintenance costs. 70%
of senior executives have supported virtualization by calling it efficient and cost-saving.
Virtualization also helps companies save a significant amount of space which can be utilized
to increase the operations of a profitable department. This cost-effective strategy is both a
profitability and productivity booster!
This is the same concept as making a supervisor call to the same level operating system.
Hypervisors are classified into two types:
1. Bare metal/ Native Hypervisors
2. Embedded/ Host Hypervisors
A guest operating system thus runs at a level above the hypervisor. This is the classic implementation of virtual machine architectures.
Considering the hypervisor to be a distinct software layer, guest operating systems run at the third level above the hardware.
Virtual infrastructure
By decoupling physical hardware from an operating system, a virtual infrastructure can help
organizations achieve greater IT resource utilization, flexibility, scalability and cost
savings. These benefits are especially helpful to small businesses that require reliable
infrastructure but can’t afford to invest in costly physical hardware.
Virtual infrastructure components
By separating physical hardware from operating systems, virtualization can provision compute,
memory, storage and networking resources across multiple virtual machines (VMs) for greater
application performance, increased cost savings and easier management. Despite variances in
design and functionality, a virtual infrastructure typically consists of these key components:
Virtualized compute: This component offers the same capabilities as physical servers, but
with the ability to be more efficient. Through virtualization, many operating systems
and applications can run on a single physical server, whereas in traditional infrastructure
servers were often underutilized. Virtual compute also makes newer technologies like cloud
computing and containers possible.
Virtualized storage: This component frees organizations from the constraints and
limitations of hardware by combining pools of physical storage capacity into a single, more
manageable repository. By connecting storage arrays to multiple servers using storage area
networks, organizations can bolster their storage resources and gain more flexibility in
provisioning them to virtual machines. Widely used storage solutions include
fiber channel SAN arrays, iSCSI SAN arrays, and NAS arrays.
Virtualized networking and security: This component decouples networking services from
the underlying hardware and allows users to access network resources from a centralized
management system. Key security features ensure a protected environment for virtual
machines, including restricted access, virtual machine isolation and user provisioning
measures.
The benefits of virtualization touch every aspect of an IT infrastructure, from storage and
server systems to networking tools. Here are some key benefits of a virtual infrastructure:
Cost savings: By consolidating servers, virtualization reduces capital and operating
costs associated with variables such as electrical power, physical security, hosting and server
development.
From design to disaster recovery, there are certain virtual infrastructure requirements
organizations must meet to reap long-term value from their investment.
Plan ahead: When designing a virtual infrastructure, IT teams should consider how business
growth, market fluctuations and advancements in technology might impact their hardware
requirements and reliance on compute, networking and storage resources.
Look for ways to cut costs: IT infrastructure costs can become unwieldy if IT teams don't take the time to continuously examine a virtual infrastructure and its deliverables. Cost-cutting initiatives may range from replacing old servers and renegotiating vendor agreements to automating time-consuming server management tasks.
Prepare for failure: Despite its failover hardware and high availability, even the most
resilient virtual infrastructure can experience downtime. IT teams should prepare for worst-
case scenarios by taking advantage of monitoring tools, purchasing extra hardware and
relying on clusters to better manage host resources.
A virtual infrastructure architecture can help organizations transform and manage their IT
system infrastructure through virtualization. But it requires the right building blocks to
deliver results. These include:
Host: A virtualization layer that manages resources and other services for virtual machines.
Virtual machines run on these individual hosts, which continuously perform monitoring and
management activities in the background. Multiple hosts can be grouped together to work on
the same network and storage subsystems, culminating in combined computing and memory
resources to form a cluster. Machines can be dynamically added or removed from a cluster.
Hypervisor: A software layer that enables one host computer to simultaneously support
multiple virtual operating systems, also known as virtual machines. By sharing the same
physical computing resources, such as memory, processing and storage,
the hypervisor stretches available resources and improves IT flexibility.
User interface: This front-end element lets administrators view and manage virtual infrastructure components by connecting directly to the server host or through a browser-based interface.
Unit-5
Cloud Security
Cloud security refers to the technologies, policies, controls, and services that protect cloud data,
applications, and infrastructure from threats.
Most cloud providers attempt to create a secure cloud for customers. Their business model
hinges on preventing breaches and maintaining public and customer trust. Cloud providers can
attempt to avoid cloud security issues with the service they provide, but can’t control how
customers use the service, what data they add to it, and who has access. Customers can weaken cybersecurity in the cloud with their configuration, sensitive data, and access policies. In each
public cloud service type, the cloud provider and cloud customer share different levels of
responsibility for security. By service type, these are:
Cloud-native breaches – A cloud-native breach is a series of actions by an adversarial actor in which they "land" their attack by exploiting errors or vulnerabilities in a cloud deployment without using malware, "expand" their access through weakly configured or protected interfaces to locate valuable data, and "exfiltrate" that data to their own storage location.
Misconfiguration – Cloud-native breaches often fall under the cloud customer's responsibility for security, which includes the configuration of the cloud service. Research shows that just 26% of companies can currently audit their IaaS environments for configuration errors. Misconfiguration of IaaS often acts as the front door to a cloud-native breach, allowing the attacker to successfully land and then move on to expand and exfiltrate data. Research also shows that 99% of misconfigurations in IaaS go unnoticed by cloud customers.
Disaster recovery – Cybersecurity planning is needed to protect against the effects of significant breaches. A disaster recovery plan includes policies, procedures, and tools designed to enable the recovery of data and allow an organization to continue operations and business.
Insider threats – A rogue employee is capable of using cloud services to expose an
organization to a cybersecurity breach. A recent McAfee Cloud Adoption and Risk
Report revealed irregular activity indicative of insider threat in 85% of organizations.
Weak cloud security measures within an organization include storing data without encryption or failing to enforce multi-factor authentication for access to the service.
2. Compliance violations
Organizations can quickly go into a state of non-compliance, which puts them at risk of serious consequences.
Even tech giants like Facebook have been victims of resource exploitation due to user error or misconfiguration. Keeping employees informed about the dangers and risks of data sharing is of the utmost importance.
3. Malware attacks
Cloud services can be a vector for data exfiltration. As technology improves and protection systems evolve, cyber-criminals have also come up with new techniques to deliver malware to their targets. Attackers have encoded sensitive data into video files and uploaded them to YouTube.
Skyhigh reports that cyber-criminals use private Twitter accounts to deliver malware, which then exfiltrates sensitive data a few characters at a time. Some have also been known to use phishing attacks through file-sharing services to deliver the malware.
4. End-user control
When a firm is unaware of the risk posed by workers using cloud services, the employees
could be sharing just about anything without raising eyebrows. Insider threats have
become common in the modern market. For instance, if a salesman is about to resign
from one firm to join a competitor firm, they could upload customer contacts to cloud
storage services and access them later.
The example above is only one of the more common insider threats today. Many more
risks are involved with exposing private data to public servers.
5. Contract breaches with clients and/or business partners
Contracts restrict how business partners or clients use data and also who has the
authorization to access it. Employees put both the firm and themselves at risk of legal
action when they move restricted data into their cloud accounts without permission from
the relevant authorities.
6. Shared vulnerabilities
Cloud security is the responsibility of all concerned parties in a business agreement. From
the service provider to the client and business partners, every stakeholder shares
responsibility for securing data. Every client should take precautionary measures to protect their sensitive data.
While the major providers have already taken steps to secure their side, the more delicate
control measures are for the client to take care of. Dropbox, Microsoft, Box, and Google,
among many others, have adopted standardized procedures to secure your data. These
measures can only be successful when you have also taken steps to secure your sensitive
data.
Key security protocols, such as protection of user passwords and access restrictions, are the client's responsibility. According to an article named "Office 365 Security and Shared Responsibility" by Skyfence, users should apply strong security measures, since the most delicate part of securing their data is firmly in their own hands.
the level of vulnerability to their data. Cyber-criminals have more opportunities to exploit thanks to these vulnerabilities.
9. Loss of data
Data stored on cloud servers can be lost through a natural disaster, a malicious attack, or a data wipe by the service provider. Losing sensitive data is devastating to firms, especially if they have no recovery plan. Google is one example of a big tech firm that suffered permanent data loss after one of its data centers was struck by lightning four times in its power supply lines. Amazon was another firm that lost essential customer data back in 2011.
An essential step in securing data is carefully reviewing your provider's terms of service and backup procedures. The backup protocol may cover physical access, storage locations, and natural disasters.
The breaches reduce customer trust in the security of their data. A breach in an
organization’s data will inevitably lead to a loss of customers, which ultimately impacts
the firm’s revenue.
The seven security issues which one should discuss with a cloud-computing vendor:
1. Privileged user access —inquire about who has specialized access to data, and about
the hiring and management of such administrators.
2. Regulatory compliance—make sure that the vendor is willing to undergo external
audits and/or security certifications.
3. Data location—does the provider allow for any control over the location of data?
4. Data segregation —make sure that encryption is available at all stages, and that these
encryption schemes were designed and tested by experienced professionals.
5. Recovery —Find out what will happen to data in the case of a disaster. Do they offer
complete restoration? If so, how long would that take?
6. Investigative support —Does the vendor have the ability to investigate any
inappropriate or illegal activity?
7. Long-term viability —What will happen to data if the company goes out of business?
How will data be returned, and in what format?
To address the security issues listed above, SaaS providers will need to incorporate and enhance
security practices used by the managed service providers and develop new ones as the cloud
computing environment evolves. The baseline security practices for the SaaS environment as
currently formulated are discussed in the following sections.
Security Management (People): One of the most important actions for a security team
is to develop a formal charter for the security organization and program. This will foster
a shared vision among the team of what security leadership is driving toward and
expects, and will also foster “ownership” in the success of the collective team. The
charter should be aligned with the strategic plan of the organization or company the
security team works for. Lack of clearly defined roles and responsibilities, and of agreement on expectations, can result in a general feeling of loss and confusion among the security team about what is expected of them, how their skills and experience can be leveraged, and how to meet their performance goals. Morale and pride within the team are lowered, and security suffers as a result.
Security Governance: A security steering committee should be developed whose
objective is to focus on providing guidance about security initiatives and alignment
with business and IT strategies. A charter for the security team is typically one of the
first deliverables from the steering committee. This charter must clearly define the roles
and responsibilities of the security team and other groups involved in performing
information security functions. Lack of a formalized strategy can lead to an
unsustainable operating model and security level as it evolves. In addition, lack of
attention to security governance can result in key needs of the business not being met,
including but not limited to, risk management, security monitoring, application
security, and sales support. Lack of proper governance and management of duties can
also result in potential security risks being left unaddressed and opportunities to
improve the business being missed because the security team is not focused on the key
security functions and activities that are critical to the business.
Risk Management: Effective risk management entails identification of technology
assets; identification of data and its links to business processes, applications, and data
stores; and assignment of ownership and custodial responsibilities. Actions should also
include maintaining a repository of information assets. Owners have authority and
accountability for information assets including protection requirements, and custodians
implement confidentiality, integrity, availability, and privacy controls. A formal risk
assessment process should be created that allocates security resources linked to business
continuity.
Risk Assessment: Security risk assessment is critical to helping the information
security organization make informed decisions when balancing the dueling priorities of
business utility and protection of assets. Lack of attention to completing formalized risk
assessments can contribute to an increase in information security audit findings, can
jeopardize certification goals, and can lead to inefficient and ineffective selection of
security controls that may not adequately mitigate information security risks to an
acceptable level. A formal information security risk management process should
proactively assess information security risks as well as plan and manage them on a
periodic or as-needed basis. More detailed and technical security risk assessments in
the form of threat modeling should also be applied to applications and infrastructure.
Doing so can help the product management and engineering groups to be more
proactive in designing and testing the security of applications and systems and to
collaborate more closely with the internal security team. Threat modeling requires both
IT and business process knowledge, as well as technical knowledge of how the
applications or systems under review work.
Security Monitoring and Incident Response: Centralized security information
management systems should be used to provide notification of security vulnerabilities
and to monitor systems continuously through automated technologies to identify
potential issues. They should be integrated with network and other systems monitoring
processes (e.g., security information management, security event management, security
information and event management, and security operations centers that use these
systems for dedicated 24/7/365 monitoring). Management of periodic, independent
third-party security testing should also be included. Many of the security threats and
issues in SaaS center around application and data layers, so the types and sophistication
of threats and attacks for a SaaS organization require a different approach to security
monitoring than traditional infrastructure and perimeter monitoring. The organization
may thus need to expand its security monitoring capabilities to include application- and
data-level activities. This may also require subject-matter experts in applications
security and the unique aspects of maintaining privacy in the cloud. Without this
capability and expertise, a company may be unable to detect and prevent security threats
and attacks to its customer data and service stability.
Third-Party Risk Management: As SaaS moves into cloud computing for the storage and processing of customer data, there is a higher expectation that the SaaS provider will effectively manage the security risks involving third parties. Lack of a third-party risk management program may result in damage to the provider's reputation, revenue losses, and legal actions should the provider be found not to have performed due diligence on its third-party vendors.
Cloud monitoring provides an easier way to identify patterns and pinpoint potential security
vulnerabilities in cloud infrastructure. As there’s a general perception of a loss of control when
valuable data is stored in the cloud, effective cloud monitoring can put companies more at ease
with making use of the cloud for transferring and storing data.
When customer data is stored in the cloud, cloud monitoring can prevent loss of business and
frustrations for customers by ensuring that their personal data is safe. The use of web services
can increase security risks, yet cloud computing offers many benefits for businesses, from
accessibility to a better customer experience. Cloud monitoring is one initiative that enables
companies to find the balance between the ability to mitigate risks and taking advantage of the
benefits of the cloud – and it should do so without hindering business processes.
One of the most efficacious measures for mitigating cloud security risks is gaining
complete control over data across all endpoints. Using solutions that scan, assess and
take action on the data, before its transition from an enterprise network, enables a robust
defense against any data loss through the cloud. This further helps in avoiding
vulnerabilities, such as uploading of sensitive files to unprotected cloud repositories.
Securing code must be one of the top priorities in order to eliminate risks from potential cyberthreats. During the development of code for websites, the focus lies in selecting the right security development lifecycle (SDL), one that aligns well with the company's delivery strategy. Key benefits associated with a secure SDL include continuous security, early risk detection, improved stakeholder awareness, and reduction of business risks.
While several cloud security monitoring solutions have been launched, the above
guidelines would help in the successful execution of online monitoring system
practices.
4. Patch Management
5. Automation
Organizations must ensure the policies created are attached to roles or groups instead of individual users, so that users are not granted unnecessary or excessive privileges or permissions. To stem any unauthorized access to resources, organizations must provision access to these resources using roles instead of giving out separate sets of credentials.
It must be ensured that users are given the minimal access privileges that still allow them to fulfil their responsibilities. All users must have multi-factor authentication on their individual accounts as a defensive strategy against account compromise.
Organizations must also limit the number of users with administrative privileges. Access keys must be rotated on a regular basis and passwords expired periodically, ensuring that data remains inaccessible with a stolen or lost key.
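A hedged sketch of these practices on one concrete platform, assuming AWS and the boto3 SDK, with hypothetical role, policy, and bucket names: access is granted through a role carrying a minimal inline policy rather than through per-user credentials.

# Hedged sketch (AWS boto3; all names are hypothetical): grant access through a
# role with a minimal inline policy instead of handing users their own credentials.

import json
import boto3

iam = boto3.client("iam")

trust = {  # who may assume the role (here: EC2 instances)
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "ec2.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}

least_privilege = {  # read-only access to a single bucket, nothing else
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["s3:GetObject"],
                   "Resource": "arn:aws:s3:::example-reports-bucket/*"}],
}

iam.create_role(RoleName="ReportReaderRole",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.put_role_policy(RoleName="ReportReaderRole",
                    PolicyName="ReadReportsOnly",
                    PolicyDocument=json.dumps(least_privilege))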
7. Data Protection
New technologies entail distinct issues, and with cloud-based storage solutions to the fore, data protection has become an imperative challenge to address. Organizations must opt for cloud service providers that deliver data encryption as a standard feature in their offerings. Encrypting highly sensitive data, including personally identifiable information (PII) or protected health information, with the aid of customer-controlled keys has become a must-have for organizations eyeing the move to cloud. Customer-controlled keys put the management burden on the customer; however, they provide better control.
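A minimal sketch of client-side encryption under a customer-controlled key, assuming the third-party Python "cryptography" package; in practice the key would live in the customer's own KMS or HSM rather than being generated inline.

# Minimal sketch of encrypting sensitive records with a customer-controlled key
# before they ever leave the organisation (assumes the 'cryptography' package).
# Key management (storage, rotation, access) remains the customer's responsibility.

from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()   # in practice: held in the customer's KMS/HSM
cipher = Fernet(customer_key)

record = b"patient_id=1234; diagnosis=..."   # PII / PHI to protect
ciphertext = cipher.encrypt(record)          # only the ciphertext is uploaded

print(cipher.decrypt(ciphertext))            # only the key holder can read it back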
Organizations must choose a provider that guarantees protection against loss of critical data and meets their data backup and recovery needs. Data replication is a common practice among cloud service providers to ensure persistence. Organizations must thoroughly analyze the cloud deployment to identify where the replicated data has been cached or stored and take action accordingly to ensure that the copied data has been deleted.
When we need to explain cloud computing security architecture, it is important to have a clear idea of cloud computing and the safety measures it takes to protect data. In general, cloud service providers (CSPs) and service level agreements (SLAs) provide basic security for the stored data. Apart from these, several researchers have also recommended some safety measures for a cloud security architecture to protect data at a basic level. They are:
To monitor cloud data easily, it is advisable to use a single sign-on ID for multiple accounts.
Always prefer virtual firewalls and virtual components to avoid threats.
Maintain preventive measures for the incorporated data.
The cloud security architecture plan may vary from model to model, so it is necessary to understand the cloud computing security architecture for every model separately.
IaaS Cloud Computing Security Architecture
Infrastructure-as-a-service architecture relies mainly on security and networking tools to protect the network and data. Here, the impact of the application programming interface (API) is higher than in the other models. Even though the cloud service provider (CSP) provides security for the infrastructure, the remaining security is provided by network tools like network packet brokers (NPBs).
The attributes of IaaS Cloud Computing Security Architecture are:
Segmentation of the network is in practice.
Virtual network tools are stored in the cloud.
Virtual firewalls and web application firewalls are preferred; they help prevent malware and other threats.
Optimum utilization of Intrusion Detection Systems and Intrusion Prevention Systems (IDS/IPS).
Virtual routers are also used.
SaaS Cloud Computing Security Architecture
The cloud security architecture follows a different plan for software as a service. As the name itself specifies, the security architecture here mainly monitors the software, and the data is accessed over an Internet connection for management. Management needs to negotiate with the CSP team to maintain proper security based on the legal contract between the security team and the management. Cloud access security brokers (CASBs) play a vital role in this security model. The features of SaaS Cloud Computing Security Architecture are:
Usage of multiple logs is highly prioritized.
IP restrictions are strictly followed to maintain security, also with respect to management.
Application programming interface (API) gateways are also used in this cloud computing security architecture plan.
PaaS Cloud Computing Security Architecture
In general, the platform-as-a-service model can be defined as the deployment of applications by drawing on the capabilities of the host and the underlying software and hardware, without the cost and complexity of buying and managing those capabilities. The cloud security reference architecture for PaaS depends largely on the cloud security providers. As the applications are taken care of by the management, the remaining data needs to be looked after by the CSPs. The features of PaaS Cloud Computing Security Architecture are mostly similar to those of the SaaS plan. They are:
Maintenance of several logs and audits.
IP restrictions should not be compromised.
Half of the security can be achieved by API gateways.
Cloud access security brokers (CASBs) also play a significant part in maintaining security.
Application Security
Application security describes security measures at the application level that aim to prevent
data or code within the app from being stolen or hijacked. It encompasses the security
considerations that happen during application development and design, but it also involves
systems and approaches to protect apps after they get deployed.
Application security may include hardware, software, and procedures that identify or minimize
security vulnerabilities. A router that prevents anyone from viewing a computer’s IP address
from the Internet is a form of hardware application security. But security measures at the
application level are also typically built into the software, such as an application firewall that
strictly defines what activities are allowed and prohibited. Procedures can entail things like an
application security routine that includes protocols such as regular testing.
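To illustrate the allow-list idea behind an application firewall, here is a toy sketch (the paths and rules are invented for the example): only explicitly permitted operations reach the application, everything else is rejected first.

# Toy sketch of the allow-list idea behind an application firewall: only
# explicitly permitted actions on permitted paths get through; everything
# else is rejected before it reaches the application.

ALLOWED = {("GET", "/reports"), ("POST", "/reports"), ("GET", "/health")}

def firewall(method, path):
    if (method, path) not in ALLOWED:
        return 403, "blocked by application firewall"
    return 200, "forwarded to application"

print(firewall("GET", "/reports"))       # (200, 'forwarded to application')
print(firewall("DELETE", "/reports"))    # (403, 'blocked by application firewall')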
Application security controls are techniques to enhance the security of an application at the
coding level, making it less vulnerable to threats. Many of these controls deal with how the
application responds to unexpected inputs that a cybercriminal might use to exploit a weakness.
A programmer can write code for an application in such a way that the programmer has more
control over the outcome of these unexpected inputs. Fuzzing is a type of application security
testing where developers test the results of unexpected values or inputs to discover which ones
cause the application to act in an unexpected way that might open a security hole.
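A toy fuzzing sketch is shown below; real fuzzers such as AFL or libFuzzer are coverage-guided and far more sophisticated, but the idea of feeding random, unexpected inputs and recording failures is the same.

# Toy fuzzing sketch: feed randomly generated inputs to a parsing routine and
# record which ones make it misbehave.

import random
import string

def parse_age(text):               # the routine under test
    return int(text.strip())       # raises on anything non-numeric

def random_input(max_len=8):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

failures = []
for _ in range(1000):
    sample = random_input()
    try:
        parse_age(sample)
    except Exception as exc:       # an unhandled exception is a potential finding
        failures.append((sample, type(exc).__name__))

print(f"{len(failures)} inputs caused unexpected behaviour")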
Security is a problem. Network security is an even bigger problem because of the complex
factors that define risks and the profound negative effects that can occur if you fail.
Virtual network security is the worst problem of all because it combines issues generated by
traditional hosting and application security with those from network security, and then adds the
challenges of virtual resources and services. It's no wonder we're only now starting to recognize
the problems of cloud-virtual networking. And we're a long way from solving them.
Security should always be viewed as an incremental matter: how different is the current situation from situations already experienced, tolerated or addressed? In the case of cloud-virtual network security, the biggest difference is virtual-to-resource mapping, meaning the framework required to connect hosted components. In short, cloud-virtual service security issues occur because the security tools designed to protect hosted software features are different from those safeguarding physical devices.
Step one in securing virtual machines in cloud computing is to isolate the new hosted elements. For example, three features hosted inside an edge device could be deployed in the cloud either as part of the service data plane, with addresses visible to network users, or as part of a private subnetwork that is invisible. If you deploy on the visible data plane, any of the features can be attacked, and it is also possible your hosting and management processes will become visible and vulnerable. If you isolate your hosting and feature connections inside a private subnetwork, they are protected from outside access.
In container hosting today, both in the data center and in the cloud, application components
deploy inside a private subnetwork. As a result, only the addresses representing APIs that users
are supposed to access are exposed. That same principle needs to be applied to virtual functions;
expose the interfaces that users actually connect to and hide the rest with protected addresses.
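As a simplified illustration of exposing only user-facing interfaces, the Python sketch below binds a public endpoint on all interfaces and a management endpoint on the loopback address only; in a real deployment the management API would sit on a private subnetwork behind firewall rules, not merely on loopback.

# Sketch of "expose only what users need": the public API listens on all
# interfaces, while the management endpoint is bound to a private address
# that is unreachable from outside. Addresses and ports are illustrative.

from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

public_api = HTTPServer(("0.0.0.0", 8080), Handler)      # reachable by service users
mgmt_api   = HTTPServer(("127.0.0.1", 9090), Handler)    # loopback/private only

threading.Thread(target=public_api.serve_forever, daemon=True).start()
mgmt_api.serve_forever()    # blocks; stop with Ctrl+C in this sketch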
Ensure all components are tested and reviewed
Step two in cloud-virtual security is to certify virtual features and functions for security
compliance before you allow them to be deployed. Outside attacks are a real risk in virtual
networking, but an insider attack is a disaster. If a feature with a back-door security fault is
introduced into a service, it becomes part of the service infrastructure and is far more likely to
possess open attack vectors to other infrastructure elements.
Private subnetworks can help in addressing virtual machine security in cloud computing. If
new components can only access other components in the same service instance, the risk is
reduced that malware can be introduced in a new software-hosted feature. Yes, a back-door
attack could put the service itself at risk, but it's less likely the malware will spread to other
services and customers.
This approach, however, doesn't relieve operators of the burden of security testing. It's
important to insist on a strong lifecycle management compliance process flow for all hosted
features and functions -- one that operators can audit and validate. If the companies supplying
your hosted features or functions properly test their new code, it's less likely it will contain
accidental vulnerabilities or deliberately introduced back-door faults.
Step three is to separate infrastructure management and orchestration from the service.
Management APIs will always represent a major risk because they're designed to control
features, functions and service behavior. It's important to protect all such APIs, but it's critical
to protect the APIs that oversee infrastructure elements that should never be accessed by service
users.
The most important area to examine to ensure virtual machine security in cloud computing is the virtual-function-specific piece of the Virtual Network Functions Manager (VNFM) structure mandated by the ETSI NFV (Network Functions Virtualization) Industry Specification Group. This code is provided by the supplier of the VNF, and it is likely to require access to APIs that represent infrastructure elements as well as orchestration or deployment tools. Nothing more than bad design is needed for these elements to open a gateway to infrastructure management APIs that could affect the security and stability of features used in virtual-cloud services.
Securing the VNFM means requiring that providers of VNFs offer their architectures governing
VNFM connections to infrastructure or deployment/management APIs for review and possible
security enhancements. The important point is to ensure that no service user, VNF or VNFM
associated with other VNFs or services can access an infrastructure management API.
By containing access, you limit your security risk. Additionally, operators should require that
access to infrastructure management and orchestration APIs by any source is chronicled, and
that any access or change is reviewed to prevent a management access leak from occurring.
The fourth and final point in cloud-virtual network security is to ensure that virtual network
connections don't cross over between tenants or services. Virtual networking is a wonderful
way of creating agile connections to redeployed or scaled features, but each time a virtual
network change is made, it's possible it can establish an inadvertent connection between two
different services, tenants or feature/function deployments. This can produce a data plane leak,
a connection between the actual user networks or a management or control leak that could
allow one user to influence the service of another.
Ironclad practices and policies governing virtual connectivity can reduce the risk of an error
like this, but it's very difficult to prevent one altogether. That's because of the indirect
relationship between a virtual network that connects features and a real network that connects
users. One remedy is to use a network scanner or inventory tool to search for devices and virtual
devices on a virtual network and compare that result with the service topology that's expected.
This can be run as the last step of a virtual network change.
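A minimal sketch of that comparison step, with made-up endpoint names: the scanned inventory is checked against the expected service topology, and anything extra or missing is flagged.

# Sketch of the final verification step after a virtual network change:
# compare what a scan actually finds with the topology that is expected,
# and flag anything that crosses tenant boundaries. Names are invented.

expected = {"tenantA-web", "tenantA-db", "tenantA-lb"}                   # from the service model
discovered = {"tenantA-web", "tenantA-db", "tenantA-lb", "tenantB-db"}   # from the scanner

unexpected = discovered - expected    # possible cross-tenant leak
missing    = expected - discovered    # possible broken connection

if unexpected:
    print("ALERT: unexpected endpoints reachable:", unexpected)
if missing:
    print("WARNING: expected endpoints not reachable:", missing)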
Cloud-virtual networking introduces the cloud to networking, but it can also usher in server,
hosting and virtual network security issues as well. Anyone who thinks these risks can be
addressed through traditional means -- using tools and practices from the era when we built
networks only from devices -- is going to face a hard reality, and soon.
An Identity and Access Management (IAM) system defines and manages user identities and
access permissions. Users of IAM include customers (customer identity management) and
employees (employee identity management). With IAM technologies, IT managers can ensure
that users are who they say they are (authentication) and that users access the applications and
resources they have permission to use (authorization).
1. Eliminating weak passwords—research shows over 80% of data breaches are caused
by stolen, default, or weak passwords. IAM systems enforce best practices in
credential management, and can practically eliminate the risk that users will use weak
or default passwords. They also ensure users frequently change passwords.
2. Mitigating insider threats—a growing number of breaches are caused by insiders. IAM can limit the damage caused by malicious insiders by ensuring users only have access to the systems they work with and cannot escalate privileges without supervision.
3. Advanced tracking of anomalies—modern IAM solutions go beyond simple
credential management, and include technologies such as machine learning, artificial
intelligence, and risk-based authentication, to identify and block anomalous activity.
4. Multi-factor security—IAM solutions help enterprises progress from two-factor to
three-factor authentication, using capabilities like iris scanning, fingerprint sensors,
and face recognition.
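As one hedged example of an additional factor, the sketch below uses a time-based one-time password (TOTP), assuming the third-party pyotp package; production IAM systems wire this into their own enrolment and verification flows.

# Sketch of one common multi-factor building block: a time-based one-time
# password (TOTP) as the "something you have" factor, alongside the password.
# Assumes the third-party 'pyotp' package.

import pyotp

secret = pyotp.random_base32()    # provisioned once into the user's authenticator app
totp = pyotp.TOTP(secret)

code_from_user = totp.now()       # in reality typed by the user from their device
print("second factor accepted:", totp.verify(code_from_user))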
In cloud computing, data is stored remotely and accessed over the Internet. Because users can
connect to the Internet from almost any location and any device, most cloud services are device-
and location-agnostic. Users no longer need to be in the office or on a company-owned device
to access the cloud. And in fact, remote workforces are becoming more common.
As a result, identity becomes the most important point of controlling access, not the network
perimeter. The user's identity, not their device or location, determines what cloud data they can
access and whether they can have any access at all.
However, with cloud computing, sensitive files are stored in a remote cloud server. Because
employees of the company need to access the files, they do so by logging in via browser or an
app. If a cyber-criminal wants to access the files, now all they need is employee login
credentials (like a username and password) and an Internet connection; the criminal doesn't
need to get past a network perimeter.
Unit-6
Cloud Platforms and Applications
App Engine
Google App Engine is Google's platform as a service offering that allows developers and
businesses to build and run applications using Google's advanced infrastructure. These
applications are required to be written in one of a few supported languages, namely: Java,
Python, PHP and Go. It also requires the use of Google query language and that the database
used is Google Big Table. Applications must abide by these standards, so applications either
must be developed with GAE in mind or else modified to meet the requirements.
GAE is a platform, so it provides all of the required elements to run and host web applications, be they mobile or web. Without this all-in-one feature, developers would have to source their own servers, database software, and the APIs that make them work properly together, not to mention all the configuration that must be done. GAE takes this burden off developers so they can concentrate on the app's front end and functionality, driving a better user experience.
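For orientation, a minimal sketch of the kind of web app GAE hosts is shown below, assuming Flask; an accompanying app.yaml that selects the runtime is also needed for deployment but is omitted here.

# main.py -- minimal sketch of a web app of the kind App Engine hosts
# (assumes Flask; the app.yaml describing the runtime is omitted here).

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from App Engine"

if __name__ == "__main__":
    # Local testing only; on App Engine the platform runs the app for you.
    app.run(host="127.0.0.1", port=8080)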
5. Increased Scalability
Scalability is synonymous with growth — an essential factor that assures success and
competitive advantage. The good news is that the Google App Engine cloud development
platform is automatically scalable. Whenever the traffic to the web application increases,
GAE automatically scales up the resources, and vice-versa.
6. Improved savings
With Google App Engine, you do not have to spend extra on server management of the
app. The Google Cloud service is good at handling the backend process.
Also, Google App Engine pricing is flexible as the resources can scale up/down based on
the app’s usage. The resources automatically scale up/down based on how the app performs
in the market, thus ensuring honest pricing in the end.
7. Smart pricing
The major concern of organizations revolves around how much Google App Engine costs. For your convenience, Google App Engine has a daily and a monthly billing cycle, i.e.,
Daily: You will be charged daily for the resources you use
Monthly: All the daily charges are calculated and added to the taxes (if
Azure platform
Microsoft Azure is a public cloud platform with more than 200 products and services accessible over the public internet. Like other public cloud vendors, Azure manages and maintains hardware, infrastructure, and resources that can be accessed for free or on a pay-per-use, on-demand basis.
Features of Azure
1. Enhance and Implement Backup and Disaster Recovery
Azure is a backup and disaster recovery dream tool. Why? Because of its flexibility,
advanced site recovery, and built-in integration.
Tape backup has a time and place, but it has limited abilities as a stand-alone backup
and disaster recovery solution. Azure site recovery can enhance your tape backup with
offsite replication, minimal onsite maintenance, up to ninety-nine years of data
retention, minimal or no capital investment, and minimal operational costs. Azure
backup stores three copies of your data in three different locations in the data center,
and then another three copies in a remote Azure data center, so you never have to worry
about losing data.
If you’re in a Windows virtual environment, Azure’s built-in integration for
additional backup will be a quick and painless solution. Azure site recovery integrates
with System Center and HyperV architectures, creating a robust and seamless cohesion
between Azure, System Center, and HyperV.
2. Host and develop web and mobile apps
Whether you’re looking for a platform for hosting, developing, or managing a web or
mobile app, Azure makes those apps autonomous and adaptive with patch
management, AutoScale, and integration for on-premise apps.
With Automatic patch management for your virtual machines, you can spend less
time managing your infrastructure and focus on improving your apps. Azure also comes
with continuous deployment support, which allows you to streamline ongoing code
updates.
AutoScale is a feature built into Azure Web Apps that adjusts your resources
automatically based on customer web traffic so you have the resources you need when
traffic is high, and save money when you’re not in peak times.
Through Azure, you can seamlessly link your web app to an on-premise app.
Connecting apps in both locations lets both employees and
partners securely access resources inside your firewall—resources that would
otherwise be difficult to access externally.
3. Distribute and supplement active directory
Azure can integrate with your Active Directory to supplement your identity and access
capabilities—this gives your DNS a global reach, centralized management, and robust
security.
With Azure, you can globally distribute an Active Directory environment that is direct
connect enabled. No other cloud provider has the ability to extend the reach of your
domain controller and consolidate AD management like Azure.
If you have multiple locations or use on-premise apps or cloud apps like Microsoft 365,
Active Directory integration with Azure will be the central tool for managing and
maintaining access to all of these tools.
Azure also enables you to utilize multi-factor authentication, adding a new layer
of security to your data and applications with zero hassle for your users. You can also
easily implement single sign-on for Windows, Mac, Android, and iOS cloud apps.
Within the Azure IoT Hub, you can monitor and manage billions of devices and gain
insights to help you make better business decisions, improve customer experiences,
reduce complexity, lower costs, and speed up development.
The enhanced security of Azure is a huge asset for IoT solutions, which traditionally
have security gaps that hackers can take advantage of. Other benefits include remote
monitoring and predictive maintenance and analytics.
Infrastructure on Azure isn’t just a bunch of services. You need to understand the basic pillars:
compute, network, and storage.
Everything on Azure is built on top of those pillars, and they form a foundation for your cloud
infrastructure too. You can build your own architecture from infrastructure-as-a-service (IaaS)
products, such as Azure Virtual Networks, Azure VMs, Azure VDI, and Azure Disk Storage. Or you can take advantage of the platform-as-a-service (PaaS) offerings, such as Azure SQL and Azure App Service. All are built on top of the pillars, but offer different layers
of abstraction for you to build your business applications.
Azure pros and strengths
1. First, Azure has a lot of data centers, and they keep expanding. This means services
and your applications will be closer to users. It also means specific legal requirements
for certain countries when it comes to cloud computing are more likely to be met.
Because Microsoft has been supporting on-premises customers for 40-plus years, it has an extensive hybrid cloud offering to get all of its existing customers into the cloud. It also has very good integration with existing tools and technologies
such as Visual Studio, Active Directory, and File Storage.
Since Azure is trying to be all things to all cloud-computing crowds, at times some
services just don’t get enough attention. This can mean that the new data analytics
service you have made that uses a certain Azure feature might fall behind a bit as the
feature disappears.
Azure will try and keep up with every single trend in cloud computing, so the number
of new services and renamed services (thank you, Microsoft) can be overwhelming.
The key is to focus on just the ones you need for your project.
Aneka
Aneka is a platform for deploying Clouds and developing applications on top of them. It provides a runtime environment and a set of APIs that allow developers to build .NET applications that run their computation on either public or private clouds. One of the key features of Aneka is its ability to support multiple programming models, which are ways of expressing the execution logic of applications through specific abstractions. This is accomplished by creating a customizable and extensible service-oriented runtime environment represented by a collection of software containers connected together. By leveraging this architecture, advanced services including resource reservation, persistence, storage management, security, and performance monitoring have been implemented. On top of this infrastructure, different programming models can be plugged in to provide support for different scenarios, as demonstrated by engineering, life science, and industry applications.
Aneka container can be classified into three major categories:
1. Fabric Services
2. Foundation Services
3. Application Services
1. Fabric services:
Fabric Services define the lowest level of the software stack representing the Aneka
Container. They provide access to the resource-provisioning subsystem and to the monitoring
facilities implemented in Aneka.
2. Foundation services:
While Fabric Services define the basic infrastructure management features of the system, Foundation Services are related to the logical management of the distributed system built on top of the infrastructure and provide supporting services for the execution of distributed applications.
3. Application services:
Application Services manage the execution of applications and constitute a layer that
differentiates according to the specific programming model used for developing distributed
applications on top of Aneka.
Open Challenges
Let us dive into the challenges of cloud computing. With a multitude of benefits from implementing cloud computing in businesses of all sizes, cloud computing has become a popular trend in the market. To put it in simple words: cloud computing is nothing but moving on-premises computing to the internet. It saves both time and money.
People around the globe get access to an open pool of resources like apps, services, servers,
data, and computer networks. It is made possible either by using a privately-owned cloud or a
3rd-party server. It improves the way data is accessed and removes inconsistency in further
updates. Also, a minimal amount of administration is required.
Cloud computing also ensures data security, better data storage, increased synchronization
between employees, and flexibility. Organizations have become capable of making better
decisions to scale and grow.
1. Security
The topmost concern in investing in cloud services is security issues in cloud computing. This is because your data gets stored and processed by a third-party vendor and you cannot see it. Every other day, you get informed about broken authentication, compromised credentials, account hacking, data breaches, and so on in one organization or another. It makes you a little more skeptical.
Fortunately, cloud providers these days have started to put effort into improving their security capabilities. You can be cautious as well by verifying that the provider implements a safe user identity management system and access control procedures, and by ensuring it implements database security and privacy protocols.
2. Password Security
As large numbers of people access your cloud account, it becomes vulnerable. Anybody
who knows your password or hacks into your cloud will be able to access your
confidential information.
Here the organization should use multi-level authentication and ensure that passwords remain protected. Passwords should also be changed regularly, especially when an employee resigns and leaves the organization. Access rights to usernames and passwords should be granted judiciously.
3. Cost Management
Cloud computing enables you to access application software over a fast internet
connection and lets you save on investing in costly computer hardware, software,
management, and maintenance. This makes it affordable. But what is challenging and expensive is tuning the organization's needs to the third-party platform.
Another costly affair is the cost of transferring data to a public cloud, especially for a small business or project.
4. Lack of Expertise
With the increasing workload on cloud technologies and continuously improving cloud tools, managing them has become difficult. There has been consistent demand for a trained workforce that can deal with cloud computing tools and services. Hence, firms need to train their IT staff to minimize this challenge.
5. Internet Connectivity
The cloud services are dependent on a high-speed internet connection. So the businesses
that are relatively small and face connectivity issues should ideally first invest in a good
internet connection so that no downtime happens. It is because internet downtime might
incur vast business losses.
6. Control or Governance
Another issue in cloud computing is maintaining proper control over asset management and maintenance. There should be a dedicated team to ensure that the assets used to implement cloud services are used according to agreed policies and dedicated procedures. There should be proper maintenance, and the assets should be used to meet your organization's goals successfully.
7. Compliance
Companies have started to invest in multiple public clouds, multiple private clouds, or a combination of both called the hybrid cloud. This has grown rapidly in recent times. So it has become important to list the challenges faced by such organizations and find solutions to grow with the trend.
9. Performance
When your business applications move to a cloud or a third-party vendor, your business performance starts to depend on your provider as well. Another major problem in cloud computing is investing in the right cloud service provider.
Before investing, you should look for providers with innovative technologies. The performance of BI and other cloud-based systems is linked to the provider's systems as well. Be cautious about choosing the provider and verify that they have protocols to mitigate issues that arise in real time.
10. Migration
Scientific applications
Cloud Computing offers great potential for scientific applications, but Clouds have been
designed for running business and web applications, whose resource requirements are
different from communication intensive tightly coupled scientific applications which
typically require low latency and high bandwidth interconnections and parallel file systems
to achieve best performance. Most commercial Clouds use commodity networking and
storage devices which are suitable to effectively host loosely coupled scientific applications
which frequently require large amounts of computation with modest data requirements and
infrequent communication among tasks. Several studies have shown that cloud computing is a viable platform for running loosely coupled scientific applications and workflow applications composed of loosely coupled parallel applications consisting of a set of computational tasks linked via data and control dependencies.
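A small sketch of that loosely coupled pattern: independent, compute-heavy tasks run in parallel with no communication between them, and an aggregation step runs only once they all finish. The task itself is a stand-in, not a real scientific code.

# Sketch of a loosely coupled workflow: independent tasks run in parallel
# (large computation, modest data, no inter-task communication), and a final
# task depends on all of their outputs.

from concurrent.futures import ProcessPoolExecutor

def simulate(parameter):
    # stand-in for a compute-heavy, independent task
    return sum(i * parameter for i in range(1_000_000))

def aggregate(results):
    # control/data dependency: runs only after all simulations finish
    return max(results)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, [1, 2, 3, 4]))
    print("best result:", aggregate(results))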
Healthcare is an area where computer technology can be seen as impacting a wide variety of aspects of our health, from offering help with business issues to helping in scientific growth. With recent technological developments such as cell phones and cloud computing, a range of services and devices have been developed to provide health care. In the cloud system, medical data can be gathered and distributed automatically to medical practitioners anywhere in the world. From there, doctors in the field have the capability of returning input to specific patients.
An ECG is simply a visual record of the electrical activity of the heart muscle as it varies over time, typically printed on paper for easier study. Like other muscles, the heart contracts in response to electrical depolarization of its muscle cells. It is the sum of this electrical activity, when amplified and recorded for just a few seconds, that we call a heart rhythm.
With ECG data collection and tracking, it is possible to test for chest pain, low-grade heart rhythm disturbances, arrhythmias, and more. An ECG (electrocardiogram) is the electrical expression of the contractile activity of the myocardium.
Due to the availability of the internet, cloud computing has come into the picture and presents itself as an attractive choice for developing a health monitoring system. The study of the shape of the waveform is used to classify arrhythmias, and it is one of the most common ways of detecting heart disease. In this way, a patient who has a cardiac arrhythmia (or some other abnormal heart rate) can be continuously monitored through ECG tests. Such remote ECG monitoring allows immediate notification of doctors and first-aid workers, while such alerts do not slow down the movement of the patient. Cloud computing technologies allow the patient to have his or her heartbeat monitored remotely via the Internet.
Advantages
Since cloud computing systems are now readily available and deliver services in less time, they have the promise to be a massive disruptor to how the technology is distributed.
As a consequence, the doctor doesn't need to put a huge effort into computing, since there is plenty of software available to run.
Cloud infrastructure is highly scalable; it can be scaled up and down according to the needs of each user.
Cloud computing systems are now available and aim to provide reliable services to consumers in less time.
The doctor's office does not need to invest in a broad computer system.
Gene expression analysis is most conveniently defined as the study of how genes are
transcribed to synthesize functional gene products, whether functional RNA molecules or
proteins. Gene regulation research offers insights into normal cellular processes, such
as differentiation, and into processes that are abnormal or pathological.
Researchers have developed new software that significantly enhances the speed at
which scientists can interpret data from RNA sequencing. The tool, known as Myrna, uses
cloud computing, a way of sharing computer resources over the Internet. Faster, cost-
effective gene expression analysis may be a useful instrument for explaining the genetic
causes of disease.
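To make the idea a little more concrete, the toy sketch below computes a log2 fold change between two conditions for each gene from made-up count data. Real pipelines such as Myrna also perform alignment, normalization, and statistical testing, so this is only an illustration; the gene names, counts, and pseudocount are assumptions.

```python
# Toy illustration of a basic gene expression comparison: log2 fold change
# between a control and a treated condition. Gene names and counts are made
# up; real tools also handle alignment, normalization and statistics.
import numpy as np

genes = ["geneA", "geneB", "geneC"]
control = np.array([[100.0, 120.0,  90.0],   # rows: genes, cols: replicates
                    [ 50.0,  55.0,  60.0],
                    [200.0, 180.0, 210.0]])
treated = np.array([[400.0, 380.0, 420.0],
                    [ 45.0,  52.0,  58.0],
                    [ 95.0, 105.0, 100.0]])

pseudocount = 1.0  # avoids division by zero for unexpressed genes
log2_fc = np.log2((treated.mean(axis=1) + pseudocount) /
                  (control.mean(axis=1) + pseudocount))

for gene, fc in zip(genes, log2_fc):
    print(f"{gene}: log2 fold change = {fc:+.2f}")
```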
Some of the tools for gene expression analysis are:
AltAnalyze
dChip
geWorkbench 2.5.1 from NCI
Babelomics suite
Myrna
Cloud-CoXCS is a machine learning classification system for gene expression datasets on
Cloud infrastructure. It is composed of three components: CoXCS, Aneka, and the Cloud
computing infrastructure.
Satellite image processing is an important field in research and development and deals with
images of the Earth captured by means of artificial satellites. First, the photographs are
taken in digital form and later processed by computers to extract information. Statistical
methods are applied to the digital images, and after processing, the various discrete surfaces
are identified by analyzing the pixel values.
Satellite imagery is widely used to plan infrastructure, monitor environmental conditions,
and detect and respond to impending disasters.
In broader terms, we can say that satellite image processing is a kind of remote sensing
that works at the pixel level to collect coherent information about the Earth's surface.
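As a small example of the pixel-level statistical processing described above, the sketch below computes the widely used NDVI (normalized difference vegetation index) from red and near-infrared bands and classifies pixels with a simple threshold. The band values and the 0.3 threshold are illustrative assumptions, not real data.

```python
# Sketch of simple per-pixel analysis of satellite imagery: compute NDVI
# from red and near-infrared bands and label pixels as vegetation or not.
# Band values and the 0.3 threshold are illustrative, not from real data.
import numpy as np

red = np.array([[0.10, 0.30],    # reflectance in the red band
                [0.25, 0.05]])
nir = np.array([[0.60, 0.35],    # reflectance in the near-infrared band
                [0.30, 0.55]])

# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to +1
ndvi = (nir - red) / (nir + red + 1e-9)

vegetation_mask = ndvi > 0.3     # crude threshold separating vegetation
print("NDVI:\n", np.round(ndvi, 2))
print("vegetation pixels:\n", vegetation_mask)
```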
Broadly, there are four kinds of resolution associated with satellite imagery. These are:
Spatial resolution –
It is determined by the sensor's Instantaneous Field of View (IFoV) and is defined as the
size of the ground area represented by a single pixel of the image. Because it describes the
resolving power, the ability to separate nearby features on the ground, it is termed spatial
resolution.
Spectral resolution –
This resolution measures the width of the wavelength intervals and determines the number
of wavelength bands that the sensor can distinguish.
Temporal resolution –
The word temporal relates to time, and this resolution is defined as the time that passes
between successive images of the same area, i.e., the satellite's revisit period.
Radiometric resolution –
This resolution describes how finely the imaging system can record different levels of
brightness and is generally expressed as a bit depth; for example, an 8-bit sensor records
2^8 = 256 distinct brightness levels.
The datasets produced by modern remote sensing and positioning techniques are becoming
larger and more varied. To extract knowledge from such datasets, remote sensing scientists
need to equip themselves with better and more efficient computing and storage. Cloud
computing is a good fit because it offers all the requisite computing resources (compute power
and storage) and is possibly the most cost-effective way to access computers as a service over
the internet. To see which current cloud platform is suitable for the complex analysis of remote
sensing (RS) data, comparative studies have been presented between two popular cloud
platforms, Amazon and Microsoft, and the newer rival CloudSigma.
Business and Consumer Applications
Cloud computing innovations are likely to help the commercial and consumer sectors the most.
On the one hand, the ability to convert capital expenses into operating costs makes clouds an
appealing alternative for any IT-centric business. On the other hand, the cloud's sense of
ubiquity means that data and applications can be accessed from anywhere, which appeals to
end users.
Customer relationship management (CRM) and enterprise resource planning (ERP) systems
are two industry categories that are thriving in the cloud, with CRM being the more mature of
the two. Cloud CRM programs give small businesses and start-ups a terrific way to get fully
working CRM software without big upfront expenditures, by paying monthly fees. Furthermore,
CRM is not a task with highly specialized requirements, so it can be readily migrated to the
cloud. Cloud CRM software has grown in popularity as a result of these features, as well as the
ability to view your business and customer data from anywhere and on any device. Cloud-based
ERP systems are less mature, and they must compete with well-established in-house solutions.
Finance and accounting, human resources, manufacturing, supply chain management, project
management, and CRM are all integrated into ERP systems. Their purpose is to give a unified
perspective on, and access to, all the activities required to keep a complicated company running.