UNIT 1 CC - Merged
UNIT-1
Cloud computing
"The cloud" refers to servers that are accessed over the Internet, and the software and databases that run on those servers. Cloud servers are located in data
centers all over the world. By using cloud computing, users and companies do not have to manage physical servers themselves or run software applications
on their own machines.
The cloud enables users to access the same files and applications from almost any device, because the computing and storage takes place on servers in a data
center, instead of locally on the user device. This is why a user can log in to their Instagram account on a new phone after their old phone breaks and still
find their old account in place, with all their photos, videos, and conversation history. It works the same way with cloud email providers like Gmail or
Microsoft Office 365, and with cloud storage providers like Dropbox or Google Drive.
For businesses, switching to cloud computing removes some IT costs and overhead: for instance, they no longer need to update and maintain their own
servers, as the cloud vendor they are using will do that. This especially makes an impact for small businesses that may not have been able to afford their own
internal infrastructure but can outsource their infrastructure needs affordably via the cloud. The cloud can also make it easier for companies to operate
internationally, because employees and customers can access the same files and applications from any location.
Virtual machines also make more efficient use of the hardware hosting them. By running many virtual machines at once, one server can run many virtual
"servers," and a data center becomes like a whole host of data centers, able to serve many organizations. Thus, cloud providers can offer the use of their
servers to far more customers at once than they would be able to otherwise, and they can do so at a low cost.
Even if individual servers go down, cloud servers in general should be always online and always available. Cloud vendors generally back up their services
on multiple machines and across multiple regions.
Users access cloud services either through a browser or through an app, connecting to the cloud over the Internet — that is, through many interconnected
networks — regardless of what device they are using.
Distributed Systems
A distributed system is a composition of multiple independent systems, all of which are presented to users as a single entity. The purpose
of distributed systems is to share resources and to use them effectively and efficiently. Distributed systems possess characteristics such
as scalability, concurrency, continuous availability, heterogeneity, and independence of failures. But the main problem with this system
was that all the systems were required to be present at the same geographical location. To solve this problem, distributed computing
led to three further types of computing: mainframe computing, cluster computing, and grid computing.
Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are responsible for
handling large data, such as massive input-output operations. Even today they are used for bulk processing tasks such as online
transactions. These systems have almost no downtime and high fault tolerance. After distributed computing, mainframes increased the
processing capabilities of systems, but they were very expensive. To reduce this cost, cluster computing came as an alternative to
mainframe technology.
Cluster Computing
In the 1980s, cluster computing came as an alternative to mainframe computing. Each machine in the cluster was connected to the others by
a high-bandwidth network. Clusters were far cheaper than mainframe systems while being equally capable of high computation.
Also, new nodes could easily be added to the cluster if required. Thus, the problem of cost was solved to some extent, but the
problem of geographical restrictions persisted. To solve this, the concept of grid computing was introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced: different systems, placed at entirely different geographical
locations, were all connected via the Internet. These systems belonged to different organizations, and thus the grid consisted of
heterogeneous nodes. Although it solved some problems, new problems emerged as the distance between the nodes increased. The main
problem encountered was the low availability of high-bandwidth connectivity, along with other network-associated issues. Thus,
cloud computing is often referred to as the "successor of grid computing".
Virtualization
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware, which allows the
user to run multiple instances simultaneously on that hardware. It is a key technology used in cloud computing and the base on which
major cloud computing services such as Amazon EC2, VMware vCloud, etc., are built. Hardware virtualization is still one of the most
common types of virtualization.
Web 2.0
Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have
interactive and dynamic web pages. It also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps,
Facebook, Twitter, etc. Needless to say, social media is possible only because of this technology. It gained major popularity in 2004.
Service Orientation
Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two
important concepts were introduced in this computing model: Quality of Service (QoS), which also includes the SLA (Service
Level Agreement), and Software as a Service (SaaS).
Utility Computing
Utility Computing is a computing model that defines service provisioning techniques for services such as compute, along with
other major services such as storage and infrastructure, which are provisioned on a pay-per-use basis.
Cloud Computing
Cloud Computing means storing and accessing data and programs on remote servers hosted on the internet instead of on the
computer's hard drive or a local server. Cloud computing is also referred to as Internet-based computing; it is a technology where resources
are provided as a service through the Internet to the user. The stored data can be files, images, documents, or any other storable
content.
Advantages of Cloud Computing
Cost Saving
Data Redundancy and Replication
Ransomware/Malware Protection
Flexibility
Reliability
High Accessibility
Scalable
Disadvantages of Cloud Computing
Internet Dependency
Issues in Security and Privacy
Data Breaches
Limitations on Control
Difference between Parallel Computing and Distributed Computing
Parallel Computing and Distributed Computing are two important models of computing that play important roles in today's high-
performance computing. Both are designed to perform a large number of calculations by breaking processes down into several parallel
tasks; however, they differ in structure, function, and utilization. The following is a dissection of Parallel
Computing and Distributed Computing, their advantages, drawbacks, and applications.
Hardware Costs: The implementation of parallel computing may involve components such as multi-core
processors, which can be costlier than normal systems.
What is Distributed Computing?
In distributed computing, we have multiple autonomous computers which appear to the user as a single system. In distributed systems there
is no shared memory, and computers communicate with each other through message passing. In distributed computing, a single task is
divided among different computers.
In parallel computing, processors communicate with each other through a bus, whereas in distributed computing, computers communicate with each other through message passing.
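To make this contrast concrete, here is a minimal Java sketch (not part of the original notes): the "parallel" half has threads on one machine updating shared memory directly, while the "distributed" half has no shared state and uses a queue as a stand-in for a network channel between nodes.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class ParallelVsDistributed {
    public static void main(String[] args) throws Exception {
        // Parallel style: threads on one machine share memory directly.
        AtomicLong shared = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> shared.addAndGet(25)); // all threads update one shared counter
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("Shared-memory sum: " + shared.get());

        // Distributed style: no shared memory; a queue stands in for the network,
        // and "nodes" cooperate purely by passing messages.
        BlockingQueue<Long> channel = new LinkedBlockingQueue<>();
        Thread workerNode = new Thread(() -> {
            try {
                channel.put(25L + 25L); // worker node sends its partial result
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        workerNode.start();
        long received = channel.take(); // coordinator node receives the message
        workerNode.join();
        System.out.println("Message-passing partial sum: " + received);
    }
}
```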
Cloud Elasticity
Cloud Elasticity is the property of a cloud to grow or shrink capacity for CPU, memory, and
storage resources to adapt to the changing demands of an organization. Cloud Elasticity can be
automatic, without the need to perform capacity planning in advance of the occasion, or it can be a
manual process where the organization is notified they are running low on resources and can then
decide to add or reduce capacity when needed. Monitoring tools offered by the cloud provider
dynamically adjust the resources allocated to an organization without impacting existing cloud-
based operations.
A cloud provider is said to have more or less elasticity depending on the degree to which it is able
to adapt to workload changes by provisioning or de-provisioning resources autonomously to match
demand as closely as possible. This eliminates the need for IT administration staff to monitor
resources to determine if additional CPU, memory, or storage resources are needed, or whether
excess capacity can be decommissioned.
Cloud Elasticity is often associated with horizontal scaling (scale-out) architecture, and it is generally tied to public
cloud provider resources that are billed on a pay-as-you-go basis. This
approach brings real-time cloud expenditures more closely in alignment with the actual
consumption of cloud services, for example when virtual machines (VMs) are spun up or down as
demand for a particular application or service varies over time.
Cloud Elasticity provides businesses and IT organizations the ability to meet any unexpected jump
in demand, without the need to maintain standby equipment to handle that demand. An
organization that normally runs certain processes on-premises can ‘cloudburst’ to take advantage
of Cloud Elasticity and meet that demand, returning to on-premises operations only when the
demand has passed. Thus, the result of cloud elasticity is savings in infrastructure costs, in human
capital, and in overall IT costs.
Cost Savings: Cloud Elasticity automatically allocates or deallocates resources on the basis of real-time demand. Amazon has
stated that organizations that adopt its instance scheduler with their EC2 cloud service can achieve
savings of over 60 percent versus organizations that do not.
High Availability: Cloud elasticity facilitates both high availability and fault tolerance, since VMs
or containers can be replicated if they appear to be failing, helping to ensure that business services
are uninterrupted and that users do not experience downtime. This helps ensure that users perceive
a consistent and predictable experience, even as resources are provisioned or deprovisioned
automatically and without impact on operations.
Efficiency: As with most automations, the ability to autonomously adjust cloud resources as
needed enables IT staff to shift their focus away from provisioning and onto projects that are more
beneficial to the organization.
Speed/Time-to-market: Organizations have access to capacity in minutes instead of the weeks or
months it may take through a traditional procurement process.
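As a concrete illustration of elasticity, here is a minimal Java sketch (not tied to any real provider SDK; the thresholds and instance names are assumptions made purely for illustration): a reconciliation loop compares observed utilization against thresholds and provisions or de-provisions virtual "instances" accordingly.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical elasticity controller: scales a pool of instances
// out when utilization is high and in when it is low.
public class ElasticityController {
    private final Deque<String> instances = new ArrayDeque<>();
    private int counter = 0;

    void reconcile(double cpuUtilization) {
        if (cpuUtilization > 0.80) {
            String id = "vm-" + (++counter);
            instances.push(id);               // provision: scale out
            System.out.println("High load (" + cpuUtilization + "): started " + id);
        } else if (cpuUtilization < 0.20 && instances.size() > 1) {
            String id = instances.pop();      // de-provision: scale in
            System.out.println("Low load (" + cpuUtilization + "): stopped " + id);
        }
    }

    public static void main(String[] args) {
        ElasticityController c = new ElasticityController();
        c.reconcile(0.9); // seed one instance
        double[] observedLoad = {0.5, 0.9, 0.95, 0.4, 0.1, 0.05}; // sample metrics feed
        for (double load : observedLoad) c.reconcile(load);
        System.out.println("Instances still running: " + c.instances.size());
    }
}
```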
Cloud Provisioning
Security measures such as firewalls, threat detection, and encryption are also integral to cloud provisioning.
There are three types of provisioning in cloud computing, with varying degrees of flexibility, control, and pricing structure:
Advanced Provisioning
Advanced provisioning is ideal for businesses that need stable, reliable, and high-performance cloud resources. This method involves:
Detailed Contracts: Agreements clearly define the responsibilities of both the provider and the client, including the specific resources allocated
and service level agreements (SLAs).
Fixed Pricing Structures: Clients typically pay a fixed monthly or annual fee, making budgeting more predictable.
Resource Guarantees: Providers allocate specific amounts of storage, CPU, RAM, and GPU (for graphic-intensive tasks) as agreed upon in the
contract.
Businesses with consistent workloads and resource requirements benefit most from this model. Examples include financial institutions, healthcare
organizations, and large enterprises with steady operational demands.
Dynamic Provisioning
Dynamic provisioning, or on-demand provisioning, is the most flexible and scalable cloud computing model. Key features include:
Automatic Resource Allocation: Resources such as processing power, storage, and network bandwidth are allocated dynamically based on
current needs, reducing manual intervention.
Cloud Automation: Automation tools streamline the provisioning process, ensuring resources are available instantly when needed. This includes
autoscaling, which adjusts resource allocation in real time based on usage patterns.
Pay-Per-Use Pricing: Clients are billed based on the resources they consume, making it cost-effective for businesses with variable workloads.
Startups, seasonal businesses, and organizations with fluctuating resource needs benefit from dynamic provisioning. It supports rapid scaling up or down,
ensuring cost efficiency and flexibility.
User Self-Provisioning
User self-provisioning, also known as cloud self-service, empowers customers to manage their cloud resources directly through a provider’s platform.
Features include:
Direct Access: Users can log into a web portal, select the resources they need (such as virtual machines, storage, and software), and deploy them
immediately.
Autonomy and Agility: This model allows businesses to quickly adapt to changing needs without waiting for the provider’s intervention,
enhancing operational agility.
Simple Subscription Process: Setting up an account and subscribing to services is straightforward, making it accessible for businesses of all
sizes.
Small to medium-sized businesses, individual developers, and teams that need fast, self-service access to cloud resources benefit most from this model. Such solutions allow users to easily
manage their subscriptions and resources, offering a high degree of control and flexibility.
Resource Allocation
Organizations may require multiple provisioning tools to effectively manage, customize, and utilize their cloud resources.
With the deployment of workloads on multiple cloud platforms, a centralized console is set up to monitor and manage all resources,
resulting in a more optimized and efficient allocation of the required resources.
The industry’s best practices for optimized resource allocation include the following:
Load Balancing: Distribute incoming network traffic across multiple servers to ensure no single server is overwhelmed. This enhances the
performance and reliability of applications (a sketch follows this list).
Autoscaling: Configure autoscaling to automatically adjust the number of active servers based on the load. This ensures that resources are used
efficiently and cost-effectively, scaling up during high demand and scaling down when demand decreases.
Capacity Planning: Project future resource needs based on current usage patterns and trends. This helps in planning and allocating resources to
meet future demands, ensuring scalability and avoiding resource shortages.
By implementing these practices, organizations can optimize operational efficiency by eliminating manual workloads. It also
ensures that mission-critical, CPU-intensive apps keep performing without experiencing downtime.
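The sketch below shows the round-robin strategy that many load balancers default to, written as plain Java for illustration; the server names are hypothetical and no real load-balancer product is implied.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin load balancing: requests are spread evenly across servers
// so that no single server is overwhelmed.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    String pick() {
        // floorMod keeps the index valid even after integer overflow
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb =
            new RoundRobinBalancer(List.of("app-1", "app-2", "app-3")); // hypothetical pool
        for (int request = 1; request <= 6; request++) {
            System.out.println("request " + request + " -> " + lb.pick());
        }
    }
}
```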
Best practices for managing engine images, the machine images used to provision servers, include the following:
Regular Updates: Continuously update engine images to include the latest software versions, patches, and security updates. This helps in
maintaining performance and security standards.
Security Measures: Ensure that all engine images are secure and free from vulnerabilities. Regularly scan and test images for security threats.
Availability: Keep engine images readily available to speed up the deployment process. This includes having a repository of commonly used
images that can be quickly accessed and deployed.
Customization: Maintain a variety of engine images tailored to different application needs, reducing the time required for configuration and
deployment.
Network Configuration
In cloud computing, network configuration is the process of setting up and managing virtual networks, security groups, subnets, and other network resources
to ensure a secure and efficient flow of traffic between resources.
Proper network configuration is vital for secure and efficient data flow within the cloud environment:
Virtual Networks: Set up virtual networks to isolate and manage cloud resources effectively. This provides better control over data traffic and
enhances security.
Security Groups: Implement security groups to define and enforce network access rules. This helps in protecting cloud resources from
unauthorized access.
Subnets: Use subnets to segment network traffic and improve performance. This also allows for more granular control over network traffic
management.
Firewall Configuration: Configure firewalls to monitor and control incoming and outgoing network traffic based on predetermined security
rules. This adds an extra layer of security to the cloud environment.
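To illustrate how a security group evaluates traffic, here is a small hypothetical Java sketch; real cloud security groups are configured through provider consoles or APIs, so the rule format and addresses below are assumptions made purely for illustration.

```java
import java.util.List;

// Hypothetical security-group check: traffic is allowed only if some rule
// matches its protocol, port, and source network prefix.
public class SecurityGroup {
    record Rule(String protocol, int port, String sourcePrefix) {}

    private final List<Rule> inboundRules;

    SecurityGroup(List<Rule> inboundRules) {
        this.inboundRules = inboundRules;
    }

    boolean allows(String protocol, int port, String sourceIp) {
        return inboundRules.stream().anyMatch(r ->
            r.protocol().equals(protocol)
                && r.port() == port
                && sourceIp.startsWith(r.sourcePrefix())); // simplified CIDR match
    }

    public static void main(String[] args) {
        SecurityGroup web = new SecurityGroup(List.of(
            new Rule("tcp", 443, ""),          // HTTPS open to everyone
            new Rule("tcp", 22, "10.0.")));    // SSH only from the internal subnet
        System.out.println(web.allows("tcp", 443, "203.0.113.7")); // true
        System.out.println(web.allows("tcp", 22, "203.0.113.7"));  // false
        System.out.println(web.allows("tcp", 22, "10.0.0.5"));     // true
    }
}
```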
Storage Configuration
Storage configuration is another crucial aspect of cloud provisioning that involves deploying, managing, and optimizing cloud storage resources.
Define Storage Requirements: Clearly define storage requirements for different applications and services. This helps in allocating the right
amount and type of storage.
Storage Classes: Utilize different storage classes based on performance and cost requirements. For instance, use high-performance storage for
critical applications and more cost-effective storage for less demanding tasks.
Resource Allocation: Allocate storage resources based on current and anticipated needs to prevent over-provisioning and under-provisioning.
Data Management: Implement data management practices, such as data lifecycle policies, to manage data storage efficiently. This includes
archiving old data and deleting unnecessary files to free up space.
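As an illustration of a data lifecycle policy, the sketch below (hypothetical paths, plain Java NIO) moves files untouched for a retention window into an archive directory; real cloud providers implement this as managed lifecycle rules on storage buckets rather than code you run yourself.

```java
import java.io.IOException;
import java.nio.file.*;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.stream.Stream;

// Hypothetical lifecycle policy: archive files untouched for 30+ days.
public class LifecyclePolicy {
    public static void main(String[] args) throws IOException {
        Path hot = Paths.get("data/hot");         // hypothetical active-storage dir
        Path archive = Paths.get("data/archive"); // hypothetical archive-tier dir
        Files.createDirectories(archive);
        Instant cutoff = Instant.now().minus(30, ChronoUnit.DAYS);

        try (Stream<Path> files = Files.list(hot)) {
            for (Path file : (Iterable<Path>) files::iterator) {
                if (Files.isRegularFile(file)
                        && Files.getLastModifiedTime(file).toInstant().isBefore(cutoff)) {
                    // Old data leaves the expensive tier for the cheap one
                    Files.move(file, archive.resolve(file.getFileName()));
                    System.out.println("Archived " + file.getFileName());
                }
            }
        }
    }
}
```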
Regular monitoring and maintenance are crucial for ensuring the health and performance of cloud infrastructure:
Continuous Monitoring: Set up continuous monitoring systems to track the performance and health of cloud resources. This includes monitoring
CPU usage, memory usage, disk I/O, and network performance.
Performance Optimization: Regularly analyze performance data to identify and resolve bottlenecks. This helps in maintaining optimal
performance and preventing downtime.
Routine Maintenance: Schedule regular maintenance activities, such as software updates, hardware checks, and system backups. This ensures
that the infrastructure remains up-to-date and reliable.
Alerts and Notifications: Configure alerts and notifications to promptly inform IT teams of any issues or irregularities. This enables quick
response and resolution to minimize impact on services.
Automation and Orchestration
Automation and orchestration streamline routine cloud operations. It involves:
Cloud Automation: This involves using software tools to automate repetitive tasks, such as provisioning and de-provisioning resources, applying
patches, and managing backups. Automation reduces human error and speeds up processes, making your cloud operations more efficient.
Orchestration Tools: These tools coordinate and manage automated tasks across complex workflows and multi-cloud environments. Examples
include Kubernetes for container orchestration and Terraform for infrastructure as code (IaC).
AIOps: Artificial Intelligence for IT Operations (AIOps) leverages machine learning to enhance automation and orchestration by predicting
issues and optimizing resource allocation. This ensures smooth and efficient cloud operations.
Scalability
Scalability ensures continuous availability and performance during traffic spikes, supports business growth without significant downtime, and optimizes
resource usage and costs. It involves:
Vertical Scaling (Scaling Up/Down): Add or remove resources (like CPU, RAM, storage) to an existing server to handle increased or decreased
workloads. This method is straightforward but has a limit based on the physical server’s capacity.
Horizontal Scaling (Scaling Out/In): Add more servers to distribute the load across multiple machines. This method offers more flexibility and
virtually unlimited growth potential, but it requires sophisticated load balancing and application design.
Scalability Testing: Regular testing helps ensure that your infrastructure can handle growth. It involves stress testing to measure how the system
performs under heavy loads, network request handling, CPU load analysis, and memory usage monitoring.
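The difference between the two scaling directions can be sketched in a few lines of Java; the capacity numbers are hypothetical and serve only to show that vertical scaling hits a per-machine ceiling while horizontal scaling simply adds machines.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical capacity model contrasting vertical and horizontal scaling.
public class ScalingDemo {
    static final int MAX_CPUS_PER_SERVER = 64; // physical ceiling for scale-up

    public static void main(String[] args) {
        // Vertical scaling: grow one server until the hardware limit.
        int cpus = 8;
        cpus = Math.min(cpus * 2, MAX_CPUS_PER_SERVER); // 16
        cpus = Math.min(cpus * 8, MAX_CPUS_PER_SERVER); // capped at 64
        System.out.println("Scale-up stops at " + cpus + " CPUs per server");

        // Horizontal scaling: add more servers; growth is bounded only by budget,
        // but it needs load balancing across the fleet.
        List<String> fleet = new ArrayList<>(List.of("web-1"));
        for (int i = 2; i <= 5; i++) fleet.add("web-" + i); // scale out
        fleet.remove(fleet.size() - 1);                     // scale in when load drops
        System.out.println("Fleet size after scaling: " + fleet.size());
    }
}
```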
Security
Security is a central area of cloud provisioning, involving several protection and scrutiny measures. Industry-standard compliance ensures that cloud
infrastructure is up to date and secured to standard benchmarks.
It involves:
Multi-Factor Authentication (MFA): Adds an extra layer of security by requiring two or more verification steps to access resources. This
reduces the risk of unauthorized access.
Data Encryption: Encrypt data both at rest and in transit to prevent unauthorized access. Use strong encryption protocols like AES-256 for data
storage and TLS for data transmission (a sketch follows this list).
Threat Intelligence: Implement tools that continuously monitor for threats and vulnerabilities. These tools can detect unusual activities and alert
security teams to potential breaches.
Disaster Recovery Plan: Develop and regularly update a disaster recovery plan to ensure business continuity in case of a major incident. This
includes regular backups, redundant systems, and clear procedures for restoring services.
Compliance: Ensure your cloud infrastructure complies with industry standards and regulations such as SOC 2, ISO 27001, and PCI DSS.
Compliance ensures that you meet the required security and privacy standards.
Security Best Practices: Regularly update and patch systems, conduct security audits and penetration testing, and employ robust access controls.
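As a concrete sketch of encryption at rest, the Java snippet below uses the standard javax.crypto API to encrypt a record with AES-256 in GCM mode; key management (normally handled by a cloud KMS) is omitted, and the plaintext is a made-up example.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Minimal AES-256-GCM encryption sketch using the standard Java crypto API.
public class EncryptAtRest {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);                       // AES-256 key
        SecretKey key = keyGen.generateKey();   // in production this comes from a KMS

        byte[] iv = new byte[12];               // GCM standard 96-bit nonce
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext =
            cipher.doFinal("example customer record".getBytes(StandardCharsets.UTF_8));

        System.out.println("Ciphertext (Base64): "
            + Base64.getEncoder().encodeToString(ciphertext));
    }
}
```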
Cloud Services
The resources available in the cloud are known as "services," since they are actively managed by a cloud provider. Cloud services include infrastructure,
applications, development tools, and data storage, among other products. These services are sorted into several different categories, or service models.
Software-as-a-Service (SaaS): Instead of users installing an application on their device, SaaS applications are hosted on cloud servers, and users access
them over the Internet. SaaS is like renting a house: the landlord maintains the house, but the tenant mostly gets to use it as if they owned it. Examples of
SaaS applications include Salesforce, MailChimp, and Slack.
Platform-as-a-Service (PaaS): In this model, companies don't pay for hosted applications; instead they pay for the things they need to build their own
applications. PaaS vendors offer everything necessary for building an application, including development tools, infrastructure, and operating systems, over
the Internet. PaaS can be compared to renting all the tools and equipment necessary for building a house, instead of renting the house itself. PaaS examples
include Heroku and Microsoft Azure.
Infrastructure-as-a-Service (IaaS): In this model, a company rents the servers and storage they need from a cloud provider. They then use that cloud
infrastructure to build their applications. IaaS is like a company leasing a plot of land on which they can build whatever they want — but they need to provide
their own building equipment and materials. IaaS providers include DigitalOcean, Google Compute Engine, and OpenStack.
Formerly, SaaS, PaaS, and IaaS were the three main models of cloud computing, and essentially all cloud services fit into one of these categories.
Cloud Infrastructure
Cloud infrastructure refers to the resources needed for hosting and building applications in the cloud. IaaS and PaaS services are often included in an
organization's cloud infrastructure, although SaaS can be said to be part of cloud infrastructure as well, and FaaS offers the ability to construct infrastructure
as code.
Private cloud: A private cloud is a server, data center, or distributed network wholly dedicated to one organization.
Public cloud: A public cloud is a service run by an external vendor that may include servers in one or multiple data centers. Unlike a private
cloud, public clouds are shared by multiple organizations. Using virtual machines, individual servers may be shared by different companies,
a situation that is called "multitenancy" because multiple tenants are renting server space within the same server.
Hybrid cloud: Hybrid cloud deployments combine public and private clouds, and may even include on-premises legacy servers. An organization
may use their private cloud for some services and their public cloud for others, or they may use the public cloud as backup for their private
cloud.
Multi-cloud: Multi-cloud is a type of cloud deployment that involves using multiple public clouds. In other words, an organization with a multi-
cloud deployment rents virtual servers and services from several external vendors — to continue the analogy used above, this is like leasing
several adjacent plots of land from different landlords. Multi-cloud deployments can also be hybrid cloud, and vice versa.
1. Frontend
The frontend of the cloud architecture refers to the client side of the cloud computing system. It
contains all the user interfaces and applications which the client uses to access the cloud
computing services/resources, for example, a web browser used to access the cloud platform.
2. Backend
The backend refers to the cloud itself, which is used by the service provider. It contains the resources,
manages them, and provides security mechanisms. Along with this, it includes
huge storage, virtual applications, virtual machines, traffic control mechanisms, deployment
models, etc.
Components of Cloud Computing Architecture
Following are the components of Cloud Computing Architecture
1. Client Infrastructure – Client infrastructure is a part of the frontend component. It contains
the applications and user interfaces which are required to access the cloud platform. In other
words, it provides a GUI (Graphical User Interface) to interact with the cloud.
2. Application: The application is the part of the backend component that refers to the software or
platform which the client accesses; it provides the service in the backend as per the client's requirements.
3. Service: Service in the backend refers to the three major types of cloud-based services: SaaS,
PaaS, and IaaS. It also manages which type of service the user accesses.
4. Runtime Cloud: The runtime cloud in the backend provides the execution and runtime
platform/environment for the virtual machines.
5. Storage: Storage in the backend provides flexible and scalable storage services and management
of stored data.
6. Infrastructure: Cloud infrastructure in the backend refers to the hardware and software
components of the cloud, including servers, storage, network devices, virtualization software,
etc.
7. Management: Management in the backend refers to the management of backend components like the
application, service, runtime cloud, storage, infrastructure, and other security mechanisms.
8. Security: Security in the backend refers to the implementation of different security mechanisms in
the backend to secure cloud resources, systems, files, and infrastructure for end users.
9. Internet: The Internet connection acts as the medium, or bridge, between the frontend and backend,
and establishes the interaction and communication between them.
10. Database: The database in the backend provides storage for structured data, using SQL and
NoSQL databases. Examples of database services include Amazon RDS, Microsoft
Azure SQL Database, and Google Cloud SQL.
11. Networking: Networking in the backend refers to services that provide networking infrastructure for
applications in the cloud, such as load balancing, DNS, and virtual private networks.
12. Analytics: Analytics in the backend refers to services that provide analytics capabilities for data in the
cloud, such as warehousing, business intelligence, and machine learning.
Benefits of Cloud Computing Architecture
Makes the overall cloud computing system simpler.
Improves data processing.
Helps in providing high security.
Makes the system more modular.
Results in better disaster recovery.
Gives good user accessibility.
Reduces IT operating costs.
Provides a high level of reliability.
Provides scalability.
NIST Cloud Computing Reference Architecture and Taxonomy
The NIST Cloud Computing Reference Architecture and Taxonomy was designed to accurately
communicate the components and offerings of cloud computing. The guiding principles used to
create the reference architecture were:
1. Develop a vendor-neutral architecture that is consistent with the NIST definition
2. Develop a solution that does not stifle innovation by defining a prescribed technical
solution
Figure: NIST Cloud Computing Reference Architecture and Taxonomy
Actors in Cloud Computing
The NIST cloud computing reference architecture defines five major actors. Each actor is an entity
(a person or an organization) that participates in a transaction or process and/or performs tasks in
cloud computing. The five actors are:
Cloud user/cloud customer: A person or organization that accesses paid-for or free cloud services and
resources within a cloud. These users are generally granted system administrator privileges
to the instances they start (and only those instances, as opposed to the host itself or other
components).
Cloud provider: A company that provides a cloud-based platform, infrastructure,
application, or storage services to other organizations and/or individuals, usually for a fee
(otherwise known to clients as “as a service”).
Cloud auditor: A party that can conduct independent assessments of cloud services,
information system operations, performance, and security of the cloud implementation.
Cloud carrier: An intermediary that provides connectivity and transport of cloud services
between cloud consumers and cloud providers.
Cloud services broker (CSB): The CSB is typically a third-party entity or company that
looks to extend value to multiple customers of cloud-based services through relationships
with multiple cloud service providers. It acts as a liaison between cloud services customers
and cloud service providers, selecting the best provider for each customer and monitoring
the services. A CSB provides:
Service intermediation: A CSB enhances a given service by improving some
specific capability and providing value-added services to cloud consumers. The
improvement can be managing access to cloud services, identity management,
performance reporting, enhanced security, etc.
Service aggregation: A CSB combines and integrates multiple services into one
or more new services. The broker provides data integration and ensures the secure
data movement between the cloud consumer and multiple cloud providers.
Service arbitrage: Service arbitrage is similar to service aggregation except that
the services being aggregated are not fixed. Service arbitrage means a broker has
the flexibility to choose services from multiple agencies. The cloud broker, for
example, can use a credit-scoring service to measure and select an agency with the
best score.
Most cloud hubs have tens of thousands of servers and storage devices to enable fast loading. It is often
possible to choose a geographic area to put the data "closer" to users. Thus, deployment models for cloud
computing are categorized based on their location. To know which model would best fit the requirements
of your organization, let us first learn about the various types.
Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the cloud are perfect for
organizations with growing and fluctuating demands. It also makes a great choice for companies with low-
security concerns. Thus, you pay a cloud service provider for networking services, compute virtualization
& storage available on the public internet. It is also a great delivery model for teams with development
and testing needs. Its configuration and deployment are quick and easy, making it an ideal choice for test
environments.
o Minimal Investment - As a pay-per-use service, there is no large upfront cost, and it is ideal for
businesses that need quick access to resources.
o No Hardware Setup - The cloud service providers fully fund the entire infrastructure.
o No Infrastructure Management - Utilizing the public cloud does not require an in-house team.
o Data Security and Privacy Concerns - Since it is accessible to all, it does not fully protect against
cyber-attacks and could lead to vulnerabilities.
o Reliability Issues - Since the same server network is open to a wide range of users, it can be prone to
malfunctions and outages.
o Service/License Limitation - While there are many resources you can exchange with tenants, there
is a usage cap.
Private Cloud
Now that you understand what the public cloud could offer you, of course, you are keen to know what a
private cloud can do. Companies that look for cost efficiency and greater control over data & resources will
find the private cloud a more suitable choice.
It means that it will be integrated with your data center and managed by your IT team. Alternatively, you
can also choose to host it externally. The private cloud offers bigger opportunities that help meet specific
organizations' requirements when it comes to customization. It's also a wise choice for mission-critical
processes that may have frequently changing requirements.
o Data Privacy - It is ideal for storing corporate data where only authorized personnel get access.
o Security - Segmentation of resources within the same Infrastructure can help with better access and
higher levels of security.
o Supports Legacy Systems - This model supports legacy systems that cannot access the public cloud.
o Higher Cost - With the benefits you get, the investment will also be larger than the public cloud.
Here, you will pay for software, hardware, and resources for staff and training.
o Fixed Scalability - Scaling is constrained by the hardware you choose.
o High Maintenance - Since it is managed in-house, the maintenance costs also increase.
Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just one difference - it
allows access to only a specific set of users who share common objectives and use cases. This type of
deployment model of cloud computing is managed and hosted internally or by a third-party vendor.
However, you can also choose a combination of all three.
o Smaller Investment - A community cloud is much cheaper than the private & public cloud and
provides great performance
o Setup Benefits - The protocols and configuration of a community cloud must align with industry
standards, allowing customers to work much more efficiently.
o Shared Resources - Due to restricted bandwidth and storage capacity, community resources often
pose challenges.
o Not as Popular - Since this is a recently introduced model, it is not that popular or available across
industries
Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud architectures. While each
model in the hybrid cloud functions differently, it is all part of the same architecture. Further, as part of this
deployment of the cloud computing model, the internal or external providers can offer resources.
Let's understand the hybrid model better. A company with critical data will prefer storing it on a private cloud,
while less sensitive data can be stored on a public cloud. The hybrid cloud is also frequently used for 'cloud
bursting': suppose an organization runs an application on-premises; due to heavy load, it can
burst into the public cloud.
o Cost-Effectiveness - The overall cost of a hybrid solution decreases since it majorly uses the public
cloud to store data.
o Security - Since data is properly segmented, the chances of data theft from attackers are
significantly reduced.
o Flexibility - With higher levels of flexibility, businesses can create custom solutions that fit their
exact requirements
o Complexity - Setting up a hybrid cloud is complex, since it needs to integrate two or more cloud
architectures.
o Specific Use Case - This model makes more sense for organizations that have multiple use cases
or need to separate critical and sensitive data
While numerous benefits are realized with hybrid cloud deployments and cloud models, these can
often be time consuming and laborious at the start, as most companies and entities encounter
integration and migration issues at the outset.
Issues in Cloud Computing
Cloud Computing is a new name for an old concept: the delivery of computing services
from a remote location. Cloud Computing is Internet-based computing, where shared
resources, software, and information are provided to computers and other devices on
demand.
These are major issues in Cloud Computing:
1. Privacy: The user data can be accessed by the host company with or without
permission. The service provider may access the data that is on the cloud at any point
in time. They could accidentally or deliberately alter or even delete information.
2. Compliance: There are many regulations in places related to data and hosting. To
comply with regulations (Federal Information Security Management Act, Health
Insurance Portability and Accountability Act, etc.) the user may have to adopt
deployment modes that are expensive.
3. Security: Cloud-based services involve third parties for storage and security. Can one
assume that a cloud-based company will protect and secure one's data if one is using
their services at a very low cost or for free? They may share users' information with others.
Security presents a real threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the
environment. Citing the environmental impact of data center servers, countries with
favorable conditions, such as Finland, Sweden, and Switzerland, where the climate
favors natural cooling and renewable electricity is readily available, are trying to
attract cloud computing data centers. But beyond nature's favors, would these countries
have enough technical infrastructure to sustain high-end clouds?
5. Abuse: While providing cloud services, it should be ascertained that the client is not
purchasing the services of cloud computing for a nefarious purpose. In 2009, a banking
Trojan illegally used the popular Amazon service as a command-and-control channel
that issued software updates and malicious instructions to PCs that were infected by the
malware. So the hosting companies and the servers should have proper measures in place to
address these issues.
6. Higher Cost: If you want to use cloud services uninterruptedly, you need a powerful
network with higher bandwidth than ordinary internet networks; also, if your
organization is broad and large, an ordinary cloud service subscription won't suit it,
and you might face hassles utilizing an ordinary cloud service while working on
complex projects and applications. This is a major problem for small organizations,
as it restricts them from diving into cloud technology for their business.
7. Recovery of lost data in contingency: Before subscribing to any cloud service
provider, go through all norms and documentation and check whether their services
match your requirements and whether they maintain sufficient, well-kept resource
infrastructure. Once you subscribe to the service, you almost hand over your data
into the hands of a third party. If you are able to choose a proper cloud service, then in the
future you won't need to worry about the recovery of lost data in any contingency.
8. Upkeep (management) of the cloud: Maintaining a cloud is a herculean task, because
a cloud architecture contains a large resource infrastructure and carries other challenges and
risks as well, such as user satisfaction. As users usually pay for how much of the resources they have
consumed, it sometimes becomes hard to decide how much should be
charged when a user wants scalability and extended services.
9. Lack of resources/skilled expertise: One of the major issues that companies and
enterprises are going through today is the lack of resources and skilled employees.
Every second organization seems interested in, or has already moved to, cloud
services. That is why the workload in the cloud is increasing, so cloud service hosting
companies need continuous rapid advancement. Due to these factors, organizations are
having a tough time keeping up to date with the tools. As new tools and technologies
emerge every day, more skilled/trained employees are needed. These
challenges can only be minimized through additional training of IT and development
staff.
10. Pay-per-use service charges: Cloud computing services are on-demand services: a
user can extend or compress the volume of resources as per need, and pays for
how much has been consumed. It is difficult to define a fixed cost for a particular
quantity of services. Such ups and downs and price variations make the
implementation of cloud computing very difficult and intricate. It is not easy for a
firm's owner to study consistent demand and fluctuations with the seasons and
various events. So it is hard to build a budget for a service that could
consume several months of the budget in a few days of heavy use.
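To make the pay-per-use idea concrete, here is a tiny Java sketch with made-up rates (the numbers are illustrative assumptions, not any provider's actual pricing): the bill is simply usage multiplied by unit price, which is exactly why a traffic spike can consume months of budget in days.

```java
// Toy pay-per-use bill: usage x unit rate, with hypothetical prices.
public class PayPerUseBill {
    static final double RATE_PER_VM_HOUR = 0.10;   // assumed $/VM-hour
    static final double RATE_PER_GB_MONTH = 0.02;  // assumed $/GB-month of storage

    static double bill(double vmHours, double gbMonths) {
        return vmHours * RATE_PER_VM_HOUR + gbMonths * RATE_PER_GB_MONTH;
    }

    public static void main(String[] args) {
        // A normal month: 2 VMs running around the clock, 500 GB stored.
        System.out.printf("Normal month: $%.2f%n", bill(2 * 24 * 30, 500));
        // A spike month: autoscaling ran 20 VMs for the same period.
        System.out.printf("Spike month:  $%.2f%n", bill(20 * 24 * 30, 500));
    }
}
```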
Eucalyptus
The open-source cloud refers to software or applications publicly available for the users
in the cloud to set up for their own purpose or for their organization.
Eucalyptus is a Linux-based open-source software architecture for cloud computing
and also a storage platform that implements Infrastructure as a Service (IaaS). It provides
quick and efficient computing services. Eucalyptus was designed to provide services
compatible with Amazon's EC2 cloud and Simple Storage Service (S3).
Eucalyptus Architecture
Eucalyptus CLIs can manage both Amazon Web Services and their own private instances.
Clients have the independence to transfer instances from Eucalyptus to Amazon Elastic
Compute Cloud. The virtualization layer oversees the network, storage, and compute resources.
Instances are isolated by hardware virtualization.
Important features are:
1. Images: A good example is the Eucalyptus Machine Image, which is software
bundled and uploaded to the cloud.
2. Instances: When we run an image and utilize it, it becomes an instance.
3. Networking: This can be further subdivided into three modes: Static mode (allocates
IP addresses to instances), System mode (assigns a MAC address and attaches the
instance's network interface to the physical network via the NC), and Managed mode
(creates a local network of instances).
Nimbus
Nimbus is a powerful toolkit focused on converting a computer cluster into an Infrastructure-as-a-Service (IaaS)
cloud for scientific communities. Essentially, it allows the deployment and configuration of virtual machines
(VMs) on remote resources to create an environment suitable for the users’ requirements. Being written
in Python and Java, it is totally free and open-source software, released under the Apache License.
Nimbus Infrastructure is an open source EC2/S3-compatible IaaS solution with features that benefit scientific
community interests, like support for auto-configuring clusters, proxy credentials, batch schedulers, best-effort
allocations, etc.
Nimbus Platform is an integrated set of tools for a multi-cloud environment that automates and simplifies the
work with infrastructure clouds (deployment, scaling, and management of cloud resources) for scientific users.
This toolkit is compatible with Amazon's Network Protocols via EC2 based clients, S3 REST API clients, as
well as the SOAP API and REST API that have been implemented in Nimbus. It also provides support for X509
credentials, fast propagation, multiple protocols, and compartmentalized dependencies. Nimbus features flexible
user, group and workspaces management, request authentication and authorization, and per-client usage tracking.
To open all the power and versatility of IaaS to scientific users, the Nimbus project developers targeted three main
goals and their open-source implementations:
Give resource providers the capability to develop private or community IaaS clouds. The Nimbus
Workspace Service enables lease of computational resources by deploying virtual machines on those resources.
Cumulus is an open source implementation of the S3 REST API that was built for scalable quota-based storage
cloud implementation and multiple storage cloud configuration.
Give users the capability to use IaaS clouds for their applications. Among Nimbus scaling tools (users can automatically
scale across multiple distributed providers) the Nimbus Context Broker is especially robust. It coordinates large
virtual cluster launches automatically and repeatedly using a common configuration and security context across
resources.
Give developers the capability to extend, experiment with, and customize IaaS. For instance, the
Workspace Service can support several virtualization implementations (either Xen or KVM), resource
management options (including schedulers such as Portable Batch System), interfaces (including compatibility
with Amazon EC2), and other options.
OpenNebula
OpenNebula is a simple, feature-rich and flexible solution for the management of virtualised data
centres. It enables private, public and hybrid clouds. Here are a few facts about this solution.
OpenNebula is an open source cloud middleware solution that manages heterogeneous distributed data
centre infrastructures. It is designed to be a simple but feature-rich, production-ready, customisable
solution to build and manage enterprise clouds—simple to install, update and operate by the
administrators; and simple to use by end users. OpenNebula combines existing virtualisation
technologies with advanced features for multi-tenancy, automated provisioning and elasticity. A built-
in virtual network manager maps virtual networks to physical networks. Distributions such as Ubuntu
and Red Hat Enterprise Linux have already integrated OpenNebula. As you’ll learn in this article, you
can set up OpenNebula by installing a few packages and performing some cursory configurations.
OpenNebula supports Xen, KVM and VMware hypervisors.
Master node: A single gateway or front-end machine, sometimes also called the master node, is
responsible for queuing, scheduling and submitting jobs to the machines in the cluster. It runs several
other OpenNebula services mentioned below:
Provides an interface to the user to submit virtual machines and monitor their status.
Manages and monitors all virtual machines running on different nodes in the cluster.
It hosts the virtual machine repository and also runs a transfer service to manage the transfer
of virtual machine images to the concerned worker nodes.
Provides an easy-to-use mechanism to set up virtual networks in the cloud.
Finally, the front-end allows you to add new machines to your cluster.
Worker node: The other machines in the cluster, known as ‘worker nodes’, provide raw computing
power for processing the jobs submitted to the cluster. The worker nodes in an OpenNebula cluster are
machines that deploy a virtualisation hypervisor, such as VMware, Xen or KVM.
CloudSim
CloudSim Architecture:
Datacenter: This class is used for modelling the foundational hardware equipment of any cloud
environment, that is, the datacenter. It provides methods to specify the
functional requirements of the datacenter as well as methods to set the allocation
policies of the VMs, etc.
Host: this class executes actions related to management of virtual machines. It also
defines policies for provisioning memory and bandwidth to the virtual machines,
as well as allocating CPU cores to the virtual machines.
VM: this class represents a virtual machine by providing data members defining a
VM’s bandwidth, RAM, mips (million instructions per second), size while also
providing setter and getter methods for these parameters.
Cloudlet: a cloudlet class represents any task that is run on a VM, like a processing
task, or a memory access task, or a file updating task etc. It stores parameters
defining the characteristics of a task such as its length, size, mi (million
instructions), and provides methods similar to the VM class, while also providing
methods that define a task’s execution time, status, cost and history.
DatacenterBroker: This is an entity acting on behalf of the user/customer. It is
responsible for the functioning of VMs, including VM creation, management,
destruction, and submission of cloudlets to the VMs.
CloudSim: this is the class responsible for initializing and starting the simulation
environment after all the necessary cloud entities have been defined and later
stopping after all the entities have been destroyed.
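The skeleton below shows how these classes typically fit together in a minimal simulation; it is a sketch assuming the classic CloudSim 3.x Java API (org.cloudbus.cloudsim), and the capacities and names chosen are arbitrary example values.

```java
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;

// Minimal CloudSim 3.x simulation: one datacenter, one VM, one cloudlet.
public class MinimalSimulation {
    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false); // 1 cloud user, no event tracing

        // Host: one 1000-MIPS core, 2 GB RAM, backed by simple provisioners.
        List<Pe> pes = new ArrayList<>();
        pes.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hosts = new ArrayList<>();
        hosts.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1_000_000, pes,
                new VmSchedulerTimeShared(pes)));

        DatacenterCharacteristics ch = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hosts, 10.0, 3.0, 0.05, 0.001, 0.0);
        // Constructing the Datacenter registers it with the simulation engine.
        Datacenter dc = new Datacenter("Datacenter_0", ch,
                new VmAllocationPolicySimple(hosts), new LinkedList<Storage>(), 0);

        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        // One VM and one cloudlet (task) submitted through the broker.
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                "Xen", new CloudletSchedulerTimeShared());
        Cloudlet task = new Cloudlet(0, 400_000, 1, 300, 300,
                new UtilizationModelFull(), new UtilizationModelFull(),
                new UtilizationModelFull());
        task.setUserId(broker.getId());

        broker.submitVmList(List.of(vm));
        broker.submitCloudletList(List.of(task));

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        for (Cloudlet done : broker.getCloudletReceivedList()) {
            System.out.println("Cloudlet " + done.getCloudletId()
                    + " finished at " + done.getFinishTime());
        }
    }
}
```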
Features of CloudSim:
UNIT-2
Cloud Services
Where development teams are scattered globally, or across various geographic locations,
the ability to work together on software development projects within the same environment
can be extremely beneficial
Services are available and can be obtained from diverse sources that cross international
boundaries
Upfront and recurring or ongoing costs can be significantly reduced by utilizing a single
vendor, rather than maintaining multiple hardware facilities and environments
Software as a Service (SaaS)
Software as a service (SaaS) is a distributed model where software applications are hosted by a
vendor or cloud service provider and made available to customers over network resources. SaaS
is currently the most widely used and adopted form of cloud computing, with users most often
simply needing an internet connection and credentials to have full use of the cloud service,
application, and data housed.
Within SaaS, there are two delivery models currently used. First is hosted application management
(hosted AM), where a cloud provider hosts commercially available software for customers and
delivers it over the web (internet). Second is software on demand, where a cloud provider provides
customers with network-based access to a single copy of an application created specifically for
SaaS distribution (typically within the same network segment). Within either delivery model, SaaS
can be implemented with a custom application, or the customer may acquire a vendor-specific
application that can be tailored to the customer.
SaaS has several key benefits for organizations, which include, but are not limited to:
Ease of use and limited/minimal administration
Automatic updates and patch management; always running the latest version and most up-
to-date deployment (no manual updates required)
Standardization and compatibility (all users have the same version of software)
Global accessibility
Definition:
Database as a Service (DBaaS) is a cloud-based managed database service that enables users to
access, manage, and operate databases without handling the underlying infrastructure. It allows
organizations to deploy databases quickly while offloading maintenance tasks such as backups,
scaling, security, and updates to the cloud provider.
Features of DBaaS:
1. Managed Infrastructure: The cloud provider manages hardware, software, and network
configurations.
2. Automatic Scaling: The database can scale storage and compute resources automatically based
on demand.
3. High Availability & Disaster Recovery: Built-in replication, backup, and failover mechanisms
ensure data availability.
4. Security & Compliance: Advanced security features, including encryption, access control, and
compliance with standards (e.g., GDPR, HIPAA).
5. Multi-Tenancy: Supports multiple users on a shared infrastructure while maintaining isolation.
6. Pay-as-You-Go Pricing: Customers pay based on usage, reducing upfront investment.
7. Integration with Cloud Services: Seamless connectivity with cloud applications, AI, and
analytics services.
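Since DBaaS exposes an ordinary database endpoint, applications connect to it exactly as they would to a local database; the Java/JDBC sketch below assumes a hypothetical managed MySQL endpoint and credentials (all names are placeholders, and the MySQL JDBC driver must be on the classpath).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Connecting to a hypothetical DBaaS endpoint over standard JDBC.
public class DbaasClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: a managed MySQL instance exposed by the provider.
        String url = "jdbc:mysql://example-dbaas-endpoint:3306/appdb";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT NOW()")) {
            if (rs.next()) {
                // The provider handles backups, scaling, and patching behind this endpoint.
                System.out.println("Server time: " + rs.getString(1));
            }
        }
    }
}
```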
Advantages of DBaaS:
Cost-Efficient: No need for on-premise hardware and DBA (Database Administrator)
management.
Easy Deployment & Management: Simplifies database provisioning, updates, and
maintenance.
Improved Performance: Optimized configurations for high-speed queries and transactions.
Security & Compliance: Providers handle security patches, encryption, and regulatory
compliance.
Disaster Recovery & Backup: Automated backups and failover mechanisms ensure business
continuity.
Difference between Cloud Storage and DBaaS:
Definition: Cloud storage is a cloud service that provides scalable storage solutions for storing and retrieving files, objects, or block data; DBaaS is a cloud service that provides a managed database system for structured or unstructured data storage, retrieval, and management.
Data Access: Cloud storage is accessed through file systems, APIs, or object storage protocols; DBaaS is accessed through query languages (SQL for relational databases, APIs for NoSQL).
Scaling: Cloud storage scales storage capacity without performance concerns; DBaaS scales compute and storage based on database workload.
Examples: Cloud storage - AWS S3, Google Cloud Storage, Azure Blob Storage; DBaaS - AWS RDS, Google Cloud SQL, Azure SQL Database, MongoDB Atlas.
Monitoring as a Service (MaaS)
Definition:
Monitoring as a Service (MaaS) is a cloud-based service that monitors the performance, availability, and
security of an organization's IT infrastructure and applications. Types of monitoring include:
4. Security Monitoring: Detects threats, unauthorized access, and compliance violations (e.g.,
Splunk, IBM QRadar).
5. Log Monitoring: Aggregates logs for system performance and security insights (e.g., ELK Stack,
Datadog).
6. User Experience Monitoring: Evaluates website and application performance from an end-user
perspective (e.g., Pingdom).
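A monitoring agent at its simplest just polls an endpoint and reports status; the Java sketch below does exactly that against a placeholder URL (the endpoint and timeout are illustrative assumptions, not part of any MaaS product).

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal uptime check: poll a health endpoint and report its status.
public class HealthMonitor {
    public static void main(String[] args) throws Exception {
        URL endpoint = new URL("https://example.com/health"); // placeholder URL
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setConnectTimeout(3000); // flag the service as slow/down after 3 s
        conn.setReadTimeout(3000);
        int status = conn.getResponseCode();
        long latencyMs = System.currentTimeMillis() - start;
        conn.disconnect();
        // A MaaS dashboard would aggregate many such samples and raise alerts.
        System.out.println("status=" + status + " latency=" + latencyMs + "ms"
                + (status == 200 ? " UP" : " DEGRADED"));
    }
}
```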
Advantages of MaaS:
Cost-Effective: No need for on-premise monitoring infrastructure.
Scalability: Easily adapts to growing IT needs.
Improved System Reliability: Detects and resolves issues before they impact operations.
Enhanced Security: Monitors for cyber threats and compliance violations.
Centralized Visibility: Provides a single dashboard for monitoring multiple IT components.
Challenges of MaaS:
Data Privacy Concerns: Monitoring sensitive data in the cloud requires strong security
measures.
Latency Issues: Real-time monitoring depends on network speed and connectivity.
Integration Complexity: Some MaaS tools may not easily integrate with legacy systems.
Cost Overhead: Advanced features may come with high subscription costs.
Communication as a Service (CaaS)
Communication as a Service (CaaS) is a cloud-based delivery model that provides communication
capabilities, such as voice (VoIP), video conferencing, and messaging, as hosted services managed by the provider.
Advantages of CaaS:
Cost Savings: No need for expensive on-premise PBX systems.
Scalability: Easily scales to meet business growth.
Remote Accessibility: Employees can communicate from anywhere.
Reliability & Uptime: Cloud providers ensure high availability and redundancy.
Enhanced Security: End-to-end encryption and compliance with industry regulations.
Challenges of CaaS:
Internet Dependency: Requires a stable internet connection for optimal performance.
Latency Issues: Poor network conditions can impact call and video quality.
Security Concerns: Sensitive communication data needs strong encryption.
Integration Complexity: Some businesses may need custom integrations with existing systems.
Use Cases of CaaS:
Remote Work & Collaboration: Video conferencing and virtual team communication.
E-commerce & Sales: Chatbots and automated messaging for customer engagement.
Healthcare & Telemedicine: Secure video consultations and patient communication.
Education & E-learning: Virtual classrooms and online training sessions.
Google Cloud
o Payment Model: Pay-as-you-go model, where customers pay for the resources they use.
Google offers free credits for new users and offers cost estimation tools.
o Pricing: Based on usage of services like compute time, storage, network usage, etc.
Discounts available for sustained usage.
Amazon Web Services (AWS)
o Payment Model: AWS follows a pay-per-use pricing model. There is no upfront cost,
and users only pay for what they consume.
o Pricing: Offers various pricing plans based on instance type, storage, and services used.
AWS also offers Reserved Instances for long-term savings and Savings Plans.
Microsoft Azure
o Payment Model: Similar to Google and AWS, Azure uses a pay-as-you-go pricing
model, with billing based on consumption.
o Pricing: Azure also offers a free tier and pay-as-you-go model with pricing based on
compute, storage, and data transfer. Discounts are available for long-term usage or
reserved instances.
IBM Cloud
o Payment Model: IBM Cloud offers pay-per-use pricing and subscription options. It
provides flexible pricing options for users based on resource consumption.
o Pricing: IBM offers pricing calculators to help estimate costs and provides a pay-as-you-
go model, as well as volume discounts for large enterprises.
Salesforce
o Payment Model: Salesforce primarily uses a subscription-based payment model for its
cloud services (Salesforce CRM, Marketing Cloud, etc.).
o Pricing: Subscription prices depend on the number of users, services required, and
contract length. It offers different pricing tiers depending on the features.
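The arithmetic behind pay-as-you-go billing can be sketched in a few lines of Python. The rates below are invented placeholders; real prices vary by provider, region, and instance type.

# Hypothetical unit rates (not real provider prices).
RATES = {
    "compute_hour": 0.045,      # $ per VM-hour
    "storage_gb_month": 0.023,  # $ per GB stored per month
    "egress_gb": 0.09,          # $ per GB transferred out
}

def monthly_cost(vm_hours: float, storage_gb: float, egress_gb: float) -> float:
    # Pay-as-you-go: the bill is simply usage multiplied by unit rates.
    return (vm_hours * RATES["compute_hour"]
            + storage_gb * RATES["storage_gb_month"]
            + egress_gb * RATES["egress_gb"])

# Example: two VMs running all month (about 730 hours each),
# 500 GB stored, 100 GB of outbound traffic.
print(f"${monthly_cost(2 * 730, 500, 100):.2f}")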
Services Offered:
Google Cloud
o Compute: Google Compute Engine, Google Kubernetes Engine, Google App Engine.
o Storage: Google Cloud Storage, Google Cloud Bigtable, Google Cloud Spanner.
o AI and Machine Learning: Google Cloud AI, Google Cloud Vision, Google Cloud
Natural Language.
o Networking: Google Cloud Load Balancing, Cloud Interconnect.
Deployment Models Supported:
Google Cloud
o Public Cloud: Google Cloud is primarily a public cloud offering, providing resources
like compute, storage, and networking.
o Hybrid Cloud: Supports hybrid cloud deployments with tools like Anthos for multi-
cloud management.
o Multi-cloud: Google supports multi-cloud environments, especially with Google Anthos.
Amazon Web Services (AWS)
o Public Cloud: AWS operates primarily as a public cloud service provider.
o Hybrid Cloud: AWS supports hybrid cloud environments through AWS Outposts and
AWS Direct Connect.
o Multi-cloud: AWS is also used in multi-cloud environments, though its services are
more focused on single-cloud environments.
Microsoft Azure
o Public Cloud: Azure is mainly a public cloud provider offering a broad range of
services.
o Hybrid Cloud: Azure’s hybrid cloud offerings are robust, including services like Azure
Arc and Azure Stack.
o Multi-cloud: Azure integrates well with other cloud platforms, facilitating multi-cloud
solutions.
IBM Cloud
o Public Cloud: IBM Cloud provides public cloud services, with a focus on enterprise
solutions.
o Private Cloud: Offers private cloud solutions for organizations needing more control
over their infrastructure.
o Hybrid Cloud: IBM promotes hybrid cloud environments, especially through the use of
IBM Cloud Satellite and Red Hat OpenShift.
o Multi-cloud: IBM supports multi-cloud deployments, providing solutions for managing
applications across multiple cloud providers.
Salesforce
o Public Cloud: Salesforce operates primarily in the public cloud, offering its software
through SaaS solutions.
o Hybrid Cloud: Through integrations and tools, Salesforce supports hybrid cloud
deployments, especially with its Salesforce Platform.
o Multi-cloud: Salesforce enables multi-cloud environments by connecting and integrating
various cloud services, particularly with its MuleSoft platform.
Benefits and Drawbacks:
Google Cloud
o Benefits:
Strong AI and machine learning tools.
Excellent networking capabilities (Google’s global infrastructure).
High scalability and flexibility.
o Drawbacks:
Smaller ecosystem compared to AWS and Azure.
Limited enterprise-focused features.
Amazon Web Services (AWS)
o Benefits:
Largest range of services and tools.
Mature and well-established with a vast global infrastructure.
Strong security features.
o Drawbacks:
Can be complex for beginners due to the large number of services.
Pricing can be difficult to understand, leading to potential cost overruns.
Microsoft Azure
o Benefits:
Excellent integration with existing Microsoft products like Office 365, Windows
Server, and SQL Server.
Strong hybrid cloud capabilities.
Extensive enterprise focus and support.
o Drawbacks:
More complicated billing system.
Sometimes criticized for inconsistent service performance.
IBM Cloud
o Benefits:
Strong focus on AI, data, and enterprise-level applications.
Good support for hybrid and multi-cloud environments.
Unique offerings like IBM Watson and IBM Blockchain.
o Drawbacks:
Smaller market share compared to AWS, Azure, and Google Cloud.
User interface can be less intuitive for some users.
Salesforce
o Benefits:
Comprehensive CRM and customer-centric tools.
Excellent integration with other tools via AppExchange and APIs.
Scalable and flexible cloud platform.
o Drawbacks:
Primarily focused on CRM and may not be ideal for general-purpose cloud
services.
High subscription costs, especially for smaller businesses.
UNIT- 3
COLLABORATING USING CLOUD SERVICES
What is Cloud Collaboration?
Cloud collaboration enables employees to work together seamlessly on documents and files stored off-premises or
outside the company's firewall. This collaborative process occurs when a user creates or uploads a file online and shares
access with other individuals, allowing them to share, edit, and view documents in real-time. All changes made are
automatically saved and synced to the cloud, ensuring that all users have access to the latest version of the document.
Cloud collaboration is essential for modern businesses looking to enhance teamwork, productivity, and adaptability
in an increasingly digital and remote work environment. By leveraging the right cloud collaboration tools and implementing
best practices, organizations can streamline workflows, improve communication, and achieve better outcomes.
Benefits of Cloud Collaboration:
1. Improved Team Collaboration: Storing documents in a shared online location makes it easier for team members
to access and collaborate on them. This eliminates the need for constant emailing of files and ensures everyone is
on the same page, leading to enhanced teamwork and smoother discussions.
2. Faster Access to Large Files: Cloud collaboration allows for the quick sharing of large files without the limitations
of email servers. This is crucial for teams, especially those working remotely, as it eliminates delays and distribution
challenges associated with offline file sharing methods.
3. Support for Remote Employees: Cloud-based collaboration tools empower remote teams to collaborate
effectively regardless of their geographical locations. This flexibility is vital for the success of remote teams and
ensures they can work efficiently without being tied to a physical office.
4. Embracing BYOD Trend: Cloud collaboration aligns with the Bring Your Own Device (BYOD) trend, enabling
employees to access work-related files and data from their personal devices without the need for complex network
setups or VPNs. This increases productivity and employee satisfaction.
Top Cloud Collaboration Features:
1. Internet Access to Files: Cloud collaboration tools should be accessible via web browsers or mobile devices, with
files available over the internet from any location.
2. Real-Time Communication: Features like instant messaging, team channels, and comments facilitate real-time
discussion around shared work.
3. Custom Permission Levels: Tools should allow administrators to set custom permission levels for different users,
controlling who can view, comment on, or edit each file (see the sketch after this list).
4. Version Control: Automatic syncing and version control ensure that users always have access to the latest version
of a document, with earlier versions recoverable when needed.
5. Centralized File Storage: Cloud collaboration tools should provide a centralized repository for storing all work-
related data securely and facilitating easy access for team members.
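A minimal Python sketch of custom permission levels, as promised above. The role names and their ordering are assumptions chosen for illustration; real tools define their own roles.

# Higher numbers grant strictly more rights.
LEVELS = {"viewer": 1, "commenter": 2, "editor": 3, "owner": 4}

# Minimum role required for each action on a shared file.
REQUIRED = {"view": "viewer", "comment": "commenter", "edit": "editor", "share": "owner"}

def can(user_role: str, action: str) -> bool:
    return LEVELS[user_role] >= LEVELS[REQUIRED[action]]

print(can("commenter", "edit"))   # False: commenters cannot edit
print(can("editor", "comment"))   # True: editors can do everything below their level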
Challenges in Implementing Cloud Collaboration:
1. Application Overload: Managing multiple cloud collaboration apps alongside existing systems can lead to
tool sprawl, constant context switching, and reduced productivity.
2. Lack of Collaboration Strategy: Without a clear collaboration strategy and practices, adopting cloud collaboration tools may
not yield optimal results.
Best Practices for Cloud Collaboration:
1. Access Settings: Organize teams and control access permissions to ensure data security and privacy.
2. Choose the Right Tool: Select a cloud collaboration tool that aligns with your organization's needs, security standards,
and integrates seamlessly with existing systems.
3. Layered Security: Implement multiple layers of security to protect assets and data beyond the company firewall.
4. End-User Training: Train employees on using the collaboration tool effectively and adhering to security protocols.
What Is CRM?
CRM, or Customer Relationship Management, encompasses all the tools, techniques, strategies, and technologies used by
organizations to manage and improve customer relationships, as well as customer data acquisition, retention, and analysis. It
involves storing customer data such as demographics, purchase behavior, history, and interactions to foster strong relationships,
enhance sales, and boost profits.
1. Operational CRM
● Purpose: Automates and streamlines business processes related to sales, marketing, and customer service.
● Key Features:
○ Contact management
○ Marketing automation
○ Service automation
2. Analytical CRM
● Purpose: Analyzes customer data to improve decision-making and strategies.
● Key Features:
○ Predictive analytics
3. Collaborative CRM
● Purpose: Shares customer information across teams and communication channels to improve service.
● Key Features:
○ Interaction management
4. Campaign Management CRM
● Purpose: Plans, runs, and measures marketing campaigns.
● Key Features:
○ Performance tracking
○ ROI analysis
5. Social CRM
● Purpose: Integrates social media platforms with CRM to better engage with customers.
● Key Features:
○ Handles customer queries through tickets, live chat, and knowledge base
Benefits of CRM
1. Improved Customer Relationships
Factors to Consider When Choosing a CRM:
1. Business Requirements – Identify the specific features needed to meet goals and customer expectations.
2. Budget and Cost – Review pricing, subscription plans, and total cost of ownership for affordability.
3. Scalability – Ensure the CRM can grow with the business and support more users or features.
4. Integration Capabilities – Check compatibility with existing systems and third-party tools.
5. Support and Training – Look for strong customer support and training to aid smooth implementation and adoption.
EXAMPLES OF CRM
Salesforce is one of the most powerful and widely used cloud-based CRM platforms in the world. It offers a wide range of
features including lead and opportunity management, workflow automation, customer support, and advanced analytics. What sets
Salesforce apart is its high level of customization and scalability, making it suitable for businesses of all sizes, especially large
enterprises. The platform also includes artificial intelligence capabilities through Einstein AI, which helps users gain predictive
insights and automate complex processes. Additionally, Salesforce has a vast marketplace called AppExchange that allows
businesses to extend CRM functionality with various third-party apps.
HubSpot CRM is known for its simplicity and ease of use. It is particularly popular among startups and small to medium-sized
businesses because it offers a free version with essential CRM features. HubSpot focuses on aligning marketing, sales, and
customer service efforts. It includes tools for contact management, email tracking, sales pipeline visualization, and marketing
automation. One of its key strengths is its integration with HubSpot’s broader marketing platform, making it a powerful choice
for inbound marketing strategies. Businesses can start small with the free version and scale up as they grow.
Zoho CRM is another cloud-based solution that caters to businesses looking for a cost-effective yet feature-rich CRM. It supports
sales automation, multi-channel communication, customer analytics, and artificial intelligence through its smart assistant, Zia.
The platform is highly customizable and offers a variety of modules for marketing, sales, and support functions. Zoho CRM also
integrates well with other Zoho applications as well as third-party tools, making it a flexible solution for small to medium-sized
enterprises that require collaboration across different teams.
Microsoft Dynamics 365 is a cloud-based CRM and ERP suite that offers deep integration with Microsoft products like Office
365 and Azure. It is designed for enterprises that need advanced data analytics, customer service management, and sales
forecasting. Dynamics 365 combines operational and analytical CRM capabilities, providing users with real-time insights through
embedded AI tools. Its strong data connectivity and seamless workflow with Microsoft apps make it ideal for large businesses
already using the Microsoft ecosystem. It supports a modular approach, allowing businesses to purchase only the tools they need.
Freshsales is a modern, intuitive CRM platform designed primarily for sales teams. It offers built-in phone and email features, a
visual sales pipeline, AI-based lead scoring, and automated workflows. It is easy to set up and use, making it suitable for small
and medium-sized businesses that want a sales-focused CRM without the complexity. Freshsales stands out for its affordable
pricing and responsive customer support, providing all the essential tools to manage leads, track customer interactions, and
improve conversion rates.
Pipedrive is a sales-centric CRM known for its visual and user-friendly interface. It is designed to help small businesses and sales
teams manage leads and deals more effectively. Users can easily track each opportunity through the sales pipeline and set up
automated tasks and reminders to follow up with potential clients. While it may not have the extensive features of enterprise-level
CRMs, its simplicity and focus on boosting sales performance make it a favorite among startups and growing companies.
Insightly combines CRM features with project management capabilities, making it ideal for businesses that need to manage
customer relationships and internal projects in one place. It offers lead and opportunity management, workflow automation, email
templates, and seamless integration with apps like G Suite and Microsoft 365. Insightly is best suited for small to mid-sized
businesses that want both CRM and project tracking tools without the need for separate platforms.
Cloud-Based Project Management
One of the most notable advantages of cloud-based project management is its easy setup and minimal installation
requirements. There is no need for complex hardware or software installations, and the interface is typically intuitive and
user-friendly. This allows organizations to onboard team members quickly and get projects running without delays.
Another key benefit is seamless collaboration. Cloud platforms enable teams—regardless of their physical location—to
communicate and work together in real time. Features such as shared task boards, file storage, live comments, and integrated chat
tools help improve communication, foster teamwork, and keep everyone aligned with project goals. This is especially valuable in
remote and hybrid work environments.
Cloud project management also brings increased efficiency by centralizing project data and automating repetitive tasks.
Managers can easily track project status, assign responsibilities, and monitor deadlines, while team members receive real-time
updates and reminders. This leads to quicker decision-making, better time management, and optimized resource utilization.
Another critical aspect is the reduction in maintenance and infrastructure costs. Since cloud service providers handle system
updates, data backups, and security enhancements, organizations do not need to invest heavily in IT support or additional
hardware. This not only lowers operational costs but also ensures the platform is always up to date with the latest features.
In terms of security, cloud-based project management platforms offer advanced protection, including encrypted communication,
user access controls, and compliance with data privacy regulations. These measures help ensure that sensitive project information
remains secure and accessible only to authorized users.
Cloud platforms are also known for their scalability and flexibility. Organizations can easily adjust the number of users, storage
space, or features as project demands change. Whether scaling up for a large project or scaling down during slower periods, cloud
systems adapt without the need for restructuring or reinstallation.
Moreover, using these systems often results in improved employee satisfaction. The simplicity, accessibility, and collaborative
nature of cloud tools empower employees to manage their tasks more effectively and stay connected with their teams. This
creates a more organized, less stressful work environment, boosting morale and productivity.
Popular cloud-based project management tools include:
● ClickUp: A versatile tool that provides features like task tracking, goal setting, time management, automation, and
real-time collaboration. It’s ideal for both individual users and large teams looking for an all-in-one platform.
● Monday.com: Known for its visually appealing interface, it helps teams organize work using customizable boards,
dashboards, and workflow automation. It’s widely used by marketing, sales, and creative teams for its simplicity and
clarity.
● Smartsheet: Combines the functionality of spreadsheets with project management capabilities. It allows for detailed
planning, resource allocation, reporting, and automation—making it suitable for data-driven projects.
These tools demonstrate the diversity and effectiveness of cloud-based project management solutions. They help
organizations manage projects more efficiently, ensure better team coordination, and adapt to changing business needs with ease.
Event Management Using Cloud Services
With cloud computing, event managers can handle all aspects of event planning through a centralized, web-based system. These
systems offer a wide array of tools for registration management, ticketing, attendee communication, scheduling, resource
allocation, and post-event analysis. Since everything is hosted online, there is no need for on-site software installations or
complex infrastructure—everything can be accessed from anywhere, at any time, on any device.
2. Real-Time Registration and Ticketing: Attendees can register online, make payments, and receive instant
confirmations. Organizers can monitor sign-ups in real-time, generate digital tickets, and manage capacity limits with
ease.
3. Cost-Effective Operations: Since the infrastructure is managed by cloud service providers, there is no need for
additional hardware or IT support. This reduces operational costs significantly, especially for recurring or large-scale
events.
4. Scalability and Flexibility: Cloud platforms can handle both small meetings and large international conferences. As
attendee numbers grow, resources can be scaled up automatically without any disruptions.
5. Enhanced Communication: Built-in email and notification systems ensure that attendees receive timely updates,
reminders, and important information. Some platforms also support live chat, Q&A sessions, and polling during events.
6. Virtual and Hybrid Event Support: Many cloud-based systems support virtual events and hybrid formats, including
live streaming, breakout rooms, and networking lounges. This broadens audience reach and engagement.
7. Data Analytics and Feedback: After the event, organizers can generate detailed reports on attendance, engagement
levels, and survey responses. These insights are valuable for evaluating success and planning future events.
8. Security and Privacy: Cloud service providers ensure data encryption, access control, and compliance with global
standards like GDPR, ensuring the safety of both organizer and attendee data.
Popular cloud-based event management platforms include:
● Cvent: A comprehensive event management platform used for enterprise-level conferences and meetings. It offers
registration, venue selection, marketing automation, mobile event apps, and attendee engagement features.
● Hopin: Focused on virtual and hybrid events, Hopin enables live streaming, interactive sessions, expo booths, and
networking lounges, all hosted within a cloud environment.
● Whova: Known for its mobile-friendly interface, Whova helps with event promotion, agenda management, attendee
engagement, and live interaction features.
Collaboration Tools in Cloud Computing (with Examples)
Cloud computing enables real-time collaboration by offering accessible tools that can be used by multiple users across locations.
Below is an explanation of key tools used for collaboration, along with real-world examples:
1. Calendar
Definition: A cloud-based calendar allows teams to organize and manage schedules, meetings, and reminders collaboratively.
Example:
Google Calendar lets users create events, invite team members, add meeting links, and set reminders. Team members can see
each other's availability and schedule accordingly.
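Under the hood, a calendar event is just structured data that any client can read. The following Python sketch writes a simplified event in the standard iCalendar (.ics) format; the names, times, and attendee address are illustrative.

# A pared-down iCalendar event (real files use CRLF endings and more fields).
ICS_TEMPLATE = (
    "BEGIN:VCALENDAR\n"
    "VERSION:2.0\n"
    "BEGIN:VEVENT\n"
    "SUMMARY:{summary}\n"
    "DTSTART:{start}\n"
    "DTEND:{end}\n"
    "ATTENDEE:mailto:{attendee}\n"
    "END:VEVENT\n"
    "END:VCALENDAR\n"
)

ics = ICS_TEMPLATE.format(summary="Sprint planning",
                          start="20240603T100000Z",
                          end="20240603T110000Z",
                          attendee="a@example.com")

with open("meeting.ics", "w") as f:
    f.write(ics)  # importable into Google Calendar, Outlook, and similar clients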
2. Schedules
Definition: Scheduling tools help in planning project timelines, assigning tasks, and setting deadlines collaboratively.
Example:
Trello uses boards, lists, and cards to assign tasks, set due dates, and monitor progress. All team members can view and update
the schedule in real-time.
3. Word Processing
Definition: Cloud-based word processors allow multiple users to work on documents at the same time, with automatic saving and
version control.
Example:
Google Docs allows real-time co-authoring of documents, commenting, and version history. Users can collaborate on reports or
research papers simultaneously from different locations.
4. Presentation
Definition: Online presentation tools help in jointly creating slides, sharing feedback, and delivering content virtually.
Example:
Microsoft PowerPoint Online allows team members to co-create slides, add animations, and practice presentations online. Edits
are saved automatically and can be viewed in real-time.
5. Spreadsheet
Definition: Cloud spreadsheets enable shared editing, data analysis, and financial tracking in collaborative environments.
Example:
Google Sheets lets multiple users enter data, apply formulas, and build charts at the same time—commonly used for budgets,
project timelines, and research data.
6. Databases
Definition: Cloud databases provide centralized, real-time access to data for applications and teams, allowing collaboration in
data entry, management, and analytics.
Example:
Airtable combines the features of a spreadsheet and a database. Teams use it for inventory management, CRM systems, and
event planning—collaboratively managing data with forms, filters, and views.
7. Desktops (Virtual Desktops)
Definition: Cloud-hosted virtual desktops give users access to a complete desktop environment from any device, with the
computing and storage handled in the cloud.
Example:
Amazon WorkSpaces allows users to access a cloud-hosted Windows or Linux desktop. This is useful in organizations where
employees need a standardized, secure work environment from anywhere.
8. Social Networks
Definition: Enterprise social platforms enable informal communication and networking within organizations to enhance team
bonding and sharing.
Example:
Workplace by Meta (Facebook) lets employees chat, post updates, create groups, and host live video sessions, encouraging an
open and social culture in remote teams.
9. Groupware
Definition: Groupware is collaborative software that combines communication, task management, file sharing, and more within
one integrated platform.
Example:
Microsoft Teams allows users to chat, meet, call, and collaborate on files all in one app. It’s integrated with other Microsoft 365
tools like Word, Excel, and SharePoint for seamless teamwork.
By using these cloud-based collaboration tools, organizations and teams can achieve higher productivity, better communication,
and streamlined project execution. These tools break down geographical barriers and support real-time interaction, making them
essential for modern workplaces.
Unit - 4
Virtualization for Cloud
Need for Virtualization
Pros and cons of Virtualization
Types of Virtualization
System VM
Process VM
Virtual Machine monitor
Virtual Machine Properties
Interpretation and Binary Translation
HLL VM
Supervisors
Xen, KVM, VMware, Virtual Box, Hyper-V.
Good Reading & Reference Material available @
https://www.sciencedirect.com/topics/computer-science/virtual-machine-monitor
History of Virtualization
(from “Modern Operating Systems” 4th Edition, p474 by Tanenbaum and Bos)
1960’s, IBM: CP/CMS control program: a virtual machine operating system for the IBM System/360
Model 67
2000, IBM: z-series with 64-bit virtual address spaces and backward compatible with the System/360
1974: Popek and Goldberg from UCLA published “Formal Requirements for Virtualizable Third
Generation Architectures” where they listed the conditions a computer architecture should satisfy to
support virtualization efficiently. The popular x86 architecture that originated in the 1970s did not
support these requirements for decades.
1990’s, Stanford researchers, VMware: Researchers developed a new hypervisor and founded
VMware, the biggest virtualization company of today. Its first virtualization solution for x86 was
released in 1999.
IBM was the first to produce and sell virtualization for the mainframe, but VMware popularised
virtualization for the masses.
Need for Virtualization
1. Enhanced Performance
Currently, the end-user system (the PC) is sufficiently powerful to fulfill all the basic
computation requirements of the user, with various additional capabilities that are rarely
used. Most of these systems have enough spare resources to host a virtual
machine manager and run a virtual machine with acceptable performance.
4. ECO-FRIENDLY INITIATIVES
At this time, corporations are actively seeking ways to minimize the power consumed
by their systems. Data centers are major power consumers: operating one requires a
continuous power supply, and a good amount of additional energy is needed to keep the
equipment cool. Server consolidation reduces both the power consumed and the cooling
load by cutting the number of physical servers, and virtualization provides a
sophisticated way of achieving server consolidation.
Contd……
5. ADMINISTRATIVE COSTS
Furthermore, the rising demand for surplus capacity translates into more servers in a data
center, which is responsible for a significant increase in administrative costs. Hardware monitoring,
server setup and updates, defective hardware replacement, server resource monitoring, and
backups are common system administration tasks. These are personnel-intensive
operations, so administrative costs grow with the number of servers. Virtualization
decreases the number of servers required for a given workload and hence reduces the cost of
administrative staff.
Benefits of Virtualization
1. More flexible and efficient allocation of resources.
Components of a Virtualized Environment:
1. GUEST
The guest represents the system component, typically an operating system together with its
applications, that interacts with the virtualization layer rather than with the host directly.
2. HOST
The host represents the original environment where the guest is supposed to be managed. Each guest
runs on the host using shared resources allocated to it by the host. The operating system works as the
host and manages physical resource management and device support.
3. VIRTUALIZATION LAYER
The virtualization layer is responsible for recreating the same or a different environment where the
guest will operate. It is an additional abstraction layer between the network, storage, and compute
hardware and the applications running on it. Without virtualization, a machine usually runs a single
operating system, which is very inflexible compared to what virtualization allows.
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
Contd……
1. Application Virtualization
Application virtualization gives a user remote access to an application hosted on a
server. The server stores all personal information and other characteristics of the
application, yet the application can still be used on a local workstation through the internet. An
example would be a user who needs to run two different versions of the same software.
Technologies that use application virtualization are hosted applications and packaged
applications.
2. Network Virtualization
The ability to run multiple virtual networks, each with a separate control and data plane,
co-existing on top of one physical network. The virtual networks can be managed by individual
parties that may not trust each other.
Network virtualization provides a facility to create and provision virtual networks: logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
security, within days or even weeks.
Contd……
3. Desktop Virtualization
Desktop virtualization allows the user's OS to be remotely stored on a server in the data
centre, letting the user access their desktop virtually from any location on a different
machine. Users who want specific operating systems other than Windows Server will need
to have a virtual desktop. The main benefits of desktop virtualization are user mobility,
portability, and easy management of software installation, updates, and patches.
4. Storage Virtualization
Storage virtualization presents an array of servers managed by a virtual storage system.
The servers aren't aware of exactly where their data is stored and instead function more
like worker bees in a hive. It allows storage from multiple sources to be
managed and utilized as a single repository. Storage virtualization software maintains
smooth operations, consistent performance, and a continuous suite of advanced functions
despite changes, breakdowns, and differences in the underlying equipment.
Contd……
5. Server Virtualization
This is a kind of virtualization in which the masking of server resources takes place. The central
server (physical server) is divided into multiple different virtual servers by changing their identity
numbers and processors, so each can run its own operating system in an isolated manner, while
each sub-server still knows the identity of the central server. This increases performance and
reduces operating cost by turning main server resources into sub-server resources. It is
beneficial for virtual migration, reducing energy consumption, reducing infrastructure cost, etc.
6. Data Virtualization
This is the kind of virtualization in which data is collected from various sources and managed
in a single place, without users needing to know technical details such as how the data is collected,
stored, and formatted. The data is arranged logically so that its virtual view can be accessed by
interested people, stakeholders, and users through various cloud services remotely. Many big
companies provide data virtualization services, such as Oracle, IBM, AtScale, and CData.
System VM & Process VM
A System Virtual Machine (System VM) provides a complete system
platform which supports the execution of a complete operating system
(OS).
A Process Virtual Machine (Process VM) provides a platform-independent execution
environment for a single application or process. Process virtual machines are implemented
using an interpreter; to improve performance, these virtual machines use just-in-time
compilers internally.
Examples of Process VMs: JVM (Java Virtual Machine) is used for the Java language,
PVM (Parrot Virtual Machine) is used for the Perl language, and CLR (Common Language
Runtime) is used for the .NET Framework.
Virtual Machine Monitor (VMM)
A Virtual Machine Monitor (VMM) is a software program that enables the creation, management and
governance of virtual machines (VM) and manages the operation of a virtualized environment on top
of a physical host machine.
VMM is also known as Virtual Machine Manager and Hypervisor. However, the provided architectural
implementation and services differ by vendor product.
VMM is the primary software behind virtualization environments and implementations. When installed
over a host machine, VMM facilitates the creation of VMs, each with separate operating systems (OS)
and applications. VMM manages the backend operation of these VMs by allocating the necessary
computing, memory, storage and other input/output (I/O) resources.
VMM also provides a centralized interface for managing the entire operation, status and availability of
VMs that are installed over a single host or spread across different and interconnected hosts.
Virtual Machine Monitor (VMM / Hypervisor)
A virtual machine monitor (VMM/hypervisor) partitions the resources of computer system into one
or more virtual machines (VMs). Allows several operating systems to run concurrently on a single
hardware platform.
A VM is an execution environment that runs an OS
VM – an isolated environment that appears to be a whole computer, but actually only has access to a
portion of the computer resources
A VMM allows:
Multiple services to share the same platform
Live migration - the movement of a server from one
platform to another
System modification while maintaining backward compatibility with the original
system
Enforces isolation among the systems, thus security
A guest operating system is an OS that runs in a VM under the control of the VMM.
VMM Virtualizes the CPU and the Memory
A VMM (also hypervisor)
Traps the privileged instructions executed by a guest OS and enforces the
correctness and safety of the operation
Traps interrupts and dispatches them to the individual guest operating systems
Maintains a shadow page table for each guest OS and replicates any modification made
by the guest OS in its own shadow page table. This shadow page table points to the
actual page frame and it is used by the Memory Management Unit (MMU) for dynamic
address translation.
Monitors the system performance and takes corrective actions to avoid performance
degradation. For example, the VMM may swap out a VM to avoid thrashing.
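A toy model of the trap-and-emulate behaviour described above: unprivileged guest instructions run directly, while privileged ones trap into the VMM, which emulates them safely. The instruction names and the guest program are invented for illustration.

# Instructions a guest OS is not allowed to execute directly.
PRIVILEGED = {"OUT", "HLT", "LOAD_PAGE_TABLE"}

def vmm_run(guest_program):
    for instr in guest_program:
        if instr in PRIVILEGED:
            # Trap: the VMM intercepts and emulates on behalf of the guest.
            print(f"VMM trapped {instr}: emulating safely")
        else:
            # Ordinary instructions execute natively on the hardware.
            print(f"executing {instr} natively")

vmm_run(["ADD", "LOAD_PAGE_TABLE", "MUL", "HLT"])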
Type 1 and 2 Hypervisors
Taxonomy of VMMs:
1. Type 1 Hypervisor (bare metal, native): supports multiple virtual machines
and runs directly on the hardware (e.g., VMware ESX, Xen, Denali)
2. Type 2 Hypervisor (hosted) VM - runs under a host operating system (e.g.,
user-mode Linux)
Virtual Machine Properties
Being able to use apps and operating systems without the need for hardware presents users
with some advantages over a traditional computer. The benefits of virtual machines include:
1. Compatibility
Virtual machines host their own guest operating systems and applications, using all the
components found in a physical computer (motherboard, VGA card, network card controller,
etc). This allows VMs to be fully compatible with all standard x86 operating systems,
applications and device drivers. You can therefore run all the same software that you would
usually use on a standard x86 computer.
2. Isolation
VMs share the physical resources of a computer, yet remain isolated from one another. This
separation is the core reason why virtual machines create a more secure environment for
running applications when compared to a non-virtual system. If, for example, you’re running
four VMs on a server and one of them crashes, the remaining three will remain unaffected
and will still be operational.
Contd……
3. Encapsulation
A virtual machine acts as a single software package that encapsulates a complete set of
hardware resources, an operating system, and all its applications. This makes VMs
incredibly portable and easy to manage. You can move and copy a VM from one location
to another like any other software file, or save it on any storage medium — from storage
area networks (SANs) to a common USB flash drive.
4. Hardware independence
Virtual machines can be configured with virtual components that are completely
independent of the physical components of the underlying hardware. VMs that reside on
the same server can even run different types of operating systems. Hardware
independence allows you to move virtual machines from one x86 computer to another
without needing to make any changes to the device drivers, operating system or
applications.
Interpretation and Binary Translation
In interpretation, the behavior of the hardware is, in simple terms, produced
by a software program. The emulation process involves only the necessary hardware
components, so the user or virtual machine does not perceive the underlying
environment. This process is also termed interpretation.
Binary Translation is one specific approach to implementing full virtualization that does
not require hardware virtualization features.
It involves examining the executable code of the virtual guest for "unsafe" instructions,
translating these into "safe" equivalents, and then executing the translated code.
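The idea can be shown with a toy Python sketch: scan a block of guest code and rewrite "unsafe" instructions into "safe" equivalents before execution. The instruction set here is invented purely for illustration.

# Hypothetical mapping from unsafe guest instructions to safe substitutes.
UNSAFE_TO_SAFE = {"CLI": "VMM_DISABLE_VIRQ", "STI": "VMM_ENABLE_VIRQ"}

def translate(block):
    # Examine each instruction; substitute the unsafe ones, keep the rest.
    return [UNSAFE_TO_SAFE.get(instr, instr) for instr in block]

guest_block = ["MOV", "CLI", "ADD", "STI"]
print(translate(guest_block))
# ['MOV', 'VMM_DISABLE_VIRQ', 'ADD', 'VMM_ENABLE_VIRQ']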
A static compiler is probably the best solution when performance is paramount, portability is not a great concern, destinations of calls are
known at compile time and programs bind to external symbols before running. Thus, most third generation languages like C and FORTRAN
are implemented this way. However, if the language is object-oriented, binds to external references late, and must run on many
platforms, it may be advantageous to implement a compiler that targets a fictitious high-level language virtual machine (HLL VM)
instead.
In Smith's taxonomy, an HLL VM is a system that provides a process with an execution environment that does not correspond to any
particular hardware platform. The interface offered to the high-level language application process is usually designed to hide differences
between the platforms to which the VM will eventually be ported. For instance, UCSD Pascal p-code and Java bytecode both express virtual
instructions as stack operations that take no register arguments. Gosling, one of the designers of the Java virtual machine, has said that he
based the design of the JVM on the p-code machine. Smalltalk, Self and many other systems have taken a similar approach. A VM may also
provide virtual instructions that support peculiar or challenging features of the language. For instance, a Java virtual machine has
specialized virtual instructions for object-oriented operations such as object creation, method invocation, and field access.
Contd……
This approach has benefits for the users as well. For instance, applications can be
distributed in a platform neutral format. In the case of the Java class libraries or UCSD
Pascal programs, the amount of virtual software far exceeds the size of the VM.
The advantage is that the relatively small amount of effort required to port the VM to a
new platform enables a large body of virtual applications to run on the new platform also.
There are various approaches a HLL VM can take to actually execute a virtual program.
An interpreter fetches, decodes, then emulates each virtual instruction in turn. Hence,
interpreters are slow but can be very portable.
Faster, but less portable, a dynamic compiler can translate to native code and dispatch
regions of the virtual application. A dynamic compiler can exploit runtime knowledge of
program values so it can sometimes do a better job of optimizing the program than a
static compiler.
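The stack-oriented style of p-code and Java bytecode mentioned above can be captured in a tiny interpreter. This is a minimal sketch with an invented three-instruction set, not any real VM's bytecode.

def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])        # operand goes onto the stack
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # no register arguments, pure stack ops
        elif op == "PRINT":
            print(stack.pop())

# Computes and prints 2 + 3 the way a stack machine would.
run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PRINT",)])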
Supervisors
A supervisory program or supervisor is a computer program, usually part of an operating system, that controls the
execution of other routines and regulates work scheduling, input/output operations, error actions, and similar
functions, as well as the flow of work in a data processing system. It is thus capable of executing both
input/output operations and privileged operations. The operating system of a computer usually operates in this
mode.
Supervisor mode is "an execution mode on some processors which enables execution of all instructions, including
privileged instructions. It may also give access to a different address space, to memory management hardware and to
other peripherals. This is the mode in which the operating system usually runs."
It can also refer to a program that allocates computer component space and schedules computer events by task
queuing and system interrupts. Control of the system is returned to the supervisory program frequently enough to
ensure that demands on the system are met.
Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360.
In other operating systems, the supervisor is generally called the kernel. In the 1970s, IBM further abstracted the
supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e. the capacity to run
multiple operating systems on the same machine totally independently from each other. Hence the first such system
was called Virtual Machine or VM.
Xen
Xen (pronounced /ˈzɛn/) is a type-1 hypervisor, providing services that allow multiple
computer operating systems to execute on the same computer
hardware concurrently.
The Xen Project community develops and maintains Xen Project as free and open-
source software, subject to the requirements of the GNU General Public
License (GPL), version 2. Xen Project is currently available for the IA-32, x86-
64 and ARM instruction sets.
Contd……
Xen provides a form of virtualization known as Paravirtualization, in which guests run a
modified operating system.
The guests are modified to use a special hypercall ABI, instead of certain architectural
features.
Through Paravirtualization, Xen can achieve high performance even on its host
architecture (x86) which has a reputation for non-cooperation with traditional virtualization
techniques.
Xen can run paravirtualized guests ("PV guests" in Xen terminology) even on CPUs without
any explicit support for virtualization.
Paravirtualization avoids the need to emulate a full set of hardware and firmware services,
which makes a PV system simpler to manage and reduces the attack surface exposed to
potentially malicious guests. On 32-bit x86, the Xen host kernel code runs in Ring 0, while
the hosted domains run in Ring 1 (kernel) and Ring 3 (applications).
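A toy contrast with trap-and-emulate: under paravirtualization the modified guest calls the hypervisor explicitly through hypercalls instead of issuing privileged hardware instructions. All names below are invented for illustration.

class Hypervisor:
    def hypercall(self, name, **args):
        # The hypervisor validates and performs the operation for the guest.
        print(f"hypervisor handling {name} with {args}")

class ParavirtualizedGuest:
    def __init__(self, hv):
        self.hv = hv

    def set_page_table(self, base):
        # Instead of writing the page-table register directly (privileged),
        # the modified guest asks the hypervisor to do it.
        self.hv.hypercall("mmu_update", base=hex(base))

ParavirtualizedGuest(Hypervisor()).set_page_table(0x1000)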
KVM
KVM (Kernel-based Virtual Machine) is a virtualization module in the Linux kernel that
allows the kernel to act as a hypervisor. It requires a processor with hardware
virtualization extensions (Intel VT-x or AMD-V).
KVM is free and open-source software. Each virtual machine appears to the host as a
regular Linux process scheduled by the standard Linux scheduler, and is commonly
created and managed with user-space tools such as QEMU and libvirt.
VMware
VMware is the leading commercial virtualization vendor. Its products include VMware
ESXi, a type-1 (bare-metal) hypervisor used in data centers, and VMware Workstation and
VMware Fusion, type-2 (hosted) hypervisors for desktop use.
VMware pioneered the use of binary translation to virtualize the x86 architecture before
hardware support for virtualization (Intel VT-x, AMD-V) became widely available.
VirtualBox
VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as
well as home use. Not only is VirtualBox an extremely feature rich, high performance
product for enterprise customers,
it is also the only professional solution that is freely available as Open Source Software
under the terms of the GNU General Public License (GPL) version 2.
Presently, VirtualBox runs on Windows, Linux, Macintosh, and Solaris hosts and supports
a large number of guest operating systems including but not limited to Windows (NT 4.0,
2000, XP, Server 2003, Vista, Windows 7, Windows 8, Windows 10), DOS/Windows 3.x,
Linux (2.4, 2.6, 3.x and 4.x), Solaris and OpenSolaris, OS/2, and OpenBSD.
VirtualBox is being actively developed with frequent releases and has an ever growing
list of features, supported guest operating systems and platforms it runs on.
Hyper-V
Microsoft Hyper-V (Type-1), codenamed Viridian, and briefly known before its release as Windows Server
Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows.
A Type 1 hypervisor runs directly on the underlying computer's physical hardware, interacting directly with its CPU,
memory, and physical storage. For this reason, Type 1 hypervisors are also referred to as bare-metal hypervisors. A
Type 1 hypervisor takes the place of the host operating system.
A Type 2 hypervisor, also called a hosted hypervisor, is a virtual machine (VM) manager that is installed as a
software application on an existing operating system (OS). This makes it easy for an end user to run a VM on a
personal computing (PC) device.
The main difference between Type 1 vs. Type 2 hypervisors is that Type 1 runs on bare metal and Type 2 runs on top
of an operating system.
The key difference between Hyper-V and a Type 2 hypervisor is that Hyper-V uses hardware-assisted virtualization.
This allows Hyper-V virtual machines to communicate directly with the server hardware, allowing virtual machines to
perform far better than a Type 2 hypervisor would allow.
Unit – 5: Security Standards and Applications
(Cloud Computing)
Security in Clouds
Cloud security challenges
Software as a Service Security
Common Standards
The Open Cloud Consortium
The Distributed Management Task Force
Standards for Application Developers
Standards for Messaging
Standards for Security
End user Access to Cloud Computing
Mobile Internet devices and the Cloud
Hadoop, MapReduce, Virtual Box, Google App Engine
Programming Environment for Google App Engine
Security in Clouds
Cloud Security, also known as cloud computing security, consists of a set of policies, controls,
procedures and technologies that work together to protect cloud-based systems, data, and
infrastructure.
These security measures are configured to protect cloud data, support regulatory compliance and
protect customers' privacy as well as setting authentication rules for individual users and devices.
From authenticating access to filtering traffic, cloud security can be configured to the exact needs of
the business. And because these rules can be configured and managed in one place, administration
overheads are reduced and IT teams empowered to focus on other areas of the business.
The way cloud security is delivered will depend on the individual cloud provider or the cloud security
solutions in place. However, implementation of cloud security processes should be a joint
responsibility between the business owner and solution provider.
For businesses making the transition to the cloud, robust cloud security is imperative. Security
threats are constantly evolving and becoming more sophisticated, and cloud computing is no less at
risk than an on-premise environment. For this reason, it is essential to work with a cloud provider
that offers best-in-class security that has been customized for your infrastructure.
Benefits of Cloud Security
1. Centralized security: Just as cloud computing centralizes applications and data, cloud
security centralizes protection. Cloud-based business networks consist of numerous
devices and endpoints that can be difficult to manage when dealing with shadow IT
or BYOD. Managing these entities centrally enhances traffic analysis and web filtering,
streamlines the monitoring of network events and results in fewer software and policy
updates. Disaster recovery plans can also be implemented and actioned easily when they
are managed in one place.
2. Reduced costs: One of the benefits of utilizing cloud storage and security is that it
eliminates the need to invest in dedicated hardware. Not only does this reduce capital
expenditure, but it also reduces administrative overheads. Where once IT teams were
firefighting security issues reactively, cloud security delivers proactive security features
that offer protection 24/7 with little or no human intervention.
Contd……
4. Reliability: Cloud computing services offer the ultimate in dependability. With the
right cloud security measures in place, users can safely access data and applications
within the cloud no matter where they are or what device they are using.
Software as a Service Security
SaaS security is cloud-based security designed to protect the data that software as a
service applications carry.
It’s a set of practices that companies that store data in the cloud put in place to protect
sensitive information pertaining to their customers and the business itself.
However, SaaS security is not the sole responsibility of the organization using the cloud
service. In fact, the service customer and the service provider share the obligation to
adhere to SaaS security guidelines published by the National Cyber Security Center
(NCSC).
SaaS security is also an important part of SaaS management that aims to reduce
unused licenses, shadow IT and decrease security risks by creating as much visibility as
possible.
6 SaaS Security best practices
One of the main benefits that SaaS has to offer is that the respective applications are on-
demand, scalable, and very fast to implement, saving companies valuable resources and
time. On top of that, the SaaS provider typically handles updates and takes care of software
maintenance.
This flexibility and the fairly open access have created new security risks that SaaS security
best practices are trying to address and mitigate. Below are 6 security practices and solutions
that every cloud-operating business should know about.
1. Enhanced Authentication
Offering a cloud-based service to your customers means that there has to be a way for them
to access the software. Usually, this access is regulated through login credentials. That’s why
knowing how your users access the resource and how the third-party software provider
handles the authentication process is a great starting point.
Contd……
Once you understand the various methods, you can make better SaaS security decisions and
enable additional security features like multifactor authentication or integrate other enhanced
authentication methods.
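One widely used second factor is the time-based one-time password (TOTP, RFC 6238). The sketch below implements it with only Python's standard library; the Base32 secret is a well-known test value, not a real credential.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # time step since epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds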
2. Data Encryption
The majority of channels that SaaS applications use to communicate employ TLS (Transport Layer Security)
to protect data that is in transit. However, data that is at rest can be just as vulnerable to cyber attacks as data
that is being exchanged. That’s why more and more SaaS providers offer encryption capabilities that protect
data in transit and at rest. It’s a good idea to talk to your provider and check whether enhanced data encryption
is available for all the SaaS services you use.
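For data at rest, symmetric encryption before storage is the usual approach. Here is a minimal sketch using the third-party cryptography package; in a real SaaS deployment the key would come from a key-management service rather than being generated in application code.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetched from a KMS, never hard-coded
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: jane@example.com")  # store this at rest
assert f.decrypt(ciphertext) == b"customer record: jane@example.com"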
5. Consider CASBs
It is possible that the SaaS provider that you are choosing is not able to provide the level of SaaS security that your
company requires. If there are no viable alternatives when it comes to the vendor, consider cloud access security broker
(CASB) tool options. A CASB allows your company to add a layer of security controls that are not native to your
SaaS application. When selecting a CASB, whether proxy-based or API-based, make sure it fits into your existing IT
architecture.
Common Standards
The most well-known standard in information security and compliance is ISO 27001,
developed by the International Organization for Standardization.
The ISO 27001 standard was created to assist enterprises in protecting sensitive data
through best practices.
Cloud compliance is the principle that cloud-delivered systems must be compliant with
the standards their customers require. Cloud compliance ensures that cloud computing
services meet compliance requirements.
Contd……
https://kinsta.com/blog/cloud-security/#how-does-cloud-security-work
The Open Cloud Consortium (OCC)
OCC manages and operates resources including the Open Science Data Cloud (aka OSDC), which is a
multi-petabyte scientific data sharing resource.
The consortium is based in Chicago, Illinois, and is managed by the 501(c)3 Center for Computational
Science
3. The Open Cloud Testbed - This working group manages and operates the Open Cloud Testbed. The
Open Cloud Testbed (OCT) is a geographically distributed cloud testbed spanning four data centers and
connected with 10G and 100G network connections. The OCT is used to develop new cloud computing
software and infrastructure.
4. The Biomedical Data Commons - The Biomedical Data Commons (BDC) is cloud-based infrastructure that
provides secure, compliant cloud services for managing and analyzing genomic data, electronic medical records
(EMR), medical images, and other PHI data. It provides resources to researchers so that they can more easily make
discoveries from large complex controlled access datasets. The BDC provides resources to those institutions in the
BDC Working Group. It is an example of what is sometimes called condominium model of sharing research
infrastructure in which the research infrastructure is operated by a consortium of educational and research
organizations and provides resources to the consortium.
Contd……
5. NOAA Data Alliance Working Group - The OCC National Oceanographic and Atmospheric
Administration (NOAA) Data Alliance Working Group supports and manages the NOAA data
commons and the surrounding community interested in the open redistribution of NOAA
datasets.
In 2015, the OCC was accepted into the Matter healthcare community at Chicago's historic
Merchandise Mart. Matter is a community of healthcare entrepreneurs and industry leaders
working together in a shared space to individually and collectively fuel the future of healthcare
innovation.
In 2015, the OCC announced a collaboration with the National Oceanic and Atmospheric
Administration (NOAA) to help release their vast stores of environmental data to the general
public. This effort is managed by the OCC's NOAA data alliance working group.
The Distributed Management Task Force (DMTF)
DMTF is a 501(c)(6) nonprofit industry standards organization that creates open manageability standards spanning
diverse emerging and traditional IT infrastructures including cloud, virtualization, network, servers and storage.
Member companies and alliance partners collaborate on standards to improve interoperable management of
information technologies.
Based in Portland, Oregon, the DMTF is led by a board of directors representing technology companies including:
Broadcom Inc., Cisco, Dell Technologies, Hewlett Packard Enterprise, Intel Corporation, Lenovo, NetApp, Positive
Tecnologia S.A., and Verizon.
Founded in 1992 as the Desktop Management Task Force, the organization's first standard was the now-legacy
Desktop Management Interface (DMI). As the organization evolved to address distributed management through
additional standards, such as the Common Information Model (CIM), it changed its name to the Distributed
Management Task Force in 1999; it is now known simply as DMTF.
The DMTF continues to address converged, hybrid IT and the Software Defined Data Center (SDDC)
with its latest specifications, such as the CADF (Cloud Auditing Data Federation), CIMI (Cloud Infrastructure Management
Interface), CIM (Common Information Model), DASH (Desktop and Mobile Architecture for System Hardware), MCTP (Management
Component Transport Protocol), NC-SI (Network Controller Sideband Interface), OVF (Open Virtualization Format), PLDM (Platform
Level Data Model), Redfish Device Enablement (RDE), Redfish (including Protocols, Schema, Host Interface, Profiles), SMASH (Systems
Management Architecture for Server Hardware) and SMBIOS (System Management BIOS).
The Distributed Management Task Force
(DMTF)
DMTF enables more effective management of millions of IT systems
worldwide by bringing the IT industry together to collaborate on the
development, validation and promotion of systems management
standards.
The group spans the industry with 160 member companies and
organizations, and more than 4,000 active participants crossing
43 countries.
The DMTF board of directors is led by 16 innovative, industry-
leading technology companies.
The Distributed Management Task Force
(DMTF)
DMTF management standards are critical to enabling management interoperability
among multi vendor systems, tools and solutions within the enterprise.
The Open Virtualization Format (OVF) is a fairly new standard that has emerged
within the VMAN Initiative.
Benefits of VMAN include lowering the IT learning curve and lowering complexity
for vendors implementing their solutions.
Standardized Approaches available to
Companies due to VMAN Initiative
Deploy virtual computer systems
Discover and take inventory of virtual computer systems
Manage the life cycle of virtual computer systems
Add/change/delete virtual resources
Monitor virtual systems for health and performance
Standards for Application Developers
The purpose of application development standards is to ensure
uniform, consistent, high-quality software solutions.
An Ajax framework helps developers to build dynamic web pages on the client side.
Data is sent to or from the server using requests, usually written in JavaScript.
LAMP is considered by many to be the platform of choice for development and
deployment of high-performance web applications which require a solid and reliable
foundation. The acronym derives from the fact that it includes Linux, Apache,
MySQL, and PHP (or Perl or Python).
Post Office Protocol (POP)
Early mail clients had to read mail directly on the server where it was delivered. The Post
Office Protocol (POP) was introduced to circumvent this situation by downloading mail to the
local machine.
Once the client connects, POP servers begin to download the messages and subsequently
delete them from the server (a default setting) in order to make room for more messages.
Internet Messaging Access Protocol
Once mail messages are downloaded with POP, they are automatically deleted
from the server when the download process has finished.
To get around these problems, a standard called the Internet Message Access Protocol
(IMAP) was created. IMAP allows messages to be kept on the server but viewed and
manipulated (usually via a browser) as though they were stored locally.
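The download-and-delete versus keep-on-server distinction can be seen with Python's standard poplib and imaplib modules. The host name and credentials below are placeholders, not real accounts.

import poplib
import imaplib

# POP: download-and-delete model
pop = poplib.POP3_SSL("mail.example.com")   # hypothetical server
pop.user("alice")                            # placeholder credentials
pop.pass_("app-password")
count = len(pop.list()[1])
for i in range(count):
    response, lines, octets = pop.retr(i + 1)  # download message i+1
    pop.dele(i + 1)                            # remove it from the server
pop.quit()

# IMAP: messages stay on the server
imap = imaplib.IMAP4_SSL("mail.example.com")
imap.login("alice", "app-password")
imap.select("INBOX")                       # open the server-side mailbox
typ, data = imap.search(None, "UNSEEN")    # query without downloading everything
imap.logout()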
Standards for Security
Security standards define the processes, procedures, and practices
necessary for implementing a secure environment that provides
privacy and security of confidential information in a cloud
environment.
Security protocols used in the cloud include:
Security Assertion Markup Language (SAML)
Open Authorization (OAuth)
OpenID
SSL/TLS
Security Assertion Markup Language (SAML)
SAML is an XML-based standard for communicating authentication, authorization,
and attribute information among online partners. It allows businesses to securely send
assertions between partner organizations regarding the identity and entitlements of a
principal.
SAML allows a user to log on once for affiliated but separate Web sites. SAML
is designed for business-to-business (B2B) and business-to-consumer (B2C)
transactions.
SAML is built on a number of existing standards, namely, SOAP, HTTP, and XML.
SAML relies on HTTP as its communications protocol and specifies the use of
SOAP.
Most SAML transactions are expressed in a standardized form of XML. SAML
assertions and protocols are specified using XML schema.
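To illustrate that XML form, the sketch below parses a stripped-down, unsigned assertion with Python's standard xml.etree library. The namespace URN is the real SAML 2.0 assertion namespace, but the subject name and minimal structure are purely illustrative; real assertions also carry signatures, conditions, and issuer information.

import xml.etree.ElementTree as ET

# A stripped-down, unsigned assertion for illustration only.
assertion = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID>alice@partner.example.com</saml:NameID>
  </saml:Subject>
</saml:Assertion>
"""

root = ET.fromstring(assertion)
ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
# Extract the principal's identity from the assertion.
print(root.find("saml:Subject/saml:NameID", ns).text)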
Open Authorization (OAuth)
OAuth is an open protocol, initiated by Blaine Cook and Chris Messina, that
allows secure API authorization through a simple, standardized method for
various types of web applications.
OAuth is a method for publishing and interacting with protected
data.
OAuth provides users access to their data while protecting account
credentials.
OAuth by itself provides no privacy at all and depends on other protocols
such as SSL to accomplish that.
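To make the token-based idea concrete, here is a minimal sketch of an OAuth 2.0-style client-credentials exchange using the third-party requests library. The endpoints, client ID, and secret are hypothetical placeholders; and, as noted above, the exchange must run over SSL/TLS, since OAuth itself provides no confidentiality.

import requests  # third-party HTTP library, assumed installed via pip

# Hypothetical endpoints and credentials; real providers publish their own.
token_resp = requests.post(
    "https://auth.example.com/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "my-app",
        "client_secret": "placeholder-secret",
    },
)
access_token = token_resp.json()["access_token"]

# The API sees a scoped token, never the user's account password.
api_resp = requests.get(
    "https://api.example.com/v1/photos",
    headers={"Authorization": "Bearer " + access_token},
)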
OpenID
OpenID is an open, decentralized standard for user authentication and access
control that allows users to log onto many services using the same digital
identity.
It is a single-sign-on (SSO) method of access control.
It replaces the common log-in process (i.e., a log-in name and a password) by
allowing users to log in once and gain access to resources across participating
systems.
An OpenID is in the form of a unique URL and is authenticated by the
entity hosting the OpenID URL.
SSL/TLS
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are
cryptographic protocols designed to provide security and data integrity for
communications over TCP/IP networks.
TLS and SSL encrypt the segments of network connections at the transport layer.
TLS provides endpoint authentication and data confidentiality by using
cryptography.
TLS involves three basic phases:
Peer negotiation for algorithm support
Key exchange and authentication
Symmetric cipher encryption and message authentication
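All three phases happen inside the handshake in this minimal Python sketch using the standard ssl module; example.com stands in for any TLS-enabled host.

import socket
import ssl

# create_default_context() enables certificate verification and sane defaults.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket performs the handshake: algorithm negotiation, key exchange,
    # and certificate-based endpoint authentication.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())   # negotiated symmetric cipher suite
        # Application data is encrypted with the agreed symmetric cipher.
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")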
End-User Access to Cloud Computing
In its strictest sense, end-user computing (EUC) refers to computer systems and
platforms that help non-programmers create applications. What matters is that
a well-designed EUC/VDI plan can allow users to access the digital platforms they need
to be productive, both on-premises and working remotely in the cloud.
An end-user computing application, or EUC, is any application that is not managed and
developed in an environment that employs robust IT general controls. Although the
most pervasive EUCs are spreadsheets, EUCs can also include user databases, queries,
scripts, or output from various reporting tools.
Broadly, end-user computing covers a wide range of user-facing resources, such as:
desktop and notebook end user computers; desktop operating systems and
applications; wearables and smartphones; cloud, mobile, and web applications; and
virtual desktops and applications.
Programming Environment for Google App Engine
An App Engine web application can be described as having three major parts:
Application instances
Scalable data storage
Scalable services
Google App Engine (often referred to as GAE or simply App Engine) is a cloud computing
platform as a service for developing and hosting web applications in Google-managed
data centers.
Applications are sandboxed and run across multiple servers. App Engine offers automatic
scaling for web applications—as the number of requests increases for an application, App
Engine automatically allocates more resources for the web application to handle the
additional demand.
Google App Engine primarily supports Go, PHP, Java, Python, Node.js, .NET, and
Ruby applications, although it can also support other languages via "custom runtimes".
The service is free up to a certain level of consumed resources, but only in the standard
environment, not in the flexible environment. Fees are charged for additional storage,
bandwidth, or instance hours required by the application. It was first released as a preview
version in April 2008 and came out of preview in September 2011.
The environment you choose depends on the language and related technologies you want
to use for your application.
Runtimes and frameworks
Python web frameworks that run on Google App Engine include
Django, CherryPy, Pyramid, Flask, web2py, and webapp2, as well as a custom Google-written
webapp framework and several others designed specifically for the platform that have
emerged since its release.
Any Python framework that supports the WSGI standard (via the CGI adapter) can be used to
create an application; the framework can be uploaded with the developed application. Third-party
libraries written in pure Python may also be uploaded. A minimal handler is sketched below.
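As a concrete sketch of such a WSGI application, here is a minimal handler in webapp2, the GAE-bundled framework mentioned above; the route and response text are illustrative, and this assumes the legacy Python 2.7 standard environment where webapp2 is built in.

import webapp2  # bundled with the legacy GAE Python 2.7 runtime

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Respond to GET / with plain text.
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("Hello from App Engine!")

# A WSGI application object like this one is named in app.yaml as the
# handler for incoming requests; App Engine scales instances of it.
app = webapp2.WSGIApplication([("/", MainPage)], debug=True)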
Google App Engine supports many Java standards and frameworks. Core to this is the
Servlet 2.5 technology using the open-source Jetty Web Server, along with accompanying
technologies such as JSP. JavaServer Faces operates with some workarounds. A newer
release of App Engine Standard Java in Beta supports Java 8, Servlet 3.1, and Jetty 9.
Though the integrated database, Google Cloud Datastore, may be unfamiliar to
programmers, it can be accessed through JPA, JDO, or a simple low-level API.
There are several alternative libraries and frameworks you can use to model and
map data to the database, such as Objectify, Slim3, and the Jello framework.
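The persistence interfaces cited above (JPA, JDO, Objectify) are Java-side; on the Python side, the legacy ndb client plays the same role. The following sketch assumes the legacy Python 2.7 standard environment, where google.appengine.ext.ndb is available.

from google.appengine.ext import ndb  # legacy Python runtime client

class Greeting(ndb.Model):
    # Property classes map Python attributes to Datastore fields.
    content = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

greeting = Greeting(content="Hello, Datastore")
key = greeting.put()   # persist the entity; returns its key
fetched = key.get()    # read it back by key, no SQL involved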
The Spring Framework works with GAE. However, the Spring Security module
(if used) requires workarounds. Apache Struts 1 is supported, and Struts 2
runs with workarounds.
The Django web framework and applications running on it can be used on App
Engine with modification.
Django-nonrel aims to allow Django to work with non-relational databases, and
the project includes support for App Engine.