
Axis Institute of Technology & Management

UNIT-1
Cloud Computing
"The cloud" refers to servers that are accessed over the Internet, and the software and databases that run on those servers. Cloud servers are located in data
centers all over the world. By using cloud computing, users and companies do not have to manage physical servers themselves or run software applications
on their own machines.

The cloud enables users to access the same files and applications from almost any device, because the computing and storage takes place on servers in a data
center, instead of locally on the user device. This is why a user can log in to their Instagram account on a new phone after their old phone breaks and still
find their old account in place, with all their photos, videos, and conversation history. It works the same way with cloud email providers like Gmail or
Microsoft Office 365, and with cloud storage providers like Dropbox or Google Drive.

For businesses, switching to cloud computing removes some IT costs and overhead: for instance, they no longer need to update and maintain their own
servers, as the cloud vendor they are using will do that. This especially makes an impact for small businesses that may not have been able to afford their own
internal infrastructure but can outsource their infrastructure needs affordably via the cloud. The cloud can also make it easier for companies to operate
internationally, because employees and customers can access the same files and applications from any location.

Working of Cloud Computing


Cloud computing is possible because of a technology called virtualization. Virtualization allows for the creation of a simulated, digital-only "virtual" computer
that behaves as if it were a physical computer with its own hardware. The technical term for such a computer is virtual machine. When properly implemented,
virtual machines on the same host machine are sandboxed from one another, so they do not interact with each other at all, and the files and applications from
one virtual machine are not visible to the other virtual machines even though they are on the same physical machine.

Virtual machines also make more efficient use of the hardware hosting them. By running many virtual machines at once, one server can run many virtual
"servers," and a data center becomes like a whole host of data centers, able to serve many organizations. Thus, cloud providers can offer the use of their
servers to far more customers at once than they would be able to otherwise, and they can do so at a low cost.

Even if individual servers go down, cloud servers in general should be always online and always available. Cloud vendors generally back up their services
on multiple machines and across multiple regions.

Users access cloud services either through a browser or through an app, connecting to the cloud over the Internet — that is, through many interconnected
networks — regardless of what device they are using.

Why is it called 'the cloud'?


"The cloud" started off as a tech industry slang term. In the early days of the Internet, technical diagrams often represented the servers and networking
infrastructure that make up the Internet as a cloud. As more computing processes moved to this servers-and-infrastructure part of the Internet, people began
to talk about moving to "the cloud" as a shorthand way of expressing where the computing processes were taking place. Today, "the cloud" is a widely
accepted term for this style of computing.
Evolution of Cloud Computing


Cloud computing allows users to access a wide range of services stored in the cloud or on the Internet. Cloud computing services include
computer resources, data storage, apps, servers, development tools, and networking protocols. It is most commonly used by IT companies
and for business purposes.
The idea behind cloud computing dates back to the time-sharing systems of the 1950s, and it evolved through distributed
computing into the modern technology known as cloud computing. Today's cloud services include those provided by Amazon, Google, and
Microsoft.

Distributed Systems
A distributed system is a composition of multiple independent systems that appear to users as a single entity. The purpose
of distributed systems is to share resources and use them effectively and efficiently. Distributed systems possess characteristics such
as scalability, concurrency, continuous availability, heterogeneity, and independence of failures. But the main problem with these systems
was that all of them were required to be present at the same geographical location. To solve this problem, distributed computing
led to three further types of computing: mainframe computing, cluster computing, and grid computing.
Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are responsible for
handling large volumes of data and massive input/output operations. Even today they are used for bulk-processing tasks such as online
transaction processing. These systems have almost no downtime and high fault tolerance. After distributed computing, mainframes increased
the processing capabilities of systems, but they were very expensive. To reduce this cost, cluster computing came as an alternative to mainframe
technology.
Cluster Computing
In the 1980s, cluster computing came as an alternative to mainframe computing. Each machine in a cluster was connected to the others by
a high-bandwidth network. Clusters were far cheaper than mainframe systems while being equally capable of high computation.
Also, new nodes could easily be added to a cluster when required. Thus the problem of cost was solved to some extent, but the
problem of geographical restriction persisted. To solve this, the concept of grid computing was introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced: different systems, placed at entirely different geographical
locations, were all connected via the Internet. These systems belonged to different organizations, so the grid consisted of
heterogeneous nodes. Although it solved some problems, new ones emerged as the distance between the nodes increased. The main
problem encountered was the low availability of high-bandwidth connectivity, along with other network-related issues. Thus,
cloud computing is often referred to as the "successor of grid computing".
Virtualization
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware that allows the
user to run multiple instances simultaneously on the same hardware. It is a key technology used in cloud computing and the base on which
major cloud computing services such as Amazon EC2 and VMware vCloud work. Hardware virtualization is still one of the most
common types of virtualization.
Web 2.0
Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have
interactive and dynamic web pages; it also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps,
Facebook, and Twitter. Needless to say, social media is possible only because of this technology. It gained major popularity in 2004.
Service Orientation
Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two
important concepts were introduced in this computing model: Quality of Service (QoS), which includes the SLA (Service
Level Agreement), and Software as a Service (SaaS).
Utility Computing
Utility computing is a computing model that defines service provisioning techniques for services such as compute, storage, and
infrastructure, which are provisioned on a pay-per-use basis.
Cloud Computing
Cloud computing means storing and accessing data and programs on remote servers hosted on the Internet instead of on the
computer's hard drive or a local server. Cloud computing is also referred to as Internet-based computing: it is a technology where resources
are provided as a service through the Internet to the user. The data that is stored can be files, images, documents, or any other kind of
storable data.
Advantages of Cloud Computing
 Cost Saving
 Data Redundancy and Replication
 Ransomware/Malware Protection
 Flexibility
 Reliability
 High Accessibility
 Scalable
Disadvantages of Cloud Computing
 Internet Dependency
 Issues in Security and Privacy
 Data Breaches
 Limitations on Control
Difference between Parallel Computing and Distributed Computing
Parallel computing and distributed computing are two important models of computing that play major roles in today's high-
performance computing. Both are designed to perform a large number of calculations by breaking processes down into several parallel
tasks; however, they differ in structure, function, and utilization. The following sections examine parallel
computing and distributed computing, their advantages, disadvantages, and applications.

What is Parallel Computing?


In parallel computing, multiple processors perform multiple assigned tasks simultaneously. Memory in parallel systems can be either
shared or distributed. Parallel computing provides concurrency and saves time and money.
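
As a concrete illustration of the idea above, here is a minimal Python sketch in which several processor cores work on pieces of one task at the same time. The workload (summing squares over ranges) is an illustrative stand-in for a real computation, not part of the original text.

```python
# Minimal sketch of parallel computing: four worker processes each
# compute a partial result concurrently, and the results are combined.

from multiprocessing import Pool

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

if __name__ == "__main__":
    # Split one large task into independent chunks.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:      # four workers run concurrently
        partials = pool.map(sum_of_squares, chunks)
    print("total:", sum(partials))       # combine the partial results
```
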

Advantages of Parallel Computing


 Increased Speed: Several calculations are executed concurrently, reducing the computation time required to
complete large-scale problems.
 Efficient Use of Resources: Takes full advantage of all the processing units available, making the best use of the
machine’s computational power.
 Scalability: The more processors built into the system, the more complex the problems that can be solved in a short time.
 Improved Performance for Complex Tasks: Best suited for workloads involving heavy numerical computation, such as numerical
simulation, scientific analysis and modeling, and data processing.
Disadvantages of Parallel Computing
 Complexity in Programming: Writing programs that organize tasks to run in parallel is considerably more difficult
than serial programming.
 Synchronization Issues: Processors operating concurrently must be kept synchronized, and the resulting coordination
overhead can create communication bottlenecks.
 Hardware Costs: Implementing parallel computing may require components such as multi-core
processors, which can be more costly than standard systems.
What is Distributed Computing?
In distributed computing, we have multiple autonomous computers that appear to the user as a single system. In distributed systems there
is no shared memory; computers communicate with each other through message passing. In distributed computing, a single task is
divided among different computers.
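
To make the message-passing model above concrete, here is a minimal Python sketch: a worker process listens on a TCP socket, and a coordinator sends it a subtask and receives the partial result. The port number and payload are hypothetical illustrations; real distributed systems use frameworks with richer protocols.

```python
# Minimal sketch of message passing between autonomous machines.
# Run as: `python msg.py worker` in one terminal, then `python msg.py`
# in another (both on the same machine here, for simplicity).

import socket
import sys

HOST, PORT = "127.0.0.1", 9009   # placeholder address of the worker node

def worker() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            task = conn.recv(1024).decode()            # receive a message
            result = str(sum(int(x) for x in task.split(",")))
            conn.sendall(result.encode())              # reply with a message

def coordinator() -> None:
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(b"1,2,3,4")                       # hand out a subtask
        print("partial result:", conn.recv(1024).decode())  # prints 10

if __name__ == "__main__":
    worker() if sys.argv[1:] == ["worker"] else coordinator()
```
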

Advantages of Distributed Computing


 Fault Tolerance: The failure of one node removes it from the computation, but this is not fatal for the entire
computation, since other computers continue participating in the process, making the system more reliable.
 Cost-Effective: Builds on existing hardware and can flexibly use commodity machines instead of requiring
expensive, specialized processors.
 Scalability: Distributed systems can scale and expand horizontally through the addition of more machines to the
network, so they can take on greater workloads and processes.
 Geographic Distribution: Distributed computing makes it possible to execute tasks at different locations, reducing latency.
Disadvantages of Distributed Computing
 Complexity in Management: Managing a distributed system is more difficult, since it may require dealing
with network latency and failures as well as synchronizing the information to be distributed.
 Communication Overhead: Communication between geographically distant nodes adds latency and can
significantly reduce overall performance.
 Security Concerns: In general, distributed systems are less secure than centralized systems, because distributed systems
heavily depend on a network.
Difference between Parallel Computing and Distributed Computing:
1. Parallel computing: many operations are performed simultaneously. Distributed computing: system components are located at different locations.
2. Parallel computing: a single computer is required. Distributed computing: uses multiple computers.
3. Parallel computing: multiple processors perform multiple operations. Distributed computing: multiple computers perform multiple operations.
4. Parallel computing: may have shared or distributed memory. Distributed computing: has only distributed memory.
5. Parallel computing: processors communicate with each other through a bus. Distributed computing: computers communicate with each other through message passing.
6. Parallel computing: improves system performance. Distributed computing: improves system scalability, fault tolerance, and resource-sharing capabilities.

Characteristics of Cloud Computing


There are many characteristics of cloud computing; here are a few of them:
1. On-demand self-service: Cloud computing services do not require human administrators; users themselves can
provision, monitor, and manage computing resources as needed.
2. Broad network access: Computing services are generally provided over standard networks and to heterogeneous devices.
3. Rapid elasticity: Computing services should be able to scale out and in quickly, on an as-needed basis.
Resources are provided whenever the user requires them and scaled back in as soon as the requirement ends.
4. Resource pooling: The IT resources (e.g., networks, servers, storage, applications, and services) are pooled and shared across multiple
applications and tenants. Multiple clients are served from the same physical resources.
5. Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource
provider with an account of what has been used. This is done for reasons such as billing and effective use of resources; a metering sketch follows this list.
6. Multi-tenancy: Cloud computing providers can support multiple tenants (users or organizations) on a single set of shared resources.
7. Virtualization: Cloud computing providers use virtualization technology to abstract underlying hardware resources and present them
as logical resources to users.
8. Resilient computing: Cloud computing services are typically designed with redundancy and fault tolerance in mind, which ensures
high availability and reliability.
9. Flexible pricing models: Cloud providers offer a variety of pricing models, including pay-per-use, subscription-based, and spot pricing,
allowing users to choose the option that best suits their needs.
10. Security: Cloud providers invest heavily in security measures to protect their users’ data and ensure the privacy of sensitive
information.
11. Automation: Cloud computing services are often highly automated, allowing users to deploy and manage resources with minimal
manual intervention.
12. Sustainability: Cloud providers are increasingly focused on sustainable practices, such as energy-efficient data centers and the use of
renewable energy sources, to reduce their environmental impact.
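
As promised in characteristic 5, here is a minimal sketch of the "measured service" idea: metering a tenant's resource usage and computing a pay-per-use bill. The unit rates and usage figures are hypothetical illustrations, not any provider's actual pricing.

```python
# Minimal metering/billing sketch for the "measured service" characteristic.

from dataclasses import dataclass

@dataclass
class UsageRecord:
    tenant: str
    vcpu_hours: float
    storage_gb_months: float
    egress_gb: float

# Hypothetical unit prices (USD).
RATES = {"vcpu_hour": 0.04, "storage_gb_month": 0.02, "egress_gb": 0.09}

def bill(record: UsageRecord) -> float:
    """Compute a tenant's charge from metered usage."""
    return round(
        record.vcpu_hours * RATES["vcpu_hour"]
        + record.storage_gb_months * RATES["storage_gb_month"]
        + record.egress_gb * RATES["egress_gb"],
        2,
    )

if __name__ == "__main__":
    usage = UsageRecord(tenant="acme", vcpu_hours=720,
                        storage_gb_months=50, egress_gb=100)
    print(f"{usage.tenant}: ${bill(usage)}")   # acme: $38.8
```
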

Fig – characteristics of cloud computing


Cloud Elasticity
Cloud Elasticity is the property of a cloud to grow or shrink capacity for CPU, memory, and
storage resources to adapt to the changing demands of an organization. Cloud Elasticity can be
automatic, without the need to perform capacity planning in advance, or it can be a
manual process in which the organization is notified that it is running low on resources and can then
decide to add or reduce capacity when needed. Monitoring tools offered by the cloud provider
dynamically adjust the resources allocated to an organization without impacting existing cloud-
based operations.

A cloud provider is said to have more or less elasticity depending on the degree to which it is able
to adapt to workload changes by provisioning or de-provisioning resources autonomously to match
demand as closely as possible. This eliminates the need for IT administration staff to monitor
resources to determine if additional CPU, memory, or storage resources are needed, or whether
excess capacity can be decommissioned.
Cloud Elasticity is often associated with horizontal scaling (scale-out) architecture, and it is generally
associated with public cloud provider resources that are billed on a pay-as-you-go basis. This
approach brings real-time cloud expenditures more closely into alignment with the actual
consumption of cloud services, for example when virtual machines (VMs) are spun up or down as
demand for a particular application or service varies over time.
Cloud Elasticity provides businesses and IT organizations the ability to meet any unexpected jump
in demand, without the need to maintain standby equipment to handle that demand. An
organization that normally runs certain processes on-premises can ‘cloudburst’ to take advantage
of Cloud Elasticity and meet that demand, returning to on-premises operations only when the
demand has passed. Thus, the result of cloud elasticity is savings in infrastructure costs, in human
capital, and in overall IT costs.

Why is Cloud Elasticity Important?


Without Cloud Elasticity, organizations would have to pay for capacity that remained unused for
most of the time, as well as manage and maintain that capacity with OS upgrades, patches, and
component failures. It is Cloud Elasticity that in many ways defines cloud computing and
differentiates it from other computing models such as client-server, grid computing, or legacy
infrastructure.
Cloud Elasticity helps businesses avoid either over-provisioning (deploying and allocating more
IT resources than needed to serve current demands) or under-provisioning (not allocating enough
IT resources to meet existing or imminent demands).
Organizations that over-provision spend more than is necessary to meet their needs, wasting
valuable capital which could be applied elsewhere. Even if an organization is already utilizing the
public cloud, without elasticity, thousands of dollars could be wasted on unused VMs every year.
Under-provisioning can lead to the inability to serve existing demand, which could lead to
unacceptable latency, user dissatisfaction, and ultimately loss of business as customers abandon
long and unresponsive online services and take their business to more responsive organizations. In
this way, the lack of Cloud Elasticity can lead to lost business and severe bottom-line impacts.
How does Cloud Elasticity Work?


Cloud Elasticity enables organizations to rapidly scale capacity up or down, either automatically
or manually. Cloud Elasticity can refer to ‘cloud bursting’ from on-premises infrastructure into the
public cloud for example to meet a sudden or seasonal demand. Cloud Elasticity can also refer to
the ability to grow or shrink the resources used by a cloud-based application.
Cloud Elasticity can be triggered and executed automatically based on workload trends, or can be
manually instantiated, often in minutes. Before organizations had the ability to leverage Cloud
Elasticity, they would have to either have additional stand-by capacity already on hand or would
need to order, configure, and install additional capacity, a process that could take weeks or months.
If and when demand eases, capacity can be removed in minutes. In this manner, organizations pay
only for the amount of resources in use at any given time, without the need to acquire or retire on-
premises infrastructure to meet elastic demand.
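
To illustrate the mechanism just described, here is a minimal sketch of threshold-based elastic scaling. The functions `get_average_cpu()`, and the scale-out/scale-in thresholds, are hypothetical stand-ins for a provider's monitoring and provisioning APIs.

```python
# Minimal sketch of automatic Cloud Elasticity: scale out when CPU is
# high, scale in when it is low, within fixed fleet-size bounds.

import random
import time

MIN_INSTANCES, MAX_INSTANCES = 2, 10
SCALE_OUT_AT, SCALE_IN_AT = 75.0, 25.0   # percent CPU (illustrative)

instances = MIN_INSTANCES

def get_average_cpu() -> float:
    """Stand-in for a metrics API; returns fleet-wide average CPU %."""
    return random.uniform(0, 100)

def autoscale() -> None:
    global instances
    cpu = get_average_cpu()
    if cpu > SCALE_OUT_AT and instances < MAX_INSTANCES:
        instances += 1        # provision one more VM
        print(f"CPU {cpu:.0f}% -> scaled out to {instances} instances")
    elif cpu < SCALE_IN_AT and instances > MIN_INSTANCES:
        instances -= 1        # decommission an idle VM
        print(f"CPU {cpu:.0f}% -> scaled in to {instances} instances")

if __name__ == "__main__":
    for _ in range(5):        # one evaluation per monitoring interval
        autoscale()
        time.sleep(1)
```
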
Typical use cases for Cloud Elasticity include
 Retail or e-tail holiday seasonal demand, in which demand increases dramatically from Black
Friday shopping specials until the end of the holiday season in early January
 School district registration which spikes in demand during the spring and wanes after the school
term begins
 Businesses that see a sudden spike in demand due to a popular product introduction or social media
boost, such as a streaming service like Netflix adding VMs and storage to meet the demand for a
new release or positive review.
 Disaster Recovery and Business Continuity (DR/BC): organizations can leverage public cloud
capabilities to provide off-site snapshots or backups of critical data and applications and spin up
VMs in the cloud if on-premises infrastructure suffers an outage or loss.
 Scale virtual desktop infrastructure in the cloud for temporary workers or contractors or for
applications such as remote learning
 Scale infrastructure into the cloud for test and development activities and tear it down once test/dev
work is complete.
 Unplanned projects with short timelines
 Temporary projects like data analytics, batch processing, media rendering, etc.

What are the Benefits of Cloud Elasticity?


The benefits of cloud elasticity include:
Agility: By eliminating the need to purchase, configure, and install new infrastructure when
demand changes, Cloud Elasticity prevents the need to plan for such unexpected demand spikes,
and enables organizations to meet any unexpected demand, whether due to seasonal spike, mention
on Reddit, or selection by Oprah’s book club.
Pay-as-needed pricing: Rather than paying for infrastructure whether or not it is being used,
Cloud Elasticity enables organizations to pay only for the resources that are in use at any given
point in time, closely tracking IT expenditures to actual demand in real time. In this way,
although spending may fluctuate, organizations can ‘right-size’ their infrastructure as elasticity
automatically allocates or deallocates resources on the basis of real-time demand. Amazon has
stated that organizations that adopt its instance scheduler with their EC2 cloud service can achieve
savings of over 60 percent versus organizations that do not.
High Availability: Cloud elasticity facilitates both high availability and fault tolerance, since VMs
or containers can be replicated if they appear to be failing, helping to ensure that business services
are uninterrupted and that users do not experience downtime. This helps ensure that users perceive
a consistent and predictable experience, even as resources are provisioned or deprovisioned
automatically and without impact on operations.
Efficiency: As with most automations, the ability to autonomously adjust cloud resources as
needed enables IT staff to shift their focus away from provisioning and onto projects that are more
beneficial to the organization.
Speed/Time-to-market: Organizations have access to capacity in minutes instead of the weeks or
months it may take through a traditional procurement process.

What are the Challenges in Cloud Elasticity?


Cloud Elasticity is only useful to organizations that experience rapid or periodic increases or
decreases in demand for IT services. Organizations with predictable, steady demand most likely
would not find an advantage in the benefits of Cloud Elasticity. Here are some potential challenges
with Cloud Elasticity
Time to provision: Although cloud VMs can be spun up on demand, there can still be a lag
of up to several minutes before a new VM is available for use. This may or may not be acceptable for a
specific application's or service's demands, and it can impact performance when a sudden surge occurs,
such as a sign-on storm at the beginning of the business day.
Cloud Provider Lock-in: Although all major public cloud providers offer Cloud Elasticity
solutions, each is implemented differently, which could mean that organizations become locked into a
single vendor for their cloud needs.
Security Impact: Cloud services that spin up and down in an elastic fashion can impact existing
security workflows and require them to be reimagined. Since elastic systems are ephemeral,
incident response may be impacted, for example when a server experiencing a security issue spins
down as demand wanes.
Resource Availability: Cloud Elasticity does require modifications to existing cloud or on-
premises deployments. Organizations that do not outsource their IT management will need to
acquire technical expertise, including architects, developers, and admins, to help ensure that a Cloud
Elasticity plan is properly configured to meet the organization’s specific needs. This can also
introduce a learning-curve delay as the newly acquired talent comes up to speed on the new
environments, languages, and automation tools and processes that need to be implemented.

What is Provisioning in Cloud Computing?


Provisioning in cloud computing involves allocating and configuring IT resources to meet the dynamic needs of an organization. It ensures seamless access
to necessary resources and configures components like operating systems, middleware, and applications.

Security measures such as firewalls, threat detection, and encryption are also integral to cloud provisioning.

Types of Provisioning in Cloud Computing


There are three types of provisioning in cloud computing, with varying degrees of flexibility, control, and pricing structure. They are:

 Advanced Cloud Provisioning

 Dynamic Cloud Provisioning

 User Cloud Provisioning

Advanced Provisioning
Advanced provisioning is ideal for businesses that need stable, reliable, and high-performance cloud resources. This method involves:

 Detailed Contracts: Agreements clearly define the responsibilities of both the provider and the client, including the specific resources allocated
and service level agreements (SLAs).

 Fixed Pricing Structures: Clients typically pay a fixed monthly or annual fee, making budgeting more predictable.

 Resource Guarantees: Providers allocate specific amounts of storage, CPU, RAM, and GPU (for graphic-intensive tasks) as agreed upon in the
contract.
Businesses with consistent workloads and resource requirements benefit most from this model. Examples include financial institutions, healthcare
organizations, and large enterprises with steady operational demands.

Dynamic Provisioning
Dynamic provisioning, or on-demand provisioning, is the most flexible and scalable cloud computing model. Key features include:

 Automatic Resource Allocation: Resources such as processing power, storage, and network bandwidth are allocated dynamically based on
current needs, reducing manual intervention.

 Cloud Automation: Automation tools streamline the provisioning process, ensuring resources are available instantly when needed. This includes
autoscaling, which adjusts resource allocation in real time based on usage patterns.

 Pay-Per-Use Pricing: Clients are billed based on the resources they consume, making it cost-effective for businesses with variable workloads.
Startups, seasonal businesses, and organizations with fluctuating resource needs benefit from dynamic provisioning. It supports rapid scaling up or down,
ensuring cost efficiency and flexibility.
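
As a concrete illustration of on-demand provisioning, here is a minimal sketch using boto3, the AWS SDK for Python. The AMI ID, instance type, region, and tag values are hypothetical placeholders; real values depend on your account and region, and running this requires AWS credentials.

```python
# Minimal sketch of dynamic (on-demand) provisioning with boto3:
# a VM is provisioned when needed and de-provisioned when demand passes,
# so pay-per-use charges stop accruing.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a virtual machine on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "demo"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned:", instance_id)

# De-provision when demand passes.
ec2.terminate_instances(InstanceIds=[instance_id])
```
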

User Self-Provisioning
User self-provisioning, also known as cloud self-service, empowers customers to manage their cloud resources directly through a provider’s platform.
Features include:

 Direct Access: Users can log into a web portal, select the resources they need (such as virtual machines, storage, and software), and deploy them
immediately.

 Autonomy and Agility: This model allows businesses to quickly adapt to changing needs without waiting for the provider’s intervention,
enhancing operational agility.

 Simple Subscription Process: Setting up an account and subscribing to services is straightforward, making it accessible for businesses of all
sizes.
Small to medium-sized businesses, individual developers, and teams that need fast, self-service access to cloud resources benefit most from this model. Such
solutions allow users to easily manage their subscriptions and resources, offering a high degree of control and flexibility.

Strategies to Address Cloud Provisioning Challenges


Cloud provisioning involves many challenges associated with resource allocation, network allocation, storage allocation, security, scalability, and flexibility.
The cloud provisioning process uses the following methodologies to meet these requirements.

Resource Allocation
Organizations may require multiple provisioning tools to effectively manage, customize, and utilize their cloud resources.
With workloads deployed on multiple cloud platforms, a centralized console is set up to monitor and manage all resources, resulting in a more
optimized and efficient allocation of the required resources.

The industry’s best practices for optimized resource allocation include the following:

 Load Balancing: Distribute incoming network traffic across multiple servers to ensure no single server is overwhelmed. This enhances the
performance and reliability of applications.

 Autoscaling: Configure autoscaling to automatically adjust the number of active servers based on the load. This ensures that resources are used
efficiently and cost-effectively, scaling up during high demand and scaling down when demand decreases.

 Capacity Planning: Project future resource needs based on current usage patterns and trends. This helps in planning and allocating resources to
meet future demands, ensuring scalability and avoiding resource shortages.
By implementing these practices, organizations can improve and optimize operational efficiency by eliminating manual workloads. It also
ensures that mission-critical, CPU-intensive apps keep performing without experiencing downtime.
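
To make the load-balancing practice above concrete, here is a minimal Python sketch of a round-robin balancer that spreads incoming requests across servers. The server names are hypothetical; production balancers (cloud load balancer services) also perform health checks and connection draining.

```python
# Minimal sketch of round-robin load balancing across backend servers.

from itertools import cycle

class RoundRobinBalancer:
    """Distribute requests across backends in strict rotation."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self, request_id: int) -> str:
        backend = next(self._pool)      # pick the next server in rotation
        return f"request {request_id} -> {backend}"

if __name__ == "__main__":
    lb = RoundRobinBalancer(["server-a", "server-b", "server-c"])
    for i in range(6):
        print(lb.route(i))              # a, b, c, a, b, c
```
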

Managing Engine Images


Managing engine images is essential for streamlining resource deployment and ensuring that applications run smoothly:

 Regular Updates: Continuously update engine images to include the latest software versions, patches, and security updates. This helps in
maintaining performance and security standards.

 Security Measures: Ensure that all engine images are secure and free from vulnerabilities. Regularly scan and test images for security threats.

 Availability: Keep engine images readily available to speed up the deployment process. This includes having a repository of commonly used
images that can be quickly accessed and deployed.

 Customization: Maintain a variety of engine images tailored to different application needs, reducing the time required for configuration and
deployment.

Network Configuration
In cloud computing, network configuration is the process of setting up and managing virtual networks, security groups, subnets, and other network resources
to ensure secure and efficient movement of data and resources.

Proper network configuration is vital for secure and efficient data flow within the cloud environment:

 Virtual Networks: Set up virtual networks to isolate and manage cloud resources effectively. This provides better control over data traffic and
enhances security.

 Security Groups: Implement security groups to define and enforce network access rules. This helps in protecting cloud resources from
unauthorized access.

 Subnets: Use subnets to segment network traffic and improve performance. This also allows for more granular control over network traffic
management.

 Firewall Configuration: Configure firewalls to monitor and control incoming and outgoing network traffic based on predetermined security
rules. This adds an extra layer of security to the cloud environment.

Storage Configuration
Storage configuration is another crucial aspect of cloud provisioning that involves deploying, managing, and optimizing cloud storage resources.

Effective storage configuration is key to ensuring data reliability and performance:

 Define Storage Requirements: Clearly define storage requirements for different applications and services. This helps in allocating the right
amount and type of storage.

 Storage Classes: Utilize different storage classes based on performance and cost requirements. For instance, use high-performance storage for
critical applications and more cost-effective storage for less demanding tasks.

 Resource Allocation: Allocate storage resources based on current and anticipated needs to prevent over-provisioning and under-provisioning.

 Data Management: Implement data management practices, such as data lifecycle policies, to manage data storage efficiently. This includes
archiving old data and deleting unnecessary files to free up space.
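
As an illustration of the data-lifecycle practice just listed, here is a minimal boto3 sketch that applies an S3 lifecycle rule: move objects to cheaper archive storage after 30 days and delete them after a year. The bucket name and prefix are hypothetical placeholders.

```python
# Minimal sketch of a data lifecycle policy on cloud storage (Amazon S3).

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",      # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},                 # applies to logs/
            "Transitions": [{"Days": 30,
                             "StorageClass": "GLACIER"}],  # archive tier
            "Expiration": {"Days": 365},                   # delete old data
        }]
    },
)
print("Lifecycle policy applied")
```
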
Monitoring and Maintenance


Another critical process in cloud provisioning is ensuring the health and performance of cloud infrastructure. Regular monitoring and maintenance are crucial for this:

 Continuous Monitoring: Set up continuous monitoring systems to track the performance and health of cloud resources. This includes monitoring
CPU usage, memory usage, disk I/O, and network performance.

 Performance Optimization: Regularly analyze performance data to identify and resolve bottlenecks. This helps in maintaining optimal
performance and preventing downtime.

 Routine Maintenance: Schedule regular maintenance activities, such as software updates, hardware checks, and system backups. This ensures
that the infrastructure remains up-to-date and reliable.

 Alerts and Notifications: Configure alerts and notifications to promptly inform IT teams of any issues or irregularities. This enables quick
response and resolution to minimize impact on services.
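
The sketch below illustrates the continuous-monitoring and alerting practices above: it samples local CPU and memory via psutil (`pip install psutil`) and flags threshold breaches. The thresholds are illustrative; a real deployment would push metrics to the cloud provider's monitoring service and route alerts to the IT team.

```python
# Minimal sketch of continuous monitoring with alert thresholds.

import time
import psutil

CPU_ALERT, MEM_ALERT = 80.0, 90.0   # percent thresholds (illustrative)

def check_once() -> None:
    cpu = psutil.cpu_percent(interval=1)    # CPU usage over 1 second
    mem = psutil.virtual_memory().percent   # RAM currently in use
    if cpu > CPU_ALERT or mem > MEM_ALERT:
        print(f"ALERT: cpu={cpu:.0f}% mem={mem:.0f}%")   # notify IT team
    else:
        print(f"ok: cpu={cpu:.0f}% mem={mem:.0f}%")

if __name__ == "__main__":
    for _ in range(5):                      # one sample per interval
        check_once()
        time.sleep(5)
```
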

Cloud Provisioning Best Practices


Automation & Orchestration
Automation and orchestration streamline cloud management by reducing manual tasks and enhancing efficiency. The main benefits of aligning automation
and orchestration are improved efficiency, reduced operational costs, enhanced reliability, and faster response times to changes in demand.

It involves:

 Cloud Automation: This involves using software tools to automate repetitive tasks, such as provisioning and de-provisioning resources, applying
patches, and managing backups. Automation reduces human error and speeds up processes, making your cloud operations more efficient.

 Orchestration Tools: These tools coordinate and manage automated tasks across complex workflows and multi-cloud environments. Examples
include Kubernetes for container orchestration and Terraform for infrastructure as code (IaC).

 AIOps: Artificial Intelligence for IT Operations (AIOps) leverages machine learning to enhance automation and orchestration by predicting
issues and optimizing resource allocation. This ensures smooth and efficient cloud operations.

Scalability
Scalability ensures continuous availability and performance during traffic spikes, supports business growth without significant downtime, and optimizes
resource usage and costs. It involves:

 Vertical Scaling (Scaling Up/Down): Add or remove resources (like CPU, RAM, storage) to an existing server to handle increased or decreased
workloads. This method is straightforward but has a limit based on the physical server’s capacity.

 Horizontal Scaling (Scaling Out/In): Add more servers to distribute the load across multiple machines. This method offers more flexibility and
virtually unlimited growth potential, but it requires sophisticated load balancing and application design.

 Scalability Testing: Regular testing helps ensure that your infrastructure can handle growth. It involves stress testing to measure how the system
performs under heavy loads, network request handling, CPU load analysis, and memory usage monitoring.
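
Here is a minimal sketch of the scalability testing just mentioned: fire concurrent HTTP requests at a service and report latency percentiles. The URL is a placeholder; run such tests only against systems you own.

```python
# Minimal load-test sketch: 100 requests from 20 concurrent workers.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/"   # placeholder endpoint under test

def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=20) as pool:  # 20 concurrent users
        latencies = list(pool.map(timed_request, range(100)))
    latencies.sort()
    print(f"median={latencies[49]:.3f}s  p95={latencies[94]:.3f}s")
```
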

Security
Security is a central area in cloud provisioning, involving several protective and auditing measures. Compliance with industry standards ensures that cloud
infrastructure is kept up to date and secured to standard benchmarks.

It involves:

 Multi-Factor Authentication (MFA): Adds an extra layer of security by requiring two or more verification steps to access resources. This
reduces the risk of unauthorized access.

 Data Encryption: Encrypt data both at rest and in transit to prevent unauthorized access. Use strong encryption protocols like AES-256 for data
storage and TLS for data transmission.
 Threat Intelligence: Implement tools that continuously monitor for threats and vulnerabilities. These tools can detect unusual activities and alert
security teams to potential breaches.

 Disaster Recovery Plan: Develop and regularly update a disaster recovery plan to ensure business continuity in case of a major incident. This
includes regular backups, redundant systems, and clear procedures for restoring services.

 Compliance: Ensure your cloud infrastructure complies with industry standards and regulations such as SOC 2, ISO 27001, and PCI DSS.
Compliance ensures that you meet the required security and privacy standards.

 Security Best Practices: Regularly update and patch systems, conduct security audits and penetration testing, and employ robust access controls.
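
To illustrate the data-encryption practice in the list above, here is a minimal sketch of encrypting data at rest using the symmetric Fernet scheme from the `cryptography` package (`pip install cryptography`). Fernet uses AES-128 internally; this is a simplified stand-in for the managed AES-256 encryption and key-management services that cloud providers typically offer.

```python
# Minimal sketch of encrypting data at rest before storage.

from cryptography.fernet import Fernet

# In practice the key would live in a key-management service, not in code.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"customer record: account=12345"
ciphertext = fernet.encrypt(plaintext)       # what gets stored at rest
print("stored:", ciphertext[:32], b"...")

recovered = fernet.decrypt(ciphertext)       # decryption requires the key
assert recovered == plaintext
print("recovered:", recovered)
```
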

Cloud Services
The resources available in the cloud are known as "services," since they are actively managed by a cloud provider. Cloud services include infrastructure,
applications, development tools, and data storage, among other products. These services are sorted into several different categories, or service models.

Cloud Computing Service Models


Software-as-a-Service (SaaS): Instead of users installing an application on their device, SaaS applications are hosted on cloud servers, and users access
them over the Internet. SaaS is like renting a house: the landlord maintains the house, but the tenant mostly gets to use it as if they owned it. Examples of
SaaS applications include Salesforce, MailChimp, and Slack.

Platform-as-a-Service (PaaS): In this model, companies don't pay for hosted applications; instead they pay for the things they need to build their own
applications. PaaS vendors offer everything necessary for building an application, including development tools, infrastructure, and operating systems, over
the Internet. PaaS can be compared to renting all the tools and equipment necessary for building a house, instead of renting the house itself. PaaS examples
include Heroku and Microsoft Azure.

Infrastructure-as-a-Service (IaaS): In this model, a company rents the servers and storage they need from a cloud provider. They then use that cloud
infrastructure to build their applications. IaaS is like a company leasing a plot of land on which they can build whatever they want — but they need to provide
their own building equipment and materials. IaaS providers include DigitalOcean, Google Compute Engine, and OpenStack.

Formerly, SaaS, PaaS, and IaaS were the three main models of cloud computing, and essentially all cloud services fit into one of these categories.

Cloud Infrastructure
Cloud infrastructure refers to the resources needed for hosting and building applications in the cloud. IaaS and PaaS services are often included in an
organization's cloud infrastructure, although SaaS can be said to be part of cloud infrastructure as well, and FaaS offers the ability to construct infrastructure
as code.

Cloud Deployments / Types of Clouds


In contrast to the models discussed above, which define how services are offered via the cloud, these different cloud deployment types have to do with where
the cloud servers are and who manages them.

The most common cloud deployments are:

 Private cloud: A private cloud is a server, data center, or distributed network wholly dedicated to one organization.

 Public cloud: A public cloud is a service run by an external vendor that may include servers in one or multiple data centers. Unlike a private
cloud, public clouds are shared by multiple organizations. Using virtual machines, individual servers may be shared by different companies,
a situation that is called "multitenancy" because multiple tenants are renting server space within the same server.

 Hybrid cloud: hybrid cloud deployments combine public and private clouds, and may even include on-premises legacy servers. An organization
may use their private cloud for some services and their public cloud for others, or they may use the public cloud as backup for their private
cloud.

 Multi-cloud: multi-cloud is a type of cloud deployment that involves using multiple public clouds. In other words, an organization with a multi-
cloud deployment rents virtual servers and services from several external vendors — to continue the analogy used above, this is like leasing
several adjacent plots of land from different landlords. Multi-cloud deployments can also be hybrid cloud, and vice versa.
Architecture of Cloud Computing


Cloud computing is one of the most in-demand technologies of the current time and is giving a
new shape to every organization by providing on-demand virtualized services and resources. From
small to medium and medium to large, every organization uses cloud computing services for
storing information and accessing it from anywhere at any time, with only the help of the
internet. In this article, we will learn more about the internal architecture of cloud computing.
What is Cloud Computing?
Cloud Computing means storing and accessing the data and programs on remote servers that are
hosted on the internet instead of the computer’s hard drive or local server. Cloud computing is also
referred to as Internet-based computing, it is a technology where the resource is provided as a
service through the Internet to the user. The data that is stored can be files, images, documents, or
any other kind of storable data. Transparency, scalability, security, and intelligent monitoring are
some of the most important constraints that every cloud infrastructure should satisfy.
Current research on other important constraints is helping cloud computing systems come up
with new features and strategies capable of providing more advanced cloud
solutions.
Cloud Computing Architecture
Architecture of cloud computing is the combination of both SOA (Service Oriented
Architecture) and EDA (Event Driven Architecture). Client infrastructure, application, service,
runtime cloud, storage, infrastructure, management and security all these are the components of
cloud computing architecture.
The cloud architecture is divided into two parts:
1. Frontend
2. Backend
The below figure represents an internal architectural view of cloud computing.
Fig – Architecture of Cloud Computing

1. Frontend
The frontend of the cloud architecture refers to the client side of the cloud computing system. It
contains all the user interfaces and applications that the client uses to access the cloud
computing services and resources, for example, a web browser used to access the cloud platform.
2. Backend
The backend refers to the cloud itself, which is used by the service provider. It contains the resources,
manages them, and provides security mechanisms. Along with this, it includes
huge storage, virtual applications, virtual machines, traffic control mechanisms, deployment
models, and so on.
Components of Cloud Computing Architecture
Following are the components of Cloud Computing Architecture
1. Client Infrastructure – Client infrastructure is part of the frontend component. It contains
the applications and user interfaces required to access the cloud platform; in other
words, it provides a GUI (Graphical User Interface) to interact with the cloud.
2. Application – The application is part of the backend component: the software or platform
that the client accesses, providing the service in the backend as per the client's requirement.
3. Service – The service in the backend refers to the three major types of cloud-based services: SaaS,
PaaS, and IaaS. It also manages which type of service the user accesses.
4. Runtime Cloud – The runtime cloud in the backend provides the execution and runtime
platform/environment to the virtual machines.
5. Storage – Storage in the backend provides flexible and scalable storage services and management
of stored data.
6. Infrastructure – Cloud infrastructure in the backend refers to the hardware and software
components of the cloud, including servers, storage, network devices, and virtualization software.
7. Management – Management in the backend refers to the management of backend components such as
the application, service, runtime cloud, storage, infrastructure, and other security mechanisms.
8. Security – Security in the backend refers to the implementation of different security mechanisms
to secure cloud resources, systems, files, and infrastructure for end users.
9. Internet – The Internet connection acts as the medium, or a bridge, between frontend and backend,
establishing the interaction and communication between them.
10. Database – The database in the backend provides databases for storing structured data, such as
SQL and NoSQL databases. Examples of database services include Amazon RDS, Microsoft
Azure SQL Database, and Google Cloud SQL.
11. Networking – Networking in the backend refers to services that provide networking infrastructure for
applications in the cloud, such as load balancing, DNS, and virtual private networks.
12. Analytics – Analytics in the backend refers to services that provide analytics capabilities for data in the
cloud, such as data warehousing, business intelligence, and machine learning.
Benefits of Cloud Computing Architecture
 Makes the overall cloud computing system simpler.
 Meets data processing requirements more efficiently.
 Helps in providing high security.
 Makes the system more modular.
 Results in better disaster recovery.
 Gives good user accessibility.
 Reduces IT operating costs.
 Provides a high level of reliability.
 Supports scalability.
NIST Cloud Computing Reference Architecture and Taxonomy
The NIST Cloud Computing Reference Architecture and Taxonomy was designed to accurately
communicate the components and offerings of cloud computing. The guiding principles used to
create the reference architecture were:
1. Develop a vendor-neutral architecture that is consistent with the NIST definition
2. Develop a solution that does not stifle innovation by defining a prescribed technical
solution
Fig – NIST Cloud Computing Reference Architecture and Taxonomy
Actors in Cloud Computing
The NIST cloud computing reference architecture defines five major actors. Each actor is an entity
(a person or an organization) that participates in a transaction or process and/or performs tasks in
cloud computing. The five actors are:
 Cloud user/cloud customer: A person or organization that accesses paid or free cloud services and
resources within a cloud. These users are generally granted system administrator privileges
to the instances they start (and only those instances, as opposed to the host itself or other
components).
 Cloud provider: A company that provides a cloud-based platform, infrastructure,
application, or storage services to other organizations and/or individuals, usually for a fee
(otherwise known to clients as “as a service”).
 Cloud auditor: A party that can conduct independent assessments of cloud services,
information system operations, performance, and security of the cloud implementation.
 Cloud carrier: An intermediary that provides connectivity and transport of cloud services
between cloud consumers and cloud providers.
 Cloud services broker (CSB): The CSB is typically a third-party entity or company that
looks to extend value to multiple customers of cloud-based services through relationships
with multiple cloud service providers. It acts as a liaison between cloud services customers
and cloud service providers, selecting the best provider for each customer and monitoring
the services. A CSB provides:
 Service intermediation: A CSB enhances a given service by improving some
specific capability and providing value-added services to cloud consumers. The
improvement can be managing access to cloud services, identity management,
performance reporting, enhanced security, etc.
 Service aggregation: A CSB combines and integrates multiple services into one
or more new services. The broker provides data integration and ensures the secure
data movement between the cloud consumer and multiple cloud providers.
 Service arbitrage: Service arbitrage is similar to service aggregation except that
the services being aggregated are not fixed. Service arbitrage means a broker has
the flexibility to choose services from multiple agencies. The cloud broker, for
example, can use a credit-scoring service to measure and select an agency with the
best score.
Cloud Deployment Models

Most cloud hubs have tens of thousands of servers and storage devices to enable fast loading. It is often
possible to choose a geographic area to put the data "closer" to users. Thus, deployment models for cloud
computing are categorized based on their location. To know which model would best fit the requirements
of your organization, let us first learn about the various types.

Public Cloud

The name says it all: it is accessible to the public. Public deployment models in the cloud are perfect for
organizations with growing and fluctuating demands. They also make a great choice for companies with low
security concerns: you pay a cloud service provider for networking services, compute virtualization,
and storage available on the public internet. The public cloud is also a great delivery model for development
and testing teams; its configuration and deployment are quick and easy, making it an ideal choice for test
environments.
Benefits of Public Cloud

o Minimal Investment - As a pay-per-use service, there is no large upfront cost, and it is ideal for
businesses that need quick access to resources
o No Hardware Setup - The cloud service providers fully fund the entire infrastructure
o No Infrastructure Management - Using the public cloud does not require an in-house team.

Limitations of Public Cloud

o Data Security and Privacy Concerns - Since it is accessible to all, it does not fully protect against
cyber-attacks and could lead to vulnerabilities.
o Reliability Issues - Since the same server network is open to a wide range of users, it can suffer
malfunctions and outages
o Service/License Limitation - While there are many resources you can share with tenants, there
is a usage cap.

Private Cloud

Now that you understand what the public cloud could offer you, of course, you are keen to know what a
private cloud can do. Companies that look for cost efficiency and greater control over data & resources will
find the private cloud a more suitable choice.

It means that it will be integrated with your data center and managed by your IT team. Alternatively, you
can also choose to host it externally. The private cloud offers bigger opportunities that help meet specific
organizations' requirements when it comes to customization. It's also a wise choice for mission-critical
processes that may have frequently changing requirements.

Benefits of Private Cloud


o Data Privacy - It is ideal for storing corporate data where only authorized personnel get access
o Security - Segmentation of resources within the same infrastructure can help with better access
control and higher levels of security.
o Supports Legacy Systems - This model supports legacy systems that cannot access the public cloud.

Limitations of Private Cloud

o Higher Cost - With the benefits you get, the investment is also larger than for the public cloud.
Here, you pay for software, hardware, and resources for staff and training.
o Fixed Scalability - Scalability is constrained by the hardware you choose
o High Maintenance - Since it is managed in-house, maintenance costs also increase.

Community Cloud

The community cloud operates in a way that is similar to the public cloud. There's just one difference: it
allows access only to a specific set of users who share common objectives and use cases. This type of
cloud deployment model is managed and hosted internally or by a third-party vendor;
you can also choose a combination of the two.

Benefits of Community Cloud

o Smaller Investment - A community cloud is much cheaper than the private & public cloud and
provides great performance
o Setup Benefits - The protocols and configuration of a community cloud must align with industry
standards, allowing customers to work much more efficiently.

Limitations of Community Cloud

o Shared Resources - Due to restricted bandwidth and storage capacity, community resources often
pose challenges.
o Not as Popular - Since this is a recently introduced model, it is not that popular or available across
industries

Hybrid Cloud

As the name suggests, a hybrid cloud is a combination of two or more cloud architectures. While each
model in the hybrid cloud functions differently, it is all part of the same architecture. Further, as part of this
deployment of the cloud computing model, the internal or external providers can offer resources.

Let's understand the hybrid model better. A company will prefer storing critical data on a private cloud,
while less sensitive data can be stored on a public cloud. The hybrid cloud is also frequently used for 'cloud
bursting': if an organization runs an application on-premises and it comes under heavy load, the application
can burst into the public cloud for extra capacity.

Benefits of Hybrid Cloud

o Cost-Effectiveness - The overall cost of a hybrid solution decreases since it majorly uses the public
cloud to store data.
o Security - Since data is properly segmented, the chances of data theft from attackers are
significantly reduced.
o Flexibility - With higher levels of flexibility, businesses can create custom solutions that fit their
exact requirements

Limitations of Hybrid Cloud

o Complexity - Setting up a hybrid cloud is complex, since it needs to integrate two or more cloud
architectures
o Specific Use Case - This model makes more sense for organizations that have multiple use cases
or need to separate critical and sensitive data

While numerous benefits are realized with hybrid cloud deployments and cloud models, these can
often be time consuming and laborious at the start, as most companies and entities encounter
integration and migration issues at the outset.
Issues in Cloud Computing
Cloud computing is a new name for an old concept: the delivery of computing services
from a remote location. Cloud computing is Internet-based computing, where shared
resources, software, and information are provided to computers and other devices on
demand.
These are major issues in Cloud Computing:
1. Privacy: User data can be accessed by the host company with or without the user's
permission. The service provider may access the data that is on the cloud at any point
in time, and could accidentally or deliberately alter or even delete information.
2. Compliance: There are many regulations in place related to data and hosting. To
comply with regulations (the Federal Information Security Management Act, the Health
Insurance Portability and Accountability Act, etc.), the user may have to adopt
deployment modes that are expensive.
3. Security: Cloud-based services involve third parties for storage and security. Can one
assume that a cloud-based company will protect and secure one's data when one is using
its services at a very low price or for free? Such providers may share users' information
with others. Security therefore presents a real threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the
environment. Because data centers have a significant environmental impact, countries
where the climate favors natural cooling and renewable electricity is readily available,
such as Finland, Sweden, and Switzerland, are trying to attract cloud computing data
centers. But beyond these natural advantages, would such countries have enough
technical infrastructure to sustain high-end clouds?
5. Abuse: While providing cloud services, it should be ascertained that the client is not
purchasing cloud computing services for a nefarious purpose. In 2009, a banking
Trojan illegally used the popular Amazon service as a command and control channel
that issued software updates and malicious instructions to PCs infected by the
malware. Hosting companies and servers should therefore have proper measures in
place to address such issues.

6. Higher Cost: If you want to use cloud services uninterruptedly, you need a
powerful network with higher bandwidth than an ordinary internet connection; and if
your organization is large, an ordinary cloud service subscription may not suit it.
Otherwise, you might face trouble using an ordinary cloud service while working on
complex projects and applications. This is a major problem for small organizations,
as it restricts them from diving into cloud technology for their business.
7. Recovery of lost data in contingency: Before subscribing to any cloud service
provider, go through all its norms and documentation and check whether its services
match your requirements and whether it has a sufficient, well-maintained resource
infrastructure with proper upkeep. Once you subscribe to the service, you essentially
hand your data over to a third party. If you choose a proper cloud service, then you
will not need to worry about recovering lost data in any contingency.
8. Upkeep (management) of the cloud: Maintaining a cloud is a herculean task, because
a cloud architecture contains a large resource infrastructure along with other challenges
and risks, such as user satisfaction. Since users usually pay for how much of the
resources they have consumed, it can be hard to decide how much should be charged
when a user wants scalability and extended services.
9. Lack of resources/skilled expertise: One of the major issues that companies and
enterprises face today is the lack of resources and skilled employees. Every second
organization seems interested in, or has already moved to, cloud services. The
workload in the cloud is therefore increasing, and cloud service hosting companies
need continuous, rapid advancement. Due to these factors, organizations are having a
tough time keeping up to date with the tools. As new tools and technologies emerge
every day, more skilled and trained employees are needed. These challenges can only
be minimized through additional training of IT and development staff.
10. Pay-per-use service charges: Cloud computing services are on-demand services: a
user can extend or compress the volume of resources as needed and pays only for what
has been consumed. This makes it difficult to define a fixed cost for a particular
quantity of services, and such ups and downs in price make budgeting for cloud
computing difficult and intricate. It is not easy for a firm's owner to predict demand
and its fluctuations across seasons and various events, so it is hard to build a budget
for a service that could consume several months of the budget in a few days of heavy
use, as the small worked example below illustrates.
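
A minimal sketch of how a metered bill is computed, in Java; every rate and usage figure below is made
up purely for illustration and does not correspond to any real provider's pricing.

// Metered, pay-per-use billing: total = sum(usage_i * rate_i).
// All rates and usage figures are hypothetical.
public class UsageBill {
    public static void main(String[] args) {
        double vmHours = 720;          // one VM running for a 30-day month
        double storageGbMonths = 150;  // average data kept in storage
        double egressGb = 40;          // outbound data transfer

        double ratePerVmHour   = 0.05; // $/VM-hour (hypothetical)
        double ratePerGbMonth  = 0.02; // $/GB-month (hypothetical)
        double ratePerEgressGb = 0.09; // $/GB egress (hypothetical)

        double total = vmHours * ratePerVmHour
                     + storageGbMonths * ratePerGbMonth
                     + egressGb * ratePerEgressGb;
        System.out.printf("Estimated monthly bill: $%.2f%n", total);
        // A traffic spike that doubles vmHours and egressGb roughly doubles
        // the bill too, which is exactly the budgeting difficulty noted above.
    }
}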

Eucalyptus
An open-source cloud is built from software or applications that are publicly available,
which users can set up in the cloud for their own purposes or for their organization.
Eucalyptus is a Linux-based open-source software architecture for cloud computing
and also a storage platform that implements Infrastructure as a Service (IaaS). It provides
quick and efficient computing services. Eucalyptus was designed to provide services
compatible with Amazon's EC2 cloud and Simple Storage Service (S3).

Eucalyptus Architecture
Eucalyptus CLIs can manage both Amazon Web Services and their own private instances.
Clients are free to migrate instances from Eucalyptus to Amazon Elastic Compute Cloud.
The virtualization layer manages the network, storage, and compute, and instances are
isolated from one another by hardware virtualization.
Important Features are:-
1. Images: For example, a Eucalyptus Machine Image is a software module that is
bundled and uploaded to the cloud.
2. Instances: When we run an image and utilize it, it becomes an instance.
3. Networking: It can be further subdivided into three modes: Static mode (allocates
IP addresses to instances), System mode (assigns a MAC address and attaches the
instance's network interface to the physical network via the Node Controller), and
Managed mode (manages the local network of instances).
4. Access Control: It is used to impose restrictions on clients.


5. Elastic Block Storage: It provides block-level storage volumes that can be attached
to an instance.
6. Auto-scaling and Load Balancing: It is used to create or destroy instances or
services based on demand.
Components of Architecture
 Node Controller manages the lifecycle of instances running on each node. It interacts
with the operating system, the hypervisor, and the Cluster Controller, and controls the
working of VM instances on the host machine.
 Cluster Controller manages one or more Node Controllers and communicates with
the Cloud Controller. It gathers information and schedules VM execution.
 Storage Controller (Walrus) allows the creation of snapshots of volumes and provides
persistent block storage for VM instances. The Walrus Storage Controller is a simple
file storage system: it stores images and snapshots, and stores and serves files using
S3 (Simple Storage Service) APIs.
 Cloud Controller is the front-end for the entire architecture. It exposes EC2/S3-compliant
web services to client tools on one side and interacts with the rest of the components on
the other side.
Operation Modes Of Eucalyptus
 Managed Mode: Offers numerous security groups to users, as the network is large. Each
security group is assigned a set or a subset of IP addresses, and ingress rules are applied
through the security groups specified by the user. The network is isolated by VLAN
between the Cluster Controller and the Node Controller. Two IP addresses are assigned
to each virtual machine.
 Managed (No VLAN) Mode: The root user on a virtual machine can snoop into
other virtual machines running on the same network layer. It does not provide VM
network isolation.
 System Mode: The simplest of all modes, with the fewest features. A MAC address is
assigned to a virtual machine instance and attached to the Node Controller's bridge
Ethernet device.
 Static Mode: Similar to System mode but with more control over the assignment of
IP addresses: each MAC address/IP address pair is mapped to a static entry within the
DHCP server, and the next free MAC/IP pair is assigned to the next instance.
Advantages Of The Eucalyptus Cloud
1. Eucalyptus can be used to build both private and public Eucalyptus clouds.
2. Amazon machine images and Eucalyptus machine images can be run on either cloud.
3. Its API is compatible with the Amazon Web Services APIs, so standard AWS tooling
can talk to a Eucalyptus cloud (see the sketch after this list).
4. Eucalyptus can be used with DevOps tools like Chef and Puppet.
5. Although it is not as popular yet, it has the potential to be an alternative to
OpenStack and CloudStack.
6. It can be used to assemble hybrid, public, and private clouds.
7. It allows users to turn their own data centers into a private cloud and, hence,
extend the services to other organizations.
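
As a hedged illustration of point 3, the sketch below points the standard AWS SDK for Java (v1) EC2
client at a Eucalyptus front-end. The endpoint URL, region label, and credentials are placeholders; only
the SDK classes themselves are real, and the exact endpoint path depends on the Eucalyptus version.

// Listing instances on an EC2-compatible Eucalyptus cloud via the AWS SDK
// for Java v1. Endpoint and credentials below are placeholders.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Reservation;

public class EucalyptusList {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
                // Point the standard EC2 client at the Eucalyptus front-end
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://ec2.cloud.example.edu:8773/", "eucalyptus"))
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                .build();

        for (Reservation r : ec2.describeInstances().getReservations()) {
            r.getInstances().forEach(i ->
                    System.out.println(i.getInstanceId() + " " + i.getState().getName()));
        }
    }
}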

Nimbus

Nimbus is a powerful toolkit focused on converting a computer cluster into an Infrastructure-as-a-Service (IaaS)
cloud for scientific communities. Essentially, it allows the deployment and configuration of virtual machines
(VMs) on remote resources to create an environment suited to the users' requirements. Written
in Python and Java, it is free and open-source software, released under the Apache License.

Nimbus consists of two basic products:

 Nimbus Infrastructure is an open source EC2/S3-compatible IaaS solution with features that benefit scientific
community interests, like support for auto-configuring clusters, proxy credentials, batch schedulers, best-effort
allocations, etc.
 Nimbus Platform is an integrated set of tools for a multi-cloud environment that automates and simplifies the
work with infrastructure clouds (deployment, scaling, and management of cloud resources) for scientific users.

This toolkit is compatible with Amazon's network protocols via EC2-based clients and S3 REST API clients, as
well as the SOAP API and REST API implemented in Nimbus. It also provides support for X509
credentials, fast propagation, multiple protocols, and compartmentalized dependencies. Nimbus features flexible
user, group and workspace management, request authentication and authorization, and per-client usage tracking.

NIMBUS KEEPS DEVELOPERS, PROVIDERS AND USERS SATISFIED

To open up the full power and versatility of IaaS to scientific users, the Nimbus project developers targeted three
main goals and their open-source implementations:

 Give capabilities to providers of resources for private or community IaaS clouds development. The Nimbus
Workspace Service enables lease of computational resources by deploying virtual machines on those resources.
Cumulus is an open source implementation of the S3 REST API that was built for scalable quota-based storage
cloud implementation and multiple storage cloud configuration.
 Give capabilities to users for IaaS clouds application. Among Nimbus scaling tools (users can automatically
scale across multiple distributed providers) the Nimbus Context Broker is especially robust. It coordinates large
virtual cluster launches automatically and repeatedly using a common configuration and security context across
resources.
 Give capabilities to developers for extension, experimentation and customization of IaaS. For instance, the
Workspace Service can support several virtualization implementations (either Xen or KVM), resource
management options (including schedulers such as Portable Batch System), interfaces (including compatibility
with Amazon EC2), and other options.

OpenNebula
OpenNebula is a simple, feature-rich and flexible solution for the management of virtualised data
centres. It enables private, public and hybrid clouds. Here are a few facts about this solution.

OpenNebula is an open source cloud middleware solution that manages heterogeneous distributed data
centre infrastructures. It is designed to be a simple but feature-rich, production-ready, customisable
solution to build and manage enterprise clouds—simple to install, update and operate by the
administrators; and simple to use by end users. OpenNebula combines existing virtualisation
technologies with advanced features for multi-tenancy, automated provisioning and elasticity. A built-
in virtual network manager maps virtual networks to physical networks. Distributions such as Ubuntu
and Red Hat Enterprise Linux have already integrated OpenNebula. As you’ll learn in this article, you
can set up OpenNebula by installing a few packages and performing some cursory configurations.
OpenNebula supports Xen, KVM and VMware hypervisors.

The OpenNebula deployment model


An OpenNebula deployment is modelled after the classic cluster architecture. Figure 1 shows the
layout of the OpenNebula deployment model.

Master node: A single gateway or front-end machine, sometimes also called the master node, is
responsible for queuing, scheduling and submitting jobs to the machines in the cluster. It runs several
other OpenNebula services mentioned below:

 Provides an interface to the user to submit virtual machines and monitor their status.
 Manages and monitors all virtual machines running on different nodes in the cluster.
 It hosts the virtual machine repository and also runs a transfer service to manage the transfer
of virtual machine images to the concerned worker nodes.
 Provides an easy-to-use mechanism to set up virtual networks in the cloud.
 Finally, the front-end allows you to add new machines to your cluster.
Worker node: The other machines in the cluster, known as ‘worker nodes’, provide raw computing
power for processing the jobs submitted to the cluster. The worker nodes in an OpenNebula cluster are
machines that deploy a virtualisation hypervisor, such as VMware, Xen or KVM.
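
As a hedged sketch of how a client talks to the front-end, the snippet below uses OpenNebula's Java OCA
(OpenNebula Cloud API) bindings to submit a VM template to the master node's XML-RPC endpoint. The
endpoint, credentials, and image name are placeholders; check the OCA documentation for the version
matching your OpenNebula release.

// Submitting a VM template to an OpenNebula front-end via the Java OCA.
// Endpoint, credentials, and template values are hypothetical.
import org.opennebula.client.Client;
import org.opennebula.client.OneResponse;
import org.opennebula.client.vm.VirtualMachine;

public class SubmitVm {
    public static void main(String[] args) throws Exception {
        // The front-end (master node) exposes an XML-RPC endpoint.
        Client one = new Client("oneadmin:password",
                "http://frontend.example.com:2633/RPC2");

        // A minimal VM template: 1 vCPU, 512 MB RAM, one disk image.
        String template = "NAME = test-vm\n"
                + "CPU = 1\n"
                + "MEMORY = 512\n"
                + "DISK = [ IMAGE = \"ttylinux\" ]\n";

        OneResponse rc = VirtualMachine.allocate(one, template);
        if (rc.isError()) {
            System.err.println("Allocation failed: " + rc.getErrorMessage());
        } else {
            System.out.println("Created VM with id " + rc.getMessage());
        }
    }
}
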
CloudSim

CloudSim is an open-source framework used to simulate cloud computing
infrastructure and services. It is developed by the CLOUDS Lab organization and is
written entirely in Java. It is used for modelling and simulating a cloud computing
environment as a means of evaluating a hypothesis prior to software development, in
order to reproduce tests and results.
For example, if you were to deploy an application or a website on the cloud and
wanted to test the services and load that your product can handle and also tune its
performance to overcome bottlenecks before risking deployment, then such
evaluations could be performed by simply coding a simulation of that environment
with the help of various flexible and scalable classes provided by the CloudSim
package, free of cost.

Benefits of Simulation over the Actual Deployment:

Following are the benefits of CloudSim:


 No capital investment involved. With a simulation tool like CloudSim there is no
installation or maintenance cost.
 Easy to use and Scalable. You can change the requirements such as adding or
deleting resources by changing just a few lines of code.
 Risks can be evaluated at an earlier stage. In Cloud Computing utilization of
real testbeds limits the experiments to the scale of the testbed and makes the
reproduction of results an extremely difficult undertaking. With simulation, you
can test your product against test cases and resolve issues before actual deployment
without any limitations.
 No need for trial-and-error approaches. Instead of relying on theoretical and
imprecise evaluations, which can lead to inefficient service performance and
revenue generation, you can test your services in a repeatable and controlled
environment, free of cost, with CloudSim.

Why use CloudSim?

Below are a few reasons to opt for CloudSim:


 Open source and free of cost, so it favours researchers/developers working in the
field.
 Easy to download and set-up.
 It is more generalized and extensible to support modelling and experimentation.
 Does not require any high-specs computer to work on.
 Provides pre-defined allocation policies and utilization models for managing
resources, and allows implementation of user-defined algorithms as well.
 The documentation provides pre-coded examples for new developers to get
familiar with the basic classes and functions.
 Tackle bottlenecks before deployment to reduce risk, lower costs, increase
performance, and raise revenue.
CloudSim Architecture:

CloudSim Layered Architecture

CloudSim Core Simulation Engine provides interfaces for the management of
resources such as VMs, memory, and bandwidth of virtualized datacenters.
CloudSim layer manages the creation and execution of core entities such as VMs,
Cloudlets, Hosts etc. It also handles network-related execution along with the
provisioning of resources and their execution and management.
User Code is the layer controlled by the user. The developer can write the
requirements of the hardware specifications in this layer according to the scenario.
Some of the most common classes used during simulation are listed below; a minimal
program wiring them together is sketched after the list:
 Datacenter: used for modelling the foundational hardware equipment of any cloud
environment, that is the Datacenter. This class provides methods to specify the
functional requirements of the Datacenter as well as methods to set the allocation
policies of the VMs etc.
 Host: this class executes actions related to management of virtual machines. It also
defines policies for provisioning memory and bandwidth to the virtual machines,
as well as allocating CPU cores to the virtual machines.
 VM: this class represents a virtual machine by providing data members defining a
VM’s bandwidth, RAM, mips (million instructions per second), size while also
providing setter and getter methods for these parameters.
 Cloudlet: a cloudlet class represents any task that is run on a VM, such as a processing
task, a memory access task, or a file updating task. It stores parameters defining the
characteristics of a task, such as its length, size, and mi (million instructions), and
provides methods similar to the VM class, while also providing methods that define a
task's execution time, status, cost, and history.
 DatacenterBroker: an entity acting on behalf of the user/customer. It is responsible
for the functioning of VMs, including VM creation, management, and destruction, and
for the submission of cloudlets to the VMs.
 CloudSim: this is the class responsible for initializing and starting the simulation
environment after all the necessary cloud entities have been defined and later
stopping after all the entities have been destroyed.
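
The sketch below wires these classes into the smallest possible simulation: one datacenter with one host,
one VM, and one cloudlet. It is modelled on the example programs bundled with CloudSim 3.x (the
org.cloudbus.cloudsim packages are assumed to be on the classpath); treat it as a sketch of the usual
setup sequence, not a definitive program.

// Minimal CloudSim 3.x simulation: 1 datacenter, 1 host, 1 VM, 1 cloudlet.
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalSimulation {
    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false); // 1 cloud user, no trace

        Datacenter dc = createDatacenter("Datacenter_0"); // hosts the VM
        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        // VM: id, userId, mips, pes, ram(MB), bw, image size(MB), vmm, scheduler
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                "Xen", new CloudletSchedulerTimeShared());

        // Cloudlet (task): id, length(MI), pes, fileSize, outputSize, utilization
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cl = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cl.setUserId(broker.getId());
        cl.setVmId(vm.getId());

        broker.submitVmList(Collections.singletonList(vm));
        broker.submitCloudletList(Collections.singletonList(cl));

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        for (Cloudlet c : broker.getCloudletReceivedList()) {
            System.out.printf("Cloudlet %d finished at %.2f%n",
                    c.getCloudletId(), c.getFinishTime());
        }
    }

    private static Datacenter createDatacenter(String name) throws Exception {
        // One host with a single 1000-MIPS core, 2 GB RAM, 100 GB storage.
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 100000, peList,
                new VmSchedulerTimeShared(peList)));

        // arch, OS, VMM, hosts, time zone, then per-unit costs
        DatacenterCharacteristics ch = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        return new Datacenter(name, ch, new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);
    }
}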

Features of CloudSim:

CloudSim provides support for simulation and modelling of:


1. Large scale virtualized Datacenters, servers and hosts.
2. Customizable policies for provisioning host to virtual machines.
3. Energy-aware computational resources.
4. Application containers and federated clouds (joining and management of multiple
public clouds).
5. Datacenter network topologies and message-passing applications.
6. Dynamic insertion of simulation entities with stop and resume of simulation.
7. User-defined allocation and provisioning policies.
UNIT-2

Cloud Services

Cloud Service Models


NIST defines three cloud computing service models: software as a service (SaaS), platform as a
service (PaaS), and infrastructure as a service (IaaS). Often referred to as the SPI model, these
acronyms have become synonymous with cloud computing when discussing cloud service models.


Infrastructure as a Service (IaaS)
Infrastructure as a service (IaaS) is a model where the customer can provision equipment as a
service to support operations, including storage, hardware, servers, and relevant networking
components. While the consumer has use of the related equipment, the cloud service provider
retains ownership, and is ultimately responsible for hosting, running, and maintaining the
infrastructure. IaaS is also referred to as hardware as a service by some customers and providers.
IaaS has multiple key benefits for organizations, which include, but are not limited to:
 Usage metered and priced on the basis of units (or instances) consumed, allowing it to be
billed back to specific departments or functions
 Ability to scale infrastructure services up and down based on usage, which is particularly
useful and beneficial where there are significant spikes and dips in usage within the
infrastructure
 Reduced cost of ownership, meaning no need to buy assets for everyday use, no loss of
asset value over time, and reduction of other related costs of maintenance and support
 Reduced energy and cooling costs, plus a “green IT” environmental effect, with optimum
use of IT resources and systems
Platform as a Service (PaaS)
Platform as a service (PaaS) is a way for customers to rent virtualized servers and associated
services for running existing applications or developing and testing new ones.
PaaS has several key benefits for developers, which include, but are not limited to:
 Operating systems can be changed and upgraded frequently
 Where development teams are scattered globally, or across various geographic locations,
the ability to work together on software development projects within the same environment
can be extremely beneficial
 Services are available and can be obtained from diverse sources that cross international
boundaries
 Upfront and recurring or ongoing costs can be significantly reduced by utilizing a single
vendor, rather than maintaining multiple hardware facilities and environments
Software as a Service (SaaS)
Software as a service (SaaS) is a distributed model where software applications are hosted by a
vendor or cloud service provider and made available to customers over network resources. SaaS
is currently the most widely used and adopted form of cloud computing, with users most often
simply needing an internet connection and credentials to have full use of the cloud service,
application, and data housed.
Within SaaS, there are two delivery models currently used. First is hosted application management
(hosted AM), where a cloud provider hosts commercially available software for customers and
delivers it over the web (internet). Second is software on demand, where a cloud provider provides
customers with network-based access to a single copy of an application created specifically for
SaaS distribution (typically within the same network segment). Within either delivery model, SaaS
can be implemented with a custom application, or the customer may acquire a vendor-specific
application that can be tailored to the customer.
SaaS has several key benefits for organizations, which include, but are not limited to:
 Ease of use and limited/minimal administration
 Automatic updates and patch management; always running the latest version and most up-
to-date deployment (no manual updates required)
 Standardization and compatibility (all users have the same version of software)

 Global accessibility

Database as a Service (DBaaS) in Cloud Computing

Definition:
Database as a Service (DBaaS) is a cloud-based managed database service that enables users to
access, manage, and operate databases without handling the underlying infrastructure. It allows
organizations to deploy databases quickly while offloading maintenance tasks such as backups,
scaling, security, and updates to the cloud provider.

Features of DBaaS:
1. Managed Infrastructure: The cloud provider manages hardware, software, and network
configurations.
2. Automatic Scaling: The database can scale storage and compute resources automatically based
on demand.
3. High Availability & Disaster Recovery: Built-in replication, backup, and failover mechanisms
ensure data availability.
4. Security & Compliance: Advanced security features, including encryption, access control, and
compliance with standards (e.g., GDPR, HIPAA).
5. Multi-Tenancy: Supports multiple users on a shared infrastructure while maintaining isolation.
6. Pay-as-You-Go Pricing: Customers pay based on usage, reducing upfront investment.
7. Integration with Cloud Services: Seamless connectivity with cloud applications, AI, and
analytics services.

Popular DBaaS Providers & Services:


1. Amazon Web Services (AWS):
o Amazon RDS (Relational Database Service)
o Amazon Aurora
o Amazon DynamoDB (NoSQL)
o Amazon Redshift (Data Warehousing)
2. Microsoft Azure:
o Azure SQL Database
o Azure Cosmos DB (Multi-model NoSQL)
o Azure Database for PostgreSQL, MySQL, MariaDB
3. Google Cloud Platform (GCP):
o Google Cloud SQL
o Google BigQuery (Data Warehousing)
o Google Firestore & Bigtable (NoSQL)
4. Other DBaaS Solutions:
o Oracle Autonomous Database
o IBM Cloud Databases
o MongoDB Atlas (NoSQL)
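
Whichever provider is chosen, an application consumes a DBaaS instance through ordinary client drivers
rather than any provider-specific API. A minimal JDBC sketch is shown below, assuming a managed
MySQL-compatible instance; the hostname, credentials, and table are hypothetical, and the MySQL
Connector/J driver must be on the classpath.

// Querying a managed (DBaaS) MySQL instance over plain JDBC.
// Hostname, database, credentials, and table are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbaasDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://mydb.abc123.us-east-1.rds.amazonaws.com:3306/shop";
        try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
            if (rs.next()) {
                System.out.println("Orders: " + rs.getLong(1));
            }
        } // provisioning, patching, backups, and failover stay with the provider
    }
}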

Advantages of DBaaS:
 Cost-Efficient: No need for on-premise hardware and DBA (Database Administrator) management.
 Easy Deployment & Management: Simplifies database provisioning, updates, and maintenance.
 Improved Performance: Optimized configurations for high-speed queries and transactions.
 Security & Compliance: Providers handle security patches, encryption, and regulatory compliance.
 Disaster Recovery & Backup: Automated backups and failover mechanisms ensure business continuity.

Challenges & Considerations:


 Vendor Lock-in: Moving databases between providers can be complex.
 Performance Variability: Multi-tenant environments may affect query speed.
 Data Privacy & Compliance: Sensitive data in cloud environments requires strict access controls.
 Limited Customization: Cloud-managed databases may restrict deep-level optimizations.

Use Cases of DBaaS:


 Web Applications: Scalable and managed database backend for websites and SaaS applications.
 Big Data Analytics: Handling large datasets with cloud-based data warehouses like BigQuery
and Redshift.
 IoT (Internet of Things): Storing and processing sensor data from connected devices.
 AI & Machine Learning: Databases integrated with AI models for predictive analytics.
 E-commerce & Finance: Secure and high-availability databases for transactions and customer
management.

Difference Between Storage as a Service (STaaS) and Database as a Service (DBaaS)


 Definition: STaaS is a cloud service that provides scalable storage solutions for storing and retrieving
files, objects, or block data; DBaaS is a cloud service that provides a managed database system for
structured or unstructured data storage, retrieval, and management.
 Purpose: STaaS is used for storing raw data, files, backups, and multimedia content; DBaaS is used
for managing structured or semi-structured data with indexing, querying, and transactions.
 Data Structure: STaaS holds unstructured or semi-structured data (files, blobs, objects); DBaaS holds
structured (tables, schemas) or semi-structured (NoSQL, JSON, key-value) data.
 Data Access: STaaS is accessed through file systems, APIs, or object storage protocols; DBaaS is
accessed through query languages (SQL for relational, APIs for NoSQL).
 Management: STaaS requires minimal management and focuses on storing and retrieving files; DBaaS
requires database management, indexing, querying, transactions, and performance optimization.
 Scaling: STaaS scales storage capacity without performance concerns; DBaaS scales compute and
storage based on database workload.
 Examples: STaaS - AWS S3, Google Cloud Storage, Azure Blob Storage; DBaaS - AWS RDS, Google
Cloud SQL, Azure SQL Database, MongoDB Atlas.
 Use Cases: STaaS - backup, archiving, multimedia storage, data lakes, disaster recovery; DBaaS - web
applications, transactional processing, data analytics, AI & ML applications.

Monitoring as a Service (MaaS) in Cloud Computing

Definition:

Monitoring as a Service (MaaS) is a cloud-based service that provides real-time tracking,
analysis, and alerting of an organization's IT infrastructure, applications, networks, and security
systems. It enables businesses to monitor performance, detect issues, and optimize resource
usage without requiring on-premises monitoring solutions.

Key Features of MaaS:


1. Real-Time Monitoring: Continuously tracks system performance, uptime, and resource
utilization.
2. Automated Alerts & Notifications: Sends alerts via email, SMS, or dashboards when issues
arise (a minimal polling sketch of this idea follows this list).
3. Performance Analytics: Collects and analyzes data to optimize system efficiency.
4. Multi-Layer Monitoring: Supports infrastructure, application, database, and network
monitoring.
5. Cloud-Native & Scalable: Easily scales with cloud resources and integrates with multiple
platforms.
6. Log Management & Analysis: Centralized logging helps in troubleshooting and security
analysis.
7. Security & Compliance Monitoring: Detects security threats and ensures compliance with
industry standards.
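
At its core, items 1 and 2 reduce to a loop that measures a signal and raises an alert when it crosses a
threshold. A minimal self-contained probe is sketched below, using only the standard java.net.http client
(Java 11+); the URL and the 500 ms threshold are hypothetical, and a real MaaS product adds
dashboards, escalation, and history on top of this.

// Poll a health endpoint every 30 seconds; alert on errors or slow responses.
// The URL and latency threshold are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UptimeProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://example.com/health")).build();
        while (true) {
            long t0 = System.nanoTime();
            int status;
            try {
                status = client.send(req,
                        HttpResponse.BodyHandlers.discarding()).statusCode();
            } catch (Exception e) {
                status = -1; // unreachable counts as down
            }
            long ms = (System.nanoTime() - t0) / 1_000_000;
            if (status != 200 || ms > 500) {
                System.out.println("ALERT: status=" + status + " latency=" + ms + "ms");
            }
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}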

Types of Monitoring in MaaS:


1. Infrastructure Monitoring: Tracks servers, storage, and cloud resources (e.g., AWS
CloudWatch, Azure Monitor).
2. Application Performance Monitoring (APM): Monitors application behavior, response time,
and errors (e.g., New Relic, Dynatrace).
3. Network Monitoring: Analyzes network traffic, latency, and security vulnerabilities (e.g.,
SolarWinds, Nagios).
4. Security Monitoring: Detects threats, unauthorized access, and compliance violations (e.g.,
Splunk, IBM QRadar).
5. Log Monitoring: Aggregates logs for system performance and security insights (e.g., ELK Stack,
Datadog).
6. User Experience Monitoring: Evaluates website and application performance from an end-user
perspective (e.g., Pingdom).

Popular MaaS Providers & Tools:


 Cloud-Based MaaS Solutions:
o AWS CloudWatch
o Microsoft Azure Monitor
o Google Cloud Operations Suite (Stackdriver)
 Third-Party Monitoring Tools:
o Datadog
o New Relic
o Splunk
o Nagios
o Prometheus & Grafana

Advantages of MaaS:
 Cost-Effective: No need for on-premise monitoring infrastructure.
 Scalability: Easily adapts to growing IT needs.
 Improved System Reliability: Detects and resolves issues before they impact operations.
 Enhanced Security: Monitors for cyber threats and compliance violations.
 Centralized Visibility: Provides a single dashboard for monitoring multiple IT components.

Challenges of MaaS:
 Data Privacy Concerns: Monitoring sensitive data in the cloud requires strong security measures.
 Latency Issues: Real-time monitoring depends on network speed and connectivity.
 Integration Complexity: Some MaaS tools may not easily integrate with legacy systems.
 Cost Overhead: Advanced features may come with high subscription costs.

Use Cases of MaaS:


 Cloud Infrastructure Monitoring: Ensuring cloud services remain available and


optimized.
 DevOps & CI/CD Pipelines: Monitoring performance in software development
environments.
 Cybersecurity Threat Detection: Identifying threats, unauthorized access, and
compliance violations as they occur.

Communication as a Service (CaaS) in Cloud Computing


Definition:

Communication as a Service (CaaS) is a cloud-based delivery model that provides
communication solutions such as voice, video, messaging, and collaboration tools over the
internet. It eliminates the need for businesses to manage on-premises communication
infrastructure by offering scalable, pay-as-you-go services.

Key Features of CaaS:


1. Unified Communication: Integrates voice, video, messaging, and email into a single platform.
2. VoIP & Telephony Services: Cloud-based phone systems with features like call forwarding and
voicemail.
3. Video Conferencing: Web-based meetings, webinars, and real-time video communication.
4. Instant Messaging & Chatbots: Cloud-based messaging for real-time business communication.
5. Collaboration Tools: Document sharing, screen sharing, and remote team collaboration.
6. API & SDK Integrations: Developers can embed communication features into applications
(e.g., Twilio, Vonage); a minimal sketch follows this list.
7. Scalability & Flexibility: Easily expands communication services as business needs grow.
8. Security & Compliance: Ensures secure data transmission with encryption and regulatory
compliance (e.g., HIPAA, GDPR).
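
As an illustration of point 6, the snippet below uses the Twilio Java helper library's documented
Message.creator(...) call to send an SMS through a CaaS platform. The account SID, auth token, and
phone numbers are placeholders, and the twilio-java dependency is assumed to be on the classpath.

// Sending an SMS via a CaaS API (Twilio Java helper library).
// SID, token, and phone numbers below are placeholders.
import com.twilio.Twilio;
import com.twilio.rest.api.v2010.account.Message;
import com.twilio.type.PhoneNumber;

public class CaasSms {
    public static void main(String[] args) {
        Twilio.init("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token");
        Message msg = Message.creator(
                new PhoneNumber("+15558675310"),  // to
                new PhoneNumber("+15017122661"),  // from (your CaaS number)
                "Your order has shipped!")
            .create();
        System.out.println("Queued message SID: " + msg.getSid());
    }
}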

Types of CaaS Solutions:


1. Cloud Telephony (VoIP): Internet-based calling services (e.g., RingCentral, Zoom Phone).
2. Video & Web Conferencing: Online meetings and collaboration (e.g., Zoom, Microsoft Teams,
Google Meet).
3. Messaging & Chat Services: SMS, instant messaging, and chatbot integrations (e.g., Slack,
Twilio).
4. Email as a Service: Cloud-based email solutions (e.g., Gmail, Microsoft Outlook 365).
5. Collaboration Platforms: Tools for team communication and file sharing (e.g., Slack, Cisco
Webex).
Popular CaaS Providers & Tools:


 VoIP & Cloud Telephony:
o RingCentral
o Zoom Phone
o Vonage
 Video & Web Conferencing:
o Zoom
o Microsoft Teams
o Google Meet
 Messaging & Chat APIs:
o Twilio
o Slack
o WhatsApp Business API
 Email & Collaboration:
o Gmail (Google Workspace)
o Microsoft Outlook 365
o Cisco Webex

Advantages of CaaS:
 Cost Savings: No need for expensive on-premise PBX systems.
 Scalability: Easily scales to meet business growth.
 Remote Accessibility: Employees can communicate from anywhere.
 Reliability & Uptime: Cloud providers ensure high availability and redundancy.
 Enhanced Security: End-to-end encryption and compliance with industry regulations.

Challenges of CaaS:
 Internet Dependency: Requires a stable internet connection for optimal performance.
 Latency Issues: Poor network conditions can impact call and video quality.
 Security Concerns: Sensitive communication data needs strong encryption.
 Integration Complexity: Some businesses may need custom integrations with existing systems.

Use Cases of CaaS:


 Customer Support Centers: Cloud-based call centers and chat support.
 Remote Work & Collaboration: Video conferencing and virtual team communication.
 E-commerce & Sales: Chatbots and automated messaging for customer engagement.
 Healthcare & Telemedicine: Secure video consultations and patient communication.
 Education & E-learning: Virtual classrooms and online training sessions.

The major cloud service providers, namely Google Cloud, Amazon Web Services (AWS),
Microsoft Azure, IBM Cloud, and Salesforce, can be compared under the following headings:

1. Payment System They Provide

 Google Cloud
o Payment Model: Pay-as-you-go model, where customers pay for the resources they use.
Google offers free credits for new users and provides cost estimation tools.
o Pricing: Based on usage of services like compute time, storage, network usage, etc.
Discounts available for sustained usage.
 Amazon Web Services (AWS)
o Payment Model: AWS follows a pay-per-use pricing model. There is no upfront cost,
and users only pay for what they consume.
o Pricing: Offers various pricing plans based on instance type, storage, and services used.
AWS also offers Reserved Instances for long-term savings and Savings Plans.
 Microsoft Azure
o Payment Model: Similar to Google and AWS, Azure uses a pay-as-you-go pricing
model, with billing based on consumption.
o Pricing: Azure also offers a free tier and pay-as-you-go model with pricing based on
compute, storage, and data transfer. Discounts are available for long-term usage or
reserved instances.
 IBM Cloud
o Payment Model: IBM Cloud offers pay-per-use pricing and subscription options. It
provides flexible pricing options for users based on resource consumption.
o Pricing: IBM offers pricing calculators to help estimate costs and provides a pay-as-you-
go model, as well as volume discounts for large enterprises.
 Salesforce
o Payment Model: Salesforce primarily uses a subscription-based payment model for its
cloud services (Salesforce CRM, Marketing Cloud, etc.).
o Pricing: Subscription prices depend on the number of users, services required, and
contract length. It offers different pricing tiers depending on the features.

2. Services They Provide

 Google Cloud
o Compute: Google Compute Engine, Google Kubernetes Engine, Google App Engine.
o Storage: Google Cloud Storage, Google Cloud Bigtable, Google Cloud Spanner.
o AI and Machine Learning: Google Cloud AI, Google Cloud Vision, Google Cloud
Natural Language.
o Networking: Google Cloud Load Balancing, Cloud Interconnect.
o Data Analytics: Google BigQuery, Google Dataflow.


o Other: Google Cloud Functions, Google Firebase.
 Amazon Web Services (AWS)
o Compute: Amazon EC2, AWS Lambda, AWS Elastic Beanstalk.
o Storage: Amazon S3, Amazon EBS, Amazon Glacier.
o Databases: Amazon RDS, DynamoDB, Amazon Aurora.
o Networking: Amazon VPC, AWS Direct Connect.
o Machine Learning: AWS SageMaker, AWS Rekognition.
o Other: AWS IoT, AWS Step Functions, AWS CloudWatch.
 Microsoft Azure
o Compute: Azure Virtual Machines, Azure App Services, Azure Functions.
o Storage: Azure Blob Storage, Azure Disk Storage, Azure File Storage.
o Databases: Azure SQL Database, Azure Cosmos DB.
o Networking: Azure Virtual Network, Azure Load Balancer, Azure Traffic Manager.
o AI and ML: Azure Machine Learning, Cognitive Services.
o Other: Azure Kubernetes Service, Azure DevOps, Azure Logic Apps.
 IBM Cloud
o Compute: IBM Cloud Virtual Servers, IBM Cloud Functions.
o Storage: IBM Cloud Object Storage, IBM Block Storage.
o Databases: IBM Cloud Databases (e.g., PostgreSQL, MongoDB).
o AI and ML: IBM Watson, IBM Watson Studio.
o Networking: IBM Cloud CDN, IBM Cloud Direct Link.
o Other: IBM Cloud Kubernetes Service, IBM Cloud Foundry, IBM Blockchain.
 Salesforce
o CRM: Salesforce Sales Cloud, Service Cloud, Marketing Cloud.
o AI and Analytics: Salesforce Einstein, Tableau (acquired).
o Collaboration Tools: Salesforce Chatter, Slack (acquired).
o Data Management: Salesforce Data Cloud, Salesforce Heroku.
o Other: Salesforce AppExchange, Salesforce Platform.

3. Deployment Models They Use

 Google Cloud
o Public Cloud: Google Cloud is primarily a public cloud offering, providing resources
like compute, storage, and networking.
o Hybrid Cloud: Supports hybrid cloud deployments with tools like Anthos for multi-
cloud management.
o Multi-cloud: Google supports multi-cloud environments, especially with Google Anthos.
 Amazon Web Services (AWS)
o Public Cloud: AWS operates primarily as a public cloud service provider.
o Hybrid Cloud: AWS supports hybrid cloud environments through AWS Outposts and
AWS Direct Connect.
o Multi-cloud: AWS is also used in multi-cloud environments, though its services are
more focused on single-cloud environments.
 Microsoft Azure
o Public Cloud: Azure is mainly a public cloud provider offering a broad range of
services.
o Hybrid Cloud: Azure’s hybrid cloud offerings are robust, including services like Azure
Arc and Azure Stack.
o Multi-cloud: Azure integrates well with other cloud platforms, facilitating multi-cloud
solutions.
 IBM Cloud
o Public Cloud: IBM Cloud provides public cloud services, with a focus on enterprise
solutions.
o Private Cloud: Offers private cloud solutions for organizations needing more control
over their infrastructure.
o Hybrid Cloud: IBM promotes hybrid cloud environments, especially through the use of
IBM Cloud Satellite and Red Hat OpenShift.
o Multi-cloud: IBM supports multi-cloud deployments, providing solutions for managing
applications across multiple cloud providers.
 Salesforce
o Public Cloud: Salesforce operates primarily in the public cloud, offering its software
through SaaS solutions.
o Hybrid Cloud: Through integrations and tools, Salesforce supports hybrid cloud
deployments, especially with its Salesforce Platform.
o Multi-cloud: Salesforce enables multi-cloud environments by connecting and integrating
various cloud services, particularly with its MuleSoft platform.

4. Benefits and Drawbacks

 Google Cloud
o Benefits:
 Strong AI and machine learning tools.
 Excellent networking capabilities (Google’s global infrastructure).
 High scalability and flexibility.
o Drawbacks:
 Smaller ecosystem compared to AWS and Azure.
 Limited enterprise-focused features.
 Amazon Web Services (AWS)
o Benefits:
 Largest range of services and tools.
 Mature and well-established with a vast global infrastructure.
 Strong security features.
o Drawbacks:
 Can be complex for beginners due to the large number of services.
 Pricing can be difficult to understand, leading to potential cost overruns.
 Microsoft Azure
o Benefits:
 Excellent integration with existing Microsoft products like Office 365, Windows
Server, and SQL Server.
 Strong hybrid cloud capabilities.
 Extensive enterprise focus and support.
o Drawbacks:
 More complicated billing system.
 Sometimes criticized for inconsistent service performance.
 IBM Cloud
o Benefits:
 Strong focus on AI, data, and enterprise-level applications.
 Good support for hybrid and multi-cloud environments.
 Unique offerings like IBM Watson and IBM Blockchain.
o Drawbacks:
 Smaller market share compared to AWS, Azure, and Google Cloud.
 User interface can be less intuitive for some users.
 Salesforce
o Benefits:
 Comprehensive CRM and customer-centric tools.
 Excellent integration with other tools via AppExchange and APIs.
 Scalable and flexible cloud platform.
o Drawbacks:
 Primarily focused on CRM and may not be ideal for general-purpose cloud
services.
 High subscription costs, especially for smaller businesses.
UNIT-3
COLLABORATING USING CLOUD SERVICES
What is Cloud Collaboration?
Cloud collaboration enables employees to work together seamlessly on documents and files stored off-premises or
outside the company's firewall. This collaborative process occurs when a user creates or uploads a file online and shares
access with other individuals, allowing them to share, edit, and view documents in real-time. All changes made are
automatically saved and synced to the cloud, ensuring that all users have access to the latest version of the document.
Cloud collaboration is essential for modern businesses looking to enhance teamwork, productivity, and adaptability
in an increasingly digital and remote work environment. By leveraging the right cloud collaboration tools and implementing
best practices, organizations can streamline workflows, improve communication, and achieve better outcomes.
Benefits of Cloud Collaboration:
1. Improved Team Collaboration: Storing documents in a shared online location makes it easier for team members
to access and collaborate on them. This eliminates the need for constant emailing of files and ensures everyone is
on the same page, leading to enhanced teamwork and smoother discussions.
2. Faster Access to Large Files: Cloud collaboration allows for the quick sharing of large files without the limitations
of email servers. This is crucial for teams, especially those working remotely, as it eliminates delays and distribution
challenges associated with offline file sharing methods.
3. Support for Remote Employees: Cloud-based collaboration tools empower remote teams to collaborate
effectively regardless of their geographical locations. This flexibility is vital for the success of remote teams and
ensures they can work efficiently without being tied to a physical office.
4. Embracing BYOD Trend: Cloud collaboration aligns with the Bring Your Own Device (BYOD) trend, enabling
employees to access work-related files and data from their personal devices without the need for complex network
setups or VPNs. This increases productivity and employee satisfaction.
Top Cloud Collaboration Features:

1. Internet Access to Files: Cloud collaboration tools should be accessible via web browsers or mobile devices, with
offline support for editing and viewing files.
2. Real-Time Communication: Features like instant messaging, team channels, and comments facilitate real-time
communication and collaboration within the tool itself.
3. Custom Permission Levels: Tools should allow administrators to set custom permission levels for different users,
ensuring data security and control over file access.
4. Version Control: Automatic syncing and version control ensure that users always have access to the latest version
of documents while tracking changes and revisions.
5. Centralized File Storage: Cloud collaboration tools should provide a centralized repository for storing all work-
related data securely and facilitating easy access for team members.
Challenges in Implementing Cloud Collaboration:

1. Application Overload: Managing multiple cloud collaboration apps alongside existing systems can lead to
confusion and duplication of efforts.
2. Lack of Collaboration Strategy: Without a clear collaboration strategy and practices, adopting cloud collaboration
tools may not yield optimal results.

Best Practices for Cloud Collaboration:

1. Access Settings: Organize teams and control access permissions to ensure data security and privacy.
2. Choose the Right Tool: Select a cloud collaboration tool that aligns with your organization's needs and security
standards, and integrates seamlessly with existing systems.
3. Layered Security: Implement multiple layers of security to protect assets and data beyond the company firewall.
4. End-User Training: Train employees on using the collaboration tool effectively and adhering to security protocols.

Cloud Collaboration Tools:


Popular cloud collaboration tools include:
 Communication Tools: Cisco Webex, Microsoft Teams, Zoom, etc.
 Cloud Storage: Dropbox, Google Docs, WeTransfer, etc.
 Project Management: Asana, Trello, Microsoft Teams, etc.
 Code Collaboration: Atlassian's Bitbucket, Microsoft's GitHub, etc.

Email Communication over the Cloud


Understanding Cloud Email: How It Works, Benefits, and Popular Providers
Cloud email refers to an email delivery and storage method hosted by an external provider, as opposed to on-
premise email hosting which relies on internal servers within an organization's infrastructure. The shift towards cloud-based
email services has gained momentum due to the increasing trend of remote work and the need for scalable, accessible
solutions. Let's delve deeper into what cloud email is, how it works, its advantages over on-premises solutions, popular
providers, and key benefits.
Cloud-based email services offer numerous advantages, including cost-effectiveness, scalability, remote
accessibility, uptime reliability, reduced maintenance burden, and enhanced security. Organizations can choose from a
variety of reputable providers based on their specific requirements, making cloud email a compelling choice for modern
businesses seeking efficient and secure communication solutions.
What is Cloud Email?
Cloud-based email services are hosted and maintained by third-party providers who manage email delivery, storage,
security, and maintenance. Users can securely send, receive, and store emails without the need for in-house server
infrastructure. Notable cloud email providers include Google Gmail, Microsoft Outlook, HubSpot, and ProtonMail, each
offering unique features tailored to different organizational needs.
How Cloud Email Works
Cloud email providers utilize remote cloud-based servers to send, receive, and store emails. While the delivery and storage
mechanisms differ from traditional on-premise solutions, the fundamental process of email communication remains the
same regardless of the hosting environment.
On-Premises vs. Cloud Email
On-premise email hosting involves setting up and maintaining email servers within an organization's premises, requiring
dedicated IT resources and infrastructure. In contrast, cloud email eliminates the need for physical servers, offering cost
savings, scalability, remote access, improved uptime, and reduced maintenance overhead.
Benefits of Cloud Email
1. Cost Savings: Cloud email services typically result in cost savings as organizations no longer need to invest in
hardware, maintenance, and IT support for email servers.
2. Remote Access: Cloud email enables users to access emails from any device with an internet connection, enhancing
productivity for remote and hybrid work setups.
3. Scalability: Cloud email solutions offer scalability, allowing organizations to easily adjust storage capacity and
user counts to accommodate growth.
4. Improved Uptime: Cloud-based providers ensure high uptime by leveraging redundant servers, minimizing
downtime and ensuring continuous email functionality.
5. Less Maintenance: With cloud email, maintenance tasks are handled by the provider, freeing up internal resources
to focus on core business operations.
6. Built-in Security: Cloud email providers offer robust security measures, including spam filtering, malware
protection, encryption, and advanced threat detection.

Popular Cloud Email Providers


1. Google Gmail: With 1.5 billion users, Gmail offers reliability, accessibility, and integration with Google
Workspace tools, making it a popular choice for businesses.
2. Microsoft Outlook: Outlook, particularly for Office 365 users, provides powerful email management features,
integration with OneDrive and third-party apps, and intuitive calendar functionalities.
3. HubSpot: HubSpot is tailored for marketing email campaigns, offering tools for creating, personalizing, and
analyzing email campaigns with detailed engagement metrics.
4. ProtonMail: Known for its focus on security and privacy, ProtonMail offers end-to-end encryption, address
verification, and enhanced privacy features, making it ideal for sensitive communications.

Key Benefits of Moving to Cloud Email


1. Cost Savings: Cloud email services eliminate upfront infrastructure costs and reduce ongoing maintenance
expenses.
2. More Uptime: Cloud providers ensure high availability and disaster recovery, minimizing email downtime and
improving business continuity.
3. Scalability: Organizations can easily scale email capacity and features based on evolving business needs without
hardware investments.
4. Remote Access: Cloud email enables seamless access from anywhere, facilitating remote work and enhancing
productivity.
5. Improved Security: Cloud providers invest in advanced security measures, offering better data protection and threat
mitigation compared to on-premise solutions.
CRM Management

What Is CRM?

CRM, or Customer Relationship Management, encompasses all the tools, techniques, strategies, and technologies used by
organizations to manage and improve customer relationships, as well as customer data acquisition, retention, and analysis.
It involves storing customer data such as demographics, purchase behavior, history, and interactions to foster strong
relationships, enhance sales, and boost profits.

What Is Cloud Computing?


Cloud computing refers to the delivery of computing services over the internet, allowing users to access computing
resources such as storage, databases, and processing power remotely, without the need for physical infrastructure. Examples
of cloud computing providers include Amazon Web Services (AWS), Microsoft Azure, Rackspace, and Dropbox.

What Is CRM in Cloud Computing?


CRM in cloud computing refers to CRM software that is accessible to customers via the internet in a cloud-based form.
Many organizations adopt CRM cloud solutions to enable easy access to customer information online, often accessible even
through mobile devices. CRM cloud systems facilitate information sharing, backup, storage, and global accessibility.
Types of CRM Systems
In cloud computing, Customer Relationship Management (CRM) systems are delivered as a service over the internet, offering
flexibility, scalability, and ease of access. There are several types of CRM that can be deployed in the cloud, each focusing on
different aspects of customer management. Here's a breakdown:

1. Operational CRM
 Purpose: Automates and streamlines business processes related to sales, marketing, and customer service.
 Key Features:
o Contact management
o Lead and opportunity management
o Marketing automation
o Service automation
 Example Cloud CRM Tools: Salesforce, HubSpot CRM, Zoho CRM

2. Analytical CRM
 Purpose: Analyzes customer data to improve decision-making and strategies.
 Key Features:
o Data mining and pattern analysis
o Customer segmentation
o Predictive analytics
o Reporting and dashboards
 Example Cloud CRM Tools: SAP CRM, Oracle CRM Analytics

3. Collaborative CRM (or Strategic CRM)
 Purpose: Enhances communication and collaboration across departments and with customers.
 Key Features:
o Interaction management
o Multi-channel communication (email, chat, social media)
o Shared customer information
o Partner and supplier relationship management
 Example Cloud CRM Tools: Microsoft Dynamics 365, Freshworks CRM

4. Campaign Management CRM
 Purpose: Combines features of operational and analytical CRM to manage marketing campaigns.
 Key Features:
o Campaign planning and execution
o Performance tracking
o Email marketing integration
o ROI analysis
 Example Cloud CRM Tools: Mailchimp CRM, ActiveCampaign

5. Social CRM
 Purpose: Integrates social media platforms with CRM to better engage with customers.
 Key Features:
o Social listening and monitoring
o Social media analytics
o Engagement tracking
o Influencer and community management
 Example Cloud CRM Tools: Sprout Social, Hootsuite CRM, Nimble

Key Components of CRM
1. Contact Management
o Stores customer information (name, email, phone, social profiles, etc.)
o Tracks communication history for easy reference
2. Sales Management
o Manages sales pipeline, leads, and deals
o Tracks performance, sets goals, and forecasts sales
3. Marketing Automation
o Automated email campaigns, lead nurturing, and segmentation
o Analyzes campaign performance and customer engagement
4. Customer Service & Support
o Handles customer queries through tickets, live chat, and knowledge base
o Tracks resolution times and customer satisfaction
5. Workflow Automation
o Automates repetitive tasks like follow-ups and reminders
o Improves efficiency and reduces manual work
6. Analytics & Reporting
o Generates reports and dashboards on sales, marketing, and customer behavior
o Provides insights for data-driven decision-making
7. Integration Capabilities
o Integrates with email, social media, ERP, and third-party tools
o Enables centralized data and streamlined processes
8. Mobile CRM
o Offers CRM access via mobile apps
o Useful for remote teams and on-the-go updates
9. Lead & Opportunity Management
o Tracks potential customers (leads) and their progress
o Helps prioritize and convert leads into paying customers

Benefits of CRM
1. Improved Customer Relationships
o Centralized data helps personalize interactions
o Builds trust and long-term relationships
2. Better Customer Service
o Quick access to customer history for faster issue resolution
o Enhances customer satisfaction and loyalty
3. Increased Sales
o Tracks leads and sales opportunities efficiently
o Automates follow-ups and reminders
4. Enhanced Productivity and Efficiency
o Automates repetitive tasks (emails, reports, workflows)
o Allows teams to focus on high-value activities
5. Centralized Information
o All customer data stored in one place
o Accessible to marketing, sales, and support teams
6. Data-Driven Decision Making
o Real-time analytics and reporting tools
o Helps in strategy planning and forecasting
7. Better Marketing Campaigns
o Customer segmentation for targeted marketing
o Tracks campaign performance and ROI
8. Improved Collaboration
o Teams can share customer information easily
o Breaks down departmental silos
9. Customer Retention
o Monitors satisfaction levels and sends timely follow-ups
o Identifies and prevents customer churn
10. Mobile Access
o Access CRM data anytime, anywhere
o Great for remote teams and field agents

Choosing a Cloud CRM

When selecting a cloud-based CRM, organizations should consider:

1. Business Requirements – Identify the specific features needed to meet goals and customer expectations.
2. Budget and Cost – Review pricing, subscription plans, and total cost of ownership for affordability.
3. Scalability – Ensure the CRM can grow with the business and support more users or features.
4. Integration Capabilities – Check compatibility with existing systems and third-party tools.
5. Support and Training – Look for strong customer support and training to aid smooth implementation and adoption.

EXAMPLES OF CRM

Salesforce is one of the most powerful and widely used cloud-based CRM platforms in the world. It offers a wide range of
features including lead and opportunity management, workflow automation, customer support, and advanced analytics. What sets
Salesforce apart is its high level of customization and scalability, making it suitable for businesses of all sizes, especially large
enterprises. The platform also includes artificial intelligence capabilities through Einstein AI, which helps users gain predictive
insights and automate complex processes. Additionally, Salesforce has a vast marketplace called AppExchange that allows
businesses to extend CRM functionality with various third-party apps.
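As a hedged illustration of what programmatic access to such a platform typically looks like, the sketch below queries lead records through Salesforce's REST API using the `requests` library. The instance URL, access token, and API version are placeholders; a real integration would obtain them through an OAuth flow.

```python
import requests

# Placeholder values: a real integration obtains these via OAuth.
INSTANCE_URL = "https://example.my.salesforce.com"
ACCESS_TOKEN = "REPLACE_WITH_OAUTH_TOKEN"

# Query lead records with SOQL through the REST API.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v57.0/query/",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": "SELECT Id, Name, Status FROM Lead LIMIT 5"},
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"], record["Status"])
```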

HubSpot CRM is known for its simplicity and ease of use. It is particularly popular among startups and small to medium-sized
businesses because it offers a free version with essential CRM features. HubSpot focuses on aligning marketing, sales, and
customer service efforts. It includes tools for contact management, email tracking, sales pipeline visualization, and marketing
automation. One of its key strengths is its integration with HubSpot’s broader marketing platform, making it a powerful choice
for inbound marketing strategies. Businesses can start small with the free version and scale up as they grow.

Zoho CRM is another cloud-based solution that caters to businesses looking for a cost-effective yet feature-rich CRM. It supports
sales automation, multi-channel communication, customer analytics, and artificial intelligence through its smart assistant, Zia.
The platform is highly customizable and offers a variety of modules for marketing, sales, and support functions. Zoho CRM also
integrates well with other Zoho applications as well as third-party tools, making it a flexible solution for small to medium-sized
enterprises that require collaboration across different teams.

Microsoft Dynamics 365 is a cloud-based CRM and ERP suite that offers deep integration with Microsoft products like Office
365 and Azure. It is designed for enterprises that need advanced data analytics, customer service management, and sales
forecasting. Dynamics 365 combines operational and analytical CRM capabilities, providing users with real-time insights through
embedded AI tools. Its strong data connectivity and seamless workflow with Microsoft apps make it ideal for large businesses
already using the Microsoft ecosystem. It supports a modular approach, allowing businesses to purchase only the tools they need.

Freshsales is a modern, intuitive CRM platform designed primarily for sales teams. It offers built-in phone and email features, a
visual sales pipeline, AI-based lead scoring, and automated workflows. It is easy to set up and use, making it suitable for small
and medium-sized businesses that want a sales-focused CRM without the complexity. Freshsales stands out for its affordable
pricing and responsive customer support, providing all the essential tools to manage leads, track customer interactions, and
improve conversion rates.

Pipedrive is a sales-centric CRM known for its visual and user-friendly interface. It is designed to help small businesses and sales
teams manage leads and deals more effectively. Users can easily track each opportunity through the sales pipeline and set up
automated tasks and reminders to follow up with potential clients. While it may not have the extensive features of enterprise-level
CRMs, its simplicity and focus on boosting sales performance make it a favorite among startups and growing companies.

Insightly combines CRM features with project management capabilities, making it ideal for businesses that need to manage
customer relationships and internal projects in one place. It offers lead and opportunity management, workflow automation, email
templates, and seamless integration with apps like G Suite and Microsoft 365. Insightly is best suited for small to mid-sized
businesses that want both CRM and project tracking tools without the need for separate platforms.

Project Management in Cloud Computing


Project management in cloud computing has become a powerful approach for modern businesses to efficiently plan, execute, and
monitor their projects. With cloud technology, organizations no longer need to rely on traditional, on-premise systems. Instead,
they can access robust project management tools through the internet, which offers faster deployment, easier accessibility, and
enhanced collaboration. These cloud-based systems are designed to support teams of all sizes by providing centralized control,
real-time updates, and improved transparency throughout the project lifecycle.

One of the most notable advantages of cloud-based project management is its easy setup and minimal installation
requirements. There is no need for complex hardware or software installations, and the interface is typically intuitive and
user-friendly. This allows organizations to onboard team members quickly and get projects running without delays.

Another key benefit is seamless collaboration. Cloud platforms enable teams—regardless of their physical location—to
communicate and work together in real time. Features such as shared task boards, file storage, live comments, and integrated chat
tools help improve communication, foster teamwork, and keep everyone aligned with project goals. This is especially valuable in
remote and hybrid work environments.

Cloud project management also brings increased efficiency by centralizing project data and automating repetitive tasks.
Managers can easily track project status, assign responsibilities, and monitor deadlines, while team members receive real-time
updates and reminders. This leads to quicker decision-making, better time management, and optimized resource utilization.

Another critical aspect is the reduction in maintenance and infrastructure costs. Since cloud service providers handle system
updates, data backups, and security enhancements, organizations do not need to invest heavily in IT support or additional
hardware. This not only lowers operational costs but also ensures the platform is always up to date with the latest features.

In terms of security, cloud-based project management platforms offer advanced protection, including encrypted communication,
user access controls, and compliance with data privacy regulations. These measures help ensure that sensitive project information
remains secure and accessible only to authorized users.
Cloud platforms are also known for their scalability and flexibility. Organizations can easily adjust the number of users, storage
space, or features as project demands change. Whether scaling up for a large project or scaling down during slower periods, cloud
systems adapt without the need for restructuring or reinstallation.

Moreover, using these systems often results in improved employee satisfaction. The simplicity, accessibility, and collaborative
nature of cloud tools empower employees to manage their tasks more effectively and stay connected with their teams. This
creates a more organized, less stressful work environment, boosting morale and productivity.

Examples of Cloud-Based Project Management Tools


Several popular tools offer cloud-based project management solutions, each catering to different needs:

●​ ClickUp: A versatile tool that provides features like task tracking, goal setting, time management, automation, and
real-time collaboration. It’s ideal for both individual users and large teams looking for an all-in-one platform.​

●​ Monday.com: Known for its visually appealing interface, it helps teams organize work using customizable boards,
dashboards, and workflow automation. It’s widely used by marketing, sales, and creative teams for its simplicity and
clarity.​

●​ Smartsheet: Combines the functionality of spreadsheets with project management capabilities. It allows for detailed
planning, resource allocation, reporting, and automation—making it suitable for data-driven projects.​

These tools demonstrate the diversity and effectiveness of cloud-based project management solutions. They help
organizations manage projects more efficiently, ensure better team coordination, and adapt to changing business needs with ease.

Event Management in Cloud Computing


Event management in cloud computing refers to the use of cloud-based platforms and services to plan, organize, execute, and
evaluate events of various scales—ranging from small webinars to large-scale conferences and corporate gatherings. Cloud
technology has revolutionized the way events are managed, making the process more efficient, scalable, and accessible, while
reducing operational costs and enhancing participant engagement.

With cloud computing, event managers can handle all aspects of event planning through a centralized, web-based system. These
systems offer a wide array of tools for registration management, ticketing, attendee communication, scheduling, resource
allocation, and post-event analysis. Since everything is hosted online, there is no need for on-site software installations or
complex infrastructure—everything can be accessed from anywhere, at any time, on any device.

Key Benefits of Cloud-Based Event Management:


1.​ Centralized Planning and Coordination: Cloud platforms allow event teams to collaborate in real-time, managing
logistics, timelines, and responsibilities through shared dashboards and task lists.​

2.​ Real-Time Registration and Ticketing: Attendees can register online, make payments, and receive instant confirmations. Organizers can monitor sign-ups in real-time, generate digital tickets, and manage capacity limits with ease (a minimal sketch follows this list).

3.​ Cost-Effective Operations: Since the infrastructure is managed by cloud service providers, there is no need for
additional hardware or IT support. This reduces operational costs significantly, especially for recurring or large-scale
events.​
4.​ Scalability and Flexibility: Cloud platforms can handle both small meetings and large international conferences. As
attendee numbers grow, resources can be scaled up automatically without any disruptions.​

5.​ Enhanced Communication: Built-in email and notification systems ensure that attendees receive timely updates,
reminders, and important information. Some platforms also support live chat, Q&A sessions, and polling during events.​

6.​ Virtual and Hybrid Event Support: Many cloud-based systems support virtual events and hybrid formats, including
live streaming, breakout rooms, and networking lounges. This broadens audience reach and engagement.​

7.​ Data Analytics and Feedback: After the event, organizers can generate detailed reports on attendance, engagement
levels, and survey responses. These insights are valuable for evaluating success and planning future events.​

8.​ Security and Privacy: Cloud service providers ensure data encryption, access control, and compliance with global
standards like GDPR, ensuring the safety of both organizer and attendee data.
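As promised in point 2 above, here is a toy, in-memory sketch of the registration-and-ticketing flow. A real cloud platform would back this with a managed database and a payment gateway; the names and limits are purely illustrative.

```python
CAPACITY = 100                 # illustrative capacity limit for the event
attendees: dict = {}

def register(name: str, email: str) -> str:
    """Register an attendee and return an instant confirmation message."""
    if email in attendees:
        return f"{name} is already registered."
    if len(attendees) >= CAPACITY:
        return "Sorry, the event is at capacity."
    attendees[email] = name
    ticket_no = len(attendees)           # digital ticket number
    return f"Confirmed! Ticket #{ticket_no} issued to {name}."

print(register("Priya", "priya@example.com"))
print(register("Priya", "priya@example.com"))  # duplicate sign-up is caught
```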

Examples of Cloud-Based Event Management Tools


●​ Eventbrite: One of the most widely used cloud-based event platforms, Eventbrite allows for easy event creation, ticket
sales, and attendee management. It integrates with various marketing tools and provides analytics for performance
tracking.​

●​ Cvent: A comprehensive event management platform used for enterprise-level conferences and meetings. It offers
registration, venue selection, marketing automation, mobile event apps, and attendee engagement features.​

●​ Hopin: Focused on virtual and hybrid events, Hopin enables live streaming, interactive sessions, expo booths, and
networking lounges, all hosted within a cloud environment.​

●​ Whova: Known for its mobile-friendly interface, Whova helps with event promotion, agenda management, attendee
engagement, and live interaction features.


Collaboration Tools in Cloud Computing (with Examples)
Cloud computing enables real-time collaboration by offering accessible tools that can be used by multiple users across locations.
Below is an explanation of key tools used for collaboration, along with real-world examples:

1. Calendar
Definition: A cloud-based calendar allows teams to organize and manage schedules, meetings, and reminders collaboratively.

Example:​
Google Calendar lets users create events, invite team members, add meeting links, and set reminders. Team members can see
each other's availability and schedule accordingly.
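As a hedged sketch of how such scheduling can be automated, the snippet below creates an event through the Google Calendar API (v3) using the google-api-python-client library. The OAuth token is a placeholder, and the real credential setup (consent flow, scopes) is omitted.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Placeholder token; a real app completes an OAuth consent flow first.
creds = Credentials(token="REPLACE_WITH_OAUTH_ACCESS_TOKEN")
service = build("calendar", "v3", credentials=creds)

event = {
    "summary": "Sprint planning",
    "start": {"dateTime": "2025-07-01T10:00:00+05:30"},
    "end": {"dateTime": "2025-07-01T11:00:00+05:30"},
    "attendees": [{"email": "teammate@example.com"}],  # invite team members
}
created = service.events().insert(calendarId="primary", body=event).execute()
print("Event created:", created.get("htmlLink"))
```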

2. Schedules
Definition: Scheduling tools help in planning project timelines, assigning tasks, and setting deadlines collaboratively.

Example:​
Trello uses boards, lists, and cards to assign tasks, set due dates, and monitor progress. All team members can view and update
the schedule in real-time.

3. Word Processing
Definition: Cloud-based word processors allow multiple users to work on documents at the same time, with automatic saving and
version control.

Example:​
Google Docs allows real-time co-authoring of documents, commenting, and version history. Users can collaborate on reports or
research papers simultaneously from different locations.

4. Presentation
Definition: Online presentation tools help in jointly creating slides, sharing feedback, and delivering content virtually.

Example:​
Microsoft PowerPoint Online allows team members to co-create slides, add animations, and practice presentations online. Edits
are saved automatically and can be viewed in real-time.

5. Spreadsheet
Definition: Cloud spreadsheets enable shared editing, data analysis, and financial tracking in collaborative environments.

Example:​
Google Sheets lets multiple users enter data, apply formulas, and build charts at the same time—commonly used for budgets,
project timelines, and research data.
6. Databases
Definition: Cloud databases provide centralized, real-time access to data for applications and teams, allowing collaboration in
data entry, management, and analytics.

Example:​
Airtable combines the features of a spreadsheet and a database. Teams use it for inventory management, CRM systems, and
event planning—collaboratively managing data with forms, filters, and views.
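For a flavor of how such a cloud database is accessed programmatically, here is a hedged sketch against Airtable's REST API; the base ID, table name, and token are placeholders.

```python
import requests

# Hypothetical base/table identifiers and token, for illustration only.
BASE_ID, TABLE, TOKEN = "appXXXXXXXXXXXXXX", "Inventory", "REPLACE_WITH_TOKEN"

resp = requests.get(
    f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"maxRecords": 3},      # fetch a small sample of records
)
resp.raise_for_status()
for rec in resp.json()["records"]:
    print(rec["id"], rec["fields"])
```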

7. Desktop (VDI - Virtual Desktop Infrastructure)


Definition: A virtual desktop allows users to access their desktop environment from any device via the cloud, supporting remote
collaboration securely.

Example:​
Amazon WorkSpaces allows users to access a cloud-hosted Windows or Linux desktop. This is useful in organizations where
employees need a standardized, secure work environment from anywhere.

8. Social Networks
Definition: Enterprise social platforms enable informal communication and networking within organizations to enhance team
bonding and sharing.

Example:​
Workplace by Meta (Facebook) lets employees chat, post updates, create groups, and host live video sessions, encouraging an
open and social culture in remote teams.

9. Groupware
Definition: Groupware is collaborative software that combines communication, task management, file sharing, and more within
one integrated platform.

Example:​
Microsoft Teams allows users to chat, meet, call, and collaborate on files all in one app. It’s integrated with other Microsoft 365
tools like Word, Excel, and SharePoint for seamless teamwork.

By using these cloud-based collaboration tools, organizations and teams can achieve higher productivity, better communication,
and streamlined project execution. These tools break down geographical barriers and support real-time interaction, making them
essential for modern workplaces.
 Virtualization for Cloud:
Unit -4
 Need for Virtualization
 Pros and cons of Virtualization
 Types of Virtualization
 System VM
 Process VM
 Virtual Machine monitor
 Virtual Machine Properties
 Interpretation and Binary Translation
 HLL VM
 Supervisors
 Xen, KVM, VMware, Virtual Box, Hyper-V.
 Good Reading & Reference Material available @
 https://www.sciencedirect.com/topics/computer-science/virtual-machine-monitor
History of Virtualization
(from “Modern Operating Systems” 4th Edition, p474 by Tanenbaum and Bos)

 1960’s, IBM: CP/CMS control program: a virtual machine operating system for the IBM System/360
Model 67

 2000, IBM: z-series with 64-bit virtual address spaces and backward compatible with the System/360

 1974: Popek and Golberg from UCLA published “Formal Requirements for Virtualizable Third
Generation Architectures” where they listed the conditions a computer architecture should satisfy to
support virtualization efficiently. The popular x86 architecture that originated in the 1970s did not
support these requirements for decades.

 1990’s, Stanford researchers, VMware: Researchers developed a new hypervisor and founded
VMware, the biggest virtualization company of today’s. First virtualization solution was is 1999 for x86.

 Today there are many virtualization solutions, including Xen from Cambridge, KVM, Hyper-V, VMware, VirtualBox, and others.

 IBM was the first to produce and sell virtualization for the mainframe. But, VMware popularised
virtualization for the masses.
Need for Virtualization
1. Enhanced Performance
Currently, the end-user system (PC) is sufficiently powerful to fulfill all the basic computation requirements of the user, with various additional capabilities that are rarely used. Most of these systems have sufficient resources to host a virtual machine manager and run a virtual machine with acceptable performance.

2. Limited Use of Hardware and Software Resources

The limited use of resources leads to under-utilization of hardware and software. Users' PCs are sufficiently capable of fulfilling their regular computational needs, so they sit idle much of the time even though they could run 24/7 without interruption. The efficiency of the IT infrastructure can be increased by putting these idle resources to use after hours for other purposes. This environment is possible to attain with the help of virtualization.
Contd……
3. SHORTAGE OF SPACE
The constant requirement for additional capacity, whether storage or compute power, makes data centers grow rapidly. Companies like Google, Microsoft, and Amazon build data centers to match their needs, but most enterprises cannot afford to build another data center to accommodate additional resource capacity. This has led to the spread of a technique known as server consolidation.

4. ECO-FRIENDLY INITIATIVES
Corporations are actively seeking methods to minimize the expenditure on power consumed by their systems. Data centers are major power consumers: operating them requires a continuous power supply, and a good amount of additional energy is needed to keep them cool. Server consolidation reduces the power consumed and the cooling load by cutting down the number of servers, and virtualization provides a sophisticated way of achieving server consolidation.
Contd……
5. ADMINISTRATIVE COSTS
Furthermore, the rising demand for capacity, which translates into more servers in a data center, is responsible for a significant increase in administrative costs. Common system administration tasks include hardware monitoring, server setup and updates, replacement of defective hardware, monitoring of server resources, and backups. These are personnel-intensive operations, so administrative costs grow with the number of servers. Virtualization decreases the number of servers required for a given workload and hence reduces the cost of administrative staff.
Benefits of Virtualization
1. More flexible and efficient allocation of resources.

2. Enhance development productivity.

3. It lowers the cost of IT infrastructure.

4. Remote access and rapid scalability.

5. High availability and disaster recovery.

6. Pay-per-use of the IT infrastructure, on demand.

7. Enables running multiple operating systems.


Virtualization Reference Model
Contd……
1. GUEST
The guest represents the system component that interacts with the virtualization layer rather than with
the host, as would normally happen. Guests usually consist of one or more virtual disk files, and a VM
definition file. Virtual Machines are centrally managed by a host application that sees and manages each
virtual machine as a different application.

2. HOST
The host represents the original environment where the guest is supposed to be managed. Each guest runs on the host using shared resources donated to it by the host. The operating system works as the host and manages physical resource management and device support.

3. VIRTUALIZATION LAYER
The virtualization layer is responsible for recreating the same or a different environment in which the guest will operate. It is an additional abstraction layer between the network, storage, and compute hardware and the applications running on it. Without it, hardware usually runs a single operating system per machine, which is very inflexible compared to what virtualization allows.
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
Contd……
1. Application Virtualization
Application virtualization gives a user remote access to an application from a server. The server stores all personal information and other characteristics of the application, but the application can still run on a local workstation through the internet. An example would be a user who needs to run two different versions of the same software. Technologies that use application virtualization are hosted applications and packaged applications.

2. Network Virtualization
Network virtualization is the ability to run multiple virtual networks, each with a separate control and data plane, coexisting together on top of one physical network. Each virtual network can be managed by a different party, and the networks can be kept confidential from one another.
Network virtualization provides the facility to create and provision virtual networks—logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security—within days or even weeks.
Contd……
3. Desktop Virtualization
Desktop virtualization allows a user's OS to be remotely stored on a server in the data centre, letting the user access their desktop virtually, from any location, on a different machine. Users who want a specific operating system other than Windows Server will need a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.

4. Storage Virtualization
Storage virtualization presents an array of servers managed by a virtual storage system. The servers aren't aware of exactly where their data is stored, and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
Contd……
5. Server Virtualization
This is a kind of virtualization in which masking of server resources takes place. The central (physical) server is divided into multiple virtual servers by changing its identity numbers and processor allocations, so each virtual server can run its own operating system in an isolated manner, while each sub-server still knows the identity of the central server. This increases performance and reduces operating cost by deploying the main server's resources as sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure cost, etc.

6. Data Virtualization
This is the kind of virtualization in which data is collected from various sources and managed in a single place, without users needing to know technical details such as how the data is collected, stored, and formatted. The data is arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many major companies provide such services, for example Oracle, IBM, AtScale, and CData.
System VM & Process VM
A System Virtual Machine (System VM) provides a complete system
platform which supports the execution of a complete operating system
(OS).

In contrast, a Process Virtual Machine (Process VM) is designed to run a single program, which means that it supports a single process.
System Virtual Machine
 A System Virtual Machine is also called as Hardware Virtual Machine. It is the software
emulation of a computer system. It mimics the entire computer.
 In computing, an emulator is hardware or software that enables one computer system (called the
host) to behave like another computer system (called the guest). An emulator typically enables the
host system to run software or use a peripheral device designed for the guest system.
 It is an environment that allows multiple instances of the operating system (virtual machines) to run
on a host system, sharing the physical resources.
 A System VM provides a platform for the execution of a complete operating system. It creates a number of isolated, identical execution environments on a single computer by partitioning computer memory so that different operating systems can be installed and executed at the same time.
 It allows us to install applications in each operating system and run them as if we were working on a real computer. For example, we can install Windows XP/7/8 or Linux Ubuntu/Kali inside a Windows 10 operating system with the help of a VM.
 Examples of System VMs software - VMware, VirtualBox, Windows Virtual PC, Parallels, QEMU,
Citrix Xen
Process Virtual Machine
 A Process Virtual Machine is also called a Language Virtual Machine or an Application
Virtual Machine or Managed Runtime Environment.

 Process VM is a software simulation of a computer system. It provides a runtime environment to execute a single program and supports a single process.

 The purpose of a process virtual machine is to provide a platform-independent programming environment that abstracts the details of the underlying hardware or operating system and allows a program to execute in the same way on any platform.

 Process virtual machines are implemented using an interpreter; for improving performance
these virtual machines will use just-in-time compilers internally.

 Examples of Process VMs - JVM (Java Virtual Machine) is used for the Java language, PVM (Parrot Virtual Machine) is used for the Perl language, and CLR (Common Language Runtime) is used for the .NET Framework. (A concrete illustration follows.)
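CPython itself is a handy, concrete example of a process VM: Python source is compiled to platform-independent bytecode, which the interpreter then executes. The standard-library `dis` module makes those virtual instructions visible (exact opcodes vary by Python version):

```python
import dis

def add(a, b):
    return a + b

# Disassemble the function into the virtual (stack-based) instructions
# that the Python process VM executes.
dis.dis(add)
# Typical output (opcodes vary by version):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    + (BINARY_ADD on older versions)
#   RETURN_VALUE
```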
Virtual Machine Monitor (VMM)
A Virtual Machine Monitor (VMM) is a software program that enables the creation, management and
governance of virtual machines (VM) and manages the operation of a virtualized environment on top
of a physical host machine.

VMM is also known as Virtual Machine Manager and Hypervisor. However, the provided architectural
implementation and services differ by vendor product.

VMM is the primary software behind virtualization environments and implementations. When installed
over a host machine, VMM facilitates the creation of VMs, each with separate operating systems (OS)
and applications. VMM manages the backend operation of these VMs by allocating the necessary
computing, memory, storage and other input/output (I/O) resources.

VMM also provides a centralized interface for managing the entire operation, status and availability of
VMs that are installed over a single host or spread across different and interconnected hosts.
Virtual Machine Monitor (VMM / Hypervisor)
A virtual machine monitor (VMM/hypervisor) partitions the resources of computer system into one
or more virtual machines (VMs). Allows several operating systems to run concurrently on a single
hardware platform.
 A VM is an execution environment that runs an OS
 VM – an isolated environment that appears to be a whole computer, but actually only has access to a
portion of the computer resources

 A VMM allows:
 Multiple services to share the same platform
 Live migration - the movement of a server from one
platform to another
 System modification while maintaining backward compatibility with the original
system
 Enforces isolation among the systems, thus security
 A guest operating system is an OS that runs in a VM under the control of the VMM.
VMM Virtualizes the CPU and the Memory
 A VMM (also hypervisor)
 Traps the privileged instructions executed by a guest OS and enforces the correctness and safety of the operation (a toy model of this follows the list)

 Traps interrupts and dispatches them to the individual guest operating systems

 Controls the virtual memory management

 Maintains a shadow page table for each guest OS and replicates any modification made
by the guest OS in its own shadow page table. This shadow page table points to the
actual page frame and it is used by the Memory Management Unit (MMU) for dynamic
address translation.

 Monitors the system performance and takes corrective actions to avoid performance
degradation. For example, the VMM may swap out a VM to avoid thrashing.
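The following is a conceptual toy model, not real hypervisor code, of the trap-and-emulate behavior described above: non-privileged guest instructions run directly, while privileged ones trap into the VMM, which emulates them and updates its shadow state. All instruction names are invented for illustration.

```python
PRIVILEGED = {"HLT", "OUT", "LOAD_CR3"}   # invented instruction names

def vmm_emulate(instr, vm_state):
    """The VMM enforces correctness and safety of a trapped instruction."""
    if instr == "LOAD_CR3":
        # Mirror the guest's page-table change into the shadow page table
        vm_state["shadow_page_table"] = "rebuilt"
    print(f"[VMM]   trapped and emulated {instr}")

def run_guest(program):
    vm_state = {"shadow_page_table": "initial"}
    for instr in program:
        if instr in PRIVILEGED:
            vmm_emulate(instr, vm_state)            # trap into the VMM
        else:
            print(f"[guest] executed {instr} directly on the CPU")

run_guest(["MOV", "ADD", "LOAD_CR3", "MOV", "HLT"])
```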
Type 1 and 2 Hypervisors
[Figure: Type 1 (bare-metal) hypervisor vs. Type 2 (hosted) hypervisor]

 Taxonomy of VMMs:
1. Type 1 Hypervisor (bare metal, native): supports multiple virtual machines
and runs directly on the hardware (e.g., VMware ESX , Xen, Denali)
2. Type 2 Hypervisor (hosted) VM - runs under a host operating system (e.g.,
user-mode Linux)
Virtual Machine Properties
Being able to use apps and operating systems without the need for hardware presents users
with some advantages over a traditional computer. The benefits of virtual machines include:

1. Compatibility
Virtual machines host their own guest operating systems and applications, using all the
components found in a physical computer (motherboard, VGA card, network card controller,
etc). This allows VMs to be fully compatible with all standard x86 operating systems,
applications and device drivers. You can therefore run all the same software that you would
usually use on a standard x86 computer.

2. Isolation
VMs share the physical resources of a computer, yet remain isolated from one another. This
separation is the core reason why virtual machines create a more secure environment for
running applications when compared to a non-virtual system. If, for example, you’re running
four VMs on a server and one of them crashes, the remaining three will remain unaffected
and will still be operational.
Contd……
3. Encapsulation
A virtual machine acts as a single software package that encapsulates a complete set of
hardware resources, an operating system, and all its applications. This makes VMs
incredibly portable and easy to manage. You can move and copy a VM from one location
to another like any other software file, or save it on any storage medium — from storage
area networks (SANs) to a common USB flash drive.

4. Hardware independence
Virtual machines can be configured with virtual components that are completely
independent of the physical components of the underlying hardware. VMs that reside on
the same server can even run different types of operating systems. Hardware
independence allows you to move virtual machines from one x86 computer to another
without needing to make any changes to the device drivers, operating system or
applications.
Interpretation and Binary Translation
 Interpretation in Cloud Computing: in simple terms, the behavior of the hardware is produced by a software program. The emulation process involves only those hardware components, so that the user or virtual machine does not perceive the underlying environment. This process is also termed interpretation.

 Binary Translation is one specific approach to implementing full virtualization that does
not require hardware virtualization features.

 It involves examining the executable code of the virtual guest for "unsafe" instructions,
translating these into "safe" equivalents, and then executing the translated code.

 VMware is an example of virtualization using binary translation (VMware, n.d.).
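A toy sketch of the idea: scan a block of guest code for "unsafe" instructions and rewrite them into safe equivalents before execution. CLI and POPF are classic x86 examples of instructions that misbehave silently in user mode; the replacement names are invented.

```python
# Map of "unsafe" guest instructions to safe, VMM-provided equivalents.
UNSAFE = {"CLI": "VMM_DISABLE_VIRQ", "POPF": "VMM_SAFE_POPF"}

def translate(block):
    """Rewrite a basic block, leaving safe instructions untouched."""
    return [UNSAFE.get(instr, instr) for instr in block]

guest_block = ["MOV", "CLI", "ADD", "POPF"]
print(translate(guest_block))
# ['MOV', 'VMM_DISABLE_VIRQ', 'ADD', 'VMM_SAFE_POPF']
```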


Hypervisors can also be distinguished by their relation to the host operating system.
HLL VM

A static compiler is probably the best solution when performance is paramount, portability is not a great concern, destinations of calls are
known at compile time and programs bind to external symbols before running. Thus, most third generation languages like C and FORTRAN
are implemented this way. However, if the language is object-oriented, binds to external references late, and must run on many
platforms, it may be advantageous to implement a compiler that targets a fictitious high-level language virtual machine (HLL VM)
instead.

In Smith's taxonomy, an HLL VM is a system that provides a process with an execution environment that does not correspond to any
particular hardware platform. The interface offered to the high-level language application process is usually designed to hide differences
between the platforms to which the VM will eventually be ported. For instance, UCSD Pascal p-code and Java bytecode both express virtual
instructions as stack operations that take no register arguments. Gosling, one of the designers of the Java virtual machine, has said that he
based the design of the JVM on the p-code machine. Smalltalk, Self and many other systems have taken a similar approach. A VM may also
provide virtual instructions that support peculiar or challenging features of the language. For instance, a Java virtual machine has
specialized virtual instructions for such features.
Contd……
 This approach has benefits for the users as well. For instance, applications can be
distributed in a platform neutral format. In the case of the Java class libraries or UCSD
Pascal programs, the amount of virtual software far exceeds the size of the VM.

 The advantage is that the relatively small amount of effort required to port the VM to a
new platform enables a large body of virtual applications to run on the new platform also.

 There are various approaches a HLL VM can take to actually execute a virtual program.
An interpreter fetches, decodes, then emulates each virtual instruction in turn. Hence,
interpreters are slow but can be very portable.

 Faster, but less portable, a dynamic compiler can translate to native code and dispatch
regions of the virtual application. A dynamic compiler can exploit runtime knowledge of
program values so it can sometimes do a better job of optimizing the program than a
static compiler.
Supervisors
A supervisory program or supervisor is a computer program, usually part of an operating system, that controls the
execution of other routines and regulates work scheduling, input/output operations, error actions, and similar
functions and regulates the flow of work in a data processing system. It is thus capable of executing both
input/output operations and privileged operations. The operating system of a computer usually operates in this
mode.

Supervisor mode is "an execution mode on some processors which enables execution of all instructions, including
privileged instructions. It may also give access to a different address space, to memory management hardware and to
other peripherals. This is the mode in which the operating system usually runs."

It can also refer to a program that allocates computer component space and schedules computer events by task
queuing and system interrupts. Control of the system is returned to the supervisory program frequently enough to
ensure that demands on the system are met.

Historically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360.
In other operating systems, the supervisor is generally called the kernel. In the 1970s, IBM further abstracted the
supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e. the capacity to run
multiple operating systems on the same machine totally independently from each other. Hence the first such system
was called Virtual Machine or VM.
Xen
 Xen (pronounced /ˈzɛn/) is a type-1 hypervisor, providing services that allow multiple
computer operating systems to execute on the same computer
hardware concurrently.

 It was originally developed by the University of Cambridge Computer Laboratory and is now being developed by the Linux Foundation with support from Intel, Citrix, Arm Ltd, Huawei, AWS, Alibaba Cloud, AMD, Bitdefender and EPAM.

 The Xen Project community develops and maintains Xen Project as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. Xen Project is currently available for the IA-32, x86-64, and ARM instruction sets.
Contd……
 Xen provides a form of virtualization known as Paravirtualization, in which guests run a
modified operating system.

 The guests are modified to use a special hypercall ABI, instead of certain architectural
features.

 Through Paravirtualization, Xen can achieve high performance even on its host
architecture (x86) which has a reputation for non-cooperation with traditional virtualization
techniques.

 Xen can run paravirtualized guests ("PV guests" in Xen terminology) even on CPUs without
any explicit support for virtualization.

 Paravirtualization avoids the need to emulate a full set of hardware and firmware services,
which makes a PV system simpler to manage and reduces the attack surface exposed to
potentially malicious guests. On 32-bit x86, the Xen host kernel code runs in Ring 0, while
the hosted domains run in Ring 1 (kernel) and Ring 3 (applications).
KVM

 KVM (Kernel-based Virtual Machine) is an open-source virtualization module built into the Linux kernel that lets the kernel itself act as a hypervisor, allowing multiple guest operating systems to execute on the same hardware concurrently.

 KVM was merged into the mainline Linux kernel with version 2.6.20 (released in 2007) and is distributed under the GNU General Public License (GPL).

 It requires a processor with hardware virtualization extensions such as Intel VT-x or AMD-V. Each virtual machine runs as a regular Linux process scheduled by the standard Linux scheduler, and device emulation is commonly provided in combination with QEMU.

 KVM is widely used as the virtualization layer in open-source cloud platforms, for example in many OpenStack deployments.
VMware

 VMware, Inc. is a virtualization company founded in 1998 in Palo Alto, California. It pioneered practical virtualization of the x86 architecture and shipped its first x86 virtualization product in 1999.

 VMware's hosted (Type-2) hypervisors include VMware Workstation and VMware Fusion (for macOS), which run on top of an existing operating system.

 VMware ESXi is the company's bare-metal (Type-1) hypervisor; it runs directly on server hardware and is managed at scale through the vSphere and vCenter product family.

 Early VMware products achieved full virtualization of x86 through binary translation of privileged instructions; later versions also exploit hardware-assisted virtualization.
VirtualBox
 VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as
well as home use. Not only is VirtualBox an extremely feature rich, high performance
product for enterprise customers,

 it is also the only professional solution that is freely available as Open Source Software
under the terms of the GNU General Public License (GPL) version 2.

 Presently, VirtualBox runs on Windows, Linux, Macintosh, and Solaris hosts and supports
a large number of guest operating systems including but not limited to Windows (NT 4.0,
2000, XP, Server 2003, Vista, Windows 7, Windows 8, Windows 10), DOS/Windows 3.x,
Linux (2.4, 2.6, 3.x and 4.x), Solaris and OpenSolaris, OS/2, and OpenBSD.

 VirtualBox is being actively developed with frequent releases and has an ever growing
list of features, supported guest operating systems and platforms it runs on.

 VirtualBox is a community effort backed by a dedicated company: everyone is encouraged to contribute while Oracle ensures the product always meets professional quality criteria. (A scripting sketch using its command-line front end follows.)
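As noted above, a minimal sketch of driving VirtualBox from Python via its VBoxManage command-line tool; it assumes VirtualBox is installed and VBoxManage is on the PATH, and the VM name and settings are illustrative.

```python
import subprocess

def vbox(*args):
    """Run a VBoxManage subcommand and return its standard output."""
    result = subprocess.run(["VBoxManage", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Create, register, and configure a new guest, then boot it headless.
vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")
print(vbox("list", "vms"))                        # new guest appears in inventory
vbox("startvm", "demo-vm", "--type", "headless")  # boot without a GUI window
```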
Hyper-V

 Microsoft Hyper-V (Type-1), codenamed Viridian, and briefly known before its release as Windows Server
Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows.

 A Type 1 hypervisor runs directly on the underlying computer's physical hardware, interacting directly with its CPU,
memory, and physical storage. For this reason, Type 1 hypervisors are also referred to as bare-metal hypervisors. A
Type 1 hypervisor takes the place of the host operating system.

 A Type 2 hypervisor, also called a hosted hypervisor, is a virtual machine (VM) manager that is installed as a
software application on an existing operating system (OS). This makes it easy for an end user to run a VM on a
personal computing (PC) device.

 The main difference between Type 1 vs. Type 2 hypervisors is that Type 1 runs on bare metal and Type 2 runs on top
of an operating system.

 The key difference between Hyper-V and a Type 2 hypervisor is that Hyper-V uses hardware-assisted virtualization.
This allows Hyper-V virtual machines to communicate directly with the server hardware, allowing virtual machines to
perform far better than a Type 2 hypervisor would allow.
Unit – 5: Security Standards and Applications
(Cloud Computing)
Unit – 5: Security Standards and Applications
 Security in Clouds
 Cloud security challenges
 Software as a Service Security
 Common Standards
 The Open Cloud Consortium
 The Distributed Management Task Force
 Standards for Application Developers
 Standards for Messaging
 Standards for Security
 End user Access to Cloud Computing
 Mobile Internet devices and the Cloud
 Hadoop, MapReduce, Virtual Box, Google App Engine
 Programming Environment for Google App Engine
Security in Clouds
 Cloud Security, also known as cloud computing security, consists of a set of policies, controls,
procedures and technologies that work together to protect cloud-based systems, data, and
infrastructure.
 These security measures are configured to protect cloud data, support regulatory compliance and
protect customers' privacy as well as setting authentication rules for individual users and devices.
 From authenticating access to filtering traffic, cloud security can be configured to the exact needs of
the business. And because these rules can be configured and managed in one place, administration
overheads are reduced and IT teams empowered to focus on other areas of the business.
 The way cloud security is delivered will depend on the individual cloud provider or the cloud security
solutions in place. However, implementation of cloud security processes should be a joint
responsibility between the business owner and solution provider.
 For businesses making the transition to the cloud, robust cloud security is imperative. Security
threats are constantly evolving and becoming more sophisticated, and cloud computing is no less at
risk than an on-premise environment. For this reason, it is essential to work with a cloud provider
that offers best-in-class security that has been customized for your infrastructure.
Benefits of Cloud Security
1. Centralized security: Just as cloud computing centralizes applications and data, cloud
security centralizes protection. Cloud-based business networks consist of numerous
devices and endpoints that can be difficult to manage when dealing with shadow IT
or BYOD. Managing these entities centrally enhances traffic analysis and web filtering,
streamlines the monitoring of network events and results in fewer software and policy
updates. Disaster recovery plans can also be implemented and actioned easily when they
are managed in one place.

2. Reduced costs: One of the benefits of utilizing cloud storage and security is that it
eliminates the need to invest in dedicated hardware. Not only does this reduce capital
expenditure, but it also reduces administrative overheads. Where once IT teams were
firefighting security issues reactively, cloud security delivers proactive security features
that offer protection 24/7 with little or no human intervention.
Contd……

3. Reduced Administration: When you choose a reputable cloud services provider or


cloud security platform, you can kiss goodbye to manual security configurations and
almost constant security updates. These tasks can have a massive drain on resources,
but when you move them to the cloud, all security administration happens in one place
and is fully managed on your behalf.

4. Reliability: Cloud computing services offer the ultimate in dependability. With the
right cloud security measures in place, users can safely access data and applications
within the cloud no matter where they are or what device they are using.
Software as a Service Security
 SaaS security is cloud-based security designed to protect the data that software-as-a-service applications carry.

 It’s a set of practices that companies that store data in the cloud put in place to protect
sensitive information pertaining to their customers and the business itself.

 However, SaaS security is not the sole responsibility of the organization using the cloud
service. In fact, the service customer and the service provider share the obligation to
adhere to SaaS security guidelines published by the National Cyber Security Center
(NCSC).

 SaaS security is also an important part of SaaS management that aims to reduce
unused licenses, shadow IT and decrease security risks by creating as much visibility as
possible.
6 SaaS Security best practices
One of the main benefits that SaaS has to offer is that the respective applications are on-
demand, scalable, and very fast to implement, saving companies valuable resources and
time. On top of that, the SaaS provider typically handles updates and takes care of software
maintenance.

This flexibility and the fairly open access have created new security risks that SaaS security
best practices are trying to address and mitigate. Below are 6 security practices and solutions
that every cloud-operating business should know about.

1. Enhanced Authentication
Offering a cloud-based service to your customers means that there has to be a way for them
to access the software. Usually, this access is regulated through login credentials. That’s why
knowing how your users access the resource and how the third-party software provider
handles the authentication process is a great starting point.
Contd……
Once you understand the various methods, you can make better SaaS security decisions and
enable additional security features like multifactor authentication or integrate other enhanced
authentication methods.
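To make "multifactor authentication" concrete, below is a minimal sketch of the RFC 6238 TOTP algorithm, the kind of rotating one-time code an authenticator app produces, using only the Python standard library (the secret is a well-known demo value, not a real credential).

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app shows
```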

2. Data Encryption
The majority of channels that SaaS applications use to communicate employ TLS (Transport Layer Security)
to protect data that is in transit. However, data that is at rest can be just as vulnerable to cyber attacks as data
that is being exchanged. That’s why more and more SaaS providers offer encryption capabilities that protect
data in transit and at rest. It’s a good idea to talk to your provider and check whether enhanced data encryption
is available for all the SaaS services you use.
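A short, hedged sketch of encrypting data at rest before it is written to cloud storage, using the widely used `cryptography` package's Fernet recipe. In production the key would live in a key-management service, never alongside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice, fetched from a key-management service
f = Fernet(key)

token = f.encrypt(b"customer record: alice@example.com")
print(token)                  # ciphertext, safe to store at rest
print(f.decrypt(token))       # b'customer record: alice@example.com'
```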

3. Vetting and Oversight


With a stark increase in SaaS deployment, usage and demand, new SaaS vendors emerge on a regular basis.
This creates a competitive market and gives companies seeking the best SaaS solutions for their business
needs the upper hand. However, too many similar products can lead to decision fatigue or rash decisions.
When you choose your SaaS provider, apply the same review and validation process you would with other vendors and compare optional security features that might be available.
4. Discovery and Inventory
With increased digital literacy, software procurement is not only limited to IT departments but can be practiced
by almost every employee. Ultimately, this leads to shadow IT and security loopholes. That’s why one of the
most important SaaS security practices involves maintaining a reliable inventory of what services are being
used and the tracking of SaaS usage to detect unusual or unexpected activity. Automated tools within SaaS
management systems can send out alerts for immediate notification.

5. Consider CASBs
It is possible that the SaaS provider that you are choosing is not able to provide the level of SaaS security that your
company requires. If there are no viable alternatives when it comes to the vendor, consider cloud access security broker
(CASB) tool options. This allows your company to add a layer of additional security controls that are not native to your
SaaS application. When selecting a CASB – whether proxy or API-based – make sure it fits into your existing IT
architecture.

6. Maintain situational awareness


Last but not least, always monitor your SaaS use. Comprehensive SaaS management tools and CASBs
offer you a lot of information that can help you make the right decision when it comes to SaaS security.
Common Cloud Security Standard
Cloud Security encompasses the technologies, controls, processes, and policies which
combine to protect your cloud-based systems, data, and infrastructure. It is a sub-domain
of computer security and more broadly, information security.

The most well-known standard in information security and compliance is ISO 27001,
developed by the International Organization for Standardization.

The ISO 27001 standard was created to assist enterprises in protecting sensitive data by
best practices.

Cloud compliance is the principle that cloud-delivered systems must be compliant with
the standards their customers require. Cloud compliance ensures that cloud computing
services meet compliance requirements.
Contd……

For further reading: https://kinsta.com/blog/cloud-security/#how-does-cloud-security-work

[Figure: Cloud Security Shared Responsibility Model (Image source: Synopsys)]
Open Cloud Consortium
 The Open Cloud Consortium (OCC) is:
 A not-for-profit organization
 Manages and operates cloud computing infrastructure to support scientific, medical, health care and environmental research.
 OCC members span the globe and include over 10 universities,
over 15 companies, and over 5 government agencies and
national laboratories.
 The OCC is organized into several different working groups.
The OCC Mission
 The purpose of the Open Cloud Consortium is to support the development of
standards for cloud computing and to develop a framework for interoperability
among various clouds.

 The OCC supports the development of benchmarks for cloud computing.

 Manages cloud computing testbeds, such as the Open Cloud Testbed, to improve cloud computing software and services.

 Develops reference implementations, benchmarks and standards, such as the MalStone Benchmark, to improve the state of the art of cloud computing.

 Sponsors workshops and other events related to cloud computing to educate the community.


The Open Cloud Consortium
 The Open Commons Consortium (aka OCC - formerly the Open Cloud Consortium) is
a 501(c)(3) non-profit venture which provides cloud computing and data commons resources to
support "scientific, environmental, medical and health care research."

 OCC manages and operates resources including the Open Science Data Cloud (aka OSDC), which is a
multi-petabyte scientific data sharing resource.

 The consortium is based in Chicago, Illinois, and is managed by the 501(c)(3) Center for Computational Science.

 The OCC is divided into Working Groups which include:


 The Open Science Data Cloud - This is a working group that manages and operates the Open
Science Data Cloud (OSDC), which is a petabyte scale science cloud for researchers to manage,
analyze and share their data. Individual researchers may apply for accounts to analyze data
hosted by the OSDC. Research projects with TB-scale datasets are encouraged to join the OSDC
and contribute towards its infrastructure.
Contd……
2. Project Matsu - Project Matsu is a collaboration between the NASA Goddard Space Flight Center
and the Open Commons Consortium to develop open source technology for cloud-based processing of
satellite imagery to support the earth science research community as well as human assisted disaster
relief.

3. The Open Cloud Testbed - This working group manages and operates the Open Cloud Testbed. The
Open Cloud Testbed (OCT) is a geographically distributed cloud testbed spanning four data centers and
connected with 10G and 100G network connections. The OCT is used to develop new cloud computing
software and infrastructure.

4. The Biomedical Data Commons - The Biomedical Data Commons (BDC) is cloud-based infrastructure that
provides secure, compliant cloud services for managing and analyzing genomic data, electronic medical records
(EMR), medical images, and other PHI data. It provides resources to researchers so that they can more easily make
discoveries from large complex controlled access datasets. The BDC provides resources to those institutions in the
BDC Working Group. It is an example of what is sometimes called condominium model of sharing research
infrastructure in which the research infrastructure is operated by a consortium of educational and research
organizations and provides resources to the consortium.
Contd……
5. NOAA Data Alliance Working Group - The OCC National Oceanographic and Atmospheric
Administration (NOAA) Data Alliance Working Group supports and manages the NOAA data
commons and the surrounding community interested in the open redistribution of NOAA
datasets.

In 2015, the OCC was accepted into the Matter healthcare community at Chicago's historic Merchandise Mart. Matter is a community of healthcare entrepreneurs and industry leaders working together in a shared space to individually and collectively fuel the future of healthcare innovation.

In 2015, the OCC announced a collaboration with the National Oceanic and Atmospheric
Administration (NOAA) to help release their vast stores of environmental data to the general
public. This effort is managed by the OCC's NOAA data alliance working group.
The Distributed Management Task Force (DMTF)
 DMTF is a 501(c)(6) nonprofit industry standards organization that creates open manageability standards spanning
diverse emerging and traditional IT infrastructures including cloud, virtualization, network, servers and storage.
Member companies and alliance partners collaborate on standards to improve interoperable management of
information technologies.
 Based in Portland, Oregon, the DMTF is led by a board of directors representing technology companies including:
Broadcom Inc., Cisco, Dell Technologies, Hewlett Packard Enterprise, Intel Corporation, Lenovo, NetApp, Positivo Tecnologia S.A., and Verizon.
 Founded in 1992 as the Desktop Management Task Force, the organization's first standard was the now-legacy Desktop Management Interface (DMI). As the organization evolved to address distributed management through additional standards, such as the Common Information Model (CIM), it changed its name to the Distributed Management Task Force in 1999, and is now known simply as DMTF.
 The DMTF continues to address converged, hybrid IT and the Software Defined Data Center (SDDC)
with its latest specifications, such as the CADF (Cloud Auditing Data Federation), CIMI (Cloud Infrastructure Management
Interface), CIM (Common Information Model), DASH (Desktop and Mobile Architecture for System Hardware), MCTP (Management
Component Transport Protocol), NC-SI (Network Controller Sideband Interface), OVF (Open Virtualization Format), PLDM (Platform
Level Data Model), Redfish Device Enablement (RDE), Redfish (Including Protocols, Schema, Host Interface, Profiles) SMASH (Systems
Management Architecture for Server Hardware) and SMBIOS (System Management BIOS).
The Distributed Management Task Force
(DMTF)
 DMTF enables more effective management of millions of IT systems
worldwide by bringing the IT industry together to collaborate on the
development, validation and promotion of systems management
standards.
 The group spans the industry with 160 member companies and organizations, and more than 4,000 active participants across 43 countries.
 The DMTF board of directors is led by 16 innovative, industry-
leading technology companies.
The Distributed Management Task Force
(DMTF)
 DMTF management standards are critical to enabling management interoperability
among multi vendor systems, tools and solutions within the enterprise.

 The DMTF started the Virtualization Management Initiative (VMAN).

 The Open Virtualization Format (OVF) is a fairly new standard that has emerged
within the VMAN Initiative.

 Benefits of VMAN include lowering the IT learning curve and lowering complexity for vendors implementing their solutions.
Standardized Approaches available to
Companies due to VMAN Initiative
 Deploy virtual computer systems
 Discover and take inventory of virtual computer systems
 Manage the life cycle of virtual computer systems
 Add/change/delete virtual resources
 Monitor virtual systems for health and performance
Standards for Application Developers
 The purpose of application development standards is to ensure
uniform, consistent, high-quality software solutions.

 Programming standards help to improve the readability of the
software, allowing developers to understand new code more
quickly and thoroughly.

 Commonly used application standards are available for the
Internet: in browsers, for transferring data, for sending messages,
and for securing data.
Standards for Browsers (Ajax)
 Using Ajax, a web application can request only the content that needs to be updated
in the web pages. This greatly reduces networking bandwidth usage and page load
times.

 Sections of pages can be reloaded individually.

 An Ajax framework helps developers to build dynamic web pages on the client side.
Data is sent to or from the server using requests, usually written in JavaScript.
(A minimal server-side sketch follows at the end of this slide.)

 ICEfaces is an open source Ajax framework developed as a Java product and
maintained at http://icefaces.org.
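The partial-update idea can be sketched on the server side with nothing but the Python standard library. This is a minimal illustration under stated assumptions, not part of ICEfaces or any other framework: the /fragment path and the payload field are invented for the example, and the browser half (an XMLHttpRequest or fetch call written in JavaScript) is omitted.

    # Minimal server-side sketch of an Ajax partial update. A script in
    # the browser would request /fragment and replace one section of the
    # page with the returned JSON instead of reloading the whole page.
    # The path name and payload field are illustrative assumptions.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FragmentHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/fragment":
                body = json.dumps({"stock_price": 101.25}).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)  # only the changed data is sent
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), FragmentHandler).serve_forever()

Because only the small JSON fragment crosses the network instead of a full page, bandwidth usage and load times drop, which is exactly the benefit described above.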
ICEfaces Ajax Application Framework
 ICEfaces is an integrated Ajax application framework that enables
Java EE application developers to easily create and deploy thin-client
rich Internet applications in pure Java.
 To run ICEfaces applications, users need to download and install the
following products:
 Java 2 Platform, Standard Edition
 Ant
 Tomcat
 ICEfaces
 Web browser (if you don’t already have one installed)
Security Features in ICEfaces Ajax
Application Framework
 ICEfaces is one of the most secure Ajax solutions available.

 It is compatible with the SSL (Secure Sockets Layer) protocol.

 It prevents cross-site scripting, malicious code injection, and
unauthorized data mining.

 ICEfaces does not expose application logic or user data.

 It is effective in preventing fake form submits and SQL (Structured Query
Language) injection attacks.
Data (XML, JSON)
 Extensible Markup Language (XML) allows users to define their own
markup elements.
 Its purpose is to enable sharing of structured data.
 XML is often used to describe structured data and to serialize objects.
 XML provides a basic syntax that can be used to share information
among different kinds of computers, different applications, and different
organizations without needing to be converted from one format to another.
(A short sketch follows below.)
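As a concrete illustration of that platform neutrality, the short Python sketch below (element names, attributes, and values are invented for the example) serializes a record to XML, parses it back, and shows the equivalent JSON form named in this slide's title.

    # Sketch: sharing structured data as XML (and JSON) using only the
    # Python standard library. All names and values are made up.
    import json
    import xml.etree.ElementTree as ET

    # Serialize a customer order to XML.
    order = ET.Element("order", id="1001")
    item = ET.SubElement(order, "item", sku="A17")
    item.text = "Cloud Computing Textbook"
    xml_bytes = ET.tostring(order, encoding="utf-8")
    print(xml_bytes.decode())

    # Any other application or organization can parse the same bytes
    # back without any platform-specific conversion.
    parsed = ET.fromstring(xml_bytes)
    print(parsed.get("id"), parsed.find("item").get("sku"))

    # The same record in JSON, the other format named in the slide title.
    print(json.dumps({"order": {"id": "1001", "item": {"sku": "A17"}}}))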
Solution Stacks (LAMP and LAPP)
 LAMP is a popular open source solution commonly used to run dynamic
web sites and servers.

 The acronym derives from the fact that it includes Linux, Apache,
MySQL, and PHP (or Perl or Python) and is considered by many to be the
platform of choice for development and deployment of high-performance
web applications which require a solid and reliable foundation.

 When used in combination, they represent a solution stack of
technologies that support application servers.
Linux, Apache, PostgreSQL, and PHP(or Perl
or Python) (LAPP)
 The LAPP stack is an open source web platform that can be used to
run dynamic web sites and servers. It is considered by many to be a
more powerful alternative to the more popular LAMP stack.

 LAPP offers SSL.

 Many consider the LAPP stack a more secure out-of-the-box
solution than the LAMP stack.
Standards for Messaging
 A message is a unit of information that is moved from
one place to another.
 The most common messaging standards used in the cloud
are:
 Simple Mail Transfer Protocol (SMTP)
 Post Office Protocol (POP)
 Internet Message Access Protocol (IMAP)
 Syndication (Atom, Atom Publishing Protocol, and
RSS)
 Communications (HTTP, SIMPLE, and XMPP)
Simple Mail Transfer Protocol
 Simple Mail Transfer Protocol (SMTP) is arguably the most important
protocol in use today for basic messaging. Before SMTP was created,
email messages were sent using the File Transfer Protocol (FTP).
 The FTP protocol was designed to transmit files, not messages, so it did
not provide any means for recipients to identify the sender or for the
sender to designate an intended recipient.
 SMTP was designed so that sender and recipient information could be
transmitted with the message.
 SMTP is a two-way protocol that usually operates using TCP
(Transmission Control Protocol) port 25. (A sending sketch follows below.)
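A minimal sending sketch using Python's standard smtplib is shown below. The host name and addresses are placeholders; a production server would typically require STARTTLS and authentication rather than an open connection on port 25.

    # Sketch: sending a message over SMTP with Python's smtplib.
    # Host, port, and addresses are placeholder assumptions.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "SMTP test"
    msg.set_content("Sender and recipient travel with the message, unlike FTP.")

    with smtplib.SMTP("mail.example.com", 25) as server:  # TCP port 25
        server.send_message(msg)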
Post Office Protocol (POP)
 SMTP can be used both to send and receive messages, but the client must have a constant
connection to the host to receive SMTP messages.

 The Post Office Protocol (POP) was introduced to circumvent this situation.

 POP is a lightweight protocol whose single purpose is to download messages from a
server. This allows a server to store messages until a client connects and requests them.

 Once the client connects, it downloads the messages, and the POP server subsequently
deletes them (a default setting) in order to make room for more messages. (A download
sketch follows below.)
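The download-and-delete behavior can be sketched with Python's standard poplib. The server name and credentials are placeholders, and the explicit dele() call mirrors the delete-after-download default described above.

    # Sketch: downloading messages over POP3 with Python's poplib.
    # Host and credentials are placeholder assumptions.
    import poplib

    pop = poplib.POP3("pop.example.com", 110)
    pop.user("alice")
    pop.pass_("secret")
    num_messages = len(pop.list()[1])
    for i in range(1, num_messages + 1):
        for line in pop.retr(i)[1]:   # download message i, line by line
            print(line.decode("latin-1"))
        pop.dele(i)                   # the delete-after-download default
    pop.quit()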
Internet Message Access Protocol
 Once mail messages are downloaded with POP, they are automatically deleted
from the server when the download process has finished.

 Many businesses have compulsory compliance guidelines that require saving
messages. Deletion also becomes a problem if users move from computer to
computer or use mobile networking, since their messages do not automatically
move with them.

 To get around these problems, a standard called Internet Message Access
Protocol (IMAP) was created. IMAP allows messages to be kept on the server but
viewed and manipulated (usually via a browser) as though they were stored
locally. (A sketch follows below.)
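The contrast with POP shows up in a short sketch using Python's standard imaplib: messages are listed and fetched, but they remain stored on the server. The host name and credentials are again placeholders.

    # Sketch: reading mail that stays on the server with IMAP, via
    # Python's imaplib. Host and credentials are placeholder assumptions.
    import imaplib

    imap = imaplib.IMAP4_SSL("imap.example.com")
    imap.login("alice", "secret")
    imap.select("INBOX")                      # messages remain on the server
    status, data = imap.search(None, "ALL")
    for num in data[0].split():
        status, msg_data = imap.fetch(num, "(RFC822)")
        print(len(msg_data[0][1]), "bytes in message", num.decode())
    imap.logout()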
Standards for Security
 Security standards define the processes, procedures, and practices
necessary for implementing a secure environment that provides
privacy and security of confidential information in a cloud
environment.
 Security protocols, used in the cloud are:
 Security Assertion Markup Language (SAML)
 Open Authorization (OAuth)
 OpenID
 SSL/TLS
Security Assertion Markup Language (SAML)
 SAML is an XML-based standard for communicating authentication, authorization,
and attribute information among online partners. It allows businesses to securely send
assertions between partner organizations regarding the identity and entitlements of a
principal.
 SAML allows a user to log on once for affiliated but separate Web sites. SAML
is designed for business-to-business (B2B) and business-to-consumer (B2C)
transactions.
 SAML is built on a number of existing standards, namely, SOAP, HTTP, and XML.
SAML relies on HTTP as its communications protocol and specifies the use of
SOAP.
 Most SAML transactions are expressed in a standardized form of XML. SAML
assertions and protocols are specified using XML schema.
Open Authorization (OAuth)
 OAuth is an open protocol, initiated by Blaine Cook and Chris Messina,
to allow secure API authorization in a simple, standardized method for
various types of web applications.
 OAuth is a method for publishing and interacting with protected
data.
 OAuth provides users access to their data while protecting account
credentials.
 OAuth by itself provides no privacy at all and depends on other protocols,
such as SSL, to accomplish that. (A usage sketch follows below.)
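Once a client has obtained an access token through an OAuth flow, presenting it is just an HTTP header. The sketch below uses Python's standard urllib; the endpoint URL and the token value are hypothetical, and the request is sent over HTTPS because, as noted above, OAuth relies on SSL/TLS for privacy.

    # Sketch: calling a protected API with an OAuth bearer token.
    # The URL and the token are hypothetical placeholders; obtaining
    # the token is defined by the OAuth authorization flows.
    import urllib.request

    token = "ACCESS_TOKEN_FROM_OAUTH_FLOW"    # placeholder
    req = urllib.request.Request(
        "https://api.example.com/v1/me/photos",  # hypothetical resource
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:   # HTTPS carries the privacy
        print(resp.status, resp.read()[:80])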
OpenID
 OpenID is an open, decentralized standard for user authentication and access
control that allows users to log onto many services using the same digital
identity.
 It is a single-sign-on (SSO) method of access control.
 It replaces the common log-in process (i.e., a log-in name and a password) by
allowing users to log in once and gain access to resources across participating
systems.
 An OpenID is in the form of a unique URL and is authenticated by the
entity hosting the OpenID URL.
SSL/TLS
 Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are
cryptographically secure protocols designed to provide security and data integrity for
communications over TCP/IP.
 TLS and SSL encrypt the segments of network connections at the transport layer.
 TLS provides endpoint authentication and data confidentiality by using
cryptography.
 TLS involves three basic phases (illustrated in the sketch below):
 Peer negotiation for algorithm support
 Key exchange and authentication
 Symmetric cipher encryption and message authentication
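All three phases happen inside the handshake that Python's standard ssl module performs when wrapping a socket. The sketch below connects to a public web host purely as an illustration.

    # Sketch: opening a TLS connection with Python's ssl module. The
    # handshake negotiates the cipher, exchanges keys and authenticates
    # the server certificate, then switches to symmetric encryption.
    import socket
    import ssl

    context = ssl.create_default_context()    # verifies the peer by default
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print("negotiated:", tls.version(), tls.cipher()[0])
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
            print(tls.recv(100))              # application data, now encrypted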
End-User Access to Cloud Computing
 In its most strict sense, end-user computing (EUC) refers to computer systems and
platforms that help non-programmers create applications. What is important is that
a well-designed EUC/VDI plan can allow users to access the digital platforms they need
to be productive, both on-premises and working remotely in the cloud.
 An End-User Computing application (EUC) is any application that is not managed and
developed in an environment that employs robust IT general controls. Although the
most pervasive EUCs are spreadsheets, EUCs can also include user databases, queries,
scripts, or output from various reporting tools.
 Broadly, end-user computing covers a wide range of user-facing resources, such as:
desktop and notebook end-user computers; desktop operating systems and
applications; wearables and smartphones; cloud, mobile, and web applications; and
virtual desktops and applications.
WHAT IS END-USER COMPUTING?
EUC = computer systems & platforms meant to allow non-programmers
to create working computer applications

3 Types of EUC

1. Traditional EUC = the end user merely uses computer systems &
applications created by developers

2. End-User Control = the user's department purchases package
applications & hardware for their use

3. End-User Development = the user is given a set of tools that let
them customize & create applications
Mobile Internet devices and the Cloud
 Mobile cloud computing uses cloud computing to deliver applications to mobile devices. These
mobile apps can be deployed remotely with speed and flexibility using remote development tools.
 Mobile cloud storage is a form of cloud storage that is accessible on mobile devices such as
laptops, tablets, and smartphones. Mobile cloud storage providers offer services that allow the
user to create and organize files, folders, music, and photos, similar to other cloud computing
models.
 The mobile cloud is Internet-based data, applications and related services accessed through
smartphones, laptop computers, tablets and other portable devices. Mobile cloud computing
is differentiated from mobile computing in general because the devices run cloud-based Web
apps rather than native apps.
 Locator apps and remote backup are two types of cloud-enabled services for mobile devices
 A mobile cloud app is a software program designed to be accessible via the Internet through
portable devices. There are many real-world examples of mobile cloud solutions, email being
the most familiar.
Hadoop (https://en.wikipedia.org/wiki/Apache_Hadoop)
 It is a collection of open-source software utilities that facilitates using a network of many
computers to solve problems involving massive amounts of data and computation.
 It provides a software framework for distributed storage and processing of big data using
the MapReduce programming model.
 Hadoop was originally designed for computer clusters built from commodity hardware, which
is still the common use. It has since also found use on clusters of higher-end hardware.
 All the modules in Hadoop are designed with a fundamental assumption that hardware
failures are common occurrences and should be automatically handled by the framework.
 The core of Apache Hadoop consists of a storage part, known as Hadoop Distributed File
System (HDFS), and a processing part which is a MapReduce programming model.
 Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then
transfers packaged code into nodes to process the data in parallel. This approach takes
advantage of data locality, where nodes manipulate the data they have access to.
 This allows the dataset to be processed faster and more efficiently than it would be in a more
conventional supercomputer architecture that relies on a parallel file system where
computation and data are distributed via high-speed networking.
Contd…
The base Apache Hadoop framework is composed of the following modules:
 Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
 Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity
machines, providing very high aggregate bandwidth across the cluster;
 Hadoop YARN – (introduced in 2012) a platform responsible for managing computing resources in
clusters and using them for scheduling users' applications;
 Hadoop MapReduce – an implementation of the MapReduce programming model for large-scale data
processing.
 Hadoop Ozone – (introduced in 2020) an object store for Hadoop.
 The term Hadoop is often used for both base modules and sub-modules and also the ecosystem, or
collection of additional software packages that can be installed on top of or alongside Hadoop, such as
Apache Pig, Apache Hive, Apache HBase, Apache Phoenix, Apache Spark, Apache ZooKeeper,
Cloudera Impala, Apache Flume, Apache Sqoop, Apache Oozie, and Apache Storm.
 Apache Hadoop's MapReduce and HDFS components were inspired by Google papers
on MapReduce and Google File System.
 The Hadoop framework itself is mostly written in the Java programming language, with some native code in
C and command line utilities written as shell scripts. Though MapReduce Java code is common, any
programming language can be used with Hadoop Streaming to implement the map and reduce parts of the
user's program. Other projects in the Hadoop ecosystem expose richer user interfaces.
MapReduce
 MapReduce is a programming model or pattern within the Hadoop framework that is
used to access big data stored in the Hadoop Distributed File System (HDFS). MapReduce
facilitates concurrent processing by splitting petabytes of data into smaller chunks and
processing them in parallel on Hadoop commodity servers.
 MapReduce is a programming model for processing large amounts of data in a parallel and
distributed fashion. It is useful for large, long-running jobs that cannot be handled within
the scope of a single request, tasks like:
 Analyzing application logs
 Aggregating related data from external sources
 Transforming data from one format to another
 Exporting data for external analysis
 App Engine MapReduce is a community-maintained, open source library that is built
on top of App Engine services, including Datastore and Task Queues. The library is
available on GitHub at these locations:
 Java source project.
 Python source project.
Contd…
 MapReduce is a software framework for easily writing
applications which process vast amounts of data (multi-terabyte
data-sets) in-parallel on large clusters (thousands of nodes) of
commodity hardware in a reliable, fault-tolerant manner.
 A MapReduce job usually splits the input data-set into
independent chunks which are processed by the map tasks in a
completely parallel manner.
 The framework sorts the outputs of the maps, which are then
input to the reduce tasks.
 Typically both the input and the output of the job are stored in a
file-system.
 The framework takes care of scheduling tasks, monitoring them,
and re-executing failed tasks. (A word-count sketch follows below.)
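Because Hadoop Streaming (mentioned earlier) allows the map and reduce parts to be written in any language, the classic word-count job can be sketched in Python as a pair of functions driven by standard input and output. The file name and the streaming invocation in the comment are illustrative, not prescriptive.

    # Word count for Hadoop Streaming, sketched in Python. The same file
    # acts as mapper or reducer depending on its first argument, and it
    # also runs stand-alone on ordinary stdin/stdout for testing.
    # Illustrative streaming invocation (paths are placeholders):
    #   hadoop jar hadoop-streaming.jar -input /data/in -output /data/out \
    #       -mapper "wordcount.py map" -reducer "wordcount.py reduce"
    import sys

    def mapper(lines):
        # Map phase: emit "word<TAB>1" for every word seen.
        for line in lines:
            for word in line.split():
                print(f"{word}\t1")

    def reducer(lines):
        # Reduce phase: the framework sorts map output by key, so all
        # counts for one word arrive consecutively and can be summed.
        current, count = None, 0
        for line in lines:
            word, _, n = line.rstrip("\n").partition("\t")
            if word != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = word, 0
            count += int(n)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        role = sys.argv[1] if len(sys.argv) > 1 else "map"
        (mapper if role == "map" else reducer)(sys.stdin)

Testing locally mirrors the framework's sort step: cat input.txt | python wordcount.py map | sort | python wordcount.py reduce.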
Contd…
 Typically the compute nodes and the storage nodes are the same, that is, the
MapReduce framework and the Hadoop Distributed File System are running on
the same set of nodes. This configuration allows the framework to effectively
schedule tasks on the nodes where data is already present, resulting in very
high aggregate bandwidth across the cluster.
 The MapReduce framework consists of a single master JobTracker and one
slave TaskTracker per cluster-node. The master is responsible for scheduling
the jobs' component tasks on the slaves, monitoring them and re-executing the
failed tasks. The slaves execute the tasks as directed by the master.
 Minimally, applications specify the input/output locations and
supply map and reduce functions via implementations of appropriate interfaces
and/or abstract-classes. These, and other job parameters, comprise the job
configuration.
 The Hadoop job client then submits the job (jar/executable etc.) and
configuration to the JobTracker which then assumes the responsibility of
distributing the software/configuration to the slaves, scheduling tasks and
monitoring them, providing status and diagnostic information to the job-client.
VirtualBox
 VirtualBox is a general-purpose Type-2 hypervisor virtualization tool for x86 and x86-64
hardware developed by Oracle Corp., targeted at server, desktop, and embedded use,
that allows users and administrators to easily run multiple guest operating systems on a single
host.
 VirtualBox was originally created by Innotek GmbH, which was acquired by Sun
Microsystems in 2008, which was in turn acquired by Oracle in 2010.
 VirtualBox may be installed on Microsoft Windows, MacOS, Linux, Solaris and OpenSolaris.
There are also ports to FreeBSD and Genode.
 It supports the creation and management of guest virtual machines running Windows,
Linux, BSD, OS/2, Solaris, Haiku, and OSx86, as well as limited virtualization of
macOS guests on Apple hardware. For some guest operating systems, a "Guest Additions"
package of device drivers and system applications is available, which typically improves
performance, especially that of graphics, and allows changing the resolution of the guest
OS automatically when the window of the virtual machine on the host OS is resized.
Google App Engine
 Google App Engine (often referred to as GAE or simply App Engine) is a cloud
computing platform as a service for developing and hosting web applications
in Google-managed data centers. Applications are sandboxed and run across
multiple servers.
 Google App Engine, which is a platform-as-a-service (PaaS) offering that gives software
developers access to Google's scalable hosting.
 Major features of Google App Engine in cloud computing:
 Collection of development languages & tools
 Fully managed
 Pay-as-you-go
 Effective diagnostic services
 Traffic splitting
 All-time availability
 Faster time to market
 Easy-to-use platform

 An App Engine web application can be described as having three major parts:
application instances, scalable data storage, and scalable services.
Programming Environment for Google App Engine
 Google App Engine (often referred to as GAE or simply App Engine) is a cloud computing
platform as a service for developing and hosting web applications in Google-managed
data centers.
 Applications are sandboxed and run across multiple servers. App Engine offers automatic
scaling for web applications—as the number of requests increases for an application, App
Engine automatically allocates more resources for the web application to handle the
additional demand.
 Google App Engine primarily supports Go, PHP, Java, Python, Node.js, .NET, and
Ruby applications, although it can also support other languages via "custom runtimes".
The service is free up to a certain level of consumed resources and only in standard
environment but not in flexible environment. Fees are charged for additional storage,
bandwidth, or instance hours required by the application. It was first released as a preview
version in April 2008 and came out of preview in September 2011.
The environment you choose depends on the language and related technologies you want to use.
Contd…
Runtimes and framework
 Google App Engine primarily supports Go, PHP, Java, Python, Node.js, .NET,
and Ruby applications, although it can also support other languages via "custom runtimes".
 Python web frameworks that run on Google App Engine include
Django, CherryPy, Pyramid, Flask, web2py and webapp2, as well as a custom Google-written
webapp framework and several others designed specifically for the platform that emerged
since the release.
 Any Python framework that supports WSGI using the CGI adapter can be used to create
an application; the framework can be uploaded with the developed application. Third-party
libraries written in pure Python may also be uploaded. (A minimal WSGI handler is
sketched at the end of this slide.)
 Google App Engine supports many Java standards and frameworks. Core to this is the
servlet 2.5 technology using the open-source Jetty Web Server, along with accompanying
technologies such as JSP. JavaServer Faces operates with some workarounds. A newer
release of App Engine Standard Java in beta supports Java 8, Servlet 3.1, and Jetty 9.
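A minimal pure-WSGI handler, the interface all of those Python frameworks ultimately sit on, is sketched below. Deployment specifics (app.yaml, the App Engine tooling) are omitted; the local server at the bottom uses only the standard library so the sketch can be run as-is.

    # Minimal WSGI application. App Engine's Python runtimes (and the
    # frameworks listed above) ultimately serve handlers with exactly
    # this callable signature; deployment configuration is omitted here.
    def application(environ, start_response):
        body = b"Hello from a WSGI app"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        # Local test server from the standard library only.
        from wsgiref.simple_server import make_server
        make_server("localhost", 8080, application).serve_forever()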
Contd…
 Though the integrated database, Google Cloud Datastore, may be unfamiliar to
programmers, it is accessed and supported with JPA, JDO, and by the simple
low-level API.
 There are several alternative libraries and frameworks you can use to model and
map the data to the database such as Objectify, Slim3 and Jello framework.
 The Spring Framework works with GAE. However, the Spring Security module
(if used) requires workarounds. Apache Struts 1 is supported, and Struts 2
runs with workarounds.
 The Django web framework and applications running on it can be used on App
Engine with modification.
 Django-nonrel aims to allow Django to work with non-relational databases, and
the project includes support for App Engine.