
DSCC NOTES

What is Cloud Computing


 A model enabling convenient, on-demand network access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, services).
 Resources can be rapidly provisioned and released with minimal management effort.
Benefits of Cloud Computing
 Faster time to market: Enables rapid deployment of new instances, allowing developers to
accelerate development and innovate without hardware limitations or slow procurement processes.
 Scalability and flexibility: Easily scale resources and storage up or down to meet business demands
without investing in physical infrastructure.
 Cost savings: Pay only for the resources you use, avoiding overbuilding and overprovisioning, while
freeing up IT teams for more strategic work.
 Better collaboration: Access data anytime, anywhere, from any device with an internet connection,
improving accessibility and collaboration.
 Advanced security: Cloud computing enhances security through advanced features, automatic
maintenance, and centralized management, backed by top security experts and solutions.
 Data loss prevention: Backup and disaster recovery features help protect against data loss due to
hardware failure, malicious threats, or user error.
Limitations of Cloud Computing
 Internet dependency: A bad connection can hinder access to data or applications, unlike traditional
computing with a hardwired connection.
 Downtime risk: Cloud services may experience downtime due to natural disasters or unforeseen
technical issues affecting connectivity.
 Other disadvantages:
o Risk of vendor lock-in
o Less control over cloud infrastructure
o Concerns about security risks, such as data privacy and online threats
o Integration challenges with existing systems
o Unforeseen costs and unexpected expenses
Components of Cloud Computing
 Client
o Access device or software interface used by users to access cloud services.
o Can include resources like processor, memory, OS, database, middleware, and applications
for user tasks and processing.
o Categories of clients:
 Mobile clients
 Thin clients: Rely on network connections for computing; minimal hardware
processing (e.g., Google Docs, web apps, Yahoo Messenger).
 Thick clients: Operate without a server or network, capable of processing, storing,
and managing data independently.
 Cloud Network
o Acts as the connecting link between the user and cloud services.
o Internet is the most widely used network.
o Requires advanced features like encryption and compression during data transmission.
 Cloud API
o A set of programming instructions and tools providing abstraction over specific cloud
providers.
Cloud API
A Cloud API (Application Programming Interface) is a set of protocols, tools, and definitions that allow
software applications to interact with cloud services. It acts as a bridge for communication between the
software and the cloud infrastructure, enabling developers to integrate, manage, and manipulate cloud-based
resources and services.
Cloud APIs provide a way to access and manage cloud resources, such as storage, compute power,
databases, and networking, programmatically. This allows applications to automate tasks like data storage,
compute scaling, and service management without manual intervention.
Here are some key points about Cloud APIs:
 Access Cloud Services: They allow applications to interact with cloud platforms like Amazon Web
Services (AWS), Microsoft Azure, Google Cloud, etc.
 Automation: Automate infrastructure provisioning, scaling, and management.
 Integration: Enable the integration of third-party services with cloud platforms.
 Security: Cloud APIs often include authentication and authorization protocols to ensure secure
access to cloud resources.
Examples include APIs for cloud storage (like Amazon S3), cloud computing (like AWS EC2), and cloud
databases (like Google Cloud SQL).
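To make the idea concrete, the following is a minimal sketch of calling a cloud storage API through a provider SDK. It assumes the boto3 AWS SDK is installed and credentials are configured; the bucket name is a placeholder.

```python
# Minimal sketch (assumes the boto3 AWS SDK and configured credentials, e.g.
# via environment variables or ~/.aws/credentials; the bucket name is hypothetical).
import boto3

s3 = boto3.client("s3")                      # create an S3 API client

# List buckets owned by the account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Store an object programmatically -- the kind of task a Cloud API automates.
s3.put_object(Bucket="example-notes-bucket",
              Key="hello.txt",
              Body=b"Hello from a Cloud API call")
```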
Explain cloud computing characteristics
 On-Demand Self Service:
A consumer can provision computing capabilities automatically using a simple interface, without
requiring human interaction with the service provider.
 Heterogeneous Access:
Capabilities are available over the network and accessed through standard mechanisms, promoting
use by heterogeneous devices, such as thin or thick client platforms.
 Resource Pooling:
The provider's computing resources are pooled to serve multiple consumers using a multi-tenant
model.
Physical and virtual resources are dynamically assigned and reassigned according to consumer
demand.
 Rapid Elasticity:
Resources can be accessed and scaled up when needed, and scaled back when not required, as they
are elastically provisioned and released.
 Measured Service:
Cloud systems control and optimize resource usage by leveraging metering capabilities.
Users pay only for what they use or reserve, with transparent monitoring, measurement, and
reporting of resource utilization.
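The metering idea can be shown with a small pay-as-you-go calculation; the unit rates below are made-up illustrations, not any provider's actual pricing.

```python
# Hypothetical pay-as-you-go bill: rates and usage figures are illustrative only.
RATES = {
    "vcpu_hours": 0.04,   # price per vCPU-hour
    "gb_storage": 0.02,   # price per GB-month of storage
    "gb_egress": 0.09,    # price per GB of outbound traffic
}

usage = {"vcpu_hours": 720, "gb_storage": 50, "gb_egress": 12}

# Metering: charge only for what was actually consumed.
bill = sum(RATES[item] * amount for item, amount in usage.items())
print(f"Monthly charge: ${bill:.2f}")   # 720*0.04 + 50*0.02 + 12*0.09 = 30.88
```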

Grid Computing
 Combines computer resources from different geographical locations to achieve a common goal.
 Pools unused resources across multiple computers for a single task.
 Used by organizations to perform large tasks or solve complex problems.
 Example: Meteorologists use grid computing for weather modeling, which requires complex data
management and analysis.
 Enables faster processing of computation-intensive tasks like weather modeling over geographically
dispersed systems.
Common Applications of Grid Computing:
 Financial Services: Used for risk management; shortens forecasting duration in volatile markets by
leveraging combined computing power.
 Gaming: Allocates large tasks like in-game design creation to multiple machines, resulting in faster
development turnaround.
 Entertainment: Speeds up production timelines for special effects in movies by sharing
computational resources across the grid.
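The core grid idea of splitting one large job into chunks can be sketched with Python's multiprocessing pool; in a real grid the workers would be geographically dispersed nodes rather than local processes.

```python
# Sketch: split one large task into chunks and process them in parallel.
# A real grid would dispatch chunks to remote nodes; local processes stand in here.
from multiprocessing import Pool

def analyse_chunk(chunk):
    """Stand-in for a compute-intensive step (e.g., part of a weather model)."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    with Pool() as pool:                     # one worker per available core
        partial_results = pool.map(analyse_chunk, chunks)

    print("combined result:", sum(partial_results))
```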
Utility Computing
 Originated in the 1960s with time-sharing provided by mainframe manufacturers.
 Mainframe providers offered database storage and computing power to banks and other large organizations.
 Tracks resources like CPU cycles, storage, and network data transfer, billing consumers based on
usage.
 Cloud computing extends this model to include software applications, licenses, and self-service
portals under a metered pay-as-you-go system.
Client-Server Architecture
 A computing model where the server hosts, delivers, and manages resources and services requested
by the client over a network.
 Example: In hospitals, client computers handle patient information input while server computers
manage database storage.
 Concentrates processing power and administrative functions at the server while enabling clients to
perform basic tasks.
 Requires additional investment for rapid deployment of resources during demand spikes.
 Cloud computing enhances this model with increased performance, flexibility, cost savings, and
responsibility for application hosting by the cloud provider.
 Offers consumers virtually infinite resources on demand.
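A minimal sketch of the client-server model using Python's standard socket module: the server hosts and processes the request, while the client performs only a basic send/receive task. The address and messages are arbitrary.

```python
# Minimal client-server sketch: the server owns the processing, the client only
# sends a request and reads the reply. Run the server first, then the client.
import socket

HOST, PORT = "127.0.0.1", 5000   # arbitrary local address for the demo

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()           # wait for one client
        with conn:
            request = conn.recv(1024)        # e.g., a patient-record lookup key
            conn.sendall(b"server processed: " + request)

def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"record #42")
        print(cli.recv(1024).decode())
```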
Types of CC deployment models
1. Public Cloud
 Public clouds are managed by third parties that provide cloud services over the internet to the
public; these services are offered on pay-as-you-go billing models.
 They offer solutions for minimizing IT infrastructure costs and are a good option for handling
peak loads on the local infrastructure. Public clouds are the go-to option for small enterprises, which
can start their businesses without large upfront investments by completely relying on public
infrastructure for their IT needs.
 The fundamental characteristic of public clouds is multitenancy. A public cloud is meant to serve
multiple users, not a single customer. Each user requires a virtual computing environment that is
separated, and most likely isolated, from other users.
Examples: Amazon EC2, IBM, Azure, GCP
2. Private cloud
 Private clouds are distributed systems that work on private infrastructure and provide users with
dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may use
other schemes that manage cloud usage and bill the different departments or sections of an enterprise
proportionally. Private cloud providers include HP Data Centers, Ubuntu, Elastic-Private cloud,
Microsoft, etc.
Examples: VMware vCloud Suite, OpenStack, Cisco Secure Cloud, Dell Cloud Solutions, HP Helion
Eucalyptus
3. Hybrid cloud
 A hybrid cloud is a heterogeneous distributed system formed by combining facilities of the public
cloud and private cloud. For this reason, they are also called heterogeneous clouds.
 A major drawback of private deployments is the inability to scale on-demand and efficiently address
peak loads. Here public clouds are needed. Hence, a hybrid cloud takes advantage of both public and
private clouds.
 Examples: AWS Outposts, Azure Stack, Google Anthos, IBM Cloud Satellite, Oracle Cloud at
Customer
4. Community Cloud
 Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. But sharing
responsibilities among the organizations is difficult.
 In the community cloud, the infrastructure is shared between organizations that have shared concerns
or tasks. An organization or a third party may manage the cloud.
 Examples: CloudSigma, Nextcloud, Synology C2, OwnCloud, Stratoscale
Cloud computing service models

Infrastructure as a Service (IaaS) is a means of delivering computing infrastructure as an on-demand service.

It is one of the three fundamental cloud service models. The user rents servers, software, data center
space, or network equipment through a fully outsourced, on-demand service model. It allows dynamic
scaling, and the resources are distributed as a service. It generally hosts multiple users on a single
piece of hardware. It is up to the customer to choose resources wisely and as per need. Billing
management is also provided.
Example of IAAS (Infrastructure As A Service)
 Amazon Web Services
 Microsoft Azure
 Google Compute Engine
 Digital Ocean
Platform as a Service (PaaS) is a cloud delivery model for applications composed of services managed by
a third party. It provides elastic scaling of applications, allowing developers to build applications
and services over the internet; its deployment models include public, private, and hybrid. Basically, it is
a service where a third-party provider supplies both the software and hardware tools needed for cloud
application development. The tools provided are used by developers. PaaS is also known as application
PaaS (aPaaS). It helps organize and maintain useful applications and services. It has a well-equipped
management system and is less expensive compared to IaaS.
Examples of PAAS (Platform as a Service)
 AWS Lambda
 Google App Engine
 Google Cloud
 IBM Cloud
Software as a Service (SaaS) allows users to run existing online applications. It is a model in which
software is deployed as a hosted service and accessed over the internet; the software and its associated
data are hosted centrally and accessed by clients, usually a web browser. SaaS services are used for the
development and deployment of modern applications. They allow software and its functions to be accessed
from anywhere with a device, a browser, and a good internet connection. The application is hosted
centrally and provides access to multiple users across various locations via the internet.
Example of SAAS (Software as a Service)
 Salesforce
 Google Workspace apps
 Microsoft 365
 Adobe Creative Cloud

IaaS (Infrastructure as a Service)


 Also known as Hardware as a Service (HaaS).
 Provides outsourced IT infrastructure like servers, networking, processing, storage, virtual machines,
and other resources.
 Users pay for services based on usage.
 Eliminates the need for organizations to maintain IT infrastructure.
 Offered in three models: public, private, and hybrid cloud.
 Services provided by IaaS providers:
o Compute: Virtual CPUs and memory for VMs.
o Storage: Backend storage for files.
o Network: Networking components like routers, switches, and bridges.
o Load Balancers: Infrastructure-level load balancing.
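As an illustration of IaaS provisioning through an API, the following hedged sketch launches and terminates a virtual server with boto3; the AMI ID is a placeholder and credentials are assumed to be configured.

```python
# Sketch: provisioning IaaS compute through the provider's API (assumes boto3
# and AWS credentials; the AMI ID below is a placeholder, not a real image).
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # virtual CPUs + memory (the "Compute" service)
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Pay only while the instance runs; release it when no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```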
PaaS (Platform as a Service)
 Provides a runtime environment for developing, testing, and deploying web applications.
 Backend scalability is managed by the service provider.
 Includes infrastructure and platform to support the web application lifecycle.
 Examples: Google App Engine, Force.com, Azure.
 Services provided by PaaS providers:
o Programming Languages: Java, PHP, Ruby, Perl, Go, etc.
o Application Frameworks: Node.js, Drupal, WordPress, Spring, etc.
o Databases: ClearDB, PostgreSQL, MongoDB, Redis, etc.
o Other Tools: Tools required for application development and deployment.
 Advantages of PaaS:
o Simplifies development.
o Lower risk due to no upfront hardware or software investment.
o Prebuilt business functionality.
o Access to community support.
o Scalable applications.
 Disadvantages of PaaS:
o Vendor lock-in.
o Potential data privacy concerns.
o Integration complexity with local applications.
SaaS (Software as a Service)
 Known as "On-Demand Software."
 Services are hosted by a cloud service provider and accessed via the internet.
 No need for end-users to install software.
 Examples: Slack, Box, Zoho Forms, Samepage.
 Services provided by SaaS providers:
o Business Services: ERP, CRM, billing, sales.
o Document Management: Software for creating, managing, and tracking documents.
o Social Networks: Cloud-based social networking services.
o Mail Services: E-mail services to handle large user loads.
 Advantages of SaaS:
o Easy to purchase with subscription pricing.
o One-to-many model allows shared use.
o Low hardware requirements.
o Minimal maintenance.
o No special software or hardware needed.
o Multidevice support.
o Easy API integration.
o No client-side installation required.
 Disadvantages of SaaS:
o Security concerns with cloud storage.
o Potential latency issues.
o Complete dependency on the internet.
o Difficult to switch vendors due to data transfer and conversion challenges.
Impact of CC on business
• Significantly reduces operational costs
• Enables better collaboration & teamwork
• Ensures data security
• Promises reliable continuity for businesses
• Provides better scalability

Key drivers in CC
 Security:
o Cloud adoption helps businesses enhance security against increasing cyber threats, including
sophisticated phishing and malware attacks.
o It provides a secure platform, making it a key driver for businesses migrating to the cloud.
 Cost Saving:
o Reduces capital expenditure (CapEx) by eliminating the need for costly hardware, storage,
and network devices.
o Pay-per-use model allows businesses to pay only for what they consume, saving significant
costs.
 Efficiency:
o Streamlines processes by eliminating unnecessary steps, increasing productivity, and
improving customer delivery times.
 Flexibility and Scalability:
o Cloud services scale with business growth, allowing businesses to expand resources without
the need for costly infrastructure investments.
o Provides flexibility to adjust storage and capabilities as needed.
 Rapid Recovery:
o Cloud backups store data across multiple centers, ensuring quick recovery in case of disaster,
unlike on-premises solutions which require costly infrastructure replacements.
 Increased Convenience:
o Cloud-based storage offers easy access to files from anywhere, enhancing employee
productivity and focusing on business growth.
 Speed and Productivity:
o Cloud services enable faster application deployment, reducing the time from weeks or
months to just hours, thereby boosting productivity.
 Strategic Value:
o Cloud migration offers businesses a competitive edge by providing innovative technologies
and quick solutions to customers, improving agility and customer satisfaction.
 Multi-tenancy:
o Cloud infrastructure allows multiple customers to share resources without compromising
privacy and security.
 Service and Innovation:
o Cloud enables businesses to leverage various services, APIs, and tools to develop new,
innovative applications and processes.

When to avoid public cloud


 Data Sensitivity and Privacy Concerns:
o If your business deals with highly sensitive or confidential information (such as financial,
healthcare, or personal data), and you cannot ensure full control over the data, a private cloud
or on-premises infrastructure may be more suitable to meet regulatory or privacy
requirements.
 Regulatory and Compliance Issues:
o When your business is subject to strict regulatory requirements (e.g., HIPAA, GDPR), and
there are limitations on where or how your data can be stored or accessed, public cloud
services may not provide the necessary compliance guarantees for your industry.
 Latency and Performance Requirements:
o If your application requires extremely low latency or has high performance demands (such as
real-time analytics or gaming), using a public cloud might introduce unpredictable delays due
to shared infrastructure. In such cases, private infrastructure or edge computing might be
more effective.
 Control and Customization Needs:
o Public clouds may not offer the level of control or customization required for some complex
or highly specialized applications. If you need specific configurations, custom hardware, or
deep control over your environment, private clouds or on-premises solutions might be better
suited.
 Cost Constraints with High Usage:
o While public clouds can save costs initially, businesses with consistently high usage or large-
scale workloads might find it more cost-effective to invest in on-premises infrastructure due
to the ongoing costs of public cloud services.
 Security Concerns with Multi-Tenancy:
o Public cloud services typically operate on a multi-tenant model, where resources are shared
between different clients. For businesses that require a higher level of isolation or have
concerns over the potential for data leakage, private cloud or dedicated hosting might be
preferable.
 Dependency on Internet Connectivity:
o Public clouds require reliable and high-speed internet connectivity. If your business operates
in regions with poor or inconsistent internet access, relying on the cloud could result in
service disruptions. In such cases, an on-premises solution or hybrid cloud could be a better
fit.
 Long-term Data Storage and Archiving Needs:
o If your business needs to retain large amounts of data over a long period, and accessing it
regularly isn't required, on-premises or private cloud solutions may offer more predictable
and lower-cost storage options compared to the variable costs of public cloud storage.
 Vendor Lock-In Concerns:
o If you're concerned about becoming too dependent on a specific cloud provider (vendor lock-
in), using public cloud services could make it difficult to switch to other platforms without
significant reconfiguration or data migration. A hybrid or private cloud model might offer
more flexibility in avoiding this issue.

Virtualization
Hypervisor
 It is a virtual machine manager/monitor.
 A program that allows a single hardware platform to be shared by multiple virtual machines.
 Each virtual machine, with its guest OS, acquires the host's processor, memory, and other resources.
 Acts as a controller that isolates the virtual machines so each can operate with a separate OS.
Virtual machines
 A virtual machine (VM) is a virtual computer system.
 It is a tightly isolated s/w container with an OS and applications inside.
 Each self-contained VM is completely independent.
 Putting multiple VMs on a single computer enables several OSes and applications to run on just one
physical server or host.
Properties of VM:
o Partitioning: run multiple OSes on one physical machine; divide system resources between VMs.
o Isolation: provide fault and security isolation at the h/w level; preserve performance with advanced
resource controls.
o Encapsulation: save the entire state of a VM to files; move and copy a VM as easily as moving and
copying files.
o H/w independence: provision or migrate any VM to any physical server.
 Virtualization creates a virtual version of an underlying service, allowing multiple operating systems
and applications to run on the same machine and hardware simultaneously.
 Initially developed during the mainframe era, it increases hardware utilization and flexibility.
 Virtualization is a cost-effective, hardware-reducing, and energy-saving technique widely used by
cloud providers.
 It enables sharing of a single physical resource or application instance among multiple customers and
organizations.
 Resources are virtualized by assigning logical names to physical storage and providing pointers to
physical resources on demand.
 Virtualization is synonymous with hardware virtualization, fundamental to delivering Infrastructure-
as-a-Service (IaaS) in cloud computing.
 Virtualization technologies create virtual environments for executing applications, storage, memory,
and networking.
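The hypervisor's management role described above can also be scripted. Below is a hedged sketch using the libvirt Python bindings to query the guest VMs on a local QEMU/KVM host; the connection URI may differ on other setups.

```python
# Sketch: querying a hypervisor about its guest VMs via libvirt
# (assumes libvirt-python and a local QEMU/KVM host; the connection URI may differ).
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():        # every defined guest VM
        state, maxmem, mem, vcpus, cputime = dom.info()
        print(f"{dom.name()}: vCPUs={vcpus}, memory={mem // 1024} MiB, state={state}")
finally:
    conn.close()
```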

Benefits of virtualization:
o Reduced capital & operating costs
o More flexible and efficient allocation of resources.
o Enhance development productivity.
o It lowers the cost of IT infrastructure.
o Remote access and rapid scalability.
o High availability and disaster recovery.
o Pay-per-use of the IT infrastructure on demand.
o Enables running multiple operating systems.

VMM (Virtual machine monitor) Design Requirements and Providers:


 A VMM must fulfill three primary design requirements:
o Environment replication: Provide an environment for programs that is nearly identical to
the original machine.
o Performance preservation: Ensure that programs run in this environment with, at most,
minor speed reductions.
o System control: Maintain complete control over system resources. Programs should function
the same as when run directly on the original machine.
 Two exceptions to these requirements are permissible:
o Differences due to system resource availability when multiple virtual machines run on the
same hardware.
o Differences caused by timing dependencies.
Types of Virtualization:
 Emulation
o From one existing hardware and software system, a completely different hardware and
software system can be emulated (software simulation of hardware).
o Allows different guest OS to run on emulated virtual systems.
o Translates every machine (guest) instruction to native (host) machine instruction, resulting in
significant slowdowns.
o Offers a lot of flexibility.
 Full/Native Virtualization
o The emulated processor is identical to the underlying processor, avoiding instruction
translation.
o Can run applications and OS designed for the same underlying hardware.
o Enables faster operations compared to emulation.
o Examples include VMware Workstation and VirtualBox.
o In emulation and full virtualization, unmodified OS and applications can run on the emulated
hardware.
 Paravirtualization
o Modifies the OS to better support virtualization.
o Applications remain unmodified.
o Requires the use of special APIs that the modified guest OS must use.
o Hypercalls are trapped by the hypervisor and serviced.
o These techniques (emulation, full virtualization, and paravirtualization) essentially emulate
the hardware layer and processor.
Hardware level Virtualization
 Definition:
Hardware virtualization creates virtual versions of physical desktops and operating systems using a
hypervisor or Virtual Machine Manager (VMM) to share hardware resources efficiently.
 Components of Hardware Virtualization:
o Hardware Layer (Virtualization Host): Contains physical server components like CPU,
memory, network, and disk drives. Requires an x86-based system with one or more CPUs.
o Hypervisor: Creates a virtualization layer between the OS and hardware, enabling multiple
operating systems or instances to run in parallel. Isolates applications and operating systems
from hardware and virtual machines.
o Virtual Machines (VMs): Software emulations of computing hardware providing the
functionalities of physical computers, consisting of virtual hardware, guest OS, and guest
applications.
 Working of Hardware Virtualization:
o A hypervisor manages shared hardware resources, creating an abstraction layer between
software and hardware.
o The hypervisor splits the physical hardware into multiple environments or VMs, allocating
resources like CPU, memory, and storage as needed.
o Enables the full utilization of a machine’s capacity, enhances security by isolating VMs, and
supports server virtualization.
 Properties of Hardware-Level Virtualization:
o Supports running multiple operating systems and applications simultaneously.
o Eliminates the need for system reboot or dual boot setups.
o Simulates multiple independent machines, each functioning as a normal system.
o Offers high isolation.
o Less risky to implement and easy to maintain.
 Issues in Hardware-Level Virtualization:
o Installation and administration can be time-consuming before testing or running applications.
o Duplication of efforts and reduced efficiency when physical and virtual OS are the same.
o These issues can be mitigated by implementing OS-level virtualization.
Virtualization at OS level
 Virtualization at OS Level
o Involves sharing of hardware and the operating system.
o Separates the physical machine from the logical structure through a virtualization layer,
comparable to a Virtual Machine Monitor (VMM).
o This layer, built on the base OS, enables access to multiple isolated and independent
machines.
o Known as middleware support for virtualization.
o Keeps OS, application-specific data structures, user-level libraries, environmental settings,
and other requisites separate.
o Ensures applications cannot distinguish between real and virtual environments.
o Replicates the operating environment of the physical machine to create Virtual Environments
(VEs) by partitioning virtual systems as needed.
o Operating environments are isolated from the physical machine and from each other.
 Features of OS-Level Virtualization
o Resource Isolation: Provides dedicated resources for each container, such as CPU, memory,
and I/O bandwidth.
o Lightweight: Shares the host OS, enabling faster startups and reduced resource usage.
o Portability: Easily moved between environments without modifying the application.
o Scalability: Scales applications up or down based on demand.
o Security: Isolates applications from the host OS and other containers.
o Reduced Overhead: Avoids emulating a full hardware environment, minimizing resource
overhead.
o Easy Management: Simplifies operations through basic commands to start, stop, and monitor
containers.
 Advantages of OS-Level Virtualization
o Resource Efficiency: Reduces resource overhead by avoiding full hardware emulation.
o High Scalability: Quickly adapts to workload changes by scaling containers.
o Easy Management: Streamlines deployment and maintenance with simple command-based
operations.
o Reduced Costs: Requires fewer resources and infrastructure compared to traditional VMs.
o Faster Deployment: Accelerates launching and updating applications.
o Portability: Easily transitions containers across environments without application changes.
 Disadvantages of OS-Level Virtualization
o Security Risks: A breach in one container could affect others due to shared host OS.
o Limited Isolation: May lead to performance issues or resource contention.
o Complexity: Setup and management require specialized expertise.
o Dependency Issues: Potential compatibility problems with other containers or the host OS.
o Limited Hardware Access: Restricts tasks requiring direct access to hardware resources.
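A hedged sketch of the container approach using the Docker SDK for Python (assuming a running Docker Engine and the docker package; the image and names are placeholders): each container shares the host kernel yet behaves like an independent server.

```python
# Sketch: OS-level virtualization with containers, using the Docker SDK for
# Python (assumes a running Docker Engine and the `docker` package).
import docker

client = docker.from_env()                       # talk to the local Docker daemon

# Each container shares the host kernel but gets its own filesystem,
# process table, and network settings.
container = client.containers.run(
    "nginx:alpine",                              # public web-server image
    name="notes-demo-web",
    ports={"80/tcp": 8080},                      # map container port 80 to host 8080
    detach=True,
)

print(container.name, container.status)
container.stop()                                 # simple lifecycle management
container.remove()
```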

Hosted Virtualization vs Bare-Metal Virtualization

 Architecture:
o Hosted: Installed on top of a base operating system (host OS).
o Bare-metal: Hypervisor communicates directly with system hardware, bypassing the need for a host OS.
 I/O Access:
o Hosted: Limited subset of I/O devices available to virtual machines; I/O requests pass through the host OS.
o Bare-metal: Direct communication with hardware; supports partitioning and emulation of shared I/O devices.
 Performance:
o Hosted: Performance may degrade because I/O requests are routed through the host OS.
o Bare-metal: Improved I/O performance; suitable for real-time operating systems with deterministic performance.
 Ease of Installation:
o Hosted: Easy to install and configure; can run on a wide variety of PCs without customization.
o Bare-metal: More difficult to install and configure; requires inclusion of low-level drivers in the hypervisor.
 Device Support:
o Hosted: Provides emulation for generic devices (e.g., network cards, CD-ROM drives); lacks support for non-generic PCI I/O devices.
o Bare-metal: Requires drivers for various hardware and emulation of shared devices; supports advanced PCI devices like data acquisition boards.
 Use Cases:
o Hosted: Useful for testing beta software, running legacy applications, and quick access to different operating systems.
o Bare-metal: Ideal for deployed applications requiring real-time data processing and simultaneous use of general-purpose OS services.
 Real-Time OS Support:
o Hosted: Lacks support for real-time operating systems due to host OS scheduling.
o Bare-metal: Supports real-time operating systems with features for bounding interrupt latency and deterministic performance.
 Example Scenarios:
o Hosted: Engineers testing software on different OSes without a dedicated machine; running legacy apps on modern hardware.
o Bare-metal: Applications needing simultaneous real-time and general-purpose OS capabilities, such as real-time data processing with graphical interfaces.

Binary Translation with Full Virtualization


 Binary Translation with Full Virtualization
o Hardware virtualization can be categorized into full virtualization and host-based
virtualization.
o Full virtualization does not require modification of the host operating system.
o Relies on binary translation to trap and virtualize sensitive, nonvirtualizable instructions.
o Noncritical instructions run directly on hardware, while critical instructions are trapped and
emulated by the Virtual Machine Monitor (VMM).
o Binary translation can cause significant performance overhead, so only critical instructions
are trapped into the VMM.
o Noncritical instructions are harmless to system security and improve efficiency when
executed on hardware.
o VMware and other companies implement this by placing the VMM at Ring 0 and the guest
OS at Ring 1.
o The VMM scans for privileged, control-, and behavior-sensitive instructions, traps them, and
emulates their behavior using binary translation.
o Full virtualization combines binary translation with direct execution, decoupling the guest OS
from the underlying hardware.
o The guest OS remains unaware of virtualization.
 Hypervisor Mode
o x86 CPUs have multiple protection levels (rings) for code execution.
o Ring 0 offers the highest privilege, where the operating system kernel runs in kernel mode.
o Applications typically run in less privileged rings, like Ring 3.
o In hypervisor virtualization, the hypervisor (Type 1 Virtual Machine Monitor or VMM)
operates directly on host system hardware at Ring 0.
o The hypervisor manages resource and memory allocation for virtual machines and provides
interfaces for administration and monitoring.
o Guest operating system kernels, designed to run in Ring 0, must operate in less privileged
CPU rings under hypervisor virtualization.
o Challenges arise because OS kernels need access to privileged CPU instructions and memory
manipulation.
o Various solutions, including paravirtualization, address this issue.
Para-Virtualization with Compiler Support
 Para-virtualization involves modifying the guest operating system's kernel.
 It provides special APIs, requiring substantial OS modifications in user applications.
 Performance degradation in virtualized systems can discourage usage compared to physical
machines.
 The virtualization layer can be inserted at various positions in a machine's software stack.
 Para-virtualization aims to reduce virtualization overhead and improve performance by modifying
only the guest OS kernel.
 Non-virtualizable instructions are replaced with hypercalls to communicate directly with the
hypervisor or VMM.
 A modified guest OS kernel for para-virtualization cannot run directly on hardware.
 Non-virtualizable instructions allow resource modifications without VMM oversight, which can be
dangerous or produce inconsistent results.
 Privileged operations are replaced with hypercalls, enabling the hypervisor to perform tasks on
behalf of the guest kernel.
 Unlike full virtualization, para-virtualization handles privileged and sensitive instructions at compile
time.
 The guest OS kernel is modified to replace privileged instructions with hypercalls to the hypervisor
or VMM.
 Xen uses a para-virtualization architecture, with the guest OS running at Ring 1 instead of Ring 0.
 Running at Ring 1 restricts the guest OS from executing some privileged instructions directly.
 Privileged instructions are implemented via hypercalls, enabling the modified guest OS to emulate
the original OS behavior.
 On UNIX systems, system calls involve interrupts or service routines, while in Xen, hypercalls use a
dedicated service routine.
Virtualization of CPU
o A VM duplicates an existing computer system, where most VM instructions execute directly
on the host processor in native mode.
o Unprivileged instructions run directly on the host machine for higher efficiency, while critical
instructions are carefully handled for correctness and stability.
o Critical instructions are categorized into:
 Privileged instructions: Execute in privileged mode and are trapped when executed
outside this mode.
 Control-sensitive instructions: Change the configuration of resources used.
 Behavior-sensitive instructions: Behave differently depending on resource
configurations, such as load/store operations on virtual memory.
o A CPU architecture is virtualizable if:
 VM’s privileged and unprivileged instructions run in CPU’s user mode.
 VMM runs in supervisor mode, trapping privileged instructions for correctness and
stability.
o Virtualizable CPU Architectures:
 RISC CPUs are naturally virtualized as all control- and behavior-sensitive instructions
are privileged.
 x86 CPUs are not inherently designed for virtualization due to 10 sensitive non-
privileged instructions (e.g., SGDT, SMSW) that cannot be trapped by VMM.
 Hardware-Assisted CPU Virtualization
o Simplifies virtualization compared to full or paravirtualization.
o Intel and AMD introduce an additional privilege mode (Ring -1) in x86 processors:
 Operating systems operate at Ring 0.
 Hypervisors operate at Ring -1, trapping all privileged and sensitive instructions
automatically.
o Eliminates the need for binary translation in full virtualization, enabling unmodified
operating systems to run in VMs.
o Efficiency:
 High efficiency but transitions between hypervisor and guest OS incur overhead.
 Hybrid approaches (e.g., VMware) offload some tasks to hardware while retaining
software-based tasks.
 Para-virtualization and hardware-assisted virtualization are combined for performance
improvements.
o Example: QEMU/KVM in Linux.
Memory Virtualization

 Virtual memory virtualization mirrors the virtual memory support in modern operating systems.
 Traditional environments use page tables for a one-stage mapping of virtual memory to machine
memory.
 Modern x86 CPUs optimize virtual memory with a memory management unit (MMU) and a
translation lookaside buffer (TLB).
 In virtual execution environments, physical system memory is shared and dynamically allocated to
virtual machines (VMs).
 Two-stage mapping is required:
o Virtual memory to physical memory (managed by the guest OS).
o Physical memory to machine memory (managed by the virtual machine monitor, or VMM).
 MMU virtualization allows the guest OS to map virtual addresses to VM physical memory
transparently while restricting direct access to machine memory.
 The VMM maps guest physical memory to machine memory using shadow page tables, which
correspond to guest OS page tables.
 Nested page tables introduce additional indirection:
o The OS maps virtual to physical memory.
o The hypervisor maps physical memory to machine addresses using another set of page tables.
 Maintaining shadow page tables for every process leads to high performance overhead and memory
costs.
 VMware uses shadow page tables for virtual-memory-to-machine-memory address translation.
 Processors leverage TLB hardware to map virtual memory directly to machine memory, bypassing
two levels of translation for better performance.
 The AMD Barcelona processor introduced hardware-assisted memory virtualization in 2007 using
nested paging technology to streamline two-stage address translation.
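A toy model of the two-stage mapping described above, with arbitrary page numbers: the guest OS performs the first translation and the VMM the second.

```python
# Toy model of two-stage address translation in a virtualized system.
# Page numbers are arbitrary; a real MMU/TLB does this in hardware.

guest_page_table = {0: 7, 1: 3}    # guest virtual page  -> guest "physical" page (guest OS)
vmm_page_table   = {7: 42, 3: 19}  # guest physical page -> machine page (VMM / nested paging)

PAGE_SIZE = 4096

def translate(virtual_addr):
    vpage, offset = divmod(virtual_addr, PAGE_SIZE)
    guest_phys_page = guest_page_table[vpage]      # stage 1: guest OS page table
    machine_page = vmm_page_table[guest_phys_page] # stage 2: VMM (shadow/nested) mapping
    return machine_page * PAGE_SIZE + offset

print(hex(translate(0x1230)))   # virtual page 1, offset 0x230 -> machine page 19
```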
I/O Virtualization

 I/O Virtualization
o Manages routing of I/O requests between virtual devices and shared physical hardware.
o Three primary approaches:
 Full Device Emulation:
 Emulates real-world devices and replicates functions like device enumeration,
identification, interrupts, and DMA in software.
 I/O requests of the guest OS are trapped in the Virtual Machine Monitor
(VMM) and handled via software emulation.
 Allows sharing of hardware devices among multiple VMs, but slower than
actual hardware.
 Para-Virtualization:
 Used in systems like Xen, also called the split driver model.
 Consists of a frontend driver (running in Domain U) and a backend driver
(running in Domain 0).
 Drivers interact via shared memory, with the backend driver managing real I/O
devices and multiplexing data for VMs.
 Provides better performance than full device emulation but has higher CPU
overhead.
 Direct I/O Virtualization:
 Allows VMs to access devices directly, achieving near-native performance
with low CPU cost.
 Focused mainly on networking for mainframes, with challenges in commodity
hardware due to workload migration and arbitrary device states.
 Hardware-assisted I/O virtualization, like Intel VT-d, supports remapping I/O
DMA transfers and device-generated interrupts.
Self-Virtualized I/O (SV-IO):
o Utilizes multicore processors to virtualize I/O devices.
o Provides virtual devices with APIs for VMs and VMM management.
o Defines Virtual Interfaces (VIFs) for each type of virtualized I/O device (e.g., network
interfaces, block devices, cameras).
o Each VIF:
 Contains unique IDs for identification.
 Includes two message queues for outgoing and incoming device messages.
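A small illustrative model (not actual SV-IO code) of a VIF as described above: a unique ID plus one outgoing and one incoming message queue.

```python
# Illustrative model of an SV-IO Virtual Interface (VIF): a unique ID plus
# two message queues, one for outgoing and one for incoming device messages.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class VirtualInterface:
    vif_id: int                         # unique identifier for this VIF
    device_type: str                    # e.g. "network", "block", "camera"
    outgoing: deque = field(default_factory=deque)   # guest -> device messages
    incoming: deque = field(default_factory=deque)   # device -> guest messages

    def send(self, msg):
        self.outgoing.append(msg)       # queued for the VMM/device side to consume

    def receive(self):
        return self.incoming.popleft() if self.incoming else None

nic_vif = VirtualInterface(vif_id=1, device_type="network")
nic_vif.send(b"ethernet frame bytes")
```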
Virtualization in Multicore processors
 Virtualization in Multicore Processors
o Virtualizing multi-core processors is more complex than uni-core processors.
o Multicore processors integrate multiple cores, enhancing performance but introducing new
challenges for computer architects, compiler constructors, system designers, and application
programmers.
o Key difficulties include:
 The need for parallelizing application programs to utilize all cores.
 The complexity of explicitly assigning tasks to cores.
o Addressing these challenges requires:
 New programming models, languages, and libraries to simplify parallel programming.
 Research on scheduling algorithms and resource management policies, which struggle
to balance performance, complexity, and emerging issues.
o Dynamic heterogeneity, mixing fat CPU cores with thin GPU cores on the same chip, adds
complexity in resource management due to unreliable transistors and increased hardware
complexity.
 Physical vs Virtual Processor Cores
o Wells et al. proposed a multicore virtualization method for abstracting low-level processor
core details.
o This technique reduces inefficiency in managing hardware resources by software.
o It operates below the ISA level, without modifications by the OS or VMM.
o Enables software-visible virtual CPUs (VCPUs) to move across cores and suspend execution
when no appropriate core is available.
 Virtual Hierarchy
o Many-core chip multiprocessors (CMPs) introduce space-sharing, where workloads are
assigned to groups of cores for long intervals.
o Marty and Hill suggested virtual hierarchies to overlay coherence and caching onto physical
processors.
o Unlike fixed physical hierarchies, virtual hierarchies adapt to workload requirements,
improving performance and isolation.
o Key features include:
 Faster data access by locating blocks near cores needing them.
 Isolation between workloads to minimize interference.
 Globally shared memory for dynamic resource repartitioning and minimal system
software changes.
o Applications include multiprogramming, server consolidation, and optimizing tiled
architectures.
 Operating System Virtualization
o OS virtualization inserts a layer within the OS to partition physical resources into multiple
isolated virtual machines (VMs) or containers.
o Containers share the same OS kernel but appear as independent servers to users, with their
own processes, file systems, and network settings.
o Benefits include:
 Efficient use of hardware and software in data centers.
 Creation of virtual hosting environments for resource allocation among users.
 Consolidation of server hardware by moving services into containers.
o Containers allow programs to operate with allocated resources as if they are standalone.
o Multiple containers can coexist, run programs parallelly, or interact within the same OS.
Xen Hypervisor

 Xen is an open-source hypervisor program developed by Cambridge University.


 Xen is a micro-kernel hypervisor, meaning it includes only the basic, unchanging functions like
physical memory management and processor scheduling.
 Device drivers and other changeable components are outside the hypervisor in a micro-kernel design.
 A monolithic hypervisor, in contrast, includes all functions, including device drivers, leading to a
larger code size.
 A hypervisor’s main role is to convert physical devices into virtual resources for deployed VMs.
 Xen does not include device drivers natively but allows a guest OS to access physical devices
directly, keeping the hypervisor’s size small.
 Xen provides a virtual environment between the hardware and the OS.
 The core components of a Xen system are the hypervisor, kernel, and applications.
 Xen supports multiple guest OSes, but one special guest OS, Domain 0, controls the others.
 Domain 0 is a privileged guest OS in Xen and is loaded first during boot without file system drivers.
 Domain 0 manages devices and allocates hardware resources for guest domains (Domain U).
 Xen’s hypervisor serves as a thin privileged abstraction layer between hardware and the OS.
 It defines virtual machines (VMs) that the guest domains see instead of physical hardware.
 Xen grants portions of physical resources to each guest and exports simplified devices to the guests.
 Xen modifies difficult-to-virtualize portions for x86 architecture.
 Example: Xen is based on Linux with a security level of C2. Its management VM, Domain 0, has the
privilege to manage other VMs.
 If Domain 0 is compromised, the hacker can control the entire system, emphasizing the need for
security policies.
 Domain 0, as a VMM, enables the creation, manipulation, and management of VMs like a file,
offering significant benefits but also security risks.

Everything as a Service (XaaS)


o Refers to a general category of services related to cloud computing and remote access.
o Encompasses a wide range of products, tools, and technologies delivered as a service over the
internet.
o Any IT function can be transformed into a service for enterprise consumption.
o Services are paid for in a flexible consumption model rather than upfront purchases or
licenses.
o XaaS emerged with cloud computing, which initially offered only cloud services but now
enables virtually anything to be a service.
 Xaas Examples:
o Hardware as a Service (HaaS) – Managed Service Providers (MSP) provide hardware on-
demand for clients.
o Communication as a Service (CaaS) – Cost-effective communication solutions like VoIP
and video conferencing.
o Desktop as a Service (DaaS) – Manages data storage, security, and backup for desktop apps.
o Security as a Service (SECaaS) – Internet-based security services like anti-virus software,
encryption, and authentication.
o Healthcare as a Service (HaaS) – Electronic medical records (EMR), IoT-based health
monitoring, and online consultations.
o Transport as a Service (TaaS) – Mobility and transport solutions like Uber, including future
technologies like flying taxis.
 Benefits of XaaS:
o Cost Saving – Reduces costs and simplifies IT deployments.
o Scalability – Easily handles growing demands by providing required resources or services.
o Accessibility – Improved access as long as there is an internet connection.
o Faster Implementation – Speeds up the implementation of various activities.
o Quick Modification – Provides rapid updates and modifications to services.
o Better Security – Enhanced security tailored to business requirements.
o Boosts Innovation – Frees up resources to focus on innovation.
o Flexibility – Utilizes cloud services and advanced approaches for flexibility.
 Disadvantages of XaaS:
o Internet Breakage – Service disruptions due to internet issues or service provider reliability.
o Slowdown – Performance issues when too many clients use the same resources
simultaneously.
o Difficult Troubleshooting – Hard to troubleshoot issues due to the variety of services and
technologies involved.
o Change Brings Problems – Discontinuation or changes by XaaS providers can impact users.

DBaaS
 DBaaS (Database as a Service):
o A cloud computing service that provides access to a database without the need for physical
hardware, software installation, or database configuration.
o Most database administration and maintenance tasks are handled by the service provider.
o Growing popularity as organizations shift from on-premises systems to cloud databases.
o Provided by cloud platforms and database makers that host their software on cloud
infrastructure.
o Available on public cloud platforms, with some vendors offering private or hybrid cloud
installations.
 DBaaS vs. On-Premises Databases:
o On-Premises Databases: Managed and run by an organization's IT staff, requiring in-house
database administrators (DBAs) for configuration, management, and maintenance.
o DBaaS: The provider handles database management tasks, including installation,
configuration, maintenance, upgrades, backups, patching, and performance management.
o DBAs focus on monitoring database usage, managing user access, and optimizing databases
for applications.
o DBaaS operates on a subscription model, typically with a pay-as-you-go structure or
discounted rates for reserved instances.
 DBaaS for SMBs:
o Ideal for small and medium-sized businesses (SMBs) that lack large IT departments.
o Offloading database service and maintenance to the provider allows SMBs to implement
applications and systems without on-premises infrastructure.
 Limitations of DBaaS:
o Data Security: May not be suitable for workloads with stringent regulatory or security
requirements due to reliance on the provider's infrastructure.
o Performance: Mission-critical applications may perform better with on-premises
implementations, but cloud adoption for larger organizations is increasing.
 DBaaS Adoption:
o In 2021, 49% of organizations used relational database services in the cloud, while 38% used
NoSQL database services.
 Advantages of DBaaS:
o Reduced Management Requirements: Many database administration tasks are outsourced
to the provider.
o Elimination of Physical Infrastructure: The DBaaS provider manages the IT infrastructure.
o Reduced IT Equipment Costs: No need for database servers or ongoing hardware upgrades.
o Additional Savings: Lower electrical, HVAC, and space costs, as well as possible IT staff
reductions.
o Scalability: Infrastructure can be elastically scaled up or down based on usage.
 Disadvantages of DBaaS:
o Lack of Control: Organizations have no direct access to servers and storage devices.
o Dependency on Internet and Provider: Database access is affected by internet outages or
provider system failures.
o Security Concerns: Organizations have limited control over the security of the
infrastructure, with some responsibilities falling on the organization and others on the vendor.
o Latency: Increased access times over the internet can lead to performance issues, especially
when handling large data loads.
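From the application's point of view, a DBaaS instance is used like any other database server. Below is a hedged sketch with psycopg2; the endpoint and credentials are hypothetical placeholders that a provider would supply.

```python
# Sketch: using a DBaaS instance like any other PostgreSQL server (assumes
# psycopg2; the hostname and credentials are hypothetical placeholders).
import psycopg2

conn = psycopg2.connect(
    host="mydb.example-dbaas-provider.com",  # managed endpoint, no local server to run
    port=5432,
    dbname="appdb",
    user="app_user",
    password="change-me",
    sslmode="require",                       # encrypted connection over the internet
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()
```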
What is Cloud Deployment
 The process of deploying an application through one or more hosting models (SaaS, PaaS, IaaS)
leveraging the cloud.
 Includes architecting, planning, implementing, and operating workloads on the cloud.
Factors for Successful Cloud Deployment
1. Security
o Determine the type of data to be placed in the cloud (e.g., sensitive data like financial or
medical records needs enhanced security).
o Compliance with security requirements, such as HIPAA, is crucial.
o Security varies depending on data sensitivity; highly sensitive data may need to reside in a
specific type of cloud.
2. Performance
o Consider the nature of applications being deployed (e.g., database-heavy applications vs.
office productivity suites).
o Cost-effectiveness may suggest running certain applications in-house rather than in the cloud.
o Conduct a pilot or expert assessment to gauge performance under real-world conditions.
3. Integration
o Fully virtualized applications are ideal candidates for cloud deployment.
o Integration of multiple applications across different clouds requires attention to APIs.
o APIs facilitate simple and inexpensive ways to connect services and data.
4. Legal Requirements
o Understand legal responsibilities when migrating sensitive information to third-party cloud
solutions (e.g., data breaches and liability).
o Compliance with laws like HIPAA, PCI DSS, and SOX may affect cloud deployment.
Potential Network Problems Cloud Providers Must Address
 Network Node Latency
o Use optimized networks to reduce latency.
 Transport Protocol Latency
o Mitigate TCP impact, reduce congestion, and minimize data loss.
 Number of Nodes Traversed
o Reduce latency by minimizing the number of hops between servers and end users.
 TCP Congestion
o Use larger windows in TCP to improve throughput during network congestion.
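As a rough illustration of the larger-window point, the sketch below enlarges a TCP socket's buffers, which raises the window the stack can advertise; real deployments usually tune this at the operating-system level, and the values here are illustrative.

```python
# Sketch: enlarging a TCP socket's buffers, which raises the window the stack
# can advertise and helps throughput on high-latency paths.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request 4 MB send/receive buffers instead of the defaults.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)

print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```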
Cloud Network Topologies
 Describes how users access cloud resources over the internet. Has 3 components:
o Front End (User Access Layer): Initiates connection to cloud services.
o Compute Layer: Includes servers, storage, load balancers, and security devices.
o Network Layer: Can be Layer 2 or Layer 3, with Layer 3 handling inter-cloud
communication.
Automation and Self-Service Features in Cloud
 Automates manual IT processes, enabling faster delivery of resources based on demand.
 Used in various stages of software development, such as code testing, network diagnostics, and
security.
Cloud Performance
 Measures how applications, workloads, and databases operate on the cloud.
 Performance is evaluated based on response time, network speed, and storage I/O.
Cloud Performance Metrics
 IOPS (I/O Operations per Second): Measures the rate at which the cloud platform reads and writes
data.
 Latency: The time taken for the cloud platform to respond to a request or complete an operation.
 Resource Availability: Ensures cloud instances are functioning as expected.
 Capacity: Determines the available storage needed for processing requests.
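A hedged sketch of measuring latency and IOPS with small synchronous writes; it uses a local temporary file as a stand-in for a mounted cloud block volume.

```python
# Sketch: measuring latency and IOPS for small writes. Run against a mounted
# cloud block volume to approximate storage performance; a local temp file
# stands in here.
import os, tempfile, time

N_OPS, BLOCK = 1000, 4096                    # 1,000 writes of 4 KiB each
buf = os.urandom(BLOCK)

with tempfile.NamedTemporaryFile(delete=True) as f:
    start = time.perf_counter()
    for _ in range(N_OPS):
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())                 # force the write to the device
    elapsed = time.perf_counter() - start

print(f"avg latency: {elapsed / N_OPS * 1000:.2f} ms")
print(f"IOPS: {N_OPS / elapsed:.0f}")
```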
Impact of Memory on Cloud Performance
 Memory usage in the cloud affects performance, especially with multi-tenancy and simultaneous user
tasks.
 Memory leakage (where unused memory is not returned to the OS) should be monitored to avoid
performance issues.
Improving Cloud Database Performance
 Cloud databases offer high accessibility, better replication, automation, and elasticity.
 Issues include security concerns, data privacy, multi-tenancy, and reliance on third-party providers.
Cloud Data Security
o Protects data and digital assets from security threats, human error, and insider threats.
o Ensures data confidentiality while maintaining accessibility for authorized users in cloud-
based environments.
o Safeguards data in storage (at rest) and during transmission (in motion) against security
threats, unauthorized access, theft, and corruption.
o Relies on physical security, technology tools, access management, controls, and
organizational policies.
 Why Companies Need Cloud Security
o Growing volumes of data need to be accessed, managed, and analyzed by organizations.
o Cloud services offer agility, faster market times, and support for remote or hybrid workforces.
o The traditional network perimeter is disappearing, requiring new approaches to secure cloud
data and manage access across environments.
Data Confidentiality and Encryption
o Data Confidentiality: Ensures only authorized people or processes can access or modify
data.
o Data Integrity: Prevents tampering, ensuring data remains accurate, authentic, and reliable.
o Data Availability: Ensures data is available and accessible to authorized users when needed.
o These principles (CIA triad) form the foundation of effective security infrastructure.
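A minimal sketch of protecting confidentiality and integrity before data leaves for the cloud, using the cryptography package's Fernet recipe; key handling is simplified and would normally involve a key-management service.

```python
# Sketch: protecting data confidentiality and integrity before it is stored in
# the cloud (assumes the `cryptography` package; key handling is simplified).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # symmetric key; store securely, never with the data
cipher = Fernet(key)

plaintext = b"patient-record: sensitive value"
token = cipher.encrypt(plaintext)    # safe to upload: unreadable without the key

# Decryption also verifies integrity; a tampered token raises InvalidToken.
assert cipher.decrypt(token) == plaintext
```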
Benefits of cloud data security
 Data confidentiality: Ensures that data can only be accessed or modified by authorized people or
processes, keeping the organization’s data private.
 Data integrity: Guarantees that data is accurate, authentic, and reliable by implementing policies to
prevent tampering or deletion.
 Data availability: Ensures that data remains accessible to authorized users and processes whenever
needed, maintaining continuous uptime and smooth operation of systems, networks, and devices.
Challenges of Cloud Data Security
o Lack of Visibility: Uncertainty about where data and applications reside.
o Less Control: Data and apps hosted on third-party infrastructure reduce control over access
and sharing.
o Confusion Over Shared Responsibility: Gaps in security coverage due to unclear roles
between companies and cloud providers.
o Inconsistent Coverage: Varying levels of protection across multi-cloud and hybrid
environments.
o Growing Cybersecurity Threats: Cloud data storage and databases are prime targets for
cybercriminals.
o Strict Compliance Requirements: Pressure to comply with data protection and privacy
regulations.
o Distributed Data Storage: Storing data on international servers raises data sovereignty
concerns.

Cloud Storage Gateways

o A cloud storage gateway is a hardware or software appliance that bridges local applications
and remote cloud-based storage.
o Provides basic protocol translation and connectivity for incompatible technologies to
communicate.
o Can be a hardware device or a virtual machine (VM) image.
o Necessary due to the incompatibility between cloud storage protocols (e.g., RESTful API
over HTTP) and legacy storage systems (e.g., SAN or NAS).
o When to Use:
 Not always required.
 Needed for migrating SaaS applications to cloud storage repositories.
o Typical Use Cases:
 Local S3 object storage provisioning for backup software like Veeam, Rubrik,
Commvault, etc.
 Data archiving in cost-effective public cloud storage.
 Medical record storage, retention, and archiving.
 Video surveillance data storage.
 Block-level storage for relational databases (e.g., MySQL, PostgreSQL, SAP HANA).
 Backup target storage provisioning (e.g., Azure, Amazon S3).
 Remote and Branch Office (ROBO) file storage, sharing, and collaboration.
Firewall
o A firewall is a security product that filters malicious traffic between trusted and untrusted
networks.
o Traditionally, firewalls were physical appliances placed between a private network and the
Internet.
o Firewalls block and allow traffic based on predefined rules, customizable by administrators.
 Cloud Firewall
o A security product filtering malicious network traffic, hosted in the cloud (Firewall-as-a-
Service or FWaaS).
o Runs in the cloud and is accessed via the Internet, updated and maintained by third-party
vendors.
o Protects cloud platforms, infrastructure, and applications, similar to traditional firewalls.
o Can also protect on-premise infrastructure.
o Benefits of Cloud Firewall:
 Blocks malicious web traffic (e.g., malware, bad bots).
 Prevents sensitive data from being sent out.
 Eliminates network choke points by avoiding hardware appliances.
 Easy integration with cloud infrastructure.
 Scalable to handle increasing traffic.
 No need for organizations to maintain updates; the vendor manages them.
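Cloud firewalls are vendor-specific products, but the rule-based filtering idea above can be illustrated with an AWS security group, which acts as a simple virtual firewall. The sketch below uses the Boto3 SDK; the security group ID is a placeholder, and credentials/region are assumed to be configured. It opens HTTPS to the world while everything else stays blocked by default.

```python
import boto3

ec2 = boto3.client("ec2")  # assumes AWS credentials and region are configured

# Allow inbound HTTPS (TCP 443) from anywhere; all other inbound traffic
# remains blocked because security groups deny by default.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }],
)
```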
SaaS and PaaS Host Security
o Cloud service providers do not disclose information about their host platforms, OS, or
security processes to avoid exploitation by hackers.
o Security Responsibility:
 SaaS and PaaS providers are responsible for securing the host platform.
o Virtualization:
 Cloud service providers use virtualization platforms like VMware or XEN for host
security.
o Abstraction Layers:
 In SaaS, the host abstraction layer is hidden from users, accessible only by developers
and operational staff.
 In PaaS, users access the abstraction layer indirectly via API, which interacts with the
host layer.
IaaS Host Security
o Customer Responsibility:
 IaaS customers are responsible for securing their hosts.
o Virtualization Software Security:
 Provides customers the ability to create and manage virtual instances.
o Customer Guest OS/Virtual Server Security:
 Customers manage virtualized guest operating systems (e.g., Linux, Windows) and
virtual servers.
 Public IaaS customers have full access to virtual servers, and cloud providers manage
the hypervisor layer.
o Virtual Server Security:
 Customers manage virtual machines and are responsible for securing them.
 IaaS platforms offer APIs for provisioning, decommissioning, and managing virtual
servers.
 Network access is restricted, with only necessary ports (e.g., port 22 for SSH)
typically open for remote access.
Draw and explain OpenStack cloud architecture in detail (write all components).
 OpenStack is a free, open-standard cloud computing platform primarily used as Infrastructure-as-a-Service (IaaS) in private and public clouds.
 Combines networking, storage, hardware, and control components to manage data center resources.
 Managed through command-line tools, RESTful web services, and a web-based dashboard.
History
 Launched in 2010 as a joint project by NASA and Rackspace Hosting.
 Governed by the OpenStack Foundation, established in September 2012 to promote the OpenStack
community and software.
 Supported by more than 500 companies.
Architecture and Key Components
 Nova (Compute)
o Facilitates provisioning compute instances, including bare-metal servers and virtual
machines.
o Written in Python and integrates external libraries like SQLAlchemy and Kombu.
o Supports horizontal scaling and performance monitoring.
 Neutron (Networking)
o Provides "network connectivity as a service."
o Manages virtual and physical networking infrastructure, including SDN technologies like
OpenFlow.
o Supports VPNs, firewalls, load balancing, and other advanced services.
 Cinder (Block Storage)
o Offers block storage services for VMs and containers.
o Features include persistent storage, snapshot management, and high availability.
 Keystone (Identity)
o Provides multi-tenant authentication, service discovery, and API client authentication.
o Supports integration with directory services like LDAP.
 Glance (Image)
o Manages virtual machine images and metadata.
o Includes a RESTful API for querying and retrieving images.
 Swift (Object Storage)
o Distributed, eventually consistent storage system for unstructured data.
o Designed for scalability, durability, and availability.
 Horizon (Dashboard)
o Provides a web-based UI for managing OpenStack services like Keystone and Nova.
o Includes stable API abstractions for consistent development.
 Heat (Orchestration)
o Automates application deployment using templates.
o Supports REST API and CloudFormation-compatible Query API.
 Mistral (Workflow)
o Manages workflows defined in YAML and triggered via REST API or events.
 Ceilometer (Telemetry)
o Telemetry service that collects, normalizes, and delivers measurement counters used for billing, benchmarking, and monitoring.
 Trove (Database)
o Database-as-a-Service for provisioning relational and non-relational databases.
 Sahara (Elastic Map-Reduce)
o Simplifies provisioning and scaling Hadoop clusters.
 Ironic (Bare Metal)
o Manages bare-metal machines with plugins for vendor-specific functionality.
 Zaqar (Messaging)
o Multi-tenant cloud messaging service with RESTful API for scalable communication.
 Designate (DNS)
o DNS-as-a-Service supporting backends like BIND and PowerDNS.
 Manila (Shared File System)
o Provides API for managing shared file systems in vendor-agnostic environments.
 Searchlight (Search)
o Advanced search capabilities integrated with ElasticSearch.
 Magnum (Container Orchestration)
o Offers container orchestration engines like Kubernetes and Docker Swarm as OpenStack
resources.
 Barbican (Key Manager)
o Secure storage, provisioning, and management of secrets through REST API.
 Vitrage (Root Cause Analysis)
o Analyzes events and alarms to identify root causes and reduce downtime.
 Aodh (Rule-Based Alarm Actions)
o Triggers tasks based on predefined rules against event or metric data from Ceilometer or
Gnocchi.
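A minimal sketch of how several of these services fit together from a client's point of view, using the official openstacksdk library (the cloud profile, image, flavor, and network names below are placeholders): Keystone authenticates the connection, Glance supplies the image, Neutron the network, and Nova boots the server.

```python
import openstack  # pip install openstacksdk

# Connect using a named cloud from clouds.yaml (Keystone handles authentication).
conn = openstack.connect(cloud="mycloud")          # placeholder cloud profile

image = conn.compute.find_image("ubuntu-22.04")    # image served by Glance
flavor = conn.compute.find_flavor("m1.small")      # Nova flavor
network = conn.network.find_network("private")     # network managed by Neutron

# Ask Nova to boot a VM on the chosen image/flavor/network.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # expected to reach ACTIVE once provisioning completes
```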
Explain the role of a CSB. What is the difference between a cloud service provider and a cloud service broker?
A Cloud Service Broker acts as an intermediary between cloud service providers (CSPs) and cloud service
consumers. The CSB enables organizations to choose, customize, integrate, and manage cloud services
effectively. The primary roles of a CSB include:
1. Aggregation: Combining multiple cloud services from different providers into a unified solution for
the consumer.
2. Integration: Ensuring that services work seamlessly together across different environments, such as
on-premises and cloud-based systems.
3. Customization: Tailoring services to meet specific organizational needs.
4. Governance and Compliance: Managing access, usage, and compliance with regulatory
requirements.
5. Cost Optimization: Offering insights into usage patterns to optimize costs and manage budgets.
6. Support and Maintenance: Acting as a single point of contact for troubleshooting and service
management.
Differences between a Cloud Service Provider (CSP) and a Cloud Service Broker (CSB):
 Definition
o CSP: An entity offering cloud services (e.g., storage, compute, networking) directly to users.
o CSB: An intermediary that helps consumers manage, integrate, and customize cloud services from multiple providers.
 Primary Role
o CSP: Delivers cloud infrastructure, platform, or software services directly to customers.
o CSB: Facilitates the selection, integration, and management of services across multiple providers.
 Service Examples
o CSP: AWS, Google Cloud, Microsoft Azure, IBM Cloud.
o CSB: RightScale, CloudBolt, Jamcracker.
 Target Audience
o CSP: End-users and organizations seeking cloud solutions.
o CSB: Organizations requiring multi-cloud strategies or custom cloud solutions.
 Key Focus
o CSP: Providing core cloud services and infrastructure.
o CSB: Ensuring interoperability, cost efficiency, and simplified management.
 Customization
o CSP: Limited to the offerings of the specific provider.
o CSB: Provides customized solutions by combining services from various providers.
 Governance
o CSP: Limited to provider-specific policies and guidelines.
o CSB: Ensures governance, compliance, and security across multiple platforms.
 Cost Management
o CSP: Offers tools for managing costs within its own platform.
o CSB: Offers consolidated cost optimization across multiple platforms.
Explain GFS (Google File System) in detail.
 Introduction
o The Google File System (GFS) is a scalable distributed file system developed by Google Inc.
o Designed to handle large-scale data processing, offering fault tolerance, dependability,
scalability, availability, and performance.
o Constructed from inexpensive commodity hardware to meet Google's storage and data use
needs.
 Key Features
o Fault tolerance despite frequent failures of inexpensive commodity hardware.
o Manages two data types: file metadata and file data.
o Files are split into large (64 MB) chunks, each replicated at least three times for fault tolerance.
o Supports hierarchical directories with path names.
o Includes a single master node and several chunk servers.
 Components
o GFS Clients: Applications or programs that request files for reading, writing, or
modification.
o GFS Master Server: Coordinates the cluster, maintains the operation log, and manages
metadata.
o GFS Chunk Servers: Store 64 MB-sized file chunks and send them directly to clients.
Replicate chunks to ensure stability (default is three copies).
 Features
o Namespace management and locking.
o High availability with automatic data recovery.
o Fault tolerance with critical data replication.
o Reduced interaction between clients and master due to large chunk sizes.
o High aggregate throughput for concurrent operations.
 Advantages
o High accessibility through replication, ensuring data availability even with node failures.
o Reliable storage with error detection and duplication of corrupted data.
o High throughput due to concurrent operation of multiple nodes.
 Disadvantages
o Not optimized for small files.
o Master server can become a bottleneck.
o Lacks support for random writing.
o Suitable primarily for write-once, read-later (appended) data.
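As an illustration of how the components above cooperate on a read, the sketch below is purely conceptual pseudocode in Python (the master and chunkserver interfaces are hypothetical, not the real GFS API): the client maps a byte offset to a chunk index, asks the master for the chunk handle and replica locations, and then fetches the data directly from a chunkserver.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks, as in GFS

def gfs_read(master, filename, offset, length):
    """Conceptual GFS read path (hypothetical interfaces, for illustration only)."""
    # 1. The client computes which chunk the byte offset falls into.
    chunk_index = offset // CHUNK_SIZE

    # 2. The master returns the chunk handle and the chunkservers holding replicas;
    #    only this small metadata exchange involves the master.
    handle, replicas = master.lookup(filename, chunk_index)

    # 3. The client reads file data directly from one replica (often the closest),
    #    so bulk data never flows through the master.
    chunkserver = replicas[0]
    return chunkserver.read(handle, offset % CHUNK_SIZE, length)
```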
List the guidelines that SMBs must follow to get the most out of their cloud.
Here are guidelines small and medium-sized businesses (SMBs) should follow to maximize the benefits of
their cloud investments:
 Define Clear Objectives
o Identify specific business goals and challenges that cloud solutions will address.
o Ensure alignment with long-term business strategies.
 Choose the Right Cloud Model
o Evaluate public, private, or hybrid cloud options based on cost, security, and scalability
needs.
o Select cloud providers that align with your industry requirements and business size.
 Ensure Data Security and Compliance
o Implement robust data encryption and access controls.
o Verify the cloud provider adheres to industry compliance standards (e.g., GDPR, HIPAA).
 Optimize Costs
o Use tools to monitor and manage cloud resource usage to avoid unnecessary expenses.
o Take advantage of pricing models such as pay-as-you-go or reserved instances for predictable
workloads.
 Focus on Scalability and Flexibility
o Adopt cloud solutions that can scale with your business growth.
o Leverage cloud-native applications and services for agility.
 Train Employees
o Conduct training sessions to familiarize employees with cloud tools and workflows.
o Promote awareness of best practices for using cloud solutions securely and efficiently.
 Backup and Disaster Recovery
o Set up regular automated backups to prevent data loss.
o Design a disaster recovery plan to ensure business continuity.
 Monitor and Optimize Performance
o Use performance monitoring tools to analyze and improve cloud application efficiency.
o Continuously evaluate and update cloud configurations to meet evolving needs.
 Adopt Automation
o Automate repetitive tasks like resource provisioning, scaling, and backups to save time and
reduce errors.
o Explore Infrastructure as Code (IaC) for efficient resource management.
 Establish Strong Vendor Relationships
o Work closely with cloud providers for better support and customized solutions.
o Regularly review Service Level Agreements (SLAs) to ensure accountability.
By adhering to these guidelines, SMBs can achieve better cost efficiency, security, and scalability,
maximizing their return on cloud investments.
Explain the programming structure of Amazon EC2.
The programming structure of Amazon EC2 (Elastic Compute Cloud) revolves around providing scalable,
on-demand computing capacity in the cloud. Developers can launch, manage, and terminate virtual server
instances programmatically through APIs, SDKs, or the AWS Management Console. Below is an
explanation of its programming structure:
 Instance Lifecycle
o Instances represent virtual servers that can be launched and terminated based on demand.
o Developers can define the instance type, size, operating system, and configuration during
initialization.
 Programming Interfaces
o AWS Management Console: A graphical user interface for manual instance management.
o AWS CLI (Command Line Interface): Allows programmatic control over EC2 instances
through command-line scripts.
o AWS SDKs: Software development kits available for various programming languages like
Python (Boto3), Java, Node.js, and C# to integrate EC2 functionality into applications.
o Amazon EC2 APIs: RESTful APIs enable direct programmatic interaction with EC2
resources for launching, managing, or monitoring instances.
 Key Components
o Elastic Load Balancer (ELB): Distributes traffic among instances to ensure availability and
fault tolerance.
o Auto Scaling: Automatically adjusts the number of running instances based on traffic or
performance metrics.
o Security Groups: Acts as a virtual firewall to control inbound and outbound traffic to
instances.
o Elastic Block Store (EBS): Persistent storage volumes attached to instances for data
retention.
o Key Pairs: Used for secure access to instances via SSH or RDP.
o Amazon Machine Images (AMIs): Pre-configured templates that define the operating
system, application server, and software for instances.
 Instance Management Operations
o Launch: Specify the AMI, instance type, and key pair to start a new instance.
o Start/Stop: Start or stop running instances to optimize costs.
o Monitor: Use CloudWatch to track performance metrics like CPU utilization, memory usage,
and disk I/O.
o Terminate: Permanently delete an instance when no longer required.
 Programming Workflow
1. Configuration: Define instance parameters such as AMI, instance type, storage, and security
settings.
2. Launching Instances: Use APIs or SDKs to launch instances programmatically, specifying
configurations and optional user data scripts for automation.
3. Instance Management: Manage instances by adjusting resources, attaching EBS volumes, or
monitoring via CloudWatch.
4. Scaling: Use Auto Scaling groups to maintain desired performance levels automatically.
5. Termination: Programmatically terminate instances to free up resources and control costs.
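A minimal sketch of the launch/monitor/terminate workflow above using the Boto3 SDK for Python (the AMI ID, key pair, and security group are placeholders; AWS credentials and region are assumed to be configured).

```python
import boto3

ec2 = boto3.resource("ec2")

# 1. Launch: specify the AMI, instance type, key pair, and security group.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)
instance = instances[0]

# 2. Manage/monitor: wait until the instance is running, then inspect it.
instance.wait_until_running()
instance.reload()
print(instance.id, instance.state["Name"], instance.public_ip_address)

# 3. Terminate: release the resources when no longer needed.
instance.terminate()
instance.wait_until_terminated()
```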
Write a short note on cloud performance monitoring and tuning.
Cloud performance monitoring and tuning is the process of ensuring that cloud-based systems operate
efficiently, reliably, and at optimal performance levels. It involves the continuous observation and
adjustment of cloud resources and applications to meet desired performance goals.
Key Aspects:
 Performance Monitoring: Involves tracking metrics such as CPU usage, memory utilization,
network latency, storage I/O, and response times. Tools like AWS CloudWatch, Microsoft Azure
Monitor, and Google Cloud Operations Suite are commonly used.
 Tuning Techniques: Adjustments include optimizing resource allocation (e.g., scaling up/down
instances), database query optimization, load balancing, and caching frequently accessed data.
 Benefits: Ensures minimal downtime, cost efficiency, improved user experience, and the ability to
handle varying workloads.
 Challenges: Complex configurations, varying performance baselines, and identifying bottlenecks in
distributed systems.
Effective cloud performance monitoring and tuning help organizations maintain service quality and adapt to
dynamic workloads while optimizing costs.
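As a concrete, purely illustrative example of the monitoring side, the sketch below pulls average CPU utilization for one EC2 instance from Amazon CloudWatch via Boto3 (the instance ID is a placeholder); tuning decisions such as scaling or resizing would normally be driven by metrics like these.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Average CPU utilization over the last hour, in 5-minute buckets.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```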
Explain the potential network problems and their mitigation during cloud deployment.
Potential network problems during cloud deployment and their mitigation include:
 Latency Issues
o Problem: Delays in data transmission between the client and the cloud due to physical
distance, congestion, or routing inefficiencies.
o Mitigation:
 Use Content Delivery Networks (CDNs) to cache data closer to users.
 Optimize network routes with advanced routing protocols.
 Deploy applications and services in geographically distributed data centers.
 Bandwidth Limitations
o Problem: Insufficient bandwidth causing slow data transfer rates and degraded performance.
o Mitigation:
 Assess and provision adequate bandwidth requirements in advance.
 Implement traffic prioritization and Quality of Service (QoS) policies.
 Use scalable bandwidth solutions like dynamic bandwidth allocation.
 Network Congestion
o Problem: High traffic volume leading to packet loss and reduced throughput.
o Mitigation:
 Implement load balancers to distribute traffic evenly.
 Use traffic shaping and rate-limiting mechanisms to manage heavy loads.
 Monitor and upgrade network capacity based on traffic patterns.
 Security Threats
o Problem: Vulnerabilities like Distributed Denial of Service (DDoS) attacks, data breaches, or
unauthorized access.
o Mitigation:
 Use firewalls, intrusion detection systems (IDS), and intrusion prevention systems
(IPS).
 Employ encryption for data in transit and at rest.
 Use Virtual Private Networks (VPNs) and secure access mechanisms like multi-factor
authentication.
 Packet Loss and Jitter
o Problem: Data packets may be dropped or arrive at irregular intervals, affecting performance.
o Mitigation:
 Optimize network configurations with redundant paths and fault-tolerant designs.
 Use protocols like TCP retransmission to recover lost packets.
 Deploy tools for real-time monitoring and correction of jitter issues.
 DNS Failures
o Problem: Domain Name System (DNS) issues can lead to service disruptions or unreachable
resources.
o Mitigation:
 Use redundant and distributed DNS servers.
 Implement DNS failover strategies to switch to backup systems.
 Regularly monitor and update DNS configurations.
 Cross-Region Data Transfer Challenges
o Problem: Increased latency and costs when data is transferred across regions.
o Mitigation:
 Optimize data transfer by using region-specific resources.
 Compress data and minimize unnecessary transfers.
 Use reserved or dedicated cloud network connections for consistent performance.
 Network Configuration Errors
o Problem: Misconfigured network settings causing connectivity issues or exposure to risks.
o Mitigation:
 Automate network configuration using Infrastructure as Code (IaC) tools.
 Conduct thorough testing and validation of configurations.
 Maintain up-to-date documentation and standard operating procedures.
Continuous monitoring, proactive network management, and regular audits are critical to mitigating these
potential problems effectively.
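As a small illustration of latency-aware deployment (the first mitigation above), the sketch below measures TCP connect round-trip time to a few candidate endpoints using only the Python standard library and picks the fastest; the hostnames are placeholders.

```python
import socket
import time

CANDIDATE_ENDPOINTS = [            # placeholder region endpoints
    ("eu-west.example.com", 443),
    ("us-east.example.com", 443),
    ("ap-south.example.com", 443),
]

def connect_rtt(host, port, timeout=2.0):
    """Return the TCP connect time in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

measurements = {host: connect_rtt(host, port) for host, port in CANDIDATE_ENDPOINTS}
reachable = {h: rtt for h, rtt in measurements.items() if rtt is not None}
if reachable:
    best = min(reachable, key=reachable.get)
    print(f"Lowest-latency endpoint: {best} ({reachable[best]:.1f} ms)")
```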
Write a short note on parallelization and leveraging in-memory operations within cloud applications.
 Parallelization in Cloud Applications
o Enables simultaneous execution of multiple tasks or processes, improving performance and
efficiency.
o Divides large tasks into smaller sub-tasks, distributing them across multiple compute
resources.
o Utilizes multi-core processors and distributed computing architectures to enhance scalability.
o Reduces execution time for data-intensive and computationally heavy operations.
 Leveraging In-Memory Operations
o In-memory operations store and process data in RAM rather than on slower storage mediums
like disks.
o Improves data access speed and reduces latency, enhancing application performance.
o Ideal for real-time analytics, caching, and high-frequency data processing.
o Frequently employed in conjunction with parallelization for maximum efficiency.
 Benefits in Cloud Applications
o Increases throughput and reduces latency, particularly in real-time applications.
o Enhances scalability to handle large datasets and complex computations.
o Supports fault tolerance and reliability through distributed in-memory systems.
o Facilitates efficient resource utilization and cost-effective performance.
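A minimal sketch combining both ideas above: CPU-bound work is parallelized across worker processes with the standard multiprocessing module, while an in-memory cache (functools.lru_cache) avoids recomputing results that are requested repeatedly. The numbers and workload are purely illustrative.

```python
from functools import lru_cache
from multiprocessing import Pool

@lru_cache(maxsize=None)
def expensive(n: int) -> int:
    """Stand-in for a heavy computation; cached in memory after the first call."""
    return sum(i * i for i in range(n))

def process_chunk(chunk):
    # Each worker handles its own sub-task; repeated values hit the in-process cache.
    return [expensive(n) for n in chunk]

if __name__ == "__main__":
    data = [10_000, 20_000, 10_000, 30_000] * 4
    chunks = [data[i::4] for i in range(4)]   # split the work into 4 sub-tasks
    with Pool(processes=4) as pool:           # run the sub-tasks in parallel
        results = pool.map(process_chunk, chunks)
    print(sum(len(r) for r in results), "results computed")
```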
Explain the characteristics of Amazon SimpleDB.
 Scalability
o Amazon SimpleDB is designed to handle massive amounts of structured data and
automatically scales to meet growing application demands.
 Flexibility
o Supports flexible and schema-less data organization, allowing developers to store and query
structured data without predefined schemas.
 Simple Data Model
o Data is stored in domains, organized into items, and further broken into attributes, enabling
simple data storage and retrieval.
 Availability and Reliability
o High availability is ensured through automatic data replication across multiple servers in
different locations.
 Querying Capabilities
o Provides efficient and straightforward query processing with support for simple, condition-
based queries using Select statements.
 No Server Management
o Fully managed service, removing the need for developers to manage servers, software
updates, or scaling infrastructure.
 Elasticity
o Offers automatic scaling of resources based on the volume of data and query demands.
 Integration
o Seamlessly integrates with other AWS services like Amazon EC2, Amazon S3, and AWS
SDKs for application development.
 Pay-as-You-Go Model
o Cost-effective pricing model, where users pay only for the resources they use, including data
storage, data transfer, and query operations.
 Durability
o Ensures data durability through redundant storage and error detection mechanisms.
 Security
o Provides built-in access control and encryption mechanisms to safeguard data.
 Low Latency
o Optimized for high performance, delivering low-latency responses for data storage and
retrieval.
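A sketch of the domain/item/attribute model and a condition-based Select query, assuming the AWS SDK in use still exposes the legacy SimpleDB ("sdb") API (SimpleDB is a legacy service, so treat the calls as illustrative); the domain, item names, and region are placeholders.

```python
import boto3

# Assumption: the SDK exposes a SimpleDB ("sdb") client; verify for your SDK version.
sdb = boto3.client("sdb", region_name="us-east-1")

sdb.create_domain(DomainName="products")            # a domain is roughly like a table

# Items are schema-less collections of name/value attributes.
sdb.put_attributes(
    DomainName="products",
    ItemName="item-001",                             # placeholder item name
    Attributes=[
        {"Name": "category", "Value": "book", "Replace": True},
        {"Name": "rating", "Value": "4", "Replace": True},
    ],
)

# Simple condition-based query using a Select expression.
resp = sdb.select(SelectExpression="select * from `products` where rating > '3'")
for item in resp.get("Items", []):
    print(item["Name"], item["Attributes"])
```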
Explain the tasks performed by Google App Engine.
Google App Engine performs several tasks to support the development and deployment of web applications.
Here are the key tasks it handles:
 Automatic Scaling: App Engine automatically scales your application based on the incoming traffic,
scaling up or down without the need for manual intervention.
 App Hosting: It provides a platform for hosting web applications, making them accessible over the
internet with built-in security and management features.
 Load Balancing: App Engine distributes incoming requests to multiple instances of the application
to ensure optimal performance and reliability.
 Traffic Management: It allows you to route traffic to different versions of your application, making
it easy to deploy updates and maintain different environments.
 Database Integration: App Engine supports easy integration with Google Cloud databases like
Firestore and Cloud SQL, allowing for seamless data management.
 Monitoring and Logging: It integrates with Google Cloud's monitoring and logging tools to track
application performance, errors, and resource usage.
 Security and Authentication: App Engine provides security features like built-in identity and access
management (IAM), SSL certificates, and integration with Google Cloud Identity for authentication.
 API Management: It facilitates the creation, deployment, and management of APIs, integrating with
Google Cloud Endpoints for efficient API management.
 Version Control: You can deploy different versions of your application, roll back to previous
versions, and manage them easily through the App Engine interface.
 Zero Server Management: Developers focus on writing code without worrying about the
underlying infrastructure, as App Engine manages the servers automatically.
 Task Queue Management: App Engine supports background task management by allowing
developers to offload long-running or resource-heavy tasks to task queues.
 Billing and Cost Management: It provides tools to track and manage usage, helping to optimize
costs for the application based on the resources consumed.
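For context, here is a minimal, illustrative App Engine-style deployment (not an official template): a tiny Flask app that App Engine's standard Python runtime can serve, with a possible app.yaml shown only as a comment, since the platform reads its configuration from that file.

```python
# main.py -- minimal web app suitable for App Engine's standard Python runtime.
#
# A matching app.yaml (read by App Engine; shown here only as a comment) might be:
#   runtime: python312        # assumed runtime name; check the current docs
#   instance_class: F1
#
# Deployment is typically done with:  gcloud app deploy
from flask import Flask  # pip install flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine handles scaling, load balancing, and HTTPS in front of this code.
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine runs the app via its own server.
    app.run(host="127.0.0.1", port=8080, debug=True)
```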
Explain the phases during the migration to cloud.
Cloud Migration
 Definition:
Cloud migration is the transformation of traditional business operations into digital operations by
moving data, applications, or other business elements to a cloud computing environment.
o Example: Migrating data and applications from a local, on-premises data center to the cloud.
On-Premises to Cloud Migration Process
 Pre-migration considerations:
o Evaluate requirements and performance.
o Select a suitable cloud provider.
o Calculate operational costs.
 Basic steps:
1. Establish migration goals.
2. Create a security strategy.
3. Replicate the existing database.
4. Move business intelligence.
5. Switch production from on-premises to the cloud.
Cloud Migration Strategy: The 5 R's
1. Rehost:
o Move applications to the cloud using IaaS (Infrastructure as a Service).
2. Refactor:
o Reuse application code and frameworks to run on PaaS (Platform as a Service).
3. Revise:
o Expand the code base and deploy through Rehosting or Refactoring.
4. Rebuild:
o Redesign the application from scratch on a PaaS provider’s platform.
5. Replace:
o Substitute the old application with a new SaaS (Software as a Service) solution.
Describe performance evaluation functions and features of cloud platforms.
Performance Evaluation Functions of Cloud Platforms:
 Scalability Assessment:
Measures the platform's ability to handle increased workloads by scaling resources up or down
dynamically.
 Resource Utilization Analysis:
Evaluates how effectively the platform uses CPU, memory, and storage to minimize waste and
optimize performance.
 Latency Measurement:
Analyzes the time taken to process requests, ensuring low latency for real-time or critical
applications.
 Throughput Evaluation:
Determines the number of tasks or transactions a platform can handle within a specific time frame.
 Availability Testing:
Measures system uptime and reliability to ensure high availability and fault tolerance.
 Energy Efficiency Metrics:
Evaluates energy consumption relative to workload to promote sustainable and cost-effective
operations.
 Load Balancing Efficiency:
Tests how effectively the platform distributes workloads across multiple resources to avoid
bottlenecks.
 Elasticity Testing:
Assesses the ability to allocate and deallocate resources dynamically in response to workload
changes.
 Failure Recovery Time:
Evaluates the time required to recover from hardware or software failures to minimize downtime.
 Cost-Performance Ratio Analysis:
Balances performance outcomes against operational and maintenance costs.
Features of Cloud Platforms:
 On-Demand Self-Service:
Allows users to provision resources as needed without human intervention.
 Broad Network Access:
Ensures access to services via standard internet devices like smartphones, laptops, or desktops.
 Resource Pooling:
Provides shared resources among multiple users with location independence.
 Rapid Elasticity:
Offers the ability to scale resources up or down automatically based on demand.
 Measured Service:
Implements a metering system to track and optimize resource usage and billing.
 Multi-Tenancy Support:
Enables multiple users or clients to share the same physical infrastructure securely.
 Security Features:
Includes encryption, authentication, access control, and compliance certifications.
 Global Accessibility:
Provides services worldwide through a network of distributed data centers.
 Integrated Development Tools:
Offers APIs, SDKs, and other tools for easy application development and deployment.
 Interoperability and Portability:
Ensures compatibility across different platforms and easy migration of data and applications.
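To tie the latency and throughput evaluation functions above to something concrete, here is a small illustrative load-test sketch using only the Python standard library: it fires concurrent HTTP requests at a placeholder endpoint and reports throughput plus average and 95th-percentile latency.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "https://example.com/health"   # placeholder service endpoint
REQUESTS = 50
CONCURRENCY = 10

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_request, range(REQUESTS)))
elapsed = time.perf_counter() - start

latencies.sort()
p95 = latencies[int(0.95 * (len(latencies) - 1))]
print(f"Throughput: {REQUESTS / elapsed:.1f} req/s")
print(f"Avg latency: {statistics.mean(latencies) * 1000:.1f} ms, p95: {p95 * 1000:.1f} ms")
```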
Discuss the difficulties faced by SMBs in the growth of their business.
 Limited Budget and Resources
o Difficulty affording advanced cloud solutions or scaling services.
o High costs associated with data migration, subscriptions, and ongoing maintenance.
 Lack of Technical Expertise
o Insufficient in-house knowledge to deploy and manage cloud infrastructure.
o Dependence on third-party vendors increases costs and risks.
 Data Security and Privacy Concerns
o Fear of data breaches and non-compliance with regulations like GDPR or HIPAA.
o Hesitation to trust third-party cloud providers with sensitive business data.
 Integration Challenges
o Difficulty integrating cloud services with existing legacy systems.
o Compatibility issues with other business tools and software.
 Downtime and Reliability Issues
o Dependence on consistent internet connectivity for uninterrupted cloud access.
o Concerns about service outages impacting critical operations.
 Vendor Lock-in
o Fear of being tied to a single provider, limiting flexibility and bargaining power.
o Challenges in migrating data to a new provider or system.
 Scalability and Predictability
o Uncertainty about future growth, leading to under- or over-investment in cloud resources.
o Difficulty predicting costs due to pay-as-you-go pricing models.
 Lack of Awareness or Understanding
o Misconceptions about cloud computing benefits and costs.
o Resistance to change due to fear of disrupting established workflows.
 Customization Limitations
o Standardized cloud services may not cater to specific SMB needs.
o Difficulty finding tailored solutions without incurring high development costs.
 Compliance and Legal Issues
o Complexity in understanding and adhering to international and industry-specific regulations.
o Risk of non-compliance due to lack of specialized compliance tools.
Addressing these challenges involves a combination of strategic planning, selecting the right cloud partner,
and ensuring adequate training and support for SMBs.