Unit 1 Cloud Computing
Parallel computing and distributed computing are both paradigms used to process large or
complex computations by dividing tasks among multiple processors or systems. However, they
differ in architecture, goals, and the way resources are utilized. Here's a detailed comparison:
1. Definition
● Parallel Computing: Multiple processors or cores within a single system work on parts of the same task simultaneously, typically sharing memory.
● Distributed Computing: Multiple independent computers (nodes) connected over a network work together on a task, each with its own memory, coordinating by exchanging messages.
2. Architecture
● Parallel Computing:
○ Tightly coupled: processors share memory (or a high-speed interconnect) within a single machine.
● Distributed Computing:
○ Loosely coupled: each node has its own memory and operating system and is connected to the others over a network.
3. Communication
● Parallel Computing:
○ Processors communicate through shared memory or high-speed interconnects within the same machine, so communication is fast with very low latency.
● Distributed Computing:
○ Nodes communicate through network protocols like TCP/IP, often over the Internet or a local network.
○ Relatively slower communication with potential latency issues.
4. Scalability
● Parallel Computing:
○ Limited by the number of processors or cores available in a single machine.
● Distributed Computing:
○ Scales horizontally by adding more machines (nodes) to the system.
5. Fault Tolerance
● Parallel Computing:
○ Low: a failure of the single machine typically stops the entire computation.
● Distributed Computing:
○ Higher: systems can be designed to tolerate individual node failures through replication and task reassignment.
6. Application Examples
● Parallel Computing:
○ Scientific simulations, image and video processing, GPU-accelerated machine learning on a single machine.
● Distributed Computing:
○ Web services, Hadoop/Spark clusters for big data, content delivery networks, blockchain networks.
7. Programming Models
● Parallel Computing:
○ Uses frameworks like OpenMP, MPI (in hybrid settings), CUDA for GPUs.
○ Focus on dividing tasks into smaller subtasks running simultaneously on cores.
● Distributed Computing:
○ Uses frameworks such as Apache Hadoop (MapReduce) and Apache Spark; tasks run on separate machines that coordinate over the network through message passing.
Summary Table
| Feature | Parallel Computing | Distributed Computing |
|---|---|---|
| System | Single machine with multiple processors/cores | Multiple independent machines (nodes) |
| Memory | Shared memory | Each node has its own memory |
| Communication | Shared memory / high-speed interconnect | Network (e.g., TCP/IP), message passing |
| Scalability | Limited by one machine's hardware | Scales by adding nodes |
| Fault tolerance | Low (single point of failure) | Higher (can tolerate node failures) |
| Typical tools | OpenMP, MPI, CUDA | Hadoop, Spark, distributed services |
While they serve different purposes, modern systems often combine both approaches—for
example, using parallel computing on each node of a distributed system for greater efficiency.
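To make the parallel half of that picture concrete, here is a minimal Python sketch (standard library only) that splits one computation into subtasks running simultaneously on CPU cores; the data size and worker count are arbitrary choices for illustration.

```python
# Minimal sketch: parallel summation of squares across CPU cores.
from multiprocessing import Pool

def sum_of_squares(chunk):
    """Compute a partial result for one chunk of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the task into smaller subtasks, one per worker process.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partials = pool.map(sum_of_squares, chunks)  # run subtasks in parallel
    print(sum(partials))
```

In a distributed setting, each node would run a worker like this over its own share of the data, and the partial results would then be combined over the network.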
Characteristics of Cloud Computing
Cloud computing is a technology paradigm that allows on-demand access to shared computing
resources, such as servers, storage, applications, and services, over the Internet. Its
characteristics make it versatile, scalable, and efficient for various use cases. Below are the key
characteristics of cloud computing:
1. On-Demand Self-Service
● Users can provision computing resources (e.g., servers, storage, and applications)
automatically without requiring human intervention from the service provider.
● Resources are available as needed, reducing delays in provisioning.
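As a hedged illustration (assuming the AWS SDK for Python, boto3, with configured credentials; the AMI ID below is a placeholder), a user can provision a virtual machine on demand with a single API call, with no human involvement on the provider side:

```python
# Sketch only: provisioning a VM on demand via an API call.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```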
2. Broad Network Access
● Resources and services are accessible over the Internet or private networks from a wide range of devices, including laptops, smartphones, tablets, and desktop computers.
● Supports diverse platforms and devices through standard interfaces.
3. Resource Pooling
● Cloud providers use a multi-tenant model to pool resources like storage, processing
power, and bandwidth to serve multiple users.
● Resources are dynamically allocated and reassigned based on demand, ensuring
efficient utilization.
● Users are abstracted from the physical location of the resources (though they might
specify a region or datacenter for compliance reasons).
5. Pay-As-You-Go Pricing
● Users pay only for the resources and services they use, often measured in terms of hours, storage capacity, or data transferred.
● Reduces upfront capital expenses, making it cost-effective for organizations.
6. Measured Service
● Cloud systems automatically monitor and optimize resource usage through metering
capabilities.
● Provides transparency for both providers and users, enabling monitoring, control, and
accurate billing.
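The pay-as-you-go and measured-service ideas reduce to simple metered arithmetic. A small illustrative calculation (the unit prices below are made-up example rates, not any provider's actual pricing):

```python
# Illustrative only: computing a pay-as-you-go bill from metered usage.
compute_hours = 730          # metered VM hours in a month
storage_gb_months = 200      # metered storage
data_transfer_gb = 50        # metered egress

PRICE_PER_HOUR = 0.0104      # example rate (USD)
PRICE_PER_GB_MONTH = 0.023   # example rate (USD)
PRICE_PER_GB_EGRESS = 0.09   # example rate (USD)

bill = (compute_hours * PRICE_PER_HOUR
        + storage_gb_months * PRICE_PER_GB_MONTH
        + data_transfer_gb * PRICE_PER_GB_EGRESS)
print(f"Monthly charge: ${bill:.2f}")
```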
8. Security
● Cloud providers implement security measures such as encryption, identity and access management, and compliance certifications to protect data and workloads.
9. Multi-Tenancy
● Multiple users (tenants) share the same physical infrastructure while maintaining data
isolation and privacy.
● Facilitates cost-sharing while ensuring each user’s resources are logically separated.
12. Global Accessibility
● Users can access cloud services from anywhere with an Internet connection, enabling remote work and global collaboration.
13. Flexibility
● Provides users with flexibility to choose from a wide range of services and configurations
to meet their specific needs.
● Supports hybrid and multi-cloud environments for greater adaptability.
Key Features of Cloud Computing
Cloud computing has several distinct features that differentiate it from traditional IT setups:
a. On-Demand Self-Service
Users can provision and manage resources without needing human interaction with the service
provider.
b. Broad Network Access
Accessible over the Internet or a private network, supporting a wide range of devices like laptops, smartphones, and tablets.
c. Resource Pooling
Resources are pooled to serve multiple users (multi-tenancy) dynamically, with abstraction of
physical resource locations.
d. Measured Service (Pay-Per-Use)
Users are billed based on usage, which can include storage space, processing power, or network bandwidth.
e. High Availability and Reliability
With redundant infrastructure and disaster recovery mechanisms, cloud computing ensures minimal downtime and maximum service continuity.
Cloud Service Models
a. Infrastructure as a Service (IaaS)
● Provides virtualized computing resources such as virtual machines, storage, and networking over the Internet.
● Examples: Amazon EC2, Google Compute Engine, Microsoft Azure Virtual Machines.
b. Platform as a Service (PaaS)
● Offers a platform for developers to build, test, and deploy applications without worrying about infrastructure management.
● Examples: Google App Engine, Microsoft Azure App Service, Heroku.
c. Software as a Service (SaaS)
● Delivers software applications over the Internet on a subscription basis, accessible via web browsers.
● Examples: Gmail, Microsoft Office 365, Salesforce.
Cloud Deployment Models
a. Public Cloud
● Services are delivered over the Internet and shared among multiple customers.
● Cost-effective but less customizable.
● Examples: AWS, Google Cloud, Microsoft Azure.
b. Private Cloud
● Cloud infrastructure dedicated to a single organization, offering greater control and security but requiring higher investment.
c. Hybrid Cloud
● Combines public and private clouds, enabling data and application portability between
them.
● Balances flexibility, cost, and control.
d. Community Cloud
● Infrastructure shared by several organizations with common concerns (e.g., security or compliance requirements), with costs shared among the community.
Advantages of Cloud Computing
a. Cost Efficiency
● Eliminates large upfront hardware investments; organizations pay only for the resources they use.
c. Accessibility
● Services are available from anywhere with an Internet connection, on a wide range of devices.
d. Disaster Recovery
● Provides built-in redundancy and backup for quick recovery during failures.
e. Enhanced Security
● Leading cloud providers implement robust security measures like encryption and
firewalls.
Challenges of Cloud Computing
a. Security Concerns
● Storing sensitive data on third-party servers can raise security and privacy concerns.
b. Downtime Risks
● Service outages or loss of Internet connectivity can make cloud-hosted resources temporarily unavailable.
c. Limited Control
● Users have limited control over cloud infrastructure and are dependent on the service
provider.
d. Cost Management
● Pay-as-you-go charges can grow unpredictably without careful monitoring and governance of usage.
Key Enabling Technologies
a. Virtualization
● Creates virtual instances of physical hardware, enabling resource pooling and efficient
usage.
b. Networking
● High-speed networks enable data transfer between cloud resources and users.
c. Automation
● Automated provisioning, scaling, and configuration management reduce manual effort and human error.
d. Containerization
● Container technologies like Docker package applications with their dependencies in isolated, portable units, while orchestrators like Kubernetes manage them at scale for consistency and scalability (a small sketch follows this list).
e. APIs
● Application programming interfaces allow users and tools to provision, configure, and manage cloud resources programmatically.
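As a hedged sketch of the containerization item above (assuming a locally running Docker Engine and the `docker` Python SDK; the image tag is just an example), an application can be launched in an isolated container programmatically:

```python
# Sketch only: starting an isolated, short-lived container.
import docker

client = docker.from_env()
# Run a container from a public image; stdout is returned as bytes.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from an isolated container')"],
    remove=True,
)
print(output.decode())
```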
Common Use Cases
a. Web Hosting
● Hosting websites and web applications on scalable cloud infrastructure.
b. Big Data Analytics
● Processing and analyzing large datasets with tools like AWS Redshift and Google BigQuery.
c. Machine Learning
● Training and deploying machine learning models using managed cloud ML services and on-demand GPU resources.
d. Disaster Recovery
● Replicating data and systems to the cloud so they can be restored quickly after a failure.
e. IoT
● Collecting, storing, and analyzing data from connected devices through cloud IoT platforms.
Emerging Trends
a. Edge Computing
● Processing data closer to its source for lower latency and better performance.
b. Multi-Cloud Strategies
● Using multiple cloud providers to avoid vendor lock-in and increase flexibility.
c. Serverless Computing
● Developers focus on code while the cloud provider handles infrastructure management (see the sketch after this list).
d. AI Integration
● Advanced AI tools are being integrated into cloud platforms for intelligent automation.
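To illustrate the serverless item above: the developer supplies only a function and the platform runs it on demand. A minimal sketch using the AWS Lambda Python handler convention as an example (the event shape shown is hypothetical):

```python
# Minimal sketch of a serverless function; no servers are managed by the developer.
import json

def lambda_handler(event, context):
    """Entry point invoked by the platform for each request/event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```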
Conclusion
Cloud computing is transforming how organizations manage their IT needs, offering unparalleled
scalability, flexibility, and efficiency. While it presents challenges like security concerns and
downtime risks, its advantages far outweigh them, making it a cornerstone of modern digital
transformation.
Need for Cloud Computing
1. Cost Efficiency
Cloud computing eliminates the need for organizations to invest in expensive hardware
and infrastructure. It operates on a pay-as-you-go model, ensuring businesses only pay
for the resources they use.
6. Resource Optimization
By leveraging shared infrastructure, cloud computing optimizes resource usage,
reducing waste and improving operational efficiency.
7. Global Reach
Cloud services operate across multiple geographic locations, enabling businesses to
expand and serve customers globally.
8. Enhanced Security
Leading cloud providers implement advanced security measures like encryption, identity
management, and regular updates to protect data and systems.
10. Sustainability
Centralized cloud data centers are more energy-efficient than distributed on-premises
systems, contributing to reduced environmental impact.
In summary, cloud computing meets the demands of modern businesses by providing flexible,
reliable, and scalable solutions that drive efficiency, innovation, and global connectivity.
Cloud Service Delivery Models
1. Infrastructure as a Service (IaaS)
Definition:
IaaS provides virtualized computing resources over the internet, including virtual machines,
storage, and networking.
Advantages:
● Full control over operating systems and applications; highly scalable; pay only for the resources used.
Disadvantages:
● The customer must manage operating systems, middleware, and security patching, which requires technical expertise.
2. Platform as a Service (PaaS)
Definition:
PaaS provides a platform for developers to build, deploy, and manage applications without
managing the underlying infrastructure.
Advantages:
● Speeds up development, since the provider manages servers, runtimes, and scaling.
Disadvantages:
● Less control over the underlying environment and a risk of vendor lock-in.
3. Software as a Service (SaaS)
Definition:
SaaS delivers software applications over the internet, typically on a subscription basis.
Advantages:
● No installation or maintenance; accessible from any device with a web browser; updates are handled by the provider.
Disadvantages:
● Limited Customization: Applications may not meet all specific business needs.
● Dependency on Internet: Requires a stable internet connection.
● Data Security: Sensitive data is stored on third-party servers.
● Subscription Costs: Long-term subscriptions may become costly.
Each delivery model has its advantages and limitations, making it essential for organizations to
choose the one that aligns with their goals, expertise, and budget.
Cloud Computing Architecture
1. Cloud Infrastructure
● Hardware Layer: The physical servers, data centers, and networks that form the
backbone of cloud services.
● Virtualization Layer: Virtualization technologies (such as VMware, KVM, or Hyper-V)
allow multiple virtual machines (VMs) to run on a single physical server, creating a more
efficient and flexible use of hardware resources.
● Storage Layer: Includes various storage solutions such as object storage (e.g., AWS
S3), block storage, and file systems used to store data in the cloud.
● Network Layer: Ensures communication between various components in the cloud
infrastructure, typically including private networks, public internet connections, and
VPNs.
5. End-User Interface
● The front end through which users interact with cloud services: web portals, dashboards, command-line tools, and APIs accessed from client devices.
In summary, cloud computing architecture is designed to deliver efficient, scalable, and secure
cloud services by combining infrastructure, platform, software services, and management tools.
Each layer and component plays a role in delivering a seamless cloud experience for end-users.
Major Cloud Deployment Models
The major cloud deployment models define the way cloud services are hosted, managed, and
accessed. They provide different levels of control, flexibility, and security based on an
organization's needs. The four primary cloud deployment models are:
1. Public Cloud
● Description: A public cloud is owned and operated by third-party cloud service providers
(e.g., Amazon Web Services, Microsoft Azure, Google Cloud). The services and
resources are made available to the general public or a large industry group.
● Characteristics:
○ Resources are shared between multiple organizations (multi-tenancy).
○ The cloud provider manages and maintains the infrastructure.
○ Typically, users pay for what they use (pay-as-you-go model).
○ Ideal for small-to-medium businesses and startups due to low costs.
● Advantages:
○ Cost-effective, with no need for physical infrastructure.
○ Scalable, with resources available on demand.
○ Managed by the cloud provider, reducing the operational burden.
● Disadvantages:
○ Limited control over the infrastructure.
○ Potential security and privacy concerns due to shared resources.
2. Private Cloud
● Description: A private cloud is dedicated to a single organization and may be hosted on-premises or by a third party. It offers greater control, customization, and security, but requires higher investment and in-house management.
3. Hybrid Cloud
● Description: A hybrid cloud is a combination of both private and public clouds, where
data and applications are shared between them. This allows businesses to take
advantage of the scalability of the public cloud while maintaining control over sensitive
data in the private cloud.
● Characteristics:
○ Allows integration and communication between private and public clouds.
○ Often used for workloads that require varying levels of privacy, security, and
scalability.
○ Can balance the need for scalability with the need for secure data handling.
● Advantages:
○ Flexibility to move workloads between private and public clouds based on
demand and cost.
○ Enables businesses to meet security and regulatory compliance requirements
while still leveraging public cloud benefits.
○ Optimizes existing infrastructure investments.
● Disadvantages:
○ Complexity in managing and integrating different environments.
○ Can incur higher costs if not carefully managed.
4. Community Cloud
● Description: A community cloud is shared by several organizations with common requirements (such as regulatory compliance or security), with infrastructure costs and governance shared among the participating organizations.
Each deployment model is designed to meet different business needs and objectives, providing
various levels of control, flexibility, cost, and security. Organizations choose the most suitable
model based on their requirements for data privacy, scalability, regulatory compliance, and cost
management.
Types of Cloud Services
1. Infrastructure as a Service (IaaS)
IaaS provides virtualized computing resources over the internet, including virtual machines
(VMs), storage, and networking.
● Compute Services: Provides virtual machines (VMs) or instances to run applications
and workloads (e.g., Amazon EC2, Microsoft Azure Virtual Machines).
● Storage Services: Cloud-based storage solutions for data backup, disaster recovery,
and scalability (e.g., AWS S3, Google Cloud Storage, Azure Blob Storage).
● Networking Services: Tools to manage virtual networks, load balancing, and security
(e.g., Amazon VPC, Azure Virtual Network).
● Content Delivery Network (CDN): Delivers content like videos, images, and web pages
to users with high performance and low latency (e.g., Amazon CloudFront, Azure CDN).
● Backup and Disaster Recovery: Ensures that data is backed up and recoverable in the
event of a disaster (e.g., AWS Backup, Google Cloud Storage Archive).
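The storage services listed above are typically used programmatically. A minimal, hedged sketch (assuming boto3 is installed, AWS credentials are configured, and the file and bucket names below are placeholders you control):

```python
# Sketch only: storing and listing objects in an IaaS object-storage service.
import boto3

s3 = boto3.client("s3")
# Upload a local file as an object, then list what the bucket contains.
s3.upload_file("backup.tar.gz", "example-backup-bucket", "2024/backup.tar.gz")
for obj in s3.list_objects_v2(Bucket="example-backup-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```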
2. Platform as a Service (PaaS)
PaaS provides a platform that allows developers to build, deploy, and manage applications
without worrying about the underlying infrastructure.
● Application Hosting: Platforms for deploying web and mobile applications (e.g., Google
App Engine, Heroku, Microsoft Azure App Service).
● Database Services: Managed database solutions for relational, NoSQL, and in-memory
databases (e.g., Amazon RDS, Google Cloud SQL, Azure SQL Database).
● Container Orchestration: Platforms to deploy and manage containers at scale (e.g.,
Kubernetes on Google Cloud, Amazon ECS, Azure Kubernetes Service).
● DevOps Tools: Services that support continuous integration and continuous deployment
(CI/CD) (e.g., AWS CodePipeline, GitHub Actions, Azure DevOps).
● Data Processing and Analytics: Managed services for processing, analyzing, and
visualizing large volumes of data (e.g., Google BigQuery, AWS Lambda, Azure Data
Factory).
● AI and Machine Learning: Tools and services for building and deploying machine
learning models (e.g., Amazon SageMaker, Google AI Platform, Azure Machine
Learning).
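The application-hosting platforms listed above run code like the following minimal web app; the developer hands over the code and the platform supplies the runtime, scaling, and infrastructure. A sketch assuming Flask is installed (the route and port are arbitrary examples):

```python
# Sketch only: the kind of small web application a developer deploys to a PaaS.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    return jsonify(status="ok", message="running on a PaaS-managed runtime")

if __name__ == "__main__":
    # Locally you run the development server; on a PaaS the platform runs the app.
    app.run(port=8080)
```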
3. Software as a Service (SaaS)
SaaS delivers software applications over the internet on a subscription basis, eliminating the need for users to install, manage, or maintain software.
● Examples: Gmail, Microsoft Office 365, Salesforce.
4. Specialized Cloud Services
These services go beyond the basic IaaS, PaaS, and SaaS categories and address specific
needs within cloud environments.
● Cloud Security Services: Tools for identity and access management (IAM), encryption,
threat detection, and compliance (e.g., AWS Identity and Access Management, Azure
Security Center).
● Serverless Computing: Platforms for running code without managing servers, where
users only pay for actual compute time (e.g., AWS Lambda, Google Cloud Functions,
Azure Functions).
● Big Data and Analytics: Services designed for large-scale data storage, processing,
and analysis (e.g., Amazon Redshift, Google BigQuery, Azure Synapse Analytics).
● Blockchain Services: Cloud-based services for developing and managing blockchain
applications (e.g., AWS Blockchain, Azure Blockchain Workbench).
● Edge Computing: Services for processing data closer to the location of end users to
reduce latency (e.g., AWS IoT Greengrass, Microsoft Azure IoT Edge).
● IoT (Internet of Things) Services: Cloud platforms for managing IoT devices and data
(e.g., AWS IoT Core, Google Cloud IoT, Azure IoT Hub).
Major Cloud Service Providers
Here are some of the major cloud service providers that offer the services mentioned above:
● Amazon Web Services (AWS): The leading provider, offering a broad range of services
across IaaS, PaaS, and SaaS, including compute, storage, databases, machine
learning, and more.
● Microsoft Azure: Offers a comprehensive suite of cloud services, with strengths in
enterprise IT, security, hybrid cloud, and integrations with Microsoft products.
● Google Cloud: Known for its capabilities in data analytics, machine learning, and open-source technologies such as Kubernetes.
Threat agent
A threat agent refers to any entity or individual (either human or non-human) that has the
potential to exploit vulnerabilities in a system or environment, causing harm to an organization
or its assets. Threat agents can be either intentional or unintentional and can originate both from
external and internal sources. These agents can be involved in a variety of malicious activities,
such as stealing data, causing service disruptions, or damaging reputation.
Common types of threat agents include:
8. Physical Intruders
○ Description: Physical agents that may threaten the security of a system through
physical means, such as tampering with hardware, stealing devices, or causing
physical damage to data centers.
○ Example: A burglar physically stealing hard drives from a company’s data center.
○ Motivation: Theft, sabotage, or espionage.
9. Social Engineers
○ Description: Individuals who manipulate people into revealing confidential information or performing actions that compromise security, for example through phishing or impersonation.
○ Example: An attacker posing as IT support staff to trick an employee into sharing a password.
○ Motivation: Financial gain, espionage, or unauthorized access.
Key attributes of a threat agent:
● Intent: Whether the agent is acting intentionally or accidentally, their actions are typically
harmful or pose a risk to security.
● Capability: The skill level, tools, and resources available to the threat agent (e.g., a
skilled hacker vs. a script kiddie).
● Opportunity: The ability of the threat agent to exploit vulnerabilities—this includes both
technical weaknesses and human factors.
● Motive: The underlying reason or purpose behind the attack, which could range from
financial gain to political or ideological reasons.
By understanding the different types of threat agents and the methods they use, organizations can develop better strategies to mitigate risks and protect their assets.
Virtual Desktop Infrastructure
Virtual Desktop Infrastructure (VDI) is a technology that allows organizations to host desktop
environments on centralized servers in a data center, providing users with access to these
desktops from any device, anywhere. In a VDI setup, the desktop environment is virtualized and
stored in the data center, rather than running locally on an end user's device. VDI helps improve
security, manageability, and flexibility in how organizations deliver desktops to end users.
Key Components of VDI
1. Virtualization Layer
○ VDI uses virtualization technology to create virtual machines (VMs) that host
desktop environments. Each virtual machine runs an operating system and
applications just like a physical desktop.
○ Common VDI technologies include VMware Horizon, Microsoft Remote
Desktop Services (RDS), and Citrix Virtual Apps and Desktops.
2. Centralized Servers and Storage
○ The virtual desktops are stored and run on centralized servers in the data center
or cloud. These servers host the VMs and provide resources like CPU, memory,
and storage.
○ Storage for VDI environments must be highly available and fast to ensure
performance, often utilizing Storage Area Networks (SAN) or Network
Attached Storage (NAS).
3. End-User Devices
○ Users access their virtual desktops from a variety of devices, including PCs,
laptops, thin clients, tablets, or smartphones. These devices do not require high
processing power, as most of the processing happens on the centralized servers.
○ Access to VDI environments is usually provided through a remote desktop
protocol (RDP), PCoIP, HDX, or other similar technologies.
4. Virtual Desktop Broker
○ A desktop broker manages user connections, assigning virtual desktops to users based on policies and availability. It acts as an intermediary between the end user and the virtual desktop infrastructure, ensuring users are connected to the appropriate virtual machine based on their needs and profiles (a simplified sketch of this assignment logic follows below).
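A hypothetical Python sketch of what a broker does at its core: match a user to an available desktop according to simple policy rules. Real brokers (VMware Horizon, Citrix) are far more sophisticated; all names and fields below are invented for illustration.

```python
# Hypothetical broker logic: assign a free desktop from the user's pool.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VirtualDesktop:
    vm_id: str
    pool: str            # e.g., "finance", "developers"
    in_use: bool = False

def assign_desktop(user_group: str, desktops: List[VirtualDesktop]) -> Optional[VirtualDesktop]:
    """Return a free desktop from the pool matching the user's group, if any."""
    for vd in desktops:
        if vd.pool == user_group and not vd.in_use:
            vd.in_use = True
            return vd
    return None  # a real broker might queue the user or provision a new VM here

desktops = [VirtualDesktop("vm-101", "finance"), VirtualDesktop("vm-102", "developers")]
print(assign_desktop("developers", desktops))
```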
5. Connection Protocol
○ The connection protocol is used to display the virtual desktop on the user's
device. Common protocols include:
■ Remote Desktop Protocol (RDP): Used primarily in Microsoft environments.
■ PCoIP (PC over IP): Used by VMware Horizon for delivering
high-performance virtual desktops.
■ HDX: Citrix’s optimized protocol for virtual desktop environments.
6. Management and Monitoring Tools
○ VDI platforms come with management tools that allow IT admins to provision,
manage, and monitor virtual desktops. These tools help in:
■ Deploying applications and updates.
■ Monitoring user activity and system performance.
■ Scaling infrastructure up or down based on demand.
○ Tools such as VMware vSphere, Citrix Director, and Microsoft System Center
Virtual Machine Manager are commonly used.
Benefits of VDI
1. Centralized Management
○ Since all desktops are managed from a central server, IT administrators can
apply updates, patches, and configurations across all virtual desktops
simultaneously, reducing the overhead of managing individual physical machines.
○ New applications, security updates, and policies can be pushed to all users
quickly.
2. Cost Savings
○ VDI reduces the need for high-powered end-user devices. Thin clients, which are
cheaper and require less maintenance than traditional desktop PCs, can be used
to access virtual desktops.
○ It also reduces costs related to hardware upgrades, as the central servers handle
processing power.
3. Security and Compliance
○ Data and applications are stored in the data center rather than on local devices,
which reduces the risk of data loss due to device theft or failure.
○ Security controls such as encryption, multi-factor authentication, and centralized
access management can be easily implemented and enforced.
○ It also supports compliance with industry regulations by ensuring that sensitive
data is not stored on end-user devices but remains protected in the data center.
4. Flexibility and Mobility
○ Users can access their virtual desktops from anywhere, on any device, as long
as they have an internet connection. This enables remote work, BYOD (bring
your own device) policies, and disaster recovery.
○ Virtual desktops can be easily moved between different data centers or even to
cloud platforms, improving business continuity and scaling.
5. Improved Disaster Recovery
○ Since the virtual desktops are hosted on centralized servers, they can be backed
up and replicated more easily. In the event of hardware failure or disaster, virtual
desktops can be quickly restored from backups.
○ VDI can also be deployed across multiple data centers, ensuring business
continuity if one site goes down.
6. Scalability
○ VDI environments can scale easily by adding more virtual machines or resources
to the server infrastructure. The system can grow as the organization’s needs
increase without having to replace individual end-user devices.
Challenges of VDI
1. High Initial Costs
○ While VDI can save on hardware and maintenance costs in the long run, the initial setup of the infrastructure (servers, storage, networking) can be expensive.
○ The centralization of resources requires robust servers and high-speed network connectivity, which may require significant investment.
2. Complexity in Implementation
○ Designing and deploying VDI requires expertise in virtualization, storage, and networking; a poorly sized environment can lead to performance problems.
3. Licensing Costs
○ Software licensing for VDI solutions can be complex and costly. Some providers
may require additional licenses for virtual desktops, remote access, and
operating systems, which can increase costs.
○ It’s also necessary to plan for licensing of software applications used within the
VDI environment.
Use Cases for VDI
1. Remote and Mobile Workforces
○ VDI is ideal for organizations that support remote work or have mobile employees. Workers can access their desktop environment from any location or device, ensuring business continuity and flexibility.
2. BYOD (Bring Your Own Device)
○ Employees can use personal devices to access a secure corporate desktop, while corporate data stays in the data center rather than on the device.
3. Education and Training
○ Schools and training institutions can use VDI to provide students with access to standardized desktop environments and software applications from any device or location.
4. Healthcare
○ Healthcare organizations can use VDI to provide secure access to patient data
and medical applications, meeting strict regulatory requirements (e.g., HIPAA
compliance) while enabling remote or mobile access for healthcare workers.
5. Contractor or Temporary Worker Access
○ Virtual desktops can be provisioned quickly for contractors or temporary staff and deprovisioned when the engagement ends, keeping corporate data off untrusted devices.
Popular VDI Solutions
● VMware Horizon
○ A leading VDI solution that provides a secure, high-performance virtual desktop
environment with centralized management.
● Citrix Virtual Apps and Desktops
○ Offers advanced VDI features with scalability and optimized performance for a
variety of workloads.
● Microsoft Remote Desktop Services (RDS)
○ An affordable and scalable solution that integrates well with Windows Server
environments.
● Amazon WorkSpaces
○ A managed Desktop-as-a-Service (DaaS) offering from AWS that simplifies the
management of VDI.
In summary, Virtual Desktop Infrastructure (VDI) is a powerful solution for organizations looking
to centralize desktop management, enhance security, and provide flexible access to desktop
environments for a diverse range of users. However, it requires careful planning and investment
in infrastructure to ensure optimal performance and scalability.
Cloud Security
1. Identity and Access Management (IAM)
○ Ensures that only authorized users and devices can access cloud resources.
○ Features:
■ Role-based access control (RBAC)
■ Multi-factor authentication (MFA)
■ Single sign-on (SSO)
■ Identity federation
○ Examples: AWS IAM, Azure Active Directory, Okta
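A hedged sketch of how IAM is typically expressed in practice: a least-privilege policy document created through an API (assumes boto3 and configured credentials; the policy name and bucket ARN are placeholders):

```python
# Sketch only: define a read-only policy document and register it with IAM.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                # read-only access
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ExampleReadOnlyBucketPolicy",
    PolicyDocument=json.dumps(policy_document),
)
```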
2. Data Protection
○ Encrypts data at rest and in transit, manages encryption keys, and provides backups to protect information stored in the cloud.
3. Network Security
○ Protects cloud environments from unauthorized access and cyber threats at the network level.
○ Features:
■ Firewalls (e.g., AWS WAF, Azure Firewall)
■ Virtual private clouds (VPCs) and subnets.
■ Secure VPNs and hybrid connectivity.
■ Intrusion detection and prevention systems (IDPS).
○ Example: AWS Shield, Azure DDoS Protection
4. Application Security
○ Secure development practices, web application firewalls, and vulnerability scanning protect applications hosted in the cloud.
1. Preventive Controls
○ Ensure compliance with data residency laws by storing and processing data in
specific geographic locations.
Common Cloud Security Threats
1. Data Breaches
○ Unauthorized access to or exposure of sensitive data, often caused by weak credentials, stolen keys, or misconfigured storage.
Cloud Security Best Practices
1. Secure Configurations
○ Use automated tools to check for misconfigurations (e.g., AWS Config, Azure
Security Center).
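A small, hedged sketch of one such automated check, flagging S3 buckets without default encryption (assumes boto3 and configured credentials; dedicated tools like AWS Config cover far more rules):

```python
# Sketch only: detect one common misconfiguration across all S3 buckets.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"WARNING: bucket {name} has no default encryption configured")
        else:
            raise
```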
2. Regular Updates and Patches
○ Keep operating systems, container images, and dependencies patched, and apply provider-recommended security updates promptly.
Types of Virtualization
1. Server Virtualization
○ Definition: Divides physical servers into multiple virtual machines (VMs), each
running its own operating system and applications independently.
○ Benefits:
■ Optimized resource utilization by running multiple workloads on a single
server.
■ Easier management, backup, and recovery.
■ Supports dynamic allocation of computing resources.
○ Technologies: VMware vSphere, Microsoft Hyper-V, KVM.
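As a hedged illustration of server virtualization in practice (assuming the libvirt Python bindings are installed and a local QEMU/KVM hypervisor is running), the VMs on a host can be enumerated programmatically:

```python
# Sketch only: list the virtual machines on a local KVM host via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "stopped"
        print(f"{domain.name()}: {state}")
finally:
    conn.close()
```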
2. Storage Virtualization
○ Definition: Pools physical storage from multiple devices into a single logical storage resource that can be provisioned and managed centrally.
○ Technologies: VMware vSAN, Red Hat Gluster Storage.
Benefits of Virtualization
1. Resource Optimization
○ Reduces the need for physical hardware, leading to lower capital and operational
expenses.
3. Flexibility and Scalability
○ Virtual machines can be created, resized, migrated, and removed quickly as workload demands change.
Challenges of Virtualization
1. Performance Overhead
○ Virtualization can introduce performance overhead, as multiple virtual machines
share the same physical resources.
2. Complexity
○ Managing many virtual machines, hypervisors, and virtual networks requires specialized skills and tooling.
Use Cases of Virtualization
1. Data Centers
○ Consolidates many workloads onto fewer physical servers, improving utilization and simplifying management.
2. Development and Testing
○ Developers can quickly spin up and tear down virtual environments for testing.
3. Disaster Recovery
○ Virtual machine snapshots and replicas can be restored quickly on different hardware after a failure.
Key Virtualization Technologies
● Hypervisors:
○ Type 1 (Bare-metal): VMware ESXi, Microsoft Hyper-V, XenServer.
○ Type 2 (Hosted): Oracle VirtualBox, VMware Workstation, Parallels Desktop.
● Containerization: Docker, Podman, Kubernetes.
● Storage Virtualization: VMware vSAN, Red Hat Gluster Storage.
● Network Virtualization: OpenStack Neutron, VMware NSX.