Icc Notes
Disadvantages of Cloud Computing:
1. Security Risks: Vulnerable to cyberattacks and data breaches, with limited control over data.
2. Downtime: Dependent on internet connectivity and prone to outages.
3. Cost Overruns: Pay-as-you-go models can result in unexpected expenses.
4. Limited Customization: Restricted by provider’s infrastructure and potential vendor lock-in.
5. Performance Issues: Network latency and resource sharing can impact speed.
6. Compliance Challenges: Legal issues with data sovereignty and regulatory compliance.
7. Support Limitations: Quality of technical support can vary, needing in-house expertise.
Evolution of Cloud Computing:
1. Distributed Systems
o Independent systems in different locations are connected through networks.
o Examples: Ethernet (LAN), telecommunication networks, parallel processing.
o Features:
▪ Resource Sharing: Data, hardware, and software are shared.
▪ Open-to-All: The software is designed to be open and accessible to every node in the system.
▪ Fault Detection: Errors can be identified and corrected.
2. Mainframe Computing
o Developed in 1951 and still used for bulk data processing.
o Known for fast and large-scale computations.
o Advantages: Handles vast data efficiently for businesses.
o Drawback: Expensive to maintain.
3. Grid Computing
o Introduced in 1990, with nodes placed in different geographical locations connected via the
internet.
o Solves some issues of cluster computing but introduces latency problems between distant
nodes.
o Nodes can be from different organizations.
4. Web 2.0
o Involves users generating content and collaborating through social media and web services
(e.g., Facebook, Twitter).
o Combines the second-generation WWW and modern web services to promote interaction.
5. Virtualization
o Introduced over 40 years ago and enables the creation of virtual resources on physical
hardware.
o A key technique for cloud-based services, allowing efficient use of infrastructure.
6. Utility Computing
o Provides computing resources (like storage) on a rental basis.
o Services are tailored to user or business needs, supporting flexible usage.
Here’s a tabular comparison of Cloud Computing vs. Cluster Computing:
A service model defines the type and level of services a cloud provider offers to users. The three primary
models are IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a
Service).
In summary, the choice of service model depends on how much control and responsibility the user wants: IaaS gives the most control (and the most management work), PaaS abstracts the underlying infrastructure so users focus on their applications, and SaaS delivers a fully managed application with the least user responsibility.
A deployment model refers to the specific way in which a cloud service is made available to users. It
defines the environment in which cloud services are hosted and the level of control, security, and
accessibility users have. Different deployment models cater to different organizational needs and
requirements.
1. Public Cloud:
o Description: Services are delivered over the public internet and shared among multiple
organizations (tenants). Resources are owned and operated by third-party cloud service
providers.
o Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP).
o Use Cases: Suitable for small to medium-sized businesses, startups, or organizations that
require scalability without high upfront costs.
2. Private Cloud:
o Description: Cloud infrastructure is exclusively used by a single organization. It can be
hosted on-premises or by a third-party provider. Offers enhanced security and control.
o Examples: VMware vSphere, OpenStack.
o Use Cases: Ideal for businesses with strict regulatory requirements or those handling
sensitive data.
3. Hybrid Cloud:
o Description: Combines both public and private clouds, allowing data and applications to be
shared between them. Organizations can benefit from the scalability of the public cloud while
maintaining sensitive data on a private cloud.
o Examples: Microsoft Azure Stack, Google Anthos.
o Use Cases: Suitable for businesses that want flexibility, such as using the public cloud for
non-sensitive operations while keeping sensitive data private.
4. Community Cloud:
o Description: A collaborative environment where cloud infrastructure is shared among several
organizations with similar interests or requirements. It can be managed internally or by a
third-party provider.
o Examples: Government organizations sharing a cloud for public services, academic
institutions collaborating on research.
o Use Cases: Suitable for organizations that require a specific level of security or compliance
shared among similar entities.
5. Multi-Cloud:
o Description: The use of services from multiple cloud providers, which can include both
public and private clouds. This approach allows organizations to avoid vendor lock-in and
take advantage of the best services from different providers.
o Examples: Using AWS for storage, Azure for machine learning, and Google Cloud for data
analytics.
o Use Cases: Suitable for large enterprises looking for flexibility and redundancy in their cloud
strategy.
Virtualization is a technology that allows the creation of a virtual version of something, such as hardware
platforms, storage devices, or network resources. It abstracts physical resources to enable multiple operating
systems and applications to run on a single physical machine, maximizing resource utilization and
flexibility.
1. Abstraction:
o Virtualization abstracts physical hardware resources into virtual resources. For example, a
single physical server can be divided into multiple virtual machines (VMs), each with its own
operating system and applications.
2. Hypervisor:
o A hypervisor is software that enables virtualization. It sits between the hardware and the
operating systems, managing the distribution of physical resources to the virtual machines.
o There are two types of hypervisors:
▪ Type 1 (Bare-Metal Hypervisor): Runs directly on the physical hardware (e.g.,
VMware ESXi, Microsoft Hyper-V).
▪ Type 2 (Hosted Hypervisor): Runs on top of an existing operating system (e.g.,
Oracle VirtualBox, VMware Workstation).
3. Virtual Machines (VMs):
o VMs are instances created through virtualization. Each VM behaves like a separate physical
computer, with its own operating system, applications, and resources. They can be easily
created, modified, and deleted.
4. Resource Allocation:
o Virtualization allows for dynamic allocation of resources (CPU, memory, storage) among
VMs based on demand. This optimizes resource use and enhances performance.
5. Isolation:
o VMs are isolated from each other, meaning that issues in one VM (such as crashes or security
breaches) do not affect other VMs on the same host. This isolation improves security and
reliability.
6. Snapshots and Cloning:
o Virtualization technologies often include features like snapshots (saving the current state of a
VM) and cloning (creating an exact copy of a VM). These features are useful for backup,
testing, and development; a brief sketch after this list illustrates snapshots in practice.
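The sketch below ties these ideas together: it connects to a hypervisor, lists its virtual machines, and takes a snapshot of one of them. It is only a minimal illustration, assuming a Linux host running the KVM/QEMU hypervisor with the libvirt Python bindings installed; the VM name "demo-vm" is a hypothetical example.

import libvirt

# Connect to the local KVM/QEMU hypervisor managed by libvirt.
conn = libvirt.open("qemu:///system")

# Each libvirt "domain" is one virtual machine managed by the hypervisor.
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: active={dom.isActive()}, vCPUs={vcpus}, memory={max_mem // 1024} MiB")

# Snapshot: save the current state of one VM so it can be rolled back later.
dom = conn.lookupByName("demo-vm")  # hypothetical VM name
snapshot_xml = "<domainsnapshot><name>before-upgrade</name></domainsnapshot>"
dom.snapshotCreateXML(snapshot_xml, 0)

conn.close()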
Virtualization is a technology that creates a virtual version of physical resources, such as servers, storage
devices, networks, or applications. By abstracting these resources, virtualization allows multiple operating
systems (OS) and applications to run on a single physical machine, enhancing resource utilization,
flexibility, and management.
Types of Virtualization
1. Server Virtualization:
o Description: This is the most common form of virtualization. It involves partitioning a
physical server into multiple virtual servers (or virtual machines, VMs), each running its own
operating system and applications.
o Hypervisors Used: Type 1 (bare-metal) or Type 2 (hosted) hypervisors.
o Use Cases: Data centers, cloud computing, and consolidating server workloads.
2. Desktop Virtualization:
o Description: This technology allows users to access desktop environments hosted on a
server. Users can run their desktop OS and applications from any device, anywhere.
o Types:
▪ Virtual Desktop Infrastructure (VDI): Desktops are hosted on a server and
delivered to client devices.
▪ Desktop as a Service (DaaS): A cloud-based VDI offering.
o Use Cases: Remote work, BYOD (Bring Your Own Device) policies, and centralized desktop
management.
3. Application Virtualization:
o Description: This isolates applications from the underlying operating system, allowing them
to run in a virtualized environment. This means applications can be run on different devices
without needing to be installed directly on those devices.
o Use Cases: Simplifying application deployment and management, running legacy
applications on newer OS versions.
4. Network Virtualization:
o Description: This abstracts network resources, allowing multiple virtual networks to coexist
on the same physical network infrastructure. It enables the creation of virtual networks,
switches, and routers.
o Types:
▪ Software-Defined Networking (SDN): Separates the control plane from the data
plane, allowing for more flexible network management.
o Use Cases: Network segmentation, optimizing network performance, and improving security.
5. Storage Virtualization:
o Description: This combines multiple physical storage devices into a single, manageable
virtual storage resource. It abstracts the complexity of physical storage from the applications
that use it.
o Types:
▪ Block Storage Virtualization: Aggregates storage volumes from multiple devices.
▪ File Storage Virtualization: Pools file storage systems to improve accessibility and
management.
o Use Cases: Simplifying data management, improving backup and recovery, and enhancing
storage efficiency.
6. Data Virtualization:
o Description: This allows users to access and manipulate data from multiple sources without
needing to know the physical location of the data. It creates a single view of data from
disparate sources.
o Use Cases: Business intelligence, data integration, and analytics.
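As a small illustration of the data-virtualization idea, the sketch below builds one unified, queryable view over two disparate sources (a CSV export and a JSON export) so that consumers query a single logical table without knowing where the data physically lives. The file names and column layout are hypothetical.

import csv
import json
import sqlite3

# One in-memory table acts as the unified "virtual" view.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")

# Source 1: rows stored in a CSV export from one system.
with open("crm_customers.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute("INSERT INTO customers VALUES (?, ?, ?)",
                     (int(row["id"]), row["name"], row["region"]))

# Source 2: rows stored in a JSON export from another system.
with open("billing_customers.json") as f:
    for rec in json.load(f):
        conn.execute("INSERT INTO customers VALUES (?, ?, ?)",
                     (rec["id"], rec["name"], rec["region"]))

# Consumers see one logical view, regardless of the underlying sources.
for region, count in conn.execute("SELECT region, COUNT(*) FROM customers GROUP BY region"):
    print(region, count)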
A data center is a centralized facility that houses computing resources such as servers, storage, and
networking equipment, designed to manage, process, and store large volumes of data for applications and
services.
Virtual machine storage refers to the storage resources that are allocated to virtual machines (VMs) within
a virtualization environment. This storage is essential for storing the VM's operating system, applications,
and data. The way storage is managed for VMs can significantly affect performance, scalability, and data
management.
1. Virtual Disks:
o VMs use virtual disks (often with file extensions like .vmdk, .vhd, or .vdi) that emulate
physical hard disks. These virtual disks store the operating system and application files for
the VM; a brief sketch after this list shows how such a disk can be created.
2. Storage Types:
o Local Storage: Directly attached storage on the physical host. While fast, it limits mobility
because the VM is tied to a specific host.
o Network Attached Storage (NAS): Storage accessed over a network. It allows multiple
hosts to access the same storage resources, facilitating VM mobility.
o Storage Area Network (SAN): A dedicated network that provides access to consolidated
block-level storage. It is often used in enterprise environments for performance and
scalability.
3. Storage Formats:
o Different virtualization platforms use different virtual disk formats (e.g., VMDK for VMware, VHD/VHDX for Hyper-V, VDI for VirtualBox, QCOW2 for KVM/QEMU), each with its own features and advantages.
4. Storage Policies:
o Administrators can define storage policies for VMs based on performance, availability, and
redundancy requirements. This includes decisions on data replication, backups, and storage tiering.
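As a concrete illustration of the virtual-disk concept from item 1, the sketch below creates and inspects a virtual disk file from Python. It assumes the qemu-img utility (part of QEMU) is installed on the host; the file name and size are arbitrary examples.

import subprocess

# Create a 20 GB virtual disk in VMDK format. Space is allocated on demand,
# so the file starts small and grows as the guest OS writes data.
subprocess.run(
    ["qemu-img", "create", "-f", "vmdk", "app-server.vmdk", "20G"],
    check=True,
)

# Inspect the disk's format, virtual size, and actual size on the host.
subprocess.run(["qemu-img", "info", "app-server.vmdk"], check=True)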
P2V (Physical-to-Virtual) Conversion is the process of converting a physical server into a virtual machine
(VM). This allows organizations to move workloads from physical hardware to a virtual environment,
facilitating better resource utilization and flexibility.
1. Purpose:
o To migrate physical servers to virtual environments for better efficiency, easier management,
and improved disaster recovery.
2. Process:
o Use specialized software or tools to create a virtual disk image of the physical server.
o Transfer the image to a hypervisor (Type 1 or Type 2) and create a VM based on this image.
3. Benefits:
o Resource Optimization: Consolidates multiple physical servers onto fewer virtual machines.
o Cost Savings: Reduces hardware costs and energy consumption.
o Flexibility: Facilitates easier backup, recovery, and scaling of workloads.
4. Tools:
o Tools for P2V conversion include Microsoft Virtual Machine Converter and VMware vCenter Converter.
A hypervisor, also known as a virtual machine monitor (VMM), is the software layer that manages multiple
virtual machines on a single host system. It allows each VM to run independently by allocating resources
from the host machine.
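Modern hypervisors of both types typically rely on the CPU's hardware virtualization extensions (Intel VT-x or AMD-V). The sketch below is a minimal, Linux-specific check for those extensions by reading /proc/cpuinfo; it is an illustration only, not a substitute for a hypervisor's own compatibility checks.

# Check whether the host CPU advertises hardware-assisted virtualization.
# Linux-specific: 'vmx' indicates Intel VT-x, 'svm' indicates AMD-V.
def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        flags = set(f.read().split())
    return "vmx" in flags or "svm" in flags

if __name__ == "__main__":
    if has_hw_virtualization():
        print("CPU supports hardware-assisted virtualization (VT-x / AMD-V).")
    else:
        print("No hardware virtualization flags found; VMs may run much slower.")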