Cloud Answers

The document contains a comprehensive question bank covering various topics in high-performance computing, cloud computing, virtualization, and CPU microarchitectures. It details the evolution of HPC and HTC systems, different cloud types, virtualization levels, and I/O virtualization methods, along with specific architectures like vCUDA and Xen hypervisor. Additionally, it discusses live VM migration processes and the properties of virtual clusters, emphasizing scalability, resource optimization, and fault tolerance.


QUESTION BANK

1. Illustrate the evolution of HPC and HTC systems.


2. Explain IaaS, PaaS, and SaaS in detail and outline the eight reasons to adopt the cloud.
3. Demonstrate the five microarchitectures in modern CPU processors with a neat diagram.
4. Explain trends toward distributed operating systems.
5. Explain different levels of virtualization.
6. Illustrate the basic concept of the vCUDA architecture with a neat diagram.
7. Summarize I/O virtualization with suitable example.
8. Explain six properties of virtual clusters and outline fast deployment in detail.
9. Explain different types of clouds with suitable diagram.

10. Demonstrate different cloud design objectives.


11. Compare different computing paradigm distinctions.
12. Summarize Computational Grids and Grid families.
13. Outline parallel and distributed programming models in detail.

14. Explain different dimensions of scalability.


15. Explain OS level virtualization with advantages and disadvantages of OS extensions.
16. Explain hypervisor and outline Xen hypervisor architecture in detail.
17. Illustrate how multiple clusters execute various workloads with neat diagram.
18. Demonstrate Live migration process of a VM from one host to another with suitable diagram.
19. Outline cloud ecosystem for building private clouds with suitable diagram.
20. Explain public, private and hybrid clouds in detail.
21. Explain computing paradigm distinction.
22. Explain the design objectives of HPC and HTC systems.
23. Illustrate degree of parallelism.
24. Demonstrate trends towards utility computing.
25. Write short notes on IoT and cyber-physical systems.
26. Discuss architecture of typical multicore processor.
27. Show GPU computing with an example (e.g., NVIDIA).
28. Write short notes on memory and storage.
29. Write short notes on performance metrics and fault tolerance.
30. Explain the four layers of a distributed computing system.
31. Discuss live VM migration steps and performance effects.
32. Discuss Migration of memory, files and network resources.
33. Explain dynamic deployment of virtual clusters.
34. Discuss server consolidation in data centers.
35. Explain virtual storage management.
36. Discuss three resource managers over VMs linked through Ethernet and the Internet.
37. Discuss briefly trust management in virtualized data centers.
38. Illustrate briefly binary translation with full virtualization.
39. Explain hardware support for virtualization in the Intel x86 processor.
40. Explain I/O virtualization in detail.
Five Microarchitectures in Modern CPU Processors

Modern CPU architectures use different techniques to improve performance by exploiting


Instruction-Level Parallelism (ILP) and Thread-Level Parallelism (TLP). The five
microarchitectures considered here are:

1. Four-Issue Superscalar Processor


2. Fine-Grain Multithreaded Processor
3. Coarse-Grain Multithreaded Processor
4. Two-Core Chip Multiprocessor (CMP)
5. Simultaneous Multithreaded (SMT) Processor

Each of these architectures schedules instructions differently across multiple functional units or
cores.

1. Four-Issue Superscalar Processor

 Description:
o A single-threaded processor capable of issuing up to four instructions per cycle to
four functional units.
o Exploits ILP (Instruction-Level Parallelism) by executing multiple instructions
from the same thread simultaneously.
o If there are stalls due to data dependencies or lack of available instructions, some
functional units remain idle.
 Diagram Representation:
o A grid where each row represents a functional unit and each column represents a
cycle.
o Instructions from the same thread are executed simultaneously in available units.
o Some empty slots (blank cells) appear when dependencies stall execution.

2. Fine-Grain Multithreaded Processor

 Description:
o Switches execution between multiple threads every cycle to avoid stalls.
o Exploits TLP (Thread-Level Parallelism) by allowing different threads to
execute on the same processor but in separate cycles.
o No idle cycles but at the cost of additional context switching overhead.
 Diagram Representation:
o Each cycle, a different thread issues an instruction to an available functional unit.
o No consecutive instructions from the same thread appear in back-to-back cycles.
o Threads are interleaved, ensuring all functional units remain busy.

3. Coarse-Grain Multithreaded Processor

 Description:
o Runs multiple threads but switches to another thread only when a long-latency
event (e.g., a cache miss) occurs.
o Reduces the overhead of frequent switching seen in fine-grain multithreading.
o Functional units remain active within a thread but may be idle during a thread
switch.
 Diagram Representation:
o Instructions from the same thread execute for multiple cycles before switching to
another thread.
o Large blocks of the same shading pattern (indicating the same thread) before a
transition occurs.

4. Two-Core Chip Multiprocessor (CMP)

 Description:
o A multicore processor with two separate cores, each capable of independent
execution.
o Each core is a two-way superscalar processor, meaning each can issue two
instructions per cycle.
o Fully exploits TLP by executing multiple threads in parallel across cores.
 Diagram Representation:
o Two sets of execution timelines, one per core.
o Each core runs instructions independently, showing that threads execute
concurrently rather than being interleaved.

5. Simultaneous Multithreaded (SMT) Processor

 Description:
o A single processor that issues instructions from multiple threads simultaneously
in the same cycle.
o Improves resource utilization by filling idle functional units with instructions
from different threads.
o Exploits both ILP and TLP, maximizing execution efficiency.
 Diagram Representation:
o Instructions from multiple threads appear in the same cycle but use different
functional units.
o No blank slots, as SMT makes full use of the available resources.

Comparison of Execution Patterns

| Processor Type              | Parallelism Type   | Execution Pattern                                             |
|-----------------------------|--------------------|---------------------------------------------------------------|
| Superscalar                 | ILP                | Multiple instructions from the same thread per cycle          |
| Fine-Grain Multithreading   | TLP                | Different threads alternate execution per cycle               |
| Coarse-Grain Multithreading | TLP                | Long execution blocks per thread before switching             |
| CMP (Multicore)             | TLP (across cores) | Multiple threads execute on different cores concurrently      |
| SMT                         | ILP + TLP          | Instructions from multiple threads executed in the same cycle |
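The contrast between fine-grain multithreading and SMT can be illustrated with a toy issue-slot model in Python. All of the numbers (issue width, per-thread ILP limit, thread and cycle counts) are illustrative assumptions, not measurements of any real processor:

```python
# Toy issue-slot model contrasting fine-grain multithreading with SMT.
# Assumption: each thread can supply at most ILP independent
# instructions per cycle (a stand-in for data-dependence limits).

WIDTH = 4      # issue slots per cycle (four-issue machine)
ILP = 2        # per-thread instruction-level parallelism limit
THREADS = 4
CYCLES = 8     # 8 cycles x 4 slots = 32 issue slots in total

def fine_grain_issue():
    """One thread owns all the slots in each cycle (round-robin)."""
    issued = 0
    for _ in range(CYCLES):
        issued += min(ILP, WIDTH)            # only the active thread issues
    return issued

def smt_issue():
    """All threads may fill slots in the same cycle."""
    issued = 0
    for _ in range(CYCLES):
        issued += min(THREADS * ILP, WIDTH)  # slots filled across threads
    return issued

print(fine_grain_issue())  # 16 of 32 slots used: half the machine idles
print(smt_issue())         # 32 of 32 slots used: idle slots filled by TLP
```

The point of the model is that when one thread cannot supply enough independent instructions, SMT fills the leftover slots with instructions from other threads, while fine-grain multithreading leaves them empty.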

Levels of Virtualization Implementation

1. Instruction Set Architecture (ISA) Level

 Virtualization occurs through ISA emulation, allowing software compiled for one architecture
(e.g., MIPS) to run on another (e.g., x86).
 Achieved through code interpretation (slow) or dynamic binary translation (faster and more
efficient).
 Creates a Virtual ISA (V-ISA) using processor-specific software translation.
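The (slow) interpretation approach can be sketched as a fetch-decode-execute loop in software. The toy three-instruction ISA below is invented purely for illustration; a real emulator would also model registers, flags, and the full guest instruction set:

```python
# Minimal sketch of ISA-level emulation by interpretation: each guest
# instruction is decoded and executed entirely in host software.
# The toy ISA (LOAD/ADD/STORE on named registers) is illustrative.

def interpret(program, memory):
    regs = {}
    for op, *args in program:
        if op == "LOAD":          # LOAD reg, addr  -> reg = memory[addr]
            reg, addr = args
            regs[reg] = memory[addr]
        elif op == "ADD":         # ADD dst, a, b   -> dst = a + b
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "STORE":       # STORE reg, addr -> memory[addr] = reg
            reg, addr = args
            memory[addr] = regs[reg]
    return memory

mem = {0: 5, 1: 7, 2: 0}
interpret([("LOAD", "r1", 0), ("LOAD", "r2", 1),
           ("ADD", "r3", "r1", "r2"), ("STORE", "r3", 2)], mem)
print(mem[2])  # 12
```

Dynamic binary translation improves on this by translating blocks of guest instructions into native host code once and reusing them, instead of re-decoding every instruction on every execution.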

2. Hardware Abstraction Level

 Virtualization happens directly on bare hardware, managed by a hypervisor (e.g., Xen, IBM
VM/370).
 Creates a virtual hardware environment for VMs to share CPU, memory, and I/O resources.
 Goal: Improve hardware utilization by enabling multiple users to share resources.

3. Operating System (OS) Level

 Virtualization occurs at the OS level, where containers (e.g., Docker, LXC) create isolated
execution environments.
 Efficient for cloud and server consolidation, as multiple isolated instances share the same OS
kernel.
 Common in virtual hosting and multi-tenant environments.

4. Library Support Level

 Applications interact with system APIs rather than directly with the OS.
 Virtualization is done via API hooks, enabling software compatibility across platforms.
 Example: WINE (runs Windows applications on UNIX), vCUDA (GPU acceleration in
VMs).

5. Application Level (Process-Level Virtualization)

 Virtualization occurs at the application level, abstracting the execution environment.


 Uses High-Level Language (HLL) VMs such as the JVM or the .NET CLR to run platform-independent
code.
 Other forms include application sandboxing (e.g., LANDesk) for better software portability and
security.


vCUDA: Virtualization of General-Purpose GPUs

Overview

 CUDA is a parallel computing platform that allows applications to utilize GPU acceleration.
 Running CUDA applications directly on hardware-level VMs is challenging due to hardware
constraints.
 vCUDA virtualizes CUDA libraries to allow execution on guest OS by redirecting API calls to
the host OS.

Key Components of vCUDA Architecture

1. vCUDA Library (Guest OS)

o Acts as a substitute for the standard CUDA library.


o Intercepts and redirects API calls to the vCUDA stub in the host OS.
o Creates and manages virtual GPUs (vGPUs).

2. Virtual GPU (vGPU)

o Provides a uniform view of the physical GPU to applications.


o Handles memory allocation: returns a local virtual address while mapping it to real
GPU memory.
o Stores and manages CUDA API calls for execution.

3. vCUDA Stub (Host OS)

o Interprets and processes API requests from the guest OS.


o Creates an execution context for CUDA API calls.
o Manages actual GPU resources and returns results to the guest OS.

How vCUDA Works

1. A CUDA application in the guest OS makes an API call.


2. The vCUDA library intercepts the call and forwards it to the vGPU.
3. The vGPU manages memory and passes the request to the vCUDA stub in the host OS.
4. The vCUDA stub translates the request, executes it on the real GPU, and sends back the result.
5. The result is returned to the CUDA application running in the guest OS.
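The five-step call path above can be sketched in Python. The class names (VCudaStub, VGpu, VCudaLibrary) and the fake return values are illustrative stand-ins, not the actual vCUDA implementation:

```python
# Sketch of the vCUDA call path: the guest-side library intercepts an
# API call and forwards it through the vGPU to a host-side stub, which
# runs it on the (here simulated) real GPU. All names are illustrative.

class VCudaStub:                       # host OS: runs calls on the real GPU
    def execute(self, call, args):
        if call == "cudaMalloc":
            size = args[0]
            return f"gpu_ptr_{size}"   # pretend device-side allocation
        raise NotImplementedError(call)

class VGpu:                            # guest-visible virtual GPU
    def __init__(self, stub):
        self.stub = stub
        self.mapping = {}              # virtual address -> real GPU handle
        self.next_va = 0
    def forward(self, call, args):
        result = self.stub.execute(call, args)
        if call == "cudaMalloc":       # hand a *virtual* address to the guest
            va = f"va_{self.next_va}"
            self.next_va += 1
            self.mapping[va] = result
            return va
        return result

class VCudaLibrary:                    # guest OS: substitute CUDA library
    def __init__(self, vgpu):
        self.vgpu = vgpu
    def cudaMalloc(self, size):        # intercepted API call
        return self.vgpu.forward("cudaMalloc", (size,))

lib = VCudaLibrary(VGpu(VCudaStub()))
print(lib.cudaMalloc(1024))  # prints "va_0": the guest sees only the
                             # virtual address; the vGPU keeps the mapping
```

The key design point is the level of indirection in the vGPU: the guest application works with local virtual addresses, while the mapping to real GPU memory lives in the virtualization layer.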

Xen Hypervisor Architecture

Xen is an open-source micro-kernel hypervisor developed at Cambridge University. It separates


policy from mechanism, meaning:

 Xen itself handles core virtualization mechanisms.


 Domain 0 (Dom0) handles policies, including VM management, device control, and resource
allocation.
 Xen itself is not a standard OS; it provides only a thin virtual environment layer, on top of which
guest OSes run.

Core Components of Xen Architecture

1. Hypervisor
o Provides a virtual environment between hardware and OS.
o Does not include device drivers (relies on Domain 0).

2. Domain 0 (Dom0) – Privileged Guest OS


o The first OS loaded when Xen boots.
o Directly manages hardware and allocates resources for guest VMs.
o If compromised, the entire system is at risk.

3. Domain U (DomU) – Unprivileged Guest OS


o Normal VMs running under Xen.
o Cannot access hardware directly (rely on Dom0 for resource allocation).
4. Xen Virtual Machine Lifecycle
 VMs in Xen behave like files: they can be created, copied, saved, migrated, and rolled back.
 Unlike traditional systems, Xen allows execution states to branch out in a tree-like structure,
enabling rollback to previous states.
 This flexibility improves fault tolerance but also introduces security challenges (e.g., if an
attacker gains access to a VM snapshot).


3.3.4 I/O Virtualization

1. What is I/O Virtualization?

I/O virtualization is the process of managing I/O requests from virtual machines (VMs) and routing them
to shared physical hardware. Since multiple VMs run on a single physical machine, they need a way to
efficiently access I/O devices like network cards, storage devices, and GPUs.
At the time of writing, there are three main approaches to implementing I/O virtualization:

1. Full Device Emulation


2. Para-Virtualization
3. Direct I/O Virtualization

2. Methods of I/O Virtualization

2.1 Full Device Emulation

 Concept: Emulates real-world devices in software inside the Virtual Machine Monitor (VMM)
(hypervisor).
 How it works:
o The guest OS interacts with virtual devices, which behave like real hardware.
o The VMM traps and processes I/O requests from the guest OS, then forwards them to
real hardware.
 Advantages:
Compatible with unmodified guest OSes.
 Disadvantages:
Slow performance due to high software overhead.
Example: Bochs and QEMU use full device emulation for virtualizing devices.

2.2 Para-Virtualization (Split-Driver Model)

 Concept: Uses modified drivers to improve performance by splitting I/O handling into two parts:
o Frontend Driver (inside guest OS) – Sends I/O requests.
o Backend Driver (inside host OS) – Processes I/O requests and manages the real device.
 How it works:
o The frontend and backend drivers communicate via a shared memory buffer.
o The backend driver multiplexes I/O operations across multiple VMs.
 Advantages:
Faster than full emulation (reduces software overhead).
 Disadvantages:
Requires modified guest OS drivers (not always supported).
Higher CPU usage compared to direct I/O.
Example: Xen hypervisor uses para-virtualization for I/O operations.
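The split-driver idea can be sketched with a shared queue standing in for the shared-memory buffer between frontend and backend. All names (Frontend, Backend, the "device log") are illustrative, not Xen's actual driver interface:

```python
# Toy split-driver model: a frontend driver in each guest pushes I/O
# requests into a shared ring; one backend driver in the host/driver
# domain multiplexes them onto the (simulated) physical device.

from collections import deque

shared_ring = deque()            # stand-in for the shared memory buffer

class Frontend:                  # lives inside a guest OS
    def __init__(self, vm_id):
        self.vm_id = vm_id
    def write(self, data):       # guest I/O request enters the ring
        shared_ring.append((self.vm_id, data))

class Backend:                   # lives in the host / driver domain
    def __init__(self):
        self.device_log = []     # stand-in for the real device
    def process_all(self):
        # Multiplex requests from every VM onto the single device.
        while shared_ring:
            vm_id, data = shared_ring.popleft()
            self.device_log.append(f"vm{vm_id}:{data}")

Frontend(1).write("blockA")      # two guests issue requests...
Frontend(2).write("blockB")
backend = Backend()
backend.process_all()            # ...the backend serializes them
print(backend.device_log)        # ['vm1:blockA', 'vm2:blockB']
```

The performance gain over full emulation comes from batching requests through this shared buffer instead of trapping every device register access; the cost is that the guest must carry the modified frontend driver.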

2.3 Direct I/O Virtualization (Hardware-Assisted I/O Virtualization)

 Concept: VMs access physical devices directly without going through the hypervisor.
 How it works:
o Uses hardware-assisted virtualization (e.g., Intel VT-d) to allow VMs to access
devices safely.
o I/O devices are mapped directly to VMs, bypassing software layers.
 Advantages:
Near-native performance (low overhead).
Reduces CPU overhead from emulation.
 Disadvantages:
Hardware-dependent (requires VT-d or similar support).
Complex resource management (device state must be carefully handled).
Example: Used in high-performance networking and GPUs (e.g., SR-IOV for network cards).

The following are six properties of virtual clusters:

1. Support for Both Physical and Virtual Nodes


 Virtual clusters can consist of both physical machines and virtual machines (VMs).
 Multiple VMs with different operating systems can run on the same physical machine.

2. Guest OS Independence
 Each VM operates with a guest OS that is often different from the host OS.
 The guest OS manages its resources independently within the physical machine.

3. Resource Consolidation and Optimization


 VMs allow multiple functionalities to run on the same physical server.
 This enhances server utilization and application flexibility while reducing hardware costs.

4. Distributed Parallelism and Fault Tolerance


 VMs can be replicated across multiple servers to promote distributed computing, disaster
recovery, and fault tolerance.
 This ensures system reliability and availability.

5. Dynamic Scalability
 The number of nodes (VMs) in a virtual cluster can increase or decrease dynamically.
 This is similar to how a peer-to-peer (P2P) network adapts to changing conditions.

6. Failure Resilience
 If a physical node fails, some VMs may be affected, but the entire system remains operational.
 The failure of VMs does not impact the host system, ensuring continued functionality.

Fast Deployment of Virtualization

**What It Is:**
Fast deployment in virtualization means quickly setting up and managing virtual machines (VMs) on
physical servers in a cluster. This helps users access the computing resources they need without delays.

Deployment involves constructing and distributing software stacks (OS, libraries, applications)
efficiently to physical nodes in clusters.

**Key Points:**

1. **Quick Software Installation:**


- The system should rapidly install operating systems, libraries, and applications on physical servers,
making resources available almost immediately.

2. **Easy Switching Between Virtual Environments:**


- Users should be able to switch between different virtual clusters quickly, allowing them to work on
various tasks without waiting.

3. **Efficient Resource Management:**


- When a user finishes their work, the system should quickly shut down or suspend their virtual cluster
to free up resources for others.

4. **Automation:**
- Automating the setup process can speed things up, making it easier to install software and allocate
resources without manual intervention.

5. **Scalability:**
- The system should easily add or remove virtual clusters based on user demand, ensuring that resources
are available when needed.

6. **Minimizing Performance Impact:**


- Fast deployment should avoid slowing down the system, even when moving VMs around.

**Benefits:**
- **Better User Experience:** Users get the resources they need quickly.
- **More Efficient Resource Use:** Resources are used effectively, reducing waste.
- **Energy Savings:** Quickly shutting down unused clusters helps save energy.

In summary, fast deployment of virtualization ensures that virtual environments are set up and managed
quickly and efficiently, leading to better resource use and a smoother experience for users.
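Template-based cloning, one common way to achieve fast deployment, can be sketched as follows. The template contents and all names are made up for illustration; in practice the "template" would be a prebuilt VM or container image rather than a dictionary:

```python
# Sketch of template-based fast deployment: instead of installing the
# full software stack on every node, each node clones a prebuilt
# template and applies a small per-node delta. Names are illustrative.

import copy

TEMPLATE = {"os": "linux", "libs": ["mpi", "blas"], "apps": ["solver"]}

def deploy_cluster(n_nodes, overrides=None):
    cluster = []
    for i in range(n_nodes):
        node = copy.deepcopy(TEMPLATE)   # cheap clone vs. full install
        node["hostname"] = f"vnode{i}"   # per-node customization
        node.update(overrides or {})
        cluster.append(node)
    return cluster

cluster = deploy_cluster(3, {"apps": ["solver", "monitor"]})
print([n["hostname"] for n in cluster])  # ['vnode0', 'vnode1', 'vnode2']
```

Because the expensive work (building the template) is done once, adding or removing virtual clusters on demand reduces to cloning and deleting, which is what makes rapid scale-up and quick teardown practical.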

Live VM Migration Process

Live VM migration involves transferring a running virtual machine (VM) from one physical host to
another with minimal downtime. This ensures continuity of service and optimal resource utilization.

Steps in Live Migration

1. Pre-Migration (Stage 0)

 The VM is actively running on Host A.


 An alternate physical host (Host B) is preselected for migration.
 The block devices are mirrored, and necessary resources are maintained.

2. Reservation (Stage 1)

 A container is initialized on Host B to receive the migrating VM.

3. Iterative Pre-Copy (Stage 2)

 The memory of the VM is copied iteratively from Host A to Host B.


 Shadow paging is enabled.
 Dirty pages (modified memory) are recopied in successive rounds to reduce downtime.
4. Stop-and-Copy (Stage 3)

 The VM is suspended on Host A for the final memory copy.


 A gratuitous ARP (Address Resolution Protocol) is generated to redirect network traffic to Host
B.
 The remaining VM state (CPU, disk, network configurations) is synchronized to Host B.

5. Commitment (Stage 4)

 The VM state on Host A is released.


 The original VM instance is removed from Host A.

6. Activation (Stage 5)

 The VM starts running on Host B.


 The VM connects to local devices.
 It resumes normal operations, ensuring seamless service continuity.
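The pre-copy loop of Stages 2 and 3 can be sketched as follows. The dirty-rate model (a fixed fraction of copied pages is rewritten each round) is a simplifying assumption for illustration, not measured behavior:

```python
# Sketch of iterative pre-copy (Stage 2) ending in stop-and-copy
# (Stage 3): copy all memory, then repeatedly recopy the pages dirtied
# during the previous round, until the dirty set is small enough to
# copy with the VM suspended. The dirty_rate model is an assumption.

def live_migrate(total_pages, dirty_rate=0.2, max_rounds=10, threshold=8):
    precopied = 0
    dirty = total_pages                  # round 1 copies all of memory
    for _ in range(max_rounds):
        if dirty <= threshold:           # Stage 3: suspend, copy the rest
            break
        precopied += dirty               # Stage 2: copy while the VM runs
        dirty = int(dirty * dirty_rate)  # pages rewritten meanwhile
    return precopied, dirty              # (live copies, downtime copies)

live, downtime = live_migrate(1000)
print(live, downtime)  # 1240 8: 1240 pages copied while running,
                       # only 8 during the brief suspension
```

The loop makes the downtime trade-off explicit: each extra round shrinks the final stop-and-copy transfer (shorter downtime) at the cost of recopying dirtied pages (longer total migration time), which is why real systems bound the number of rounds.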

Types of Clouds in Cloud Computing

Cloud computing is classified into three main types: Public Cloud, Private Cloud, and Hybrid Cloud.
Each type has unique characteristics and serves different purposes based on business needs.

1. Public Cloud

Definition: A public cloud is a cloud infrastructure that is owned and managed by third-party
service providers and is available to the general public over the internet.

Characteristics:
 Hosted and maintained by cloud providers.
 Accessible by multiple users (multi-tenancy).
 Pay-per-use pricing model (users pay only for the resources they consume).
 Scalability and flexibility based on demand.

Examples:
 Amazon Web Services (AWS)
 Google Cloud Platform (GCP)
 Microsoft Azure
 IBM Cloud

Advantages:
Cost-effective (no need for infrastructure setup).
Easy to scale up or down.
Highly available and reliable.

Disadvantages:
Security risks (data stored on shared infrastructure).
Limited control over resources and configurations.

2. Private Cloud

Definition: A private cloud is a dedicated cloud infrastructure operated exclusively for a single
organization. It can be hosted on-premises or by a third-party provider but remains private to one entity.

Characteristics:

 Owned and controlled by a single organization.


 Offers high security and customization.
 Can be hosted on-premises or by a service provider.

Examples:
 OpenStack Private Cloud
 VMware vCloud
 IBM Private Cloud
Advantages:
High security and control over data.
Better customization based on business needs.
Efficient for regulatory compliance (e.g., healthcare, finance).

Disadvantages:
Expensive to set up and maintain.
Limited scalability compared to public clouds.

3. Hybrid Cloud

Definition: A hybrid cloud is a combination of both public and private clouds that allows data and
applications to be shared between them. This model is used to balance flexibility, cost, and security.

Characteristics:
 Mix of public and private cloud infrastructure.
 Provides flexibility to store sensitive data in a private cloud while using public cloud for less-
sensitive workloads.
 Supports bursting (automatically shifting workloads to the public cloud when private cloud
resources are maxed out).

Examples:
 IBM Research Compute Cloud (RC2)
 Microsoft Azure Hybrid Cloud
 Google Anthos

Advantages:
Balances cost and performance by using both public and private resources.
Scalable and flexible to handle varying workloads.
Improved disaster recovery and business continuity.

Disadvantages:
Complex to manage due to integration between multiple environments.
Security challenges in ensuring seamless communication between private and public cloud.

Comparison Table

| Feature     | Public Cloud                   | Private Cloud                      | Hybrid Cloud                |
|-------------|--------------------------------|------------------------------------|-----------------------------|
| Ownership   | Third-party                    | Single organization                | Combination of both         |
| Security    | Medium                         | High                               | Medium to High              |
| Cost        | Low (pay-per-use)              | High (infrastructure cost)         | Medium                      |
| Scalability | High                           | Limited                            | High                        |
| Control     | Low                            | High                               | Medium                      |
| Use Cases   | Startups, general applications | Government, financial institutions | Enterprises, cloud bursting |
Conclusion
 Public Clouds are best for businesses that need scalability and lower costs.
 Private Clouds are suitable for organizations needing security and full control over resources.
 Hybrid Clouds offer the best of both worlds, allowing flexibility in managing workloads.

4.1.2.1 Cloud Design Objectives

The following list highlights six design objectives for cloud computing:

• Shifting computing from desktops to data centers :


Computer processing, storage, and software delivery are shifted away from desktops and local
servers and toward data centers over the Internet.

• Service provisioning and cloud economics :


Providers supply cloud services by signing SLAs with consumers and end users. The services
must be efficient in terms of computing, storage, and power consumption. Pricing is based on a
pay-as-you-go policy.

• Scalability in performance :
The cloud platforms and software and infrastructure services must be able to scale in performance
as the number of users increases.
• Data privacy protection :
Can you trust data centers to handle your private data and records? This concern must be
addressed to make clouds successful as trusted services.

• High quality of cloud services :


The QoS of cloud computing must be standardized to make clouds interoperable among multiple
providers.

• New standards and interfaces :


This refers to solving the data lock-in problem associated with data centers or cloud providers.
Universally accepted APIs and access protocols are needed to provide high portability and
flexibility of virtualized applications.
