CC QB1 Solved

Scalable computing in the cloud allows systems to handle increasing workloads through the addition of resources. This document outlines the evolution of computing technology across five generations, detailing advancements from mainframes to high-performance and high-throughput computing systems, and explains virtualization and its implementation levels: instruction set architecture, hardware, operating system, library support, and application.


1. Define scalable computing and, with a neat diagram, explain the concept of Platform Evolution.
Scalable computing in the cloud refers to the ability of a system to adapt to and handle increasing workloads or demands without compromising performance, achieved through techniques such as adding more resources or nodes to the existing infrastructure.
Computer technology has gone through five generations of development.

Generation | Technology | Details
1950 - 1970 | Handful of mainframes | IBM 360, CDC 6400; used in large business and government organizations.
1960 - 1980 | Minicomputers | Lower cost, such as the DEC PDP-11 and VAX series; used in small businesses and on college campuses.
1970 - 1990 | Personal computers | Built with VLSI microprocessors.
1980 - 2000 | Portable computers and pervasive devices | Appeared in wired and wireless applications.
Since 1990 | HTC and HPC systems hidden in clusters, grids, or Internet clouds | Employed by consumers and in high-end web-scale computing and information services.

Table 1.1: Five generations of computer technologies

The Evolution of HPC and HTC Systems


On the HPC side,

• Supercomputers (massively parallel processors or MPPs) are gradually replaced


by clusters of cooperative computers to share computing resources.
• The cluster is often a collection of homogeneous compute nodes that are physically
connected in close range to one another.

On the HTC side,

• Peer-to-peer (P2P) networks are formed for distributed file sharing and content
delivery applications.
• A P2P system is built over many client machines which are globally distributed in
nature.
• P2P, cloud computing, and web service platforms are more focused on HTC
applications than on HPC applications.
• Clustering and P2P technologies lead to the development of computational grids
or data grids.

High-Performance computing

• The speed of HPC systems has increased from Gflops in the early 1990s to Pflops by 2010.
• Reason for HPC: Demands from scientific, engineering and manufacturing
communities.
• Example: the Top 500 most powerful computer systems in the world are ranked by floating-point speed in Linpack benchmark results.
• However, the number of supercomputer users is limited to less than 10% of all
computer users. Today, most computer users are using desktop computers or large
servers when they conduct Internet searches and market-driven computing tasks.

High-Throughput computing

• The development of market-oriented high-end computing systems is undergoing


a strategic change from an HPC paradigm to an HTC paradigm.
• HTC paradigm pays more attention to high-flux computing.
• The main application for high-flux computing is in Internet searches and web
services by millions or more users simultaneously.
• The performance goal thus shifts to measuring high throughput, or the number of tasks completed per unit of time (a minimal sketch of this metric appears after this list).
• HTC technology needs not only to improve batch processing speed, but also to address the acute problems of:
(i) cost
(ii) energy savings
(iii) security
(iv) reliability at many data and enterprise computing centers.
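
As a minimal sketch (not part of the original answer) of the throughput metric described above, the following Python snippet measures tasks completed per unit of time; the workload of trivial tasks standing in for user requests is a hypothetical assumption.

```python
import time

def measure_throughput(tasks):
    """Run a batch of tasks and report throughput as tasks completed per second."""
    start = time.time()
    completed = 0
    for task in tasks:
        task()                      # execute one task (e.g., serve one search query)
        completed += 1
    elapsed = time.time() - start
    return completed / elapsed if elapsed > 0 else float("inf")

# Hypothetical workload: 10,000 trivial tasks standing in for web requests.
workload = [lambda: sum(range(100)) for _ in range(10_000)]
print(f"Throughput: {measure_throughput(workload):.0f} tasks/second")
```

An HTC system aims to maximize this number for millions of concurrent users, rather than the floating-point speed of a single job.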

2. Differentiate between centralized, distributed, and parallel computing. What are the key characteristics of a distributed system?
Feature | Centralized | Parallel | Distributed
Structure | All computer resources are centralized in one physical system. | All processors are either tightly coupled with centralized shared memory or loosely coupled with distributed memory. | Multiple autonomous computers, each having its own private memory, communicating through a computer network.
Communication | No inter-system communication. | Inter-processor communication is accomplished through shared memory or via message passing. | Information exchange is accomplished through message passing.
Resource allocation | Centralized | Shared or distributed | Fully distributed
Type of computer/program | A computer program that runs in a single physical system is known as a centralized program. | A computer system capable of parallel computing is commonly known as a parallel computer. | A computer program that runs in a distributed system is known as a distributed program.
Programming | Writing standalone or centralized programs is referred to as centralized programming. | Writing parallel programs is referred to as parallel programming. | Writing distributed programs is referred to as distributed programming.
Examples | Data centers, supercomputers | Multi-core processors, GPUs | Hadoop, cloud systems

Key characteristics of Distributed systems


• Involves multiple autonomous computers, each with its own private memory,
communicating via a network.
• Information exchange occurs through message passing (a minimal sketch appears after this list).
• Programs running in a distributed system are called distributed programs and writing them
is known as distributed programming.
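
As referenced above, here is a minimal, hypothetical sketch of message passing between two autonomous nodes using Python sockets; the address, port, and message contents are assumptions for illustration only, not part of the original answer.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007          # hypothetical address for this illustration

def server_node():
    # One autonomous node: keeps its own private state and receives messages over the network.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            print("server received:", request.decode())
            conn.sendall(b"ack")          # the reply is also just a message

def client_node():
    # Another autonomous node: cooperates only by exchanging messages, never by shared memory.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello from a peer node")
        print("client received:", cli.recv(1024).decode())

t = threading.Thread(target=server_node)
t.start()
time.sleep(0.2)                           # give the server a moment to start listening
client_node()
t.join()
```

The two functions could just as well run on separate machines; only the network messages connect them, which is the defining characteristic of a distributed system.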

3. Explain the concept of Grid Families, Peer-to-Peer Network Families, and Overlay Networks with diagrams.

Grid Families
• Grid technology demands new distributed computing models, software/middleware
support, network protocols, and hardware infrastructures.
• National grid projects were followed by industrial grid platform development by IBM,
Microsoft, Sun, HP, Dell, Cisco, EMC, Platform Computing, and others.
• New grid service providers (GSPs) and new grid applications have emerged rapidly,
like the growth of Internet and web services in the past two decades.

Peer-to-Peer Network Families


The P2P architecture offers a distributed model of networked systems. First, a P2P
network is client-oriented instead of server-oriented.

P2P Systems
• In a P2P system, every node acts as both a client and a server, providing part of the
system resources.
• Peer machines are simply client computers connected to the Internet.
• All client machines act autonomously to join or leave the system freely.
• This implies that no master-slave relationship exists among the peers. No central
coordination or central database is needed.
• Figure 1.17 shows the architecture of a P2P network at two abstraction levels.
• Initially, the peers are totally unrelated. Each peer machine joins or leaves the P2P
network voluntarily. Only the participating peers form the physical network at any
time.
• Unlike the cluster or grid, a P2P network does not use a dedicated interconnection
network. The physical network is simply an ad hoc network formed at various
Internet domains randomly using the TCP/IP and NAI protocols. Thus, the physical
network varies in size and topology dynamically due to the free membership in the
P2P network.

Overlay Networks
• Data items or files are distributed in the participating peers. Based on communication
or file-sharing needs, the peer IDs form an overlay network at the logical level. This
overlay is a virtual network formed by mapping each physical machine with its ID,
logically, through a virtual mapping as shown in Figure 1.17.
• When a new peer joins the system, its peer ID is added as a node in the overlay
network. When an existing peer leaves the system, its peer ID is removed from the
overlay network automatically.
• Therefore, it is the P2P overlay network that characterizes the logical connectivity
among the peers.
• There are two types of overlay networks: unstructured and structured.
• An unstructured overlay network is characterized by a random graph. There is no fixed route to send messages or files among the nodes. Often, flooding is applied to send a query to all nodes in an unstructured overlay, resulting in heavy network traffic and nondeterministic search results (see the sketch after this list).
• Structured overlay networks follow certain connectivity topology and rules for
inserting and removing nodes (peer IDs) from the overlay graph. Routing
mechanisms are developed to take advantage of the structured overlays.
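
As referenced above, the following is a minimal, hypothetical sketch of flooding a query through an unstructured overlay modeled as a random-looking graph; the peer IDs, topology, stored files, and TTL value are assumptions for illustration only.

```python
from collections import deque

# Hypothetical unstructured overlay: peer ID -> neighbouring peer IDs.
overlay = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C"],
    "E": ["C"],
}
files = {"E": {"song.mp3"}, "D": {"paper.pdf"}}   # which peer stores which file

def flood_query(start, wanted, ttl=3):
    """Forward the query to every neighbour until the TTL expires; return peers holding the file."""
    hits, visited = [], {start}
    queue = deque([(start, ttl)])
    while queue:
        peer, hops_left = queue.popleft()
        if wanted in files.get(peer, set()):
            hits.append(peer)
        if hops_left == 0:
            continue
        for neighbour in overlay[peer]:            # flooding: send to all neighbours
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, hops_left - 1))
    return hits

print(flood_query("A", "song.mp3"))    # e.g. ['E']; a real P2P search is nondeterministic
```

A structured overlay such as a distributed hash table would instead route the query along a fixed topology, avoiding the traffic cost of flooding.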
4. With a diagram, explain the concept of layered architecture for web services and the grids.
• Entity interfaces in distributed systems correspond to WSDL, Java methods, and
CORBA IDL specifications.
• These interfaces are linked with high-level communication systems like SOAP, RMI,
and IIOP.
• These communication systems support message patterns, fault recovery, and
specialized routing.
• Middleware like WebSphere MQ or JMS provides rich functionality and virtualized
routing.
• Fault tolerance in Web Services Reliable Messaging (WSRM) mimics OSI layer fault
tolerance.
• Security often uses or reimplements concepts like IPsec or secure sockets.
• Higher-level services support entity communication, such as registries, metadata, and management.
• Examples of discovery services include JNDI, CORBA Trading Service, UDDI,
LDAP, and ebXML.
• Management services include state and lifetime support, such as CORBA Life Cycle
and Persistent states.
• Distributed systems offer higher performance and better software function separation
and maintenance.
• CORBA and Java were popular in early distributed systems; now, SOAP, XML, and
REST are more common.
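
The bullets above mention SOAP and REST as common entity-communication styles. Below is a minimal, hypothetical sketch of a REST-style exchange between two entities using only the Python standard library; the resource name, port, and response fields are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical entity exposing a REST-style interface over HTTP.
class EntityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"entity": "registry", "status": "up"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # keep the example output quiet
        pass

server = HTTPServer(("127.0.0.1", 8001), EntityHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client entity communicates with it purely through messages on the entity interface.
with urllib.request.urlopen("http://127.0.0.1:8001/registry") as resp:
    print(resp.read().decode())

server.shutdown()
```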

Layered Architecture for Web Services and Grids

• These architectures build on the OSI layers for networking abstractions.


• Higher-level environments are built on top of this base.
• These environments focus on entity interfaces and inter-entity communication.
• They rebuild the top four OSI layers at the entity level, not the bit level.
5. Define Virtualization and explain the levels of virtualization implementation.
Virtualization is a computer architecture technology by which multiple virtual machines
(VMs) are multiplexed in the same hardware machine.

• A traditional computer runs with a host operating system specially tailored for its
hardware architecture, as shown in Figure 3.1(a).
• After virtualization, different user applications managed by their own operating
systems (guest OS) can run on the same hardware, independent of the host OS.
• This is often done by adding additional software, called a virtualization layer as
shown in Figure 3.1(b).
• This virtualization layer is known as hypervisor or virtual machine monitor
(VMM). The VMs are shown in the upper boxes, where applications run with
their own guest OS over the virtualized CPU, memory, and I/O resources.

• The main function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used exclusively by the VMs.
• The virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system.
• Common virtualization layers include the instruction set architecture (ISA) level,
hardware level, operating system level, library support level, and application
level.
1. Instruction Set Architecture Level
• At the ISA level, virtualization is performed by emulating a given ISA by the
ISA of the host machine. For example, MIPS binary code can run on an x86-
based host machine with the help of ISA emulation. With this approach, it is
possible to run a large amount of legacy binary code written for various processors on any given new hardware host machine.
• Instruction set emulation leads to virtual ISAs created on any hardware machine. The basic emulation method is through code interpretation. An interpreter program interprets the source instructions to target instructions one by one (a minimal sketch appears at the end of this subsection).
• One source instruction may require tens or hundreds of native target
instructions to perform its function. Obviously, this process is relatively slow.
For better performance, dynamic binary translation is desired.
• Instruction set emulation requires binary translation and optimization. A
virtual instruction set architecture (V-ISA) thus requires adding a processor-
specific software translation layer to the compiler.
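As referenced in the interpretation bullet above, here is a minimal, hypothetical sketch of instruction-by-instruction interpretation in Python. The toy source ISA (LI, ADD) and the register names are invented for illustration; a real emulator would map, for example, MIPS instructions onto native x86 operations and would use dynamic binary translation for speed.

```python
# Toy "source ISA": each source instruction is interpreted into host-level actions one by one.
registers = {"r0": 0, "r1": 0}

def interpret(instruction):
    """Interpret one source instruction; a single source op may need many native ops."""
    op, *operands = instruction.split()
    if op == "LI":             # load immediate: LI r0 5
        reg, value = operands
        registers[reg] = int(value)
    elif op == "ADD":          # ADD r0 r1  ->  r0 = r0 + r1
        dst, src = operands
        registers[dst] = registers[dst] + registers[src]
    else:
        raise ValueError(f"unsupported source instruction: {op}")

source_program = ["LI r0 5", "LI r1 7", "ADD r0 r1"]
for ins in source_program:     # one-by-one interpretation, hence the performance cost
    interpret(ins)
print(registers["r0"])         # 12
```
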
2. Hardware Abstraction Level
• Hardware-level virtualization is performed right on top of the bare hardware.
On the one hand, this approach generates a virtual hardware environment for
a VM.
• On the other hand, the process manages the underlying hardware through
virtualization.
• The idea is to virtualize a computer’s resources, such as its processors,
memory, and I/O devices. The intention is to upgrade the hardware utilization
rate by multiple users concurrently. The idea was implemented in the IBM
VM/370 in the 1960s.
3. Operating System Level
• This refers to an abstraction layer between traditional OS and user
applications.
• OS-level virtualization creates isolated containers on a single physical server, with the OS instances utilizing the hardware and software in data centers.
• The containers behave like real servers. OS-level virtualization is commonly
used in creating virtual hosting environments to allocate hardware resources
among many mutually distrusting users.
• It is also used, to a lesser extent, in consolidating server hardware by moving
services on separate hosts into containers or VMs on one server.
4. Library Support Level
• Most applications use APIs exported by user-level libraries rather than using
lengthy system calls by the OS. Since most systems provide well-documented
APIs, such an interface becomes another candidate for virtualization.
• Virtualization with library interfaces is possible by controlling the
communication link between applications and the rest of a system through
API hooks.
• The software tool WINE has implemented this approach to support Windows
applications on top of UNIX hosts.
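The following is a minimal, hypothetical sketch of the API-hook idea: the application's call to a user-level library function is intercepted by a wrapper that can translate, log, or redirect it, loosely analogous to how WINE maps Windows API calls onto UNIX facilities. The hooked function and file name here are chosen only for illustration.

```python
import builtins

# The original library API the application believes it is calling.
real_open = builtins.open

def hooked_open(path, mode="r", *args, **kwargs):
    """API hook: intercept the call, inspect it, then forward to the real implementation."""
    print(f"[hook] application requested open({path!r}, {mode!r})")
    return real_open(path, mode, *args, **kwargs)

builtins.open = hooked_open      # install the hook on the API boundary

# Unmodified "application" code now runs against the virtualized library interface.
with open("example.txt", "w") as f:
    f.write("hello")

builtins.open = real_open        # remove the hook
```
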
5. User-Application Level
• Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an application often runs as a process. Therefore, application-level virtualization is also known as process-level virtualization.
• The most popular approach is to deploy high-level language (HLL) VMs. In
this scenario, the virtualization layer sits as an application program on top of
the operating system, and the layer exports an abstraction of a VM that can
run programs written and compiled to a particular abstract machine
definition.
• Any program written in the HLL and compiled for this VM will be able to
run on it.
• The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM (a toy sketch of the idea appears after this list). Other forms of application-level virtualization are known as application isolation, application sandboxing, or application streaming.
• The process involves wrapping the application in a layer that is isolated from
the host OS and other applications. The result is an application that is much
easier to distribute and remove from user workstations.
• An example is the LANDesk application virtualization platform which
deploys software applications as self-contained, executable files in an
isolated environment without requiring installation, system modifications, or
elevated security privileges.
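
As referenced in the JVM/.NET CLR bullet above, here is a toy, hypothetical sketch of a high-level-language VM: a tiny stack-based abstract machine whose "bytecode" runs unchanged wherever the VM itself runs. The instruction set and sample program are invented for illustration and are not any real VM's format.

```python
# Hypothetical abstract machine: a tiny stack-based VM, a toy analogue of the JVM/CLR idea.
def run(bytecode):
    """Execute a program compiled for this abstract machine definition."""
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())
        else:
            raise ValueError(f"unknown instruction: {op}")

# A program "compiled" for the abstract machine runs on any host that provides this VM.
program = [("PUSH", 2), ("PUSH", 40), ("ADD",), ("PRINT",)]
run(program)    # prints 42
```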
