CC QB1 Solved
Define scalable computing and, with a neat diagram, explain the concept of
Platform Evolution.
Scalable computing in the cloud refers to the ability of a system to adapt to and handle
increasing workloads or demands without compromising performance. This is achieved
through techniques such as adding more resources or nodes to the existing infrastructure,
as sketched below.
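To make the scale-out idea concrete, here is a minimal Python sketch in which worker threads stand in for nodes added to the infrastructure; the handle_request function and the workload figures are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(req_id):
    """Hypothetical unit of work standing in for one user request."""
    time.sleep(0.01)                  # simulate I/O-bound processing
    return req_id

def serve(workload, num_workers):
    """Scale out by raising num_workers ('adding nodes') as workload grows."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        list(pool.map(handle_request, range(workload)))
    return time.time() - start

# Same workload, more workers: completion time drops as resources are added.
for workers in (1, 4, 16):
    print(f"{workers:2d} workers -> {serve(200, workers):.2f} s")
```

Handling the same 200 requests with more workers finishes proportionally faster, which is the essence of scaling by adding resources rather than by speeding up a single node.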
Computer technology has gone through five generations of development.
On the HTC (high-throughput computing) side:
• Peer-to-peer (P2P) networks are formed for distributed file sharing and content
delivery applications.
• A P2P system is built over many client machines which are globally distributed in
nature.
• P2P, cloud computing, and web service platforms are more focused on HTC
applications than on HPC applications.
• Clustering and P2P technologies lead to the development of computational grids
or data grids.
High-Performance Computing
• The speed of HPC systems increased from Gflops in the early 1990s to Pflops by 2010.
• Reason for HPC: demands from the scientific, engineering, and manufacturing
communities.
• Example: the Top 500 most powerful computer systems in the world are ranked by
floating-point speed in Linpack benchmark results.
• However, supercomputer users number fewer than 10% of all computer users. Today,
most users rely on desktop computers or large servers for Internet searches and
market-driven computing tasks.
High-Throughput Computing
P2P Systems
• In a P2P system, every node acts as both a client and a server, providing part of the
system resources.
• Peer machines are simply client computers connected to the Internet.
• All client machines act autonomously to join or leave the system freely.
• This implies that no master-slave relationship exists among the peers. No central
coordination or central database is needed.
• Figure 1.17 shows the architecture of a P2P network at two abstraction levels.
• Initially, the peers are totally unrelated. Each peer machine joins or leaves the P2P
network voluntarily. Only the participating peers form the physical network at any
time.
• Unlike the cluster or grid, a P2P network does not use a dedicated interconnection
network. The physical network is simply an ad hoc network formed at various
Internet domains randomly using the TCP/IP and NAI protocols. Thus, the physical
network varies in size and topology dynamically due to the free membership in the
P2P network.
Overlay Networks
• Data items or files are distributed among the participating peers. Based on communication
or file-sharing needs, the peer IDs form an overlay network at the logical level. This
overlay is a virtual network formed by logically mapping each physical machine to its
peer ID, as shown in Figure 1.17.
• When a new peer joins the system, its peer ID is added as a node in the overlay
network. When an existing peer leaves the system, its peer ID is removed from the
overlay network automatically.
• Therefore, it is the P2P overlay network that characterizes the logical connectivity
among the peers.
• There are two types of overlay networks: unstructured and structured.
• An unstructured overlay network is characterized by a random graph. There is no
fixed route to send messages or files among the nodes. Often, flooding is applied to
send a query to all nodes in an unstructured overlay, resulting in heavy network
traffic and nondeterministic search results (see the flooding sketch after this list).
• Structured overlay networks follow certain connectivity topology and rules for
inserting and removing nodes (peer IDs) from the overlay graph. Routing
mechanisms are developed to take advantage of the structured overlays.
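Below is a minimal Python sketch of query flooding over a small unstructured overlay; the peer count, link density, file placement, and TTL are all invented for illustration.

```python
import random

# A minimal sketch of query flooding in an unstructured P2P overlay.
# Peer count, link density, file placement, and TTL are hypothetical.
random.seed(1)
peers = {p: set() for p in range(10)}
for p in peers:                        # build a random overlay graph
    for q in random.sample([x for x in peers if x != p], 2):
        peers[p].add(q)
        peers[q].add(p)

files = {7: "song.mp3"}                # peer 7 holds the file being sought

def flood(start, name, ttl=3):
    """Forward the query to all neighbors until the TTL expires."""
    frontier, seen, hits = {start}, {start}, []
    while frontier and ttl > 0:
        next_frontier = set()
        for p in frontier:
            if files.get(p) == name:   # check every peer the query reaches
                hits.append(p)
            next_frontier |= peers[p] - seen
        seen |= next_frontier
        frontier = next_frontier
        ttl -= 1
    return hits                        # result depends on topology and TTL

print(flood(0, "song.mp3"))            # may or may not find peer 7
```

Because the query's reach depends on the TTL and the random topology, the same search may succeed or fail on different runs, which is exactly the nondeterminism and traffic cost noted above; structured overlays avoid this by routing over a fixed topology.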
4. With a diagram, explain the concept of layered architecture for web services
and the grids.
• Entity interfaces in distributed systems correspond to WSDL, Java methods, and
CORBA IDL specifications.
• These interfaces are linked with high-level communication systems like SOAP, RMI,
and IIOP.
• These communication systems support message patterns, fault recovery, and
specialized routing.
• Middleware like WebSphere MQ or JMS provides rich functionality and virtualized
routing.
• Fault tolerance in Web Services Reliable Messaging (WSRM) mimics OSI layer fault
tolerance.
• Security often uses or reimplements concepts like IPsec or secure sockets.
• Higher-level services support entity communication, such as registries, metadata, and
management.
• Examples of discovery services include JNDI, CORBA Trading Service, UDDI,
LDAP, and ebXML.
• Management services include state and lifetime support, such as CORBA Life Cycle
and Persistent states.
• Distributed systems offer higher performance and better software function separation
and maintenance.
• CORBA and Java were popular in early distributed systems; now, SOAP, XML, and
REST are more common.
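As a concrete illustration of the REST style these layers converge on, here is a minimal, self-contained Python sketch: a toy HTTP service exposing a JSON resource and a client retrieving it. The /orders resource, port, and response fields are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy REST-style service; the resource and its fields are hypothetical.
class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 42, "status": "shipped"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):      # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 8001), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: plain HTTP verbs on resource URLs, no SOAP envelope needed.
with urllib.request.urlopen("http://127.0.0.1:8001/orders/42") as resp:
    print(json.load(resp)["status"])   # -> shipped
server.shutdown()
```

The same exchange in SOAP would wrap the request and response in XML envelopes described by a WSDL interface; REST instead leans directly on HTTP and self-describing representations, which is one reason it has displaced CORBA-era stacks.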
• A traditional computer runs with a host operating system specially tailored for its
hardware architecture, as shown in Figure 3.1(a).
• After virtualization, different user applications managed by their own operating
systems (guest OS) can run on the same hardware, independent of the host OS.
• This is often done by adding additional software, called a virtualization layer as
shown in Figure 3.1(b).
• This virtualization layer is known as hypervisor or virtual machine monitor
(VMM). The VMs are shown in the upper boxes, where applications run with
their own guest OS over the virtualized CPU, memory, and I/O resources.
• The main function of the software layer for virtualization is to virtualize the
physical hardware of a host machine into virtual resources to be used by the VMs,
exclusively.
• The virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system.
• Common virtualization layers include the instruction set architecture (ISA) level,
hardware level, operating system level, library support level, and application
level.
1. Instruction Set Architecture Level
• At the ISA level, virtualization is performed by emulating a given ISA with the
ISA of the host machine. For example, MIPS binary code can run on an x86-based
host machine with the help of ISA emulation. With this approach, it is possible
to run a large amount of legacy binary code written for various processors on any
given new hardware host machine.
• Instruction set emulation leads to virtual ISAs created on any hardware
machine. The basic emulation method is through code interpretation. An
interpreter program interprets the source instructions to target instructions
one by one.
• One source instruction may require tens or hundreds of native target
instructions to perform its function. Obviously, this process is relatively slow.
For better performance, dynamic binary translation is desired (a minimal
interpreter sketch follows this subsection).
• Instruction set emulation requires binary translation and optimization. A
virtual instruction set architecture (V-ISA) thus requires adding a processor-
specific software translation layer to the compiler.
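The sketch below shows emulation by interpretation for an invented three-instruction guest ISA; real emulators decode actual binary encodings and, for speed, use dynamic binary translation rather than this one-instruction-at-a-time loop.

```python
# A minimal sketch of ISA emulation by interpretation. The guest ISA
# (LOAD/ADD/PRINT) is hypothetical and vastly simplified.
def interpret(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:        # fetch-decode-execute, one by one
        if op == "LOAD":             # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":            # ADD dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "PRINT":
            print(regs[args[0]])
        else:
            raise ValueError(f"unknown instruction {op}")
    return regs

interpret([("LOAD", "r0", 5), ("LOAD", "r1", 7),
           ("ADD", "r0", "r1"), ("PRINT", "r0")])   # -> 12
```

Each guest instruction costs many host instructions to decode and execute, which is why interpretation is slow and dynamic binary translation, which caches translated blocks of host code, is preferred in practice.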
2. Hardware Abstraction Level
• Hardware-level virtualization is performed right on top of the bare hardware.
On the one hand, this approach generates a virtual hardware environment for
a VM.
• On the other hand, the process manages the underlying hardware through
virtualization.
• The idea is to virtualize a computer’s resources, such as its processors,
memory, and I/O devices. The intention is to upgrade the hardware utilization
rate by multiple users concurrently. The idea was implemented in the IBM
VM/370 in the 1960s.
3. Operating System Level
• This refers to an abstraction layer between traditional OS and user
applications.
• OS-level virtualization creates isolated containers on a single physical server,
with the OS instances utilizing the hardware and software in data centers.
• The containers behave like real servers. OS-level virtualization is commonly
used to create virtual hosting environments that allocate hardware resources
among many mutually distrusting users (a minimal isolation sketch follows below).
• It is also used, to a lesser extent, in consolidating server hardware by moving
services on separate hosts into containers or VMs on one server.
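The following Unix-only Python sketch illustrates just the filesystem-isolation aspect of a container using chroot; NEW_ROOT is a hypothetical prepared directory tree, root privileges are required, and real container engines add namespaces, cgroups, and more.

```python
import os

# A Unix-only sketch of container-style filesystem isolation via chroot.
# NEW_ROOT is a hypothetical prepared directory tree; running this needs
# root privileges, and real container engines add namespaces and cgroups.
NEW_ROOT = "/srv/container-root"

pid = os.fork()
if pid == 0:                      # child: the "container" process
    os.chroot(NEW_ROOT)           # its view of "/" is now NEW_ROOT
    os.chdir("/")
    print("container sees:", os.listdir("/"))
    os._exit(0)
else:
    os.waitpid(pid, 0)            # parent: the host view is unaffected
```

Both processes share one OS kernel, which is what distinguishes OS-level virtualization from the hardware-level approach, where each VM boots its own guest OS.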
4. Library Support Level
• Most applications use APIs exported by user-level libraries rather than using
lengthy system calls by the OS. Since most systems provide well-documented
APIs, such an interface becomes another candidate for virtualization.
• Virtualization with library interfaces is possible by controlling the
communication link between applications and the rest of a system through
API hooks (see the hooking sketch after this subsection).
• The software tool WINE has implemented this approach to support Windows
applications on top of UNIX hosts.
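A minimal Python sketch of the API-hook idea: interposing on a library call so an application's requests can be observed or redirected, the same principle WINE applies at far larger scale to the Windows API. The hooked call here is Python's built-in open, chosen purely for illustration.

```python
import builtins

# A sketch of API hooking: interpose on a library call so application
# requests can be observed or translated before reaching the system.
real_open = builtins.open

def hooked_open(path, *args, **kwargs):
    print(f"[hook] open({path!r})")        # observe/redirect the call here
    return real_open(path, *args, **kwargs)

builtins.open = hooked_open                # install the hook
with open(__file__) as f:                  # unmodified application code
    f.readline()
builtins.open = real_open                  # remove the hook
```

The application code is unchanged; only the library boundary it calls through is intercepted, which is exactly why well-documented APIs make attractive virtualization points.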
5. User-Application Level
• Virtualization at the application level virtualizes an application as a VM. On
a traditional OS, an application often runs as a process. Therefore,
application-level virtualization is also known as 3.1 Implementation Levels
of Virtualization 133 process-level virtualization.
• The most popular approach is to deploy high-level language (HLL) VMs. In
this scenario, the virtualization layer sits as an application program on top of
the operating system, and the layer exports an abstraction of a VM that can
run programs written and compiled to a particular abstract machine
definition (a minimal sketch follows below).
• Any program written in the HLL and compiled for this VM will be able to
run on it.
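Here is a minimal sketch of this class of VM, assuming an invented four-instruction stack machine: programs "compiled" to its bytecode run wherever the interpreter runs, which is the essential portability property of HLL VMs.

```python
# A sketch of a high-level-language VM: a stack machine executing
# "bytecode" for an invented abstract machine definition.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack.pop())

# "Compiled" form of: print((2 + 3) * 4)
run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
     ("PUSH", 4), ("MUL", None), ("PRINT", None)])   # -> 20
```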
• The Microsoft .NET CLR and Java Virtual Machine (JVM) are two good
examples of this class of VM. Other forms of application-level virtualization
are known as application isolation, application sandboxing, or application
streaming.
• The process involves wrapping the application in a layer that is isolated from
the host OS and other applications. The result is an application that is much
easier to distribute and remove from user workstations.
• An example is the LANDesk application virtualization platform which
deploys software applications as self-contained, executable files in an
isolated environment without requiring installation, system modifications, or
elevated security privileges.