
MODULE-1

Distributed System Models and Enabling Technologies
1. SCALABLE COMPUTING OVER THE INTERNET
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
3. SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
4. SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS
5. PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
• SCALABLE COMPUTING OVER THE INTERNET
A) The Age of Internet Computing
B) Scalable Computing Trends and New Paradigms
C) The Internet of Things and Cyber-Physical Systems
1.a) The Age of Internet Computing
The Platform Evolution
• Computer technology has gone through five generations, each lasting from 10 to 20 years:
1. 1950 to 1970, a handful of mainframes, including the IBM 360 and CDC 6400,
were built to satisfy the demands of large businesses and government
organizations.
2. From 1960 to 1980, lower-cost minicomputers such as the DEC PDP 11 and VAX
Series became popular among small businesses and on college campuses
3. From 1970 to 1990, we saw widespread use of personal computers built with
VLSI microprocessors
4. From 1980 to 2000, portable computers and pervasive devices appeared in
both wired and wireless applications.
5. Since 1990, both HPC (high-performance computing) and HTC (high-throughput computing) systems, hidden in clusters, grids, or Internet clouds, have been in widespread use.
• Figure 1.1 illustrates the evolution of HPC and HTC systems. On the
HPC side, supercomputers (massively parallel processors, or MPPs) are gradually being replaced by clusters of cooperative computers out of a desire to share computing resources.
1) SCALABLE COMPUTING OVER THE INTERNET
Distributed System Families
• Since the mid-1990s, technologies for building P2P networks and networks of clusters
have been consolidated into many national projects designed to establish
wide area computing infrastructures, known as computational grids or data
grids. Recently, we have witnessed a surge in interest in exploring Internet
cloud resources for data-intensive applications. Internet clouds are the result
of moving desktop computing to service-oriented computing using server
clusters and huge databases at data centers.
• In October 2010, the highest performing cluster machine was built in China
with 86,016 CPU processor cores and 3,211,264 GPU cores in a Tianhe-1A
system. The largest computational grid connects up to hundreds of server
clusters. A typical P2P network may involve millions of client machines
working simultaneously. Experimental cloud computing clusters have been
built with thousands of processing nodes.
Meeting these goals requires satisfying the following design objectives:
• Efficiency measures the utilization rate of resources in an execution model by
exploiting massive parallelism in HPC. For HTC, efficiency is more closely related to
job throughput, data access, storage, and power efficiency.
• Dependability measures the reliability and self-management from the chip to the
system and application levels. The purpose is to provide high-throughput service
with Quality of Service (QoS) assurance, even under failure conditions.
• Adaptation in the programming model measures the ability to support billions of
job requests over massive data sets and virtualized cloud resources under various
workload and service models.
• Flexibility in application deployment measures the ability of distributed systems
to run well in both HPC (science and engineering) and HTC (business) applications
b) Scalable Computing Trends and New Paradigms

• The tremendous price/performance ratio of commodity hardware was driven by the desktop, notebook, and tablet computing markets. This has also driven the adoption and use of commodity technologies in large-scale computing.
b) Scalable Computing Trends and New Paradigms
• Innovative Applications
b) Scalable Computing Trends and New Paradigms
• The Trend toward Utility Computing
b) Scalable Computing Trends and New Paradigms
• The Hype Cycle of New Technologies
c) The Internet of Things and Cyber-Physical Systems
• The Internet of Things
The concept of the IoT was introduced in 1999 at MIT (Massachusetts Institute of Technology). The IoT refers to the networked interconnection of everyday objects, tools, devices, or computers. The idea is to tag every object using RFID or a related sensor or electronic technology such as GPS.
With the introduction of the IPv6 protocol, 2^128 IP addresses are available to distinguish all the objects on Earth, including all computers and pervasive devices.
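For a rough sense of scale (a back-of-the-envelope estimate added here for context, not a figure from the slides):

$2^{128} \approx 3.4 \times 10^{38}$ addresses, i.e., roughly $4.8 \times 10^{28}$ addresses per person for a world population of about $7 \times 10^{9}$.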
The IoT researchers have estimated that every human being will be
surrounded by 1,000 to 5,000 objects.
The IoT needs to be designed to track 100 trillion static or moving objects
simultaneously.
The IoT demands universal addressability of all of the objects or things. To
reduce the complexity of identification, search, and storage, one can set
the threshold to filter out fine-grain objects.
The IoT obviously extends the Internet and is more heavily developed in Asian and European countries.
c) The Internet of Things and Cyber-Physical Systems
• cyber-physical system (CPS)
A cyber-physical system (CPS) is the result of interaction between
computational processes and the physical world. A CPS integrates
“cyber” (heterogeneous, asynchronous) with “physical” (concurrent
and information-dense) objects. A CPS merges the “3C” technologies of
computation, communication, and control into an intelligent closed
feedback system between the physical world and the information
world, a concept which is actively explored in the United States. The IoT
emphasizes various networking connections among physical objects,
while the CPS emphasizes exploration of virtual reality (VR) applications
in the physical world
1. SCALABLE COMPUTING OVER THE INTERNET
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
3. SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
4. SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS
5. PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
a) Multicore CPUs and Multithreading Technologies
Advances in CPU Processors (Figure 1.4)
 Multicore CPU and Many-Core GPU Architectures
 Multithreading Technology
b) Multicore CPU and Many-Core GPU Architectures
 IA-32 and IA-64 instruction set architectures
 x86 processors have been extended to serve both HPC and HTC systems
 Many RISC processors have been replaced with multicore x86 processors, including the Intel i7, Xeon, AMD Opteron, Sun Niagara, IBM Power 6, and X cell processors.

 Niagara II is built with eight cores, with eight threads handled by each core.
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
a) Multicore CPUs and Multithreading Technologies
Multithreading Technology
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
b) GPU Computing to Exascale and Beyond
 How GPUs Work
 GPU Programming Model
 Power Efficiency of the GPU
• A GPU offloads the CPU from tedious graphics tasks in video editing
applications.
• The world’s first GPU, the GeForce 256, was marketed by
NVIDIA in 1999.
• These GPU chips can process a minimum of 10 million polygons per second and are used in nearly every computer on the market today.
• General-purpose computing on GPUs, known as GPGPU computing, has appeared in the HPC field. NVIDIA’s CUDA model was designed for HPC using GPGPUs.
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS

c) Memory, Storage, and Wide-Area Networking


 Memory Technology
 Disks and Storage Technology
 System-Area Interconnects
 Wide-Area Networking
(Figures: System-Area Interconnects; Wide-Area Networking)
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
d) Virtual Machines and Virtualization Middleware
 Virtual Machines
 VM Primitive Operations
 Virtual Infrastructures
Virtual Machines
VM Primitive Operations
2. TECHNOLOGIES FOR NETWORK-BASED SYSTEMS
e) Data Center Virtualization for Cloud Computing
 Data Center Growth and Cost Breakdown
 Low-Cost Design Philosophy
 Convergence of Technologies
3. SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
a) Clusters of Cooperative Computers
 Cluster Architecture
 Single-System Image
 Hardware, Software, and Middleware Support
 Major Cluster Design Issues
3. SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
b) Grid Computing Infrastructures
 Computational Grids
 Grid Families
3. SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
c) Peer-to-Peer Network Families
 P2P Systems
 Overlay Networks
 P2P Application Families
 P2P Computing Challenges
3. SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
d) Cloud Computing over the Internet
 Internet Clouds
 The Cloud Landscape
4. SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS
 SOA (Service-Oriented Architecture)
 Layered Architecture for Web Services and Grids
 CORBA and RMI
SOA
Benefits of SOA
4. SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS
b) Trends toward Distributed Operating Systems
 Distributed Operating Systems
 Amoeba versus DCE
 MOSIX2 for Linux Clusters
 Transparency in Programming Environments
4. SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS
c) Parallel and Distributed Programming Models
 Message-Passing Interface (MPI)
 MapReduce (a minimal word-count sketch follows this list)
Hadoop Library
 Open Grid Services Architecture (OGSA)
 Globus Toolkits and Extensions
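To make the MapReduce entry above concrete, here is a minimal, self-contained word-count sketch in plain Python. It only illustrates the map, shuffle, and reduce phases of the model; it is not Hadoop's actual API, and the function names are chosen here purely for illustration.

from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input document.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key so each key is reduced once.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: sum the partial counts for one word.
    return key, sum(values)

documents = ["the cloud scales", "the grid and the cloud"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'the': 3, 'cloud': 2, 'scales': 1, 'grid': 1, 'and': 1}

In a real MapReduce framework, the map and reduce calls are distributed across cluster nodes and the shuffle is performed by the runtime over the network.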
5. PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
a) Performance Metrics and Scalability Analysis
 Performance Metrics
 Dimensions of Scalability
 Scalability versus OS Image Count
 Amdahl’s Law (this law and Gustafson’s Law are sketched after this list)
 Problem with Fixed Workload
 Gustafson’s Law
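As a quick reference for the two speedup laws listed above (standard formulations with symbols defined here, not quoted from the slides), let f be the fraction of the workload that can be parallelized and n the number of processors:

Amdahl's Law (fixed workload):   $S = \dfrac{1}{(1 - f) + f/n}$

For example, f = 0.95 and n = 256 give S ≈ 1 / (0.05 + 0.95/256) ≈ 18.6, which illustrates the fixed-workload problem: the serial 5% caps the speedup near 20 no matter how many processors are added.

Gustafson's Law (scaled workload):   $S' = (1 - f) + f \cdot n$

With the same f = 0.95 and n = 256, S' ≈ 0.05 + 243.2 ≈ 243.3, because the parallel part of the workload grows with the machine size.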
5. PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
b) Fault Tolerance and System Availability
 System Availability
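For context on the availability item above (a standard reliability-engineering definition, stated here for reference), system availability is usually expressed through the mean time to failure (MTTF) and the mean time to repair (MTTR):

Availability $= \dfrac{MTTF}{MTTF + MTTR}$

For example, MTTF = 1,000 hours and MTTR = 10 hours give 1000 / 1010 ≈ 99% availability; a system is considered highly available when this ratio stays close to 1 as the system scales.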
5. PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
c) Network Threats and Data Integrity
 Threats to Systems and Networks
 Security Responsibilities
 Copyright Protection
 System Defense Technologies
 Data Protection Infrastructure
5. PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
d) Energy Efficiency in Distributed Computing
 Energy Consumption of Unused Servers
 Reducing Energy in Active Servers
 Application Layer
 Middleware Layer
 Resource Layer
 Network Layer
 DVFS Method for Energy Efficiency
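As background for the DVFS item above (the standard CMOS dynamic-power relationship, added for context rather than taken from the slides), dynamic voltage and frequency scaling saves energy because a processor's dynamic power grows roughly as

$P_{dynamic} \approx C_{eff} \cdot V^{2} \cdot f$

where $C_{eff}$ is the effective switched capacitance, V the supply voltage, and f the clock frequency. Because the voltage can usually be lowered together with the frequency, running at a lower frequency during idle or low-utilization periods cuts power disproportionately (roughly cubically), at the cost of longer execution time.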
Thank you
