
COMPUTER ARCHITECTURE AND ORGANIZATION
SYLLABUS

• Basics of Computer Architecture
• Memory Interfacing and Hierarchy
• Computer Organization
• I/O Interfacing
• Instruction Pipelining
• Number System
INTRODUCTION
ARCHITECTURE VS ORGANIZATION

Computer Architecture:
• Deals with the functional behaviour of computer systems.
• Covers the design and implementation of the various parts of a computer.

Computer Organization:
• Deals with structural relationships.
• Describes how operational attributes are linked together to realize the architectural specification.
• In describing computers, a distinction is often made between computer architecture and computer organization.
Although it is difficult to give precise definitions for these terms, a consensus exists about the general areas
covered by each.
WHAT ARE COMPUTER ARCHITECTURE AND
COMPUTER ORGANIZATION IN SIMPLE WORDS?

• Computer architecture is a blueprint for the design of a computer system and describes the system in an abstract manner; it specifies how the computer system is designed. Computer organization, on the other hand, is how the operational parts of a computer system are linked together; it provides the structural relationship between the parts of the system and explains how the computer works.
WHAT IS THE USE OF STUDYING COMPUTER
ARCHITECTURE AND COMPUTER ORGANIZATION?

• Studying computer architecture and computer organization offers several important benefits for anyone pursuing a career in computer science and related disciplines. Here are some key reasons:
• Computer architecture and organization provide insights into the fundamental principles underlying the
design and operation of computer systems.
• Knowing how instructions are executed, memory is accessed, and data is stored allows programmers to write efficient code.
• Computer architecture knowledge is crucial for optimizing the performance of applications.
• Studying computer organization helps bridge the gap between hardware and software and provides
insights into how the software interacts with the hardware components.
BLOCK DIAGRAM OF COMPUTER
[Figure: block diagram of a computer]
HARDWARE SYSTEM ARCHITECTURE
HISTORY
CLASSIFICATION OF COMPUTER
ORGANIZATION
CLASSES OF PARALLELISM AND PARALLEL
ARCHITECTURES
• The term parallelism refers to techniques that make programs faster by performing several computations at the same time. This requires hardware with multiple processing units.
• Parallelism at multiple levels is now the driving force of computer design across all four classes
of computers, with energy and cost being the primary constraints. There are basically two kinds
of parallelism in applications:
• 1. Data-Level Parallelism (DLP) arises because there are many data items that can be operated on at the same time.
• 2. Task-Level Parallelism (TLP) arises because tasks of work are created that can operate independently and largely in parallel (both kinds are contrasted in the sketch below).
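To make the two kinds concrete, here is a minimal C sketch (the task names are illustrative, not from the text). The loop is a data-level-parallel candidate because every iteration is independent; the two unrelated tasks are a task-level-parallel candidate because they could run on separate cores.

#include <stdio.h>

#define N 8

/* Data-level parallelism: the same operation applies to every element,
   so SIMD hardware or a GPU could process many elements at once. */
static void scale_array(float *a, float factor) {
    for (int i = 0; i < N; i++)      /* each iteration is independent */
        a[i] *= factor;
}

/* Task-level parallelism: two unrelated pieces of work that could run
   on separate cores at the same time (illustrative stand-ins). */
static void compress_log(void)  { printf("compressing log...\n"); }
static void index_search(void)  { printf("updating search index...\n"); }

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    scale_array(a, 2.0f);    /* DLP: one operation, many data items    */
    compress_log();          /* TLP: independent tasks that could      */
    index_search();          /* execute in parallel on different cores */
    for (int i = 0; i < N; i++)
        printf("%.0f ", a[i]);
    printf("\n");
    return 0;
}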
CLASSES OF PARALLELISM AND PARALLEL
ARCHITECTURES
• Computer hardware in turn can exploit these two kinds of application parallelism in
four major ways:
• 1. Instruction-Level Parallelism (ILP) exploits data-level parallelism at modest levels with compiler help, using ideas like pipelining, and at medium levels using ideas like speculative execution. ILP is a family of processor and compiler design techniques that speed up execution by causing individual machine operations, such as memory loads and stores, integer additions, and floating-point multiplications, to execute in parallel (a small sketch in C follows below).
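As a small illustration (a sketch, not from the text), the C function below contains three additions with no data dependences among them, so a pipelined or superscalar processor, often helped by compiler scheduling, is free to overlap their execution: exactly the parallelism ILP exploits.

#include <stdio.h>

/* The first three additions are mutually independent, so their
   executions can overlap in a pipeline; the final expression must
   wait for x, y, and z to be computed. */
static int ilp_demo(int a, int b, int c, int d, int e, int f) {
    int x = a + b;     /* independent of y and z */
    int y = c + d;     /* independent of x and z */
    int z = e + f;     /* independent of x and y */
    return x * y + z;  /* depends on all three   */
}

int main(void) {
    printf("%d\n", ilp_demo(1, 2, 3, 4, 5, 6));  /* (1+2)*(3+4)+(5+6) = 32 */
    return 0;
}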
• 2. Vector Architectures and Graphics Processing Units (GPUs) exploit data-level parallelism by applying a single instruction to a collection of data in parallel. A vector architecture implements an instruction set whose instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called vectors (see the sketch below).
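The sketch below shows the idea on an x86 machine with SSE (an assumed target; other instruction sets have equivalent vector instructions): each _mm_add_ps instruction performs four float additions at once.

#include <stdio.h>
#include <immintrin.h>   /* x86 SSE intrinsics */

/* Single instruction, multiple data: one vector instruction
   operates on four floats at a time. */
static void vec_add(float *dst, const float *a, const float *b, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats       */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); /* 4 additions at once */
    }
    for (; i < n; i++)     /* scalar cleanup for a tail shorter than 4 */
        dst[i] = a[i] + b[i];
}

int main(void) {
    float a[6] = {1, 2, 3, 4, 5, 6}, b[6] = {10, 20, 30, 40, 50, 60}, c[6];
    vec_add(c, a, b, 6);
    for (int i = 0; i < 6; i++)
        printf("%.0f ", c[i]);   /* prints: 11 22 33 44 55 66 */
    printf("\n");
    return 0;
}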
• 3. Thread-Level Parallelism exploits either data-level parallelism or task-level parallelism in a tightly coupled hardware model that allows for interaction among parallel threads. Thread-level parallelism (TLP) is a software capability that allows high-end programs, such as a database or web application, to work with multiple threads at the same time. Programs that support this ability can do a lot more, even under high workloads (a two-thread sketch follows below).
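A minimal two-thread sketch using POSIX threads (compile with -pthread; the way the work is split is an assumption for illustration): each thread sums half of a range, and the threads interact only when the partial results are combined.

#include <pthread.h>
#include <stdio.h>

/* Each thread works on a disjoint half of the problem; the only
   interaction is combining the partial sums at the end. */
static long partial[2];

static void *sum_half(void *arg) {
    long id = (long)arg;   /* thread 0 sums 0..499, thread 1 sums 500..999 */
    long s = 0;
    for (long i = id * 500; i < (id + 1) * 500; i++)
        s += i;
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("sum 0..999 = %ld\n", partial[0] + partial[1]);  /* 499500 */
    return 0;
}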
• 4. Request-Level Parallelism exploits parallelism among largely decoupled tasks specified by the programmer or the operating system. It is another way of representing work as a set of requests to be run in parallel; a request here means a user asking for some information to which servers respond (a minimal sketch follows below).
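The sketch below makes the decoupling visible (the request record and handler are hypothetical names): because each call touches only its own request, a real server could hand every call to any free worker thread, process, or machine without coordination.

#include <stdio.h>

typedef struct { int id; } request_t;   /* hypothetical request record */

/* Each handler call reads only its own request: no shared state,
   so requests can be served in parallel without coordination. */
static void handle_request(const request_t *req) {
    printf("served request %d\n", req->id);
}

int main(void) {
    request_t queue[] = {{1}, {2}, {3}};
    for (int i = 0; i < 3; i++)       /* sequential here; a real server  */
        handle_request(&queue[i]);    /* would dispatch these to workers */
    return 0;
}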
This classification was proposed by Michael J. Flynn in 1966, which is why it is called Flynn's taxonomy.
• 1. Single instruction stream, single data stream (SISD) This category is the
uniprocessor. The programmer thinks of it as the standard sequential computer, but
it can exploit instruction-level parallelism.
• 2. Single instruction stream, multiple data streams (SIMD) The same instruction is executed by multiple processors using different data streams. SIMD computers exploit data-level parallelism by applying the same operations to multiple items of data in parallel. Each processor has its own data memory, but there is a single instruction memory and control processor, which fetches and dispatches instructions.
COMPUTER ARCHITECTURE

• These four ways for hardware to support data-level parallelism and task-level parallelism go back 50 years. When Michael Flynn (1966) studied the parallel computing efforts in the 1960s, he found a simple classification whose abbreviations we still use today. He looked at the parallelism in the instruction and data streams called for by the instructions at the most constrained component of the multiprocessor, and placed all computers into one of four categories:

COMPUTER ARCHITECTURE

• 3. Multiple instruction streams, single data stream (MISD) No commercial multiprocessor of this type has been built to date, but it rounds out this simple classification.
• 4. Multiple instruction streams, multiple data streams (MIMD) Each processor fetches its own instructions and operates on its own data, and it targets task-level parallelism. In general, MIMD is more flexible than SIMD and thus more generally applicable, but it is inherently more expensive than SIMD.
• This taxonomy is a coarse model, as many parallel processors are hybrids of the SISD, SIMD,
and MIMD classes. Nonetheless, it is useful to put a framework on the design space for the
computers we will see in our future discussion.
CLASSES OF COMPUTER
Personal Mobile Device (PMD)
• Personal mobile device (PMD) is the term we apply to a collection of wireless devices with multimedia user interfaces, such as cell phones, tablet computers, and so on. Cost is a prime concern given that the consumer price for the whole product is a few thousand pesos. Although the emphasis on energy efficiency is frequently driven by the use of batteries, the need to use less expensive packaging (plastic versus ceramic) and the absence of a fan for cooling also limit total power consumption.
• Applications on PMDs are often Web-based and media-oriented, like the Google Goggles example. Energy and size requirements lead to the use of Flash memory for storage instead of magnetic disks.
• Responsiveness and predictability are key characteristics for media applications. A real-time performance requirement
means a segment of the application has an absolute maximum execution time.
• Other key characteristics in many PMD applications are the need to minimize memory and the need to use energy
efficiently. Energy efficiency is driven by both battery power and heat dissipation. The memory can be a substantial portion
of the system cost, and it is important to optimize memory size in such cases. The importance of memory size translates to
an emphasis on code size, since data size is dictated by the application.
CLASSES OF COMPUTER

Desktop Computing
• The first, and probably still the largest market in dollar terms, is desktop computing. Desktop computing spans from low-end netbooks that sell at low prices to high-end, heavily configured workstations that can sell at very high prices. Since 2008, more than half of the desktop computers made each year have been battery-operated laptop computers.
Throughout this range in price and capability, the desktop market tends to be driven to optimize price-performance.
This combination of performance (measured primarily in terms of computing performance and graphics
performance) and price of a system is what matters most to customers in this market, and hence to computer
designers. As a result, the newest, highest-performance microprocessors and cost-reduced microprocessors often
appear first in desktop systems.
• Desktop computing also tends to be reasonably well characterized in terms of applications and benchmarking,
though the increasing use of Web-centric, interactive applications poses new challenges in performance evaluation.
CLASSES OF COMPUTER

Servers
• As the shift to desktop computing occurred in the 1980s, the role of servers grew to provide larger-scale and more reliable file and computing services. Such servers have become the backbone of large-scale enterprise computing, replacing the traditional mainframe. For servers, different characteristics are important. First, availability is critical. Consider the servers running ATM machines for banks or airline reservation systems: failure of such a server system is far more catastrophic than the failure of a single desktop, since these servers must operate 24 hours a day, seven days a week.
CLASSES OF COMPUTER

• A second key feature of server systems is scalability. Server systems often grow in response to an increasing demand for the services they support or an increase in functional requirements. Thus, the ability to scale up the computing capacity, the memory, the storage, and the I/O bandwidth of a server is crucial. Finally, servers are designed for efficient throughput. That is, the overall performance of the server, measured in transactions per minute or Web pages served per second, is what is crucial. Responsiveness to an individual request remains important, but overall efficiency and cost-effectiveness, as determined by how many requests can be handled in a unit of time, are the key metrics for most servers.
CLASSES OF COMPUTER

• Figure 1.3 estimates the revenue costs of downtime for server applications.
CLASSES OF COMPUTER

Clusters/Warehouse-Scale Computers
• The growth of Software as a Service (SaaS) for applications like search, social networking, video sharing, multiplayer games, online shopping, and so on has led to the growth of a class of computers called clusters. Clusters are collections of desktop computers or servers connected by local area networks to act as a single larger computer. Each node runs its own operating system, and nodes communicate using a networking protocol. The largest of the clusters are called warehouse-scale computers (WSCs), in that they are designed so that tens of thousands of servers can act as one. Chapter 6 describes this class of extremely large computers.
CLASSES OF COMPUTER

Embedded Computers
• Embedded computers are found in everyday machines: microwaves, washing machines, most printers, most networking switches, and all cars contain simple embedded microprocessors. The processors in a PMD are often considered embedded computers, but we are keeping them as a separate category because PMDs are platforms that can run externally developed software, and they share many of the characteristics of desktop computers. Other embedded devices are more limited in hardware and software sophistication. We use the ability to run third-party software as the dividing line between non-embedded and embedded computers. Embedded computers have the widest spread of processing power and cost. They include 8-bit and 16-bit processors that may cost less than a dime, 32-bit
INTRODUCTION TO MEMORY
STRUCTURE VS FUNCTION

• ■Structure: The way in which the components are interrelated.
• ■Function: The operation of each individual component as part of the structure.
FUNCTION
• Both the structure and functioning of a computer are, in essence, simple. In general terms, there are only four basic
functions that a computer can perform:
■Data processing:
Manipulation of data by a computer. It includes the conversion of raw data to machine-readable form, flow of data
through the CPU and memory to output devices, and formatting or transformation of output. Data may take a wide
variety of forms, and the range of processing requirements is broad. However, we shall see that there are only a few
fundamental methods or types of data processing.
■Data storage:
Data storage is the retention of information using technology specifically developed to keep that data and have it as
accessible as necessary. Data storage refers to the use of recording media to retain data using computers or other
devices. The most prevalent forms of data storage are file storage, block storage, and object storage, with each being
ideal for different purposes.
• ■Data movement: Data movement is the ability to move data from one place in your organization to another through technologies that include extract, transform, load (ETL); extract, load, transform (ELT); data replication; and change data capture (CDC), primarily for the purposes of data migration and data warehousing. When data are moved over longer distances, to or from a remote device, the process is known as data communications.
• ■Control: Within the computer, a control unit manages the computer’s resources
and orchestrates the performance of its functional parts in response to instructions.
STRUCTURE

• We now look in a general way at the internal structure of a computer. We begin with a traditional computer with a single processor that employs a microprogrammed control unit, then examine a typical multicore structure.
STRUCTURE

• ■Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as the processor.
■Main memory: Stores data.
• ■I/O: Moves data between the computer and its external environment.
■System interconnection: Some mechanism that provides for communication
among CPU, main memory, and I/O. A common example of system
interconnection is by means of a system bus, consisting of a number of
conducting wires to which all the other components attach.
SECONDARY MEMORY

• Slower than primary memory
• Retains data permanently
• Bigger in size
• Cost-Effective
• Semi-Random Accessibility
Virtual Memory Mapping

[Figure: virtual pages mapped to physical page frames]
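A small worked example of the translation a figure like this typically shows, assuming 4 KiB pages (the page size and the toy page table are assumptions for illustration): the low 12 bits of a virtual address are the offset within a page, and the remaining bits index a page table that names a physical frame.

#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12                  /* 4 KiB pages (assumed size) */
#define PAGE_SIZE (1u << PAGE_BITS)

int main(void) {
    /* toy page table: virtual page i maps to the frame stored at index i */
    uint32_t page_table[4] = {7, 3, 0, 5};

    uint32_t vaddr  = 0x1ABC;                  /* example virtual address    */
    uint32_t vpn    = vaddr >> PAGE_BITS;      /* virtual page number = 1    */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* offset within page = 0xABC */
    uint32_t paddr  = (page_table[vpn] << PAGE_BITS) | offset;

    printf("vaddr 0x%X -> page %u -> frame %u -> paddr 0x%X\n",
           vaddr, vpn, page_table[vpn], paddr);   /* paddr = 0x3ABC */
    return 0;
}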
