Computer Architecture and Organization
SYLLABUS
Computer Architecture:
• Deals with the functional behaviour of computer systems.
• Covers the design and implementation of the various parts of a computer.
Computer Organization:
• Deals with the structural relationships among components.
• Concerns how operational attributes are linked together and contribute to realizing the architectural specification.
• In describing computers, a distinction is often made between computer architecture and computer organization.
Although it is difficult to give precise definitions for these terms, a consensus exists about the general areas
covered by each.
WHAT ARE COMPUTER ARCHITECTURE AND
COMPUTER ORGANIZATION IN SIMPLE WORDS?
• Studying computer architecture and computer organization offers several important benefits for
individuals seeking to make a career in computer science and related disciplines. Here are some key
reasons:
• Computer architecture and organization provide insights into the fundamental principles underlying the
design and operation of computer systems.
• Knowing how instructions are executed, memory is accessed, and data is stored allows programmers to write efficient code.
• Computer architecture knowledge is crucial for optimizing the performance of applications.
• Studying computer organization helps bridge the gap between hardware and software and provides
insights into how the software interacts with the hardware components.
BLOCK DIAGRAM OF COMPUTER
HARDWARE SYSTEM ARCHITECTURE
HISTORY
CLASSIFICATION OF COMPUTER
ORGANIZATION
CLASSES OF PARALLELISM AND PARALLEL
ARCHITECTURES
• The term Parallelism refers to techniques to make programs faster by performing several
computations at the same time. This requires hardware with multiple processing units.
• Parallelism at multiple levels is now the driving force of computer design across all four classes
of computers, with energy and cost being the primary constraints. There are basically two kinds
of parallelism in applications:
• 1. Data-Level Parallelism (DLP) arises because there are many data items that can be operated on at the same time.
• 2. Task-Level Parallelism (TLP) arises because tasks of work are created that can operate independently and largely in parallel.
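The two kinds of application parallelism above can be sketched in plain Python. This is an illustrative sketch only; the helper names `count_evens` and `total` are made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Data-Level Parallelism (DLP): the SAME operation is applied to many
# independent data items; each element could be processed at the same
# time on parallel hardware.
data = [1, 2, 3, 4, 5, 6, 7, 8]
squared = [x * x for x in data]  # element-wise, no dependencies

# Task-Level Parallelism (TLP): DIFFERENT, largely independent tasks
# run at the same time.
def count_evens(items):
    return sum(1 for x in items if x % 2 == 0)

def total(items):
    return sum(items)

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(count_evens, data)  # task 1
    f2 = pool.submit(total, data)        # task 2
    evens, s = f1.result(), f2.result()

print(squared)    # [1, 4, 9, 16, 25, 36, 49, 64]
print(evens, s)   # 4 36
```

The list comprehension is element-wise work a vector unit could do in parallel, while the two submitted functions are independent tasks a thread pool can overlap.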
• Computer hardware in turn can exploit these two kinds of application parallelism in
four major ways:
• 1. Instruction-Level Parallelism (ILP) exploits data-level parallelism at modest levels with compiler help, using ideas like pipelining, and at medium levels using ideas like speculative execution. ILP is a family of processor and compiler design techniques that speed up execution by causing individual machine operations, such as memory loads and stores, integer additions, and floating-point multiplications, to execute in parallel.
• 2. Vector Architectures and Graphics Processing Units (GPUs) exploit data-level parallelism by applying a single instruction to a collection of data in parallel. A vector architecture implements an instruction set whose instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called vectors.
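A rough sketch of the vector idea in Python: one "instruction" (here, a function call) operates on entire one-dimensional arrays rather than looping one scalar at a time in the programmer-visible code. The `vadd` helper is a hypothetical stand-in for a single vector instruction, not a real ISA operation:

```python
from array import array

def vadd(a, b):
    """Element-wise add of two equal-length vectors (one 'vector op')."""
    assert len(a) == len(b)
    return array('d', (x + y for x, y in zip(a, b)))

v1 = array('d', [1.0, 2.0, 3.0, 4.0])
v2 = array('d', [10.0, 20.0, 30.0, 40.0])
v3 = vadd(v1, v2)   # one call ~ one vector instruction over the whole array
print(list(v3))     # [11.0, 22.0, 33.0, 44.0]
```

On real vector hardware, the element-wise additions inside `vadd` would proceed in parallel lanes rather than sequentially.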
• 3. Thread-Level Parallelism exploits either data-level parallelism or task-level parallelism in a tightly coupled hardware model that allows for interaction among parallel threads. As a software capability, thread-level parallelism allows high-end programs, such as a database or web application, to work with multiple threads at the same time; programs that support this ability can do much more, even under high workloads.
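The tightly coupled model, with interaction among parallel threads, can be sketched with Python's standard `threading` module. The shared counter and lock below are illustrative only:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Each thread increments the SHARED counter n times."""
    global counter
    for _ in range(n):
        with lock:          # interaction point: threads coordinate here
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000
```

The lock is what makes this "tightly coupled": the threads are not independent tasks, they cooperate on shared state and must synchronize to do so correctly.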
• 4. Request-Level Parallelism exploits parallelism among largely decoupled tasks specified by the programmer or the operating system. It is another way of representing tasks: a set of requests that run in parallel. By "request" we mean that a user is asking for some information to which servers respond.
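Request-level parallelism can be sketched with a pool of workers serving independent requests. `handle_request` is a hypothetical handler written for this example, not part of any real server framework:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Each request touches only its own data; nothing is shared between
    # requests, so they can be served in parallel with no coordination.
    return f"profile-for-user-{user_id}"

requests = [101, 102, 103, 104]
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, requests))

print(responses)
```

Contrast this with the thread-level sketch above: here there is no lock and no shared state, because the requests are decoupled from one another.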
This classification was proposed by Michael J. Flynn in 1966, which is why it is called Flynn's taxonomy.
• 1. Single instruction stream, single data stream (SISD) This category is the
uniprocessor. The programmer thinks of it as the standard sequential computer, but
it can exploit instruction-level parallelism.
• 2. Single instruction stream, multiple data streams (SIMD) The same instruction is executed by multiple processors using different data streams. SIMD computers exploit data-level parallelism by applying the same operations to multiple items of data in parallel. Each processor has its own data memory, but there is a single instruction memory and control processor, which fetches and dispatches instructions.
COMPUTER ARCHITECTURE
• These four ways for hardware to support data-level parallelism and task-level parallelism go back 50 years. When Michael Flynn (1966) studied the parallel computing efforts in the 1960s, he found a simple classification whose abbreviations we still use today. He looked at the parallelism in the instruction and data streams called for by the instructions at the most constrained component of the multiprocessor, and placed all computers into one of four categories:
Desktop Computing
• The first, and probably still the largest market in dollar terms, is desktop computing. Desktop computing spans from low-end netbooks that sell cheaply to high-end, heavily configured workstations that may sell for very high prices. Since 2008, more than half of the desktop computers made each year have been battery-operated laptop computers. Throughout this range in price and capability, the desktop market tends to be driven to optimize price-performance. This combination of performance (measured primarily in terms of computing performance and graphics performance) and price of a system is what matters most to customers in this market, and hence to computer designers. As a result, the newest, highest-performance microprocessors and cost-reduced microprocessors often appear first in desktop systems.
• Desktop computing also tends to be reasonably well characterized in terms of applications and benchmarking,
though the increasing use of Web-centric, interactive applications poses new challenges in performance evaluation.
CLASSES OF COMPUTER
Servers
• As the shift to desktop computing occurred in the 1980s, the role of servers grew to provide larger-scale and more reliable file and computing services. Such servers have become the backbone of large-scale enterprise computing, replacing the traditional mainframe. For servers, different characteristics are important. First, availability is critical. Consider the servers running ATM machines for banks or airline reservation systems: failure of such a server system is far more catastrophic than failure of a single desktop, since these servers must operate seven days a week, 24 hours a day.
• A second key feature of server systems is scalability. Server systems often grow in response to an increasing demand for the services they support or an increase in functional requirements. Thus, the ability to scale up the computing capacity, the memory, the storage, and the I/O bandwidth of a server is crucial. Finally, servers are designed for efficient throughput. That is, the overall performance of the server, in terms of transactions per minute or Web pages served per second, is what is crucial. Responsiveness to an individual request remains important, but overall efficiency and cost-effectiveness, as determined by how many requests can be handled in a unit of time, are the key metrics for most servers.
• Figure 1.3 estimates the revenue costs of downtime for server applications.
Clusters/Warehouse-Scale Computers
• The growth of Software as a Service (SaaS) for applications like search, social networking, video sharing, multiplayer games, online shopping, and so on has led to the growth of a class of computers called clusters. Clusters are collections of desktop computers or servers connected by local area networks to act as a single larger computer. Each node runs its own operating system, and nodes communicate using a networking protocol. The largest of the clusters are called warehouse-scale computers (WSCs), in that they are designed so that tens of thousands of servers can act as one. Chapter 6 describes this class of extremely large computers.
Embedded Computers
• Embedded computers are found in everyday machines: microwaves, washing machines, most printers, most networking switches, and all cars contain simple embedded microprocessors. The processors in a PMD are often considered embedded computers, but we are keeping them as a separate category because PMDs are platforms that can run externally developed software, and they share many of the characteristics of desktop computers. Other embedded devices are more limited in hardware and software sophistication. We use the ability to run third-party software as the dividing line between non-embedded and embedded computers. Embedded computers have the widest spread of processing power and cost. They include 8-bit and 16-bit processors that may cost less than a dime, 32-bit
INTRODUCTION TO MEMORY
STRUCTURE VS FUNCTION
• Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as the processor.
• Main memory: Stores data.
• I/O: Moves data between the computer and its external environment.
• System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O. A common example of system interconnection is a system bus, consisting of a number of conducting wires to which all the other components attach.
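As a toy illustration of this structural view, the sketch below models a CPU and main memory attached to a shared system bus, with all communication going through the bus. The class and method names are made up for the example; a real bus is hardware, not a Python object:

```python
class Bus:
    """Shared interconnect: devices attach, and all traffic passes here."""
    def __init__(self):
        self.devices = {}

    def attach(self, name, device):
        self.devices[name] = device

    def read(self, name, addr):
        return self.devices[name].read(addr)

    def write(self, name, addr, value):
        self.devices[name].write(addr, value)

class Memory:
    """Main memory: stores data, addressed by cell index."""
    def __init__(self, size):
        self.cells = [0] * size

    def read(self, addr):
        return self.cells[addr]

    def write(self, addr, value):
        self.cells[addr] = value

class CPU:
    """Controls operation and processes data, reaching memory via the bus."""
    def __init__(self, bus):
        self.bus = bus

    def run(self):
        self.bus.write("mem", 0, 21)          # store an input value
        value = self.bus.read("mem", 0)       # load it back
        self.bus.write("mem", 1, value * 2)   # process and store the result

bus = Bus()
bus.attach("mem", Memory(16))
cpu = CPU(bus)
cpu.run()
print(bus.read("mem", 1))  # 42
```

The point of the sketch is the topology: the CPU never touches memory directly, only through the shared interconnect, which is exactly the role of a system bus.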
SECONDARY MEMORY