Global
edition
Computer Organization
and Architecture
Designing for Performance
tenth edition
William Stallings
To Tricia
my loving wife, the kindest
and gentlest person
Contents
Foreword 13
Preface 15
About the Author 23
Chapter 5 Internal Memory 189
5.1 Semiconductor Main Memory 190
5.2 Error Correction 198
5.3 DDR DRAM 204
5.4 Flash Memory 209
5.5 Newer Nonvolatile Solid-State Memory Technologies 211
5.6 Key Terms, Review Questions, and Problems 214
Chapter 6 External Memory 218
6.1 Magnetic Disk 219
6.2 RAID 228
6.3 Solid State Drives 236
6.4 Optical Memory 241
6.5 Magnetic Tape 246
6.6 Key Terms, Review Questions, and Problems 248
Chapter 7 Input/Output 252
7.1 External Devices 254
7.2 I/O Modules 256
7.3 Programmed I/O 259
7.4 Interrupt-Driven I/O 263
7.5 Direct Memory Access 272
7.6 Direct Cache Access 278
7.7 I/O Channels and Processors 285
7.8 External Interconnection Standards 287
7.9 IBM zEnterprise EC12 I/O Structure 290
7.10 Key Terms, Review Questions, and Problems 294
Chapter 8 Operating System Support 299
8.1 Operating System Overview 300
8.2 Scheduling 311
8.3 Memory Management 317
8.4 Intel x86 Memory Management 328
8.5 ARM Memory Management 333
8.6 Key Terms, Review Questions, and Problems 338
References 824
Index 833
Credits 857
Online Appendices
(Online chapters, appendices, and other documents are Premium Content, available via the access card at the front of this book.)
Foreword
by Chris Jesshope
Professor (Emeritus), University of Amsterdam
Author of Parallel Computers (with R. W. Hockney), 1981 & 1988
Having been active in computer organization and architecture for many years, it is a pleas-
ure to write this foreword for the new edition of William Stallings’ comprehensive book on
this subject. In doing this, I found myself reflecting on the trends and changes in this subject
over the time that I have been involved in it. I myself became interested in computer archi-
tecture at a time of significant innovation and disruption. That disruption was brought about
not only through advances in technology but perhaps more significantly through access to
that technology. VLSI was here and VLSI design was available to students in the classroom.
These were exciting times. The ability to integrate a mainframe style computer on a single
silicon chip was a milestone, but that this was accomplished by an academic research team
made the achievement quite unique. This period was characterized by innovation and diver-
sity in computer architecture with one of the main trends being in the area of parallelism.
In the 1970s, I had hands-on experience of the Illiac IV, which was an early example of
explicit parallelism in computer architecture and which incidentally pioneered all-semiconductor
memory. This interaction, and it certainly was that, kick-started my own interest in
computer architecture and organization, with particular emphasis on explicit parallelism in
computer architecture.
Throughout the 1980s and early 1990s research flourished in this field and there was a
great deal of innovation, much of which came to market through university start-ups. Iron-
ically however, it was the same technology that reversed this trend. Diversity was gradually
replaced with a near monoculture in computer systems with advances in just a few instruc-
tion set architectures. Moore’s law, a self-fulfilling prediction that became an industry guide-
line, meant that basic device speeds and integration densities both grew exponentially, with
the latter doubling every 18 months or so. The speed increase was the proverbial free lunch
for computer architects and the integration levels allowed more complexity and innovation
at the micro-architecture level. The free lunch of course did have a cost, that being the expo-
nential growth of capital investment required to fulfill Moore’s law, which once again limited
the access to state-of-the-art technologies. Moreover, most users found it easier to wait for
the next generation of mainstream processor than to invest in the innovations in parallel
computers, with their pitfalls and difficulties. The exceptions to this were the few large insti-
tutions requiring ultimate performance; two topical examples being large-scale scientific
simulation such as climate modeling and also in our security services for code breaking. For
everyone else, the name of the game was compatibility and two instruction set architectures
that benefited from this were x86 and ARM, the latter in embedded systems and the former
in just about everything else. Parallelism was still there in the implementation of these ISAs,
it was just that it was implicit, harnessed by the architecture not in the instruction stream
that drives it.
Throughout the late 1990s and early 2000s, this approach to implicitly exploiting con-
currency in single-core computer systems flourished. However, in spite of the exponential
growth of logic density, it was the cost of the techniques exploited which brought this era to
a close. In superscalar processors, the logic costs do not grow linearly with issue width
(parallelism); some components grow as the square or even the cube of the issue width.
Although the exponential growth in logic could sustain this continued development, there
were two major pitfalls: it was increasingly difficult to expose concurrency implicitly from
imperative programs and hence efficiencies in the use of instruction issue slots decreased.
Perhaps more importantly, technology was experiencing a new barrier to performance
gains, namely that of power dissipation, and several superscalar developments were halted
because the silicon in them would have been too hot. These constraints have mandated the
exploitation of explicit parallelism, despite the compatibility challenges. So it seems that
again innovation and diversity are opening up this area to new research.
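The superscalar cost argument above can be made concrete with a toy model. The sketch below is my own illustration, not the foreword's: the structure names and port counts are assumptions, chosen only to contrast the quadratic growth of pairwise structures (such as a full result-bypass network) with the linear growth of per-slot structures.

```python
# Toy cost model (illustrative only): structures whose size depends on
# pairs of issue slots, such as a full bypass network, grow roughly
# quadratically with issue width w; per-slot structures grow linearly.

def bypass_paths(w: int) -> int:
    # Each of w producing slots may forward a result to each of w
    # consuming slots, giving w * w forwarding paths.
    return w * w

def register_file_ports(w: int) -> int:
    # Assume 2 read ports and 1 write port per issued instruction.
    return 3 * w

for w in (1, 2, 4, 8):
    print(f"width {w}: {bypass_paths(w)} bypass paths, "
          f"{register_file_ports(w)} register-file ports")
```

Doubling the issue width quadruples the pairwise structures while only doubling the per-slot ones, which is why wide-issue designs ran into the power and area walls the foreword describes.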
Perhaps not since the 1980s has it been so interesting to study in this field. That diver-
sity is an economic reality can be seen by the decrease in issue width (implicit parallelism)
and increase in the number of cores (explicit parallelism) in mainstream processors. How-
ever, the question is how to exploit this, both at the application and the system level. There
are significant challenges here still to be solved. Superscalar processors rely on the processor
to extract parallelism from a single instruction stream. What if we shifted the emphasis and
provided an instruction stream with maximum parallelism? How could we exploit this in
different configurations and/or generations of processors that require different levels of
explicit parallelism? Is it possible therefore to have a micro-architecture that sequentializes and
schedules this maximum concurrency captured in the ISA to match the current configur-
ation of cores so that we gain the same compatibility in a world of explicit parallelism? Does
this require operating systems in silicon for efficiency?
These are just some of the questions facing us today. To answer these questions and
more requires a sound foundation in computer organization and architecture, and this book
by William Stallings provides a very timely and comprehensive foundation. It gives a com-
plete introduction to the basics required, tackling what can be quite complex topics with
apparent simplicity. Moreover, it deals with the more recent developments in this field,
where innovation has taken place in the past and is taking place today. Examples are in
superscalar issue and in explicitly parallel multicores. What is more, this latest edition
includes two very recent topics: the design and use of GPUs for general-purpose computing,
and the latest trends in cloud computing, both of which have become mainstream only
recently. The book makes
good use of examples throughout to highlight the theoretical issues covered, and most of
these examples are drawn from developments in the two most widely used ISAs, namely the
x86 and ARM. To reiterate, this book is complete and is a pleasure to read and hopefully
will kick-start more young researchers down the same path that I have enjoyed over the last
40 years!
Preface
What’s New in the Tenth Edition
Since the ninth edition of this book was published, the field has seen continued innovations
and improvements. In this new edition, I try to capture these changes while maintaining a
broad and comprehensive coverage of the entire field. To begin this process of revision, the
ninth edition of this book was extensively reviewed by a number of professors who teach
the subject and by professionals working in the field. The result is that, in many places, the
narrative has been clarified and tightened, and illustrations have been improved.
Beyond these refinements to improve pedagogy and user-friendliness, there have been
substantive changes throughout the book. Roughly the same chapter organization has been
retained, but much of the material has been revised and new material has been added. The
most noteworthy changes are as follows:
■■ GPGPU [General-Purpose Computing on Graphics Processing Units (GPUs)]: One
of the most important new developments in recent years has been the broad adoption
of GPGPUs to work in coordination with traditional CPUs to handle a wide range of
applications involving large arrays of data. A new chapter is devoted to the topic of
GPGPUs.
■■ Heterogeneous multicore processors: The latest development in multicore architecture,
the heterogeneous multicore processor, is now covered.
■■ Embedded systems: The overview of embedded systems in Chapter 1 has been substantially
revised and expanded to reflect the current state of embedded technology.
■■ Microcontrollers: In terms of numbers, almost all computers now in use are embedded
microcontrollers. The discussion of embedded systems now includes coverage of
microcontrollers.
■■ System performance: The coverage of system performance issues has been revised,
expanded, and reorganized for a clearer and more thorough treatment. Chapter 2 is
devoted to this topic, and the issue of system performance arises throughout the book.
■■ Flash memory: The coverage of flash memory has been updated and expanded, and now
includes a discussion of the technology and organization of flash memory for internal
memory (Chapter 5) and external memory (Chapter 6).
■■ Nonvolatile RAM: New to this edition is treatment of three important new nonvolatile
solid-state RAM technologies that occupy different positions in the memory hierarchy:
STT-RAM, PCRAM, and ReRAM.
■■ Direct cache access (DCA): To meet the protocol processing demands for very high
speed network connections, Intel and other manufacturers have developed DCA tech-
nologies that provide much greater throughput than traditional direct memory access
(DMA) approaches. New to this edition, Chapter 7 explores DCA in some detail.
■■ Intel Core Microarchitecture: As in the previous edition, the Intel x86 family is used as
a major example system throughout. The treatment has been updated to reflect newer
Intel systems, especially the Intel Core Microarchitecture, which is used on both PC and
server products.
■■ Homework problems: The number of supplemental homework problems, with solutions
available for student use, has been increased.
Table P.1 Coverage of CS2013 Architecture and Organization (AR) Knowledge Area

AR Knowledge Unit: Digital Logic and Digital Systems (Tier 2)
Textbook Coverage: Chapters 1 and 11
Topics:
●● Overview and history of computer architecture
●● Combinational vs. sequential logic / Field-programmable gate arrays as a fundamental
combinational + sequential logic building block
●● Multiple representations/layers of interpretation (hardware is just another layer)
●● Physical constraints (gate delays, fan-in, fan-out, energy/power)

AR Knowledge Unit: Machine Level Representation of Data (Tier 2)
Textbook Coverage: Chapters 9 and 10
Topics:
●● Bits, bytes, and words
●● Numeric data representation and number bases
●● Fixed- and floating-point systems
●● Signed and twos-complement representations
●● Representation of non-numeric data (character codes, graphical data)
Objectives
This book is about the structure and function of computers. Its purpose is to present, as clearly
and completely as possible, the nature and characteristics of modern-day computer systems.
This task is challenging for several reasons. First, there is a tremendous variety of prod-
ucts that can rightly claim the name of computer, from single-chip microprocessors costing
a few dollars to supercomputers costing tens of millions of dollars. Variety is exhibited not
only in cost but also in size, performance, and application. Second, the rapid pace of change
that has always characterized computer technology continues with no letup. These changes
cover all aspects of computer technology, from the underlying integrated circuit technology
used to construct computer components to the increasing use of parallel organization con-
cepts in combining those components.
In spite of the variety and pace of change in the computer field, certain fundamental
concepts apply consistently throughout. The application of these concepts depends on the
current state of the technology and the price/performance objectives of the designer. The
intent of this book is to provide a thorough discussion of the fundamentals of computer
organization and architecture and to relate these to contemporary design issues.
The subtitle suggests the theme and the approach taken in this book. It has always
been important to design computer systems to achieve high performance, but never has
this requirement been stronger or more difficult to satisfy than today. All of the basic per-
formance characteristics of computer systems, including processor speed, memory speed,
memory capacity, and interconnection data rates, are increasing rapidly. Moreover, they are
increasing at different rates. This makes it difficult to design a balanced system that maxi-
mizes the performance and utilization of all elements. Thus, computer design increasingly
becomes a game of changing the structure or function in one area to compensate for a per-
formance mismatch in another area. We will see this game played out in numerous design
decisions throughout the book.
A computer system, like any system, consists of an interrelated set of components.
The system is best characterized in terms of structure—the way in which components are
interconnected, and function—the operation of the individual components. Furthermore, a
computer’s organization is hierarchical. Each major component can be further described by
decomposing it into its major subcomponents and describing their structure and function.
For clarity and ease of understanding, this hierarchical organization is described in this book
from the top down:
■■ Computer system: Major components are processor, memory, I/O.
■■ Processor: Major components are control unit, registers, ALU, and instruction execu-
tion unit.
■■ Control unit: Provides control signals for the operation and coordination of all processor
components.
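The top-down decomposition described above can be sketched as a small data structure. This is my own illustration of the hierarchy as the text presents it, not something from the book; the nesting simply mirrors the three levels listed.

```python
# Hierarchical view of a computer system, following the top-down
# decomposition in the text: each component maps to its major
# subcomponents, and leaves are empty dicts.
computer_system = {
    "processor": {
        "control unit": {},              # control signals for coordination
        "registers": {},
        "ALU": {},
        "instruction execution unit": {},
    },
    "memory": {},
    "I/O": {},
}

def describe(component: dict, name: str = "computer system",
             depth: int = 0) -> None:
    """Print the structure top-down, one indent level per layer."""
    print("  " * depth + name)
    for sub, parts in component.items():
        describe(parts, sub, depth + 1)

describe(computer_system)
```

Walking the structure from the root reproduces the book's order of presentation: system first, then processor, then the processor's internals.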