0% found this document useful (0 votes)
27 views85 pages

Principles of Computer Hardware 4th Edition Alan Clements Instant Download

The document is a promotional listing for the fourth edition of 'Principles of Computer Hardware' by Alan Clements, aimed at students in electronics and computer science. It covers a wide range of topics related to computer hardware, including advanced computer arithmetic and the architecture of digital computers, while also introducing operating systems and local area networks. Additionally, it provides links to other related computer science and security books available for download.

Uploaded by

saaihcidral
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
27 views85 pages

Principles of Computer Hardware 4th Edition Alan Clements Instant Download

The document is a promotional listing for the fourth edition of 'Principles of Computer Hardware' by Alan Clements, aimed at students in electronics and computer science. It covers a wide range of topics related to computer hardware, including advanced computer arithmetic and the architecture of digital computers, while also introducing operating systems and local area networks. Additionally, it provides links to other related computer science and security books available for download.

Uploaded by

saaihcidral
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 85

Principles Of Computer Hardware 4th Edition Alan

Clements download

https://ebookbell.com/product/principles-of-computer-
hardware-4th-edition-alan-clements-51751642

Explore and download more ebooks at ebookbell.com


Here are some recommended products that we believe you will be
interested in. You can click the link to download.

Principles Of Computer Security Wm Arthur Conklin Greg White Chuck


Cothren Roger L Davis Dwayne Williams

https://ebookbell.com/product/principles-of-computer-security-wm-
arthur-conklin-greg-white-chuck-cothren-roger-l-davis-dwayne-
williams-50195064

Principles Of Computer Security Comptia Security And Beyond Lab Manual


Second Edition Vincent Nestler Gregory White Wm Arthur Conklin Matthew
Hirsch Corey Schou

https://ebookbell.com/product/principles-of-computer-security-comptia-
security-and-beyond-lab-manual-second-edition-vincent-nestler-gregory-
white-wm-arthur-conklin-matthew-hirsch-corey-schou-50195066

Principles Of Computer Organization And Assembly Language Using The


Java Virtual Machine Juola

https://ebookbell.com/product/principles-of-computer-organization-and-
assembly-language-using-the-java-virtual-machine-juola-22041400

Principles Of Computer Security Comptia Security And Beyond Lab Manual


Exam Sy0601 1st Edition Jonathan Weissman

https://ebookbell.com/product/principles-of-computer-security-comptia-
security-and-beyond-lab-manual-exam-sy0601-1st-edition-jonathan-
weissman-34084442
Principles Of Computer Security Comptia Security And Beyond Lab Manual
Exam Sy0601 Jonathan S Weissman

https://ebookbell.com/product/principles-of-computer-security-comptia-
security-and-beyond-lab-manual-exam-sy0601-jonathan-s-
weissman-34085392

Principles Of Computer Security Comptia Security And Beyond Exam


Sy0601 6th Edition 6th Greg White

https://ebookbell.com/product/principles-of-computer-security-comptia-
security-and-beyond-exam-sy0601-6th-edition-6th-greg-white-42112260

Principles Of Computeraided Design Joy Crelin Editor

https://ebookbell.com/product/principles-of-computeraided-design-joy-
crelin-editor-44078440

Principles Of Computer Graphics Theory And Practice Using Opengl And


Maya 1st Edition Shalini Govilpai Auth

https://ebookbell.com/product/principles-of-computer-graphics-theory-
and-practice-using-opengl-and-maya-1st-edition-shalini-govilpai-
auth-4592056

Principles Of Computer Science Salem Press

https://ebookbell.com/product/principles-of-computer-science-salem-
press-6855690
PRINCIPLES OF
COMPUTER
HARDWARE

Alan Clements
School of Computing
University of Teesside
Fourth Edition

1
3
Great Clarendon Street, Oxford OX2 6DP
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide in
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and certain other countries
Published in the United States
by Oxford University Press Inc., New York
© Alan Clements, 2006
The moral rights of the author have been asserted
Database right Oxford University Press (maker)
First published 1985
Second edition 1991
Third edition 2000
Fourth edition 2006-01-18
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press,
or as expressly permitted by law, or under terms agreed with the appropriate
reprographics rights organization. Enquiries concerning reproduction
outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above
You must not circulate this book in any other binding or cover
and you must impose the same condition on any acquirer
British Library Cataloguing in Publication Data
Data available
Library of Congress Cataloging in Publication Data
Data available
Typeset by Newgen Imaging Systems (P) Ltd., Chennai, India.
Printed in Great Britain
on acid-free paper by
Bath Press Ltd, Bath

ISBN 0–19–927313–8 978–0–19–927313–3

10 9 8 7 6 5 4 3 2 1
PREFACE

Principle of Computer Hardware is aimed at students taking course. Topics like advanced computer arithmetic, timing
an introductory course in electronics, computer science, or diagrams, and reliability have been included to show how the
information technology. The approach is one of breadth computer hardware of the real world often differs from that
before depth and we cover a wide range of topics under the of the first-level course in which only the basics are taught.
general umbrella of computer hardware. I’ve also broadened the range of topics normally found in
I have written Principles of Computer Hardware to achieve first-level courses in computer hardware and provided sec-
two goals. The first is to teach students the basic concepts on tions introducing operating systems and local area networks,
which the stored-program digital computer is founded. as these two topics are so intimately related to the hardware of
These include the representation and manipulation of infor- the computer. Finally, I have discovered that stating a formula
mation in binary form, the structure or architecture of a com- or a theory is not enough—many students like to see an
puter, the flow of information within a computer, and the actual application of the formula. Wherever possible I have
exchange of information between its various peripherals. We provided examples.
answer the questions, ‘How does a computer work’, and ‘How Like most introductory books on computer architecture,
is it organized?’ The second goal is to provide students with a I have chosen a specific microprocessor as a vehicle to illustrate
foundation for further study. In particular, the elementary some of the important concepts in computer architecture. The
treatment of gates and Boolean algebra provides a basis for ideal computer architecture is rich in features and yet easy to
a second-level course in digital design, and the introduction understand without exposing the student to a steep learning
to the CPU and assembly-language programming provides a curve. Some microprocessors have very complicated architec-
basis for advanced courses on computer architecture/organi- tures that confront the students with too much fine detail early
zation or microprocessor systems design. in their course. We use Motorola’s 68K microprocessor because
This book is written for those with no previous knowledge it is easy to understand and incorporates many of the most
of computer architecture. The only background information important features of a high-performance architecture. This
needed by the reader is an understanding of elementary alge- book isn’t designed to provide a practical assembly language
bra. Because students following a course in computer science programming course. It is intended only to illustrate the oper-
or computer technology will also be studying a high-level ation of a central processing unit by means of a typical assem-
language, we assume that the reader is familiar with the con- bly language. We also take a brief look at other microprocessors
cepts underlying a high-level language. to show the range of computer architectures available.
When writing this book, I set myself three objectives. By You will see the words computer, CPU, processor, micro-
adopting an informal style, I hope to increase the enthusiasm processor, and microcomputer in this and other texts. The part
of students who may be put off by the formal approach of of a computer that actually executes a program is called a
more traditional books. I have also tried to give students an CPU (central processing unit) or more simply a processor.
insight into computer hardware by explaining why things are A microprocessor is a CPU fabricated on a single chip of sili-
as they are, instead of presenting them with information to be con. A computer that is constructed around a microprocessor
learned and accepted without question. I have included sub- can be called a microcomputer. To a certain extent, these terms
jects that would seem out of place in an elementary first-level are frequently used interchangeably.
CONTENTS

2.5 An introduction to Boolean algebra 56


1 Introduction to computer hardware 1
2.5.1 Axioms and theorems of Boolean algebra 56
1.1 What is computer hardware? 1 2.5.2 De Morgan’s theorem 63
2.5.3 Implementing logic functions in NAND or NOR two
1.2 Why do we teach computer hardware? 2 logic only 65
1.2.1 Should computer architecture remain in the 2.5.4 Karnaugh maps 67
CS curriculum? 3
2.6 Special-purpose logic elements 83
1.2.2 Supporting the CS curriculum 4
2.6.1 The multiplexer 84
1.3 An overview of the book 5
2.6.2 The demultiplexer 84
1.4 History of computing 6
2.7 Tri-state logic 87
1.4.1 Navigation and mathematics 6
2.7.1 Buses 88
1.4.2 The era of mechanical computers 6
1.4.3 Enabling technology—the telegraph 8 2.8 Programmable logic 91
1.4.4 The first electromechanical computers 10
2.8.1 The read-only memory as a logic element 91
1.4.5 The first mainframes 11
2.8.2 Programmable logic families 93
1.4.6 The birth of transistors, ICs, and microprocessors 12
2.8.3 Modern programmable logic 94
1.4.7 Mass computing and the rise of the Internet 14
2.8.4 Testing digital circuits 96
1.5 The digital computer 15 SUMMARY 98
1.5.1 The PC and workstation 15 P RO B L E M S 98
1.5.2 The computer as a data processor 15
1.5.3 The computer as a numeric processor 16
3 Sequential logic 101
1.5.4 The computer in automatic control 17

1.6 The stored program computer—an overview 19 3.1 The RS flip-flop 103
1.7 The PC—a naming of parts 22 3.1.1 Analyzing a sequential circuit by assuming initial
SUMMARY 23 conditions 104
P RO B L E M S 23 3.1.2 Characteristic equation of an RS flip-flop 105
3.1.3 Building an RS flip-flop from NAND gates 106
2 Gates, circuits, and combinational logic 25 3.1.4 Applications of the RS flip-flop 106
3.1.5 The clocked RS flip-flop 108
2.1 Analog and digital systems 26
3.2 The D flip-flop 109
2.2 Fundamental gates 28 3.2.1 Practical sequential logic elements 110
2.2.1 The AND gate 28 3.2.2 Using D flip-flops to create a register 110
2.2.2 The OR gate 30 3.2.3 Using Digital Works to create a register 111
2.2.3 The NOT gate 31 3.2.4 A typical register chip 112
2.2.4 The NAND and NOR gates 31
2.2.5 Positive, negative, and mixed logic 32 3.3 Clocked flip-flops 113
3.3.1 Pipelining 114
2.3 Applications of gates 34
3.3.2 Ways of clocking flip-flops 115
2.4 Introduction to Digital Works 40 3.3.3 Edge-triggered flip-flops 116
2.4.1 Creating a circuit 41 3.3.4 The master–slave flip-flop 117
2.4.2 Running a simulation 45 3.3.5 Bus arbitration—an example 118
2.4.3 The clock and sequence generator 48
3.4 The JK flip-flop 120
2.4.4 Using Digital Works to create embedded circuits 50
2.4.5 Using a macro 52 3.5 Summary of flip-flop types 121
xii Contents

3.6 Applications of sequential elements 122


5 The instruction set architecture 203
3.6.1 Shift register 122
3.6.2 Asynchronous counters 128 5.1 What is an instruction set architecture? 204
3.6.3 Synchronous counters 132
5.2 Introduction to the CPU 206
3.7 An introduction to state machines 134
5.2.1 Memory and registers 207
3.7.1 Example of a state machine 136
5.2.2 Register transfer language 208
3.7.2 Constructing a circuit to implement
5.2.3 Structure of the CPU 209
the state table 138
5.3 The 68K family 210
SUMMARY 139
P RO B L E M S 140 5.3.1 The instruction 210
5.3.2 Overview of addressing modes 215
4 Computer arithmetic 145
5.4 Overview of the 68K’s instructions 217

4.1 Bits, bytes, words, and characters 146 5.4.1 Status flags 217
5.4.2 Data movement instructions 218
4.2 Number bases 148
5.4.3 Arithmetic instructions 218
4.3 Number base conversion 150 5.4.4 Compare instructions 220
4.3.1 Conversion of integers 150 5.4.5 Logical instructions 220
4.3.2 Conversion of fractions 152 5.4.6 Bit instructions 221
5.4.7 Shift instructions 221
4.4 Special-purpose codes 153 5.4.8 Branch instructions 223
4.4.1 BCD codes 153 SUMMARY 226
4.4.2 Unweighted codes 154 P RO B L E M S 226
4.5 Error-detecting codes 156
4.5.1 Parity EDCs 158 6 Assembly language programming 228
4.5.2 Error-correcting codes 158
4.5.3 Hamming codes 160 6.1 Structure of a 68K assembly language program 228
4.5.4 Hadamard codes 161 6.1.1 Assembler directives 229
6.1.2 Using the cross-assembler 232
4.6 Data-compressing codes 163
4.6.1 Huffman codes 164 6.2 The 68K’s registers 234
4.6.2 Quadtrees 167 6.2.1 Data registers 235
6.2.2 Address registers 236
4.7 Binary arithmetic 169
4.7.1 The half adder 170 6.3 Features of the 68K’s instruction set 237
4.7.2 The full adder 171 6.3.1 Data movement instructions 237
4.7.3 The addition of words 173 6.3.2 Using arithmetic operations 241
6.3.3 Using shift and logical operations 244
4.8 Signed numbers 175
6.3.4 Using conditional branches 244
4.8.1 Sign and magnitude representation 176
4.8.2 Complementary arithmetic 176 6.4 Addressing modes 249
4.8.3 Two’s complement representation 177 6.4.1 Immediate addressing 249
4.8.4 One’s complement representation 180 6.4.2 Address register indirect addressing 250
6.4.3 Relative addressing 259
4.9 Floating point numbers 181
4.9.1 Representation of floating point numbers 182 6.5 The stack 262
4.9.2 Normalization of floating point numbers 183 6.5.1 The 68K stack 263
4.9.3 Floating point arithmetic 186 6.5.2 The stack and subroutines 266
4.9.4 Examples of floating point calculations 188 6.5.3 Subroutines, the stack, and parameter
passing 271
4.10 Multiplication and division 189
4.10.1 Multiplication 189 6.6 Examples of 68K programs 280
4.10.2 Division 194 6.6.1 A circular buffer 282
SUMMARY 198 SUMMARY 287
P RO B L E M S 198 P RO B L E M S 287
Contents xiii

9.1.2 Instruction formats 366


7 Structure of the CPU 293
9.1.3 Instruction types 366
9.1.4 Addressing modes 367
7.1 The CPU 294
9.1.5 On-chip peripherals 367
7.1.1 The address path 294
9.2 The microcontroller 367
7.1.2 Reading the instruction 295
7.1.3 The CPU’s data paths 296 9.2.1 The M68HC12 368
7.1.4 Executing conditional instructions 298 9.3 The ARM—an elegant RISC processor 375
7.1.5 Dealing with literal operands 300
9.3.1 ARM’s registers 375
7.2 Simulating a CPU 300 9.3.2 ARM instructions 377
7.2.1 CPU with an 8-bit instruction 301 9.3.3 ARM branch instructions 380
7.2.2 CPU with a 16-bit instruction 304 9.3.4 Immediate operands 381
9.3.5 Sequence control 381
7.3 The random logic control unit 308
9.3.6 Data movement and memory reference
7.3.1 Implementing a primitive CPU 308 instructions 382
7.3.2 From op-code to operation 312 9.3.7 Using the ARM 385
7.4 Microprogrammed control units 315 SUMMARY 397
P RO B L E M S 398
7.4.1 The microprogram 316
7.4.2 Microinstruction sequence control 319
10 Buses and input/output mechanisms 399
7.4.3 User-microprogrammed processors 320
SUMMARY 322 10.1 The bus 400
P RO B L E M S 322
10.1.1 Bus architecture 400
8 Accelerating performance 325 10.1.2 Key bus concepts 400
10.1.3 The PC bus 404
10.1.4 The IEEE 488 bus 407
8.1 Measuring performance 326
10.1.5 The USB serial bus 411
8.1.1 Comparing computers 326
10.2 I/O fundamentals 412
8.2 The RISC revolution 327
10.2.1 Programmed I/O 413
8.2.1 Instruction usage 328
10.2.2 Interrupt-driven I/O 415
8.2.2 Characteristics of RISC architectures 329
10.3 Direct memory access 422
8.3 RISC architecture and pipelining 335
10.4 Parallel and serial interfaces 423
8.3.1 Pipeline hazards 336
8.3.2 Data dependency 338 10.4.1 The parallel interface 424
8.3.3 Reducing the branch penalty 339 10.4.2 The serial interface 428
8.3.4 Implementing pipelining 341 SUMMARY 433
P RO B L E M S 433
8.4 Cache memory 344
8.4.1 Effect of cache memory on computer
11 Computer Peripherals 435
performance 345
8.4.2 Cache organization 346
11.1 Simple input devices 436
8.4.3 Considerations in cache design 350
11.1.1 The keyboard 436
8.5 Multiprocessor systems 350
11.1.2 Pointing devices 440
8.5.1 Topics in Multiprocessor Systems 352
11.2 CRT, LED, and plasma displays 444
8.5.2 Multiprocessor organization 353
8.5.3 MIMD architectures 356 11.2.1 Raster-scan displays 445
11.2.2 Generating a display 445
SUMMARY 362
11.2.3 Liquid crystal and plasma displays 447
P RO B L E M S 362
11.2.4 Drawing lines 450
9 Processor architectures 365 11.3 The printer 452
11.3.1 Printing a character 453
9.1 Instruction set architectures and their resources 365 11.3.2 The Inkjet printer 453
9.1.1 Register sets 365 11.3.3 The laser printer 455
xiv Contents

11.4 Color displays and printers 457 12.7.3 RAID systems 531
11.4.1 Theory of color 457 12.7.4 The floppy disk drive 532
11.4.2 Color CRTs 458 12.7.5 Organization of data on disks 533
11.4.3 Color printers 460 12.8 Optical memory technology 536
11.5 Other peripherals 461 12.8.1 Storing and reading information 537
11.5.1 Measuring position and movement 461 12.8.2 Writable CDs 540
11.5.2 Measuring temperature 463 SUMMARY 543
11.5.3 Measuring light 464 P RO B L E M S 543
11.5.4 Measuring pressure 464
11.5.5 Rotation sensors 464 13 The operating system 547
11.5.6 Biosensors 465

11.6 The analog interface 466 13.1 The operating system 547
11.6.1 Analog signals 466 13.1.1 Types of operating system 548
11.6.2 Signal acquisition 467
13.2 Multitasking 550
11.6.3 Digital-to-analog conversion 473
13.2.1 What is a process? 551
11.6.4 Analog-to-digital conversion 477
13.2.2 Switching processes 551
11.7 Introduction to digital signal processing 486
13.3 Operating system support from the CPU 554
11.7.1 Control systems 486
13.3.1 Switching states 555
11.7.2 Digital signal processing 488
13.3.2 The 68K’s two Stacks 556
SUMMARY 491
P RO B L E M S 492 13.4 Memory management 561
13.4.1 Virtual memory 563
12 Computer memory 493 13.4.2 Virtual memory and the 68K family 565
SUMMARY 568
12.1 Memory hierarchy 493 P RO B L E M S 568
12.2 What is memory? 496

12.3 Memory technology 496 14 Computer communications 569

12.3.1 Structure modification 496


14.1 Background 570
12.3.2 Delay lines 496
12.3.3 Feedback 496 14.1.1 Local area networks 571
12.3.4 Charge storage 497 14.1.2 LAN network topology 572
12.3.5 Magnetism 498 14.1.3 History of computer communications 574
12.3.6 Optical 498 14.2 Protocols and computer communications 576
12.4 Semiconductor memory 498 14.2.1 Standards bodies 578
12.4.1 Static semiconductor memory 498 14.2.2 Open systems and standards 578
12.4.2 Accessing memory—timing diagrams 499 14.3 The physical layer 584
12.4.3 Dynamic memory 501
14.3.1 Serial data transmission 584
12.4.4 Read-only semiconductor memory devices 505
14.4 The PSTN 587
12.5 Interfacing memory to a CPU 506
14.4.1 Channel characteristics 587
12.5.1 Memory organization 507
14.4.2 Modulation and data transmission 588
12.5.2 Address decoders 508
14.4.3 High-speed transmission over the PSTN 591
12.6 Secondary storage 515
14.5 Copper cable 592
12.6.1 Magnetic surface recording 515
14.5.1 Ethernet 593
12.6.2 Data encoding techniques 521
14.6 Fiber optic links 595
12.7 Disk drive principles 524
12.7.1 Disk drive operational parameters 527 14.7 Wireless links 596
12.7.2 High-performance drives 529 14.7.1 Spread spectrum technology 598
Contents xv

14.8 The data link layer 599 SUMMARY 609


P RO B L E M S 610
14.8.1 Bit-oriented protocols 599
14.8.2 The Ethernet data link layer 603
Appendix: The 68000 instruction set 611
14.9 Routing techniques 604
Bibliography 641
14.9.1 Centralized routing 607
14.9.2 Distributed routing 607 Index 643
14.9.3 IP (Internet protocol) 607 Contents and installation instructions for the CD-Rom 653
Introduction to computer hardware 1
CHAPTER MAP

1 Introduction to 2 Logic elements and 3 Sequential logic 4 Computer arithmetic


computer hardware Boolean algebra We can classify logic circuits into In Chapter 4 we demonstrate
two groups: the combinational how numbers are represented in
Digital computers are
circuit we described in Chapter 2 binary form and look at binary
constructed from millions of very
and the sequential circuit which arithmetic. We also demonstrate
simple logic elements called
forms the subject of this chapter. how the properties of binary
gates. In this chapter we
A sequential circuit includes numbers are exploited to create
introduce the fundamental gates
memory elements and its current codes that compress data or even
and demonstrate how they can
behavior is governed by its past detect and correct errors.
be combined to create circuits
that carry out the basic functions inputs. Typical sequential circuits
required in a computer. are counters and registers.

INTRODUCTION
In this chapter we set the scene for the rest of the book. We define what we mean by computer
hardware, explain just why we teach computer hardware to computer science students, provide a
very brief history of computing, and look at the role of the computer.

range from the CPU to the memory and input/output


1.1 What is computer hardware? devices. The programs that control the operation of the com-
puter are its software. When a program is inside a computer
To begin with I feel we ought to define the terms hardware
its physical existence lies in the state of electronic switches,
and software. I could give a deeply philosophical definition,
the magnetization of tiny particles on magnetic disk, or
but perhaps an empirical one is more helpful. If any part of a
bumps on the surface of a CD or DVD. We can’t point to a
computer system clatters on the floor when dropped, it’s
program in a computer any more than we can point to
hardware. If it doesn’t, it’s software. This is a good working
a thought in the brain.
definition, but it’s incomplete because it implies that hardware
Two terms closely related to hardware are architecture and
and software are unrelated entities. As we will discover, soft-
organization. A computer’s architecture is an abstract view of
ware and hardware are often intimately related. Moreover, the
the computer, which describes what it can do. A computer’s
operation of much of today’s hardware is controlled by
architecture is the assembly language programmer’s view of
firmware (software embedded in the structure of the hardware).
the machine. You could say that architecture has a similar
A computer’s hardware includes all the physical compon-
meaning to functional specification. The architecture is an
ents that make up the computer system. These components

HARDWARE, ARCHITECTURE, AND ORGANIZATION


Hardware means all the parts of the computer that are not two computers that have been constructed in different ways
software. It includes the processor, its memory, the buses that with different technologies but with the same architecture.
connect devices together, and the peripherals. Organization describes how a computer is implemented.
Architecture describes the internal organization of a Organization is concerned with a computer’s functional
computer in an abstract way; that is, it defines the capabilities components and their interrelationship. Organization is about
of the computer and its programming model. You can have buses, timing, and circuits.
2 Chapter 1 Introduction to computer hardware

abstraction of the computer. A computer’s organization should the lives of computer scientists and programmers be
describes how the architecture is implemented; that is, it made miserable by forcing them to learn what goes on inside
defines the hardware used to implement the architecture. a computer?
Let’s look at a simple example that distinguishes between If topics in the past have fallen out of the curriculum with no
architecture and organization. A computer with a 32-bit obviously devastating effect on the education of students, what
architecture performs operations on numbers that are 32 bits about today’s curriculum? Do we still need to teach computer
wide. You could build two versions of this computer. One is science students about the internal operation of the computer?
a high-performance device that adds two 32-bit numbers in a Computer architecture is the oldest component of the
single operation. The other is a low-cost processor that gets computer curriculum. The very first courses on computer
a 32-bit number by bringing two 16-bit numbers from mem- science were concerned with the design and construction of
ory one after the other. Both computers end up with the same computers. At that time programming was in its infancy and
result, but one takes longer to get there. They have the same compilers, operating systems, and databases did not exist.
architecture but different organizations. In the 1940s, working with computers meant building com-
Although hardware and software are different entities, puters. By the 1960s computer science had emerged as a
there is often a trade-off between them. Some operations can discipline. With the introduction of courses in program-
be carried out either by a special-purpose hardware system or ming, numerical methods, operating systems, compilers, and
by means of a program stored in the memory of a general- databases, the then curriculum reflected the world of the
purpose computer. The fastest way to execute a given task is mainframe.
to build a circuit dedicated exclusively to the task. Writing a In the 1970s computer architecture was still, to a considerable
program to perform the same task on an existing computer extent, an offshoot of electronics. Texts were more concerned
may be much cheaper, but the task will take longer, as the with the circuits in a computer than with the fundamental prin-
computer’s hardware wasn’t optimized to suit the task. ciples of computer architecture as now encapsulated by the
Developments in computer technology in the late 1990s expression instruction set architecture (ISA).
further blurred the distinction between hardware and soft- Computer architecture experienced a renaissance in the
ware. Digital circuits are composed of gates that are wired 1980s. The advent of the low-cost microprocessor-based sys-
together. From the mid-1980s onward manufacturers were tems and the single-board computer meant that computer
producing large arrays of gates that could be interconnected science students could study and even get hands-on experi-
electronically to create a particular circuit. As technology ence of microprocessors. They could build simple systems,
progressed it became possible to reconfigure the connections test them, interface them to peripherals such as LEDs and
between gates while the circuit was operating. We now have switches, and write programs in machine code. Bill Gates
the technology to create computers that can repair errors, himself is a product of this era.
restructure themselves as the state of the art advances, or even Assembly language programming courses once mirrored
evolve. high-level language programming courses—students were
taught algorithms such as sorting and searching in assembly
language, as if assembly language were no more than the poor
person’s C. Such an approach to computer architecture is
1.2 Why do we teach computer now untenable. If assembly language is taught at all today, it is
hardware? used as a vehicle to illustrate instruction sets, addressing
modes, and other aspects of a processor’s architecture.
A generation ago, school children in the UK had to learn In the late 1980s and early 1990s computer architecture
Latin in order to enter a university. Clearly, at some point it underwent another change. The rise of the RISC micro-
was thought that Latin was a vital prerequisite for everyone processor turned the focus of attention from complex
going to university. When did they realize that students could instruction set computers to the new high-performance,
still benefit from a university education without a prior highly pipelined, 32-bit processors. Moreover, the increase in
knowledge of Latin? Three decades ago students taking a the performance of microprocessors made it harder and
degree in electronics had to study electrodynamics, the dance harder for classes to give students the hands-on experience
of electrons in magnetic fields, a subject so frightening that they had a few years earlier. In the 1970s a student could con-
older students passed on its horrors to the younger ones in struct a computer with readily available components and
hushed tones. Today, electrodynamics is taught only to stu- simple electronic construction techniques. By the 1990s clock
dents on specialist courses. rates rose to well over 100 MHz and buses were 32 bits wide
We can watch a television program without understanding making it difficult for students to construct microprocessor-
how a cathode ray tube operates, or fly in a Jumbo jet without based systems as they did in the 1980s. High clock rates
ever knowing the meaning of thermodynamics. Why then require special construction techniques and complex chips
1.2 Why do we teach computer hardware? 3

have hundreds of connections rather than the 40- or 64-pin program that did not provide students with an insight into the
packages of the 8086/68K era. computer would be strange in a university that purports to edu-
In the 1990s computer architecture was largely concerned cate students rather than to merely train them.
with the instruction set architecture, pipelining, hazards, Those supporting the continued teaching of computer
superscalar processors, and cache memories. Topics such as architecture employ several traditional arguments. First,
microprocessor systems design at the chip level and micro- education is not the same as training and CS students are not
processor interfacing had largely vanished from the CS cur- simply being shown how to use commercial computer pack-
riculum. These topics belonged to the CEng and EE curricula. ages. A course leading to a degree in computer science should
In the 1990s a lot was happening in computer science; for also cover the history and the theoretical basis for the subject.
example, the introduction of new subject areas such as Without an appreciation of computer architecture, the com-
object-oriented programming, communications and net- puter scientist cannot understand how computers have
works, and the Internet/WWW. The growth of the computer developed and what they are capable of.
market, particularly for those versed in the new Internet- However, there are concrete reasons why computer archi-
based skills, caused students to look at their computing tecture is still relevant in today’s world. Indeed, I would
curricula in a rather pragmatic way. Many CS students will maintain that computer architecture is as relevant to the
join companies using the new technologies, but very few of needs of the average CS student today as it was in the past.
them indeed will ever design chips or become involved with Suppose a graduate enters the industry and is asked to select
cutting-edge work in computer architecture. At my own uni- the most cost-effective computer for use throughout a large
versity, the demand for courses in Internet-based computing organization. Understanding how the elements of a com-
has risen and fewer students have elected to take computer puter contribute to its overall performance is vital—is it
architecture when it is offered as an elective. better to spend $50 on doubling the size of the cache or $100
on increasing the clock speed by 500 MHz?
1.2.1 Should computer architecture Computer architecture cannot be divorced entirely from
software. The majority of processors are found not in PCs or
remain in the CS curriculum? workstations but in embedded1 applications. Those designing
Developments in computer science have put pressure on multiprocessors and real-time systems have to understand
course designers to remove old material to make room for the fundamental architectural concepts and limitations of com-
new. The fraction of students that will ever be directly mercially available processors. Someone developing an auto-
involved in computer design is declining. Universities pro- mobile electronic ignition system may write their code in C,
vide programs in multimedia-based computing and visual- but might have to debug the system using a logic analyzer that
ization at both undergraduate and postgraduate levels. displays the relationship between interrupt requests from
Students on such programs do not see the point of studying engine sensors and the machine-level code.
computer architecture. There are two other important reasons for teaching com-
Some have suggested that computer architecture is a prime puter architecture. The first reason is that computer architec-
candidate for pruning. It is easy to argue that computer archi- ture incorporates a wealth of important concepts that appear
tecture is as irrelevant to computer science as, say, Latin is to in other areas of the CS curriculum. This point is probably
the study of contemporary English literature. If a student least appreciated by computer scientists who took a course in
never writes an assembly language program or designs an architecture a long time ago and did little more than learn
instruction set, or interfaces a memory to a processor, why about bytes, gates, and assembly language. The second reason
should we burden them with a course in computer architec- is that computer architecture covers more than the CPU; it is
ture? Does the surgeon study metallurgy in order to under- concerned with the entire computer system. Because so many
stand how a scalpel operates? computer users now have to work with the whole system
It’s easy to say that an automobile driver does not have to (e.g. by configuring hard disks, by specifying graphics cards,
understand the internal combustion engine to drive an auto- by selecting a SCSI or FireWire interface), a course covering
mobile. However, it is patently obvious that a driver who the architecture of computer systems is more a necessity than
understands mechanics can drive in such a way as to enhance a luxury.
the life of the engine and to improve its performance. The Some computer architecture courses cover the architecture
same is true of computer architecture; understanding com- and organization of the processor but make relatively little
puter systems can improve the performance of software if the
software is written to exploit the underlying hardware. 1
An embedded computer is part of a product (digital camera, cell
The digital computer lies at the heart of computer science.
phone, washing machine) that is not normally regarded as a computing
Without it, computer science would be little more than a branch device. The end user does not know about the computer and does not
of theoretical mathematics. The very idea of a computer science have to program it.
4 Chapter 1 Introduction to computer hardware

reference to buses, memory systems, and high-performance crashing the operating system or other applications. Covering
peripherals such as graphics processors. Yet, if you scan the these topics in an architecture course makes the student
pages of journals devoted to personal/workstation comput- aware of the support the processor provides for the operating
ing, you will rapidly discover that much attention is focused system and enables those teaching operating system courses
on aspects of the computer system other than the CPU itself. to concentrate more on operating system facilities than on
Computer technology was once driven by the paperless- the mechanics of the hardware.
office revolution with its demand for low-cost mass storage, High-level languages make it difficult to access peripherals
sufficient processing power to rapidly recompose large docu- directly. By using an assembly language we can teach students
ments, and low-cost printers. Today, computer technology is how to write device drivers that directly control interfaces.
being driven by the multimedia revolution with its insatiable Many real interfaces are still programmed at machine level by
demand for pure processing power, high bandwidths, low accessing registers within them. Understanding computer
latencies, and massive storage capacities. architecture and assembly language can facilitate the design
These trends have led to important developments in com- of high-performance interfaces.
puter architecture such as special hardware support for mul- Programming and data structures Students encounter the
timedia applications. The demands of multimedia are being notion of data types and the effect of strong and weak data
felt in areas other than computer architecture. Hard disks typing when they study high-level languages. Because
must provide a continuous stream of data because people can computer architecture deals with information in its most
tolerate a degraded picture much better than a picture with primitive form, students rapidly become familiar with the
even the shortest discontinuities. Such demands require advantages and disadvantages of weak typing. They learn the
efficient track-seeking algorithms, data buffering, and high- power that you have over the hardware by being able to apply
speed, real-time error correction and detection algorithms. almost any operations to binary data. Equally, they learn
Similarly, today’s high data densities require frequent recal- the pitfalls of weak typing as they discover the dangers of
ibration of tracking mechanisms due to thermal effects. Disk inappropriate operations on data.
drives now include SMART technologies from the AI world Computer architecture is concerned with both the type of
that are able to predict disk failure before it occurs. These operations that act on data and the various ways in which the
developments have as much right to be included in the archi- location of an operand can be accessed in memory. Computer
tecture curriculum as developments in the CPU. addressing modes and the various means of accessing data
naturally lead on to the notion of pointers. Students learn
about how pointers function at machine level and the sup-
1.2.2 Supporting the CS curriculum
port offered for pointers by various architectures. This aspect
It is in the realm of software that you can most easily build a is particularly important if the student is to become a C
case for the teaching of assembly language. During a student’s programmer.
career, they will encounter abstract concepts in areas ranging An understanding of procedure call and parameter passing
from programming languages to operating systems to real- mechanisms is vital to anyone studying processor perform-
time programming to AI. The foundation of many of these ance. Programming in assembly language readily demon-
concepts lies in assembly language programming and computer strates the passing of parameters by value and by reference.
architecture. Computer architecture provides bottom-up Similarly, assembly language programming helps you to
support for the top-down methodology taught in high-level understand concepts such as the use of local variables and
languages. Consider some of the areas where computer re-entrant programming.
architecture can add value to the CS curriculum. Students sometimes find the concept of recursion difficult.
The operating system Computer architecture provides a You can use an assembly language to demonstrate how recur-
firm basis for students taking operating system courses. In sion operates by tracing through the execution of a program.
computer architecture students learn about the hardware The student can actually observe how the stack grows as
that the operating system controls and the interaction procedures are called.
between hardware and software; for example, in cache sys- Computer science fundamentals Computer architecture is
tems. Consider the following two examples of the way in awash with concepts that are fundamental to computer science
which the underlying architecture provides support for generally and which do not appear in other parts of the
operating system facilities. undergraduate curriculum. A course in computer architecture
Some processors operate in either a privileged or a user can provide a suitable forum for incorporating fundamental
mode. The operating system runs in the privileged or pro- principles in the CS curriculum. For example, a first course in
tected mode and all applications run in the user mode. This computer architecture introduces the student to bits and
mechanism creates a secure environment in which the effects binary encoding techniques. A few years ago much time
of an error in an application program can be prevented from would have been spent on special-purpose codes for BCD
1.3 An overview of the book 5

arithmetic. Today, the professor is more likely to introduce introduction to flip-flops and their application to sequential
error-correcting codes (important in data communications circuits such as counters, timers, and sequencers.
systems and secure storage mechanisms) and data-compression Computer architecture and assembly language The prim-
codes (used by everyone who has ever zipped a file or used a itive instructions that directly control the operation of a com-
JPEG-encoded image). puter are called machine-code instructions and are composed
of sequences of binary values stored in memory. As program-
ming in machine code is exceedingly tedious, an aid to
1.3 An overview of the book machine code programming called assembly language has
been devised. Assembly language is shorthand permitting the
It’s difficult to know just what should be included in an intro- programmer to write machine-code instructions in a simple
ductory course on computer architecture, organization, and abbreviated form of plain language. High-level languages
hardware—and what should be excluded. Any topic can be (Java, C, Pascal, BASIC) are sometimes translated into a series
expanded to an arbitrary extent; if we begin with gates and of assembly-language instructions by a compiler as an inter-
Boolean algebra, do we go on to semiconductor devices and mediate step on the way to pure machine code. This interme-
then semiconductor physics? In this book, we cover the mater- diate step serves as a debugging tool for programmers who
ial specified by typical computer curricula. However, I have wish to examine the operation of the compiler and the output
included a wider range of material because the area of influ- it produces. Computer architecture is the assembly language
ence encompassed by the digital computer has expanded programmer’s view of a computer.
greatly in recent years. The major subject areas dealt with in Programmers writing in assembly language require a
this book are outlined below. detailed knowledge of the architecture of their machines,
Computer arithmetic Our system of arithmetic using the unlike the corresponding programmers operating in high-
base 10 has evolved over thousands of years. The computer car- level languages. At this point I must say that we introduce
ries out its internal operations on numbers represented in the assembly language to explain the operation of the central pro-
base two. This anomaly isn’t due to some magic power inher- cessing unit. Apart from certain special exceptions, programs
ent in binary arithmetic but simply because it would be uneco- should be written in a high-level language whenever possible.
nomic to design a computer to operate in denary (base 10) Computer organization This topic is concerned with how a
arithmetic. At this point I must make a comment. Time and computer is arranged in terms of its building blocks (i.e. the
time again, I read in the popular press that the behavior of logic and sequential circuits made from gates and flip-flops).
digital computers and their characteristics are due to the fact We introduce the architecture of a simple hypothetical com-
that they operate on bits using binary arithmetic whereas we puter and show how it can be organized in terms of func-
humans operate on digits using decimal arithmetic. That idea tional units. That is, we show how the computer goes about
is nonsense. Because there is a simple relationship between reading an instruction from memory, decoding it, and then
binary and decimal numbers, the fact that computers represent executing it.
information in binary form is a mere detail of engineering. It’s Input/output It’s no good having a computer unless it can
the architecture and organization of a computer that makes it take in new information (programs and data) and output the
behave in such a different way to the brain. results of its calculations. In this section we show how
Basic logic elements and Boolean algebra Today’s techno- information is moved into and out of the computer. The
logy determines what a computer can do. We introduce the operation of three basic input/output devices is described:
basic logic elements, or gates, from which a computer is made the keyboard, the display, and the printer.
up and show how these can be put together to create more We also examine the way in which analog signals can be
complex units such as arithmetic units. The behavior of these converted into digital form, processed digitally by a com-
gates determines both the way in which the computer carries puter, and then converted back into analog form. Until the
out arithmetic operations and the way in which the func- mid-1990s it was uneconomical to process rapidly changing
tional parts of a computer interact to execute a program. We analog signals (e.g. speech, music, video) digitally. The advent
need to understand gates in order to appreciate why the com- of high-speed low-cost digital systems has opened up a new
puter has developed in the way it has. The operation of cir- field of computing called digital signal processing (DSP). We
cuits containing gates can be described in terms of a formal introduce DSP and outline some of the basic principles.
notation called Boolean algebra. An introduction to Boolean Memory devices A computer needs memory to hold pro-
algebra is provided because it enables designers to build cir- grams, data, and any other information it may require at
cuits with the least number of gates. some point in the future. We look at the immediate access
As well as gates, computers require devices called flip-flops, store and the secondary store (sometimes called backing
which can store a single binary digit. The flip-flop is the store). An immediate access store provides a computer with
basic component of many memory units. We provide an the data it requires in approximately the same time as it takes
6 Chapter 1 Introduction to computer hardware

the computer to execute one of its machine-level operations. 1.4.1 Navigation and mathematics
The secondary store is very much slower and it takes thou-
sands of times longer to access data from a secondary store The development of navigation in the eighteenth century was
than from an immediate access store. However, secondary probably the most important driving force behind auto-
storage is used because it is immensely cheaper than an mated computation. It’s easy to tell how far north or south of
immediate access store and it is also non-volatile (i.e. the data the equator you are—you measure the height of the sun
isn’t lost when you switch the computer off). The most pop- above the horizon at midday and then use the elevation to
ular form of secondary store is the disk drive, which relies on work out your latitude. Unfortunately, calculating your lon-
magnetizing a moving magnetic material to store data. gitude relative to the prime meridian through Greenwich in
Optical storage technology in the form of the CD and DVD England is very much more difficult. Longitude is determined
became popular in the 1990s because it combines the rela- by comparing your local time (obtained by observing the
tively fast access time of the disk with the large capacity and angle of the sun) with the time at Greenwich.
low cost of the tape drive. The mathematics of navigation uses trigonometry, which
Operating systems and the computer An operating system is concerned with the relationship between the sides and
coordinates all the functional parts of the computer and pro- angles of a triangle. In turn, trigonometry requires an accur-
vides an interface for the user. We can’t cover the operating ate knowledge of the sine, cosine, and tangent of an angle.
system in detail here. However, because the operating system Those who originally devised tables of sines and other math-
is intimately bound up with the computer’s hardware, we do ematical functions (e.g. square roots and logarithms) had to
cover two of its aspects—multiprogramming and memory do a lot of calculation by hand. If x is expressed in radians
management. Multiprogramming is the ability of a computer (where 2␲ radians  360) and x 1, the expression for
to appear to run two or more programs simultaneously. sin(x) can be written as an infinite series of the form
Memory management permits several programs to operate
x 3 x 5 x7 … x2n1
as though each alone occupied the computer’s memory and sin(x)  x      (1)n
3! 5! 7! (2n  1)!
enables a computer with a small, high-speed random access
memory and a large, low-speed serial access memory (i.e. Although the calculation of sin(x) requires the summation
hard disk) to appear as if it had a single large high-speed ran- of an infinite number of terms, we can obtain a reasonably
dom access memory. accurate approximation to sin(x) by adding just a handful of
Computer communications Computers are networked when terms together because xn tends towards zero as n increases
they are connected together. Networking computers has for x 1.
many advantages, not least of which is the ability to share An important feature of the formula for sin(x) is that it
peripherals such as printers and scanners. Today we have two involves nothing more than the repetition of fundamental
types of network—the local area network (LAN), which arithmetic operations (addition, subtraction, multiplication,
interconnects computers within a building, and the wide area and division). The first term in the series is x itself. The sec-
network, which interconnects computers over much greater ond term is x 3/3!, which is derived from the first term by
distances (e.g. the Internet). Consequently, we have devoted a multiplying it by x2 and dividing it by 1  2  3. Each new
section to showing how computers communicate with each term is formed by multiplying the previous term by x2 and
other. Three aspects of computer communications are exam- dividing it by 2n(2n  1), where n is number of the term. It
ined. The first is the protocols or rules that govern the way in would eventually occur to people that this process could be
which information is exchanged between systems in an mechanized.
orderly fashion. The second is the way in which digital
information in a computer is encoded in a form suitable for 1.4.2 The era of mechanical computers
transmission over a serial channel, the various types of
channel, the characteristics of the physical channel, and how During the seventeenth century major advances were made in
data is reconstituted at the receiver. The third provides a watch making; for example, in 1656 Christiaan Huygens
brief overview of both local area and wide area networks. designed the first pendulum clock. The art of watch making
helped develop the gear wheels required by the first mechanical
calculators. In 1642 the French scientist Blaise Pascal designed
1.4 History of computing a simple mechanical adder and subtracter using gear wheels
with 10 positions marked on them. One complete rotation of
The computer may be a marvel of our age, but it has had a long a gear wheel caused the next wheel on its left to move one posi-
and rich history. Writing a short introduction to computer tion (a bit like the odometer used to record an automobile’s
history is difficult because there is so much to cover. Here we mileage). Pascal’s most significant contribution was the use of
provide some of the milestones in the computer’s development. a ratchet device that detected a carry (i.e. a rotation of a wheel
1.4 History of computing 7

from 9 to 0) and nudged the next wheel on the left one digit. In other words, if two wheels show 58 and the right-hand wheel is rotated two positions forward, it moves to the 0 position and advances the 5 to 6 to get 60. Pascal's calculator, the Pascaline, could perform addition only.

In fact, Wilhelm Schickard, rather than Pascal, is now generally credited with the invention of the first mechanical calculator. His device, created in 1623, was more advanced than Pascal's because it could also perform partial multiplication. Schickard died in a plague and his invention didn't receive the recognition it merited. Such near-simultaneous developments have been a significant feature of the history of computer hardware.

Within a few decades, mechanical computing devices advanced to the stage where they could perform addition, subtraction, multiplication, and division—all the operations required by armies of clerks to calculate the trigonometric functions we mentioned earlier.

The industrial revolution and early control mechanisms

If navigation provided a requirement for mechanized computing, other developments provided important steps along the path to the computer. By about 1800 the industrial revolution in Europe was well under way. Weaving was one of the first industrial processes to be mechanized. A weaving loom passes a shuttle pulling a horizontal thread to and fro between vertical threads held in a frame. By changing the color of the thread pulled by the shuttle and selecting whether the shuttle passes in front of or behind the vertical threads, you can weave a particular pattern. Controlling the loom manually is tedious and time consuming. In 1801 Joseph Jacquard designed a loom that could automatically weave a predetermined pattern. The information necessary to control the loom was stored in the form of holes cut in cards—the presence or absence of a hole at a certain point controlled the behavior of the loom. Information was read by rods that pressed against the card and either went through a hole or were stopped by the card. Some complex patterns required as many as 10 000 cards strung together in the form of a tape.

Babbage and the computer

Two of the most significant advances in computing were made by Charles Babbage, a UK mathematician born in 1792: his difference engine and his analytical engine. Like other mathematicians of his time, Babbage had to perform all calculations by hand and sometimes he had to laboriously correct errors in published mathematical tables. Living in the age of steam, it was quite natural that Babbage asked himself whether mechanical means could be applied to arithmetic calculations.

The difference engine was a complex array of interconnected gears and linkages that performed addition and subtraction rather like Pascal's mechanical adder. Its purpose was to mechanize the calculation of polynomial functions and automatically print the result. It was a calculator rather than a computer because it could carry out only a set of predetermined operations.

Babbage's difference engine employed finite differences to calculate polynomial functions. Trigonometric functions can be expressed as polynomials in the form a₀x⁰ + a₁x¹ + a₂x² + · · ·. The difference engine can evaluate such expressions automatically. Table 1.1 demonstrates how you can use the method of finite differences to create a table of squares without having to use multiplication. The first column contains the natural integers 1, 2, 3, . . . The second column contains the squares of these integers (i.e. 1, 4, 9, . . .). Column 3 contains the first difference between successive pairs of numbers in column 2; for example, the first value is 4 − 1 = 3, the second value is 9 − 4 = 5, and so on. The final column is the second difference between successive pairs of first differences. As you can see, the second difference is always 2.

Number    Number squared    First difference    Second difference
1         1
2         4                 3
3         9                 5                   2
4         16                7                   2
5         25                9                   2
6         36                11                  2
7         49                13                  2

Table 1.1 The use of finite differences to calculate squares.

Suppose we want to calculate the value of 8² using finite differences. We simply use Table 1.1 in reverse by starting with the second difference and working back to the result. If the second difference is 2, the next first difference (after 7²) is 13 + 2 = 15. Therefore, the value of 8² is the value of 7² plus the first difference; that is, 49 + 15 = 64. We have generated 8² without using multiplication. This technique can be extended to evaluate many other mathematical functions.
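To make the method concrete, the short C program below is a sketch (written for this discussion, not Babbage's own algorithm) that extends Table 1.1 using nothing but additions: each new square is the previous square plus the current first difference, and each new first difference grows by the constant second difference.

#include <stdio.h>

int main(void)
{
    int square = 1;              /* 1 squared                              */
    int first_diff = 3;          /* difference between 2 squared and 1 squared */
    const int second_diff = 2;   /* constant second difference for squares */

    printf("1\t1\n");
    for (int n = 2; n <= 10; n++) {
        square += first_diff;        /* next square = previous square + first difference */
        printf("%d\t%d\n", n, square);
        first_diff += second_diff;   /* next first difference */
    }
    return 0;
}

Running the sketch prints the table of squares from 1 to 10 without a single multiplication, which is exactly the economy that made the difference engine attractive.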
Babbage's difference engine project was cancelled in 1842 because of increasing costs. He did design a simpler difference engine using 31-digit numbers to handle seventh-order differences, but no one was interested in financing it. In 1853 George Scheutz in Sweden constructed a working difference engine using 15-digit arithmetic and fourth-order differences. Incidentally, in 1991 a team at the Science Museum in London used modern construction techniques to build Babbage's difference engine. It worked.

Charles Babbage went on to design the analytical engine, which was to be capable of performing any mathematical operation automatically. This truly remarkable and entirely mechanical device was nothing less than a general-purpose computer that could be programmed. The analytical engine included many of the elements associated with a modern electronic computer—an arithmetic processing unit that carries out all the calculations, a memory that stores data, and input and output devices. Unfortunately, the sheer scale of the analytical engine rendered its construction, at that time, impossible. However, it is not unreasonable to call Babbage the father of the computer because his machine incorporated many of the intellectual concepts at the heart of the computer.

Babbage envisaged that his analytical engine would be controlled by punched cards similar to those used to control the operation of the Jacquard loom. Two types of punched card were required. Operation cards specified the sequence of operations to be carried out by the analytical engine and variable cards specified the locations in the store of inputs and outputs.

One of Babbage's collaborators was Ada Gordon², a mathematician who became interested in the analytical engine when she translated a paper on it from French to English. When Babbage discovered the translation, he asked her to expand it. She added about 40 pages of notes about the machine and provided examples of how the proposed analytical engine could be used to solve mathematical problems. Gordon worked closely with Babbage and it's been reported that she even suggested the use of the binary system to store data. She noticed that certain groups of operations are carried out over and over again during the course of a calculation and proposed that a conditional instruction be used to force the analytical engine to perform the same sequence of operations many times. This action is the same as the repeat or loop function found in most of today's high-level languages.

Gordon devised algorithms to perform the calculation of Bernoulli numbers, making her one of the founders of numerical computation. Some regard Gordon as the world's first computer programmer, who was constructing algorithms a century before programming became a recognized discipline—and long before any real computers were constructed.

Mechanical computing devices continued to be used in compiling mathematical tables and performing the arithmetic operations used by everyone from engineers to accountants until about the 1960s. The practical high-speed computer had to await the development of the electronics industry.

1.4.3 Enabling technology—the telegraph

Many of the technological developments required to construct a practical computer took place at the end of the nineteenth century. The most important of these events was the invention of the telegraph. We now provide a short history of the development of telecommunications.

One of the first effective communication systems was the optical semaphore, which passed visual signals from tower to tower across Europe. Claude Chappe in France developed a system with two arms, each of which could be in one of seven positions. The Chappe telegraph could send a message across France in about half an hour (good weather permitting). The telegraph was used for commercial purposes, but it also helped Napoleon to control his army.

King Maximilian had seen how the French visual semaphore system had helped Napoleon's military campaigns and in 1809 he asked the Bavarian Academy of Sciences to devise a scheme for high-speed communication over long distances. Samuil T. von Sömmering suggested a crude telegraph using 35 conductors, one for each character. Sömmering's telegraph transmitted electricity from a battery down one of these 35 wires; at the receiver, the current was passed through a tube of acidified water. Passing a current through the water breaks it down into oxygen and hydrogen. To use the Sömmering telegraph you detected the bubbles that appeared in one of the 35 glass tubes and then wrote down the corresponding character. Sömmering's telegraph was ingenious but too slow to be practical.

In 1819 Hans C. Oersted made one of the greatest discoveries of all time when he found that an electric current creates a magnetic field round a conductor. This breakthrough allowed you to create a magnetic field at will. In 1828 Cooke exploited Oersted's discovery when he invented a telegraph that used the magnetic field round a wire to deflect a compass needle. The growth of the railway networks in the early nineteenth century spurred the development of the telegraph because you had to warn stations down the line that a train was arriving. By 1840 a 40-mile stretch between Slough and Paddington in London had been linked using the telegraph of Charles Wheatstone and William Cooke. The Wheatstone and Cooke telegraph used five compass needles that normally hung in a vertical position. The needles could be deflected by coils to point to the appropriate letter. You could transmit one of 20 letters (J, C, Q, U, X, and Z were omitted).

The first long-distance data links

We take wires and cables for granted. In the early nineteenth century, plastics hadn't been invented and the only material available for insulation waterproofing was a type of pitch called asphaltum. In 1843 a form of rubber called gutta percha was discovered. The Atlantic Telegraph Company created an insulated cable for underwater use containing a single copper conductor made of seven twisted strands, surrounded by gutta percha insulation and protected by a ring of 18 iron wires coated with hemp and tar.

² Ada Gordon married William King in 1835. King inherited the title Earl of Lovelace and Gordon became Countess of Lovelace. Gordon is often considered the founder of scientific computing.
Submarine cable telegraphy began with a cable crossing the English Channel to France in 1850. The cable failed after only a few messages had been exchanged and a more successful attempt was made the following year. Transatlantic cable laying from Ireland began in 1857 but was abandoned when the strain of the cable descending to the ocean bottom caused it to snap under its own weight. The Atlantic Telegraph Company tried again in 1858. Again, the cable broke after only 3 miles but the two cable-laying ships managed to splice the two ends. The cable eventually reached Newfoundland in August 1858 after suffering several more breaks and storm damage.

It soon became clear that this cable wasn't going to be a commercial success. The receiver used the magnetic field from the current in the cable to deflect a magnetized needle. Unfortunately, after crossing the Atlantic the signal was too weak to be detected reliably. The original voltage used to drive a current down the cable was approximately 600 V. So, they raised the voltage to about 2000 V to drive more current along the cable and improve the detection process. Unfortunately, such a high voltage burned through the primitive insulation, shorted the cable, and destroyed the first transatlantic telegraph link after about 700 messages had been transmitted in 3 months.

In England, the Telegraph Construction and Maintenance Company developed a new 2300-mile-long cable weighing 9000 tons, which was three times the diameter of the failed 1858 cable. Laying this cable required the largest ship in the world, the Great Eastern. After a failed attempt in 1865 a transatlantic link was finally established in 1866. It cost $100 in gold to transmit 20 words across the first transatlantic cable at a time when a laborer earned $20/month.

Telegraph distortion and the theory of transmission lines

The telegraph hadn't been in use for very long before people discovered that it suffered from a problem called telegraph distortion. As the length of cables increased it became apparent that a sharply rising pulse at the transmitter end of a cable was received at the far end as a highly distorted pulse with long rise and fall times. This distortion meant that the 1866 transatlantic telegraph cable could transmit only eight words per minute. The problem was eventually handed to William Thomson at the University of Glasgow.

Thomson, who later became Lord Kelvin, was one of the nineteenth century's greatest scientists. He published more than 600 papers, developed the second law of thermodynamics, and created the absolute temperature scale. In 1855 Thomson presented a paper to the Royal Society analyzing the effect of pulse distortion, which became the cornerstone of what is now called transmission line theory. The transmission line effect reduces the speed at which signals can change state. The cause of the problems investigated by Thomson lies in the physical properties of electrical conductors and insulators. Thomson's theories enabled engineers to construct data links with much lower levels of distortion.

Thomson contributed to computing by providing the theory that describes the flow of pulses in circuits, which enabled the development of the telegraph and telephone networks. In turn, the switching circuits used to route messages through networks were used to construct the first electromechanical computers.

Developments in communications networks

Although the first telegraph systems operated from point to point, the introduction of the telephone led to the development of switching centers. First-generation switching centers employed a telephone operator who manually plugged a subscriber's line into a line connected to the next switching center in the link. By the end of the nineteenth century, the infrastructure of computer networks was already in place.

In 1897 an undertaker called Almon Strowger was annoyed to find that he was not getting the trade he expected because the local telephone operator was connecting prospective clients to Strowger's competitor. So, Strowger cut out the human factor by inventing the automatic telephone exchange that used electromechanical devices to route calls between exchanges. When you dial a number using a rotary dial, a series of pulses are sent down the line to a rotary switch. If you dial, for example, '5', the five pulses move a switch five steps clockwise to connect you to line number five, which routes your call to the next switching center. Consequently, when you phoned someone using Strowger's technology the number you dialed determined the route your call took through the system.

By the time the telegraph was well established, radio was being developed. James Clerk Maxwell predicted radio waves in 1864 following his study of light and electromagnetic waves. Heinrich Hertz demonstrated the existence of radio waves in 1887 and Guglielmo Marconi is credited with being the first to use radio to span the Atlantic in 1901.

The light bulb was invented by Thomas A. Edison in 1879. Investigations into its properties led Ambrose Fleming to discover the diode in 1904. A diode is a light bulb surrounded by a wire mesh that allows electricity to flow only one way between the filament (the cathode) and the mesh (the anode). The flow of electrons from the cathode gave us the term 'cathode ray tube'. In 1906 Lee de Forest modified Fleming's diode by placing a wire mesh between the cathode and anode. By changing the voltage on this mesh, it was possible to change the flow of current between the cathode and anode. This device, called a triode, could amplify signals. Without the vacuum tube to amplify weak signals, modern electronics would have been impossible. The term electronics refers to circuits with amplifying or active devices such as tubes or transistors. The first primitive computers using electromechanical
devices did not use vacuum tubes and, therefore, these computers were not electronic computers.

The telegraph, telephone, and vacuum tube were all steps on the path to the development of the computer and, later, computer networks. As each of these practical steps was taken, there was a corresponding development in the accompanying theory (in the case of radio, the theory came before the discovery).

Typewriters, punched cards, and tabulators

Another important part of computer history is the humble keyboard, which is still the prime input device of most personal computers. As early as 1711 Henry Mill, an Englishman, described a mechanical means of printing text on paper a character at a time. In 1829 the American William Burt was granted the first US patent for a typewriter, although his machine was not practical. It wasn't until 1867 that three Americans, Christopher Sholes, Carlos Glidden, and Samuel Soule, invented their Type-Writer, the forerunner of the modern typewriter. One of the problems encountered by Sholes was the tendency of his machine to jam when digraphs such as 'th' and 'er' were typed. Hitting the 't' and 'h' keys at almost the same time caused the letters 't' and 'h' to strike the paper simultaneously and jam. His solution was to arrange the letters on the keyboard to avoid the letters of digraphs being located side by side. This layout has continued until today and is now described by the sequence of the first six letters on the left of the top row—QWERTY. Because the same digraphs do not occur in different languages, the layout of a French keyboard is different to that of an English keyboard. It is reported that Sholes made it easy to type 'Type-Writer' by putting all these characters on the same row.

Another enabling technology that played a key role in the development of the computer was the tabulating machine, a development of the mechanical calculator that processes data on punched cards. One of the largest data processing operations carried out in the USA during the nineteenth century was the US census. A census involves taking the original data, sorting and collating it, and tabulating the results.

In 1879 Herman Hollerith became involved in the evaluation of the 1880 US Census data. He devised an electric tabulating system that could process data stored on cards punched by clerks from the raw census data. Hollerith's electric tabulating machine could read cards, process the information on the cards, and then sort them. The tabulator helped lay the foundations of the data processing industry.

Three threads converged to make the computer possible: Babbage's calculating machines, which performed arithmetic calculations; communications technology, which laid the foundations for electronics and even networking; and the tabulator, because it and the punched card media provided a means of controlling machines, inputting data into them, and storing information.

1.4.4 The first electromechanical computers

The forerunner of today's digital computers used electromechanical components called relays, rather than electronic circuits such as vacuum tubes and transistors. A relay is constructed from a coil of wire wound round an iron cylinder. When a current flows through the coil, it generates a magnetic field that causes the iron to act like a magnet. A flat springy strip of iron is located close to the iron cylinder. When the cylinder is magnetized, the iron strip is attracted, which, in turn, opens or closes a switch. Relays can perform any operation that can be carried out by the logic gates making up today's computers. You cannot construct fast computers from relays because they are far too slow, bulky, and unreliable. However, the relay did provide a technology that bridged the gap between the mechanical calculator and the modern electronic digital computer.

One of the first electromechanical computers was built by Konrad Zuse in Germany. Zuse's Z2 and Z3 computers were used in the early 1940s to design aircraft in Germany. The heavy bombing at the end of the Second World War destroyed Zuse's computers and his contribution to the development of the computer was ignored for many years. He is mentioned here to demonstrate that the notion of a practical computer occurred to different people in different places. The Z3 was completed in 1941 and was the world's first functioning programmable electromechanical computer. Zuse's Z4 computer was finished in 1945, was later taken to Switzerland, and was used at the Federal Polytechnical Institute in Zurich until 1955.

As Zuse was working on his computer in Germany, Howard Aiken at Harvard University constructed his Harvard Mark I computer in 1944 with both financial and practical support from IBM. Aiken was familiar with Babbage's work and his electromechanical computer, which he first envisaged in 1937, operated in a similar way to Babbage's proposed analytical engine. The original name for the Mark I was the Automatic Sequence Controlled Calculator, which, perhaps, better describes its nature.

Aiken's machine was a programmable calculator that was used by the US Navy until the end of the Second World War. Just like Babbage's machine, the Mark I used decimal counter wheels to implement its main memory consisting of 72 words of 23 digits plus a sign. The program was stored on a paper tape (similar to Babbage's punched cards), although operations and addresses (i.e. data) were stored on the same tape. Input and output operations used punched cards or an electric typewriter. Because the Harvard Mark I treated data and instructions separately, the term Harvard architecture is now applied to any computer with separate paths for data and instructions. The Harvard Mark I didn't support conditional operations and therefore is not strictly a computer.
However, it was later modified to permit multiple paper tape readers with a conditional transfer of control between the readers.

1.4.5 The first mainframes

Relays have moving parts and can't operate at very high speeds. It took the invention of the vacuum tube by John A. Fleming and Lee de Forest to make possible the design of high-speed electronic computers. John V. Atanasoff is now credited with the partial construction of the first completely electronic computer. Atanasoff worked with Clifford Berry at Iowa State College on their computer from 1937 to 1942. Their machine used a 50-bit binary representation of numbers and was called the ABC (Atanasoff–Berry Computer). It was designed to solve linear equations and wasn't a general purpose computer. Atanasoff and Berry abandoned their computer when they were assigned to other duties because of the war.

ENIAC

The first electronic general purpose digital computer was John W. Mauchly's ENIAC (Electronic Numerical Integrator and Calculator), completed in 1945 at the University of Pennsylvania. ENIAC was intended for use at the Army Ordnance Department to create firing tables that relate the range of a field gun to its angle of elevation, wind conditions, and so on. For many years, ENIAC was regarded as the first electronic computer, although credit was later given to Atanasoff and Berry because Mauchly had visited Atanasoff and read his report on the ABC machine.

ENIAC used 17 480 vacuum tubes and weighed about 30 t. ENIAC was a decimal machine capable of storing 20 10-digit decimal numbers. IBM card readers and punches implemented input and output operations. ENIAC was programmed by means of a plug board that looked like an old pre-automatic telephone switchboard; that is, a program was set up manually by means of wires. In addition to these wires, the ENIAC operator had to manually set up to 6000 multiposition mechanical switches. Programming ENIAC was very time consuming and tedious.

ENIAC did not support dynamic conditional operations (e.g. IF . . . THEN). An operation could be repeated a fixed number of times by hard wiring the loop counter to an appropriate value. Because the ability to make a decision depending on the value of a data element is vital to the operation of all computers, ENIAC was not a computer in today's sense of the word. It was an electronic calculator.

John von Neumann, EDVAC and IAS

The first US computer to use the stored program concept was EDVAC (Electronic Discrete Variable Automatic Computer). EDVAC was designed by some of the same team that designed the ENIAC at the Moore School of Engineering at the University of Pennsylvania.

John von Neumann, one of the leading mathematicians of his age, participated in EDVAC's design. He wrote a document entitled 'First draft of a report on the EDVAC', which compiled the results of various design meetings. Before von Neumann, computer programs were stored either mechanically or in separate memories from the data used by the program. Von Neumann introduced the concept of the stored program—an idea so commonplace today that we take it for granted. In a stored program von Neumann machine both the program that specifies what operations are to be carried out and the data used by the program are stored in the same memory. The stored program computer consists of a memory containing instructions coded in binary form. The control part of the computer reads an instruction from memory, carries it out, then reads the next instruction, and so on. Although EDVAC is generally regarded as the first stored program computer, this is not strictly true because data and instructions did not have a common format and were not interchangeable.
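The fetch–execute cycle at the heart of a stored program machine can be sketched in a few lines of C. This is an illustrative toy, not EDVAC's actual instruction set: it assumes a hypothetical instruction format with an operation field and an address field, and a single memory array holding instructions and data alike.

#include <stdio.h>

enum { LOAD, ADD, STORE, HALT };        /* hypothetical operation codes */

int main(void)
{
    /* Instructions and data share one memory; an instruction is op*100 + address. */
    int memory[16] = {
        LOAD  * 100 + 10,   /* 0: acc = memory[10]       */
        ADD   * 100 + 11,   /* 1: acc = acc + memory[11] */
        STORE * 100 + 12,   /* 2: memory[12] = acc       */
        HALT  * 100,        /* 3: stop                   */
        0, 0, 0, 0, 0, 0,
        2, 3, 0, 0, 0, 0    /* locations 10 and 11 hold the data */
    };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        int instruction = memory[pc++];   /* fetch and advance the program counter */
        int op = instruction / 100;       /* decode the operation */
        int address = instruction % 100;  /* decode the address   */
        switch (op) {                     /* execute              */
        case LOAD:  acc = memory[address];   break;
        case ADD:   acc += memory[address];  break;
        case STORE: memory[address] = acc;   break;
        case HALT:  running = 0;             break;
        }
    }
    printf("result = %d\n", memory[12]);  /* prints 5 */
    return 0;
}

Because the program sits in the same memory as its data, a new program can be loaded simply by writing new values into memory, which is precisely what distinguishes the stored program machine from ENIAC's plug boards and switches.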
EDVAC promoted the design of memory systems. The capacity of EDVAC's mercury delay line memory was 1024 words of 44 bits. A mercury delay line operates by converting data into pulses of ultrasonic sound that continuously recirculate in a long column of mercury in a tube.

EDVAC was not a great commercial success. Its construction was largely completed by April 1949, but it didn't run its first applications program until October 1951. Because of its adoption of the stored program concept, EDVAC became a topic in the first lecture course given on computers. These lectures took place before EDVAC was actually constructed.

Another important early computer was IAS, constructed by von Neumann and his colleagues at the Institute for Advanced Study in Princeton. IAS is remarkably similar to modern computers. Main memory was 1K words and a magnetic drum was used to provide 16K words of secondary storage. The magnetic drum was the forerunner of today's disk drive. Instead of recording data on the flat platter found in a hard drive, data was stored on the surface of a rotating drum.

In the late 1940s the Whirlwind computer was produced at MIT for the US Air Force. This was the first computer intended for real-time information processing. It employed ferrite-core memory (the standard form of mainframe memory until the semiconductor integrated circuit came along in the late 1960s). A ferrite core is a tiny bead of a magnetic material that can be magnetized clockwise or counterclockwise to store a one or a zero. Ferrite core memory is no longer widely used today, although the term remains in expressions such as core dump, which means a printout of the contents of a region of memory.

One of the most important centers of early computer development in the 1940s was Manchester University in
England. In 1948 Tom Kilburn created a prototype computer called the Manchester Baby. This was a demonstration machine that tested the concept of the stored program computer and the Williams store, which stored data on the surface of a cathode ray tube. Some regard the Manchester Baby as the world's first true stored program computer.

IBM's place in computer history

No history of the computer can neglect the giant of the computer world, IBM, which has had such an impact on the computer industry. Although IBM grew out of the Computing–Tabulating–Recording (C–T–R) Company founded in 1911, its origin dates back to the 1880s. The C–T–R Company was the result of a merger between the International Time Recording (ITR) Company, the Computing Scale Company of America, and Herman Hollerith's Tabulating Machine Company (founded in 1896). In 1914 Thomas J. Watson, Senior, left the National Cash Register Company to join the C–T–R company and soon became President. In 1917, a Canadian unit of the C–T–R company called International Business Machines Co. Ltd was set up. Because this name was so well suited to the C–T–R company's role, they adopted it for the whole organization in 1924. IBM bought Electromatic Typewriters in 1933 and the first IBM electric typewriter was marketed 2 years later.

IBM's first contact with computers was via its relationship with Aiken at Harvard University. In 1948 Watson Senior at IBM gave the order to construct the Selective Sequence Electronic Calculator. Although this was not a stored program computer, it was IBM's first step from the punched card tabulator to the computer.

Thomas J. Watson, Junior, was responsible for building the Type 701 EDPM (Electronic Data Processing Machine) in 1953 to convince his father that computers were not a threat to IBM's conventional business. The 700 series was successful and dominated the mainframe market for a decade. In 1956 IBM launched a successor, the 704, which was the world's first supercomputer. The 704 was largely designed by Gene Amdahl, who later founded his own supercomputer company.

IBM's most important mainframe was the System/360, which was first delivered in 1965. The importance of the 32-bit System/360 is that it was a member of a series of computers, each with the same architecture (i.e. programming model) but with different performance; for example, the System/360 model 91 was 300 times faster than the model 20. IBM developed a common operating system, OS/360, for their series. Other manufacturers built their own computers that were compatible with System/360 and thereby began the slow process towards standardization in the computer industry.

In 1968 the System/360 model 85 became the first computer to implement cache memory. Cache memory keeps a copy of frequently used data in very high-speed memory to reduce the number of accesses to the slower main store. Cache memory has become one of the most important features of today's high performance systems.

In August 1981 IBM became the first major manufacturer to market a PC. IBM had been working on a PC since about 1979 when it was becoming obvious that IBM's market would eventually start to come under threat from PC manufacturers such as Apple and Commodore. IBM not only sold mainframes and personal computers—by the end of the 1970s IBM had introduced the floppy disk, computerized supermarket checkouts, and the first automatic teller machines.

1.4.6 The birth of transistors, ICs, and microprocessors

Since the 1940s computer hardware has become smaller and faster. The power-hungry and unreliable vacuum tube was replaced by the smaller, reliable transistor in the 1950s. The transistor plays the same role as a thermionic tube; the only real difference is that a transistor switches a current flowing through a crystal rather than a beam of electrons flowing through a vacuum. The transistor was invented by William Shockley, John Bardeen, and Walter Brattain at AT&T's Bell Lab in 1948.

If you can put one transistor on a slice of silicon, you can put two or more transistors on the same piece of silicon. The idea occurred to Jack St Clair Kilby at Texas Instruments in 1958. Kilby built a working model and filed a patent early in 1959. In January of 1959, Robert Noyce at Fairchild Semiconductor was also thinking of the integrated circuit. He too applied for a patent and it was granted in 1961. Today, both Noyce and Kilby are regarded as the joint inventors of the IC.

The minicomputer era

The microprocessor was not directly derived from the mainframe computer. Between the mainframe and the microprocessor lies the minicomputer, a cut-down version of the mainframe, which appeared in the 1960s. By the 1960s many departments of computer science could afford their own minicomputers and a whole generation of students learned computer science from PDP-11s and NOVAs in the 1960s and 1970s. Some of these minicomputers were used in real-time applications (i.e. applications in which the computer has to respond to changes in its inputs within a specified time).

One of the first minicomputers was Digital Equipment Corporation's PDP-5, introduced in 1964. This was followed by the PDP-8 in 1966 and the very successful PDP-11 in 1969. Even the PDP-11 would be regarded as a very basic machine by today's standards. Digital Equipment built on their success with the PDP-11 series and introduced their VAX architecture in 1978 with the VAX-11/780, which dominated the minicomputer world in the 1980s. The VAX
EARLY MICROPROCESSOR SPINOFFS

The first two major microprocessors were the 8080 and the 6800 from Intel and Motorola, respectively. Other microprocessor manufacturers emerged when engineers left Intel and Motorola to start their own companies. Federico Faggin, one of the designers of Intel's first microprocessors, left the company and founded Zilog in 1974. Zilog made the Z80, which was compatible with Intel's 8080 at the machine-code level. The Z80 has a superset of the 8080's instructions.

A group of engineers left Motorola to form MOS Technology in 1975. They created the 6502 microprocessor, which was similar to the 6800 but not software compatible with it. The 6502 was the first low-cost microprocessor and was adopted by Apple and several other early PCs.
range was replaced by the 64-bit Alpha architecture (a high-performance microprocessor) in 1991. The Digital Equipment Corporation, renamed Digital, was taken over by Compaq in 1998.

Microprocessor and the PC

Credit for creating the world's first microprocessor, the 4004, goes to Ted Hoff and Faggin at Intel. Three engineers from Japan worked with Hoff to implement a calculator's digital logic circuits in silicon. Hoff developed a general purpose computer that could be programmed to carry out calculator functions. Towards the end of 1969 the structure of a programmable calculator had emerged. The 4004 used about 2300 transistors and is considered the first general purpose programmable microprocessor, even though it was only a 4-bit device.

The 4004 was rapidly followed by the 8-bit 8008 microprocessor, which was originally intended for a CRT application. By using some of the production techniques developed for the 4004, Intel was able to manufacture the 8008 as early as March 1972. The 8008 was soon replaced by a better version, the first really popular general purpose 8-bit microprocessor, the 8080 (in production in early 1974). Shortly after the 8080 went into production, Motorola created its own competitor, the 8-bit 6800.

Six months after the 8008 was introduced, the first ready-made computer based on the 8008, the Micral, was designed and built in France. The term microcomputer was coined to refer to the Micral, although the Micral was not successful in the USA. In January 1975 Popular Electronics magazine published an article on microcomputer design by Ed Roberts, who had a small company called MITS. Roberts' computer was called Altair and was constructed from a kit.

Although the Altair was intended for hobbyists, it had a significant impact and sold 2000 kits in its first year. In March 1976, Steve Wozniak and Steve Jobs designed a 6502-based computer, which they called the Apple 1. A year later in 1977 they created the Apple II with 16 kbytes of ROM, 4 kbytes of RAM, and a color display and keyboard. Although unsophisticated, this was the first practical PC.

As microprocessor technology improved, it became possible to put more and more transistors on larger and larger chips of silicon. Microprocessors of the early 1980s were not only more powerful than their predecessors in terms of the speed at which they could execute instructions, they were also more sophisticated in terms of the facilities they offered. Intel took the core of their 8080 microprocessor and converted it from an 8-bit into a 16-bit machine, the 8086. Motorola did not extend their 8-bit 6800 to create a 16-bit processor. Instead, they started again and did not attempt to achieve either object or source code compatibility with earlier processors. By beginning with a clean slate, Motorola was able to create a 32-bit microprocessor with an exceptionally clean architecture in 1979.

Several PC manufacturers adopted the 68K; Apple used it in the Macintosh and it was incorporated in the Atari and Amiga computers. All three of these computers were regarded as technically competent and had many very enthusiastic followers. The Macintosh was sold as a relatively high-priced black box with the computer, software, and peripherals from a single source. This approach could not compete with the IBM PC, launched in 1981, with an open system architecture that allowed the user to purchase hardware and software from the supplier with the best price. The Atari and Amiga computers suffered because they had the air of the games machine. Although the Commodore Amiga in 1985 had many of the hallmarks of a modern multimedia machine, it was derided as a games machine because few then grasped the importance of advanced graphics and high-quality sound.

The 68K developed into the 68020, 68030, 68040, and 68060. Versions were developed for the embedded processor market and Motorola played no further role in the PC market until Apple adopted Motorola's PowerPC processor. The PowerPC came from IBM and was not a descendant of the 68K family.

Many fell in love with the Apple Mac. It was a sophisticated and powerful PC, but not a great commercial success. Apple's commercial failure demonstrates that those in the semiconductor industry must realize that commercial factors are every bit as important as architectural excellence and performance. Apple failed because their platform, from hardware to operating system, was proprietary. Apple didn't publish detailed hardware specifications or license their BIOS and operating system. IBM adopted open standards and anyone could build a copy of the IBM PC. Hundreds of manufacturers started producing parts of PCs and an entire
industry sprang up. You could buy a basic system from one place, a hard disk from another, and a graphics card from yet another supplier. By publishing standards for the PC's bus, anyone could create a peripheral for the PC. What IBM lost in the form of increased competition, they more than made up for in the rapidly expanding market. IBM's open standard provided an incentive for software writers to generate software for the PC market.

The sheer volume of PCs and their interfaces (plus the software base) pushed PC prices down and down. The Apple was perceived as over-priced. Even though Apple adopted the PowerPC, it was too late and Apple's role in the PC world was marginalized. However, by 2005, cut-throat competition from PC manufacturers was forcing IBM to abandon its PC business, whereas Apple was flourishing in a niche market that rewarded style.

A major change in direction in computer architecture took place in the 1980s when the RISC or Reduced Instruction Set Computer first appeared. Some observers expected the RISC to sweep away all CISC processors like the 8086 and 68K families. It was the work carried out by David Patterson at the University of California at Berkeley in the early 1980s that brought the RISC philosophy to a wider audience. Patterson was also responsible for coining the term 'RISC' in 1980. The Berkeley RISC was constructed at a university (like many of the first mainframes such as EDSAC) and required only a very tiny fraction of the resources consumed by these early mainframes. Indeed, the Berkeley RISC is hardly more than an extended graduate project. It took about a year to design and fabricate the RISC I in silicon. By 1983 the Berkeley RISC II had been produced and that proved to be both a testing ground for RISC ideas and the start of a new industry. Many of the principles of RISC design were later incorporated in Intel's processors.

1.4.7 Mass computing and the rise of the Internet

The Internet and digital multimedia have driven the evolution of the PC. The Internet provides interconnectivity and the digital revolution has extended into sound and vision. The cassette-based personal stereo system has been displaced by the minidisk and the MP3 player with solid state memory. The DVD with its ability to store an entire movie on a single disk first became available in 1996 and by 1998 over one million DVD players had been sold in the USA. The digital video camera that once belonged to the world of the professional filmmaker is now available to anyone with a modest income.

All these applications have had a profound effect on the computer world. Digital video requires vast amounts of storage. Within 5 years, low-cost hard disk capacities grew from about 1 Gbyte to 400 Gbytes or more. The DVD uses very sophisticated signal processing techniques that require very high-performance hardware to process the signals in real time. The MP3 player requires a high-speed data link to download music from the Internet.

The demand for increasing reality in video games and real-time image processing has spurred development in special-purpose video subsystems. Video processing requires the ability to render images, which means drawing vast numbers of polygons on the screen and filling them with a uniform color. The more polygons used to compose an image, the more accurate the rendition of the image.

The effect of the multimedia revolution has led to the commoditization of the PC, which is now just another commodity like a television or a stereo player. Equally, the growth of multimedia has forced the development of higher speed processors, low-cost high-density memory systems, multimedia-aware operating systems, data communications, and new processor architectures.

The Internet revolution

Just as the computer itself was the result of a number of independent developments (the need for automated calculation, the theoretical development of computer science, the enabling technologies of communications and electronics, the keyboard and data processing industries), the Internet was the fruit of a number of separate developments.

The principal ingredients of the Internet are communications, protocols, and hypertext. Communications systems have been developed throughout human history, as we have already pointed out when discussing the enabling technology behind the computer. The USA's Department of Defense created a scientific organization, ARPA (Advanced Research Projects Agency), in 1958 at the height of the Cold War. ARPA had some of the characteristics of the Manhattan project, which had preceded it during the Second World War. A large group of talented scientists was assembled to work on a project of national importance. From its early days ARPA concentrated on computer technology and communications systems; moreover, ARPA was moved into the academic area, which meant that it had a rather different ethos from that of the commercial world because academics cooperate and share information.

One of the reasons why ARPA concentrated on networking was the fear that a future war involving nuclear weapons would begin with an attack on communications centers, limiting the capacity to respond in a coordinated manner. By networking computers and ensuring that a message can take many paths through the network to get from its source to its destination, the network can be made robust and able to cope with the loss of some of its links or switching centers.

In 1969 ARPA began to construct a testbed for networking, a system that linked four nodes: University of California at Los Angeles, SRI (in Stanford), University of California at Santa Barbara, and University of Utah. Data was sent in the
form of individual packets or frames rather than as complete end-to-end messages. In 1972 ARPA was renamed DARPA (Defense Advanced Research Projects Agency).

In 1973 the TCP/IP (transmission control protocol/Internet protocol) was developed at Stanford; this is the set of rules that govern the routing of a packet through a computer network. Another important step on the way to the Internet was Robert Metcalfe's development of the Ethernet, which enabled computers to communicate with each other over a local area network based on a low-cost cable. The Ethernet made it possible to link computers in a university together and the ARPANET allowed the universities to be linked together. Ethernet was, however, based on techniques developed during the construction of the University of Hawaii's radio-based packet-switching ALOHAnet, another ARPA-funded project.

Up to 1983 ARPANET users had to use a numeric IP address to access other users on the Internet. In 1983 the University of Wisconsin created the Domain Name System (DNS), which allowed users to refer to a computer by its domain name rather than by its numeric IP address.

The world's largest community of physicists is at CERN in Geneva. In 1990 Tim Berners-Lee implemented a hypertext-based system to provide information to the other members of the high-energy physics community. This system was released by CERN in 1993 as the World-Wide Web (WWW). In the same year, Marc Andreessen at the University of Illinois developed a graphical user interface to the WWW, a browser called Mosaic. All that the Internet and the WWW had to do now was to grow.

1.5 The digital computer

Before beginning the discussion of computer hardware proper, we need to say what a computer is and to define a few terms. If ever an award were to be given to those guilty of misinformation in the field of computer science, it would go to the creators of HAL in 2001, R2D2 in Star Wars, K9 in Doctor Who, and Data in Star Trek. These fictional machines have generated the popular myth that a computer is a reasonably close approximation to a human brain, which stores an infinite volume of data.

The reality is a little more mundane. A computer is a machine that takes in information from the outside world, processes it according to some predetermined set of operations, and delivers the processed information. This definition of a computer is remarkably unhelpful, because it attempts to define the word computer in terms of the equally complex words information, operation, and process. Perhaps a better approach is to provide examples of what computers do by looking at the role of computers in data processing, numerical computation (popularly called number crunching), workstations, automatic control systems, and electronic systems.

1.5.1 The PC and workstation

The 1980s witnessed two significant changes in computing—the introduction of the PC and the workstation. PCs bring computing power to people in offices and in their own homes. Although primitive PCs have been around since the mid 1970s, the IBM PC and Apple Macintosh transformed the PC from an enthusiast's toy into a useful tool. Software such as word processors, databases, and spreadsheets revolutionized the office environment, just as computer-aided design packages revolutionized the industrial design environment. Today's engineer can design a circuit and simulate its behavior using one software package and then create a layout for a printed circuit board (PCB) with another package. Indeed, the output from the PCB design package may be suitable for feeding directly into the machine that actually makes the PCBs.

In the third edition of this book in 1999 I said

Probably the most important application of the personal computer is in word processing . . . Today's personal computers have immensely sophisticated word processing packages that create a professional-looking result and even include spelling and grammar checkers to remove embarrassing mistakes. When powerful personal computers are coupled to laser printers, anyone can use desktop publishing packages capable of creating manuscripts that were once the province of the professional publisher.

Now, all that's taken for granted. Today's PCs can take video from your camcorder, edit it, add special effects, and then burn it to a DVD that can be played on any home entertainment system.

Although everyone is familiar with the PC, the concept of the workstation is less widely understood. A workstation can be best thought of as a high-performance PC that employs state-of-the-art technology and is normally used in industry. Workstations have been produced by manufacturers such as Apollo, Sun, HP, Digital, Silicon Graphics, and Xerox. They share many of the characteristics of PCs and are used by engineers or designers. When writing the third edition, I stated that the biggest difference between workstations and PCs was in graphics and displays. This difference has all but vanished with the introduction of high-speed graphics cards and large LCD displays into the PC world.

1.5.2 The computer as a data processor

The early years of computing were dominated by the mainframe, which was largely used as a data processor. Figure 1.1 describes a computer designed to deal with the payroll of a large factory. We will call the whole thing a computer, in contrast with those who would say that the CPU (central processing unit) is the computer and all the other devices are peripherals. Inside the computer's immediate access memory is a program, a collection of primitive machine-code
operations, whose purpose is to calculate an employee's pay based on the number of hours worked, the basic rate of pay, and the overtime rate. Of course, this program would also deal with tax and any other deductions.

Figure 1.1 The computer as a data processor (a central processing unit connected to keyboards, displays, disk drives, a tape drive, a printer, a line printer, and a plotter).

Because the computer's immediate access memory is relatively expensive, only enough is provided to hold the program and the data it is currently processing. The mass of information on the employees is normally held in secondary store as a disk file. Whenever the CPU requires information about a particular employee, the appropriate data is copied from the disk and placed in the immediate access store. The time taken to perform this operation is a small fraction of a second but is many times slower than reading from the immediate access store. However, the cost of storing information on disk is very low indeed and this compensates for its relative slowness.

The tape transport stores data more cheaply than the disk (tape is called tertiary storage). Data on the disks is copied onto tape periodically and the tapes stored in the basement for security reasons. Every so often the system is said to crash and everything grinds to a halt. The last tape dump can be reloaded and the system assumes the state it was in a short time before the crash. Incidentally, the term crash had the original meaning of a failure resulting from a read/write head in a disk drive crashing into the rotating surface of a disk and physically damaging the magnetic coating on its surface.

The terminals (i.e. keyboard and display) allow operators to enter data directly into the system. This information could be the number of hours an employee has worked in the current week. The terminal can also be used to ask specific questions, such as 'How much tax did Mr XYZ pay in November?' To be a little more precise, the keyboard doesn't actually ask questions but it allows the programmer to execute a program containing the relevant question. The keyboard can be used to modify the program itself so that new facilities may be added as the system grows. Computers found in data processing are often characterized by their large secondary stores and their extensive use of printers and terminals.

1.5.3 The computer as a numeric processor

Numeric processing or number crunching refers to computer applications involving a very large volume of mathematical operations—sometimes billions of operations per job. Computers used in numeric processing applications are frequently characterized by powerful and very expensive CPUs, very high-speed memories, and relatively modest quantities of input/output devices and secondary storage. Some supercomputers are constructed from large arrays of microprocessors operating in parallel.

Most of the applications of numeric processing are best described as scientific. For example, consider the application of computers to the modeling of the processes governing the weather. The atmosphere is a continuous, three-dimensional medium composed of molecules of different gases. The scientist can't easily deal with a continuous medium, but can make the problem more tractable by considering the atmosphere to be composed of a very large number of cubes. Each of these cubes is considered to have a uniform temperature, density, and pressure. That is, the gas making up a cube shows no variation whatsoever in its physical properties. Variations exist only between adjacent cubes. A cube has six faces and the scientist can create a model of how the cube interacts with each of its six immediate neighbors.
The scientist may start by assuming that all cubes are identical (there is no initial interaction between cubes) and then consider what happens when a source of energy, the sun, is applied to the model. The effect of each cube on its neighbor is calculated and the whole process is repeated cyclically (iteration). In order to get accurate results, the size of the cubes should be small, otherwise the assumption that the properties of the air in the cube are uniform will not be valid. Moreover, the number of iterations needed to get the results to converge to a steady-state value is often very large. Consequently, this type of problem often requires very long runs on immensely powerful computers, or supercomputers as they are sometimes called. The pressure to solve complex scientific problems has been one of the major driving forces behind the development of computer architecture.

Numeric processing also pops up in some real-time applications of computers. Here, the term real-time indicates that the results of a computation are required within a given time. Consider the application of computers to air-traffic control. A rotating radar antenna sends out a radio signal that is echoed back from a target. Because radio waves travel at a fixed speed (the speed of light), radar can be used to measure the bearing and distance (range) of each aircraft. At time t, target i at position Pi,t returns an echo giving its range ri,t and bearing bi,t. Unfortunately, because of the nature of radar receivers, a random error is added to the value of each echo from a target.

The computer obtains data from the radar receiver for n targets, updated p times a minute. From this raw data that is corrupted by noise, the computer computes the position of each aircraft and its track and warns air traffic control of possible conflicts. All this requires considerable high-speed numerical computation.

Supercomputers are also used by the security services to crack codes and to monitor telecommunications traffic for certain words and phrases.

1.5.4 The computer in automatic control

The majority of computers are found neither in data processing nor in numeric processing activities. The advent of the microprocessor put the computer at the heart of many automatic control systems. When used as a control element, the computer is embedded in a larger system and is invisible to the observer. By invisible we mean that you may not be aware of the existence of the computer. Consider a computer in a pump in a gas station that receives cash in a slot and delivers a measured amount of fuel. The user doesn't care whether the pump is controlled by a microprocessor or by a clockwork mechanism, as long as it functions correctly.

A good example of a computer in automatic control is an aircraft's automatic landing system, illustrated in Fig. 1.2. The aircraft's position (height, distance from touch down, and distance off the runway centerline) and speed are determined by radio techniques in conjunction with a ground-based instrument-landing system. Information about the aircraft's position is fed to the three computers, which, individually, determine the error in the aircraft's course. The error is the difference between the aircraft's measured position and the position it should be in. The output from the computer consists of the signals required to move the aircraft's control surfaces (ailerons, elevator, and rudder) and adjust the engine's thrust. In this case the computer's program is held in ROM, a memory that can be read from but not written to. Once the program to land the aircraft has been developed, it requires only occasional modification.

Figure 1.2 The computer as a control element in a flight control system (position sensors feed three CPUs, A, B, and C, whose outputs pass through a majority logic network to the aileron (roll), rudder (yaw), and elevator (pitch) controls).

The automatic-landing system requires three computers, each working on the same calculation with the same inputs. The outputs of the computers are fed to a majority logic circuit called a voting network. If all three inputs to the majority logic circuit are the same, its output is identical to its inputs. If one computer fails, the circuit selects its output to be the same as that produced by the two good computers. This arrangement is called triple modular redundancy and makes the system highly reliable.

Figure 1.3 The computerized fuel injection system. (The diagram shows a turbocharged engine whose sensors, including air temperature and pressure sensors, a throttle sensor, boost pressure, and engine rpm, feed the CFI computer; the computer uses basic fuel injection volume maps and an ignition timing map to set the injection volume delivered by the injector and the ignition timing of the ignition control unit.)

Today the mechanical devices that display height, speed, engine performance, and the attitude of the aircraft are being replaced by electronic displays controlled by microcomputers. These displays are based on the cathode ray tube or LED, hence the expression 'glass cockpit'. Electronic displays are easier to read and more reliable than their mechanical counterparts, but they provide only the information required by the flight crew at any instant.

Figure 1.4 illustrates an aircraft display that combines a radar image of clouds together with navigational information. In this example the pilot can see that the aircraft is routed from radio beacon WCO to BKP to BED and will miss the area of storm activity. Interestingly enough, this type of indicator has been accused of deskilling pilots, because they no longer have to create their own mental image of the position of their aircraft with respect to the World from much cruder instruments.

In the 1970s the USA planned a military navigation system based on satellite technology called GPS (global positioning system), which became fully operational in the 1990s. The civilian use of this military technology turned out to be one of the most important and unexpected growth areas in the late 1990s. GPS provides another interesting application of the computer as a component in an electronic system. The principles governing GPS are very simple. A satellite in medium Earth orbit at 20 200 km contains a very accurate atomic clock and it broadcasts both the time and its position. Suppose you pick up the radio signal from one of these Navstar satellites, decode it, and compare the reported time with your watch. You may notice that the time from the satellite is inaccurate. That doesn't mean that the US military has wasted its tax dollars on faulty atomic clocks, but that the signal has been traveling through space before it reaches you. Because the speed of light is 300 000 km/s, you know that the satellite must be 20 000 km away. Every point that is 20 000 km from the satellite falls on the surface of a sphere whose center is the satellite.

If you perform the same operation with a second satellite, you know that you are on the surface of another sphere. These two spheres must intersect. Three-dimensional geometry tells us that the points at which two spheres merge is a ring. If you receive signals from three satellites, the three spheres intersect at just two points.

One of these points is normally located under the surface of the Earth and can be disregarded. You can therefore work out your exact position on the surface of the Earth. This scheme relies on you having access to the exact time (i.e. your own atomic clock). However, by receiving signals from a fourth satellite you can calculate the time as well as your position.

Several companies produce small low-cost GPS receivers that receive signals from the 24 Navstar satellites, decode the timing signals and the ephemeris (i.e. satellite position), and calculate the position in terms of latitude and longitude. By embedding a microprocessor in the system, you can process the position data in any way you want. For example, by comparing successive positions you can work out your speed and direction. If you enter the coordinates of a place you wish to go to, the processor can continually give you a bearing to head, a distance to your destination, and an estimated time of arrival.
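As a rough illustration of the arithmetic involved (our own sketch, not code from the text), the fragment below converts a signal delay into a range using the 300 000 km/s figure quoted earlier, and estimates speed from two successive position fixes; the flat-Earth distance formula is a deliberate simplification that is only reasonable over short distances.

    #include <stdio.h>
    #include <math.h>

    #define C_KM_PER_S 300000.0   /* speed of light */

    /* Range to a satellite from the signal's travel time. */
    double range_km(double delay_s)
    {
        return C_KM_PER_S * delay_s;
    }

    /* Crude speed estimate from two latitude/longitude fixes taken dt
       seconds apart: one minute of arc is taken as one nautical mile
       (1.852 km) and the Earth's curvature is ignored. */
    double speed_km_per_h(double lat1, double lon1,
                          double lat2, double lon2, double dt_s)
    {
        const double deg_to_rad = 3.14159265358979 / 180.0;
        double dy = (lat2 - lat1) * 60.0 * 1.852;
        double dx = (lon2 - lon1) * 60.0 * 1.852 * cos(lat1 * deg_to_rad);
        return sqrt(dx * dx + dy * dy) / (dt_s / 3600.0);
    }

    int main(void)
    {
        printf("range = %.0f km\n", range_km(0.067));   /* about 20 100 km */
        printf("speed = %.0f km/h\n",
               speed_km_per_h(52.000, -1.000, 52.001, -1.000, 3.6));
        return 0;
    }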
By adding a liquid crystal display and a map stored in a read-only memory to a GPS receiver, you can make a hand-held device that shows where you are with respect to towns, roads, and rivers. By 2000 you could buy a device for about $100 that showed exactly where you were on the surface of the Earth to an accuracy of a few meters.

The combination of a GPS unit plus a microprocessor plus a display system became a major growth area from about 2000 because there are so many applications. Apart from its obvious applications to sailing and aviation, GPS can be included in automobiles (the road maps are stored on CD-ROMs). GPS can even be integrated into expensive systems that aren't intended to move—unless they are stolen. If the system moves, the GPS detects the new position and reports it to the police.

Figure 1.4 Computer-controlled displays in the glass cockpit. This figure illustrates the primary navigation display (or horizontal situation indicator) that the pilot uses to determine the direction in which the aircraft is traveling (in this case 231°, approximately south-west). In addition to the heading, the display indicates the position and density of cloud and the location of radio beacons. The three arcs indicate range from the aircraft (30, 60, 90 nautical miles).

1.6 The stored program computer—an overview

Before discussing the stored program computer, consider first the human being. It's natural to compare today's wonder, the computer, with the human just as the Victorians did with their technology. They coined expressions like, 'He has a screw loose', or 'He's run out of steam', in an endeavor to describe humans in terms of their mechanical technology.

Figure 1.5 shows how a human can be viewed as a system with inputs, a processing device, and outputs. The inputs are sight (eyes), smell (nose), taste (tongue), touch (skin), sound (ear), and position (muscle tension). The brain processes information from its sensors and stores new information. The storage aspect of the brain is important because it modifies the brain's operation by a process we call learning. Because the brain learns from new stimuli, it doesn't always exhibit the same response to a given stimulus. Once a child has been burned by a flame the child reacts differently the next time they encounter fire.

Figure 1.5 The organization of a human being. (Inputs from the eyes, nose, tongue, skin, ears, and muscle tension feed the brain; the brain's outputs drive the mouth and the muscles.)

The brain's ability to both store and process information is shared by the digital computer. Computers can't yet mimic the operation of the brain and simplistic comparisons between the computer and the brain are misleading at best and mischievous at worst. A branch of computer science is devoted to the study of computers that do indeed share some of the brain's properties and attempt to mimic the human brain. Such computers are called neural nets.

The output from the brain is used to generate speech or to control the muscles needed to move the body.

Figure 1.6 shows how a computer can be compared with a human. A computer can have all the inputs a human has plus inputs for things we can't detect. By means of photoelectric devices and radio receivers, a computer can sense ultraviolet light, infrared, X-rays, and radio waves. The computer's output is also more versatile than that of humans. Computers can produce mechanical movement (by means of motors) and generate light (TV displays), sound (loudspeakers), or even heat (by passing a current through a resistor).

Figure 1.6 The organization of a computer. (Inputs such as a keyboard, mouse, modem, and scanner feed the central processing unit and memory; outputs drive a printer, sound system, and video display.)

The computer's counterpart of the brain is its central processing unit plus its storage unit (memory). Like the brain, the computer processes its various inputs and produces an output.

We don't intend to write a treatise on the differences between the brain and the computer, but we should make a comment here to avoid some of the misconceptions about digital computers.

It is probable that the brain's processing and memory functions are closely interrelated, whereas in the computer they are distinct. Some scientists believe that a major breakthrough in computing will come only when computer architecture takes on more of the features of the brain. In particular, the digital computer is serially organized and performs a single instruction at a time, whereas the brain has a highly parallel organization and is able to carry out many activities at the same time.

Somewhere in every computer's memory is a block of information that we call a program. The word program has the same meaning as it does in the expression program of studies, or program of music. A computer program is a collection of instructions defining the actions to be carried out by the computer sequentially. The classic analogy with a computer program is a recipe in a cookery book. The recipe is a sequence of commands that must be obeyed one by one in the correct order. Our analogy between the computer program and the recipe is particularly appropriate because the cookery instructions involve operations on ingredients, just as the computer carries out operations on data stored in memory.

Figure 1.7 describes how a digital computer can be divided into two parts: a central processing unit (CPU) and a memory system. The CPU reads the program from memory and executes the operations specified by the program. The word execute means carry out; for example, the instruction add A to B causes the addition of a quantity called A to a quantity called B to be carried out. The actual nature of these instructions does not matter here. What is important is that the most complex actions carried out by a computer can be broken down into a number of more primitive operations. But then again, the most sublime thoughts of Einstein or Beethoven can be reduced to a large number of impulses transmitted across the synapses of the cells in their brains.

The memory system stores two types of information: the program and the data acted on or created by the program. It isn't necessary to store both the program and data in the same memory. Most computers store programs and data in a single memory system and are called von Neumann machines.

Figure 1.7 Structure of the general purpose digital computer. (The CPU is connected to the memory by two paths: one carries the program from the memory to the CPU and the other, bidirectional, path carries data. Input and output attach to the CPU.)

A computer is little more than a black box that moves information from one point to another and processes the information as it goes along. When we say information we mean the data and the instructions held inside the computer. Figure 1.7 shows two information-carrying paths connecting the CPU to its memory. The lower path with the single arrowhead from the memory to the CPU (heavily shaded in Fig. 1.7) indicates the route taken by the computer's program. The CPU reads the sequence of commands that make up a program one by one from its memory.

The upper path (lightly shaded in Fig. 1.7) with arrows at both its ends transfers data between the CPU and memory. The program controls the flow of information along the data path. This data path is bidirectional, because data can flow in two directions. During a write cycle data generated by the program flows from the CPU to the memory where it is stored for later use. During a read cycle the CPU requests the retrieval of a data item from memory, which is transferred from the memory to the CPU.

Suppose the instruction x = y + z is stored in memory. The CPU must first fetch the instruction from memory and bring it to the CPU. Once the CPU has analyzed or decoded the instruction it has to get the values of y and z from memory. The CPU adds these values and sends the result, x, back to memory for storage.

Figure 1.8 demonstrates how the instructions making up a program and data coexist in the same memory. In this case the memory has eight locations, numbered from 0 to 7. Memory is normally regarded as an array of storage locations (boxes or pigeonholes). Each of these boxes has a unique location or address containing data. For example, in the simple memory of Fig. 1.8, address 5 contains the number 7.

Figure 1.8 The program and data in memory. Throughout this book square brackets denote 'the contents of' so that in this figure, [4] is read as the contents of memory location number 4 and is equal to 2. The memory contains:

Address 0: Get [4]           (instruction to be executed)
Address 1: Add it to [5]     (instruction)
Address 2: Put result in [6] (instruction)
Address 3: Stop              (instruction)
Address 4: 2                 (data element in memory)
Address 5: 7                 (data)
Address 6: 1                 (data)
Address 7: (unused)

One difference between computers and people is that we number m items from 1 to m, whereas the computer numbers them from 0 to m − 1. This is because the computer regards 0 (zero) as a valid identifier. Unfortunately, people often confuse 0 the identifier with 0 meaning nothing.

Information in a computer's memory is accessed by providing the memory with the address (i.e. location) of the desired data. Only one memory location is addressed at a time. If we wish to search through memory for a particular item because we don't know its address, we have to read the items one at a time until we find the desired item. It appears that the human memory works in a very different way. Information is accessed from our memories by applying a key to all locations within the memory (brain). This key is related to the data being accessed (in some way) and is not related to its location within the brain. Any memory locations containing information that associates with the key respond to the access. In other words, the brain carries out a parallel search of its memory for the information it requires.

Accessing many memory locations in parallel permits more than one location to respond to the access and is therefore very efficient. Suppose someone says 'chip' to you. The word chip is the key that is fed to all parts of your memory for matching. Your brain might produce responses of chip (silicon), chip (potato), chip (on shoulder), and chip (gambling).

The program in Fig. 1.8 occupies consecutive memory locations 0–3 and the data locations 4–6. The first instruction, get [4], means fetch the contents of memory location number 4 from the memory. We employ square brackets to denote the contents of the address they enclose, so that in this case [4] = 2. The next instruction, at address 1, is add it to [5] and means add the number brought by the previous instruction to the contents of location 5. Thus, the computer adds 2 and 7 to get 9. The third instruction, put result in [6], tells the computer to put the result (i.e. 9) in location 6. The 1 that was in location 6 before this instruction was obeyed is replaced by 9. The final instruction in location 3 tells the computer to stop.

We can summarize the operation of a digital computer by means of a little piece of pseudocode (pseudocode is a method of writing down an algorithm in a language that is a cross between a computer language such as C, Pascal, or Java and plain English). We shall meet pseudocode again.
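To make the idea concrete, here is a small C sketch (ours, not the book's pseudocode) that models the program and data of Fig. 1.8 and steps through the four instructions. The instruction encoding is invented purely for this illustration, and the program and data are held in separate arrays for clarity even though Fig. 1.8 places them in a single memory.

    #include <stdio.h>

    /* Hypothetical encoding of the four instructions in Fig. 1.8. */
    enum { GET, ADD, PUT, STOP };
    struct instr { int op; int addr; };

    int main(void)
    {
        struct instr program[4] = {
            { GET, 4 },   /* 0: get [4]           */
            { ADD, 5 },   /* 1: add it to [5]     */
            { PUT, 6 },   /* 2: put result in [6] */
            { STOP, 0 }   /* 3: stop              */
        };
        int memory[8] = { 0, 0, 0, 0, 2, 7, 1, 0 };  /* locations 4-6 hold the data */

        int pc = 0;       /* program counter: address of the next instruction */
        int acc = 0;      /* holds the value fetched by get                   */
        int running = 1;

        while (running) {
            struct instr i = program[pc++];        /* fetch the next instruction */
            switch (i.op) {                        /* decode and execute it      */
            case GET:  acc = memory[i.addr];          break;
            case ADD:  acc = acc + memory[i.addr];    break;
            case PUT:  memory[i.addr] = acc;          break;
            case STOP: running = 0;                   break;
            }
        }
        printf("[6] = %d\n", memory[6]);   /* prints 9, as described above */
        return 0;
    }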

1.7 The PC—a naming of parts

The final part of this chapter looks at the computer with which most readers will be familiar, the PC. As we have not yet covered many of the elements of a computer, all we can do here is provide an overview and to name some of the parts of a typical computer system to help provide a context for following chapters.

Figure 1.9 The microcontroller SBC.

Figure 1.10 The PC motherboard. (The board provides a CPU slot, memory slots, a video slot, PCI slots, disk connectors, and basic I/O.)



Figure 1.9 shows a typical single-board computer (SBC). As its name suggests, the SBC consists of one printed circuit board containing the microprocessor, memory, peripherals, and everything else it needs to function. Such a board can be embedded in systems ranging from automobile engines to cell phones. The principal characteristic of the SBC is its lack of expandability or flexibility. Once you've made it, the system can't be expanded.

The PC is very different from the single-board computer because each user has their own requirements; some need lots of memory and fast video processing and some need several peripherals such as printers and scanners.

One way of providing flexibility is to design a system with slots into which you can plug accessories. This allows you to buy a basic system with functionality that is common to all computers with that board and then you can add specific enhancements such as a video card or a sound card.

Figure 1.10 shows a PC motherboard. The motherboard contains the CPU and all the electronics necessary to connect the CPU to memory and to provide basic input/output such as a keyboard and mouse interface and an interface to floppy and hard disk drives (including CD and DVD drives).

The motherboard in Fig. 1.10 has four areas of expandability. Program and data memory can be plugged into slots allowing the user to implement enough memory for their application (and their purse). You can also plug a video card into a special graphics slot, allowing you to use a basic system for applications such as data processing or an expensive state-of-the-art graphics card for a high-performance games machine with fast 3D graphics.

The CPU itself fits into a rectangular slot and is not permanently installed on the motherboard. If you want a faster processor, you can buy one and plug it in your motherboard. This strategy helps prevent the computer becoming out of date too soon.

The motherboard has built-in interfaces that are common to nearly all systems. A typical motherboard has interfaces to a keyboard and mouse, a floppy disk drive, and up to four hard disks or CD ROMs. Over the last few years, special-purpose functions have migrated from plug-in cards to the motherboard. For example, the USB serial interface, the local area network, and the audio system have been integrated on some of the high-performance motherboards.

The motherboard in Fig. 1.10 has five PCI connectors. These connectors allow you to plug cards into the motherboard. Each connector is wired to a bus, a set of parallel conductors that carry information between the cards and the CPU and memory. One of the advantages of a PC is its expandability because you can plug such a wide variety of cards into its bus. There are modems and cards that capture and process images from camcorders. There are cards that contain TV receivers. There are cards that interface a PC to industrial machines in a factory.

In this book we will be looking at all these aspects of a computer.

■ SUMMARY

We began this chapter with a discussion of the role of computer architecture in computer science education. Computer architecture provides the foundation of computing; it helps you to get the best out of computers and it aids in an understanding of a wide range of topics throughout computing.

We provided a brief history of computing. We can't do justice to this topic in a few pages. What we have attempted to do is to demonstrate that computing has had a long history and is the result of the merging of the telegraph industry, the card-based data processing industry, and the calculator industry.

In this chapter we have considered how the computer can be looked at as a component or, more traditionally, as part of a large system. Besides acting in the obvious role as a computer system, computers are now built into a wide range of everyday items from toys to automobile ignition systems. In particular, we have introduced some of the topics that make up a first-level course in computer architecture or computer organization.

We have introduced the notion of the von Neumann computer, which stored instructions and data in the same memory. The von Neumann computer reads instructions from memory, one by one and then executes them in turn.

The final part of this chapter provided an overview of the computer system with which most students will be familiar—the PC. This computer has a motherboard into which you can plug a Pentium microprocessor, memory, and peripherals. You can create a computer that suits your own price–performance ratio.

As we progress through this book, we are going to examine how the computer is organized and how it is able to step through instructions in memory and execute them. We will also show how the computer communicates with the world outside the CPU and its memory.

■ PROBLEMS

Unlike the problems at the end of other chapters, these problems are more philosophical and require further background reading if they are to be answered well.

1.1 I have always claimed you cannot name the inventor of the computer because what we now call a computer emerged after a long series of incremental steps. Am I correct?

1.2 If you have to name one person as inventor of the computer, who would you choose? And why?

1.3 What is the difference between computer architecture and computer organization?

1.4 A Rolls–Royce is not a Volkswagen Beetle. Is the difference a matter of architecture or organization?

1.5 List 10 applications of microprocessors you can think of and classify them into the groups we described (e.g. computer as a component). Your examples should cover as wide a range of applications as possible.

1.6 Do you think that a digital computer could ever be capable of feelings, free will, original thought, and self-awareness in a similar fashion to humans? If not, why not?

1.7 Some of the current high-performance civil aircraft such as the A320 AirBus have fly-by-wire control systems. In a conventional aircraft, the pilot moves a yoke that provides control inputs that are fed to the flying control surfaces and engines by mechanical linkages or hydraulic means. In the A320 the pilot moves the type of joystick normally associated with computer games. The pilot's commands from the joystick (called a sidestick) are fed to a computer and the computer interprets them and carries them out in the fashion it determines is most appropriate. For example, if the pilot tries to increase the speed to a level at which the airframe might be overstressed, the computer will refuse to obey the command. Some pilots and some members of the public are unhappy about this arrangement. Are their fears rational?

1.8 The computer has often been referred to as a high-speed moron. Is this statement fair?

1.9 Computers use binary arithmetic (i.e. all numbers are composed of 1s and 0s) to carry out their operations. Humans normally use decimal arithmetic (0–9) and have symbolic means of representing information (e.g. the Latin alphabet or the Chinese characters). Does this imply a fundamental difference between people and computers?

1.10 Shortly after the introduction of the computer, someone said that two computers could undertake all the computing in the World. At that time the best computers were no more powerful than today's pocket calculators. The commentator assumed that computers would be used to solve a few scientific problems and little else. As the cost and size of computers has been reduced, the role of computers has increased. Is there a limit to the applications of computers? Do you anticipate any radically new applications of computers?

1.11 A microprocessor manufacturer, at the release of their new super chip, was asked the question, 'What can your microprocessor do?' He said it was now possible to put it in washing machines so that the user could tell the machine what to do verbally, rather than by adjusting the settings manually. At the same time we live in a world in which many of its inhabitants go short of the very basic necessities of life: water, food, shelter, and elementary health care. Does the computer make a positive contribution to the future well-being of the World's inhabitants? Is the answer the same if we ask about the computer's short-term effects or its long-term effects?

1.12 The workstation makes it possible to design and to test (by simulation) everything from other computers to large mechanical structures. Coupled with computer communications networks and computer-aided manufacturing, it could be argued that many people in technologically advanced societies will be able to work entirely from home. Indeed, all their shopping and banking activities can also be performed from home. Do you think that this step will be advantageous or disadvantageous? What will be the effects on society of a population that can, largely, work from home?

1.13 In a von Neumann machine, programs and data share the same memory. The operation 'get [4]' reads the contents of memory location number 4 and you can then operate on the number you've just read from this location. However, the contents of this location may not be a number. It may be an instruction itself. Consequently, a program in a von Neumann machine can modify itself. Can you think of any implications this statement has for computing?

1.14 When discussing the performance of computers we introduced the benchmark, a synthetic program whose execution time provides a figure of merit for the performance of a computer. If you glance at any popular computer magazine, you'll find computers compared in terms of benchmarks. Furthermore, there are several different benchmarks. A computer that performs better than others when executing one benchmark might not do so well when executing a different benchmark. What are the flaws in benchmarks as a test of performance and why do you think that some benchmarks favor one computer more than another?

1.15 The von Neumann digital computer offers just one computing paradigm. Other paradigms are provided by analog computers and neural networks. What are the differences between these paradigms and are there others?
washing machines so that the user could tell the machine what computers and neural networks. What are the differences
to do verbally, rather than by adjusting the settings manually. between these paradigms and are there others?
2 Gates, circuits, and combinational logic

CHAPTER MAP

2 Logic elements and Boolean algebra
Digital computers are constructed from millions of very simple logic elements called gates. In this chapter we introduce the fundamental gates and demonstrate how they can be combined to create circuits that carry out the basic functions required in a computer.

3 Sequential logic
We can classify logic circuits into two groups: the combinational circuit we described in Chapter 2 and the sequential circuit which forms the subject of this chapter. A sequential circuit includes memory elements and its current behavior is governed by its past inputs. Typical sequential circuits are counters and registers.

4 Computer arithmetic
In Chapter 4 we demonstrate how numbers are represented in binary form and look at binary arithmetic. We also demonstrate how the properties of binary numbers are exploited to create codes that compress data or even detect and correct errors.

5 The instruction set architecture
In Chapter 5 we introduce the computer's instruction set architecture (ISA), which defines the machine-level programmer's view of the computer. The ISA describes the type of operations a computer carries out. We are interested in three aspects of the ISA: the nature of the instructions, the resources used by the instructions (registers and memory), and the ways in which the instructions access data (addressing modes).

INTRODUCTION
We begin our study of the digital computer by investigating the elements from which it is
constructed. These circuit elements are gates and flip-flops and are also known as combinational
and sequential logic elements, respectively. A combinational logic element is a circuit whose
output depends only on its current inputs, whereas the output from a sequential element
depends on its past history (i.e. a sequential element remembers its previous inputs) as well as
its current input. We describe combinational logic in this chapter and devote the next chapter to
sequential logic.
Before we introduce the gate, we highlight the difference between digital and analog systems
and explain why computers are constructed from digital logic circuits. After describing the
properties of several basic gates we demonstrate how a few gates can be connected together to
carry out useful functions in the same way that bricks can be put together to build a house or a
school. We include a Windows-based simulator that lets you construct complex circuits and then
examine their behavior on a PC.
The behavior of digital circuits can be described in terms of a formal notation called Boolean
algebra. We include an introduction to Boolean algebra because it allows you to analyze circuits
containing gates and sometimes enables circuits to be constructed in a simpler form. Boolean
algebra leads on to Karnaugh maps, a graphical technique for the simplification and manipulation
of Boolean equations.
The last circuit element we introduce is the tri-state gate, which allows you to connect lots of
separate digital circuits together by means of a common highway called a bus. A digital computer
is composed of nothing more than digital circuits, buses, and sequential logic elements.
By the end of this chapter, you should be able to design a wide range of circuits that can perform operations as diverse as selecting one of several signals and implementing simple arithmetic operations.
Real circuits can fail. The final part of this chapter takes a brief look at how you test digital
circuits.

2.1 Analog and digital systems

Before we can appreciate the meaning and implications of digital systems, it's necessary to look at the nature of analog systems. The term analog is derived from the noun analogy and means a quantity that is related to, or resembles, or corresponds to, another quantity; for example, the length of a column of mercury in a thermometer is an analog of the temperature because the length of the mercury is proportional to the temperature. Analog electronic circuits represent physical quantities in terms of voltages or currents.

An analog variable can have any value between its maximum and minimum limits. If a variable X is represented by a voltage in the range −10 V to +10 V, X may assume any one of an infinite number of values within this range. We can say that X is continuous in value and can change its value by an arbitrarily small amount. Fig. 2.1 plots a variable X as a continuous function of time; that is, X doesn't jump instantaneously from one value to another. In Fig. 2.1, a fragment of the graph of X is magnified to reveal fluctuations that you can't see on the main graph. No matter how much you magnify this graph, the line will remain continuous and unbroken.

Figure 2.1 Characteristics of an analog variable. (A time-varying analog signal remains continuous however much it is magnified.)

The design of analog circuits such as audio amplifiers is a demanding process, because analog signals must be processed without changing their shape. Changing the shape of an analog signal results in its degradation or distortion.

Information inside a computer is represented in digital form. A digital variable is discrete in both value and in time, as Fig. 2.2 demonstrates. The digital variable Y must take one of four possible values. Moreover, Y changes from one discrete value to another instantaneously. In practice, no physical (i.e. real) variable can change instantaneously and a real signal must pass through intermediate values as it changes from one discrete state to another.

Figure 2.2 Characteristics of an ideal digital variable. (A digital signal must have one of a fixed number of values and change from one value to another instantaneously.)

All variables and constants in a digital system must take a value chosen from a set of values called an alphabet. In decimal arithmetic the alphabet is composed of the symbols 0, 1, 2, . . . 9 and in Morse code the alphabet is composed of the four symbols dot, dash, short space, and long space. Other digital systems are Braille, semaphore, and the days of the week.

A major advantage of representing information in digital form is that digital systems are resistant to error. A digital symbol can be distorted, but as long as the level of distortion is not sufficient for the symbol to be confused with a different symbol, the original symbol can always be recognized and reconstituted. For example, if you write the letter K by hand, most readers will be able to recognize it as a K unless it is so badly formed that it looks like another letter such as an R or C.

Digital computers use an alphabet composed of two symbols called 0 and 1 (sometimes called false and true, or low and high, or off and on). A digital system with two symbols is called a binary system. The physical representation of these symbols can be made as unlike each other as possible to give the maximum discrimination between the two digital values. Computers once stored binary information on paper tape—a hole represented one binary value and no hole represented the other. When reading paper tape the computer has only to distinguish between a hole and no-hole. Suppose we decided to replace this binary computer by a decimal computer. Imagine that paper tape were to be used to store the 10 digits 0–9. A number on the tape would consist of no-hole or a hole in one of nine sizes (10 symbols in all). How does this computer distinguish between a size six hole and a size five or a size seven hole?

NOTES ON LOGIC VALUES

1. Every logic input or output must assume one of two discrete states. You cannot have a state that is neither 1 nor 0.

2. Each logic input or output can exist in only one state at any one time.

3. Each logic state has an inverse or complement that is the opposite of its current state. The complement of a true or one state is a false or zero state, and vice versa.

4. A logic value can be a constant or a variable. If it is a constant, it always remains in that state. If it is a variable, it may be switched between the states 0 and 1. A Boolean variable is also called a literal.

5. A variable is often named by the action it causes to take place. The following logical variables are all self-evident: START, STOP, RESET, COUNT, and ADD.

6. The signal level (i.e. high or low) that causes a variable to carry out a function is arbitrary. If a high voltage causes the action, the variable is called active-high. If a low voltage causes the action, the variable is called active-low. Thus, if an active-high signal is labeled START, a high level will initiate the action. If the signal is active-low and labeled START* (the complemented form), a low level will trigger the action.

7. By convention, a system of logic that treats a low level as a 0 or false state and a high level as a 1 or true state is called positive logic. Most of this chapter uses positive logic.

8. The term asserted is used to indicate that a signal is placed in the level that causes its activity to take place. If we say that START is asserted, we mean that it is placed in a high state to cause the action determined by START. Similarly, if we say that the active-low signal LOAD* is asserted, we mean that it is placed in a low state to trigger the action.

LOGIC VALUES AND SIGNAL LEVELS

In a system using a 5 V power supply you might think that a bit is represented by exactly 0 V or 5 V. Unfortunately, we can't construct such precise electronic devices cheaply. We can construct devices that use two ranges of voltage to represent the binary values 0 and 1. For example, one logic family represents a 0 state by a signal in the range 0–0.4 V and a 1 state by a signal in the range 2.8–5 V.

The accompanying diagram illustrates the ranges of voltage used to represent the 0 and 1 states (output ranges: logical 0 from 0 to 0.4 V, logical 1 from 2.8 to 5.0 V; input ranges: logical 0 from 0 to 0.8 V, logical 1 from 2.4 to 5.0 V; the region between is a forbidden zone). Digital component manufacturers make several promises to users. First, they guarantee that the output of a gate in a logical 0 state shall be in the range 0–0.4 V and that the output of a gate in a logical 1 state shall be in the range 2.8–5.0 V. Similarly, they guarantee that the input circuit of a gate shall recognize a voltage in the range 0–0.8 V as a logical 0 and a voltage in the range 2.4–5.0 V as a logical 1.

Here, two gates are wired together so that the output of gate 1 becomes the input of gate 2. The signal at the output of gate 1 is written Vout and the input to gate 2 is written Vin. An adder (represented by the circle with a '+') is placed between the two gates so that the input voltage to the second gate is given by Vin = Vout + Vnoise; that is, a voltage called Vnoise is added to the output from gate 1. In a real circuit there is, of course, no such adder. The adder is fictitious and demonstrates how the output voltage may be modified by the addition of noise or interference. All electronic circuits are subject to such interference; for example, the effect of noise on a weak TV signal is to create snow on the screen.

Note that the range of input signals that are recognized as representing a 1 state (i.e. 2.4–5 V) is greater than the range of output signals produced by a gate in a 1 state (i.e. 2.8–5 V). By making the input range greater than the output range, the designer compensates for the effect of noise or unwanted signals. Suppose a noise spike of −0.2 V is added to a logical 1 output of 2.8 V to give a total input signal of 2.6 V. This signal, when presented to the input circuit of a gate, is greater than 2.4 V and is still guaranteed to be recognized as a logical 1. The difference between the input and output ranges for a given logic value is known as the gate's guaranteed noise immunity.
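For the figures quoted above, the guaranteed noise immunity can be worked out directly: in the low state it is 0.8 V − 0.4 V = 0.4 V, and in the high state it is 2.8 V − 2.4 V = 0.4 V. In other words, a noise voltage of less than 0.4 V added to a worst-case output still produces a voltage that the next gate is guaranteed to interpret correctly. (These numbers apply only to the example logic family described in this box.)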

Such a system would require extremely precise electronics.

A single binary digit is known as a bit (BInary digiT) and is the smallest unit of information possible; that is, a bit can't be subdivided into smaller units. Ideally, if a computer runs off, say, 3 V, a low level would be represented by 0.0 V and a high level by 3.0 V.

2.2 Fundamental gates

The digital computer consists of nothing more than the interconnection of three types of primitive elements called AND, OR, and NOT gates. Other gates called NAND, NOR, and EOR gates can be derived from these gates. We shall see that all digital circuits may be designed from the appropriate interconnection of NAND (or NOR) gates alone. In other words, the most complex digital computer can be reduced to a mass of NAND gates. This statement doesn't devalue the computer any more than saying that the human brain is just a lot of neurons joined in a particularly complex way devalues the brain.

We don't use gates to build computers because we like them or because Boolean algebra is great fun. We use gates because they provide a way of mass producing cheap and reliable digital computers.

WHAT IS A GATE?

The word gate conveys the idea of a two-state device—open or shut. A gate may be thought of as a black box with one or more input terminals and an output terminal. The gate processes the digital signals at its input terminals to produce a digital signal at its output terminal. The particular type of the gate determines the actual processing involved. The output C of a gate with two input terminals A and B can be expressed in conventional algebra as C = F(A, B), where A, B, and C are two-valued variables and F is a logical function.

The output of a gate is a function only of its inputs. When we introduce the sequential circuit, we will discover that the sequential circuit's output depends on its previous output as well as its current inputs. We can demonstrate the concept of a gate by means of an example from the analog world. Consider the algebraic expression y = F(x) = 2x² + x + 1. If we think of x as the input to a black box and y its output, the block diagram demonstrates how y is generated by a sequence of operations on x. The operations performed on the input are those of addition, multiplication, and squaring. Variable x enters the 'squarer' and comes out as x². The output from the squarer enters a multiplier (along with the constant 2) and comes out as 2x², and so on. By applying all the operations to input x, we end up with output 2x² + x + 1. The boxes carrying out these operations are entirely analogous to gates in the digital world—except that gates don't do anything as complicated as addition or multiplication. (The diagram shows the input signal x acted on by a squarer, a multiplier, and two adders to create the output y = 2x² + x + 1.)

2.2.1 The AND gate

The AND gate is a circuit with two or more inputs and a single output. The output of an AND gate is true if and only if each of its inputs is also in a true state. Conversely, if one or more of the inputs to the AND gate is false, the output will also be false. Figure 2.3 provides the circuit symbol for both a two-input AND gate and a three-input AND gate. Note that the shape of the gate indicates its AND function (this will become clearer when we introduce the OR gate).

Figure 2.3 The AND gate: (a) a two-input AND gate with output C = A·B; (b) a three-input AND gate with output A·B·C.

An AND gate is visualized in terms of an electric circuit or a highway as illustrated in Fig. 2.4. Electric current (or traffic) flows along the circuit (road) only if switches (bridges) A and B are closed. The logical symbol for the AND operator is a dot, so that A AND B can be written A · B. As in normal algebra, the dot is often omitted and A · B can be written AB. The logical AND operator behaves like the multiplier operator in conventional algebra; for example, the expression (A + B) · (C + D) = A · C + A · D + B · C + B · D in both Boolean and conventional algebra.
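Because each Boolean variable takes only the values 0 and 1, claims like this can be checked exhaustively. The short fragment below (our own illustration) verifies the expansion of (A + B)·(C + D) for all 16 combinations of the four inputs.

    #include <stdio.h>

    int main(void)
    {
        /* Check (A + B).(C + D) = A.C + A.D + B.C + B.D for every input. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++)
                    for (int d = 0; d <= 1; d++) {
                        int lhs = (a | b) & (c | d);
                        int rhs = (a & c) | (a & d) | (b & c) | (b & d);
                        if (lhs != rhs) {
                            printf("identity fails\n");
                            return 1;
                        }
                    }
        printf("identity holds for all 16 input combinations\n");
        return 0;
    }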

CIRCUIT CONVENTIONS

Because we write from left to right, many logic circuits are also read from left to right; that is, information flows from left to right with the inputs of gates on the left and the outputs on the right.

Because a circuit often contains many signal paths, some of these paths may have to cross over each other when the diagram is drawn on two-dimensional paper. We need a means of distinguishing between wires that join and wires that simply cross each other (rather like highways that merge and highways that fly over each other). The standard procedure is to regard two lines that simply cross as not being connected, as the diagram illustrates. The connection of two lines is denoted by a dot at their intersection.

The voltage at any point along a conductor is constant and therefore the logical state is the same everywhere on the line. If a line is connected to the input of several gates, the input to each gate is the same. In this diagram, the value of X and P must be the same because the two lines are connected.

A corollary of the statement that the same logic state exists everywhere on a conductor is that a line must not be connected to the output of more than one circuit—otherwise the state of the line will be undefined if the outputs differ. At the end of this chapter we will introduce gates with special tri-state outputs that can be connected together without causing havoc.

Figure 2.4 The representation of an AND gate. (The circuit is completed only if switch A and switch B is closed.)

A useful way of describing the relationship between the inputs of a gate and its output is the truth table. In a truth table the value of each output is tabulated for every possible combination of the inputs. Because the inputs are two valued (i.e. binary with states 0 and 1), a circuit with n inputs has 2^n lines in its truth table. The order in which the 2^n possible inputs are taken is not important but by convention the order corresponds to the natural binary sequence (we discuss binary numbers in Chapter 4). Table 2.1 describes the natural binary sequences for values of n from 1 to 4.

Table 2.1 The 2^n possible values of an n-bit variable for n = 1 to 4.

n = 1: 0, 1
n = 2: 00, 01, 10, 11
n = 3: 000, 001, 010, 011, 100, 101, 110, 111
n = 4: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111
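The natural binary ordering is simply the result of counting from 0 to 2^n − 1. A few lines of C (ours, for illustration) reproduce the columns of Table 2.1.

    #include <stdio.h>

    int main(void)
    {
        for (int n = 1; n <= 4; n++) {
            printf("n = %d:", n);
            for (int value = 0; value < (1 << n); value++) {
                printf(" ");
                for (int bit = n - 1; bit >= 0; bit--)  /* print n bits, MSB first */
                    printf("%d", (value >> bit) & 1);
            }
            printf("\n");
        }
        return 0;
    }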
Table 2.2 illustrates the truth table for a two-input AND gate, although there's no reason why we can't have any number of inputs to an AND gate. Some real gates have three or four inputs and some have 10 or more inputs. However, it doesn't matter how many inputs an AND gate has. Only one line in the truth table will contain a 1 entry because all inputs must be true for the output to be true.

Table 2.2 Truth table for the AND gate.

A B | F = A·B
0 0 |    0      (false because one or more inputs is false)
0 1 |    0
1 0 |    0
1 1 |    1      (true because both inputs are true)

When we introduce computer arithmetic, computer architecture, and assembly language programming, we will see that computers don't operate on bits in isolation. Computers process entire groups of bits at a time. These groups are called words and are typically 8, 16, 32, or 64 bits wide.

The AND operation, when applied to words, is called a logical operation to distinguish it from an arithmetic operation such as addition, subtraction, or multiplication. When two words take part in a logical operation such as an AND, the operation takes place between the individual pairs of bits; for example, bit ai of word A is ANDed with bit bi of word B to produce bit ci of word C. Consider the effect of ANDing the following two 8-bit words, A = 11011100 and B = 01100101.

word A     11011100
word B     01100101
C = A·B    01000100

In this example the result C = A · B is given by 01000100. Why should anyone want to AND together two words? If you AND bit x with 1, the result is x (because Table 2.2 demonstrates that 1·0 = 0 and 1·1 = 1). If you AND bit x with 0 the result is 0 (because the output of an AND gate is true only if both inputs are true). Consequently, a logical AND is used to mask certain bits in a word by forcing them to zero. For example, if we wish to clear the leftmost four bits of an 8-bit word to zero, ANDing the word with 00001111 will do the trick. The following example demonstrates the effect of an AND operation with a 00001111 mask.

source word   d7 d6 d5 d4 d3 d2 d1 d0
mask          0  0  0  0  1  1  1  1
result        0  0  0  0  d3 d2 d1 d0
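In C, the & operator applies exactly this bit-by-bit AND, so both examples can be reproduced directly; the variable names and hexadecimal constants below are ours.

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 0xDC;            /* 11011100 */
        unsigned char b = 0x65;            /* 01100101 */
        unsigned char c = a & b;           /* 01000100, i.e. 0x44 */

        unsigned char word = 0xDC;         /* an arbitrary source word       */
        unsigned char low  = word & 0x0F;  /* the 00001111 mask clears the   */
                                           /* four leftmost bits             */
        printf("%02X %02X\n", c, low);     /* prints 44 0C */
        return 0;
    }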
2.2.2 The OR gate

The output of an OR gate is true if any one (or more than one) of its inputs is true. Notice the difference between AND and OR operations. The output of an AND is true only if all inputs are true whereas the output of an OR is true if at least one input is true. The circuit symbol for a two-input and a three-input OR gate is given in Fig. 2.5. The logical symbol for an OR operation is an addition sign, so that the logical operation A OR B is written as A + B. The logical OR operator is the same as the conventional addition symbol because the OR operator behaves like the addition operator in algebra (the reasons for this will become clear when we introduce Boolean algebra). Table 2.3 provides the truth table for a two-input OR gate.

Figure 2.5 The OR gate: (a) a two-input OR gate with output C = A + B; (b) a three-input OR gate with output D = A + B + C.

Table 2.3 Truth table for the OR gate.

A B | F = A + B
0 0 |     0      (false because no input is true)
0 1 |     1
1 0 |     1
1 1 |     1      (true because at least one input is true)

The behavior of an OR gate can be represented by the switching circuit of Fig. 2.6. A path exists from input to output if either of the two switches is closed.

Figure 2.6 The representation of an OR gate. (The circuit is complete if either switch A or switch B is closed.)

The use of the term OR here is rather different from the English usage of or. The Boolean OR means (either A or B) or (both A and B), whereas the English usage often means A or B but not (A and B). For example, consider the contrasting use of the word or in the two phrases: 'Would you like tea or coffee?' and 'Reduced fees are charged to members who are registered students or under 25'. We shall see that the more common English use of the word or corresponds to the Boolean function known as the EXCLUSIVE OR, an important function that is frequently abbreviated to EOR or XOR.

A computer can also perform a logical OR on words as the following example illustrates.

word A
word B
C = A + B

The logical OR operation is used to set one or more bits in a word to a logical 1. The term set means make a logical one, just as clear means reset to a logical zero. For example, the least-significant bit of a word is set by ORing it with 00 . . . 01. By applying both AND and OR operations to a word we can selectively clear or set its bits. Suppose we have an 8-bit binary word and we wish to clear bits 6 and 7 and set bits 4 and 5. If the bits of the word are d0 to d7, we can write:

d7 d6 d5 d4 d3 d2 d1 d0   Source word
0  0  1  1  1  1  1  1    AND mask
0  0  d5 d4 d3 d2 d1 d0   First result
0  0  1  1  0  0  0  0    OR mask
0  0  1  1  d3 d2 d1 d0   Final result
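The same clear-then-set sequence takes one AND and one OR in C. The masks are those used above (00111111 to clear bits 7 and 6, then 00110000 to set bits 5 and 4); the test value is our own.

    #include <stdio.h>

    int main(void)
    {
        unsigned char word   = 0xC7;                  /* 11000111, for example   */
        unsigned char result = (word & 0x3F) | 0x30;  /* clear bits 7,6; set 5,4 */
        printf("%02X\n", result);                     /* prints 37 (00110111)    */
        return 0;
    }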

2.2.3 The NOT gate

The NOT gate is also called an inverter or a complementer and is a two-terminal device with a single input and a single output. If the input of an inverter is X, its output is NOT X, which is written with a bar over the X or as X*. Figure 2.7 illustrates the symbol for an inverter and Table 2.4 provides its truth table. Some teachers vocalize the complement of X as 'not X' and others as 'X not'. The inverter is the simplest of gates because the output is the opposite of the input. If the input is 1 the output is 0 and vice versa. By the way, the triangle in Fig. 2.7 doesn't represent an inverter. The small circle at the output of the inverter indicates the inversion operation. We shall see that this circle indicates logical inversion wherever it appears in a circuit.

Figure 2.7 The NOT gate or inverter. (The output is the logical complement of the input.)

Table 2.4 Truth table for the inverter.

A | F = A*
0 |   1
1 |   0

We can visualize the operation of the NOT gate in terms of the relay illustrated in Fig. 2.8. A relay is an electromechanical switch (i.e. a device that is partially electronic and partially mechanical) consisting of an iron core around which a coil of wire is wrapped. When a current flows through a coil, it generates a magnetic field that causes the iron core to act as a magnet. Situated close to the iron core is a pair of contacts, the lower of which is mounted on a springy strip of iron. If switch A is open, no current flows through the coil and the iron core remains unmagnetized. The relay's contacts are normally closed so that they form a switch that is closed when switch A is open.

If switch A is closed, a current flows through the coil to generate a magnetic field that magnetizes the iron core. The contact on the iron strip is pulled toward the core, opening the contacts and breaking the circuit. In other words, closing switch A opens the relay's switch and vice versa. The system in Fig. 2.8 behaves like a NOT gate. The relay is used by a computer to control external devices and is described further when we deal with input and output devices.

Figure 2.8 The operation of a relay. (A battery and switch A energize a coil wound on an iron core; when the coil is energized the core attracts an iron strip and opens the normally closed contacts.)

Like both the AND and OR operations, the NOT function can also be applied to words:

word A
B = A*

2.2.4 The NAND and NOR gates

The two most widely used gates in real circuits are the NAND and NOR gates. These aren't fundamental gates because the NAND gate is derived from an AND gate followed by an inverter (Not AND) and the NOR gate is derived from an OR gate followed by an inverter (Not OR), respectively. The circuit symbols for the NAND and NOR gates are given in Fig. 2.9. The little circle at the output of a NAND gate represents the symbol for inversion or complementation. It is this circle that converts the AND gate to a NAND gate and an OR gate to a NOR gate. Later, when we introduce the concept of mixed logic, we will discover that this circle can be applied to the inputs of gates as well as to their outputs.

Figure 2.9 Circuit symbols for the NAND and NOR gates. (An AND gate followed by an inverter is equivalent to a NAND gate; an OR gate followed by an inverter is equivalent to a NOR gate.)

Table 2.5 gives the truth table for the NAND and the NOR gates. As you can see, the output columns in the NAND and NOR tables are just the complements of the outputs in the corresponding AND and OR tables.

Table 2.5 Truth table for the NAND and NOR gates.

A B | NAND (A·B)* | NOR (A+B)*
0 0 |      1      |     1
0 1 |      1      |     0
1 0 |      1      |     0
1 1 |      0      |     0

We can get a better feeling for the effect that different gates have on two inputs, A and B, by putting all the gates together in a single table (Table 2.6). We have also included the EXCLUSIVE OR (i.e. EOR) and its complement the EXCLUSIVE NOR (i.e. EXNOR) in Table 2.6 for reference. The EOR gate is derived from AND, OR, and NOT gates and is described in more detail later in this chapter. It should be noted here that A*·B* is not the same as (A·B)*, just as A* + B* is not the same as (A + B)*.

Table 2.6 Truth table for six gates.

A B | AND A·B | OR A+B | NAND (A·B)* | NOR (A+B)* | EOR A⊕B | EXNOR (A⊕B)*
0 0 |    0    |   0    |      1      |     1      |    0    |      1
0 1 |    0    |   1    |      1      |     0      |    1    |      0
1 0 |    0    |   1    |      1      |     0      |    1    |      0
1 1 |    1    |   1    |      0      |     0      |    0    |      1
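Because every entry in Table 2.6 is a function of just two binary inputs, the whole table can be generated by a few lines of C (an illustration of ours, with EOR written using C's ^ operator).

    #include <stdio.h>

    int main(void)
    {
        printf("A B  AND OR NAND NOR EOR EXNOR\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%d %d   %d   %d   %d    %d   %d    %d\n",
                       a, b,
                       a & b,       /* AND   */
                       a | b,       /* OR    */
                       !(a & b),    /* NAND  */
                       !(a | b),    /* NOR   */
                       a ^ b,       /* EOR   */
                       !(a ^ b));   /* EXNOR */
        return 0;
    }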

2.2.5 Positive, negative, and mixed logic

At this point we introduce the concepts of positive logic, negative logic, and mixed logic. Some readers may find that this section interrupts their progress toward a better understanding of the gate and may therefore skip ahead to the next section.

Up to now we have blurred the distinction between two unconnected concepts. The first concept is the relationship between low/high voltages in a digital circuit, 0 and 1 logical levels, and true/false logic values. The second concept is the logic function; for example, AND, OR, and NOT. So far, we have used positive logic in which a high-level signal represents a logical one state and this state is called true.

Table 2.7 provides three views of the AND function. The leftmost column provides the logical truth table in which the output is true only if all inputs are true (we have used T and F to avoid reference to signal levels). The middle column describes the AND function in positive logic form in which the output is true (i.e. 1) only if all inputs are true (i.e. 1). The right hand column in Table 2.7 uses negative logic in which 0 is true and 1 is false. The output A · B is true (i.e. 0) only when both inputs are true (i.e. 0).

Table 2.7 Truth table for AND gate in positive and negative logic forms.

Logical form     Positive logic     Negative logic
A B  A·B         A B  A·B           A B  A·B
F F   F          0 0   0            1 1   1
F T   F          0 1   0            1 0   1
T F   F          1 0   0            0 1   1
T T   T          1 1   1            0 0   0

As far as digital circuits are concerned, there's no fundamental difference between logical 1s and 0s and it's as sensible to choose a logical 0 level as the true state as it is to choose a logical 1 state. Indeed, many of the signals in real digital systems are active-low, which means that their function is carried out by a low-level signal.

Suppose we regard the low level as true and use negative logic; Table 2.7 shows that we have an AND gate whose output is low if and only if each input is low. It should also be apparent that an AND gate in negative logic functions as an OR gate in positive logic. Similarly, a negative logic OR gate functions as an AND gate in positive logic. In other words, the same gate is an AND gate in negative logic and an OR gate in positive logic. Figure 2.10 demonstrates the relationship between positive and negative logic gates.

Figure 2.10 Positive and negative logic. (The gate whose output C is high if A or B is high is the same gate whose output C is low if A and B are low; the gate whose output C is high if A and B are high is the same gate whose output C is low if A or B is low.)



GATES AS TRANSMISSION ELEMENTS

We can provide more of an insight into what gates do by treating them as transmission elements that control the flow of information within a computer. We are going to take three two-input gates (i.e. AND, OR, EOR) and see what happens when we apply a variable to one input and a control signal to the other input. The figure illustrates three pairs of gates. Each pair demonstrates the situation in which the control input C is set to a logical 0 and a logical 1 state. The other input is a variable X and we wish to determine the effect the gate has on the transmission of X through it.

Figures (a) and (b) demonstrate the behavior of an AND gate. When C = 0, an AND gate is disabled and its output is forced into a logical zero state. When C = 1, the AND gate is enabled and its X input is transmitted to the output unchanged. We can think of an AND gate as a simple switch that allows or inhibits the passage of a logical signal. Similarly, in Figs (c) and (d) an OR gate is enabled by C = 0 and disabled by C = 1. However, when the OR gate is disabled, its output is forced into a logical one state.

The EOR gate in Figs (e) and (f) is a more interesting device. When its control input is 0, it transmits the other input unchanged. But when C = 1, it transmits the complement of X. The EOR gate can best be regarded as a programmable inverter. Later we shall make good use of this property of an EOR gate.

The reason we've introduced the concept of a gate as a transmission element is that digital computers can be viewed as a complex network through which information flows and this information is operated on by gates as it flows round the system.

(Figure panels: (a) AND gate, C = 0: gate disabled, output low; (b) AND gate, C = 1: gate enabled, output = X; (c) OR gate, C = 0: gate enabled, output = X; (d) OR gate, C = 1: gate disabled, output high; (e) EOR gate, C = 0: gate acts as pass-through element, output = X; (f) EOR gate, C = 1: gate acts as inverter, output = X'.)
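The behavior summarized in this box is easy to check in software. The following Python sketch (our own illustration, not part of the original text) models each two-input gate as a function of the data input X and the control input C and prints the effect that C has on the transmission of X.

    # Gates viewed as transmission elements:
    #   AND passes X when C = 1 and forces the output to 0 when C = 0.
    #   OR  passes X when C = 0 and forces the output to 1 when C = 1.
    #   EOR passes X when C = 0 and passes the complement of X when C = 1.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def EOR(a, b): return a ^ b

    for name, gate in (("AND", AND), ("OR", OR), ("EOR", EOR)):
        for c in (0, 1):
            out0, out1 = gate(0, c), gate(1, c)   # outputs for X = 0 and X = 1
            print(f"{name} with C={c}: X=0 -> {out0}, X=1 -> {out1}")

Running the loop reproduces the six panels of the figure: the AND gate acts as a switch, the OR gate as a switch that idles at 1, and the EOR gate as a programmable inverter.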

For years engineers used the symbol for a positive logic AND gate in circuits using active-low signals, with the result that the reader was confused and could only understand the circuit by mentally transforming the positive logic gate into its negative logic equivalent. In mixed logic both positive logic and negative logic gates are used together in the same circuit. The choice of whether to use positive or negative logic is determined only by the desire to improve the clarity of a diagram or explanation.

Why do we have to worry about positive and negative logic? If we stick to positive logic, life would be much simpler. True, but life is never that simple. Many real electronic systems are activated by low-level signals and that makes it sensible to adopt negative logic conventions. Let's look at an example. Consider a circuit that is activated by a low-level signal only when input A is a low level and input B is a low level. Figure 2.11 demonstrates the circuit required to implement this function. Note that the bubble at the input to the circuit indicates that it is activated by a low level.

In Fig. 2.11(a) we employ positive logic and draw an OR gate because the output of an OR gate is 0 only when both its inputs are 0. There's nothing wrong with this circuit, but it's confusing. When you see a gate with an OR shape you think of an OR function. However, in this case, the gate is actually performing an AND operation on low-level signals. What we need is a means of preserving the AND shape and indicating we are using negative logic signals. Figure 2.11(b) does just that. By placing inverter circles at the AND gate's inputs and output we immediately see that the output of the gate is low if and only if both of its inputs are low.

Figure 2.11 Mixed logic. (In both (a), the positive logic system, and (b), the negative logic system, the circuit is activated when A is low and B is low.)

There is no physical difference between the circuits of Figs. 2.11(a) and 2.11(b). They are both ways of representing the same thing. However, the meaning of the circuit in Fig. 2.11(b) is clearer.

Consider another example of mixed logic in which we use both negative and positive logic concepts. Suppose a circuit is activated by a low-level signal if input A is low and input B high, or input D is high, or input C is low. Figure 2.12 shows how we might draw such a circuit. For most of this book we will continue to use positive logic.

Figure 2.12 Using mixed logic. (Inputs A, B, C, and D drive an active-low output.)

2.3 Applications of gates

We now look at four simple circuits to demonstrate that a few gates can be connected together in such a way as to create a circuit whose function and importance may readily be appreciated by the reader. Following this informal introduction to circuits we introduce Digital Works, a Windows-based program that lets you construct and simulate circuits containing gates on a PC. We then return to gates and provide a more formal section on the analysis of logic circuits by means of Boolean algebra.

Circuits are constructed by connecting gates together. The output from one gate can be connected (i.e. wired) to the input of one or more other gates. However, two outputs cannot be connected together.

Example 1 Consider the circuit of Fig. 2.13 that uses three two-input AND gates labeled G1, G2, and G3, and a three-input OR gate labeled G4. This circuit has three inputs A, B, and C, and an output F. What does it do?

Figure 2.13 The use of gates—Example 1. (A, B, and C are inputs; P, Q, and R are intermediate variables; F is the output.)

We can tackle this problem in several ways. One approach is to create a truth table that tabulates the output F for all eight possible combinations of the three inputs A, B, and C. Table 2.8 corresponds to the circuit of Fig. 2.13 and includes columns for the outputs of the three AND gates as well as the output of the OR gate, F.

The three intermediate signals P, Q, and R are defined by P = A·B, Q = B·C, and R = A·C. Figure 2.13 tells us that we can write down the output function, F, as the logical OR of the three intermediate signals P, Q, and R; that is, F = P + Q + R. We can substitute the expressions for P, Q, and R to get F = A·B + B·C + A·C. This is a Boolean equation, but it doesn't help us a lot at this point. However, by visually inspecting the truth table for F we can see that the output is true if two or more of the inputs A, B, and C are true. That is, this circuit implements a majority logic function whose output takes the same value as the majority of inputs. We have already seen how such a circuit is used in an automatic landing system in an aircraft by choosing the output from three independent computers to be the best (i.e. majority) of three inputs. Using just four basic gates, we've constructed a circuit that does something useful.

  A B C | P = A·B  Q = B·C  R = A·C | F = P + Q + R
  0 0 0 |    0        0        0    |       0
  0 0 1 |    0        0        0    |       0
  0 1 0 |    0        0        0    |       0
  0 1 1 |    0        1        0    |       1
  1 0 0 |    0        0        0    |       0
  1 0 1 |    0        0        1    |       1
  1 1 0 |    1        0        0    |       1
  1 1 1 |    1        1        1    |       1

Table 2.8 Truth table for Fig. 2.13.
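Table 2.8 can also be generated mechanically. The short Python sketch below (our own illustration, not taken from the text) evaluates each gate of Fig. 2.13 for all eight input combinations and confirms that F is 1 exactly when two or more of the inputs are 1.

    from itertools import product

    # Majority circuit of Fig. 2.13: F = A.B + B.C + A.C
    for A, B, C in product((0, 1), repeat=3):
        P = A & B          # output of AND gate G1
        Q = B & C          # output of AND gate G2
        R = A & C          # output of AND gate G3
        F = P | Q | R      # output of OR gate G4
        assert F == (1 if A + B + C >= 2 else 0)   # F follows the majority of the inputs
        print(A, B, C, '|', P, Q, R, '|', F)

The assertion never fails, which is a quick check of the claim that the circuit computes a majority function.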
Example 2 The circuit of Fig. 2.14 has three inputs, one output, and three intermediate values (we've also included a mixed logic version of this circuit on the right-hand side of Fig. 2.14). By inspecting the truth table for this circuit (Table 2.9) we can see that when the input X is 0, the output, F, is equal to Y. Similarly, when X is 1, the output is equal to Z. The circuit of Fig. 2.14 behaves like an electronic switch, connecting the output to one of two inputs, Y or Z, depending on the state of a control input X.

The circuit of Fig. 2.14 is a two-input multiplexer that can be represented by the arrangement of Fig. 2.15. Because the word multiplexer appears so often in electronics, it is frequently abbreviated to MUX.

Figure 2.14 The use of gates—Example 2. (The right-hand circuit is the mixed logic version.)

  X Y Z | P = X'  Q = (P·Y)'  R = (X·Z)' | F = (Q·R)'
  0 0 0 |   1        1           1       |     0
  0 0 1 |   1        1           1       |     0
  0 1 0 |   1        0           1       |     1
  0 1 1 |   1        0           1       |     1
  1 0 0 |   0        1           1       |     0
  1 0 1 |   0        1           0       |     1
  1 1 0 |   0        1           1       |     0
  1 1 1 |   0        1           0       |     1

Table 2.9 Truth table for Fig. 2.14.

Figure 2.15 The logical representation of Figure 2.14. (An electronic switch connects the output F to input Y or input Z; the control input X selects Y or Z.)

We can derive an expression for F in terms of inputs X, Y, and Z in two ways. From the circuit diagram of Fig. 2.14, we can get an equation for F by writing the output of each gate in terms of its inputs.

  F = (Q·R)'
  Q = (Y·P)'
  P = X'
  Therefore Q = (Y·X')' by substituting for P
  R = (X·Z)'
  Therefore F = ((Y·X')'·(X·Z)')'

When we introduce Boolean algebra we will see how this type of expression can be simplified. Another way of obtaining a Boolean expression is to use the truth table. Each time a logical one appears in the output column, we can write down the set of inputs that cause the output to be true. In Table 2.9 the output is true when

  (1) X = 0, Y = 1, Z = 0  (X'·Y·Z')
  (2) X = 0, Y = 1, Z = 1  (X'·Y·Z)
  (3) X = 1, Y = 0, Z = 1  (X·Y'·Z)
  (4) X = 1, Y = 1, Z = 1  (X·Y·Z)

There are four possible combinations of inputs that make the output true. Therefore, the output can be expressed as the logical sum of the four cases (1)–(4) above; that is,

  F = X'·Y·Z' + X'·Y·Z + X·Y'·Z + X·Y·Z

This function is true if any of the conditions (1)–(4) is true. A function represented in this way is called a sum-of-products (S-of-P) expression because it is the logical OR (i.e. sum) of a group of terms, each composed of several variables ANDed together (i.e. products). A sum-of-products expression represents one of the two standard ways of writing down a Boolean expression.

An alternative way of writing a Boolean equation is called a product-of-sums (P-of-S) expression and consists of several terms ANDed together. The terms are made up of variables ORed together. A typical product-of-sums expression has the form

  F = (A + B + C)·(A + B' + C)·(A' + B + C')

Later we shall examine ways of converting sum-of-products expressions into product-of-sums expressions and vice versa.

Each of the terms (1)–(4) in Example 2 is called a minterm. A minterm is an AND (product) term that includes each of the variables in either its true or complemented form. For example, in the case above X·Y·Z is a minterm, but if we had had the term X·Y that would not be a minterm, because X·Y includes only two of the three variables. When an equation is expressed as a sum of minterms, it is said to be in its canonical form. Canonical is just a fancy word that means standard.

As the output of the circuit in Fig. 2.14 must be the same whether it is derived from the truth table or from the logic diagram, the two equations we have derived for F must be equivalent, with the result that

  ((Y·X')'·(X·Z)')' = X'·Y·Z' + X'·Y·Z + X·Y'·Z + X·Y·Z
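This equivalence is easy to confirm by brute-force enumeration. The Python fragment below (our own sketch; the helper NOT is simply a one-bit complement) evaluates both forms of F for all eight values of X, Y, and Z and also checks that each behaves as a two-input multiplexer.

    from itertools import product

    def NOT(a):
        return ~a & 1          # complement of a single bit

    for X, Y, Z in product((0, 1), repeat=3):
        # Gate-by-gate form from Fig. 2.14: F = ((Y.X')'.(X.Z)')'
        f_gates = NOT(NOT(Y & NOT(X)) & NOT(X & Z))
        # Sum-of-products form read from Table 2.9
        f_sop = (NOT(X) & Y & NOT(Z)) | (NOT(X) & Y & Z) | (X & NOT(Y) & Z) | (X & Y & Z)
        assert f_gates == f_sop == (Y if X == 0 else Z)
    print("Both expressions select Y when X = 0 and Z when X = 1.")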
36 Chapter 2 Gates, circuits, and combinational logic

This equation demonstrates that a given Boolean function can be expressed in more than one way.

The multiplexer of Fig. 2.14 may seem a very long way from computers and programming. However, multiplexers are found somewhere in every computer because computers operate by modifying the flow of data within a system. A multiplexer allows one of two data streams to flow through a switch that is electronically controlled. Let's look at a highly simplified example. The power of a digital computer (or a human brain) lies in its ability to make decisions. Decision taking in a computer corresponds to the conditional branch; for example, IF P = Q THEN Y ELSE X. We can't go into the details of how such a construct is implemented here. What we would like to do is to demonstrate that something as simple as a multiplexer can implement something as sophisticated as a conditional branch. Consider the system of Fig. 2.16. Two numbers P and Q are fed to a comparator where they are compared. If they are the same, the output of the comparator is 1 (otherwise it's 0). The same output is used as the control input to a multiplexer that selects between two values X and Y. In practice, such a system would be rather more complex (because P, Q, X, and Y are all multi-bit values), but the basic principles are the same.

Figure 2.16 Application of the multiplexer. (The comparator output is true if P = Q; it acts as the control input of a multiplexer whose output is Y if the control input is true, otherwise X, thereby implementing IF P = Q THEN Y ELSE X.)
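The idea behind Fig. 2.16 can be sketched in code (our own illustration, using single-bit data values for simplicity): the comparator produces a control bit and the multiplexer uses it to route one of two values to the output.

    def comparator(p, q):
        """1 if the two inputs are equal, otherwise 0."""
        return 1 if p == q else 0

    def mux(control, x, y):
        """Two-input multiplexer: output is y when control = 1, x when control = 0."""
        return (control & y) | ((~control & 1) & x)

    # IF P = Q THEN Y ELSE X, with example data values X = 0 and Y = 1
    for p, q in ((0, 0), (0, 1), (1, 0), (1, 1)):
        c = comparator(p, q)
        print(f"P={p}, Q={q}: control={c}, result={mux(c, x=0, y=1)}")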

Example 3 Figure 2.17 describes a simple circuit with three gates: an OR gate, an AND gate, and a NAND gate. This circuit has two inputs, two intermediate values, and one output. Table 2.10 provides its truth table.

The circuit of Fig. 2.17 represents one of the most important circuits in digital electronics, the exclusive or (also called EOR or XOR). The exclusive or corresponds to the normal English use of the word or (i.e. one or the other but not both). The output of an EOR gate is true if one of the inputs is true but not if both inputs are true.

An EOR circuit always has two inputs (remember that AND and OR gates can have any number of inputs). Because the EOR function is so widely used, the EOR gate has its own special circuit symbol (Fig. 2.18) and the EOR operator its own special logical symbol '⊕'; for example, we can write

  F = A EOR B = A ⊕ B

The EOR is not a fundamental gate because it is constructed from basic gates.

Because the EOR gate is so important, we will discuss it a little further. Table 2.10 demonstrates that F is true when A = 0 and B = 1, or when A = 1 and B = 0. Consequently, the output F = A'·B + A·B'. From the circuit in Fig. 2.17 we can write

  F = P·Q
  P = A + B
  Q = (A·B)'
  Therefore F = (A + B)·(A·B)'

As these two equations (i.e. F = A'·B + A·B' and F = (A + B)·(A·B)') are equivalent, we can therefore also build an EOR function in the manner depicted in Fig. 2.19.

It's perfectly possible to build an EOR with four NAND gates (Fig. 2.20). We leave it as an exercise for the reader to verify that Fig. 2.20 does indeed represent an EOR gate. To demonstrate that two different circuits have the same function, all you need do is to construct a truth table for each circuit. If the outputs are the same for each and every possible input, the circuits are equivalent.

Figure 2.17 The use of gates—Example 3. (Gate G1 produces P = A + B, gate G2 produces Q = (A·B)', and gate G3 produces F = P·Q.)

  A B | P = A + B  Q = (A·B)' | F = P·Q
  0 0 |     0          1      |    0
  0 1 |     1          1      |    1
  1 0 |     1          1      |    1
  1 1 |     1          0      |    0

Table 2.10 Truth table for the circuit of Fig. 2.17.
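The truth-table comparison suggested above takes only a few lines of Python (our own sketch, not part of the text). It checks that A'·B + A·B', the (A + B)·(A·B)' form of Fig. 2.17, and a standard four-NAND arrangement (which we assume matches the one drawn in Fig. 2.20) all equal A ⊕ B.

    from itertools import product

    def NOT(a):     return ~a & 1
    def NAND(a, b): return NOT(a & b)

    for A, B in product((0, 1), repeat=2):
        f_sop  = (NOT(A) & B) | (A & NOT(B))      # A'.B + A.B'
        f_pos  = (A | B) & NAND(A, B)             # (A + B).(A.B)', the circuit of Fig. 2.17
        t      = NAND(A, B)                       # four-NAND construction (assumed layout)
        f_nand = NAND(NAND(A, t), NAND(B, t))
        assert f_sop == f_pos == f_nand == (A ^ B)
    print("All three forms implement the exclusive OR.")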

The EOR is a remarkably versatile logic element that pops up in many places in digital electronics. The output of an EOR is true if its inputs are different and false if they are the same. As we've already stated, unlike the AND, OR, NAND, and NOR gates, the EOR gate can have only two inputs. The EOR gate's ability to detect whether its inputs are the same allows us to build an equality tester that indicates whether or not two words are identical (Fig. 2.21).

Figure 2.18 Circuit symbol for an EOR gate (C = A ⊕ B).

Figure 2.19 An alternative circuit for an EOR gate. (Gates G1 and G2 form A'·B and A·B'; gate G3 ORs them to give F = A'·B + A·B'.)

Figure 2.20 An EOR circuit constructed with NAND gates only (F = A ⊕ B).

In Fig. 2.21 two m-bit words (Word 1 and Word 2) are fed to a bank of m EOR gates. Bit i from Word 1 is compared with bit i from Word 2 in the ith EOR gate. If these two bits are the same, the output of this EOR gate is zero.

If the two words in Fig. 2.21 are equal, the outputs of all EORs are zero and we need to detect this condition in order to declare that Word 1 and Word 2 are identical. An AND gate will give a 1 output when all its inputs are 1. However, in this case, we have to detect the situation in which all inputs are 0. We can therefore connect all m outputs from the m EOR gates to an m-input NOR gate (because the output of a NOR gate is 1 if all inputs are 0).

If you look at Fig. 2.21 you can see that the outputs from the EOR gates aren't connected to a NOR gate but to an m-input AND gate with inverting inputs. The little bubbles at the AND gate's inputs indicate inversion and are equivalent to NOT gates. When all inputs to the AND gate are active-low, the AND gate's output will go active-high (exactly what we want). In mixed logic we can regard an AND gate with active-low inputs and an active-high output as a NOR gate.

Remember that we required an equality detector (i.e. comparator) in Fig. 2.16 (Example 2) to control a multiplexer. We've just built one.

Figure 2.21 The application of EOR gates in an equality tester. (Each EOR gate compares a pair of bits, bit m–1 down to bit 0, of Word 1 and Word 2; the m-input AND gate with active-low inputs produces F, which is high if Word 1 = Word 2.)
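The equality tester of Fig. 2.21 maps directly into code. In the Python sketch below (ours; the 4-bit word width is an arbitrary choice) each pair of bits is EORed and the final AND gate with inverting inputs is modelled by testing that every EOR output is 0.

    def equal_words(word1, word2):
        """Equality tester of Fig. 2.21 for two equal-length bit lists."""
        eor_outputs = [b1 ^ b2 for b1, b2 in zip(word1, word2)]
        # m-input NOR (an AND gate with active-low inputs): 1 only if all EOR outputs are 0.
        return 1 if not any(eor_outputs) else 0

    word1 = [1, 0, 1, 1]                      # bit m-1 ... bit 0
    print(equal_words(word1, [1, 0, 1, 1]))   # 1: the words are identical
    print(equal_words(word1, [1, 0, 0, 1]))   # 0: the words differ in one bit position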

Example 4 The next example of an important circuit constructed from a few gates is the prioritizer whose circuit is given in Fig. 2.22. As this is a rather more complex circuit than the previous three examples, we'll explain what it does first. A prioritizer deals with competing requests for attention and grants service to just one of those requesting attention. The prioritizer is a device with n inputs and n outputs. Each of the inputs is assigned a priority from 0 to n-1 (assume that the highest priority is input n-1, and the lowest is 0).

Figure 2.22 Example 4—the priority circuit. (Inputs x0 to x4 drive outputs y0 to y4 through AND gates G1 to G4; x4 is connected directly to y4.)

If two or more inputs are asserted simultaneously, only the output corresponding to the input with the highest priority is asserted. Computers use this type of circuit to deal with simultaneous requests for service from several peripherals (e.g. disk drives, the keyboard, the mouse, and the modem).

Consider the five-input prioritizer circuit in Fig. 2.22. The prioritizer's five inputs x0 to x4 are connected to the outputs of five devices that can make a request for attention (input x4 has the highest priority). That is, device i can put a logical 1 on input xi to request attention at priority level i. If several inputs are set to 1 at the same time, the prioritizer sets only one of its outputs to 1; all the other outputs remain at 0. For example, if the input is x4,x3,x2,x1,x0 = 00110, the output y4,y3,y2,y1,y0 = 00100, because the highest level of input is x2. Table 2.11 provides a truth table for this prioritizer.

If you examine the circuit of Fig. 2.22, you can see that output y4 is equal to input x4 because there is a direct connection. If x4 is 0, then y4 is 0; and if x4 is 1 then y4 is 1. The value of x4 is fed to the inputs of the AND gates in the lower priority stages via an inverter. If x4 is 1, the logical level at the inputs of those AND gates is 0, which disables them and forces their outputs to 0. If x4 is 0, the value fed to the AND gates is 1 and therefore they are not disabled by x4. Similarly, when x3 is 1, gates G3, G2, and G1 are disabled, and so on.

  x4 x3 x2 x1 x0 | y4 y3 y2 y1 y0
  0  0  0  0  0  | 0  0  0  0  0
  0  0  0  0  1  | 0  0  0  0  1
  0  0  0  1  0  | 0  0  0  1  0
  0  0  0  1  1  | 0  0  0  1  0
  0  0  1  0  0  | 0  0  1  0  0
  0  0  1  0  1  | 0  0  1  0  0
  0  0  1  1  0  | 0  0  1  0  0
  0  0  1  1  1  | 0  0  1  0  0
  0  1  0  0  0  | 0  1  0  0  0
  0  1  0  0  1  | 0  1  0  0  0
  0  1  0  1  0  | 0  1  0  0  0
  0  1  0  1  1  | 0  1  0  0  0
  0  1  1  0  0  | 0  1  0  0  0
  0  1  1  0  1  | 0  1  0  0  0
  0  1  1  1  0  | 0  1  0  0  0
  0  1  1  1  1  | 0  1  0  0  0
  1  0  0  0  0  | 1  0  0  0  0
  1  0  0  0  1  | 1  0  0  0  0
  1  0  0  1  0  | 1  0  0  0  0
  1  0  0  1  1  | 1  0  0  0  0
  1  0  1  0  0  | 1  0  0  0  0
  1  0  1  0  1  | 1  0  0  0  0
  1  0  1  1  0  | 1  0  0  0  0
  1  0  1  1  1  | 1  0  0  0  0
  1  1  0  0  0  | 1  0  0  0  0
  1  1  0  0  1  | 1  0  0  0  0
  1  1  0  1  0  | 1  0  0  0  0
  1  1  0  1  1  | 1  0  0  0  0
  1  1  1  0  0  | 1  0  0  0  0
  1  1  1  0  1  | 1  0  0  0  0
  1  1  1  1  0  | 1  0  0  0  0
  1  1  1  1  1  | 1  0  0  0  0

Table 2.11 Truth table for the priority circuit of Fig. 2.22.
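The behavior recorded in Table 2.11 follows directly from this gating: each output yi is xi ANDed with the complements of all higher-priority inputs. A short Python sketch of the five-input case (our own model of Fig. 2.22, not code from the text):

    def prioritize(x4, x3, x2, x1, x0):
        """Five-input prioritizer; x4 has the highest priority."""
        y4 = x4
        y3 = x3 & ~x4 & 1                     # suppressed whenever x4 is asserted
        y2 = x2 & ~x4 & ~x3 & 1
        y1 = x1 & ~x4 & ~x3 & ~x2 & 1
        y0 = x0 & ~x4 & ~x3 & ~x2 & ~x1 & 1
        return (y4, y3, y2, y1, y0)

    print(prioritize(0, 0, 1, 1, 0))   # x4..x0 = 00110 -> (0, 0, 1, 0, 0)
    print(prioritize(1, 1, 0, 1, 1))   # any request with x4 = 1 -> (1, 0, 0, 0, 0)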
Example 5 Our final example looks at two different circuits that do the same thing. This is a typical exam question.

(a) Using AND, OR, and NOT gates only, draw circuits to generate P and Q from inputs X, Y, and Z, where P = (X + Y')(Y ⊕ Z) and Q = Y'·Z + X·Y·Z'.
(b) By means of a truth table establish a relationship between P and Q.
(c) Compare the circuit diagrams of P and Q in terms of speed and cost of implementation.

(a) The circuit diagram for P = (X + Y')(Y ⊕ Z) is given by Fig. 2.23 and the circuit diagram for Q = Y'·Z + X·Y·Z' is given by Fig. 2.24.
(b) The truth table for functions P and Q is given in Table 2.12, from which it can be seen that P = Q.
(c) We can compare the two circuits in terms of speed and cost.

COMPARING DIFFERENT DIGITAL CIRCUITS WITH THE SAME FUNCTION

Different combinations of gates may be used to implement the same function. This isn't the place to go into the detailed design of logic circuits, but it is interesting to see how the designer might go about selecting one particular implementation in preference to another. Some of the basic criteria by which circuits are judged are listed below. In general, the design of logic circuits is often affected by other factors than those described here.

Speed The speed of a circuit (i.e. how long it takes the output to respond to a change at an input) is approximately governed by the maximum number of gates through which a change of state must propagate (i.e. pass). The output of a typical gate might take 5 ns to change following a logic change at its input (5 ns = 5 × 10⁻⁹ s). Figs 2.17 and 2.19 both implement an EOR function. In Fig. 2.17 there are only two gates in series, whereas in Fig. 2.19 there are three gates in series. Therefore the implementation of an EOR function in Fig. 2.17 is 50% faster. Real gates don't all have the same propagation delay, because some gates respond more rapidly than others.

Number of interconnections It costs money to wire gates together. Even if a printed circuit is used, somebody has to design it and the more interconnections used the more it will cost. Increasing the number of interconnections in a circuit also increases the probability of failure due to a faulty connection. One parameter of circuit design that takes account of the number of interconnections is the total number of inputs to gates. In Fig. 2.17 there are six inputs, whereas in Fig. 2.19 there are eight inputs.

Number of packages Simple gates of the types we describe here are available in 14-pin packages (two pins of which are needed for the power supply). As it costs virtually nothing to add extra gates to the silicon chip, only the number of pins (i.e. external connections to the chip) limits the total number of gates in a physical package. Thus, an inverter requires two pins, so that six inverters are provided on the chip. Similarly, a two-input AND/NAND/OR/NOR gate needs three pins, so four of these gates are put on the chip. Because each of these circuits uses three different types of gate, both circuits require three 14-pin integrated circuits. Even so, the circuit of Fig. 2.17 is better than that of Fig. 2.19 because there are more unused gates left in the ICs, freeing them for use by other parts of the computer system. Note that the circuit of Fig. 2.20 uses only one package because all gates are the same type.

You should appreciate that this is an introductory text and what we have said is appropriate only to logic circuits constructed from basic logic elements. Computer-aided design techniques are used to handle more complex systems with hundreds of gates. Indeed, complex circuits are largely constructed from programmable digital elements.

Figure 2.23 Circuit diagram for P. (Intermediate signals include Y', X + Y', and Y ⊕ Z formed as Y·Z' + Y'·Z.)

Figure 2.24 Circuit diagram for Q. (Intermediate signals include Y', Y'·Z, and X·Y·Z'.)

Propagation delay The maximum delay in the circuit for P is four gates in series in the Y path (i.e. NOT gate, AND gate, OR gate, AND gate). The maximum delay in the circuit for Q is three gates in series in both the Y and Z paths (i.e. NOT gate, AND gate, OR gate). Therefore the circuit for Q is 33% faster than that for P.

Cost The total number of gates needed to implement P is 7. The total number of gates needed to implement Q is 5. The total number of inputs in the circuit for P is 12. The total number of inputs in the circuit for Q is 9. Clearly, the circuit for Q is better than that for P both in terms of the number of gates and the number of inputs to the gates.

  X Y Z | X + Y' | Y ⊕ Z | P = (X + Y')(Y ⊕ Z) | Y'·Z | X·Y·Z' | Q = Y'·Z + X·Y·Z'
  0 0 0 |   1    |   0   |          0          |  0   |   0    |        0
  0 0 1 |   1    |   1   |          1          |  1   |   0    |        1
  0 1 0 |   0    |   1   |          0          |  0   |   0    |        0
  0 1 1 |   0    |   0   |          0          |  0   |   0    |        0
  1 0 0 |   1    |   0   |          0          |  0   |   0    |        0
  1 0 1 |   1    |   1   |          1          |  1   |   0    |        1
  1 1 0 |   1    |   1   |          1          |  0   |   1    |        1
  1 1 1 |   1    |   0   |          0          |  0   |   0    |        0

Table 2.12 Truth table for Figs 2.23 and 2.24.
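As with the earlier examples, the relationship P = Q can be confirmed by enumeration. A Python fragment of ours that checks all eight rows of Table 2.12:

    from itertools import product

    def NOT(a): return ~a & 1

    for X, Y, Z in product((0, 1), repeat=3):
        P = (X | NOT(Y)) & (Y ^ Z)              # P = (X + Y').(Y xor Z)
        Q = (NOT(Y) & Z) | (X & Y & NOT(Z))     # Q = Y'.Z + X.Y.Z'
        assert P == Q
    print("P and Q are identical functions of X, Y, and Z.")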

Figure 2.25 Digital Works—the initial screen. (Annotated tools: the pointer tool, which allows you to select an object in the window and, via the right-hand mouse button, alter the object's properties or act on it in some way; the push button input tool, which lets you create a 0 or a 1 input; the LED symbol, which lets you see the state of a point in the circuit; the wiring tool, which lets you wire gates together; the hand tool, which allows you to set the state of a switch; and a typical gate symbol: click on it, move the mouse to where you want the gate to be positioned, and click again to place the gate there.)

2.4 Introduction to Digital Works

We now introduce a Windows-based logic simulator called Digital Works that enables you to construct a logic circuit from simple gates (AND, OR, NOT, NAND, NOR, EOR, XNOR) and to analyze the circuit's behavior. Digital Works also supports the tri-state logic gate that enables you to construct systems with buses. In the next chapter we will discover that Digital Works simulates both simple 1-bit storage elements called flip-flops and larger memory components such as ROM and RAM.

After installing Digital Works on your system, you can run it to get the initial screen shown in Fig. 2.25. We have annotated six of the most important icons on the toolbars.

A circuit is constructed by using the mouse to place gates on the screen or workspace and a wiring tool to connect the gates together. The input to your circuit may come from a clock generator (a continuous series of alternating 1s and 0s), a sequence generator (a user-defined sequence of 1s and 0s), or a manual input (from a switch that you can push by means of the mouse). You can observe the output of a gate by connecting it to a display, an LED. You can also send the output of the LED to a window that displays either a waveform or a sequence of binary digits.

Digital Works has been designed to be consistent with the Windows philosophy and has a help function that provides further information about its facilities and commands. The File command in the top toolbar provides the options you would expect (e.g. load, save, save as).

2.4.1 Creating a circuit

We are going to design and test an EOR circuit that has the logic function A'·B + A·B'. This function can be implemented with two inverters, two AND gates, and an OR gate. Figure 2.26 shows three of the icons we are going to use to create this circuit. The first icon is the new circuit icon that creates a fresh circuit (which Digital Works calls a macro). The second icon is the pointer tool used to select a gate (or other element) from the toolbars. The third icon is a gate that can be planted in the work area.

Let's start by planting some gates on the work area. The EOR requires two AND gates, an OR gate, and two inverters. First click on the pointer tool on the bottom row of icons. If it hasn't already been selected, it will become depressed when you select it. The pointer tool remains selected until another tool is selected.

You select a gate from the list on the second row of icons by first left clicking on the gate with the pointer tool and then left clicking at a suitable point in the workspace, as Fig. 2.27 demonstrates. If you hold the control key down when placing a gate, you can place multiple copies of the gate in the workspace. The OR gate is shown in broken outline because we've just placed it (i.e. it is currently selected). Once a gate has been placed, you can select it with the mouse by clicking the left button and drag it wherever you want. You can click the right button to modify the gate's attributes (e.g. the number of inputs).

You can tidy up the circuit by moving the gates within the work area by left clicking a gate and dragging it to where you want it. Figure 2.28 shows the work area after we've moved the gates to create a symmetrical layout. You can even drag gates around the work area after they've been wired up and reposition wires by left clicking and dragging any node (a node is a point on a wire that consists of multiple sections or links).

Digital Works displays a grid to help you position the gates. The grid can be turned on or off and the spacing of the grid lines changed. Objects can be made to snap to the grid. These functions are accessed via the View command in the top line.

Before continuing, we need to save the circuit. Figure 2.29 demonstrates how we use the conventional File function in the toolbar to save a circuit. We have called this circuit OUP_EOR1 and Digital Works inserts the extension .dwm.

The next step is to wire up the gates to create a circuit. First select the wiring tool from the toolbars by left clicking on it (Fig. 2.30). Then position the cursor over the point at which you wish to connect a wire and left click. The cursor changes to a wire when it's over a point that can legally be connected to. Left click to attach a wire and move the cursor to the point you wish to connect. Left click to create a connection. Instead of making a direct connection between two points, you can

This is one of the gates


that you can select and put
in the work area.

This is the pointer tool and


This icon creates a new is the most important icon
macro. If a circuit is because you use it to
already open, you will be select objects.
invited to save it.

Figure 2.26 Beginning a session with Digital Works.

