Hardware Computer Organization-I

The document discusses the basic components and organization of computer systems based on the Von Neumann architecture. It describes how computer systems are built from elementary blocks like transistors and gates. The key components are the memory, central processing unit (CPU), input/output (I/O) devices, and the control unit that coordinates operations. The CPU contains the arithmetic logic unit (ALU) for computations. The document focuses on how memory is organized and accessed in Von Neumann systems through addressing, decoding, and the use of registers like the memory address register and memory data register for fetching and storing data. It introduces the concept of cache memory to speed up memory access times.


Hardware: Computer Systems Organization

Computer Systems Organization

 Transistors, gates, logic circuits are the elementary


building blocks of computer systems.
 To understand how a computer processes information, we
must study computers as collections of functional units or
subsystems that perform tasks such as instruction
processing, information storage, computation, and data
transfer.
 These functional units are built out of the elementary
blocks.


 The Components of a Computer system


 There is a huge number of computer systems on the market.
 Computers differ in speed, memory capacity, input/output
capabilities, and available software.
 Examples: supercomputers, workstations, laptops, and
smartphones.
 However, in spite of all these differences, virtually every
computer in use today is based on a single design.
 The structure and organization of virtually all computers
are based on a single theoretical model of computer design
called the Von Neumann architecture.
 It is named after the mathematician John von Neumann,
who proposed it in 1946.

 Von Neumann architecture

 Four major subsystems: Memory, Input/output, the


arithmetic/logic unit (ALU) and the control unit.


 The stored-program concept: the instructions to be executed
are stored in memory in binary form.
 The sequential execution of instructions: instructions are
fetched from memory one at a time and passed to the
control unit, where each is decoded and executed.

 Memory
 Memory is the functional unit of a computer that stores and
retrieves the instructions and data being processed.
 All information stored in memory is represented
internally using the binary numbering system.
 Computer memory uses an access technique called
random access.
 The acronym “RAM” (random access memory) is frequently
used to refer to the memory unit.

 Memory is divided into fixed-size units called cells,
and each cell is associated with a unique identifier
called an address. These addresses are the unsigned integers
0, 1, 2, … up to some maximum limit.
 All accesses to memory are to a specified address, and we
must always fetch or store a complete cell (all the bits in
the cell).
 The time it takes to fetch or store the contents of a cell
is the same for all the cells in memory.

 In addition, there is ROM (read-only memory), which is also
random access memory, but into which information has
been prerecorded during manufacture.
 ROM is used to hold important system instructions
and data.

 The memory unit is made up of cells that contain a
fixed number of binary digits.
 The number of bits per cell is called the cell size or
the memory width, and it is usually denoted as W.
 Earlier generations of computers had no standardized
value for the cell size.
 Today computer manufacturers use a standard cell size
of 8 bits (1 byte).
What will be the largest
unsigned integer value
that can be stored in a
cell (W = 8)?
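The answer to this question can be checked with a quick sketch (variable names are my own, not from the slides):

```python
# The largest unsigned integer storable in a W-bit cell is 2**W - 1
# (all W bits set to 1).
W = 8
largest = 2**W - 1
print(largest)  # 255
```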
 Many computers use 2 or 4 bytes to store integers and
4 or 8 bytes to store real numbers.
 It therefore takes multiple memory accesses to fetch a
single data item.
 Each memory cell in RAM is identified by a unique
unsigned integer address.
 If N bits are used to represent the address of a cell, then
the smallest address is 0 and the largest address is a
string of N 1s.

 The range of addresses available on a computer is 0 to
2^N − 1.
 Typical values of N in the 1960s and 1970s were
16, 20, 22, and 24.
 Today computers commonly use 64-bit addresses.
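The address-range formula is easy to tabulate; this short sketch (my own illustration) prints the largest address for several values of N:

```python
# Largest address for an N-bit address field: 2**N - 1.
for N in (16, 20, 22, 24, 32, 64):
    print(f"N = {N}: addresses 0 .. {2**N - 1:,}")
```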

 Fetch/store controller
 Fetch: retrieve a value from memory
 Store: store a value into memory
 Memory address register (MAR)
 Memory data register (MDR)
 Memory cells, with decoder(s) to select individual
cells

 The Memory Address Register (MAR) holds the
address of the cell to be fetched or stored. Because the
MAR must be capable of holding any address, it must
be at least N bits wide.
 The Memory Data Register (MDR) contains the data
value being fetched or stored. We might be tempted to
say that the MDR should be W bits wide, where W is
the cell size; in practice it is usually a multiple of the
cell size, so that several cells can be transferred at once.

 Fetch operation
 The address of the desired memory cell is moved
into the MAR
 Fetch/store controller signals a “fetch,” accessing
the memory cell
 The value at the MAR’s location flows into the
MDR

 Store operation
 The address of the cell where the value should go
is placed in the MAR
 The new value is placed in the MDR
 Fetch/store controller signals a “store,” copying
the MDR’s value into the desired cell
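The fetch and store sequences above can be sketched as a toy simulation; `memory`, `fetch`, and `store` are illustrative names of my own, not part of any real machine:

```python
# Toy model of a memory unit with MAR and MDR registers.
memory = [0] * 16          # 16 cells, addresses 0..15 (N = 4)

def fetch(address):
    MAR = address          # step 1: address of the desired cell moved into the MAR
    MDR = memory[MAR]      # steps 2-3: controller signals "fetch";
    return MDR             #   the value at the MAR's location flows into the MDR

def store(address, value):
    MAR = address          # step 1: target address placed in the MAR
    MDR = value            # step 2: new value placed in the MDR
    memory[MAR] = MDR      # step 3: controller signals "store";
                           #   the MDR's value is copied into the cell

store(5, 42)
print(fetch(5))  # 42
```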

 The operation “Decode the address in the MAR”
means that the memory unit must translate the N-bit
address stored in the MAR into the set of signals
needed to access that one specific memory cell.
This can be done using an ordinary decoder circuit.

 Traditional 2-to-4 decoder
[Figure: a 2-to-4 decoder circuit; the two input lines select one of four output lines]
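A 2-to-4 decoder can be described in a few lines of code (a behavioral sketch, not hardware):

```python
# 2-to-4 decoder: the two input bits, read as a binary number,
# select exactly one of the four output lines.
def decode_2_to_4(a1, a0):
    selected = a1 * 2 + a0
    return [1 if line == selected else 0 for line in range(4)]

print(decode_2_to_4(1, 0))  # [0, 0, 1, 0] -- output line 2 is high
```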

 Memory access using a decoder (4-bit address)

 This single-decoder design does not scale very well; it cannot
be used to build a large memory unit. In modern computers a
typical value for N, the number of bits used to
represent an address, is 32. A decoder circuit with 32
input lines would have 2^32, or more than 4 billion,
output lines.

 Two-dimensional memory organization: the cells are arranged
in a square grid, and the address is split into a row half and a
column half, each sent to its own much smaller decoder.
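The benefit of the two-dimensional layout can be seen by counting decoder output lines; this arithmetic sketch assumes the address is split evenly into row and column halves:

```python
# One decoder on a 32-bit address needs 2**32 output lines; splitting the
# address into a 16-bit row part and a 16-bit column part needs only
# 2**16 + 2**16 lines in total.
N = 32
single_decoder = 2**N
row_lines = col_lines = 2**(N // 2)
two_dimensional = row_lines + col_lines
print(single_decoder, two_dimensional)  # 4294967296 vs 131072
```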

 Cache Memory

 As computers became faster, the processor sat
idle waiting for data or instructions to arrive.
 As a solution to this problem, cache memory was introduced.

 Von Neumann described only a single type of memory
 Whenever the computer needed an instruction or a piece
of data, Von Neumann simply assumed it would get it
from RAM using the fetch operation just described.
 Computer designers introduced cache based on an
observation called the Principle of Locality: when a
computer accesses an instruction or a piece of data,
 it will likely access that same instruction or piece of data
in the very near future, and
 it will likely access the instructions or data that are
located near that item, where “near” means
an address whose numerical value is close to this one.
 When the computer needs a piece of information, it does
not immediately do the memory fetch operation
described earlier. Instead, it carries out the following
three steps:
1. Look first in cache memory to see whether the
information is there. If it is, then the computer can access
it at the higher speed of the cache.
2. If the desired information is not in the cache, then access
it from RAM at the slower speed, using the fetch
operation described earlier.

3. Copy the data just fetched into the cache along with the k
immediately following memory locations. If the cache is
full, then discard some of the older items that have not
recently been accessed. (The assumption is that we will
not need them again for a while.)

 Assume that the average access time of our RAM is 10
nsec
 The average access time of the cache is 2 nsec
 Assume that the information we need is in the cache 70%
of the time
 70% of the time we get what we need in 2 nsec
 30% of the time we have wasted that 2 nsec because the
information is not in the cache and must be obtained
from RAM

Total time ?
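One way to answer "Total time?" — assuming a miss pays for the 2 nsec cache check plus the 10 nsec RAM access:

```python
# Average access time with a 70% cache hit rate.
hit_rate = 0.70
cache_ns = 2
ram_ns = 10
average = hit_rate * cache_ns + (1 - hit_rate) * (cache_ns + ram_ns)
print(round(average, 1))  # 5.0 nsec -- half the 10 nsec of RAM alone
```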
 Input/Output and Mass Storage
 The input/output (I/O) units are the devices that allow a
computer system to communicate and interact with the
outside world as well as store information.
 Nonvolatile storage is the role of mass storage devices
such as disks and tapes.

 Input/output devices come in two basic types: those that
represent information in human-readable form for human
consumption, and those that store information in
machine-readable form for access by a computer system.
 The first type includes I/O devices such as keyboards,
screens, and printers.
 The second type, mass storage systems, includes floppy
disks, flash memory, hard disks, CDs, DVDs, and
streaming tapes.

 Mass storage devices themselves come in two distinct
forms: direct access storage devices (DASDs) and
sequential access storage devices (SASDs).
 In a direct access storage device, every unit of information
still has a unique address, but the time needed to access
that information depends on its physical location and the
current state of the device.
 The best examples of DASDs are the types of disks listed
earlier: hard disks, floppy disks, CDs, DVDs, and so on.

 A disk stores information in units called sectors, each of
which contains an address and a data block containing a
fixed number of bytes:

 A fixed number of these sectors are


placed in a concentric circle on the
surface of the disk, called a track:

 Disk storage

 Seek time is the time needed to position the read/write head
over the correct track.
 Latency is the time for the beginning of the desired sector to
rotate under the read/write head.
 Transfer time is the time for the entire sector to pass under the
read/write head and have its contents read into or written from
memory.
 Let’s assume a disk drive with the following physical
characteristics:
 Rotation speed = 7,200 rev/min = 120 rev/sec = 8.33
msec/rev (1 msec = 0.001 sec)
 Arm movement time = 0.02 msec to move to an adjacent
track (i.e., moving from track i to either track i+1 or i-1)
 Number of tracks/surface = 1,000 (numbered 0 to 999)
 Number of sectors/track = 64
 Number of bytes/sector = 1,024

 The access time for this disk can be determined as
follows.
 Seek Time
 Best case = 0 msec (no arm movement)
 Worst case = 999 x 0.02 = 19.98 msec (move from
track 0 to track 999)
 Average case = 300 x 0.02 = 6 msec (assume that on
average, the read/write head must move about 300
tracks)

 Latency
 Best case = 0 msec (sector is just about to come
under the read/write head)
 Worst case = 8.33 msec (we just missed the sector
and must wait one full revolution)
 Average case = 4.17 msec (one-half a revolution)

 Transfer
 1/64 x 8.33 msec = 0.13 msec (the time for one
sector, or 1/64th of a track, to pass under the
read/write head; this time will be the same for all
sectors)
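Combining the three components gives the average time to read one sector; this sketch simply redoes the slide arithmetic:

```python
# Average access time = average seek + average latency + transfer time.
rev_msec = 1000 / 120            # 8.33 msec per revolution at 7,200 rpm
seek = 300 * 0.02                # ~300 tracks of arm movement = 6 msec
latency = rev_msec / 2           # half a revolution = 4.17 msec
transfer = rev_msec / 64         # one sector out of 64 = 0.13 msec
total = seek + latency + transfer
print(round(total, 1))           # about 10.3 msec -- roughly a million
                                 # times slower than a 10 nsec RAM access
```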

 A sequential access storage device behaves just like the
old audio cassette tapes of the 1980s and 1990s.

 Typical memory access time is about 10 nsec, while the time
to complete the I/O operation “locate and read one disk
sector” is on the order of milliseconds, far longer.
 The solution to this problem is to use a device called an
I/O controller to compensate for the speed differences
between memory and the I/O devices.

 An I/O controller compensates for the speed differences
between I/O devices and other parts of the computer.
 It has a small amount of memory, called an I/O buffer,
and enough I/O control and logic processing capability
to handle the mechanical functions of the I/O device,
such as the read/write head, paper-feed mechanism, and
screen display.
 It is also able to transmit to the processor a special
hardware signal, called an interrupt signal, when an I/O
operation is done.

 Let’s assume that we want to display one line (80
characters) of text on a screen.
 First the 80 characters are transferred from their current
location in memory to the I/O buffer storage within the
I/O controller.
 Once this information is in the I/O buffer, the processor
can instruct the I/O controller to begin the output
operation.

 The control logic of the I/O controller handles the actual
transfer and display of these 80 characters to the screen.
 This transfer may be at a much slower rate—perhaps
only hundreds or thousands of characters per second.
 However, the processor does not sit idle during this
output operation. It is free to do something else, perhaps
work on another program.
