Computer Fundamental Theory


I/O Interface

The method used to transfer information between internal storage and external I/O devices is known as the I/O interface. Special hardware components, called interface units, sit between the CPU and the peripherals to supervise and synchronize all input and output transfers.
Mode of Transfer:
The binary information received from an external device is usually stored in the memory unit. Information transferred from the CPU to an external device originates from the memory unit. The CPU merely processes the information; the source and destination are always the memory unit. Data transfer between the CPU and the I/O devices may be carried out in any of three possible ways:
1. Programmed I/O: Transfers are the result of I/O instructions written in the computer program. Each data item transfer is initiated by an instruction in the program. Usually the transfer is between a CPU register and memory. This mode requires constant monitoring of the peripheral devices by the CPU.
Example of Programmed I/O: In this case, the I/O device does not have direct access to the memory unit. A transfer from an I/O device to memory requires the execution of several instructions by the CPU, including an input instruction to transfer the data from the device to the CPU and a store instruction to transfer the data from the CPU to memory. In programmed I/O, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process, since it needlessly keeps the CPU busy. This situation can be avoided by using an interrupt facility.
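As a rough illustration, here is a minimal programmed-I/O sketch in C. The STATUS_REG and DATA_REG addresses and the READY bit are hypothetical memory-mapped device registers invented for the example, not taken from any real machine.

    /* Hypothetical memory-mapped device registers (addresses are illustrative). */
    #define STATUS_REG (*(volatile unsigned char *)0xFF00)
    #define DATA_REG   (*(volatile unsigned char *)0xFF01)
    #define READY      0x01   /* status bit set by the device when a byte is available */

    /* Programmed I/O: the CPU busy-waits until the device is ready,
     * then moves each byte through a CPU register into memory. */
    void read_block_programmed(unsigned char *buf, int n)
    {
        for (int i = 0; i < n; i++) {
            while ((STATUS_REG & READY) == 0)
                ;                      /* CPU is tied up polling the status flag */
            buf[i] = DATA_REG;         /* input to a CPU register, then store to memory */
        }
    }

The polling loop is exactly the "program loop" described above: the CPU does nothing useful while it waits.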
2. Interrupt-initiated I/O: In programmed I/O the CPU is kept busy unnecessarily. This can be avoided by using an interrupt-driven method of data transfer: using the interrupt facility, special commands inform the interface to issue an interrupt request signal whenever data is available from the device. In the meantime the CPU can proceed with the execution of another program while the interface keeps monitoring the device. When the interface determines that the device is ready for data transfer, it issues an interrupt request signal to the computer. Upon detecting the external interrupt signal, the CPU momentarily stops the task it is performing, branches to the service program to process the I/O transfer, and then returns to the task it was originally performing. (A minimal service-routine sketch appears after the list of interrupt terms below.)
 The I/O transfer rate is limited by the speed with which the
processor can test and service a device.
 The processor is tied up in managing an I/O transfer; a number
of instructions must be executed for each I/O transfer.
 Terms:
 Hardware interrupts: interrupts delivered through dedicated hardware pins.
 Software interrupts: instructions placed in the program and executed whenever the required functionality is needed.
 Vectored interrupts: interrupts associated with a static (fixed) vector address.
 Non-vectored interrupts: interrupts associated with a dynamic vector address.
 Maskable interrupts: interrupts that can be enabled or disabled explicitly.
 Non-maskable interrupts: interrupts that are always enabled; we cannot disable them.
 External interrupts: generated by external devices such as I/O devices.
 Internal interrupts: generated by internal components of the processor, such as a power failure, an erroneous instruction, or a temperature sensor.
 Synchronous interrupts: interrupts that occur in step with the CPU's instruction execution; all internal interrupts are synchronous.
 Asynchronous interrupts: interrupts that can arrive at any time, independently of the instruction being executed; all external interrupts are asynchronous.
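To contrast with the polling loop shown earlier, here is a minimal interrupt-driven sketch for the same hypothetical device. The register_isr and enable_device_interrupt helpers stand in for whatever registration mechanism a real platform provides; all names are invented for the example.

    /* Same hypothetical device data register as in the programmed-I/O sketch. */
    #define DATA_REG   (*(volatile unsigned char *)0xFF01)
    #define DEVICE_IRQ 5                       /* illustrative interrupt line */

    static volatile unsigned char rx_byte;
    static volatile int data_ready = 0;

    /* Runs only when the interface signals that the device is ready. */
    static void device_isr(void)
    {
        rx_byte = DATA_REG;                    /* fetch the byte from the interface */
        data_ready = 1;                        /* flag completion for the main program */
    }

    /* Stand-ins for the platform's registration mechanism (no-ops here). */
    static void register_isr(int irq, void (*handler)(void)) { (void)irq; (void)handler; }
    static void enable_device_interrupt(int irq)             { (void)irq; }

    void start_interrupt_driven_io(void)
    {
        register_isr(DEVICE_IRQ, device_isr);
        enable_device_interrupt(DEVICE_IRQ);
        /* The CPU is now free to run other programs; no polling loop is needed.
         * When the device raises its interrupt, device_isr runs and sets data_ready. */
    }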

3. Direct Memory Access (DMA): The data transfer between a fast storage medium such as a magnetic disk and the memory unit is limited by the speed of the CPU. We can therefore allow the peripherals to communicate directly with memory over the memory buses, removing the intervention of the CPU. This data transfer technique is known as DMA, or direct memory access. During DMA the CPU is idle and has no control over the memory buses; the DMA controller takes over the buses to manage the transfer directly between the I/O devices and the memory unit.
In burst (block transfer) mode, the DMA controller:
1. Requests the bus (bus grant request time).
2. Transfers the entire block of data at the transfer rate of the device, because the device is usually slower than the speed at which the data can be transferred to the CPU.
3. Releases control of the bus back to the CPU.
So, the total time taken to transfer N bytes = bus grant request time + N × (memory transfer time per byte) + bus release control time.
In cycle-stealing mode, the controller instead transfers one byte at a time, repeating these steps for every byte:
1. Buffer the byte into the controller's local buffer.
2. Inform the CPU that the device has 1 byte to transfer (i.e. bus grant request).
3. Transfer the byte (at system bus speed).
4. Release the control of the bus back to the CPU.
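A small worked example of the burst-mode timing formula above; all timing values are purely illustrative.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values, not from any real controller. */
        double bus_grant_us   = 2.0;    /* bus grant request time, microseconds */
        double per_byte_us    = 0.1;    /* memory transfer time per byte        */
        double bus_release_us = 1.0;    /* bus release control time             */
        long   n_bytes        = 4096;   /* size of the block moved in one burst */

        /* Total time = bus grant + N * (memory transfer time per byte) + bus release */
        double total_us = bus_grant_us + n_bytes * per_byte_us + bus_release_us;
        printf("burst-mode transfer of %ld bytes: %.1f microseconds\n", n_bytes, total_us);
        return 0;
    }

With these numbers the program prints 412.6 microseconds (2.0 + 4096 × 0.1 + 1.0).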

A bus in computer architecture is a medium for transmitting information between devices. In a typical computer, you have a central processing unit, system memory, and some I/O devices that need to communicate with each other. Buses connect all of these components together as a shared transmission medium. This means a signal sent by any device can be received by any other device connected to the bus. It also means that only one device can send at a time: if more than one signal traveled on the bus during a clock cycle, the signals would overlap.

The System Bus

Buses typically consist of 50-100 lines that connect various components in a computer. System buses generally enable communication between the CPU, the main memory, and I/O. I/O devices can be anything from graphics processing units to printers and keyboards. They are usually connected to the system bus via an external bus using a so-called bridge. System buses generally fall into one of three categories.

o The lines of the data bus are used to move data between
system components.
o The address bus communicates the address of the data to be retrieved between the CPU and main memory or an I/O storage device. The width of a bus that does not use multiplexing determines how many memory locations are addressable. For example, a 32-bit address bus yields a total of 2^32 ≈ 4.29 billion possible address locations (a short worked example follows this list).
o The control bus is used to control how the components access
the data bus and the address bus. Since all components have
access to the bus, control is important to prevent signals from
overlapping.
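A short worked example of the address-bus width calculation mentioned in the address bus entry above.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int width = 32;                              /* address bus width in bits  */
        uint64_t locations = 1ULL << width;          /* 2^32 addressable locations */
        printf("%d-bit address bus: %llu locations\n",
               width, (unsigned long long)locations); /* 4,294,967,296, about 4.29 billion */
        /* With byte-addressable memory this corresponds to 4 GiB of address space. */
        return 0;
    }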

To send data, a component must acquire control over the control bus
and then send data via the data bus. To receive data, a component must
obtain control via the control bus and then specify the address of the
data it wants via the address bus.

Memory Bus vs. Address Bus vs. Data Bus

People are often confused by the terms used to describe the different types of buses.
The term memory bus is often used interchangeably with the data bus.
But actually, the memory bus describes the whole internal bus system
that is required for the processor to communicate with memory. It is,
thus, one level higher on the abstraction hierarchy than the address bus
and the data bus.

In order to obtain data from memory, you need to know where that data
is stored. The address bus is used by the processor to send the memory
address to the main memory. Since the address only flows in one
direction, the address bus is unidirectional. The data bus then enables
the processor to retrieve data from the specified location or send data
to the specified memory location. It is, thus, bidirectional.

The memory bus consists of the data bus and the address bus
Control Bus
The job of the control bus is to coordinate the orderly transfer of data
and addresses between the processor on the one hand, and the main
memory and I/O devices on the other hand.

How do Control Signals Work

Control buses have a set of command and status signals to control how data is transmitted. These signals control the timing of information transmission as well as the types of operations to be performed. (A toy sketch of a write transaction follows the list below.)

o A memory write command specifies that data currently on the data bus should be written into the location given on the address bus.
o A memory read command specifies that data from the location given on the address bus should be placed on the data bus.
o An I/O write command causes data on the data bus to be output to the addressed I/O port.
o An I/O read command causes data to be read from the addressed I/O port and placed on the data bus.
o A Transfer ACK or acknowledgment signal means that the data has been cleared for transfer and thus placed on or received from the bus.
o Using a bus request signal, a module can acquire control of the bus.
o A module can be granted control of the bus through a bus grant signal.
o An interrupt request signals that an interrupt is pending.
o The clock line synchronizes operations across the bus.
o A reset signal resets all modules.
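As a rough illustration of how these signals cooperate, the toy model below logs the order of events in a single memory-write transaction. The command encoding and helper functions are invented for the example and do not correspond to any real bus standard.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy encoding of a few control-bus commands (illustrative only). */
    enum bus_command { MEM_WRITE, MEM_READ, IO_WRITE, IO_READ };

    /* Stand-ins for driving the physical lines; here they just log the step. */
    static void bus_request(void)                 { puts("bus request"); }
    static void bus_wait_grant(void)              { puts("bus grant received"); }
    static void drive_address(uint32_t a)         { printf("address bus <- 0x%08x\n", a); }
    static void drive_data(uint32_t d)            { printf("data bus    <- 0x%08x\n", d); }
    static void drive_command(enum bus_command c) { printf("control bus <- command %d\n", c); }
    static void bus_wait_ack(void)                { puts("transfer ACK received"); }
    static void bus_release(void)                 { puts("bus released"); }

    /* Order of events for a single memory write over the shared bus. */
    static void memory_write(uint32_t addr, uint32_t data)
    {
        bus_request();            /* ask for control of the bus          */
        bus_wait_grant();         /* wait for the bus grant signal       */
        drive_address(addr);      /* location to be written              */
        drive_data(data);         /* value to be written                 */
        drive_command(MEM_WRITE); /* memory write command on control bus */
        bus_wait_ack();           /* acknowledgment: data transferred    */
        bus_release();            /* give the bus back                   */
    }

    int main(void)
    {
        memory_write(0x00001000u, 0xDEADBEEFu);
        return 0;
    }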

Types of Buses

We broadly distinguish between internal buses and external buses on the basis of their location relative to the computer system, and between parallel buses and serial buses on the basis of the transmission technology used.

Internal Bus vs. External Bus


Internal bus structures allow components inside the computer to
communicate with each other. The system-level bus is an internal bus
that enables the CPU to communicate with the main memory and I/O
devices. External buses enable the computer to communicate with
external devices. The most well-known example is the universal serial
bus (USB) which allows users to plug external devices into the computer
system.

Parallel Bus vs. Serial Bus

A parallel bus can send several data streams at once along parallel lines.
The obvious drawback is that the buses need to accommodate more and more lines in order to transmit more data, leading to wider and wider buses. Serial buses were designed to address this problem.
As the name implies, they send multiple data packets in series over one
line instead of using multiple lines. While this may seem slower at first
glance, serial buses operate at significantly higher clock speeds
allowing for more data transfer over one line. Because they require
fewer lines, serial buses are generally cheaper to implement.
While traditional system buses transmit data in parallel, more recent
technologies such as PCI Express tend to rely on serial buses.

Peripheral Bus and Expansion Bus


The communication pathway of a system bus can be extended to
peripheral I/O devices by using an expansion bus or peripheral bus. The
most commonly used type of bus that allows us to plug external devices
into the system today is the universal serial bus (USB). Usually, transfer
via an expansion bus or peripheral bus is much slower than on an
internal system bus. For example, the memory bandwidth of the most
recent AMD processor as of this writing is 204.8 GB/s, whereas USBs
usually operate in the range of hundreds of MB/s. Accordingly, modern
systems decouple the transfer to and from external devices from the
memory operations happening inside the system.

Point-to-Point Interconnect

As the speed of processors and, thus, of transfer increased, the shared bus architecture became increasingly untenable due to the overhead required for synchronizing multiple functions inside the bus.

Today's systems largely rely on direct pairwise connections between components. These connections operate similarly to TCP/IP network communications, consisting of several layers and transporting data in packets. For example, Quick Path Interconnect (QPI), a standard in point-to-point interconnection developed by Intel, is a four-layer data transfer architecture:

o Physical Layer, which consists of the physical hardware required to carry the signals, such as cables and circuits.
o Link Layer that manages the data flow.
o Routing Layer, which routes the data packets from start to
destination.
o Protocol Layer that specifies the rules and protocols for data
exchange.

COMPUTER-SYSTEM STRUCTURES
 Computer-System Operation
 I/O Structure
 Storage Structure
 Storage Hierarchy
 Hardware Protection
 General System Architecture
Computer-System Operation

 I/O devices and the CPU can operate concurrently.


 Each device controller is in charge of a particular device type.
 Each device controller has a local buffer.
 CPU moves data from/to main memory to/from the local buffers.
 I/O is from the device to local buffer of controller.
 Device controller informs CPU that it has finished its operation by causing
an interrupt.

The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may trigger an interrupt by executing a special operation called a system call (also called a monitor call).

Common Functions of Interrupts


An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines (a minimal sketch of such a vector follows below).
The interrupt architecture must save the address of the interrupted instruction. Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt.
A trap is a software-generated interrupt caused either by an error or by a user request.
An operating system is interrupt-driven.
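A minimal sketch of an interrupt vector as an array of service-routine addresses. The handler names and table size are assumptions made for illustration, since real vector layouts are fixed by the processor architecture.

    #include <stdio.h>

    /* The interrupt vector: entry i holds the address of the service routine for interrupt i. */
    typedef void (*isr_t)(void);

    /* Hypothetical service routines (stubs for illustration). */
    static void timer_isr(void)    { puts("timer serviced"); }
    static void keyboard_isr(void) { puts("keyboard serviced"); }
    static void disk_isr(void)     { puts("disk serviced"); }

    static isr_t interrupt_vector[256] = {
        [0] = timer_isr,
        [1] = keyboard_isr,
        [2] = disk_isr,
        /* remaining entries stay 0 (no handler installed) */
    };

    /* After the hardware saves the address of the interrupted instruction,
     * control is transferred through the vector to the matching routine. */
    static void dispatch_interrupt(int number)
    {
        if (number >= 0 && number < 256 && interrupt_vector[number] != 0)
            interrupt_vector[number]();   /* jump to the service routine */
    }

    int main(void)
    {
        dispatch_interrupt(1);            /* e.g. a keyboard interrupt */
        return 0;
    }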

Interrupts
Types

1. Hardware - Asynchronous
Device informs CPU that something has happened e.g. a key has been
pressed on the keyboard.
2. Hardware - Synchronous
CPU has tried to do something that has caused the interrupt, e.g. tried to read from an invalid memory location (not always a problem; it may mean that the page is on disk and needs to be fetched). Often called an Exception or Trap.
3. Software
CPU asked for the interrupt to happen. e.g. to perform an OS Call. Often
called a Trap.

Hardware Interrupts

 I/O devices use Asynchronous Hardware Interrupts (i.e. caused by the outside world and may happen at any time).
 Transfers control to the interrupt service routine, through the interrupt
vector, which contains the addresses of all the service routines.
 CPU must save the address of the interrupted instruction.

Interrupt Handling

 Interrupt handling is a very important part of the OS.


 The operating system must preserve the state of the CPU by storing all registers and the program counter.
 Determine which type of interrupt has occurred:
o polling - ask each device if it caused the interrupt.
o vectored interrupt system - device identifies itself when it causes the interrupt.
 Separate segments of code determine what action should be taken for each type of interrupt.

I/O Structure
A general-purpose computer system consists of CPUs and multiple device
controllers that are connected through a common bus. Each device controller
is in charge of a specific type of device. Depending on the controller, more
than one device may be attached. For instance, seven or more devices can be
attached to the small computer-systems interface (SCSI) controller.
A device controller maintains some local buffer storage and a set of special-
purpose registers. The device controller is responsible for moving the data
between the peripheral devices that it controls and its local buffer storage.
Typically, operating systems have a device driver for each device controller.
This device driver understands the device controller and provides the rest of
the operating system with a uniform interface to the device.
To start an I/O operation, the device driver loads the appropriate registers
within the device controller. The device controller, in turn, examines the
contents of these registers to determine what action to take (such as “read a
character from the keyboard”). The controller starts the transfer of data from
the device to its local buffer. Once the transfer of data is complete, the device
controller informs the device driver via an interrupt that it has finished its
operation. The device driver then returns control to the operating system,
possibly returning the data or a pointer to the data if the operation was a read.
For other operations, the device driver returns status information.
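The start of an I/O operation described above can be sketched roughly as follows. The register layout (command, address, count, status) and the CMD_READ command code are assumptions made for the example, not any real controller's interface.

    #include <stdint.h>

    /* Assumed layout of a device controller's special-purpose registers. */
    struct controller_regs {
        volatile uint32_t command;   /* what action to take, e.g. "read a character"    */
        volatile uint32_t address;   /* where the driver wants the data delivered       */
        volatile uint32_t count;     /* how many bytes to move                          */
        volatile uint32_t status;    /* set by the controller when the transfer is done */
    };

    #define CMD_READ 0x01            /* illustrative command code */

    /* Device driver side: load the controller's registers to start the operation.
     * The controller then moves data from the device into its local buffer and
     * raises an interrupt when it has finished. */
    void start_read(struct controller_regs *regs, uint32_t dest_addr, uint32_t nbytes)
    {
        regs->address = dest_addr;
        regs->count   = nbytes;
        regs->command = CMD_READ;    /* writing the command register kicks off the I/O */
    }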
System call – a request to the operating system to allow the user to wait for I/O completion.
Device-status table – contains an entry for each I/O device indicating its type, address, and state (a possible layout is sketched below).

The operating system indexes into the I/O device table to determine the device's status and modifies the table entry to record the pending interrupt.
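One plausible shape for a device-status table entry, sketched as a C struct; the field names and table size are assumptions made for illustration.

    /* One entry per I/O device: type, address, and current state. */
    enum dev_state { IDLE, BUSY };

    struct device_status_entry {
        const char    *type;        /* e.g. "disk", "keyboard", "printer"       */
        unsigned int   address;     /* device/controller address                */
        enum dev_state state;       /* IDLE or BUSY                             */
        void          *waiting;     /* requests queued while the device is busy */
    };

    /* The OS indexes into this table to check a device's status and to record
     * the pending request or interrupt for that device. */
    struct device_status_entry device_table[32];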

Storage Structure
The CPU can load instructions only from memory, so any programs to run must be stored there. General-purpose computers run most of their programs from rewriteable memory, called main memory (also called RAM). Main memory is commonly implemented in a semiconductor technology called DRAM.

All forms of memory provide an array of words. Each word has its own address.
Interaction is achieved through a sequence of load or store instructions to specific
memory addresses. The load instruction moves a word from main memory to an
internal register within the CPU, whereas the store instruction moves the content
of a register to main memory.

Ideally, we want the programs and data to reside in main memory permanently.
This arrangement usually is not possible for the following two reasons:

1) Main memory is usually too small to store all needed programs and data
permanently.

2) Main memory is a volatile storage device that loses its contents when power is
turned off or otherwise lost.

Thus, most computer systems provide secondary storage as an extension of main memory. The main requirement for secondary storage is that it be able to hold large quantities of data permanently. The most common secondary-storage device is a magnetic disk, which provides storage for both programs and data.

• Main memory – the only large storage media that the CPU can access directly

• Secondary storage – extension of main memory that provides large nonvolatile storage capacity

• Magnetic disks – rigid metal or glass platters covered with magnetic recording material


I/O Calls
Blocking I/O

 User program requests I/O, control returns to user program only upon I/O
completion.
o CPU may be allocated to another process.
Non-Blocking I/O

 After I/O starts, control returns to user program without waiting for I/O
completion.
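A brief POSIX-flavoured sketch of the difference: a plain read() blocks until data arrives, while a descriptor opened with O_NONBLOCK makes read() return immediately (errno set to EAGAIN) when no data is ready. The device path is illustrative.

    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[64];

        /* Blocking I/O: read() returns only when data is available (or on error/EOF). */
        int fd = open("/dev/ttyS0", O_RDONLY);           /* illustrative device path */
        if (fd >= 0) {
            ssize_t n = read(fd, buf, sizeof buf);        /* caller waits here        */
            printf("blocking read returned %zd bytes\n", n);
            close(fd);
        }

        /* Non-blocking I/O: read() returns immediately even if no data is ready. */
        int nfd = open("/dev/ttyS0", O_RDONLY | O_NONBLOCK);
        if (nfd >= 0) {
            ssize_t n = read(nfd, buf, sizeof buf);
            if (n < 0 && errno == EAGAIN)
                printf("no data yet; the program can do other work\n");
            close(nfd);
        }
        return 0;
    }

While the blocking call waits, the CPU may be allocated to another process, as noted above.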

Storage Hierarchy

 Storage systems can be organized in a hierarchy:


o speed
o cost
o volatility
 Most programs make accesses to memory which are localised
o in time
i.e. the program spends a lot of time executing short sections of
code.
o in space
i.e. the program reads and writes to certain memory locations a lot;
these locations tend to be close together.
 Caching - copying information into faster storage system; main memory
can be viewed as a fast cache for secondary memory.
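A small example of both kinds of locality: the loop below repeatedly executes the same short stretch of code (locality in time) while stepping through adjacent array elements (locality in space), which is exactly the access pattern a cache exploits.

    #define N 1024

    /* Summing an array: the loop body is executed over and over (locality in time),
     * and a[i] marches through neighbouring memory locations (locality in space). */
    long sum_array(const int a[N])
    {
        long total = 0;
        for (int i = 0; i < N; i++)
            total += a[i];
        return total;
    }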

Storage Structure
Magnetic disks – rigid metal or glass platters covered with magnetic recording material.
✦ The disk surface is logically divided into tracks, which are subdivided into sectors.
✦ The disk controller determines the logical interaction between the device and the computer.
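As a concrete illustration of the track/sector layout, a simple mapping from a (track, sector) pair to a single logical block number is sketched below; the geometry constant is illustrative, not tied to any particular disk.

    /* Illustrative geometry: the disk surface is divided into tracks,
     * each subdivided into a fixed number of sectors. */
    #define SECTORS_PER_TRACK 63

    /* Map a (track, sector) pair to a single logical block number,
     * the way a disk controller presents the surface to the computer. */
    unsigned long logical_block(unsigned long track, unsigned long sector)
    {
        return track * SECTORS_PER_TRACK + sector;   /* sectors numbered from 0 here */
    }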
I/O Protection
All I/O instructions are privileged instructions. Must ensure that a user program can never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).
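A rough sketch of the check implied here: before an I/O instruction executes, the hardware consults the mode bit, and in user mode the attempt traps to the operating system. All names are invented for the example.

    #include <stdio.h>

    enum cpu_mode { MONITOR_MODE, USER_MODE };

    /* Stand-in for trapping into the operating system. */
    static void trap_illegal_instruction(void) { puts("trap: privileged instruction in user mode"); }

    /* All I/O instructions are privileged: the hardware checks the mode bit first. */
    static void execute_io_instruction(enum cpu_mode mode)
    {
        if (mode != MONITOR_MODE) {
            trap_illegal_instruction();   /* a user program may not touch I/O directly */
            return;
        }
        /* ... carry out the I/O instruction on behalf of the operating system ... */
    }

    int main(void)
    {
        execute_io_instruction(USER_MODE);   /* traps */
        return 0;
    }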

Memory Protection
Must provide memory protection at least for the interrupt vector and the interrupt service routines. In order to have memory protection, add two registers that determine the range of legal addresses a program may access:

✦ Base register – holds the smallest legal physical memory address.

✦ Limit register – contains the size of the range.

Memory outside the defined range is protected.
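A minimal sketch of the base/limit check described above: every user-mode address is compared against the two registers, and anything outside the range traps to the operating system. The register values are illustrative.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical protection registers; loading them is a privileged operation. */
    static uint32_t base_register  = 0x3000;   /* smallest legal physical address (illustrative) */
    static uint32_t limit_register = 0x1000;   /* size of the legal range (illustrative)         */

    static void trap_addressing_error(void) { puts("trap: address outside the legal range"); }

    /* Check performed (by hardware) for every memory reference made in user mode. */
    static int address_is_legal(uint32_t addr)
    {
        if (addr >= base_register && addr < base_register + limit_register)
            return 1;                     /* within [base, base + limit): allowed */
        trap_addressing_error();          /* outside the range: protected         */
        return 0;
    }

    int main(void)
    {
        printf("0x3500 legal? %d\n", address_is_legal(0x3500));   /* inside  */
        printf("0x5000 legal? %d\n", address_is_legal(0x5000));   /* outside */
        return 0;
    }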

Hardware Protection
When executing in monitor mode, the operating system has unrestricted access to both monitor and user's memory. The load instructions for the base and limit registers are privileged instructions.

CPU Protection
Timer – interrupts the computer after a specified period to ensure the operating system maintains control.

✦ The timer is decremented every clock tick.

✦ When the timer reaches the value 0, an interrupt occurs.

The timer is commonly used to implement time sharing. The timer is also used to compute the current time. Load-timer is a privileged instruction.
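A small sketch of how the timer enforces time sharing: the counter is loaded (a privileged operation), decremented on every clock tick, and an interrupt returns control to the operating system when it reaches zero. The function names are illustrative.

    #include <stdio.h>

    static unsigned int timer;                   /* decremented on every clock tick */

    static void raise_timer_interrupt(void) { puts("timer interrupt: OS regains control"); }

    /* Privileged: only the operating system may load the timer. */
    static void load_timer(unsigned int ticks) { timer = ticks; }

    /* Invoked by the hardware clock on each tick. */
    static void clock_tick(void)
    {
        if (timer > 0 && --timer == 0)
            raise_timer_interrupt();             /* time slice expired */
    }

    int main(void)
    {
        load_timer(3);                           /* illustrative 3-tick time slice */
        for (int i = 0; i < 5; i++)
            clock_tick();                        /* interrupt fires on the third tick */
        return 0;
    }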
