
Programming Basic Computer

In the context of computer architecture and organization, "programming a basic computer" refers to writing a sequence of instructions (or "code") in a language that the computer can understand and execute, allowing it to perform specific tasks, often at a low level using basic operations and registers.
Here's a more detailed explanation:
Key Concepts:
 Computer Architecture:
This is the blueprint or design of a computer system, including how its components are
arranged and interact.
 Computer Organization:
This focuses on how the physical components of the computer, like the CPU, memory, and
I/O devices, are connected and structured to implement the architecture.
 Programming:
This is the process of creating instructions (code) that a computer can follow to solve
problems or perform tasks.
 Basic Computer:
A simplified model of a computer system, often used to understand the fundamental
principles of computer architecture and organization.
 Instructions:
A command that tells the computer to perform an operation, such as adding two numbers,
moving data, or jumping to another part of the code.
 Instruction Set Architecture (ISA):
A set of instructions that a processor understands and can execute.
 Registers:
Small, fast storage locations within the CPU used to hold data or instructions during processing.
 ALU:
Arithmetic Logic Unit, a key part of the processor that performs arithmetic and logical
operations.
 Control Unit (CU):
A part of the processor that directs the operation of the computer, fetching instructions,
decoding them and coordinating their execution.
Basic Computer Programming:
 Assembly Language:
A low-level programming language that uses mnemonics (short codes) to represent
instructions, making it closer to machine language.
 Machine Language:
The lowest level of programming, consisting of binary code (0s and 1s) that the computer
directly executes.
 Instruction Format:
The structure of a machine language instruction, including fields for the operation code
(what to do) and operands (what to operate on).
 Addressing Modes:
Ways of specifying the location of data or instructions (e.g., direct addressing, indirect
addressing).
 Registers:
Small memory locations within the CPU used to store data or instructions.
Examples (a short C sketch of these operations follows the list):
 Load: Retrieve data from memory into a register.
 Add: Add two numbers together.
 Store: Save data from a register into memory.
 Branch: Change the flow of execution to another part of the code.
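The short C sketch below simulates these four operations on a toy machine. The 16-bit instruction format (4-bit opcode, 12-bit address), the opcode values, and the memory size are illustrative assumptions, not a description of any real instruction set.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical 16-bit instruction format: 4-bit opcode, 12-bit address. */
enum { OP_LOAD = 0x1, OP_ADD = 0x2, OP_STORE = 0x3, OP_BRZ = 0x4, OP_HALT = 0xF };

int main(void) {
    uint16_t mem[4096] = {0};            /* word-addressable memory            */
    uint16_t ac = 0;                     /* accumulator register               */
    uint16_t pc = 0;                     /* program counter                    */

    /* Tiny program: AC = mem[100] + mem[101]; mem[102] = AC; halt. */
    mem[0] = (OP_LOAD  << 12) | 100;
    mem[1] = (OP_ADD   << 12) | 101;
    mem[2] = (OP_STORE << 12) | 102;
    mem[3] = (uint16_t)(OP_HALT << 12);
    mem[100] = 7; mem[101] = 5;

    for (;;) {
        uint16_t ir   = mem[pc++];       /* fetch, then increment PC           */
        uint16_t op   = ir >> 12;        /* decode the opcode field            */
        uint16_t addr = ir & 0x0FFF;     /* decode the address (operand) field */
        if      (op == OP_LOAD)  ac = mem[addr];                   /* Load     */
        else if (op == OP_ADD)   ac = (uint16_t)(ac + mem[addr]);  /* Add      */
        else if (op == OP_STORE) mem[addr] = ac;                   /* Store    */
        else if (op == OP_BRZ)   { if (ac == 0) pc = addr; }       /* Branch   */
        else break;                      /* HALT or unknown opcode             */
    }
    printf("mem[102] = %u\n", (unsigned)mem[102]);   /* prints 12 */
    return 0;
}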
Why Learn Basic Computer Programming?
 Understanding Hardware:
It helps understand how hardware components work together and how software interacts
with them.
 Foundation for Advanced Programming:
It builds a strong foundation for learning more complex programming languages and
computer systems.
 Problem-Solving:
It develops problem-solving skills and the ability to break down complex tasks into smaller,
manageable steps.
 System Optimization:
It provides insights for optimizing programs for better performance on specific computer
architectures.

What is Machine Language


Machine language is a low-level programming language that is
understood by computers. Machine language is made up of binary
bits, 0 and 1. Machine language is also known as machine code
or object code. Because machine language consists of only 0s and 1s,
it is difficult to understand in raw form and cannot easily be read
by humans. The CPU processes this machine code as input. In this
article, we are going to learn what machine language is, the
features of machine language, the advantages and disadvantages of
machine language, and why it is difficult for humans to understand
machine language (a low-level language).
What is Machine Language?
Machine language is a low-level programming language that
consists of binary bits i.e. only 0 and 1. The data present in binary
form is the reason for its fast execution. In Machine language,
instructions are directly executed by the CPU. Machine language
is also known as object code or machine code. Machine language
is binary language.

Machine Language
Need for Machine Language
As humans, we write code in high-level languages. The
programming languages we use to write code, such as C,
C++, and Java, are high-level languages. A high-level language is not
understood by the computer directly, so it is converted into low-level
machine language so that the computer can understand the meaning
of the code and execute it. Computers compile the code written by us,
translate it into machine code, and then execute it. Computers are
only able to understand machine language.
Features of Machine Language
Below are some features of machine language.
 Machine language is a low-level language.
 Machine language consists of only 0 and 1 bits.
 Machine languages are platform dependent.
 It is nearly impossible for humans to learn machine language
because it requires a lot of memorization.
 Machine language is also used to create and construct drivers.
Understanding the Complexity of Machine
Language
In machine language, every character, integer, and special
symbol is written in the form of 0s and 1s. To understand machine
language, let's take an example of a machine language
instruction for a simple addition operation: 01100110
00001010. This binary sequence represents an instruction that
tells the computer to add two numbers together.
Meaning of Binary bits in Machine Language:
A sequence of bits is used to give commands in machine
languages.
 The 1s (ones) represent the true or on states.
 On the other hand, the 0s (zero) represent the off or false
states.
 That's why it is practically impossible for humans to remember
the binary codes of machine languages and, as a result, to write
programs directly in them.
Machine Language Instruction Components
A machine language instruction consists of two components:
1. Operand(s)
The operand(s) represent the data that the operation must be
performed on. This data can take various forms, depending on the
processor's architecture: it can be a register containing a
value, a memory address pointing to a location in memory where
the data is stored, or a constant value embedded within the
instruction itself.
2. Opcode
The opcode (operation code) represents the operation that the
processor must perform. It indicates, for example, whether the
instruction is an arithmetic operation such as addition, subtraction,
multiplication, or division. The sketch below shows how these two
fields can be separated from an instruction word.
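As an illustration of splitting an instruction word into its opcode and operand fields, the C sketch below decodes the 16-bit pattern 01100110 00001010 mentioned earlier, assuming a hypothetical format in which the high byte is the opcode and the low byte is an immediate operand; the actual field widths and meanings depend on the processor's architecture.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical format: high byte = opcode, low byte = immediate operand. */
    uint16_t instruction = 0x660A;                      /* 01100110 00001010  */
    uint8_t  opcode  = (uint8_t)(instruction >> 8);     /* operation field    */
    uint8_t  operand = (uint8_t)(instruction & 0xFF);   /* data field         */
    printf("opcode = 0x%02X, operand = %u\n", (unsigned)opcode, (unsigned)operand);
    return 0;
}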
Advantages of Machine Language
Some advantages of machine language are listed below:
 Machine languages are faster in execution because they are in
binary form.
 Machine language does not need to be translated, because it
is already present in simple binary form.
 The CPU directly executes machine language.
 The evolution of computer systems and operating systems
over time has been built on machine language.
 Machine languages are used in developing high-grade
computer systems.
Disadvantages of Machine Language
Some disadvantages of machine language are listed below:
 Machine language is complex to understand and memorize.
 Writing code in machine language is time-consuming.
 It is very difficult to resolve bugs and errors present in the
code and programs.
 Code written in machine language is more prone to error.
 Machine language is not easy to modify.
 Machine languages are platform dependent, so code written for
one processor will not run on another.

What is Assembly Language?


Assembly language is a programming language that allows programmers to write instructions
that a computer's CPU can execute directly. It's a low-level language that's closer to machine
code than higher-level languages like Java or Python.
How does it work?
 Assembly language uses mnemonic codes, abbreviations, and short-hand to make it easier for
humans to read.
 Each instruction in an assembly language program specifies an operation and the operands to
perform it on.
 Assembly language programs are written in a text editor and then assembled into a file that
can run on a computer.
When is it used?
 Assembly language is used in applications that require precise control over hardware, such as
operating systems, firmware, and embedded systems.
 It's also used in malware analysis and security exploitation.
Why is it platform-dependent?
 Each processor architecture has its own assembly language instructions and conventions.

Assembler in Computer
An assembler is a computer program that translates assembly language
into machine code. This allows a computer's central processing unit (CPU)
to execute instructions written in assembly language.
How does an assembler work?
 An assembler converts human-readable instructions into binary code.
 It translates combinations of mnemonics and syntax into numerical
equivalents.
 It calculates constant expressions and resolves symbolic names for
memory locations.
 It assembles and converts assembly language source code into object
code (a minimal sketch of this mapping follows the list).
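A minimal sketch of the mnemonic-to-opcode translation step is shown below. The mnemonics, opcode values, and 4-bit/12-bit instruction layout are invented for illustration; a real assembler also handles labels, expressions, directives, and object-file output.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical opcode table: one entry per mnemonic. */
struct op { const char *mnemonic; uint8_t opcode; };
static const struct op optable[] = {
    {"LOAD", 0x1}, {"ADD", 0x2}, {"STORE", 0x3}, {"BRZ", 0x4}, {"HALT", 0xF}
};

/* Translate one "MNEMONIC operand" pair into a 16-bit machine word. */
static int assemble(const char *mnemonic, unsigned operand, uint16_t *out) {
    for (size_t i = 0; i < sizeof optable / sizeof optable[0]; i++) {
        if (strcmp(mnemonic, optable[i].mnemonic) == 0) {
            *out = (uint16_t)((optable[i].opcode << 12) | (operand & 0x0FFF));
            return 0;                    /* success */
        }
    }
    return -1;                           /* unknown mnemonic */
}

int main(void) {
    uint16_t word;
    if (assemble("ADD", 101, &word) == 0)
        printf("ADD 101 -> 0x%04X\n", (unsigned)word);   /* 0x2065 */
    return 0;
}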
Why use an assembler?
 Assemblers are used for low-level programming.
 They are specific to a particular computer architecture.
 They give programmers direct control over the CPU and memory.
 They allow programmers to optimize how the machine performs tasks.
Types of assemblers
 Single-pass assemblers complete their work in one scan.
 Multiple-pass assemblers complete their work in multiple scans.
Assembler vs. compiler
Assemblers are similar to compilers in that both translate source code
into machine code. However, assemblers are simpler because they only
convert low-level assembly code, which maps almost one-to-one onto
machine instructions.
Introduction of ALU and Data Path
Representing and storing numbers were the basic operations of
early computers. The real advance came when computation, that is,
manipulating numbers by adding and multiplying them, came into the
picture. These operations are handled by the computer's arithmetic
logic unit (ALU). The ALU is the mathematical brain of a computer.
One of the first complete ALUs on a single chip was the 74181,
implemented as part of the 7400 series TTL (Transistor-Transistor Logic)
integrated circuits and introduced by Texas Instruments around 1970.
What is ALU?
The ALU is a digital circuit that performs arithmetic and logic
operations. It is a fundamental building block of the central
processing unit of a computer. A modern central processing
unit (CPU) has a very powerful ALU that is complex in design. In
addition to the ALU, a modern CPU contains a control unit and a set of
registers. Most operations are performed by one or more
ALUs, which load data from input registers. Registers are a
small amount of storage available to the CPU, and they can
be accessed very quickly. The control unit tells the ALU what operation to
perform on the available data. After the calculation/manipulation, the
ALU stores the output in an output register.
The CPU can be divided into two sections: the data section and
the control section. The data section is also known as the data
path.
An Arithmetic Logic Unit (ALU) is a key component of the CPU
responsible for performing arithmetic and logical operations. The
collection of functional units such as ALUs, registers, and the buses that
move data within the processor is known as the data path; together
they execute instructions and manipulate data during processing
tasks. A minimal sketch of such an ALU is given below.
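The C sketch below models an ALU as a pure function: given an operation code and two operands, it produces a result and sets zero and carry flags. The operation set, the 8-bit width, and the flag layout are assumptions made for illustration only.

#include <stdio.h>
#include <stdint.h>

enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_XOR };

/* Perform one ALU operation on 8-bit operands and report status flags. */
static uint8_t alu(enum alu_op op, uint8_t a, uint8_t b, int *zero, int *carry) {
    uint16_t wide = 0;                   /* extra bit captures the carry-out  */
    switch (op) {
        case ALU_ADD: wide = (uint16_t)(a + b); break;
        case ALU_SUB: wide = (uint16_t)(a - b); break;
        case ALU_AND: wide = (uint16_t)(a & b); break;
        case ALU_OR:  wide = (uint16_t)(a | b); break;
        case ALU_XOR: wide = (uint16_t)(a ^ b); break;
    }
    uint8_t result = (uint8_t)wide;
    *zero  = (result == 0);              /* zero flag                          */
    *carry = (wide > 0xFF);              /* carry flag (meaningful for ADD)    */
    return result;
}

int main(void) {
    int z, c;
    uint8_t r = alu(ALU_ADD, 200, 100, &z, &c);
    printf("result=%d zero=%d carry=%d\n", r, z, c);   /* result=44 zero=0 carry=1 */
    return 0;
}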
BUS
In early computers, a bus was a set of parallel electrical wires with multiple
hardware connections. In general, a bus is a communication
system that transfers data between components inside a
computer, or between computers. It includes hardware
components such as wires and optical fibers, as well as software, including
communication protocols. The registers, ALU, and the
interconnecting bus are collectively referred to as the data path.
Types of the bus
There are mainly three types of buses:
1. Address bus: Transfers memory addresses from the processor
to components like storage and input/output devices. It is one-
way communication.
2. Data bus: Carries data between the processor and other
components. The data bus is bidirectional.
3. Control bus: Carries control signals from the processor to
other components. The control bus also carries the clock's
pulses. The control bus is unidirectional.
A bus can be dedicated, i.e., used for a single purpose,
or multiplexed, i.e., used for multiple purposes.
Depending on which kinds of buses are used, different types of
bus organizations arise.
Registers
In computer architecture, registers are very fast computer
memory used to execute programs and operations
efficiently. Registers also act as gates, sending
signals to various components to carry out small tasks. Register
signals are directed by the control unit, which also operates the
registers.
The following are five registers used for instruction and data handling
(a register-transfer sketch in C follows the list):
1. Program Counter
A program counter (PC) is a CPU register in the computer
processor which holds the address of the next instruction to be
executed from memory. As each instruction gets fetched, the
program counter increments its stored value. It is a digital
counter needed for faster execution of tasks as well as for
tracking the current execution point.
2. Instruction Register
In computing, an instruction register (IR) is the part of a CPU's
control unit that holds the instruction currently being executed
or decoded. The instruction register holds the
instruction and provides it to the instruction decoder circuit.
3. Memory Address Register
The Memory Address Register (MAR) is the CPU register that
stores either the memory address from which data will be
fetched by the CPU or the address to which data will be sent
and stored. It is a temporary storage component in the
CPU (central processing unit) that holds the
address (location) of the data sent by the memory unit until the
instruction for that particular data is executed.
4. Memory Data Register
The memory data register (MDR) is the register in a computer's
processor, or central processing unit (CPU), that stores the data
being transferred to and from the immediate access storage.
The memory data register (MDR) is also known as the memory buffer
register (MBR).
5. General Purpose Register
General-purpose registers are used to store temporary data
within the microprocessor. They are multipurpose registers and
can be used either by a programmer or by a user.
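The C sketch below models these registers and the register transfers that make up one instruction fetch (MAR <- PC, MDR <- memory[MAR], IR <- MDR, PC <- PC + 1). It assumes a toy word-addressed 16-bit machine; in a real CPU these transfers happen in hardware under control-unit direction, not in software.

#include <stdio.h>
#include <stdint.h>

struct cpu {
    uint16_t pc;        /* program counter: address of next instruction     */
    uint16_t ir;        /* instruction register: instruction being decoded  */
    uint16_t mar;       /* memory address register                          */
    uint16_t mdr;       /* memory data register (memory buffer register)    */
    uint16_t gpr[4];    /* a few general-purpose registers                  */
};

/* One instruction fetch expressed as register transfers. */
static void fetch(struct cpu *c, const uint16_t *memory) {
    c->mar = c->pc;                  /* MAR <- PC          */
    c->mdr = memory[c->mar];         /* MDR <- memory[MAR] */
    c->ir  = c->mdr;                 /* IR  <- MDR         */
    c->pc  = (uint16_t)(c->pc + 1);  /* PC  <- PC + 1      */
}

int main(void) {
    uint16_t memory[8] = { 0x1064, 0x2065, 0x3066, 0xF000 };  /* sample code words */
    struct cpu c = {0};
    fetch(&c, memory);
    printf("IR=0x%04X PC=%u\n", (unsigned)c.ir, (unsigned)c.pc);  /* IR=0x1064 PC=1 */
    return 0;
}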
What is Data Path?
Suppose that the CPU needs to carry out a data processing
action, such as copying data from memory to a register or vice
versa, moving register content from one register to another, or
adding two numbers in the ALU. Whenever such a data
processing action takes place in the CPU, the data involved in
that operation follows a particular path, the data path.
Data paths are made up of various functional components, such
as multipliers or arithmetic logic units. A data path is required to
perform data processing operations.
One Bus Organization

In one bus organization, a single bus is used for multiple
purposes. A set of general-purpose registers, the program counter,
the instruction register, the memory address register (MAR), and the memory
data register (MDR) are connected to the single bus. Memory
read/write can be done with the MAR and MDR. The program
counter points to the memory location from which the next
instruction is to be fetched. The instruction register holds a copy
of the current instruction. In one bus organization, only one
operand can be read from the bus at a time.
As a result, if two operands are required for an operation, the
read operation needs to be carried out twice, which makes the
process longer. One of the advantages of one bus organization is
that it is one of the simplest and cheapest to implement. At the same
time, a disadvantage is that the single bus is shared by all
general-purpose registers, the program counter, the instruction
register, MAR, and MDR, making each and every operation
sequential. This architecture is rarely recommended nowadays.
Two Bus Organization
To overcome the disadvantage of one bus organization, another
architecture was developed, known as two bus organization. In
two bus organization, there are two buses. The general-purpose
registers can read from and write to both buses. In this case, two
operands can be fetched at the same time because of the two
buses. One bus fetches an operand for the ALU and the other
fetches an operand for a register. When both buses are busy fetching
operands, the ALU output can be stored in a temporary register, and
when a bus is free, that output can be placed on the bus.
There are two versions of two bus organization, i.e., in-bus and
out-bus. From the in-bus the general-purpose registers can read data,
and to the out-bus the general-purpose registers can write data.
Here the buses become dedicated.

Three Bus Organization

In three bus organization we have three buses: OUT bus1, OUT
bus2, and an IN bus. From the out buses we can get the operands,
which come from the general-purpose registers and are evaluated
in the ALU, and the output is dropped on the IN bus so it can be sent to
the respective registers. This implementation is a bit more complex but
faster, because two operands can flow into the ALU and out of the ALU
in parallel. It was developed to overcome the busy-waiting
problem of two bus organization. In this structure, after
execution the output can be dropped on the bus without waiting
because of the presence of an extra bus. The structure is given
below in the figure.

The main advantages of multiple bus organizations over the
single bus are as given below.
1. Increase in the size of the registers.
2. Reduction in the number of cycles for execution.
3. Increased speed of execution, i.e., faster execution.

Subroutine, Subroutine Nesting and Stack Memory

In computer programming, instructions that are frequently used
in a program are termed subroutines. This article provides a
detailed discussion of subroutines, subroutine nesting, and stack
memory, along with the advantages and disadvantages of these
topics. Let's begin with subroutines.

What is a Subroutine?
A set of instructions that is used repeatedly in a program can
be referred to as a subroutine. Only one copy of this set of
instructions is stored in memory. When the subroutine is required,
it can be called many times during the execution of a particular
program. A call-subroutine instruction calls the subroutine. Care
should be taken while returning from a subroutine, as a subroutine
can be called from different places in memory.
The content of the PC must be saved by the call-subroutine
instruction to make a correct return to the calling program.

Process of a subroutine in a program

The subroutine linkage method is the way in which computers call
and return from a subroutine. The simplest way of subroutine
linkage is saving the return address in a specific location, such as
a register, which can be called a link register.
Advantages of Subroutines

 Code reuse: Subroutines can be reused in multiple parts of a
program, which can save time and reduce the amount of code
that needs to be written.
 Modularity: Subroutines help to break complex programs into
smaller, more manageable parts, making them easier to
understand, maintain, and modify.
 Encapsulation: Subroutines provide a way to encapsulate
functionality, hiding the implementation details from other
parts of the program.

Disadvantages of Subroutines

 Overhead: Calling a subroutine can incur some overhead,
such as the time and memory required to push and pop data
on the stack.
 Complexity: Subroutine nesting can make programs more
complex and difficult to understand, particularly if the nesting
is deep or the control flow is complicated.
 Side Effects: Subroutines can have unintended side effects,
such as modifying global variables or changing the state of the
program, which can make debugging and testing more difficult.
What is Subroutine Nesting?
Subroutine nesting is a common programming practice in which
one subroutine calls another subroutine.

A Subroutine calling another subroutine

From the above figure, assume that when Subroutine 1 calls
Subroutine 2, the return address of Subroutine 2 should be saved
somewhere. If the link register stores the return address of
Subroutine 1, it will be destroyed/overwritten by the return
address of Subroutine 2. Since the last subroutine called is the first
one to be returned from (last-in, first-out order), a stack data
structure is the most efficient way to store the return addresses of
the subroutines.
The return address of the subroutine is stored in stack memory

What is Stack Memory?

A stack is a basic data structure that can be implemented
anywhere in memory. It can be used to store variables that
may be required later during program execution. In a stack,
the first data item put in will be the last to get out, so the last
data item added will be the first one to come out (last in, first out).

Stack memory having data A, B & C

So from the diagram above, first A is added, then B and C. While
removing, first C is removed, then B and A. The sketch below shows
how such a stack keeps the return addresses of nested subroutine calls.
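The following C sketch shows a return-address stack for nested subroutine calls in last-in, first-out order. The addresses and the fixed stack size are illustrative; a real machine maintains this stack in memory via a stack pointer register, and production code would also check for overflow and underflow.

#include <stdio.h>

#define STACK_SIZE 16

static unsigned stack[STACK_SIZE];      /* return-address stack              */
static int sp = 0;                      /* stack pointer: next free slot     */

static void call(unsigned return_address) { stack[sp++] = return_address; }  /* push on call  */
static unsigned ret(void)                 { return stack[--sp]; }            /* pop on return */

int main(void) {
    call(0x0100);   /* main program calls Subroutine 1: save its return address  */
    call(0x0200);   /* Subroutine 1 calls Subroutine 2: save its return address  */
    printf("return to 0x%04X\n", ret());   /* 0x0200: back into Subroutine 1     */
    printf("return to 0x%04X\n", ret());   /* 0x0100: back into the main program */
    return 0;
}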

Advantages of Subroutine Nesting and Stack Memory

 Flexibility: Subroutine nesting allows for the creation of
complex programs with many levels of abstraction, making it
easier to organize code and reuse functionality.
 Efficient use of memory: Stack memory is used to allocate
and deallocate local variables, allowing for efficient use of
memory resources.
 Error handling: Stack Memory can be used to keep track of
the state of the program, allowing for recovery from errors and
exceptions.

Disadvantages of Subroutine Nesting and Stack Memory
 Stack overflow: If too many subroutine calls are nested or if
the local variables are too large, the stack memory can
overflow, causing the program to crash.
 Security vulnerabilities: Stack-based buffer overflows can
be exploited by attackers to execute malicious code or crash
the program.
 Performance: The use of stack memory can impact program
performance, particularly if the program requires a large
amount of memory or if the stack needs to be frequently
accessed.

Micro Programmed Control


A microprogrammed control unit is one where the control signals
are stored in a memory, known as control memory, and are fetched
as part of microinstructions to direct the operation of the CPU.

Introduction of Control Unit and its Design





A Central Processing Unit is the most important component of
a computer system. A control unit is a part of the CPU. The control
unit controls the operations of all parts of the computer but it
does not carry out any data processing operations.
What is a Control Unit?
The Control Unit is the part of the computer’s central processing
unit (CPU), which directs the operation of the processor. It was
included as part of the Von Neumann Architecture by John von
Neumann. It is the responsibility of the control unit to tell
the computer’s memory, arithmetic/logic unit, and input and
output devices how to respond to the instructions that have been
sent to the processor. It fetches internal instructions of the
programs from the main memory to the processor instruction
register, and based on the contents of this register, the control unit
generates control signals that supervise the execution of these
instructions. A control unit works by receiving input information
which it converts into control signals, which are then sent to the
central processor. The computer’s processor then tells the
attached hardware what operations to perform. The functions that
a control unit performs are dependent on the type of CPU because
the architecture of the CPU varies from manufacturer to
manufacturer.
Examples of devices that require a CU are:
 Central Processing Units (CPUs)
 Graphics Processing Units (GPUs)

Functions of the Control Unit


 It coordinates the sequence of data movements into, out of,
and between a processor’s many sub-units.
 It interprets instructions.
 It controls data flow inside the processor.
 It receives external instructions or commands, which it
converts into sequences of control signals.
 It controls many execution units (e.g., the ALU, data buffers,
and registers) contained within a CPU.
 It also handles multiple tasks, such as fetching, decoding,
execution handling and storing results.
The control unit of a CPU fetches and executes instructions,
playing a critical role in system performance. Its design ensures
smooth operation of various components.
Types of Control Unit
There are two types of control units:
 Hardwired
 Micro programmable control unit.
Hardwired Control Unit
In the Hardwired control unit, the control signals that are
important for instruction execution control are generated by
specially designed hardware logical circuits, in which we can not
modify the signal generation method without physical change of
the circuit structure. The operation code of an instruction contains
the basic data for control signal generation. In the instruction
decoder, the operation code is decoded. The instruction decoder
constitutes a set of many decoders that decode different fields of
the instruction opcode.
As a result, a few output lines going out from the instruction
decoder obtain active signal values. These output lines are
connected to the inputs of the matrix that generates control
signals for execution units of the computer. This matrix
implements logical combinations of the decoded signals from the
instruction opcode with the outputs from the matrix that
generates signals representing consecutive control unit states
and with signals coming from the outside of the processor, e.g.
interrupt signals. The matrices are built in a similar way as
programmable logic arrays.

Control signals for an instruction execution have to be generated
not in a single time point but during the entire time interval that
corresponds to the instruction execution cycle. Following the
structure of this cycle, the suitable sequence of internal states is
organized in the control unit. A number of signals generated by
the control signal generator matrix are sent back to inputs of the
next control state generator matrix.
This matrix combines these signals with the timing signals, which
are generated by the timing unit based on the rectangular
patterns usually supplied by the quartz generator. When a new
instruction arrives at the control unit, the control unit is in the
initial state of new instruction fetching. Instruction decoding
allows the control unit to enter the first state relating to execution of
the new instruction, which lasts as long as the timing signals and
other input signals, such as flags and state information of the computer,
remain unaltered.
A change of any of the earlier mentioned signals stimulates a
change of the control unit state. This causes a new
respective input to be generated for the control signal generator
matrix. When an external signal appears (e.g., an interrupt), the
control unit enters the next control state, the state
concerned with the reaction to this external signal (e.g., interrupt
processing).
The values of flags and state variables of the computer are used
to select suitable states for the instruction execution cycle. The
last states in the cycle are control states that commence fetching
the next instruction of the program: sending the program counter
content to the main memory address buffer register and next,
reading the instruction word into the instruction register of
the computer. When the ongoing instruction is the stop instruction
that ends program execution, the control unit enters an operating
system state, in which it waits for the next user directive.
Micro Programmable control unit
The fundamental difference between these unit structures and
the structure of the hardwired control unit is the existence of the
control store that is used for storing words containing encoded
control signals mandatory for instruction execution.
In microprogrammed control units, subsequent instruction words
are fetched into the instruction register in a normal way.
However, the operation code of each instruction is not directly
decoded to enable immediate control signal generation but it
comprises the initial address of a microprogram contained in the
control store.
 With a single-level control store: In this case, the instruction
opcode from the instruction register is sent to the control store
address register. Based on this address, the first
microinstruction of the microprogram that interprets execution of
this instruction is read into the microinstruction register. This
microinstruction contains, in its operation part, encoded control
signals, normally as a few bit fields. The fields are decoded in a set
of microinstruction field decoders. The microinstruction also
contains the address of the next microinstruction of the given
instruction's microprogram and a control field used to control the
activities of the microinstruction address generator.

The last mentioned field decides the addressing
mode (addressing operation) to be applied to the address
embedded in the ongoing microinstruction. In microinstructions
with a conditional addressing mode, this address is refined
by using the processor condition flags that represent the status
of computations in the current program. The last
microinstruction in the microprogram of the given instruction is
the microinstruction that fetches the next instruction from the
main memory into the instruction register.
 With a two-level control store: In a control unit with
a two-level control store, besides the control memory for
microinstructions, a nano-instruction memory is included. In
such a control unit, microinstructions do not contain encoded
control signals. The operation part of a microinstruction
contains the address of a word in the nano-instruction
memory, which contains the encoded control signals. The nano-
instruction memory contains all combinations of control signals
that appear in the microprograms that interpret the complete
instruction set of a given computer, written once in the form of
nano-instructions.

In this way, unnecessary storing of the same operation parts of
microinstructions is avoided. In this case, the microinstruction word
can be much shorter than with a single-level control store. It
gives a much smaller size in bits of the microinstruction
memory and, as a result, a much smaller size of the entire
control memory. The microinstruction memory controls the
selection of consecutive microinstructions, while the control
signals themselves are generated on the basis of nano-instructions.
In nano-instructions, control signals are frequently encoded using
the one-bit-per-signal method, which eliminates decoding. A
bit-field sketch of a microinstruction word is given below.
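As a rough picture of "encoded control signals as a few bit fields", the C sketch below lays out a hypothetical microinstruction using bit fields. The field names, widths, and meanings are invented for illustration and do not correspond to any particular control store format.

#include <stdio.h>

/* Hypothetical microinstruction word: control-signal fields plus sequencing info. */
struct microinstruction {
    unsigned int alu_op      : 4;   /* encoded ALU operation                   */
    unsigned int mem_read    : 1;   /* assert the memory read control signal   */
    unsigned int mem_write   : 1;   /* assert the memory write control signal  */
    unsigned int reg_write   : 1;   /* write the result back to a register     */
    unsigned int cond_select : 3;   /* which status flag drives branching      */
    unsigned int next_addr   : 12;  /* address of the next microinstruction    */
};

int main(void) {
    struct microinstruction mi = { .alu_op = 2, .reg_write = 1, .next_addr = 0x045 };
    printf("alu_op=%u reg_write=%u next_addr=0x%03X\n",
           (unsigned)mi.alu_op, (unsigned)mi.reg_write, (unsigned)mi.next_addr);
    return 0;
}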
Advantages of a Well-Designed Control Unit
 Efficient instruction execution: A well-designed control unit
can execute instructions more efficiently by optimizing the
instruction pipeline and minimizing the number of clock cycles
required for each instruction.
 Improved performance: A well-designed control unit can
improve the performance of the CPU by increasing the clock
speed, reducing the latency, and improving the throughput.
 Support for complex instructions: A well-designed control
unit can support complex instructions that require multiple
operations, reducing the number of instructions required to
execute a program.
 Improved reliability: A well-designed control unit can
improve the reliability of the CPU by detecting and correcting
errors, such as memory errors and pipeline stalls.
 Lower power consumption: A well-designed control unit can
reduce power consumption by optimizing the use of resources,
such as registers and memory, and reducing the number of
clock cycles required for each instruction.
 Better branch prediction: A well-designed control unit can
improve branch prediction accuracy, reducing the number of
branch mispredictions and improving performance.
 Improved scalability: A well-designed control unit can
improve the scalability of the CPU, allowing it to handle larger
and more complex workloads.
 Better support for parallelism: A well-designed control unit
can better support parallelism, allowing the CPU to execute
multiple instructions simultaneously and improve overall
performance.
 Improved security: A well-designed control unit can improve
the security of the CPU by implementing security features such
as address space layout randomization and data execution
prevention.
 Lower cost: A well-designed control unit can reduce the cost
of the CPU by minimizing the number of components required
and improving manufacturing efficiency.
Disadvantages of a Poorly-Designed Control
Unit
 Reduced performance: A poorly-designed control unit can
reduce the performance of the CPU by introducing pipeline
stalls, increasing the latency, and reducing the throughput.
 Increased complexity: A poorly-designed control unit can
increase the complexity of the CPU, making it harder to design,
test, and maintain.
 Higher power consumption: A poorly-designed control unit
can increase power consumption by inefficiently using
resources, such as registers and memory, and requiring more
clock cycles for each instruction.
 Reduced reliability: A poorly-designed control unit can
reduce the reliability of the CPU by introducing errors, such as
memory errors and pipeline stalls.
 Limitations on instruction set: A poorly-designed control
unit may limit the instruction set of the CPU, making it harder
to execute complex instructions and limiting the functionality
of the CPU.
 Inefficient use of resources: A poorly-designed control unit
may inefficiently use resources such as registers and memory,
leading to wasted resources and reduced performance.
 Limited scalability: A poorly-designed control unit may limit
the scalability of the CPU, making it harder to handle larger and
more complex workloads.
 Poor support for parallelism: A poorly-designed control unit
may limit the ability of the CPU to support parallelism, reducing
the overall performance of the system.
 Security vulnerabilities: A poorly-designed control unit may
introduce security vulnerabilities, such as buffer overflows or
code injection attacks.
 Higher cost: A poorly-designed control unit may increase the
cost of the CPU by requiring additional components or
increasing the manufacturing complexity.

Address Sequencing
In the context of computer architecture, address sequencing refers to the
process of generating a sequence of memory addresses to control the
execution of instructions. This is a fundamental aspect of how computers
access and process data efficiently.
Here's a more detailed explanation:
 Purpose:
Address sequencing ensures that instructions and data are accessed in
the correct order, enabling the CPU to execute programs correctly.
 Methods:
 Incrementing: The simplest method is to increment the current address to
fetch the next instruction in sequential memory locations.
 Branching: Instructions can use branching (conditional or unconditional) to
jump to different memory addresses, altering the execution flow.
 Mapping: Instruction codes can be mapped to specific routines in control
memory, which are then executed.
 Microprogrammed Control:
In microprogrammed control units, address sequencing becomes crucial
for controlling the execution of micro-instructions, which are stored in
groups within control memory, each representing a specific routine.
 Control Unit's Role:
The control unit (or CPU's control part) uses address sequencing to
initiate sequences of micro-operations.
 Examples of Address Sequencing in Computer Architecture
 Incrementing CAR: The hardware controlling the address sequencing of the
control memory must be capable of incrementing the control address register
(CAR), moving to the next microinstruction address.
 Branching: Unconditional or conditional branching based on status bits
allows the control unit to jump to different routines or microinstructions.

 Mapping Instruction Codes: A mapping process converts instruction code
bits to an address within the control memory, where the corresponding
routine is located.
 Subroutine Call and Return: Address sequencing facilitates the smooth
execution flow of subroutine calls and returns.
 Microsequencer:
Some CPUs have a dedicated unit called a microsequencer, which is
responsible for generating the addresses needed to execute
microprograms. A sketch of such next-address selection in C is given below.
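The C sketch below models next-address selection in a microsequencer: increment the control address register (CAR), branch on a status bit, map the opcode to a routine, or return from a microsubroutine. The sequencing modes and the "shift the opcode left by two" mapping rule are assumptions made for illustration only.

#include <stdio.h>
#include <stdint.h>

enum seq { SEQ_INC, SEQ_BRANCH, SEQ_MAP, SEQ_RETURN };

/* Choose the next value of the control address register (CAR). */
static uint16_t next_car(uint16_t car,         /* current CAR                         */
                         enum seq mode,        /* sequencing mode of microinstruction */
                         uint16_t branch_addr, /* address field of microinstruction   */
                         int condition,        /* selected status bit                 */
                         uint8_t opcode,       /* from the instruction register       */
                         uint16_t saved_car)   /* saved return address                */
{
    switch (mode) {
        case SEQ_INC:    return (uint16_t)(car + 1);              /* sequential             */
        case SEQ_BRANCH: return condition ? branch_addr : (uint16_t)(car + 1);
        case SEQ_MAP:    return (uint16_t)(opcode << 2);          /* opcode -> routine      */
        case SEQ_RETURN: return saved_car;                        /* end of microsubroutine */
    }
    return (uint16_t)(car + 1);
}

int main(void) {
    /* Mapping opcode 0x5 lands at control-memory address 0x014 under this rule. */
    printf("CAR = 0x%03X\n", (unsigned)next_car(0, SEQ_MAP, 0, 0, 0x5, 0));
    return 0;
}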

Introduction to Input-Output Interface


The Input-Output Interface is a method that helps in
transferring information between internal storage (i.e., memory)
and external peripheral devices. A peripheral
device is one that provides input to or output from the computer;
such devices are also called input-output devices. For example, a
keyboard and mouse, which provide input to the computer, are called
input devices, while a monitor and printer, which provide output from
the computer, are called output devices. Like external hard drives,
some peripheral devices are able to provide both input and output.

Input-Output Interface
In a microcomputer-based system, peripheral devices need
special communication links for interfacing them with the CPU.
These links are needed to resolve the differences between
peripheral devices and the CPU.
The major differences are as follows:
1. The nature of peripheral devices is electromagnetic and
electro-mechanical, whereas the nature of the CPU is electronic. There
is a large difference in the mode of operation of peripheral
devices and the CPU.
2. A synchronization mechanism is also needed because the data
transfer rate of peripheral devices is slower than that of the CPU.
3. In peripheral devices, data codes and formats differ from the
formats used in the CPU and memory.
4. The operating modes of peripheral devices differ from one another,
and each must be controlled so as not to disturb the operation of
the other peripheral devices connected to the CPU.
Additional hardware is therefore needed to resolve the
differences between the CPU and the peripheral devices and to supervise
and synchronize all input and output devices.
Functions of Input-Output Interface:
1. It is used to synchronize the operating speed of the CPU with
respect to input-output devices.
2. It selects the input-output device which is appropriate for the
interpretation of the input-output signal.
3. It is capable of providing signals such as control and timing signals.
4. It provides data buffering through the data bus.
5. It provides various error detectors.
6. It converts serial data into parallel data and vice versa.
7. It also converts digital data into analog signals and vice versa.

I/O Interface (Interrupt and DMA Mode)


The method that is used to transfer information between internal
storage and external I/O devices is known as I/O interface. The
CPU is interfaced using special communication links by the
peripherals connected to any computer system. These
communication links are used to resolve the differences between
CPU and peripheral. There exists special hardware components
between CPU and peripherals to supervise and synchronize all the
input and output transfers that are called interface units.
Mode of Transfer:
The binary information that is received from an external device is
usually stored in the memory unit. The information that is
transferred from the CPU to the external device originates from
the memory unit. The CPU merely processes the information, but the
source and target are always the memory unit. Data transfer
between the CPU and the I/O devices may be done in different modes.
Data transfer to and from the peripherals may be done in any of
three possible ways:
1. Programmed I/O.
2. Interrupt-initiated I/O.
3. Direct memory access (DMA).
Now let’s discuss each mode one by one.
1. Programmed I/O: It is the result of the I/O instructions
that are written in the computer program. Each data item
transfer is initiated by an instruction in the program. Usually
the transfer is between a CPU register and memory. In this case it
requires constant monitoring of the peripheral devices by the CPU.
Example of Programmed I/O: In this case, the I/O device
does not have direct access to the memory unit. A transfer
from an I/O device to memory requires the execution of several
instructions by the CPU, including an input instruction to
transfer the data from the device to the CPU and a store instruction
to transfer the data from the CPU to memory. In programmed I/O,
the CPU stays in a program loop until the I/O unit indicates
that it is ready for data transfer. This is a time-consuming
process since it needlessly keeps the CPU busy. This situation
can be avoided by using an interrupt facility, as discussed
below. (A polled-I/O sketch in C follows this list.)
2. Interrupt-initiated I/O: In the above case we saw that the
CPU is kept busy unnecessarily. This situation can very well be
avoided by using an interrupt-driven method for data transfer.
The interrupt facility and special commands are used to inform the
interface to issue an interrupt request signal whenever data is
available from any device. In the meantime the CPU can
proceed with any other program execution. The interface
meanwhile keeps monitoring the device. Whenever it
determines that the device is ready for data transfer, it initiates
an interrupt request signal to the computer. Upon detection of
an external interrupt signal, the CPU momentarily stops the task
that it was performing, branches to the service
program to process the I/O transfer, and then returns to the task
it was originally performing.
 The I/O transfer rate is limited by the speed with which the
processor can test and service a device.
 The processor is tied up in managing an I/O transfer; a
number of instructions must be executed for each I/O
transfer.
 Terms:
o Hardware Interrupts: Interrupts present in the
hardware pins.
o Software Interrupts: These are the instructions used
in the program whenever the required functionality
is needed.
o Vectored interrupts: These interrupts are associated
with the static vector address.
o Non-vectored interrupts: These interrupts are
associated with the dynamic vector address.
o Maskable Interrupts: These interrupts can be
enabled or disabled explicitly.
o Non-maskable interrupts: These are always in the
enabled state. we cannot disable them.
o External interrupts: Generated by external devices
such as I/O.
o Internal interrupts: These are generated by the
internal components of the processor, such as
power failure, erroneous instructions, temperature sensors,
etc.
o Synchronous interrupts: These interrupts are
controlled by a fixed time interval. All interval
interrupts are called synchronous interrupts.
o Asynchronous interrupts: These are initiated based
on the feedback of previous instructions. All
external interrupts are called asynchronous
interrupts.
3. Direct Memory Access: The data transfer between a fast
storage medium such as a magnetic disk and the memory unit is
limited by the speed of the CPU. Thus we can allow the
peripherals to communicate directly with the memory using the
memory buses, removing the intervention of the CPU. This type
of data transfer technique is known as DMA or direct memory
access. During DMA the CPU is idle and has no control over
the memory buses. The DMA controller takes over the buses to
manage the transfer directly between the I/O devices and the
memory unit.
In burst transfer mode, the DMA controller keeps the bus for the
whole block:
 Bus grant request time.
 Transfer of the entire block of data at the transfer rate of the device,
because the device is usually slower than the speed at which
the data can be transferred to the CPU.
 Release of control of the bus back to the CPU. So, the total time
taken to transfer N bytes = bus grant request time + N *
(transfer time per byte) + bus release control time.
In cycle stealing mode, the controller transfers one byte at a time:
 Buffer the byte into the buffer.
 Inform the CPU that the device has 1 byte to transfer (i.e., bus
grant request).
 Transfer the byte (at system bus speed).
 Release the control of the bus back to the CPU.
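The C sketch below illustrates the programmed I/O case from item 1: the CPU sits in a loop polling a device status register until the ready bit is set, then reads the data register. The register names, the ready-bit position, and the use of ordinary C variables (rather than fixed memory-mapped hardware addresses) are simplifications made for illustration.

#include <stdio.h>
#include <stdint.h>

#define STATUS_READY 0x01u               /* hypothetical "data ready" bit      */

/* In a real memory-mapped system these would be fixed device addresses;
   here they are simulated with ordinary variables.                          */
static volatile uint8_t status_reg = 0;
static volatile uint8_t data_reg   = 0;

/* Programmed I/O: busy-wait on the status register, then read one byte. */
static uint8_t programmed_io_read(void) {
    while ((status_reg & STATUS_READY) == 0)
        ;                                /* CPU does no useful work while waiting */
    return data_reg;                     /* transfer the byte to a CPU register   */
}

int main(void) {
    data_reg    = 'A';                   /* pretend the device produced a byte */
    status_reg |= STATUS_READY;          /* ...and raised its ready bit        */
    printf("received '%c'\n", programmed_io_read());
    return 0;
}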
Advantages:
Standardization: I/O interfaces provide a standard way of
communicating with external devices. This means that different
devices can be connected to a computer using the same
interface, which makes it easier to swap out devices and reduces
the need for specialized hardware.
Modularity: With I/O interfaces, different devices can be added
or removed from a computer without affecting the other
components. This makes it easier to upgrade or replace a faulty
device without affecting the rest of the system.
Efficiency: I/O interfaces can transfer data between the
computer and the external devices at high speeds, which allows
for faster data transfer and processing times.
Compatibility: I/O interfaces are designed to be compatible with
a wide range of devices, which means that users can choose from
a variety of devices that are compatible with their computer’s I/O
interface.

Disadvantages:
Cost: I/O interfaces can be expensive, especially if specialized
hardware is required to connect a particular device to a computer
system.
Complexity: Some I/O interfaces can be complex to configure
and require specialized knowledge to set up and maintain. This
can be a disadvantage for users who are not familiar with the
technical aspects of computer hardware.
Compatibility issues: While I/O interfaces are designed to be
compatible with a wide range of devices, there can still be
compatibility issues with certain devices. In some cases, device
drivers may need to be installed to ensure proper functionality.
Security risks: I/O interfaces can be a security risk if they are
not properly configured or secured. Hackers can exploit
vulnerabilities in I/O interfaces to gain unauthorized access to a
computer system or steal data.

Peripheral Devices in Computer Organization
Peripheral devices are generally not essential for the
computer to perform its basic tasks; they can be thought of as an
enhancement to the user's experience. A peripheral device is a
device that is connected to a computer system but is not part of
the core computer system architecture. Generally, people
use the term peripheral more loosely to refer to a device external
to the computer case.
Classification of Peripheral devices
Peripheral devices are generally classified into the basic categories
given below:
1. Input Devices:
An input device converts incoming data and
instructions into a pattern of electrical signals in binary code that
is comprehensible to a digital computer. Examples:
keyboard, mouse, scanner, microphone, etc.

Keyboard: A keyboard is an input device that allows users to
enter text and commands into a computer system.

Mouse: A mouse is an input device that allows users to control
the cursor on a computer screen.

Scanner: A scanner is an input device that allows users to
convert physical documents and images into digital files.

Microphone: A microphone is an input device that allows users
to record audio.

2. Output Devices:
An output device generally performs the reverse of the input process,
translating digitized signals into a form intelligible to
the user. Output devices are also used for sending data
from one computer system to another. For some time, punched
cards and paper tape readers were extensively used for input, but
these have now been supplanted by more efficient devices.
Examples: monitors, headphones, printers, etc.

Monitor: A monitor is an output device that displays visual
information from a computer system.

Printer: A printer is an output device that produces physical
copies of documents or images.

Speaker: A speaker is an output device that produces audio.

3. Storage Devices:
Storage devices are used to store the data in the system that is
required for performing operations. Storage is one of the most
essential parts of the system and also provides better
compatibility. Examples:
hard disk, magnetic tape, flash memory, etc.

Hard Drive: A hard drive is a storage device that stores data
and files on a computer system.

USB Drive: A USB drive is a small, portable storage device
that connects to a computer system to provide additional
storage space.

Memory Card: A memory card is a small, portable storage device
that is commonly used in digital cameras and smartphones.

External Hard Drive: An external hard drive is a storage
device that connects to a computer system to provide
additional storage space.

4. Communication Devices:
Communication devices are used to connect a computer system to
other devices or networks. Examples of communication devices
include:

Modem: A modem is a communication device that allows a
computer system to connect to the internet.
Network Card: A network card is a communication device that
allows a computer system to connect to a network.
Router: A router is a communication device that allows
multiple devices to connect to a network.

Advantages of Peripheral Devices

Peripheral devices provide more features, which makes operation of
the system easier. These are given below:
 They make it easy to provide input.
 They provide specific output.
 They provide storage for information or data.
 They also improve the efficiency of the system.

Asynchronous Data Transfer

Asynchronous data transfer enables computers to send and
receive data without having to wait for a real-time response. With
this technique, data is conveyed in discrete units known as
packets that may be handled separately. This article will explain
what asynchronous data transfer is, its primary terminologies,
advantages and disadvantages, and some frequently asked
questions.

Terminologies used in Asynchronous Data Transfer
 Sender: The machine or gadget that transfers the data.
 Receiver: A device or computer that receives data.
 Packet: A discrete unit of transmitted and received data.
 Buffer: A short-term location for storing incoming or departing
data.
Classification of Asynchronous Data
Transfer
 Strobe Control Method
 Handshaking Method
Strobe Control Method For Data Transfer

Strobe control is a method used in asynchronous data transfer
that synchronizes data flow between two devices. Bits are
transmitted one at a time, independently of one another, and
without the aid of a clock signal in asynchronous communication.
To properly receive the data, the receiving equipment needs to be
able to synchronize with the transmitting device.
Strobe control involves sending data along with a different signal
known as the strobe signal. The strobe signal alerts the receiving
device that the data is valid and ready to be read. The receiving
device waits for the strobe signal before reading the data to
ensure that it is synchronized with its clock.
The strobe signal is usually generated by the transmitting device
and is sent either before or after the data. If the strobe signal is
sent before the data, it is called a leading strobe. If it is sent after
the data, it is called a trailing strobe.
Types of Strobes

It is advantageous to utilize strobe control because it enables
asynchronous data transfer, which is helpful when the
participating devices have dissimilar clock rates or are not
synchronized. The time of data transfer is also made more flexible
by strobe control since the receiving device doesn’t have to
synchronize with the transmitting device’s clock; instead, it can
wait for the strobe signal before reading the data.
Overall, strobe control, which is frequently employed in a range of
electronic devices and systems, is a helpful technique for assuring
dependable data flow in asynchronous communication.

Handshaking Method For Data Transfer

During an asynchronous data transfer, two devices manage their
communication using handshaking. It is guaranteed that the
transmitting and receiving devices are prepared to send and
receive data. Handshakes are essential in asynchronous
communication since there is no clock signal to synchronize the
data transfer.
During handshaking, two types of signals are mostly used:
request-to-send (RTS) and clear-to-send (CTS). The receiving
device is notified by an RTS signal when the transmitting
equipment is ready to provide data. The receiving device
responds with a CTS signal when it is ready to accept data.
Once data is transmitted to the receiver end, the receiver
signals that it has received the data by sending an
acknowledgment (ACK) signal. If the data is not successfully
received, the receiving device will indicate that a new transmission
is necessary via a negative acknowledgment (NAK) signal.
The handshaking procedure guarantees synchronized and
dependable data delivery. Additionally, it allows for flow
management, preventing the transmitting device from sending
the receiving device an excessive amount of data all at once. In
order to offer flow control, handshaking signals are utilized to
regulate the rate at which data is sent.
The Handshaking Method in asynchronous data transfer is used in
different devices for the transfer of data to ensure reliable
communication.
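The RTS/CTS/ACK exchange can be walked through in a short C sketch. The printed signal names follow the description above, but transmit_packet() and the overall flow are invented for illustration only, not a real handshaking API.

/* Hypothetical walk-through of the RTS/CTS/ACK handshake. */
#include <stdio.h>
#include <stdbool.h>

/* Pretend to send one packet; return true if it arrived intact. */
static bool transmit_packet(const char *packet) {
    printf("data:        %s\n", packet);
    return true;                          /* assume success in this sketch */
}

int main(void) {
    printf("transmitter: RTS (ready to send)\n");
    printf("receiver:    CTS (ready to accept)\n");

    bool ok = transmit_packet("packet-0");

    if (ok) {
        printf("receiver:    ACK (received correctly)\n");
    } else {
        printf("receiver:    NAK (please retransmit)\n");
        transmit_packet("packet-0");      /* NAK triggers a retransmission */
    }
    return 0;
}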
Advantages of Asynchronous Data Transfer
 Because asynchronous data transfer sends data in discrete,
independently processable packets, it enables faster data
transfer.
 This method is more efficient than synchronous data transfer
because the sender does not have to wait for the receiver to
respond before continuing.
 Large files or data sets are broken into smaller packets that can
be sent in parallel, which shortens the overall transfer time.
Disadvantages of Asynchronous Data Transfer
 Asynchronous data transfer requires more complex
programming, and data may be
corrupted or lost if packets are not received in the correct
order or are dropped during transmission.
 Because there is no real-time communication,
asynchronous data transfer can be more prone to errors than
synchronous data transfer.
Priority Interrupts | (S/W Polling and Daisy Chaining)
In I/O Interface (Interrupt and DMA Mode), we have discussed the
concept behind the Interrupt-initiated I/O. To summarize, when I/O
devices are ready for I/O transfer, they generate an interrupt request
signal to the computer. The CPU receives this signal, suspends the
current instructions it is executing, and then moves forward to service that
transfer request. But what if multiple devices generate interrupts
simultaneously? In that case, we need a way to decide which interrupt is to
be serviced first. In other words, we have to set a priority among all the
devices for systemic interrupt servicing. The concept of defining the
priority among devices so as to know which one is to be serviced first in
case of simultaneous requests is called a priority interrupt system. This
could be done with either software or hardware methods.
SOFTWARE METHOD – POLLING
In this method, all interrupts are serviced by branching to the same
service program. This program then checks with each device if it is the
one generating the interrupt. The order of checking is determined by the
priority that has to be set. The device having the highest priority is
checked first and then devices are checked in descending order of
priority. If the device is checked to be generating the interrupt, another
service program is called which works specifically for that particular
device. The structure will look something like this-
/* Devices are checked in descending order of priority;
   device[0] has the highest priority. */
if (device[0].flag)
    device[0].service();
else if (device[1].flag)
    device[1].service();
/* ...and so on, down to the lowest-priority device... */
else
    ;  /* no device raised the interrupt: raise an error */
The major disadvantage of this method is that it is quite slow. To
overcome this, we can use a hardware solution, one of which involves
connecting the devices in series. This is called the daisy-chaining method.
HARDWARE METHOD – DAISY CHAINING
The daisy-chaining method involves connecting all the devices that can
request an interrupt in a serial manner. This configuration is governed by
the priority of the devices. The device with the highest priority is placed
first followed by the second highest priority device and so on. The given
figure depicts this arrangement.
WORKING: There is an interrupt request line which is common to all the
devices and goes into the CPU.
 When no interrupts are pending, the line is in HIGH state. But if any of
the devices raises an interrupt, it places the interrupt request line in the
LOW state.
 The CPU acknowledges this interrupt request from the line and then
enables the interrupt acknowledge line in response to the request.
 This signal is received at the PI(Priority in) input of device 1.
 If the device has not requested the interrupt, it passes this signal to the
next device through its PO(priority out) output. (PI = 1 & PO = 1)
 However, if the device had requested the interrupt, (PI = 1 & PO = 0)
o The device consumes the acknowledge signal and blocks its
further use by placing 0 at its PO(priority out) output.
o The device then proceeds to place its interrupt vector
address(VAD) on the CPU's data bus.
o The device returns its interrupt request signal to the HIGH state to
indicate that its interrupt has been taken care of.
 If a device gets 0 at its PI input, it generates 0 at the PO output to tell
other devices that acknowledge signal has been blocked. (PI = 0 & PO
= 0)
Hence, the device having PI = 1 and PO = 0 is the highest priority device
that is requesting an interrupt. Therefore, the daisy-chain arrangement
ensures that the highest-priority interrupt gets serviced first and
establishes a hierarchy: the farther a device is from the first device, the
lower its priority.
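The PI/PO decision made by each device can be simulated in a few lines of C. The device count and the requesting[] array below are made up for the example; the point is how the acknowledge propagates along the chain until the first requesting device consumes it.

/* Sketch of daisy-chain priority resolution using PI/PO logic.
   Device 0 is closest to the CPU and therefore has the highest priority. */
#include <stdio.h>
#include <stdbool.h>

#define N_DEVICES 4

int main(void) {
    bool requesting[N_DEVICES] = { false, true, true, false }; /* pending IRQs */

    int pi = 1;  /* CPU drives the interrupt acknowledge into PI of device 0 */
    for (int d = 0; d < N_DEVICES; d++) {
        if (pi == 1 && requesting[d]) {
            /* PI = 1 and the device requested: it is serviced, blocks the
               acknowledge (PO = 0) and would place its VAD on the data bus. */
            printf("device %d is serviced (PI=1, PO=0)\n", d);
            pi = 0;                /* acknowledge does not propagate further */
        } else {
            /* Acknowledge is passed through unchanged: PO = PI. */
            printf("device %d passes acknowledge on (PI=%d, PO=%d)\n", d, pi, pi);
        }
    }
    return 0;
}

Running this prints that device 1 wins (PI = 1, PO = 0) while device 2, although it is also requesting, sees PI = 0 and must wait, which matches the hierarchy described above.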
Priority interrupts:
Advantages:
1. Priority interrupts allow for the efficient handling of high-priority tasks
that require immediate attention. This is especially important in real-
time systems where certain tasks must be completed within strict time
constraints.

2. They are more efficient than software polling as the processor does not
waste time constantly checking for events that have not occurred.
3. Priority interrupts are also more deterministic, as the response time to
an event can be accurately predicted based on its priority level.

Disadvantages:
1. One potential disadvantage of priority interrupts is the possibility of
lower priority tasks being starved of resources if high-priority tasks are
continuously interrupting the processor.

2. If not implemented properly, priority interrupts can lead to priority
inversion, where a low-priority task holds a resource required by a
higher-priority task, causing a delay in the high-priority task’s
execution.

Software polling:
Advantages:
1. Software polling is relatively simple to implement and does not require
specialized hardware.

2. It can be used to detect events that occur at irregular intervals, as the
processor can check for events whenever it is not performing other
tasks.

Disadvantages:
1. Software polling is less efficient than priority interrupts as the processor
must constantly check for events even if none have occurred.

2. In real-time systems, software polling may not be suitable as it is
difficult to guarantee the response time to an event, especially if the
processor is busy with other tasks.

Daisy chaining:
Advantages:
1. Daisy chaining allows multiple devices to share a single interrupt line,
reducing the number of interrupt lines required.

2. It is relatively simple to implement and does not require specialized
hardware.

Disadvantages:
1. Daisy chaining can result in increased response time as each device
must wait for the previous device to complete its interrupt handling
before it can start its own.
2. It can also be difficult to implement and troubleshoot, especially if there
are multiple devices on the same interrupt line.

OMA
OMA stands for "Office for Metropolitan Architecture", an international
architectural firm founded by Rem Koolhaas, Elia and Zoe Zenghelis, and
Madelon Vriesendorp.
Here's a more detailed explanation:
 Founded: OMA was established in 1975.
 Partners: Rem Koolhaas, Elia and Zoe Zenghelis, and Madelon
Vriesendorp founded OMA.
 Focus: OMA is known for its work that embraces the energy of modernity
and focuses on architecture, urbanism, and cultural analysis.
 Offices: OMA has offices in Rotterdam, New York, Hong Kong, and
Australia.
 Notable works: includes the Austrian House (2023), Taipei Performing
Arts Center (2022), Axel Springer Campus in Berlin (2020), the Qatar
National Library and the Qatar Foundation Headquarters (2018) and
Fondazione Prada in Milan (2015/2018)

Input-Output Processor
The DMA mode of data transfer reduces the CPU’s overhead in
handling I/O operations. It also allows parallelism in CPU and I/O
operations. Such parallelism is necessary to avoid wasting
valuable CPU time while handling I/O devices whose speeds are
much slower than that of the CPU. The concept of DMA operation
can be extended to relieve the CPU further from getting involved
with the execution of I/O operations. This gives rise to the
development of special purpose processors called Input-Output
Processor (IOP) or IO channels.
The Input-Output Processor (IOP) is just like a CPU that handles
the details of I/O operations. It is more equipped with facilities
than those available in a typical DMA controller. The IOP can fetch
and execute its own instructions that are specifically designed to
characterize I/O transfers. In addition to the I/O-related tasks, it
can perform other processing tasks like arithmetic, logic,
branching, and code translation. The main memory unit occupies a
central position and communicates with the IOP by means of
DMA.
The Input-Output Processor is a specialized processor which loads
and stores data in memory along with the execution of I/O
instructions. It acts as an interface between the system and
devices. It involves a sequence of events to execute I/O
operations and then store the results in memory.
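As a rough illustration (not a real IOP instruction set), the sketch below shows the CPU building a small "channel program" in memory and handing it to the IOP, which then fetches and executes the I/O commands on its own and reports back only when it is finished. Every structure, field, and command name here is invented for the example.

/* Hypothetical sketch of a CPU handing a channel program to an IOP. */
#include <stdio.h>

typedef enum { IO_READ, IO_WRITE, IO_BRANCH, IO_HALT } IoOp;

typedef struct {
    IoOp op;
    int  device;     /* which peripheral the command targets   */
    int  mem_addr;   /* main-memory buffer address (simulated) */
    int  count;      /* number of words to transfer            */
} IoCommand;

/* The IOP fetches and executes its own instructions, independently of the CPU. */
static void iop_run(const IoCommand *program) {
    for (int pc = 0; program[pc].op != IO_HALT; pc++) {
        const IoCommand *c = &program[pc];
        printf("IOP: op=%d device=%d addr=0x%X count=%d\n",
               c->op, c->device, c->mem_addr, c->count);
        /* a real IOP would move data between the device and memory via DMA */
    }
    printf("IOP: done, interrupt the CPU to report completion\n");
}

int main(void) {
    /* The CPU builds the channel program in memory and issues a "start I/O". */
    IoCommand program[] = {
        { IO_READ,  1, 0x1000, 64 },
        { IO_WRITE, 2, 0x2000, 32 },
        { IO_HALT,  0, 0,      0  },
    };
    iop_run(program);
    return 0;
}

While the IOP works through the program, the CPU is free to execute other instructions, which is exactly the parallelism described above.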
Input-Output Processor
Features of an Input-Output Processor
 Specialized Hardware: An IOP is equipped with specialized
hardware that is optimized for handling input/output
operations. This hardware includes input/output ports, DMA
controllers, and interrupt controllers.
 DMA Capability: An IOP has the capability to perform Direct
Memory Access (DMA) operations. DMA allows data to be
transferred directly between peripheral devices and memory
without going through the CPU, thereby freeing up the CPU for
other tasks.
 Interrupt Handling: An IOP can handle interrupts from
peripheral devices and manage them independently of the
CPU. This allows the CPU to focus on executing application
programs while the IOP handles interrupts from peripheral
devices.
 Protocol Handling: An IOP can handle communication
protocols for different types of devices such as Ethernet, USB,
and SCSI. This allows the IOP to interface with a wide range of
devices without requiring additional software support from the
CPU.
 Buffering: An IOP can buffer data between the CPU and
peripheral devices. This allows the IOP to handle large amounts
of data without overloading the CPU or the peripheral devices.
 Command Processing: An IOP can process commands from
peripheral devices independently of the CPU. This allows the
CPU to focus on executing application programs while the IOP
handles peripheral device commands.
 Parallel Processing: An IOP can perform input/output
operations in parallel with the CPU. This allows the system to
handle multiple tasks simultaneously and improve overall
system performance.
Applications of I/O Processors
 Data Acquisition Systems: I/O processors can be used in
data acquisition systems to acquire and process data from
various sensors and input devices. The I/O processor can
handle high-speed data transfer and perform real-time
processing of the acquired data.
 Industrial Control Systems: I/O processors can be used in
industrial control systems to interface with various control
devices and sensors. The I/O processor can provide precise
timing and control signals, and can also perform local
processing of the input data.
 Multimedia Applications: I/O processors can be used in
multimedia applications to handle the input and output of
multimedia data, such as audio and video. The I/O processor
can perform real-time processing of multimedia data, including
decoding, encoding, and compression.
 Network Communication Systems: I/O processors can be
used in network communication systems to handle the input
and output of data packets. The I/O processor can perform
packet routing, filtering, and processing, and can also perform
encryption and decryption of the data.
 Storage Systems: I/O processors can be used in storage
systems to handle the input and output of data to and from
storage devices. The I/O processor can handle high-speed data
transfer and perform data caching and prefetching operations.
Advantages of Input-Output Processor
 The I/O devices can directly access the main memory without
the intervention of the processor in I/O processor-based
systems.
 It is used to address the problems that arise in the Direct
memory access method.
 Reduced Processor Workload: With an I/O processor, the
main processor doesn’t have to deal with I/O operations,
allowing it to focus on other tasks. This results in more efficient
use of the processor’s resources and can lead to faster overall
system performance.
 Improved Data Transfer Rates: Since the I/O processor can
access memory directly, data transfers between I/O devices
and memory can be faster and more efficient than with other
methods.
 Increased System Reliability: By offloading I/O tasks to a
dedicated processor, the system can be made more fault-
tolerant. For example, if an I/O operation fails, it won’t affect
other system processes.
 Scalability: I/O processor-based systems can be designed to
scale easily, allowing for additional I/O processors to be added
as needed. This can be particularly useful in large-scale data
centres or other environments where the number of I/O devices
is constantly changing.
 Flexibility: I/O processor-based systems can be designed to
handle a wide range of I/O devices and interfaces, providing
more flexibility in system design and allowing for better
customization to meet specific requirements.
Disadvantages of Input-Output Processor
 Cost: I/O processors can add significant costs to a system due
to the additional hardware and complexity required. This can
be a barrier to adoption, especially for smaller systems.
 Increased Complexity: The addition of an I/O processor can
increase the overall complexity of a system, making it more
difficult to design, build, and maintain. This can also make it
harder to diagnose and troubleshoot issues.
 Limited Performance Gains: While I/O processors can
improve system performance by offloading I/O tasks from the
main processor, the gains may not be significant in all cases. In
some cases, the additional overhead of the I/O processor may
actually slow down the system.
 Synchronization Issues: With multiple processors accessing
the same memory, synchronization issues can arise, leading to
potential data corruption or other errors.
 Lack of Standardization: There are many different I/O
processor architectures and interfaces available, which can
make it difficult to develop standardized software and
hardware solutions. This can limit interoperability and make it
harder for vendors to develop compatible products.

Serial Communications Interface (SCI)
In the world of digital electronics, communication between
devices is an important part of system design. One of the most basic
and widely used methods of such communication is the SCI
(Serial Communications Interface). SCI enables
communication between microcontrollers, sensors, computers,
and various other digital devices. Its simplicity and efficiency
make it the preferred choice for many applications, from simple
data acquisition systems to complex industrial automation
processes.
What is Serial Communications Interface (SCI)
A Serial Communications Interface (SCI) is a type
of communication protocol used for serial data
transmission between devices. It enables the exchange of
data by sending bits sequentially over a single channel,
which can be a wire or a wireless medium. SCI typically
uses a pair of signals for communication: one for
transmitting (Tx) and one for receiving (Rx). This interface
is essential in embedded systems,
allowing microcontrollers to communicate with peripherals,
other microcontrollers, and computers.
Types of Serial Communication Interface (SCI)
SCI can be classified into several types based on implementation
and usage:
1. Simplex
2. Half Duplex
3. Full Duplex
Simplex SCI
Simplex communication involves one-way transmission of signals,
with a clear sender and receiver but no ability for the receiver to
reply. It is like a one-lane road, allowing efficient and
straightforward communication. Let's see in the diagram how
simplex works.

Simplex
Components
1. Sender/Transmitter: The device or module responsible for
sending the data.
2. Receiver: The device or module that receives and processes
the data.
3. Communication Medium: The physical or wireless medium
through which the data is transmitted, such as cables, fiber
optics, or radio waves.
Working of Simplex
The working of simplex communication is easy to
understand: transmission happens in only one direction, that is,
from sender to receiver. Reverse communication is not
possible; only the sender can send messages to the
receiver. The receiver can only read them, as there is no provision for
a reply in this type of communication.
Half Duplex
This mode uses a single communication channel for both
transmission and reception, but not simultaneously. Devices must
take turns to send and receive data. Let's see in the diagram how
half duplex works.
Half Duplex SCI
Components
1. Sender/Transmitter: The device or module responsible for
sending the data.
2. Receiver: The device or module that receives and processes
the data.
3. Communication Medium: The physical or wireless medium
through which the data is transmitted, such as cables, fiber
optics, or radio waves.
Working of Half Duplex
Half duplex involves communication in both directions, but
not simultaneously. If the sender sends a message to the
receiver, only after the message has been successfully transmitted
can the receiver send a return message to the sender. In the
same way, if the receiver sends a message to the sender, only after
the message has been successfully transmitted can the sender send
a return message to the receiver.
Full Duplex
In this mode, data transmission and reception occur
simultaneously on separate channels, allowing continuous
bidirectional communication. Let's see in the diagram how full
duplex works.

Full Duplex SCI


Components
1. Sender/Transmitter: The device or module responsible for
sending the data.
2. Receiver: The device or module that receives and processes
the data.
3. Communication Medium: The physical or wireless medium
through which the data is transmitted, such as cables, fiber
optics, or radio waves.
Working of Full Duplex
Full duplex, like half duplex, involves communication in both
directions, but here simultaneous communication is
possible. That means that while the sender is sending a message to
the receiver, the receiver can at the same time send a message to
the sender, and vice versa.
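In a full-duplex SCI, each direction of the link carries serial frames independently. The sketch below shows one commonly assumed asynchronous frame layout (1 start bit, 8 data bits sent LSB first, an even-parity bit, and 1 stop bit); the exact format depends on the device, so treat this as an illustrative example rather than a fixed standard. It also makes the start/stop-bit overhead and parity checking mentioned in the lists below easy to see.

/* Sketch: frame one byte for asynchronous serial transmission
   (1 start bit, 8 data bits LSB first, even parity, 1 stop bit). */
#include <stdio.h>

/* Write the 11 frame bits into out[0..10]; return the number of bits. */
int frame_byte(unsigned char byte, int out[11]) {
    int n = 0, ones = 0;
    out[n++] = 0;                          /* start bit: line pulled low        */
    for (int i = 0; i < 8; i++) {          /* data bits, least significant first */
        int bit = (byte >> i) & 1;
        ones += bit;
        out[n++] = bit;
    }
    out[n++] = ones % 2;                   /* even parity: total count of 1s becomes even */
    out[n++] = 1;                          /* stop bit: line returns high       */
    return n;
}

int main(void) {
    int bits[11];
    int n = frame_byte('A', bits);         /* 'A' = 0x41 */
    printf("frame for 'A': ");
    for (int i = 0; i < n; i++) printf("%d", bits[i]);
    printf("  (11 bits on the line for 8 bits of data)\n");
    return 0;
}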
Advantages and Disadvantages of Serial
Communication Interface (SCI)
Advantages
 Simplicity: Easy to implement and understand.
 Low Cost: Requires little hardware, so it is inexpensive to build.
 Versatility: Suitable for various applications and devices.
 Long Distance: Works well over longer communication distances.
 Error Checking: Parity bits help in detecting errors in
transmission.
Disadvantages
 Slower Speed: It is slower than parallel communication.
 Limited Data Size: Generally transmits one byte at a time.
 Asynchronous Issues: Requires precise timing for reliable
communication.
 Overhead: Start and stop bits add to the data overhead.
 Error Handling: Provides only simple error detection, not
correction.
Applications Of Serial Communication Interface (SCI)
1. Embedded Systems: Communication between
microcontrollers and sensors.
2. Computing Peripherals: Connecting mice, keyboards, and
other devices.
3. Industrial Automation: Data transfer between PLCs and
industrial machines.
4. Telemetry Systems: Remote monitoring and control
applications.
5. Modems: Communication over telephone lines.
6. GPS Modules: Data transmission from GPS receivers to other
devices.
7. Medical Devices: Data transfer in medical monitoring
equipment.
8. Automotive Systems: Communication within vehicle control
systems.
9. Home Automation: Integrating various smart devices.
10. Robotics: Controlling and monitoring robotic systems.