System Programming & Microprocessor

Unit I:

System Software
System software refers to the low-level software that manages and controls a
computer’s hardware and provides basic services to higher-level software.
There are two main types of software: systems software and application
software. Systems software includes the programs that are dedicated to
managing the computer itself, such as the operating system, file management
utilities, and disk operating system (or DOS).

System software is software that provides a platform for other software.


Examples include operating systems, antivirus software, disk-formatting
software, computer language translators, and so on. Such software is
commonly prepared by computer manufacturers and consists of programs
written in low-level languages that interact with the hardware at a very
basic level. System software serves as the interface between the
hardware and the end user.
System software refers to the collection of programs and software
components that enable a computer or computing device to function
properly. It acts as an intermediary between the user and the computer
hardware, allowing the user to interact with the hardware and use various
applications and programs. Some common types of system software include
operating systems (such as Windows, macOS, or Linux), device drivers,
utility programs, programming languages, and system libraries.
Examples of System Software
System software is a type of computer program designed to run a
computer's hardware and application programs. Examples include
operating systems (such as macOS, Linux, Android, and Microsoft
Windows), game engines, search engines (such as Google, Bing, and
Yahoo!), industrial automation software, computational science software,
and software-as-a-service (SaaS) applications.
• Operating systems (OS): Windows, Linux, macOS, etc.
• Device drivers: software that enables the communication between
hardware and OS.
• Firmware: pre-installed low-level software that controls a device’s
basic functions.
• Utility software: tools for system maintenance and optimization.
• Boot loaders: software that initializes the OS during startup.
System Software Components:

Your system has three basic types of software: application programs, device
drivers, and operating systems. Each type of software performs a completely
different job, but all three work closely together to perform useful work. While
some special-purpose programs do not fit neatly into any of these classes,
most software does. Programs run in the memory portion of the system. While
running, programs are known as processes or jobs. The
following illustration shows the relationship between the different software
programs and the hardware.

Application Programs

Application programs are the top software layer. You can perform specific
tasks with these programs, such as using a word processor for writing, a
spreadsheet for accounting, or a computer-aided design program for drawing.
The other two layers, device drivers and the operating system, play important
support roles. Your system might run one application program at a time, or it
might run many simultaneously.

Device Drivers

Device drivers are a set of highly specialized programs that help
application programs and the operating system do their tasks. Device
drivers (in particular, adapter drivers) do not interact with you; they
interact directly with computer hardware elements and shield the
application programs from the hardware specifics of the computer.

Operating System

An operating system is a collection of programs that controls the running
of programs and organizes the resources of a computer system. These
resources are the hardware components of the system, such as keyboards,
printers, monitors, and disk drives. Your AIX operating system comes with
programs, called commands or utilities, that maintain your files, send and
receive messages, provide miscellaneous information about your system, and
so on.

An application program relies on the operating system to perform many
detailed tasks associated with the internal workings of the computer. The
operating system also accepts commands directly from you to manage files
and security. There are many extensions to the AIX operating system that
allow you to customize your environment.

Root-User Processes

Root-user processes are programs that can be run only by a user with root
authority. A system administrator has root authority for all processes.

A root-user process can:

• Read or write any object
• Call any system function
• Perform certain subsystem-control operations

When you are not allowed to run a command, the system displays a message
saying you do not have the correct permissions or you are not allowed to run
that command. The system administrator may be the only person who can log
in as root on your system. The system administrator can also set you up to
use particular commands, giving you some control over processes.

System Software Evolution:

• Software evolution is the process of developing software using
software engineering principles and methods.
• It involves the initial development of software, followed by its
maintenance and timely updates, until the desired software is produced
and fulfils the expected requirements.

Importance of Software Evolution:

• Organizations have large investments in their software systems; these
systems are critical business assets.
• To preserve the value of these assets to the business, the systems
must be changed and updated in a timely manner, according to the
changing requirements of the business.
• In large organizations, the majority of the software budget is spent
on changing and evolving existing software rather than developing new
software, because evolving existing systems saves the cost and time
required to develop new software.
Evolution and Servicing:

• Evolution - Evolution is the stage in a software system's life cycle
where it is in operational use and is evolving as new requirements are
proposed and implemented in the system.
• Servicing - In this stage, the software remains useful but the only
changes made are those required to keep it operational i.e. bug fixes
and changes to reflect changes in the software’s environment. No new
functionality is added.
• Phase-out - In this stage, the software may still be used but no further
changes are made to it.

A Spiral Model of Development and Evolution:


A spiral model of development and evolution represents how a software
system evolves through a sequence of multiple releases.
Evolution processes:

• Software evolution processes depend on:
o The type of software being maintained
o The development processes used
o The skills and experience of the people involved
• Proposals for change are the driver of system evolution.
• These should be linked with components that are affected by the
change, thus allowing the cost and impact of the change to be
estimated.
• Change identification and evolution continues throughout the system's
lifetime.

The Software Evolution Process:


Change Implementation -

• An iteration of the development process in which the revisions to the
system are designed, implemented, and tested.
• A critical difference is that the first stage of change implementation may
involve program understanding, especially if the original system
developers are not responsible for the change implementation.
• During the program understanding phase, you have to understand how
the program is structured, how it delivers functionality, and how the
proposed change might affect the program.

Urgent Change Requests -

• Urgent changes may have to be implemented without going through all
stages of the software evolution process:
o If a serious system fault has to be repaired to allow normal
operation to continue;
o If changes to the system’s environment (e.g. an OS upgrade)
have unexpected effects;
o If there are business changes that require very rapid response
(e.g. the release of a competing product).

Agile Methods and Evolution -

• Agile methods are based on incremental development, so the transition
from development to evolution is a seamless one.
o Evolution is simply a continuation of the development process
based on frequent system releases.
• Automated regression testing is particularly valuable when changes are
made to a system.
• Changes may be expressed as additional user stories.

Software Evolution Laws:


Three different categories are as follows:

• S-Type (Static-type) - Software that works strictly according to a
defined specification and solution. S-type software is the least subject
to change and is hence the simplest of all. For example, a calculator
program for mathematical computation.
• P-Type (Practical-type) - Software built as a collection of
procedures, defined by exactly what those procedures can
do. For example, gaming software.
• E-Type (Embedded-type) - Software that works closely with the
requirements of the real-world environment. This software has a high
degree of evolution, as there are various changes in laws, taxes, etc. in
real-world situations. For example, online trading software.
General Machine Structure (memory, registers, data, instructions)
A general machine structure consists of -

1. Instruction Interpreter
2. Location Counter
3. Instruction Register
4. Working Registers
5. General Register

The instruction interpreter hardware is basically a group of circuits that
perform the operation specified by the instruction fetched from memory.

The location counter, also called the program/instruction counter, simply
points to the current instruction being executed.

The instruction register holds a copy of the instruction currently being
executed.

The working registers are often called "scratch pads" because they are
used to store temporary values while a calculation is in progress.

The CPU interfaces with memory through the MAR and MBR:

MAR (Memory Address Register) - contains the address of the memory
location to be read from or stored into.
MBR (Memory Buffer Register) - contains a copy of the contents of the
memory location specified by the MAR.

The memory controller is used to transfer data between the MBR and the
memory location specified by the MAR.

The role of I/O Channels is to input or output information from memory.
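The interplay of these components can be sketched as a toy fetch-execute loop. This is a hypothetical one-accumulator machine with an invented LOAD/ADD/HALT instruction set, purely for illustration:

```python
# Toy machine: hypothetical instruction set (LOAD/ADD/HALT), invented
# for illustration only; real machines store binary words, not tuples.
memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("HALT", None)}

lc = 0        # location counter: address of the next instruction
acc = 0       # a working register ("scratch pad")
while True:
    mar = lc              # MAR receives the address to fetch
    mbr = memory[mar]     # memory controller copies that word into the MBR
    ir = mbr              # instruction register holds the fetched instruction
    lc += 1
    op, arg = ir          # instruction interpreter decodes and executes
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break

print(acc)  # 8
```

Each trip around the loop is one fetch-decode-execute cycle: the location counter supplies the address, the MAR/MBR pair carries out the memory read, and the instruction interpreter acts on the instruction register's contents.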

Machine Language:

Machine code, also known as machine language, is the elemental language of
computers. It is read by the computer's central processing unit (CPU), is
composed of digital binary numbers, and looks like a very long sequence of
zeros and ones. Ultimately, the source code of every human-readable
programming language must be translated to machine language by a compiler
or an interpreter, because binary code is the only language that computer
hardware can understand.

Each CPU has its own specific machine language. The processor reads and
handles instructions, which tell the CPU to perform a simple task.
Instructions are made up of a certain number of bits. If instructions for a
particular processor are 8 bits long, for example, the first 4 bits (the
opcode) tell the computer what to do and the last 4 bits (the operand) tell
the computer what data to use.
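For the hypothetical 8-bit format just described (4-bit opcode, 4-bit operand), splitting an instruction word into its two fields is just shifting and masking; the instruction value below is invented:

```python
# Hypothetical 8-bit instruction word: high 4 bits = opcode, low 4 bits = operand.
instruction = 0b0011_0101    # an invented example value

opcode = instruction >> 4    # top 4 bits  -> 0b0011
operand = instruction & 0xF  # bottom 4 bits -> 0b0101

print(opcode, operand)  # 3 5
```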

01001000 01100101 01101100 01101100 01101111 00100001

Depending upon the processor, a computer's instruction sets may all be the
same length, or they may vary, depending upon the specific instruction. The
architecture of the particular processor determines how instructions are
patterned. The execution of instructions is controlled by firmware or the CPU's
internal wiring.

Human programmers rarely, if ever, deal directly with machine code anymore.
If developers are debugging a program at a low level, they might use a
printout that shows the program in its machine code form. The printout,
called a dump, is very difficult and tedious to work with. Utility
programs used to create dumps often represent four bits with a
single hexadecimal digit to make the machine code easier to read, and they
include other information about the computer's operation, such as the
address of the instruction that was being executed at the time the dump
was initiated.
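The four-bits-to-one-hex-digit compression that dump utilities use is easy to demonstrate. The byte string below is the same six-byte binary sequence shown earlier, which happens to be the ASCII text "Hello!":

```python
# The six bytes shown earlier, spelled out in binary.
data = bytes([0b01001000, 0b01100101, 0b01101100,
              0b01101100, 0b01101111, 0b00100001])

# Each byte (8 bits) becomes two hexadecimal digits (4 bits each),
# which is how a dump utility would display it.
print(data.hex(" "))         # 48 65 6c 6c 6f 21
print(data.decode("ascii"))  # Hello!
```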

Problems in Machine Language:

1. The main drawback of machine language is how difficult it is to
develop, learn, and debug code and algorithms.
2. It is very time-consuming to fix flaws and mistakes in code and
programs.
3. Only a few people can memorize or even write the code.
4. It is a platform-dependent language: code written for one processor
will not run on another.
5. Machine language coding requires a significant amount of time.
6. It is highly error-prone, and errors are hard to find and fix.
7. It is difficult to modify.
8. It is hard to recall instructions in numerical form, and this causes
mistakes.
9. Machine language codes are machine-specific and cannot be reused on
other machines.
10. Machine language is a complicated language to learn and use.
Octal and Hexadecimal Numeration:
Because binary numeration requires so many bits to represent relatively small
numbers compared to the economy of the decimal system, analysing the
numerical states inside of digital electronic circuitry can be a tedious task.

Computer programmers who design sequences of number codes instructing a
computer what to do would have a very difficult task if they were forced to
work with nothing but long strings of 1's and 0's, the "native language" of
any digital circuit.

To make it easier for human engineers, technicians, and programmers to
"speak" this language of the digital world, other systems of place-weighted
numeration have been devised which are very easy to convert to and from
binary.

One of those numeration systems is called octal, because it is a
place-weighted system with a base of eight. Valid ciphers include the
symbols 0, 1, 2, 3, 4, 5, 6, and 7. Each place weight differs from the one
next to it by a factor of eight.

Another system is called hexadecimal, because it is a place-weighted system
with a base of sixteen. Valid ciphers include the normal decimal symbols
0 through 9, plus the six alphabetical characters A, B, C, D, E, and F, to
make a total of sixteen. As you might have guessed already, each place
weight differs from the one before it by a factor of sixteen.

Let’s count again from zero to twenty using decimal, binary, octal, and
hexadecimal to contrast these systems of numeration:

Number      Decimal   Binary   Octal   Hexadecimal
------      -------   ------   -----   -----------
Zero              0        0       0        0
One               1        1       1        1
Two               2       10       2        2
Three             3       11       3        3
Four              4      100       4        4
Five              5      101       5        5
Six               6      110       6        6
Seven             7      111       7        7
Eight             8     1000      10        8
Nine              9     1001      11        9
Ten              10     1010      12        A
Eleven           11     1011      13        B
Twelve           12     1100      14        C
Thirteen         13     1101      15        D
Fourteen         14     1110      16        E
Fifteen          15     1111      17        F
Sixteen          16    10000      20       10
Seventeen        17    10001      21       11
Eighteen         18    10010      22       12
Nineteen         19    10011      23       13
Twenty           20    10100      24       14
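The rows of this table can be checked with Python's built-in base conversions; here is the last row, along with a round trip back to decimal:

```python
# The last row of the table: twenty in binary, octal, and hexadecimal.
n = 20
print(bin(n), oct(n), hex(n))  # 0b10100 0o24 0x14

# Converting back: all three notations denote the same number.
assert int("10100", 2) == int("24", 8) == int("14", 16) == 20

# One hexadecimal digit packs exactly four binary digits,
# and one octal digit packs exactly three.
print(int("1111", 2))  # 15, written F in hexadecimal and 17 in octal
```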

Assembly Language:
Assembly language is a low-level language that helps programmers
communicate directly with computer hardware. It uses mnemonics to
represent the operations that a processor has to perform, and it is an
intermediate language between high-level languages like C++ and the
binary machine language. It uses hexadecimal and binary values, yet it
is readable by humans.

How Assembly Language Works?


Assembly languages contain mnemonic codes that specify what the
processor should do. The mnemonic code written by the programmer is
converted into machine language (binary language) for execution. An
assembler is used to convert the assembly code into machine language,
and that machine code is stored in an executable file for execution.
It enables the programmer to communicate directly with the hardware, such
as registers, memory locations, and input/output devices, which helps the
programmer directly control hardware components and manage resources
efficiently.
How to Execute Assembly Language?
• Write assembly code: Open any text editor, write the mnemonic
codes in it, and save the file with the proper extension for your
assembler, such as .asm or .s.
• Assemble the code: Convert your code to machine language
using an assembler.
• Generate an object file: The assembler generates an object file
corresponding to your code, typically with an extension such
as .obj or .o.
• Link and create an executable: An assembly program may consist
of multiple source files, and these must be linked together, along
with any libraries, to make an executable. A linker such as ld can
be used for this purpose.
• Run the program: After creating an executable file, run it as
usual; the exact steps depend on the platform.
Components of Assembly Language
• Registers: Registers are fast memory locations situated inside
the processor. They help the ALU perform arithmetic operations
and store data temporarily. Example: AX (accumulator), BX, CX.
• Command: An instruction in assembly code known as a command
informs the assembler what to do. Assembly language instructions
typically employ self-descriptive abbreviations to make the
vocabulary simple, as “ADD” for addition and “MOV” for data
movement.
• Instructions: Instructions are the mnemonic codes that we give to
the processor to perform specific tasks like LOAD, ADDITION,
MOVE. Example: ADD
• Labels: It is a symbolic name/identifier given to indicate a particular
location or address in the assembly code. Example: FIRST to
indicate starting of execution part of code.
• Mnemonic: A mnemonic is an abbreviation for an assembly language
instruction or a name given to a machine function. Each mnemonic
in assembly corresponds to a specific machine instruction. ADD is
one such mnemonic; CMP, MUL, and LEA are further examples.
• Macro: Macros are blocks of code that can be used anywhere in
the program by calling them, once they are defined. Macro support
is often built into assemblers and compilers. In NASM syntax, a
macro is defined using the %macro directive. Example:
%macro ADD_TWO_NUMBERS 2
    add eax, %1
    add eax, %2
%endmacro
• Operands: These are the data or values that we are given through
instruction to perform some operation on it. Example: In ADD R1,R2
; R1 and R2 are operands.
• Opcode: These are the mnemonic codes that specify to the
processor which operation has to be done. Example: ADD means
Addition.
Advantages of Assembly Language
• It provides precise control over hardware and hence increased
code optimization.
• It allows direct access to hardware components like registers, so it
enables tailored solutions for hardware issues.
• Efficient resource utilization because of low level control, optimized
code, resource awareness, customization etc.
• It is ideal for programming microcontrollers, sensors and other
hardware components.
• It is used in security research for finding security
vulnerabilities and in reverse engineering software for system
security.
• It is essential for building operating systems, kernels, and
device drivers that require direct hardware interaction.
Disadvantages of Assembly Language
• The language is complex and very hard to learn, especially for
beginners.
• It is highly machine dependent, which limits portability.
• The code is hard to maintain, especially for large-scale
projects.
• Development is very time-consuming, since the code is hard to
understand and very lengthy.
• Debugging is very challenging for programmers.
High Level Language:
High-level languages are programming languages used for writing
programs or software that can be understood by both humans and
computers. High-level languages are easier for humans to understand
because they use symbols, letters, and phrases to represent logic and
instructions in a program, and they provide a high level of abstraction
compared to low-level languages.
Examples of Some High Level Languages
• Python: Used by programmers for various tasks, such as web
development, data analysis, artificial intelligence, and scientific
computing, because it has a very simple syntax and is easy to
learn and understand.
• Java: Java is known for running on multiple platforms. It is not
as easy as Python, but it is widely used in mobile app development
and web applications.
• C++: C++ is very popular and widely used because it has features
of both low- and high-level languages, which makes it suitable for
building complex software such as game engines, desktop
applications, and performance-critical applications.
• Ruby: Ruby is used for web development and web applications. It is
known for its elegant syntax and frameworks such as Rails.
• JavaScript: JavaScript is used with HTML and other languages to
make web pages interactive and dynamic. Its libraries help in
creating multipurpose web applications.
Execution of High Level Languages
There are three ways in which high level languages can be executed
Interpreted
In this approach there is a program called an interpreter, whose task is
to read the code written in the high-level language, follow the program
flow, and execute it. It performs the instructions directly, without
compiling them first. This is the case for interpreted languages such as
Python, Ruby, and JavaScript.

Compiled Languages
This is a little different from interpretation: first the code is
transformed into executable machine language. There are two approaches
to compilation: machine code generation and intermediate representation.
• Machine code generation: The code is compiled by a compiler
directly into machine code, which is then executed.
• Intermediate representation: The code is first compiled to an
intermediate representation, which is saved so that there is no need
to read the source file again. The saved form is a byte code, which
can be executed by a virtual machine.
Some compiled languages are C, C++, and Java.
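Python itself illustrates the intermediate-representation route: its source code is first compiled to byte code, which the interpreter's virtual machine then executes. A minimal sketch:

```python
# Compile a tiny source string into a code object, Python's
# byte-code container, without executing it yet.
source = "x = 2 + 3"
code = compile(source, "<example>", "exec")

# The byte code can be executed later without re-reading the source.
namespace = {}
exec(code, namespace)
print(namespace["x"])  # 5

# The raw byte code is just a bytes sequence; the dis module can
# display it as readable mnemonics if desired.
print(type(code.co_code))  # <class 'bytes'>
```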

Transcompiled Languages
This is also known as source-to-source translation, because the source
code of one language is converted into the source code of another
programming language. This is done so that code can be used on multiple
platforms. Some transcompiled languages are TypeScript and CoffeeScript.
Use of High Level Language
• Web Development: Web development can be done easily using HTML,
CSS, and JavaScript. They are considered high-level languages
because they are easy to understand, which makes web development
easy for everyone to learn.
• Data Analysis: R and Python are the most popular languages for
data analysis. Data analysts use them to study large amounts of
data, thanks to their many libraries and data-manipulation
capabilities.
• DBMS (Database Management System): A DBMS is the place where your
data is stored and managed. If you create a website, you need to
store its data somewhere, and high-level languages such as PHP and
SQL help in storing and accessing that data.
• Game Development: Gaming is increasingly popular. What makes it
possible to build these complicated, high-graphics games, with
many controls and accessibility features, is high-level languages.
Advantages of High Level Languages
• The biggest advantage of high-level languages is that they are
easy to understand, remember, and learn, and easy to write and
debug code in.
• Many libraries, predefined operators, data types, and frameworks
are available for development, which reduces the amount of code we
need to write.
• They are portable, meaning the same code can be used on different
platforms without much modification.
• They provide a higher level of abstraction, hiding the
complexities of the hardware from the programmer; you do not need
to know about the hardware before writing a program.
Disadvantages of High level Languages
• High-level languages are slower than low-level languages because
the extra layers of abstraction over the hardware require more
processing and more memory during execution.
• High-level languages give less control over the hardware, because
the complexities of the hardware are hidden from the programmer.
• For maximum utilization of the hardware or CPU in terms of
performance, low-level languages are best.
UNIT- II
Assembler
An assembler is a program for converting instructions written in
low-level assembly code into relocatable machine code, generating along
with it information for the loader.

It is necessary to convert user-written programs into machine code; this
translation from assembly language into machine language is performed
with the help of system software. An assembler can be defined as a
program that translates an assembly language program into a machine
language program. A self assembler (also known as a resident assembler)
is a program that runs on a computer and produces machine code for that
same computer or machine. A cross assembler is an assembler that runs on
one computer and produces machine code for a different computer.

The assembler generates instructions by evaluating the mnemonics
(symbols) in the operation field and finding the values of symbols and
literals to produce machine code. If the assembler does all this work in
one scan, it is called a single-pass assembler; if it does it in multiple
scans, it is called a multi-pass assembler. Here the assembler divides
these tasks between two passes:
• Pass-1:
1. Define symbols and literals and remember them in the symbol
table and literal table respectively.
2. Keep track of the location counter (LC).
3. Process pseudo-operations.
4. Assign memory addresses to the variables; this prepares for
the translation of the source code into machine code.
• Pass-2:
1. Generate object code by converting each symbolic op-code into
its respective numeric op-code.
2. Generate data for literals and look up the values of symbols.
3. In this scheme the source code is read twice; Pass-2 re-reads
it and translates it into object code.
First, we will take a small assembly language program to understand the
working of the two passes. Assembly language statement format:
[Label] [Opcode] [Operand]

Example: M ADD R1, ='3'

where M is a label, ADD is a symbolic opcode, R1 is a symbolic register
operand, and ='3' is a literal.

Assembly Program:
Label   Op-code   Operand    LC value (location counter)
JOHN    START     200
        MOVER     R1, ='3'   200
        MOVEM     R1, X      201
L1      MOVER     R2, ='2'   202
        LTORG                203
X       DS        1          204
        END                  205

Let's take a look at how this program works:

1. START: This directive starts the program at location 200, and its
label provides the name for the program (here, JOHN).
2. MOVER: It moves the content of literal(=’3′) into register operand
R1.
3. MOVEM: It moves the content of register into memory operand(X).
4. MOVER: It again moves the content of literal(=’2′) into register
operand R2 and its label is specified as L1.
5. LTORG: It assigns address to literals(current LC value).
6. DS(Data Space): It assigns a data space of 1 to Symbol X.
7. END: It finishes the program execution.
Working of Pass-1:
Pass-1 builds the symbol and literal tables with their addresses. Note:
literal addresses are assigned by LTORG or END.
Step-1: START 200
(No symbol or literal is found here, so both tables are still empty.)
Step-2: MOVER R1, ='3' 200
(='3' is a literal, so an entry is made in the literal table.)
Literal  Address
='3'     ---

Step-3: MOVEM R1, X 201

X is a symbol referred to prior to its declaration, so it is stored in
the symbol table with a blank address field.

Symbol  Address
X       ---

Step-4: L1 MOVER R2, ='2' 202

L1 is a label and ='2' is a literal, so store them in their respective
tables.

Symbol  Address
X       ---
L1      202

Literal  Address
='3'     ---
='2'     ---

Step-5: LTORG 203

Assign the address specified by the current LC value, i.e., 203, to the
first pending literal.

Literal  Address
='3'     203
='2'     ---

Step-6: X DS 1 204
This is a data declaration statement: X is assigned a data space of 1.
But X is a symbol that was referred to earlier, in step 3, and defined
here in step 6. This condition, where a variable is referred to prior to
its declaration, is called the forward reference problem, and it can be
solved by back-patching. So the assembler now assigns X the address
specified by the LC value of the current step.
Symbol Address

X 204

L1 202

Step-7: END 205

The program finishes, and the remaining literal gets the address
specified by the LC value of the END instruction. Here are the complete
symbol and literal tables made by pass 1 of the assembler.

Symbol  Address
X       204
L1      202

Literal  Address
='3'     203
='2'     205
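The Pass-1 walkthrough above can be sketched in a few lines of Python. This is an illustrative simplification, not a production assembler; in particular, it mimics this example's behaviour of assigning only the first pending literal at LTORG, and it assumes every register operand starts with "R":

```python
# The example program: (label, opcode, operand) triples.
program = [
    ("JOHN", "START", "200"),
    (None,   "MOVER", "R1, ='3'"),
    (None,   "MOVEM", "R1, X"),
    ("L1",   "MOVER", "R2, ='2'"),
    (None,   "LTORG", None),
    ("X",    "DS",    "1"),
    (None,   "END",   None),
]

def pass_one(lines):
    symtab, littab, pool = {}, {}, []
    lc = 0
    for label, op, operand in lines:
        if op == "START":
            lc = int(operand)            # set the initial location counter
            continue
        if label is not None:
            symtab[label] = lc           # define (or back-patch) the label
        for tok in (operand or "").split(","):
            tok = tok.strip()
            if tok.startswith("="):      # literal: address assigned later
                if tok not in littab:
                    littab[tok] = None
                    pool.append(tok)
            elif tok.isidentifier() and not tok.startswith("R"):
                symtab.setdefault(tok, None)  # forward reference: blank entry
        if op == "LTORG":
            littab[pool.pop(0)] = lc     # as in the walkthrough: first literal
            lc += 1
        elif op == "END":
            for lit in pool:             # remaining literals get addresses here
                littab[lit] = lc
                lc += 1
        else:
            lc += 1                      # each statement occupies one location
    return symtab, littab

symtab, littab = pass_one(program)
print(symtab)  # {'X': 204, 'L1': 202}
print(littab)  # {"='3'": 203, "='2'": 205}
```

Running it reproduces exactly the tables built in steps 1 through 7, including the back-patched address of X.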

Now tables generated by pass 1 along with their LC value will go to pass-2 of
assembler for further processing of pseudo-opcodes and machine op-codes.
Working of Pass-2:
Pass-2 of the assembler generates machine code by converting symbolic
machine op-codes into their respective bit configurations (machine-
understandable form). It stores all machine op-codes in the MOT
(machine op-code table) with the symbolic code, its length, and its bit
configuration. It also processes pseudo-ops and stores them in the POT
(pseudo-op table).
Various databases required by Pass-2:
1. MOT (machine op-code table)
2. POT (pseudo op-code table)
3. Base table (storing the value of the base register)
4. LC (location counter)
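Continuing the sketch, a minimal Pass-2 might look like this. The numeric op-codes in the MOT below are invented for illustration; a real table comes from the target machine's instruction set:

```python
# Hypothetical machine op-code table (numeric codes invented for illustration).
MOT = {"MOVER": "04", "MOVEM": "05"}
POT = {"START", "LTORG", "DS", "END"}  # pseudo-ops: no machine code emitted
REGS = {"R1": 1, "R2": 2}

# Tables as produced by Pass-1 for the example program.
symtab = {"X": 204, "L1": 202}
littab = {"='3'": 203, "='2'": 205}

def pass_two(lines):
    code = []
    for label, op, operand in lines:
        if op in POT:
            continue                     # handled via the POT, not translated
        reg, addr = (t.strip() for t in operand.split(","))
        # literals resolve through the literal table, symbols through symtab
        address = littab[addr] if addr.startswith("=") else symtab[addr]
        code.append((MOT[op], REGS[reg], address))  # (opcode, reg, address)
    return code

machine_stmts = [
    (None, "MOVER", "R1, ='3'"),
    (None, "MOVEM", "R1, X"),
    ("L1", "MOVER", "R2, ='2'"),
]
print(pass_two(machine_stmts))
# [('04', 1, 203), ('05', 1, 204), ('04', 2, 205)]
```

The key point is that by the time Pass-2 runs, every symbol and literal already has an address, so forward references pose no problem.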

A flowchart (not reproduced here) summarizes how the two passes fit
together and how the assembler works as a whole.

Software Testing Tools:


Software testing tools are the tools used for testing software. They are
often used to ensure firmness, thoroughness, and performance in testing
software products. Unit testing and subsequent integration testing can be
performed with software testing tools. These tools are used to fulfil all
the requirements of planned testing activities, and many are available as
commercial products. The quality of the software is evaluated by software
testers with the help of various testing tools.

Types of Testing Tools


Software testing is of two types: static testing and dynamic testing. The
tools used during testing are named accordingly.
Testing tools can be categorized into two types, as follows:
1. Static Test Tools: Static test tools are used to work on the static testing
processes. In the testing through these tools, the typical approach is taken.
These tools do not test the real execution of the software. Certain input and
output are not required in these tools. Static test tools consist of the following:
• Flow analyzers: Flow analyzers provide flexibility in the data flow
from input to output.
• Path tests: They find unused code and code with inconsistencies in
the software.
• Coverage analyzers: All logic paths in the software are checked by
the coverage analyzers.
• Interface Analyzers: They check out the consequences of passing
variables and data in the modules.
2. Dynamic Test Tools: Dynamic testing process is performed by the
dynamic test tools. These tools test the software with existing or current data.
Dynamic test tools comprise the following:
• Test driver: The test driver provides the input data to a module-
under-test (MUT).
• Test Beds: It displays source code along with the program under
execution at the same time.
• Emulators: Emulators provide the response facilities which are used
to imitate parts of the system not yet developed.
• Mutation Analyzers: They are used for testing the fault tolerance of
the system by knowingly providing the errors in the code of the
software.
There is one more categorization of software testing tools. According to this
classification, software testing tools are of 10 types:
1. Test Management Tools: Test management tools are used to store
information on how testing is to be done, help to plan test activities,
and report the status of quality assurance activities. For example,
JIRA, Redmine, Selenium, etc.
2. Automated Testing Tools: Automated testing tools help to conduct
testing activities without human intervention, with more accuracy and
in less time and effort. For example, Appium, Cucumber, Ranorex, etc.
3. Performance Testing Tools: Performance testing tools help to
perform performance testing effectively and efficiently; this is a type
of non-functional testing that checks the application for parameters
like stability, scalability, performance, speed, etc. For example,
WebLOAD, Apache JMeter, Neo Load, etc.
4. Cross-browser Testing Tools: Cross-browser testing tools help to
perform cross-browser testing, which lets the tester check whether the
website works as intended when accessed through different
browser-OS combinations. For example, Testsigma, Testim,
Perfecto, etc.
5. Integration Testing Tools: Integration testing tools are used to test
the interface between the modules and detect the bugs. The main
purpose here is to check whether the specific modules are working
as per the client’s needs or not. For example, Citrus, FitNesse,
TESSY, etc.
6. Unit Testing Tools: Unit testing tools are used to check the
functionality of individual modules and to make sure that all
independent modules work as expected. For example, Jenkins,
PHPUnit, JUnit, etc.
7. Mobile Testing Tools: Mobile testing tools are used to test the
application for compatibility on different mobile devices. For example,
Appium, Robotium, Test IO, etc.
8. GUI Testing Tools: GUI testing tools are used to test the graphical
user interface of the software. For example, EggPlant, Squish,
AutoIT, etc.
9. Bug Tracking Tools: Bug tracking tools help to keep track of various
bugs that come up during application lifecycle management. They
help to monitor and log all the bugs that are detected during
software testing. For example, Trello, JIRA, GitHub, etc.
10. Security Testing Tools: Security testing is used to detect
vulnerabilities and safeguard the application against malicious
attacks. For example, NetSparker, Vega, ImmuniWeb, etc.
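As a small illustration of what a unit testing tool (such as JUnit or PHPUnit) does, here is a minimal sketch using Python's built-in unittest framework as a stand-in; the `add` function is an invented module under test, not part of any real product:

```python
import unittest

# Invented module under test: a unit testing tool checks one
# independent module like this in isolation.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method checks one expected behavior of the module.
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_zero_sum(self):
        self.assertEqual(add(-1, 1), 0)

# Run the test cases programmatically and collect the outcome,
# the way a test runner reports pass/fail status to the tester.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Real tools add features on top of this core loop: test discovery, reporting dashboards, and integration with bug trackers.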

Line editor:
In computing, a line editor is a text editor in which each editing command
applies to one or more complete lines of text designated by the user. Line
editors predate screen-based text editors and originated in an era when a
computer operator typically interacted with a teleprinter (essentially
a printer with a keyboard), with no video display, and no ability to move a
cursor interactively within a document. Line editors were also a feature of
many home computers, avoiding the need for a more memory-intensive full-
screen editor.
Line editors are limited to typewriter keyboard text-oriented input and output
methods. Most edits are a line-at-a-time. Typing, editing, and document
display do not occur simultaneously. Typically, typing does not enter text
directly into the document. Instead, users modify the document text by
entering these commands on a text-only terminal. Commands and text, and
corresponding output from the editor, will scroll up from the bottom of the
screen in the order that they are entered or printed to the screen. Although the
commands typically indicate the line(s) they modify, displaying the edited text
within the context of larger portions of the document requires a separate
command.
Line editors keep a reference to the "current line" to which the entered
commands usually are applied. In contrast, modern screen based editors
allow the user to interactively and directly navigate, select, and modify
portions of the document. Generally line numbers or a search based context
(especially when making changes within lines) are used to specify which part
of the document is to be edited or displayed.
Line editors are still used non-interactively in shell scripts and when dealing
with failing operating systems. Update systems such as patch traditionally
used diff data converted into a script of ed commands. They are also used in
many MUD systems, though many people edit text on their own computer
using MUD's download and upload features.
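The line-at-a-time workflow described above can be sketched as a toy editor: every command applies to a whole line chosen by line number, with no cursor. The two-command subset and its syntax loosely imitate ed and are invented for illustration:

```python
# Toy line editor: commands address complete lines by 1-based line
# number, never by an on-screen cursor position.
def line_edit(lines, lineno, command):
    i = lineno - 1                          # convert to 0-based index
    if command.startswith("s/"):            # s/old/new/ -> substitute in line
        _, old, new, _ = command.split("/")
        lines[i] = lines[i].replace(old, new, 1)
    elif command == "d":                    # d -> delete the whole line
        del lines[i]
    return lines

buf = ["HELLO WORLD", "second line"]
line_edit(buf, 1, "s/WORLD/THERE/")         # edit addressed by line number
line_edit(buf, 2, "d")                      # delete line 2
```

Note that, just as in a real line editor, the user never sees the document while editing; displaying it would require a separate print command.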

Screen editors:
In this type of editor, the user can see the cursor on the screen and can
easily perform copy, cut, and paste operations, often using the mouse
pointer.

• A screen editor uses the "what you see is what you get" principle in editor
design.

• The editor displays a screenful of text at a time. The user can move the
cursor over the screen, position it at the point where they wish to perform
some editing, and proceed with the editing directly.

• The user has full control over the entire terminal. For example, the user can
type over an existing string they wish to replace, or bring the cursor over a
character to be deleted and press the delete key.

• It is possible to see the effect of an edit operation on the screen
immediately. Examples: vi, emacs, Notepad.

Debug Monitor:

A debug monitor is a powerful graphical- and/or console-mode tool for
monitoring all activities handled by the target system (which can be a kernel,
the source code of your app, any electrical equipment connected to your
device, etc.). You can use this tool to monitor how each command is
executed in the kernel.

It provides details regarding the target system's execution, and in case of any
error this information is useful for developers to fix the issue. Some debug
monitors record the error event and reproduce it based on the developer's
need.
UNIT- III
Components of System Programming

System programming involves developing software that interfaces with the


underlying hardware and operating system. It typically involves low-level
programming languages and involves the following components:

1. System libraries: These are pre-written pieces of code that


programmers can use to interact with the operating system and
hardware. They provide a set of functions and routines that can
be called from within a program to access system resources.
2. Device drivers: These are specialized programs that enable the
operating system to communicate with hardware devices such
as printers, scanners, and network cards. Device drivers
translate requests from the operating system into commands
that the hardware can understand and execute.
3. Assemblers and compilers: These are software tools used to
convert human-readable programming code into machine-
readable binary code that can be executed by the computer's
CPU. Assemblers and compilers are essential for creating low-
level system programs that interact with the hardware and
operating system.
4. Debuggers and profilers: These are tools used by programmers
to diagnose and debug system software. Debuggers allow
programmers to step through code line by line, examine
memory contents, and identify errors. Profilers help identify
performance bottlenecks and optimize system software.
5. Linkers: Combine multiple object files into executable
programs, resolving references between code segments. It is
the process of combining various pieces of code and data
together to form a single executable file that can be reloaded
into the memory.
6. Loaders: Load executable programs into memory for execution,
managing memory allocation and program execution. Loader is
an operating system utility that copies the programs on hard
disk to main memory (RAM).
7. System calls: These are functions provided by the operating
system that allow programmers to request services from the
operating system, such as opening a file, creating a process, or
allocating memory. System calls are typically used to interact
with the operating system and perform low-level tasks.
8. Operating system APIs: These are interfaces provided by the
operating system that allow programmers to interact with
higher-level system functions, such as networking, file
management, and process management.
9. DBMS: Database Management System is a software for
creating and managing databases. The DBMS provides users
and programmers with a systematic way to create, retrieve,
update and manage data.
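The system-call component above (item 7) can be illustrated with Python's os module, whose os.open, os.write, os.read, and os.close functions are thin wrappers over the corresponding kernel system calls; the file path used here is a throwaway temporary file created for the sketch:

```python
import os
import tempfile

# Requesting OS services through system-call wrappers: open a file,
# write to it, read it back, and close it.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # open/create -> descriptor
os.write(fd, b"hello via system calls")        # write through the kernel
os.close(fd)                                   # release the descriptor

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                        # read it back
os.close(fd)
```

Higher-level APIs (such as Python's built-in `open`) ultimately funnel down to the same small set of kernel calls.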

Compiler Pass:
A compiler pass refers to one traversal of the compiler through the entire
program. Compiler passes are of two types: single-pass compilers, and two-
pass or multi-pass compilers. These are explained as follows.
Types of Compiler Pass
1. Single Pass Compiler
If all the phases of compiler design are combined or grouped into a single
module, it is known as a single-pass compiler.

Single Pass Compiler


In the above diagram, all 6 phases are grouped in a single module. Some
points about the single-pass compiler are:
• A one-pass/single-pass compiler is a type of compiler that passes
through the part of each compilation unit exactly once.
• Single pass compiler is faster and smaller than the multi-pass
compiler.
• A disadvantage of a single-pass compiler is that it produces less
efficient (less optimized) code in comparison with a multi-pass
compiler.
• A single-pass compiler is one that processes the input exactly once,
going directly from lexical analysis to code generation, and then
going back for the next read.
Note: Pure single-pass compilation is rarely done today; early Pascal
compilers did work this way.
Problems with Single Pass Compiler
• We cannot optimize very well because the context available for an
expression is limited.
• Since we cannot back up and process the input again, the grammar
must be limited or simplified.
• Command interpreters such as bash/sh/tcsh can be considered
single-pass translators, but they also execute entries as soon as
they are processed.
2. Two-Pass compiler or Multi-Pass compiler
A two-pass/multi-pass compiler is a type of compiler that processes
the source code or abstract syntax tree of a program multiple times. In a
multi-pass compiler, we divide the phases into two passes.
The first pass is referred to as the:
• Front end
• Analytic part
• Platform-independent part

The second pass is referred to as the:

• Back end
• Synthesis part
• Platform-dependent part
Problems that can be Solved With Multi-Pass Compiler
First: Suppose we want to design compilers for different programming
languages for the same machine. In this case, for each programming
language there is a requirement to build a separate front end/first pass,
while only one back end/second pass is needed.

Second: Suppose we want to design compilers for the same programming
language for different machines/systems. In this case, we build a different
back end for each machine/system and only one front end for the
programming language.
Difference between One Pass and Two Pass Compiler
• One pass performs translation in one pass; two pass performs translation
in two passes.
• One pass scans the entire source file only once; two pass requires two
passes to scan the source file.
• One pass does not generate intermediate code; two pass generates
intermediate code.
• One pass is faster than a two-pass translator; two pass is slower.
• One pass requires no loader and writes no object program; two pass
requires a loader, as object code is generated.
• One pass performs some processing of assembler directives; two pass
performs, in its second pass, the processing of assembler directives not
done in pass 1.
• One pass uses the symbol table, literal table, pool table, and table of
incomplete instructions as data structures; two pass uses the symbol
table, literal table, and pool table.
• One pass performs the whole conversion of assembly code to machine
code in one go; two pass first processes the code and stores values in the
opcode table and symbol table, and then in the second step generates the
machine code using these tables.
• Example: C and Pascal compilers are typically one pass; Modula-2 uses a
multi-pass compiler.
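The two-pass workflow in the comparison above can be sketched for an invented toy assembly language: the first pass assigns addresses and fills the symbol table, and the second pass resolves label operands using that table. The instruction format here is made up purely for illustration:

```python
# Toy two-pass assembler sketch (invented instruction set): labels end
# with ":", every instruction occupies one word of memory.
def two_pass_assemble(source):
    symtab, addr, instrs = {}, 0, []
    for line in source:                      # ---- pass 1 ----
        if line.endswith(":"):               # label definition
            symtab[line[:-1]] = addr         # record its address
        else:
            instrs.append(line)
            addr += 1                        # one word per instruction
    code = []
    for line in instrs:                      # ---- pass 2 ----
        op, *ops = line.split()
        # replace label operands by the addresses found in pass 1
        code.append((op, *[symtab.get(o, o) for o in ops]))
    return symtab, code

symtab, code = two_pass_assemble(["start:", "LOAD x", "JMP start", "x:"])
```

A one-pass translator could not do this directly: the forward reference to `x` in `LOAD x` is only resolvable after the whole file has been scanned once.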
Compile and go loader:
In this scheme, the memory layout is as follows: an assembler is present in
memory, and it is always there whenever we use the compile-and-go loading
scheme. In another part of memory there are the assembled machine
instructions, i.e. the assembled source program. The assembled machine
instructions are placed directly into their assigned memory locations.
Working:
In this scheme, the source code goes into the translator line by line, and then
each line of code is loaded into memory; in other words, chunks of
source code go directly into execution. Since the code goes to the translator
line by line, no proper object code is produced. Because of this, if the user
runs the same source program again, every line of code is translated again
by the translator, so re-translation happens.

Compile and go loader scheme

The source program goes through the translator (compiler/assembler),
which consumes one part of the memory, and the second part of the memory
is consumed by the assembled program. Once translated, the program no
longer needs the assembler, but the assembler is still resident, so this is a
waste of memory.
Advantages:
1. It is very simple to implement.
2. The translator is enough to do the task, no subroutines are needed.
3. It is the most simple scheme of the functions of the loader.
4. Improved performance: The use of a compiler and loader can result
in faster and more efficient code execution. This is because the
compiler can optimize the code during the compilation process, and
the loader can perform certain memory-related optimizations during
program loading.
5. Portability: A compiler and loader can help make software more
portable by allowing the same source code to be compiled and
loaded on different hardware platforms and operating systems.
6. Security: The loader can perform various security checks during
program loading to ensure that the program does not have any
malicious code. This can help prevent security vulnerabilities and
protect the user’s data.
7. Ease of use: Compilers and loaders can automate many aspects of
the software development process, making it easier and faster to
develop, test, and deploy software.
8. Flexibility: The use of a compiler and loader can provide developers
with greater flexibility in terms of the programming languages and
tools they can use.
Disadvantages:
1. There is no use of the assembler but it is still there so a wastage of
memory takes place.
2. When source code runs multiple times the translation is also done
every time. so re-translation is happening.
3. Difficult to produce an orderly modular program
4. Difficult to handle multiple segments like if the source program is in
a different language. eg. one subroutine is assembly language &
another subroutine is FORTRAN.
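The re-translation cost in disadvantage 2 can be made concrete with a small sketch; the "translator" here is just a counter standing in for real translation work, and all names are invented:

```python
# Compile-and-go sketch: the translator stays resident and each source
# line is translated immediately before execution; no object program is
# written out, so running the same program twice translates it twice.
translation_count = 0

def translate(line):
    global translation_count
    translation_count += 1        # count the (repeated) translation work
    return ("EXEC", line)         # invented "translated" form

def compile_and_go(source):
    for line in source:
        instr = translate(line)   # translate one line...
        # ...then execute it immediately (execution elided in this sketch)

program = ["a = 1", "b = a + 1"]
compile_and_go(program)
compile_and_go(program)           # second run: every line translated again
```

A scheme that saved object code would pay the translation cost once; here it is paid on every run.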

Absolute Loader:

The absolute loader transfers the text of the program into memory at the
addresses provided by the assembler, reading the object program line by
line. There are two types of information that the object program must
communicate from the assembler to the loader:
• It must convey the machine instructions that the assembler has created,
along with their memory addresses.
• It must convey the start of execution: the point at which the program will
begin to run after it has been loaded.
The object program is a sequence of object records. Each object
record specifies some specific aspect of the program in the object module.
There are two types of records:
• A text record containing a binary image of the assembled program.
• A transfer record that contains the execution's starting (entry) point.
The formats of text and transfer records are shown below:
Algorithm:

The algorithm for the absolute loader is quite simple. The object file is read
record by record by the loader, and the binary image is moved to the
locations specified in the record. The final record is a transfer record. When
the control reaches the transfer record, it is transferred to the entry point for
execution.
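The algorithm can be sketched in Python using an invented record format ("T" text records carrying an absolute address plus bytes, "E" carrying the entry point); a real absolute loader reads binary object records, but the control flow is the same:

```python
# Absolute loader sketch: copy each text record's binary image to its
# assigned address, then report where execution should begin.
def absolute_load(records, memory):
    entry = None
    for rec in records:
        if rec[0] == "T":                         # ("T", address, [bytes...])
            _, addr, data = rec
            memory[addr:addr + len(data)] = data  # place image in memory
        elif rec[0] == "E":                       # ("E", entry_address)
            entry = rec[1]                        # transfer record
    return entry                                  # control transfers here

memory = [0] * 16
entry = absolute_load([("T", 4, [0x3E, 0x05]), ("E", 4)], memory)
```

Note that no relocation or linking is performed: the addresses in the records are used exactly as the assembler assigned them.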

Flowchart:
Subroutine Linkage:
A set of instructions that is used repeatedly in a program can be referred
to as a subroutine. Only one copy of these instructions is stored in memory.
When a subroutine is required, it can be called many times during the
execution of a particular program. A call-subroutine instruction calls the
subroutine. Care should be taken while returning from a subroutine, as a
subroutine can be called from different places in memory.
The content of the PC must be saved by the call-subroutine instruction to
make a correct return to the calling program.

Process of a subroutine in a program

The subroutine linkage method is the way in which a computer calls and
returns from a subroutine. The simplest form of subroutine linkage is saving
the return address in a specific location, such as a register; such a register
is called a link register.

Advantages of Subroutines

• Code reuse: Subroutines can be reused in multiple parts of a


program, which can save time and reduce the amount of code that
needs to be written.
• Modularity: Subroutines help to break complex programs into
smaller, more manageable parts, making them easier to
understand, maintain, and modify.
• Encapsulation: Subroutines provide a way to encapsulate
functionality, hiding the implementation details from other parts of
the program.

Disadvantages of Subroutines

• Overhead: Calling a subroutine can incur some overhead, such as


the time and memory required to push and pop data on the stack.
• Complexity: Subroutine nesting can make programs more complex
and difficult to understand, particularly if the nesting is deep or the
control flow is complicated.
• Side Effects: Subroutines can have unintended side effects, such
as modifying global variables or changing the state of the program,
which can make debugging and testing more difficult.
What is Subroutine Nesting?
Subroutine nesting is a common Programming practice In which one
Subroutine calls another Subroutine.

A Subroutine calling another subroutine

From the above figure, assume that Subroutine 1 calls Subroutine 2; the
return address for Subroutine 2 must be saved somewhere. If the
link register stores the return address of Subroutine 1, this will be
destroyed/overwritten by the return address of Subroutine 2. Since the last
subroutine called is the first one to be returned from (last in, first out), a
stack is the most efficient data structure to store the return addresses of
the subroutines.

The Return address of the subroutine is stored in stack memory
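The figure's point, that a single link register fails under nesting while a stack preserves both return addresses, can be sketched as follows (the addresses 100 and 200 are invented):

```python
# A single link register has one slot: the nested call overwrites it.
link_register = None

def call(return_address):
    global link_register
    link_register = return_address     # old value is lost on a nested call

call(100)                              # Subroutine 1 called from address 100
call(200)                              # nested call to Subroutine 2 clobbers 100

# A stack keeps both addresses and returns them last-in, first-out.
stack = []
stack.append(100)                      # push return address for Subroutine 1
stack.append(200)                      # push return address for Subroutine 2
first_back = stack.pop()               # 200: last called, first returned
second_back = stack.pop()              # 100 survived intact
```

This is exactly why processors implement subroutine call/return with a hardware or memory stack rather than a single register once nesting is allowed.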


Direct linking loader:

Introduction: The direct linking loader is the most common type of loader. This
type of loader is a relocatable loader. The loader does not have direct
access to the source code. To place the object code in memory there
are two situations: either the address of the object code is absolute,
in which case it can be placed directly at the specified location, or the address
is relative. If the address is relative, it is the assembler that informs
the loader about the relative addresses.

The assembler should give the following information to the loader:


1. The length of the object code segment
2. A list of external symbols (could be used by other segments)
3. List of External symbols (The segment is using)
4. Information about address constants
5. Machine code translation of the source program

The list of symbols which are not defined in the current segment but can be
used in the current segment are stored in a data structure called USE table.
The USE table holds the information such as name of the symbol, address,
and address relativity.

The lists of symbols which are defined in the current segment and can be
referred by the other segments are stored in a data structure called
DEFINITION table. The definition table holds the information such as symbol,
address.

The assembler generates following types of cards:

1. ESD - The external symbol dictionary contains information about all

symbols that are defined in this program or referenced in it. It contains

· Symbol Name

· TYPE

· Relative Location

· Length

· Reference no

2. TXT - The text card contains the actual object code.

3. RLD - The relocation and linkage directory contains information about the
address-dependent instructions of a program. The RLD cards contain the
following information:
· Location of the constant that needs relocation

· By what it has to be changed

· The operation to be performed

The Format of RLD

· Reference No

· Symbol

· Flag

· Length

· Relative Location

4. END - Indicates the end of the program and specifies the starting address
for execution.

Advantages: The main task of the loader is to load a source program into
memory and prepare it for execution. In pass I, the direct linking loader
allocates segments and defines symbols. Each segment is assigned to the
next available location after the preceding one, in order to minimize the
amount of storage required for the total program. Hence pass I deals only
with allocating segments and defining symbols.

Therefore pass I of the Direct Linking Loader (DLL) has limited scope and
mainly deals with the allocation of segments and the definition of symbols.
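Pass I's allocation of segments and definition of symbols can be sketched with invented segment data (names, lengths, and symbols are made up for illustration):

```python
# Two invented object segments: each declares its length, the symbols it
# defines (DEF table entries, as segment-relative addresses), and the
# external symbols it uses (USE table entries).
segments = [
    {"name": "MAIN", "length": 40, "defines": {"MAIN": 0}, "uses": ["SQRT"]},
    {"name": "SQRT", "length": 20, "defines": {"SQRT": 0}, "uses": []},
]

def pass_one(segments):
    """Pass I: allocate each segment at the next available location and
    record every defined symbol's absolute address."""
    deftab, base = {}, 0
    for seg in segments:
        for sym, rel in seg["defines"].items():
            deftab[sym] = base + rel     # relative address -> absolute
        base += seg["length"]            # next segment follows immediately
    return deftab

deftab = pass_one(segments)
# Pass II would walk each segment's USE table and patch references
# (e.g. MAIN's use of SQRT) with the addresses recorded in deftab.
```

Placing each segment immediately after the previous one is what minimizes the total storage the loaded program occupies.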
Types of Operating Systems:
There are several types of Operating Systems which are mentioned below.
• Batch Operating System
• Multi-Programming System
• Multi-Processing System
• Multi-Tasking Operating System
• Time-Sharing Operating System
• Distributed Operating System
• Network Operating System
• Real-Time Operating System
1. Batch Operating System
This type of operating system does not interact with the computer directly.
There is an operator which takes similar jobs having the same requirement
and groups them into batches. It is the responsibility of the operator to sort
jobs with similar needs.
Examples of Batch Operating Systems: Payroll Systems, Bank
Statements, etc.

2. Multi-Programming Operating System


In a multiprogramming operating system, more than one program is present
in main memory, and any one of them can be kept in execution. This is
basically used for better utilization of resources.

3. Multi-Processing Operating System


A multi-processing operating system is a type of operating system in which
more than one CPU is used for the execution of processes. It improves the
throughput of the system.

4. Multi-Tasking Operating System


A multitasking operating system is simply a multiprogramming operating
system with the added facility of a round-robin scheduling algorithm. It can
run multiple programs simultaneously.
There are two types of Multi-Tasking Systems which are listed below.
• Preemptive Multi-Tasking
• Cooperative Multi-Tasking

5. Time-Sharing Operating Systems


Each task is given some time to execute so that all the tasks work smoothly.
Each user gets the time of the CPU as they use a single system. These
systems are also known as Multitasking Systems. The task can be from a
single user or different users also. The time that each task gets to execute is
called quantum. After this time interval is over OS switches over to the next
task.
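The quantum-based switching described above can be sketched as a round-robin loop; the task names and burst times are invented for illustration:

```python
from collections import deque

# Time-sharing sketch: each task holds the CPU for at most one quantum,
# then the OS switches to the next task in the ready queue.
def round_robin(bursts, quantum):
    ready = deque(bursts)                    # (name, remaining time) pairs
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)                # task gets the CPU...
        remaining -= quantum                 # ...for one quantum at most
        if remaining > 0:
            ready.append((name, remaining))  # unfinished: back of the queue
    return schedule

schedule = round_robin([("A", 2), ("B", 1)], quantum=1)
```

With a quantum of 1, task A needs two turns on the CPU while B finishes in one, so the CPU alternates between users rather than serving them strictly one after another.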
Examples of Time-Sharing OS with explanation
• IBM VM/CMS: IBM VM/CMS is a time-sharing operating system
that was first introduced in 1972. It is still in use today, providing a
virtual machine environment that allows multiple users to run their
own instances of operating systems and applications.
• TSO (Time Sharing Option): TSO is a time-sharing operating
system that was first introduced in the 1960s by IBM for the IBM
System/360 mainframe computer. It allowed multiple users to
access the same computer simultaneously, running their own
applications.
• Windows Terminal Services: Windows Terminal Services is a
time-sharing operating system that allows multiple users to access
a Windows server remotely. Users can run their own applications
and access shared resources, such as printers and network
storage, in real-time.

6. Distributed Operating System


These types of operating systems are a recent advancement in the world of
computer technology and are being widely accepted all over the world at a
great pace. Various autonomous interconnected computers
communicate with each other using a shared communication network.
Independent systems possess their own memory unit and CPU. These are
referred to as loosely coupled systems or distributed systems. These
systems’ processors differ in size and function. The major benefit of working
with these types of the operating system is that it is always possible that one
user can access the files or software which are not actually present on his
system but some other system connected within this network i.e., remote
access is enabled within the devices connected in that network.
Examples of Distributed Operating Systems are LOCUS, etc.

7. Network Operating System


These systems run on a server and provide the capability to manage data,
users, groups, security, applications, and other networking functions. These
types of operating systems allow shared access to files, printers, security,
applications, and other networking functions over a small private network.
One more important aspect of Network Operating Systems is that all the
users are well aware of the underlying configuration, of all other users within
the network, their individual connections, etc. and that’s why these
computers are popularly known as tightly coupled systems.

Examples of Network Operating Systems are Microsoft Windows Server


2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell
NetWare, BSD, etc.

8. Real-Time Operating System


These types of OSs serve real-time systems. The time interval required to
process and respond to inputs is very small. This time interval is
called response time.
Examples of Real-Time Operating Systems are Scientific experiments,
medical imaging systems, industrial control systems, weapon systems,
robots, air traffic control systems, etc.

Functions of Operating Systems:

1.Security
To safeguard user data, the operating system employs password protection
and other related measures. It also protects programs and user data from
illegal access.

2.Control over System Performance


The operating system monitors the overall health of the system in order to
optimise performance. It keeps track of the time between service requests
and system responses to get a thorough picture of the system's health. This
can aid performance by providing critical information for troubleshooting
issues.

3.Job Accounting
The operating system maintains track of how much time and resources are
consumed by different tasks and users, and this data can be used to
measure resource utilisation for a specific user or group of users.

4.Error Detecting Aids


The OS constantly monitors the system in order to discover faults and
prevent a computer system from failing.

5.Coordination between Users and Other Software


Operating systems also organise and assign interpreters, compilers,
assemblers, as well as other software to computer users.

6.Memory Management
The operating system is in charge of managing the primary memory, often
known as the main memory. The main memory consists of a vast array of
bytes or words, each of which is allocated an address. Main memory is rapid
storage that the CPU can access directly. A program must first be loaded
into the main memory before it can be executed. For memory management,
the OS performs the following tasks:
• The OS keeps track of primary memory – meaning, which user
program can use which bytes of memory, memory addresses that
have already been assigned, as well as memory addresses yet to be
used.
• The OS determines the order in which processes would be permitted
memory access and for how long in multiprogramming.
• It allocates memory to the process when the process asks for it and
deallocates memory when the process exits or performs an I/O activity.
7.Process Management
The operating system determines which processes have access to the
processor and how much processing time every process has in a
multiprogramming environment. Process scheduling is the name for this
feature of the operating system. For processor management, the OS
performs the following tasks:
• It keeps track of how processes are progressing. A program called
the traffic controller accomplishes this duty.
• It allocates the CPU to a process, and deallocates the processor
when a process no longer needs it.

8.File Management
A file system is divided into directories to make navigation and usage more
efficient. Other directories and files may be found in these directories. The
file management tasks performed by an operating system are: it keeps track
of where data is kept, user access settings, and the state of each file, among
other things. The file system is the name given to all of these features.
UNIT- IV

Architecture of 8085 microprocessor


Introduction :

The 8085 microprocessor is an 8-bit microprocessor that was developed


by Intel in the mid-1970s. It was widely used in the early days of personal
computing and was a popular choice for hobbyists and enthusiasts due to its
simplicity and ease of use. The architecture of the 8085 microprocessor
consists of several key components, including the accumulator, registers,
program counter, stack pointer, instruction register, flags register, data bus,
address bus, and control bus.

The accumulator is an 8-bit register that is used to store arithmetic and logical
results. It is the most commonly used register in the 8085 microprocessor and
is used to perform arithmetic and logical operations such as addition,
subtraction, and bitwise operations.

The 8085 microprocessor has six general-purpose registers, including B, C,


D, E, H, and L, which can be combined to form 16-bit register pairs. The B and
C registers can be combined to form the BC register pair, the D and E
registers can be combined to form the DE register pair, and the H and L
registers can be combined to form the HL register pair. These register pairs
are commonly used to store memory addresses and other data.

The program counter is a 16-bit register that contains the memory address of
the next instruction to be executed. The program counter is incremented after
each instruction is executed, which allows the microprocessor to execute
instructions in sequence.

The stack pointer is a 16-bit register that is used to manage the stack. The
stack is a section of memory that is used to store data temporarily, such as
subroutine addresses and other data. The stack pointer is used to keep track
of the top of the stack.

The instruction register is an 8-bit register that contains the current instruction
being executed. The instruction register is used by the microprocessor to
decode and execute instructions.

The flags register is an 8-bit register that contains status flags that indicate
the result of an arithmetic or logical operation. These flags include the carry
flag, zero flag, sign flag, and parity flag. The carry flag is set when an
arithmetic operation generates a carry, the zero flag is set when the result of
an arithmetic or logical operation is zero, the sign flag is set when the result of
an arithmetic or logical operation is negative, and the parity flag is set when
the result of an arithmetic or logical operation has an even number of 1 bits.
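The flag behaviour just described can be sketched for 8-bit addition; this is an illustrative model, not 8085 code, and the auxiliary carry flag is omitted for brevity:

```python
# Sketch: how 8-bit addition sets the 8085 status flags
# (CY = carry, Z = zero, S = sign, P = parity).
def add8(a, b):
    total = a + b
    result = total & 0xFF                        # keep only 8 bits
    flags = {
        "CY": total > 0xFF,                      # carry out of bit 7
        "Z": result == 0,                        # result is zero
        "S": (result & 0x80) != 0,               # bit 7 is the sign bit
        "P": bin(result).count("1") % 2 == 0,    # even number of 1 bits
    }
    return result, flags

result, flags = add8(0xFF, 0x01)                 # 255 + 1 overflows 8 bits
```

Here the sum wraps to 0, so the carry and zero flags are set while the sign flag is not, matching the definitions above.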

The data bus is an 8-bit bus that is used to transfer data between the
microprocessor and memory or other devices. The data bus is bidirectional,
which means that it can be used to read data from memory or write data to
memory.

The address bus is a 16-bit bus that is used to address memory and other
devices. The address bus is used to select the memory location or device that
the microprocessor wants to access.

The control bus is a set of signals that controls the operations of the
microprocessor, including the read and write operations. The control bus
includes signals such as the read signal, write signal, interrupt signal, and
reset signal. The read signal is used to read data from memory or other
devices, the write signal is used to write data to memory or other devices, the
interrupt signal is used to signal the microprocessor that an interrupt has
occurred, and the reset signal is used to reset the microprocessor to its initial
state.

The 8085 is an 8-bit, general-purpose microprocessor. It consists of the
following functional units:
Arithmetic and Logic Unit (ALU) :

It performs arithmetic operations such as addition, subtraction, increment, and
decrement (the 8085 has no hardware multiply or divide instructions), as well
as logical operations and bit-shifting operations.

Flag Register:

It is an 8-bit register whose individual bits are set or reset according to the
result produced by the ALU. Of its 8 bits, 5 carry meaningful flags and the
remaining 3 are don't-care bits. The flag register is updated dynamically:
after each operation it records whether the result is zero, positive or
negative, whether a carry out of 8 bits occurred, and the parity of the
result; the carry flag is also consulted when comparing two 8-bit numbers. The
flag register is therefore a status register, used to check the status of the
operation most recently carried out by the ALU.

Different Fields of Flag Register:

1. Carry Flag

2. Parity Flag

3. Auxiliary Carry Flag

4. Zero Flag

5. Sign Flag

Accumulator:

The accumulator is an 8-bit register used to perform I/O, arithmetic, and
logical operations. It is connected to the ALU and the internal data bus. The
accumulator is often called the heart of the microprocessor because one
operand of every arithmetic and logical operation comes from it, and in most
instructions the result is stored back into the accumulator after the
operation.

General Purpose Registers:


There are six 8-bit general-purpose registers: B, C, D, E, H, and L. They work
as 16-bit registers when used in pairs (B-C, D-E, and H-L). Two additional
registers, W and Z, are reserved for the microprocessor's internal operations
and cannot be used by the programmer in arithmetic operations. They act as
temporary registers during instruction execution, for example holding one
16-bit value while two register pairs are being exchanged (just as swapping
two variables requires a third, temporary variable).
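The role of the internal temporary pair in an exchange can be sketched in Python. The dictionary and function names here are illustrative models, not real hardware registers:

```python
def xchg(regs):
    """Exchange two 16-bit register pairs through an internal temporary
    pair (modelled as 'WZ'), mimicking how a swap needs a third value."""
    regs["WZ"] = regs["HL"]   # save HL in the temporary pair
    regs["HL"] = regs["DE"]   # move DE into HL
    regs["DE"] = regs["WZ"]   # move the saved value into DE
    return regs

# Registers modelled as a simple dictionary; WZ is the hidden temp pair.
regs = {"HL": 0x2050, "DE": 0x3070, "WZ": 0x0000}
xchg(regs)
print(hex(regs["HL"]), hex(regs["DE"]))  # -> 0x3070 0x2050
```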

Program Counter :

The Program Counter is a 16-bit register that holds the memory address of the
next instruction to be executed.

For example, suppose the current value of the Program Counter is [PC] = 4000H.
This means the next instruction to execute is at location 4000H. After each
byte is fetched, the Program Counter increments by 1, so that it always points
to the next byte to be fetched.

Stack Pointer :

The stack pointer is a 16-bit special-purpose register that holds the address
of the most recent stack entry, i.e. the top of the stack. The stack is a
contiguous region of memory used in last-in, first-out fashion to hold values
(such as return addresses) for later use, unlike the random memory access
driven by the Program Counter. The stack pointer is central to stack
operations such as PUSH, POP, and the nested CALL/RET sequences executed by
the microprocessor.

Temporary Register:

It is an 8-bit register that holds data values during arithmetic and logical
operations.

Instruction register and decoder:

It is an 8-bit register that holds the instruction code that is being decoded. The
instruction is fetched from the memory.

Timing and control unit:


The timing and control unit comes under the CPU section, and it controls the
flow of data from the CPU to other devices. It is also used to control the
operations performed by the microprocessor and the devices connected to it.
There are certain timing and control signals like Control signals, DMA Signals,
RESET signals and Status signals.

Interrupt control:

Whenever the microprocessor is executing the main program and an interrupt
occurs, it transfers control from the main program to service the incoming
request. After the request has been serviced, control returns to the main
program. There are 5 interrupt signals in the 8085 microprocessor: INTR,
TRAP, RST 7.5, RST 6.5, and RST 5.5.

Priorities of Interrupts: TRAP > RST 7.5 > RST 6.5 > RST 5.5 > INTR
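The priority order above can be expressed as a small Python sketch that picks which pending interrupt would be serviced first (the function name is illustrative; masking via SIM is ignored here):

```python
# Interrupt priority order in the 8085, highest first.
PRIORITY = ["TRAP", "RST 7.5", "RST 6.5", "RST 5.5", "INTR"]

def highest_pending(pending):
    """Return the pending interrupt the 8085 would service first,
    or None if nothing is pending."""
    for name in PRIORITY:
        if name in pending:
            return name
    return None

# If RST 6.5 and INTR are both pending, RST 6.5 wins.
print(highest_pending({"INTR", "RST 6.5"}))  # -> RST 6.5
```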

Address bus and data bus:

The data bus is bidirectional and carries the data which is to be stored. The
address bus is unidirectional and carries the location where data is to be
stored.

In the 8085 microprocessor, the address bus and data bus are two separate
buses that are used for communication between the microprocessor and
external devices.

The Address bus is used to transfer the memory address of the data that
needs to be read or written. The address bus is a 16-bit bus, allowing the
8085 to access up to 65,536 memory locations.

The Data bus is used to transfer data between the microprocessor and
external devices such as memory and I/O devices. The data bus is an 8-bit
bus, allowing the 8085 to transfer 8-bit data at a time. The data bus can also
be used for instruction fetch operations, where the microprocessor fetches the
instruction code from memory and decodes it.

The combination of the address bus and data bus allows the 8085 to
communicate with and control external devices, allowing it to execute its
program and perform various operations.

Serial Input/output control:

It controls the serial data communication by using Serial input data and Serial
output data.
Serial Input/Output control in the 8085 microprocessor refers to the
communication of data between the microprocessor and external devices in a
serial manner, i.e., one bit at a time. The 8085 provides two pins for this:
SID (Serial Input Data) for serial input and SOD (Serial Output Data) for
serial output. Serial transfer is controlled in software: the SIM instruction
writes the SOD bit and the RIM instruction reads the SID bit, with the timing
of each bit managed by the program rather than by dedicated serial hardware.

The flow of an Instruction Cycle in 8085 Architecture :

1. Execution starts with the Program Counter: the microprocessor fetches an
instruction from the memory location pointed to by the Program Counter.

2. For address fetching from memory, the multiplexed address/data bus
(AD0-AD7) first acts as an address bus; after the address has been latched,
the same pins act as a data bus, carrying the byte from the addressed memory
location onto the 8-bit internal bus. The Address Latch Enable (ALE) pin
distinguishes the two uses: when ALE = 1 the multiplexed bus carries the
address, and when ALE = 0 it carries data.

3. The fetched byte goes into the Instruction Register, which holds it while
the instruction decoder decodes it.

4. The timing and control circuit then sends control signals throughout the
microprocessor, indicating whether the given instruction is a READ or WRITE
and whether it targets MEMORY or an I/O device.

5. According to these timing and control signals, data is fetched from the
appropriate registers and the logical or arithmetic operation is carried out
by the ALU. The Flag register changes dynamically according to the result.

6. With the help of the serial I/O data pins (SID and SOD), the
microprocessor can receive input from or send output to external devices. In
this way the execution cycle is carried out.

7. If an interrupt is detected while execution is in progress, the
microprocessor suspends the current process and invokes the Interrupt Service
Routine (ISR). Once the ISR for the interrupt has finished, normal execution
resumes.
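The ALE-based demultiplexing described in step 2 can be modelled with a short Python sketch. The function and variable names are illustrative; a real design uses an external latch chip clocked by ALE:

```python
def bus_cycle(high_addr, low_addr, data):
    """Model one 8085 bus cycle: latch the low address byte while ALE is
    high, then reuse the same AD0-AD7 pins to carry data."""
    # T1: ALE = 1, the multiplexed bus carries the low address byte
    ale, ad_bus = 1, low_addr
    latched_low = ad_bus if ale else None   # external latch captures it
    address = (high_addr << 8) | latched_low
    # T2 onward: ALE = 0, the same pins now carry data
    ale, ad_bus = 0, data
    return address, ad_bus

addr, byte = bus_cycle(0x20, 0x05, 0x3E)
print(hex(addr), hex(byte))  # -> 0x2005 0x3e
```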

Uses of 8085 microprocessor :

The 8085 microprocessor is a versatile 8-bit microprocessor that has been
used in a wide variety of applications, including:

1. Embedded Systems: The 8085 microprocessor is commonly used in embedded
systems, such as industrial control systems, automotive electronics, and
medical equipment.

2. Computer Peripherals: The 8085 microprocessor has been used in a variety
of computer peripherals, such as printers, scanners, and disk drives.

3. Communication Systems: The 8085 microprocessor has been used in
communication systems, such as modems and network interface cards.

4. Instrumentation and Control Systems: The 8085 microprocessor is commonly
used in instrumentation and control systems, such as temperature and pressure
controllers.

5. Home Appliances: The 8085 microprocessor is used in various home
appliances, such as washing machines, refrigerators, and microwave ovens.

6. Educational Purposes: The 8085 microprocessor is also used for educational
purposes, as it is an inexpensive and easily accessible microprocessor that
is widely used in universities and technical schools.

Issues in 8085 microprocessor :

Here are some common issues with the 8085 microprocessor:

1. Overheating: The 8085 microprocessor can overheat if it is used for
extended periods or if it is not cooled properly. Overheating can cause the
microprocessor to malfunction or fail.

2. Power Supply Issues: The 8085 microprocessor requires a stable power
supply for proper operation. Power supply issues such as voltage
fluctuations, spikes, or drops can cause the microprocessor to malfunction.

3. Timing Issues: The 8085 microprocessor relies on accurate timing signals
for proper operation. Timing issues such as clock signal instability, noise,
or interference can cause the microprocessor to malfunction.

4. Memory Interface Issues: The 8085 microprocessor communicates with memory
through its address and data buses. Memory interface issues such as faulty
memory chips, loose connections, or address decoding errors can cause the
microprocessor to malfunction.

5. Hardware Interface Issues: The 8085 microprocessor communicates with other
devices through its input/output ports. Hardware interface issues such as
faulty devices, incorrect wiring, or improper device selection can cause the
microprocessor to malfunction.

6. Programming Issues: The 8085 microprocessor is programmed with machine
language or assembly language instructions. Programming issues such as syntax
errors, logic errors, or incorrect instruction sequences can cause the
microprocessor to malfunction or produce incorrect results.

In addition, two further application areas (rather than issues) are worth
noting:

7. Research and development: The 8085 microprocessor is often used in
research and development projects, where it can be used to develop and test
new digital electronics and computer systems. Researchers and developers can
use the microprocessor to prototype new systems and test their performance.

8. Retro computing: The 8085 microprocessor is still used by enthusiasts
today for retro computing projects. Retro computing involves using older
computer systems and technologies to explore the history of computing and
gain a deeper understanding of how modern computing systems have evolved.

Instruction cycle in 8085 microprocessor:

Introduction:

The 8085 microprocessor is a popular 8-bit microprocessor that was first
introduced by Intel in 1976. It has a set of instructions that it can
execute, and the execution of each instruction involves a series of steps
known as the instruction cycle.

The instruction cycle of the 8085 microprocessor consists of four basic steps,
which are:
1. Fetch: In this step, the microprocessor fetches the instruction from the
memory location pointed to by the program counter (PC). The PC is
incremented by one after the fetch operation.

2. Decode: Once the instruction is fetched, the microprocessor decodes it to
determine the operation to be performed and the operands involved.

3. Execute: In this step, the microprocessor performs the operation specified
by the instruction on the operands.

4. Store: Finally, the result of the execution is stored in the appropriate
memory location or register.

Once the execution of an instruction is complete, the microprocessor returns
to the fetch step to fetch the next instruction to be executed. This cycle
repeats until the program is complete or interrupted.
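The fetch-decode-execute-store loop above can be sketched as a toy interpreter. The three opcodes here are hypothetical mnemonics, not real 8085 instructions:

```python
# A toy fetch-decode-execute loop illustrating the four-step cycle.
memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("HALT", None)}
acc, pc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]   # 1. fetch the instruction at PC
    pc += 1                        #    PC now points to the next instruction
    if opcode == "LOAD":           # 2. decode the opcode ...
        acc = operand              # 3. execute, 4. store into a register
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False            # cycle repeats until the program ends

print(acc)  # -> 8
```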

Why is the instruction cycle needed in the 8085 microprocessor?


The instruction cycle is a fundamental concept in the operation of the 8085
microprocessor because it is the process by which the microprocessor
fetches, decodes, and executes instructions. The execution of a program in a
microprocessor involves a sequence of instructions, and each instruction is
executed using the instruction cycle.
The instruction cycle is necessary in the 8085 microprocessor because it
ensures that the instructions are executed in the correct sequence and that
the correct operation is performed on the correct data. The fetch step
ensures that the correct instruction is obtained from memory, the decode
step ensures that the correct operation is determined, and the execute step
ensures that the correct operation is performed on the correct data.
Furthermore, the instruction cycle allows the microprocessor to execute
instructions at a very high speed, which is critical in applications where real-
time performance is required. By efficiently executing instructions using the
instruction cycle, the microprocessor can perform complex tasks and
computations quickly and accurately.
The time required to fetch and execute an entire instruction is called the
instruction cycle. It consists of:

• Fetch cycle – The next instruction is fetched using the address stored in
the program counter (PC) and then stored in the instruction register.
• Decode instruction – Decoder interprets the encoded instruction
from instruction register.
• Reading effective address – The address given in instruction is
read from main memory and required data is fetched. The effective
address depends on direct addressing mode or indirect addressing
mode.
• Execution cycle – consists of memory read (MR), memory write
(MW), input/output read (IOR) and input/output write (IOW) machine cycles.
The time required by the microprocessor to complete one operation of
accessing memory or an input/output device is called a machine cycle. One
period of the microprocessor's clock is called a T-state; it is measured from
the falling edge of one clock pulse to the falling edge of the next. The
opcode fetch cycle takes four T-states and a typical execution (memory
read/write) machine cycle takes three T-states.
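These T-state counts translate directly into execution times. A back-of-envelope sketch, assuming a 3 MHz internal clock (a common figure for the 8085A; the actual frequency depends on the crystal used):

```python
clock_hz = 3_000_000                   # assumed 3 MHz internal clock
t_state = 1 / clock_hz                 # one T-state, in seconds

opcode_fetch = 4 * t_state             # fetch machine cycle: 4 T-states
memory_read = 3 * t_state              # read machine cycle: 3 T-states

# An instruction needing one fetch plus one memory read = 7 T-states
print(round((opcode_fetch + memory_read) * 1e6, 3), "microseconds")
```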

Timing diagram for the fetch cycle (opcode fetch):

The timing diagram (not reproduced here) shows the following signals:
• 05 – lower byte of the address where the opcode is stored, carried on the
multiplexed address/data bus AD0-AD7.
• 20 – higher byte of that address (2005H in this example), carried on the
higher-order address lines A8-A15.
• ALE – Controls the use of the multiplexed address/data bus. When the signal
is high (1), the multiplexed bus acts as an address bus; this is how the
lower address byte is placed on the bus. When the signal is low (0), the
multiplexed bus acts as a data bus; once the lower address byte has been
latched, the same pins carry data.
• RD (low active) – If signal is high or 1, no data is read by
microprocessor. If signal is low or 0, data is read by
microprocessor.
• WR (low active) – If signal is high or 1, no data is written by
microprocessor. If signal is low or 0, data is written by
microprocessor.
• IO/M (with status signals S1, S0) – If the signal is high (1), the
operation targets an input/output device; if the signal is low (0), the
operation targets memory.
Uses of Instruction cycle in 8085 microprocessor :
Some of the key uses of the instruction cycle in the 8085 microprocessor
include:
1. Execution of instructions: The instruction cycle is used to execute
the instructions in a program. Each instruction in the program is
fetched, decoded, and executed using the instruction cycle.
2. Control flow: The instruction cycle is used to control the flow of
instructions in a program. Once an instruction is executed, the
microprocessor moves on to the next instruction in the program.
3. Real-time processing: The instruction cycle allows the 8085
microprocessor to execute instructions quickly and accurately,
making it well-suited for real-time processing applications where
speed and accuracy are critical.
4. Resource management: The instruction cycle is used to manage
the resources of the 8085 microprocessor, including the memory
and registers. The fetch step retrieves instructions from memory,
while the store step writes results back to memory or registers.
5. Interrupt handling: The instruction cycle is used to handle interrupts
in the 8085 microprocessor. When an interrupt occurs, the current
instruction cycle is suspended, and the microprocessor jumps to a
separate interrupt routine to handle the interrupt.

Issues of the instruction cycle in the 8085 microprocessor:

Some of the key issues of the instruction cycle in the 8085 microprocessor
include:
1. Timing: The instruction cycle requires precise timing to ensure that
each step is executed correctly. If the timing is off, it can lead to
incorrect results or cause the microprocessor to malfunction.
2. Instruction set limitations: The 8085 microprocessor has a limited
instruction set, which can make it difficult to perform certain
operations or tasks. This can lead to inefficient code and slower
execution times.
3. Data transfer: The instruction cycle is used to transfer data between
memory and registers, but this process can be slow and inefficient.
This can be a problem when working with large amounts of data or
when real-time processing is required.
4. Interrupt handling: Although the instruction cycle is used to handle
interrupts, it can be challenging to ensure that the microprocessor
returns to the correct point in the program after handling an
interrupt.
5. Instruction sequencing: The instruction cycle requires instructions to
be executed in a specific sequence, which can limit the flexibility of
the microprocessor and make it difficult to optimize code for specific
tasks.
Architecture of 8086:
Introduction :
The 8086 microprocessor is a 16-bit microprocessor designed by Intel and
introduced in 1978. It is the first member of the x86 family of
microprocessors, which includes many popular CPUs used in personal computers.
The architecture of the 8086 microprocessor is based on a complex
instruction set computer (CISC) architecture, which means that it supports a
wide range of instructions, many of which can perform multiple operations in
a single instruction. The 8086 microprocessor has a 20-bit address bus,
which can address up to 1 MB of memory, and a 16-bit data bus, which can
transfer data between the microprocessor and memory or I/O devices.
The 8086 microprocessor has a segmented memory architecture, which
means that memory is divided into segments that are addressed using both a
segment register and an offset. The segment register points to the start of a
segment, while the offset specifies the location of a specific byte within the
segment. This allows the 8086 microprocessor to access large amounts of
memory, while still using a 16-bit data bus.

The 8086 microprocessor has two main execution units: the execution unit
(EU) and the bus interface unit (BIU). The BIU is responsible for fetching
instructions from memory and decoding them, while the EU executes the
instructions. The BIU also manages data transfer between the
microprocessor and memory or I/O devices.
The 8086 microprocessor has a rich set of registers, including general-
purpose registers, segment registers, and special registers. The general-
purpose registers can be used to store data and perform arithmetic and
logical operations, while the segment registers are used to address memory
segments. The special registers include the flags register, which stores
status information about the result of the previous operation, and the
instruction pointer (IP), which points to the next instruction to be executed.
A Microprocessor is an Integrated Circuit with all the functions of a CPU.
However, it cannot be used stand-alone since unlike a microcontroller it has
no memory or peripherals.

8086 does not have a RAM or ROM inside it. However, it has internal
registers for storing intermediate and final results and interfaces with
memory located outside it through the System Bus.

The 8086 is a 16-bit integer processor packaged in a 40-pin Dual Inline
Package (DIP) IC.

The size of the internal registers(present within the chip) indicates how much
information the processor can operate on at a time (in this case 16-bit
registers) and how it moves data around internally within the chip,
sometimes also referred to as the internal data bus.
8086 provides the programmer with 14 internal registers, each of 16 bits or 2
bytes wide. The main advantage of the 8086 microprocessor is that it
supports Pipelining.

Memory segmentation:
• In order to increase execution speed and fetching speed, the 8086 segments
its memory.
• Its 20-bit address bus can address 1 MB of memory, which is organized as
64 KB segments (sixteen non-overlapping 64 KB segments would fill the 1 MB
space).
• At any one time, the 8086 works with four 64 KB segments within the whole
1 MB memory.
The internal architecture of Intel 8086 is divided into 2 units: The Bus
Interface Unit (BIU), and The Execution Unit (EU). These are explained as
following below.

1. The Bus Interface Unit (BIU):

It provides the interface of 8086 to external memory and I/O devices via the
System Bus. It performs various machine cycles such as memory read, I/O
read, etc. to transfer data between memory and I/O devices.

BIU performs the following functions:

• It generates the 20-bit physical address for memory access.
• It fetches instructions from the memory.
• It transfers data to and from the memory and I/O.
• It maintains the 6-byte pre-fetch instruction queue (which supports
pipelining).

BIU mainly contains the 4 Segment registers, the Instruction Pointer, a
pre-fetch queue, and an Address Generation Circuit.

Instruction Pointer (IP):


• It is a 16-bit register. It holds offset of the next instructions in
the Code Segment.
• IP is incremented after every instruction byte is fetched.
• IP gets a new value whenever a branch instruction occurs.
• CS is multiplied by 10H to give the 20-bit physical address of the
Code Segment.
• The address of the next instruction is calculated by using the
formula CS x 10H + IP.
Example:
CS = 4321H, IP = 1000H
Physical address = CS x 10H + IP = 43210H + 1000H = 44210H
Here the offset is the Instruction Pointer (IP), and 44210H is the address of
the next instruction.
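The segment:offset calculation can be checked with a small Python helper (the function name is illustrative; the `& 0xFFFFF` mask models the 20-bit address width):

```python
def physical_address(segment, offset):
    """Form a 20-bit 8086 physical address: segment * 10H + offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # shift left 4 = x 10H

# The example from the text: CS = 4321H, IP = 1000H
print(hex(physical_address(0x4321, 0x1000)))  # -> 0x44210
```

The mask also captures the well-known wrap-around: FFFFH:0010H folds back to address 00000H on a 20-bit bus.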

Code Segment register: (16 Bit register): CS holds the base address for
the Code Segment. All programs are stored in the Code Segment and
accessed via the IP.
Data Segment register: (16 Bit register): DS holds the base address for
the Data Segment.

Stack Segment register: (16 Bit register): SS holds the base address for
the Stack Segment.

Extra Segment register: (16 Bit register): ES holds the base address for
the Extra Segment.
Please note that segments reside in memory, while segment registers are
present in the microprocessor. Segment registers store the starting address
of each segment in memory.

Address Generation Circuit:


• The BIU has a Physical Address Generation Circuit.
• It generates the 20-bit physical address from the Segment and Offset
addresses using the formula:
  Physical Address = Segment Address x 10H + Offset Address
• In block diagrams of the Bus Interface Unit (BIU), this calculation unit is
shown with the Σ symbol; it computes the physical address of an instruction
in memory.
6 Byte Pre-fetch Queue:
• It is a 6-byte queue (FIFO).
• Fetching the next instruction (by BIU from CS) while executing the
current instruction is called pipelining.
• Gets flushed whenever a branch instruction occurs.
• The pre-Fetch queue is of 6-Bytes only because the maximum size
of instruction that can have in 8086 is 6 bytes. Hence to cover up all
operands and data fields of maximum size instruction in 8086
Microprocessor there is a Pre-Fetch queue is 6 Bytes.
• The pre-Fetch queue is connected with the control unit which is
responsible for decoding op-code and operands and telling the
execution unit what to do with the help of timing and control signals.
• The pre-fetch queue is responsible for pipelining, which is why the 8086 is
called a fetch-decode-execute type microprocessor. Since instructions are
usually already present in the queue for decoding and execution, the overall
execution speed is increased.
• The next instruction word is pushed into the pre-fetch queue only when
there is at least a 2-byte space free; if only 1 byte is vacant, no fetch
takes place until 2 bytes of space become available.
• The instruction pre-fetch queue works sequentially, so it cannot anticipate
branches. To avoid executing stale instructions, the queue is flushed
whenever a branch or conditional jump occurs.
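The queue's behaviour (6-byte FIFO, refill only when 2 bytes are free, flush on branch) can be sketched as a small Python class. The class and method names are illustrative, not part of any real 8086 tooling:

```python
from collections import deque

class PrefetchQueue:
    """Sketch of the 8086 6-byte pre-fetch queue (FIFO)."""
    def __init__(self):
        self.q = deque(maxlen=6)

    def can_fetch(self):
        # The BIU refills only when at least 2 bytes of space are free.
        return 6 - len(self.q) >= 2

    def fetch_word(self, b0, b1):
        if self.can_fetch():
            self.q.append(b0)
            self.q.append(b1)

    def flush(self):
        # A branch invalidates everything fetched ahead.
        self.q.clear()

pq = PrefetchQueue()
pq.fetch_word(0x90, 0x90)   # two bytes enter the queue
pq.fetch_word(0xB8, 0x01)
print(len(pq.q))            # -> 4
pq.flush()                  # branch taken: queue is emptied
print(len(pq.q))            # -> 0
```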

2. Prefetch Unit:

The Prefetch Unit in the 8086 microprocessor is the component responsible for
fetching instructions from memory and storing them in a queue. The prefetch
unit allows the 8086 to overlap instruction fetching with execution,
improving the overall performance of the microprocessor.
The prefetch unit consists of a buffer and a program counter that are used to
fetch instructions from memory. The buffer stores the instructions that have
been fetched and the program counter keeps track of the memory location of
the next instruction to be fetched. The prefetch unit fetches several
instructions ahead of the current instruction, allowing the 8086 to execute
instructions from the buffer rather than from memory.
This parallel processing of instruction fetches helps to reduce the wait time
for memory access, as the 8086 can continue to execute instructions from
the buffer while it waits for memory access to complete. This results in
improved overall performance, as the 8086 is able to execute more
instructions in a given amount of time.
The prefetch unit is an important component of the 8086 microprocessor, as
it allows the microprocessor to work more efficiently and perform more
instructions in a given amount of time. This improved performance helps to
ensure that the 8086 remains competitive in its performance and capabilities,
even as technology continues to advance.

3. The Execution Unit (EU):

The main components of the EU are General purpose registers, the ALU,
Special purpose registers, the Instruction Register and Instruction Decoder,
and the Flag/Status Register.
1. Fetches instructions from the Queue in BIU, decodes, and executes
arithmetic and logic operations using the ALU.
2. Sends control signals for internal data transfer operations within the
microprocessor.(Control Unit)
3. Sends request signals to the BIU to access the external module.
4. It operates with respect to T-states (clock cycles) and not machine
cycles.
8086 has four 16-bit general purpose registers AX, BX, CX, and DX which
store intermediate values during execution. Each of these has two 8-bit parts
(higher and lower).
• AX register (combination of AH and AL registers): It holds operands and
results during multiplication and division operations, and also acts as an
accumulator during string operations.

• BX register (combination of BH and BL registers): It holds the memory
address (offset address) in indirect addressing modes.

• CX register (combination of CH and CL registers): It holds the count for
instructions such as loops, rotates, shifts, and string operations.

• DX register (combination of DH and DL registers): It is used with AX to
hold 32-bit values during multiplication and division.
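The split into 8-bit halves and the DX:AX pairing can be made concrete with a short Python sketch (the helper names are illustrative):

```python
def split_ax(ax):
    """Split a 16-bit AX value into its 8-bit halves (AH, AL)."""
    return (ax >> 8) & 0xFF, ax & 0xFF

def dx_ax(dx, ax):
    """Combine DX and AX into the 32-bit value used for MUL/DIV results."""
    return (dx << 16) | ax

ah, al = split_ax(0x1234)
print(hex(ah), hex(al))            # -> 0x12 0x34
print(dx_ax(0x0001, 0x86A0))       # -> 100000 (0x186A0)
```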

Arithmetic Logic Unit (16-bit): Performs 8 and 16-bit arithmetic and logic
operations.

Special purpose registers (16-bit): Special purpose registers are also called
offset registers; they point to specific memory locations within each
segment.
The concept of segments can be understood through a textbook analogy. Suppose
a textbook has 10 chapters and each chapter takes exactly 100 pages, so the
book contains 1000 pages. To reach page 575, we can treat 500 as the segment
base address (in microprocessor terms, the Code, Data, Stack, or Extra
segment base held in a segment register in the Bus Interface Unit) and 75 as
the offset within that segment. Reaching the page as 500 + 75 mirrors how a
physical address is formed from a segment base address plus an offset
register (Instruction Pointer, Stack Pointer, Base Pointer, Source Index, or
Destination Index, according to the segment in use).
• Stack Pointer: Points to Stack top. Stack is in Stack Segment,
used during instructions like PUSH, POP, CALL, RET etc.
• Base Pointer: BP can hold the offset addresses of any location in
the stack segment. It is used to access random locations of the
stack.
• Source Index: It holds offset address in Data Segment during
string operations.
• Destination Index: It holds offset address in Extra Segment during
string operations.
Instruction Register and Instruction Decoder:
The EU fetches an opcode from the queue into the instruction register. The
instruction decoder decodes it and sends the information to the control circuit
for execution.

Flag/Status register (16 bits): It has 9 flags that help change or recognize
the state of the microprocessor.

6 Status flags:
1. Carry flag(CF)
2. Parity flag(PF)
3. Auxiliary carry flag(AF)
4. Zero flag(Z)
5. Sign flag(S)
6. Overflow flag (O)
Status flags are updated after every arithmetic and logic operation.

3 Control flags:
1. Trap flag(TF)
2. Interrupt flag(IF)
3. Direction flag(DF)
These flags can be set or reset using control instructions like CLC, STC,
CLD, STD, CLI, STI, etc. The Control flags are used to control certain
operations.
4. Decode Unit:

The Decode Unit in the 8086 microprocessor is the component that decodes the
instructions that have been fetched from memory. The decode unit takes the
machine code instructions and translates them into micro-operations that can
be executed by the microprocessor's execution unit.
The Decode Unit works in parallel with the Prefetch Unit, which fetches
instructions from memory and stores them in a queue. The Decode Unit
reads the instructions from the queue and translates them into micro-
operations that can be executed by the microprocessor.
The Decode Unit is an important component of the 8086 microprocessor, as
it allows the microprocessor to execute instructions efficiently and accurately.
The decode unit ensures that the microprocessor can execute complex
instructions, such as jump instructions and loop instructions, by translating
them into a series of simple micro-operations.
The Decode Unit is responsible for decoding instructions, performing
register-to-register operations, and performing memory-to-register
operations. It also decodes conditional jumps, calls, and returns, and
performs data transfers between memory and registers.
The Decode Unit helps to improve the performance of the 8086
microprocessor by allowing it to execute instructions quickly and accurately.
This improved performance helps to ensure that the 8086 remains
competitive in its performance and capabilities, even as technology
continues to advance.

5. Control Unit:

The Control Unit in the 8086 microprocessor is the component that manages the
overall operation of the microprocessor. The control unit is responsible for
controlling the flow of instructions through the microprocessor and
coordinating the activities of the other components, including the Decode
Unit, Execution Unit, and Prefetch Unit.
The Control Unit acts as the central coordinator for the microprocessor,
directing the flow of data and instructions and ensuring that the
microprocessor operates correctly. It also monitors the state of the
microprocessor, ensuring that the correct sequence of operations is followed.
The Control Unit is responsible for fetching instructions from memory,
decoding them, executing them, and updating the microprocessor’s state. It
also handles interrupt requests and performs system management tasks,
such as power management and error handling.
The Control Unit is an essential component of the 8086 microprocessor, as it
allows the microprocessor to operate efficiently and accurately. The control
unit ensures that the microprocessor can execute complex instructions, such
as jump instructions and loop instructions, by coordinating the activities of
the other components.
The Control Unit improves the performance of the 8086 by managing the flow of instructions and data through the microprocessor, ensuring that it operates correctly and efficiently.

The 8086 microprocessor uses three different buses to transfer data and
instructions between the microprocessor and other components in a
computer system. These buses are:

1. Address Bus: The address bus is used to send the memory address of the instruction or data being read or written. The address bus is 20 bits wide, allowing the 8086 to address up to 1 MB of memory.
2. Data Bus: The data bus is used to transfer data between the microprocessor and memory. The data bus is 16 bits wide, allowing the 8086 to transfer 16-bit data words at a time.
3. Control Bus: The control bus is used to transfer control signals between the microprocessor and other components in the computer system. It carries signals such as read, write, and interrupt requests, and transfers status information between the microprocessor and other components.
The buses in the 8086 microprocessor play a crucial role in allowing the microprocessor to access and transfer data from memory, as well as to interact with other components in the computer system.
Execution of whole 8086 Architecture:
1. All instructions are stored in memory, so to fetch any instruction the first task is to obtain the physical address of the instruction to be fetched. This task is done by the Bus Interface Unit (BIU) and the segment registers. Suppose the Code Segment register holds a segment address and the Instruction Pointer holds an offset address; the physical-address calculator circuit then computes the physical address from which the instruction is to be fetched.
2. After the address calculation the instruction is fetched from memory and passes through the C-bus (data bus) as shown in the figure, and according to the size of the instruction, the instruction prefetch queue fills up. For example, MOV AX, BX is a 2-byte instruction, so it occupies two blocks of the queue, while MOV BX, 4050H is a 3-byte instruction, so it occupies three blocks of the prefetch queue.
3. When the instruction is ready for execution, it leaves the queue in FIFO order and enters the control system (control circuit), which resides in the Execution Unit. Here instruction decoding takes place: the decoder interprets the opcode, which tells the microprocessor which operation is to be performed. The control system then sends signals throughout the microprocessor indicating what to perform and what to extract from the general- and special-purpose registers.
4. After decoding, the microprocessor fetches data from the general-purpose registers; for instructions like ADD, SUB, MUL, and DIV, the data residing in the GPRs is fetched and placed at the ALU's inputs, and the required addition, subtraction, multiplication, or division is then carried out.
5. The flag register values change dynamically according to the result of each arithmetic operation.
6. While an instruction is being decoded and executed (step 3 above), the Bus Interface Unit does not remain idle: it continuously fetches instructions from memory, places them in the prefetch queue, and keeps them ready for execution in FIFO order.
7. In this way, unlike in the 8085 microprocessor, the fetch, decode, and execute stages proceed in parallel rather than sequentially. This is called pipelining; because of the instruction prefetch queue, fetching, decoding, and execution happen side by side. Hence the 8086 architecture is partitioned into a Bus Interface Unit and an Execution Unit to support pipelining.
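Step 1 of the sequence above (the BIU's physical-address calculation) can be sketched in Python; the segment and offset values below are hypothetical:

```python
def physical_address(segment: int, offset: int) -> int:
    """20-bit physical address formed by the BIU: segment * 16 + offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # keep only 20 bits

# CS = 1000H, IP = 0020H -> physical address 10020H
assert physical_address(0x1000, 0x0020) == 0x10020
# Addresses wrap around at the 1 MB boundary
assert physical_address(0xFFFF, 0x0010) == 0x00000
```

Shifting the 16-bit segment value left by 4 bits (multiplying by 16) and adding the 16-bit offset is exactly how the 8086 forms its 20-bit address while keeping all registers 16 bits wide.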
Advantages of Architecture of 8086:
The architecture of the 8086 microprocessor provides several advantages,
including:
1. Wide range of instructions: The 8086 microprocessor supports a
wide range of instructions, allowing programmers to write complex
programs that can perform many different operations.
2. Segmented memory architecture: The segmented memory
architecture allows the 8086 microprocessor to address large
amounts of memory, up to 1 MB, while still using a 16-bit data bus.
3. Powerful instruction set: The instruction set of the 8086
microprocessor includes many powerful instructions that can
perform multiple operations in a single instruction, reducing the
number of instructions needed to perform a given task.
4. Multiple execution units: The 8086 microprocessor has two main
execution units, the execution unit and the bus interface unit, which
work together to efficiently execute instructions and manage data
transfer.
5. Rich set of registers: The 8086 microprocessor has a rich set of
registers, including general-purpose registers, segment registers,
and special registers, allowing programmers to efficiently
manipulate data and control program flow.
6. Backward compatibility: The architecture of the 8086
microprocessor is backward compatible with earlier 8-bit
microprocessors, allowing programs written for these earlier
microprocessors to be easily ported to the 8086 microprocessor.
Disadvantages of Architecture of 8086:
The architecture of the 8086 microprocessor has some disadvantages,
including:
1. Complex programming: The architecture of the 8086
microprocessor is complex and can be difficult to program,
especially for novice programmers who may not be familiar with the
assembly language programming required for the 8086
microprocessor.
2. Segmented memory architecture: While the segmented memory
architecture allows the 8086 microprocessor to address a large
amount of memory, it can be difficult to program and manage, as it
requires programmers to use both segment registers and offsets to
address memory.
3. Limited performance: The 8086 microprocessor has a limited
performance compared to modern microprocessors, as it has a
slower clock speed and a limited number of execution units.
4. Limited instruction set: While the 8086 microprocessor has a wide
range of instructions, it has a limited instruction set compared to
modern microprocessors, which can limit its functionality and
performance in certain applications.
5. Limited memory addressing: The 8086 microprocessor can only
address up to 1 MB of memory, which can be limiting in applications
that require large amounts of memory.
6. Lack of built-in features: The 8086 microprocessor lacks some built-
in features that are commonly found in modern microprocessors,
such as hardware floating-point support and virtual memory
management.
Addressing modes in 8086 microprocessor
The way of specifying the data to be operated on by an instruction is known as an addressing mode. It specifies whether the given operand is an immediate data, a register, or a memory address. Types of addressing modes:
1. Register mode – In this type of addressing mode both the operands are registers. Example:
MOV AX, BX
XOR AX, DX
ADD AL, BL
2. Immediate mode – In this type of addressing mode the source operand is an 8-bit or 16-bit data. The destination operand can never be immediate data. Example:
MOV AX, 2000
MOV CL, 0A
ADD AL, 45
AND AX, 0000
Note that to initialize the value of a segment register, a general-purpose register is required:

MOV AX, 2000
MOV DS, AX
3. Displacement or direct mode – In this type of addressing mode the effective address is given directly in the instruction as a displacement. Example:
MOV AX, [DISP]
MOV AX, [0500]
4. Register indirect mode – In this addressing mode the effective address is held in SI, DI, or BX. Physical Address = Segment Address + Effective Address. Example:
MOV AX, [DI]
ADD AL, [BX]
MOV AX, [SI]
5. Based indexed mode – In this mode the effective address is the sum of a base register and an index register.
Base registers: BX, BP
Index registers: SI, DI
The physical memory address is calculated according to the base register. Example:
MOV AL, [BP+SI]
MOV AX, [BX+DI]
6. Indexed mode – In this type of addressing mode the effective address is the sum of an index register and a displacement. Example:
MOV AX, [SI+2000]
MOV AL, [DI+3000]
7. Based mode – In this mode the effective address is the sum of a base register and a displacement. Example:
MOV AL, [BP+0100]
8. Based indexed displacement mode – In this type of addressing mode the effective address is the sum of an index register, a base register, and a displacement. Example:
MOV AL, [SI+BP+2000]
9. String mode – This addressing mode is used by string instructions. The values of SI and DI are auto-incremented or auto-decremented depending on the direction flag. Example:
MOVSB
MOVSW
10. Input/Output mode – This addressing mode is used for input-output operations. Example:
IN AL, 45
OUT 50, AL
11. Relative mode – In this mode the effective address is calculated with reference to the instruction pointer. Example:
JNZ 8-bit displacement
IP = IP + 8-bit displacement
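As an illustration (not 8086 code), the memory-addressing rules above can be sketched in Python; the mode names and register values here are hypothetical labels for the modes listed:

```python
def effective_address(mode, regs, disp=0):
    """Compute the 16-bit effective address for several 8086 addressing modes.

    mode: one of "direct", "register_indirect", "based_indexed",
          "indexed", "based", "based_indexed_disp".
    regs: dict of 16-bit register values, e.g. {"BX": 0x0100, "SI": 0x0020}.
    """
    if mode == "direct":                 # MOV AX, [0500]
        ea = disp
    elif mode == "register_indirect":    # MOV AX, [SI]
        ea = regs["SI"]
    elif mode == "based_indexed":        # MOV AX, [BX+DI]
        ea = regs["BX"] + regs["DI"]
    elif mode == "indexed":              # MOV AX, [SI+2000]
        ea = regs["SI"] + disp
    elif mode == "based":                # MOV AL, [BP+0100]
        ea = regs["BP"] + disp
    elif mode == "based_indexed_disp":   # MOV AL, [SI+BP+2000]
        ea = regs["BP"] + regs["SI"] + disp
    else:
        raise ValueError(mode)
    return ea & 0xFFFF                   # effective addresses wrap at 16 bits

regs = {"BX": 0x0100, "BP": 0x0200, "SI": 0x0010, "DI": 0x0020}
assert effective_address("based_indexed", regs) == 0x0120
assert effective_address("indexed", regs, disp=0x2000) == 0x2010
```

In every case the resulting effective address is then combined with a segment register (Physical Address = Segment Address × 16 + Effective Address) to form the final 20-bit physical address.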
UNIT- V

Programmable peripheral interface 8255


PPI 8255 is a general purpose programmable I/O device designed to
interface the CPU with its outside world such as ADC, DAC, keyboard etc.
We can program it according to the given condition. It can be used with
almost any microprocessor. It consists of three 8-bit bidirectional I/O ports
i.e. PORT A, PORT B and PORT C. We can assign different ports as input or
output functions.
Block diagram –

It consists of 40 pins and operates on a +5 V regulated power supply. Port C is further divided into two 4-bit ports, port C lower and port C upper; port C can work either in BSR (bit set reset) mode or in mode 0 of the input-output mode of the 8255. Port B can work in either mode 0 or mode 1 of the input-output mode. Port A can work in mode 0, mode 1, or mode 2 of the input-output mode. It has two control groups, control group A and control group B. Control group A consists of port A and port C upper. Control group B consists of port C lower and port B. Depending upon the values of CS', A1 and A0 we can select different ports in different modes as input-output function or BSR. This is done by writing a suitable word into the control register (control word D0-D7).
CS’ A1 A0 Selection Address

0 0 0 PORT A 80 H

0 0 1 PORT B 81 H

0 1 0 PORT C 82 H

0 1 1 Control Register 83 H

1 X X No Selection X

Pin diagram –
• PA0 – PA7 – Pins of port A
• PB0 – PB7 – Pins of port B
• PC0 – PC7 – Pins of port C
• D0 – D7 – Data pins for the transfer of data
• RESET – Reset input
• RD’ – Read input
• WR’ – Write input
• CS’ – Chip select
• A1 and A0 – Address pins
Operating modes –
1. Bit set reset (BSR) mode – If MSB of control word (D7) is 0, PPI
works in BSR mode. In this mode only port C bits are used for set
or reset.

2. Input-Output mode – If the MSB of the control word (D7) is 1, the PPI works in input-output mode. This is further divided into three modes:

• Mode 0 – In this mode all three ports (port A, B, C) can work as simple input or simple output functions. In this mode there is no interrupt-handling capacity.
• Mode 1 – Handshake I/O mode or strobed I/O mode. In this mode either port A or port B can work as a simple input or output port, while port C bits are used for handshake signals before the actual data transmission. It has interrupt-handling capacity, and inputs and outputs are latched. Example: a CPU wants to transfer data to a printer. Since the processor is much faster than the relatively slow printer, handshake signals are exchanged before the actual data transfer to synchronize the speeds of the CPU and the peripheral.

• Mode 2 – Bi-directional data bus mode. In this mode only port A works bidirectionally, while port B can work in either mode 0 or mode 1. Six bits of port C are used as handshake signals. It also has interrupt-handling capacity.
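As a sketch of how the I/O-mode control word described above is assembled (bit layout per the standard 8255 control word: D7 = 1 selects I/O mode, D6-D5 = port A mode, D4 = port A direction, D3 = port C upper, D2 = port B mode, D1 = port B direction, D0 = port C lower, with 1 = input; the helper function and its parameter names are illustrative, not any real API):

```python
def control_word_8255(mode_a=0, a_in=False, c_upper_in=False,
                      mode_b=0, b_in=False, c_lower_in=False):
    """Build the 8255 I/O-mode control word (D7 = 1 selects I/O mode)."""
    cw = 0x80                      # D7 = 1 -> input-output mode (not BSR)
    cw |= (mode_a & 0b11) << 5     # D6-D5: port A mode (0, 1, or 2)
    cw |= int(a_in) << 4           # D4: port A direction (1 = input)
    cw |= int(c_upper_in) << 3     # D3: port C upper direction
    cw |= (mode_b & 0b1) << 2      # D2: port B mode (0 or 1)
    cw |= int(b_in) << 1           # D1: port B direction
    cw |= int(c_lower_in)          # D0: port C lower direction
    return cw

# All ports as mode-0 outputs -> control word 80H
assert control_word_8255() == 0x80
# Port A and port B as mode-0 inputs -> control word 92H
assert control_word_8255(a_in=True, b_in=True) == 0x92
```

The resulting byte would be written to the control register (address 83H in the selection table above) to configure the ports.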

Analog-to-Digital Converter (ADC):

The transducer's electrical analog output serves as the analog input to the ADC. The ADC converts this analog input to a digital output. This digital output consists of a number of bits that represent the value of the analog input. For example, the ADC might convert the transducer's 800- to 1500-mV analog values to binary values ranging from 01010000 (80) to 10010110 (150). Note that the binary output from the ADC is proportional to the analog input voltage, so that each unit of the digital output represents 10 mV.

The digital representation of the analog values is transmitted from the ADC to the digital computer, which stores the digital value and processes it according to the program of instructions that it is executing.
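The 10 mV-per-count mapping in the example above can be sketched as:

```python
def adc_output(millivolts: float, step_mv: float = 10.0) -> int:
    """Quantize an analog input (in mV) to a digital code, one count per step_mv."""
    return int(millivolts // step_mv)

# 800 mV -> 80 (01010000) and 1500 mV -> 150 (10010110), as in the text
assert adc_output(800) == 80
assert adc_output(1500) == 150
assert format(adc_output(800), "08b") == "01010000"
```

Each unit of the digital output represents one quantization step of the analog input, which is why the binary output is proportional to the input voltage.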

Digital-to-Analog Conversion

Basically, D/A conversion is the process of taking a value represented in digital code (such as straight binary or BCD) and converting it to a voltage or current which is proportional to the digital value. Fig. 7.2 shows the symbol for a typical 4-bit D/A converter. Now, we will examine the various input/output relationships.

Fig. Four-bit DAC with voltage output.

The digital inputs D, C, B, and A are usually derived from the output register of a digital system. These 4 bits represent 2^4 = 16 different binary numbers, and for each input number the D/A converter output voltage is a unique value. In fact, for this case, the analog output voltage Vout is equal in volts to the binary number.

In general,

Analog output = K × digital input

where K is the proportionality factor, a constant for a given DAC. The analog output can of course be a voltage or a current. When it is a voltage, K will be in voltage units, and when the output is a current, K will be in current units. For the DAC of Fig. 7.2, K = 1 V, so that

VOUT = (1 V) × digital input

We can use this to calculate VOUT for any value of digital input. For example, with a digital input of 1100₂ = 12₁₀, we obtain

VOUT = 1 V × 12 = 12 V
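The worked example above (VOUT = K × digital input with K = 1 V) can be checked with a one-line sketch:

```python
def dac_output(digital_input: int, k_volts: float = 1.0) -> float:
    """Analog output = K * digital input (K in volts for a voltage-output DAC)."""
    return k_volts * digital_input

# Digital input 1100 (binary) = 12 -> 12 V, matching the worked example
assert dac_output(0b1100) == 12.0
```

Changing K rescales the whole transfer characteristic; for a current-output DAC, K would simply carry current units instead.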
Digital-to-Analog Converter (DAC)

This digital output from the computer is connected to a DAC, which converts it to a proportional analog voltage or current. For example, the computer might produce a digital output ranging from 00000000 to 11111111, which the DAC converts to a voltage ranging from 0 to 10 V.

Introduction to Input-Output Interface:


Input-Output Interface is a method which helps in transferring information between the internal storage (memory) and external peripheral devices. A peripheral device is one that provides input or output for the computer; such devices are also called input-output devices. For example, a keyboard and a mouse provide input to the computer and are called input devices, while a monitor and a printer provide output from the computer and are called output devices. Just like external hard drives, some peripheral devices are able to provide both input and output.

Input-Output Interface
In a microcomputer-based system, peripheral devices need special communication links to interface them with the CPU. These links are required to resolve the differences between the peripheral devices and the CPU.
The major differences are as follows:
1. The nature of peripheral devices is electromagnetic and electromechanical, whereas the nature of the CPU is electronic. There is a large difference in the mode of operation of peripheral devices and the CPU.
2. A synchronization mechanism is also needed because the data transfer rate of peripheral devices is slower than that of the CPU.
3. In peripheral devices, data codes and formats differ from those used in the CPU and memory.
4. The operating modes of peripheral devices differ from one another, and each must be controlled so as not to disturb the operation of the other peripheral devices connected to the CPU.
Additional hardware is therefore needed to resolve the differences between the CPU and peripheral devices and to supervise and synchronize all input and output devices.
Functions of Input-Output Interface:
1. It is used to synchronize the operating speed of CPU with respect to
input-output devices.
2. It selects the input-output device which is appropriate for the
interpretation of the input-output signal.
3. It provides signals such as control and timing signals.
4. Data buffering is possible through the data bus.
5. It performs error detection.
6. It converts serial data into parallel data and vice versa.
7. It also converts digital data into analog signals and vice versa.

Memory Interfacing:

Several memory chips and I/O devices are connected to a microprocessor. When any instruction is executed, the address of the memory location or I/O device is sent out by the microprocessor, and the corresponding memory chip or I/O device is selected by a decoding circuit.

Memory requires some signals to read from and write to registers and
microprocessor transmits some signals for reading or writing data.

The interfacing process includes matching the memory requirements with the
microprocessor signals. Therefore, the interfacing circuit should be designed
in such a way that it matches the memory signal requirements with the
microprocessor's signals.

8279 Programmable Keyboard

The Intel 8279 is a programmable keyboard interfacing device. Data input and
display are the integral part of microprocessor kits and microprocessor-based
systems.

8279 has been designed for the purpose of 8-bit Intel microprocessors.

8279 has two sections namely keyboard section and display section.

The function of the keyboard section is to interface the keyboard, which is used as an input device for the microprocessor. It can also interface toggle or thumb switches.

The purpose of the display section is to drive alphanumeric displays or indicator lights. It is directly connected to the microprocessor bus.

The microprocessor is relieved of the burden of scanning the keyboard or refreshing the display.

Some important Features are:


o Simultaneous keyboard display operations
o Scanned sensor mode
o Scanned keyboard mode
o 8-character keyboard FIFO
o Strobed input entry mode
o 2-key lock out or N-key roll over with contact debounce
o Single 16-character display
o Dual 8 or 16 numerical display
o Interrupt output on key entry
o Programmable scan timing and mode, programmable from the CPU

Intel 8257
o The Intel 8257 is a programmable DMA controller.
o It is a 4-channel programmable Direct Memory Access
(DMA) controller.
o It is a 40 pin I.C. package and requires +5V supply for its operation.
o It can perform three operations, namely read, write, and verify.
o Each channel incorporates two 16-bit registers, namely DMA address
register and byte count register.
o Each channel can transfer data up to 64 KB and can be programmed independently.
o It operates in two modes: master mode and slave mode.

8257 Architecture
8257 Pin Description

DRQ0 - DRQ3: These are DMA request lines. An I/O device sends a DMA request on one of these lines; a HIGH on the line generates a DMA request.

DACK0 - DACK3: These are DMA acknowledge lines. The Intel 8257 sends an acknowledge signal through one of these lines, informing an I/O device that it has been selected for DMA data transfer. A LOW on the line acknowledges the I/O device.

A0 - A7: These are address lines. A0 - A3 are bidirectional lines; in master mode they carry the 4 LSBs of the 16-bit memory address generated by the 8257, while in slave mode they are input lines that select one of the internal registers to be read or programmed. A4 - A7 are tristated output lines which, in master mode, carry bits 4 through 7 of the 16-bit memory address generated by the Intel 8257.

D0 - D7: These are data lines. These are bidirectional three state lines. While
programming the controller the CPU sends data for the DMA address register,
the byte count register and the mode set register through these data lines.

AEN: Address latch enable.

ADSTB: A HIGH on this line latches the 8 MSBs of the address, which are sent on the data bus, into an Intel 8212 connected for this purpose.

CS: It is chip select.

(I/OR): I/O read. It is a bidirectional line. In output mode it is used to access data from the I/O device during the DMA write cycle.

(I/OW): I/O write. It is a bidirectional line. In output mode it allows the transfer of data to the I/O device during the DMA read cycle; the data is transferred from memory.

MEMR: Memory read

MEMW: Memory write

TC: Byte count (Terminal count).

MARK: Modulo 128 Mark.

CLK: Clock

HRQ: Hold request

HLDA: Hold acknowledge
