Microprocessor U 1 Notes

The document provides an introduction to microprocessors, detailing their architecture, components, and applications. It covers the evolution of microprocessors from the first generation to the fifth generation, including key models and their specifications. Additionally, it explains the differences between Harvard and Princeton architectures, highlighting their respective data handling capabilities.

Unit-1 Lecture-1

Today’s Target
✓ Introduction to Microprocessor
✓ Its applications
✓ AKTU PYQs
Microprocessor (BEE-602)

AKTU Syllabus : Unit-I


Introduction to Microprocessor
Introduction to Microprocessor and its applications, Microprocessor Evolution Tree,
Microprocessor Architecture (Harvard & Princeton), General Architecture of the
Microprocessor and its operations, Component of Microprocessor system: Processor,
Buses, Memory, Inputs-outputs (I/Os) and other Interfacing devices.
Introduction to Microprocessor
1. A Microprocessor is an essential part of a computer system; without it, you cannot perform
anything on your computer. It is a programmable device that takes in input, performs
arithmetic and logical operations on it, and produces the desired output.
2. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated
circuit that accepts binary data as input, processes it according to instructions stored in its
memory, and provides results (also in binary form) as output.
3. In simple words, a Microprocessor is a digital device on a chip that can fetch instructions
from memory, decode and execute them, and give results.
DIAGRAM OF A COMPUTER SYSTEM

A computer is a programmable machine that receives input, stores and manipulates
data/information, and provides output in a useful format.

Fig: Diagram of a Computer System


Basic components of a Microcomputer

CPU - Central Processing Unit


 The portion of a computer system that carries out the instructions of a computer
program
 The primary element carrying out the computer's functions. It is the unit that reads and
executes program instructions.
 The data in the instruction tells the processor what to do.
BLOCK DIAGRAM OF A MICROPROCESSOR

 A basic computer system consists of a central processing unit (CPU), memory (RAM and
ROM), and an input/output (I/O) unit.

Fig. Microprocessor structure


Microprocessor’s applications

 Fetching, decoding, executing, and writing back instructions are the four primary

functions of a processor (a minimal sketch of this cycle in C follows this list).

 The 8085 microprocessor is commonly used in instrumentation and control systems, such as

temperature and pressure controllers.

 Processors are used in mobile phones, laptops, computers, washing machines, and many
other devices.
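
To make the fetch-decode-execute cycle concrete, here is a minimal sketch in C of a toy
processor. The opcode values, the single accumulator, and the 256-byte memory are
illustrative assumptions, not the 8085's actual encoding.

#include <stdint.h>
#include <stdio.h>

/* Toy machine: opcode values are invented for illustration. */
enum { OP_HALT = 0x00, OP_LOAD = 0x01, OP_ADD = 0x02, OP_STORE = 0x03 };

int main(void) {
    uint8_t memory[256] = {
        OP_LOAD, 10, OP_ADD, 11, OP_STORE, 12, OP_HALT, /* program */
        [10] = 7, [11] = 5                              /* data */
    };
    uint8_t acc = 0;   /* accumulator register */
    uint8_t pc  = 0;   /* program counter */

    for (;;) {
        uint8_t opcode = memory[pc++];                      /* fetch  */
        switch (opcode) {                                   /* decode */
        case OP_LOAD:  acc = memory[memory[pc++]];  break;  /* execute    */
        case OP_ADD:   acc += memory[memory[pc++]]; break;
        case OP_STORE: memory[memory[pc++]] = acc;  break;  /* write back */
        case OP_HALT:  printf("result = %d\n", memory[12]); /* prints 12  */
                       return 0;
        }
    }
}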
University Questions

1. Write basic operations of microprocessor with block diagram. (2017-18)


2. Write about different languages of digital computer. (2017-18)
3. Write the applications of a microprocessor. (2018-19)
Unit-1 Lecture-2
Today’s Target
✓ General Architecture of the Microprocessor and its operations.
✓ Component of Microprocessor system: Processor, Buses, Memory,
Inputs-outputs (I/Os) and other Interfacing devices.
✓ AKTU PYQs
General Architecture of the Microprocessor
In this system, the microprocessor is the master and all other peripherals are slaves. The
master controls all the peripherals and initiates all operations.
Input
The input section transfers data and instructions in binary from the outside world to the
microprocessor. It includes devices such as keyboards, teletypes, and analog-to-digital
converters. Typically, a microcomputer includes a keyboard as an input device. The keyboard
has sixteen data keys (0 to 9 and A to F) and some additional function keys to perform
operations such as storing data and executing programs.

Output
The output section transfers data from the microprocessor to output devices such as light-
emitting diodes (LEDs), cathode-ray tubes (CRTs), printers, magnetic tape, or another
computer. Typically, single-board computers include LEDs and seven-segment LED displays as
output devices.
Arithmetic and Logic Unit : In this area of the microprocessor, computing functions are
performed on data. The ALU performs arithmetic operations such as addition and subtraction,
and logic operations such as AND, OR, and exclusive OR. Results are stored either in a register
or in memory, or are sent to output devices.
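
A quick C illustration of these two operation types on 8-bit operands (the operand values
0x3C and 0x0F are arbitrary):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 0x3C, b = 0x0F;
    printf("ADD: %02X\n", (unsigned)(uint8_t)(a + b)); /* arithmetic: 4B */
    printf("AND: %02X\n", (unsigned)(a & b));          /* logic: 0C */
    printf("OR : %02X\n", (unsigned)(a | b));          /* logic: 3F */
    printf("XOR: %02X\n", (unsigned)(a ^ b));          /* logic: 33 */
    return 0;
}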
Register Unit : This area of the microprocessor consists of various registers. The registers are
used primarily to store data temporarily during the execution of a program. Some of the
registers are accessible to the user through instructions.
Control Unit : The control unit provides the necessary timing and control signals to all the
operations in the microcomputer. It controls the flow of data between the microprocessor and
peripherals (including memory).
Memory
Memory stores binary information such as instructions and data, and provides that information
to the microprocessor whenever necessary. To execute programs, the microprocessor reads
instructions and data from memory and performs the computing operations in its ALU section.
Results are either transferred to the output section for display or stored in memory for later use.
Classification of Memory

ROM (Read Only Memory):


The first classification of memory is ROM. The data in this memory can only be read; no
writing is allowed. It is used to store permanent programs. It is a non-volatile type of
memory.
The classification of ROM memory is as follows:

Masked ROM: The program or data are permanently installed at the time of manufacturing as
per requirement. The data cannot be altered. The process of permanent recording is expensive
but economical for large quantities.

PROM (Programmable Read Only Memory): The basic function is the same as that of masked
ROM, but a PROM contains fuse links. Depending upon the bit pattern, each fuse can be burnt or
kept intact. This job is performed by a PROM programmer, which applies a high-current pulse
between two lines. The high current burns the fuse, effectively making the two lines open. Once
a PROM is programmed, its connections cannot be changed; the one facility it provides over
masked ROM is that users can load their own programs into it. The disadvantage is the chance
of a fuse re-growing with age, which changes the programmed data.
EPROM (Erasable Programmable Read Only Memory): The EPROM is programmable by the
user. It uses MOS circuitry to store data, holding 1s and 0s in the form of charge. The
information stored can be erased by exposing the memory to ultraviolet light, which erases the
data in all memory locations. For the ultraviolet light, a quartz window is provided, which is
covered during normal operation. After erasing, the device can be reprogrammed using an
EPROM programmer. This type of memory is used in project development and for experimental
use. The advantage is that it can be programmed, erased, and reprogrammed. The disadvantage
is that all the data gets erased even if you want to change a single data bit.

EEPROM: EEPROM stands for electrically erasable programmable read only memory. It is
similar to EPROM except that the erasing is done by electrical signals instead of ultraviolet
light. The main advantage is that memory locations can be selectively erased and reprogrammed,
but the manufacturing process is complex and expensive, so it is not commonly used.
R/W Memory (Read/Write Memory):
RAM is also called read/write memory. It is a volatile type of memory that
allows the programmer to read or write data. If the user wants to check the execution of
any program, the user loads the program into RAM and executes it. The result of
execution is then checked by reading either memory location contents or register
contents.

Classification of RAM memory: It is available in two types:

SRAM (Static RAM): SRAM consists of flip-flops, built using either bipolar transistors or MOS
devices; one flip-flop is required for each bit. The bit status remains unchanged until the next
write operation is performed or the power supply is switched off.

Advantages of SRAM:
Fast memory (less access time)
A refreshing circuit is not required.

Disadvantages of SRAM:
Low package density
Costly
DRAM (Dynamic RAM): In this type of memory, data is stored in the form of charge on capacitors.
When a data bit is 1, the capacitor is charged; if the bit is 0, the capacitor is not charged.
Because of capacitor leakage currents, the data will not be held by these cells indefinitely, so
DRAMs require refreshing of the memory cells: a process in which the same data is read and
rewritten at fixed intervals.
Advantages of DRAM:
High package density
Low cost

Disadvantages of DRAM:
Requires a refreshing circuit to maintain the charge on the capacitors every few
milliseconds.
System Bus
The system bus is a communication path between the microprocessor and the peripherals; it is
simply a group of wires that carries bits. The microcomputer bus is in many ways similar to a
one-track express subway: it carries bits between the microprocessor and only one peripheral
at a time. The same bus is time-shared to communicate with various peripherals, with the
timing provided by the control section of the microprocessor.
Data Bus
This is a bi-directional bus, because data can flow to or from the CPU. The CPU’s eight data pins,
D0 through D7, can be either inputs or outputs, depending on whether the CPU is performing a
read or a write operation. During a read operation, the data pins act as inputs and receive the
data placed on the data bus by the memory or I/O element. During a write operation, the CPU’s
data pins act as outputs and place data on the data bus, which is then sent to the selected
memory or I/O element.
Address Bus
This is a unidirectional bus, because information flows over it in only one direction, from the
CPU to the memory or I/O elements. The CPU alone can place logic levels on the lines of the
address bus, thereby generating 2^16 = 65,536 different possible addresses. Each of these
addresses corresponds to one memory location or one I/O element. When the CPU wants to
communicate (read or write) with a certain memory location or I/O device, it places the
appropriate 16-bit address code on its 16 address pin outputs, A0 through A15, and onto the
address bus. These address bits are then decoded to select the desired memory location or I/O
device.
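
A small C sketch of address decoding under a hypothetical memory map (the split of
0x0000-0x7FFF for ROM and 0x8000-0xFFFF for RAM is an illustrative assumption):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical memory map: lower 32 KB is ROM, upper 32 KB is RAM. */
const char *decode(uint16_t addr) {
    return (addr < 0x8000) ? "ROM" : "RAM";
}

int main(void) {
    printf("0x1234 -> %s\n", decode(0x1234)); /* ROM */
    printf("0xC000 -> %s\n", decode(0xC000)); /* RAM */
    /* 16 address lines give 2^16 = 65,536 distinct addresses. */
    printf("addresses = %u\n", 1u << 16);
    return 0;
}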
Control Bus
This is the set of signals that is used to synchronize the activities of the separate
microcomputer elements. Some of these control signals, such as RD and WR, are sent by the
CPU to the other elements to tell them what type of operation is currently in progress. The I/O
elements can also send control signals to the CPU.
I/O DEVICES AND THEIR INTERFACING

1. Input / Output (I/O)


 The MPU communicates with the outside world through I/O devices.
 There are two different methods by which the MPU identifies and communicates with I/O
devices. These methods are:
i) Direct I/O (Peripheral)
ii) Memory-Mapped I/O
 The methods differ in terms of:
a) The number of address lines used in identifying an I/O device.
b) The type of control lines used to enable the device.
c) The instructions used for data transfer.
1. Direct I/O (Peripheral):-
 This method uses two instructions (IN & OUT) for data transfer.
 The MPU uses 8 address lines to send the address of the I/O device (it can identify 256 input
devices & 256 output devices).
 The input and output devices are differentiated by the control signals I/O Read (IOR) and I/O
Write (IOW).
 The steps in communicating with an I/O device are similar to those in communicating with
memory and can be summarized as follows:
1. The MPU places an 8-bit device address on the address bus, which is then decoded.
2. The MPU sends a control signal (IOR or IOW) to enable the I/O device.
3. Data are placed on the data bus for transfer.
2. Memory-Mapped I/O:-
 The MPU uses 16 address lines to identify an I/O device.
 This is similar to communicating with a memory location.
 It uses the same control signals (MEMR or MEMW) and instructions as memory access.
 The MPU views these I/O devices as if they were memory locations.
 There are no special I/O instructions.
 It can identify 64K addresses shared between memory & I/O devices (see the sketch after
this list).
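
In C, memory-mapped I/O is conventionally expressed by accessing a device register through
a pointer to a fixed address. The address 0x8000 and the LED/switch register below are purely
illustrative assumptions; this idiom applies on bare-metal embedded targets, not under a
desktop operating system.

#include <stdint.h>

/* Hypothetical device register mapped at address 0x8000. */
#define LED_PORT (*(volatile uint8_t *)0x8000u)

void led_write(uint8_t pattern) {
    /* An ordinary store becomes an I/O write: the same instructions
       and memory control signals as a write to RAM. */
    LED_PORT = pattern;
}

uint8_t switch_read(void) {
    /* An ordinary load becomes an I/O read; 'volatile' prevents the
       compiler from caching or eliminating the access. */
    return LED_PORT;
}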
University Questions

1. What is bus? (2023-24)


2. List advantages of memory mapped I/O technique of data
transfer in microprocessor? (2015-16)
3. What are interfacing logical devices? (2017-18)
Unit-1 Lecture-3
Today’s Target
✓ Microprocessor Evolution Tree
✓ Microprocessor Architecture (Harvard & Princeton)
✓ AKTU PYQs
Microprocessor Evolution Tree

1971 – Intel 4004 - 4 bit μp


1972 – Intel 8008 - 8 bit μp
1974 – Intel 8080 - 8 bit μp
1974 – Motorola 6800 - 8 bit μp
1976 – Zilog Z80 - 8 bit μp
1976 – Intel 8085 - 8 bit μp
1. First Generation – 4-bit Microprocessors
The Intel Corporation came out with the first generation of microprocessors in 1971. They were 4-bit
processors, namely the Intel 4004. The processor ran at 740 kHz and executed about 60,000 instructions
per second. It had 2,300 transistors and 16 pins.
Built on a single chip, it was useful for simple arithmetic and logical operations. A control unit was there to
interpret the instructions from memory and execute the tasks.
2. Second Generation – 8-bit Microprocessors
The second generation began in 1972, when Intel released the first 8-bit microprocessor. It was useful for
arithmetic and logic operations on 8-bit words. The first processor was the 8008, with a clock speed of
500 kHz and 50,000 instructions per second.
It was followed by the 8080 microprocessor in 1974, with a speed of 2 MHz and 60,000 instructions per
second. Lastly came the 8085 microprocessor in 1976, capable of about 769,230 instructions per second
at 3 MHz.
3. Third Generation – 16-bit Microprocessors
The third generation began with the 8086/8088 microprocessors in 1978, with 4.77, 8 & 10 MHz speeds and 2.5
million instructions per second. Other important introductions were the Zilog Z8000 and the 80286, which came out
in 1982 and could execute 4 million instructions per second, packaged with 68 pins.
4. Fourth Generation – 32-bit Microprocessors
 Intel was still the leader as many companies came out with 32-bit microprocessors around
1986. Their clock speeds were between 16 MHz and 33 MHz, with around 275,000 transistors inside.
 One of the notable ones was the Intel 80486 microprocessor of 1989, with a 16-100 MHz clock
speed, 1.2 million transistors, and 8 KB of cache memory. It was followed by the Pentium
microprocessor in 1993, which had a 66 MHz clock speed and 8 KB of cache memory.

5. Fifth Generation – 64-bit Microprocessors


 Beginning around 1995, this generation introduced 64-bit processing, with clock speeds
eventually reaching 1.2 GHz to 3 GHz and transistor counts of up to about 291 million.
 The Core i7, i5, and i3 microprocessors followed in 2008, 2009, and 2010 respectively. These
were some of the key points of this generation.
Microprocessor Architecture (Harvard & Princeton)
When data and code lie in different memory blocks, the architecture is referred
to as Harvard architecture. When data and code lie in the same memory block, the
architecture is referred to as Princeton architecture or Von Neumann architecture.

Fig. Harvard architecture (left) and Princeton architecture (right)


Princeton Architecture
 The Princeton architecture was first proposed by the computer scientist John von Neumann.
In this architecture, one data path or bus exists for both instructions and data. As a result,
the CPU does one operation at a time: it either fetches an instruction from memory or
performs a read/write operation on data. Because they share a common bus, an instruction
fetch and a data operation cannot occur simultaneously.

 Princeton architecture supports simple hardware and allows the use of a single, sequential
memory. Today's processing speeds vastly outpace memory access times, so a very fast but
small amount of memory (cache) is employed local to the processor.
Harvard Architecture

 The Harvard architecture offers separate storage and signal buses for instructions and
data. In the original Harvard machines, data storage was entirely contained within the CPU,
and there was no access to the instruction storage as data; programs had to be loaded by an
operator, as the processor could not boot itself. Computers with this architecture have
separate memory areas for program instructions and data, with separate internal buses,
allowing simultaneous access to both instructions and data.

 In a Harvard architecture, there is no need to make the two memories share properties
such as word width or timing (see the sketch below).
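
A toy C model of the difference (the array sizes and word widths are illustrative assumptions):

#include <stdint.h>

/* Princeton / Von Neumann: one memory holds both code and data, so an
   instruction fetch and a data access contend for the same bus. */
uint8_t unified_mem[65536];

/* Harvard: separate instruction and data memories on separate buses;
   a fetch and a data access can occur in the same cycle, and the two
   memories may even use different word widths. */
uint16_t instr_mem[4096]; /* e.g., 16-bit instruction words */
uint8_t  data_mem[2048];  /* e.g., 8-bit data words */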
Princeton Architecture vs Harvard Architecture

The following points distinguish the Princeton Architecture from the Harvard Architecture.

1. Memory: Princeton uses a single memory shared by both code and data; Harvard uses
separate memories for code and data.
2. Clock cycles: In Princeton, the processor fetches code in one clock cycle and data in
another, so two clock cycles are required; in Harvard, a single clock cycle is sufficient,
as separate buses are used to access code and data.
3. Speed: Princeton is slower, thus more time-consuming; Harvard is faster, thus less
time-consuming.
4. Design: Princeton is simple in design; Harvard is complex in design.


University Questions

1. What is the difference between Harvard architecture and von


Neumann architecture? (2023-24)
2. Explain evolution of microprocessor with its different
generation. (2017-18)
3. Define the following: (i) Nibble (ii) Word (2017-18)
Q. Define the following: (i) Nibble (ii) Word (2017-18)

Nibble: 4 bits

Byte: 8 bits

Word: 16 bits

Long word: 32 bits
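
These sizes map naturally onto C's fixed-width integer types (C has no native nibble type,
so the nibble is shown by masking the low four bits of a byte):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t  byte      = 0xAB;        /* 8 bits  */
    uint8_t  nibble    = byte & 0x0F; /* low 4 bits: 0xB */
    uint16_t word      = 0x1234;      /* 16 bits */
    uint32_t long_word = 0x12345678;  /* 32 bits */

    printf("nibble=%X byte=%X word=%X long_word=%X\n",
           (unsigned)nibble, (unsigned)byte,
           (unsigned)word, (unsigned)long_word);
    return 0;
}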


Unit-1 Lecture-4
Today’s Target

✓ AKTU PYQs
✓ Practice Question
Q1. Differentiate between Microprocessor and Microcontroller? (2022-23)

Ans. Let us now take a look at the most notable differences between a microprocessor and a
microcontroller.

1. Microprocessor: multitasking in nature; can perform multiple tasks at a time (for
example, on a computer we can play music while writing text in a text editor).
   Microcontroller: single-task oriented; for example, a washing machine is designed for
washing clothes only.

2. Microprocessor: RAM, ROM, I/O ports, and timers can be added externally and can vary
in number.
   Microcontroller: RAM, ROM, I/O ports, and timers cannot be added externally; these
components are embedded together on a chip and are fixed in number.

3. Microprocessor: designers can decide the amount of memory or the number of I/O ports
needed.
   Microcontroller: the fixed amount of memory and I/O makes it ideal for a limited but
specific task.

4. Microprocessor: the external memory and I/O ports make a microprocessor-based system
heavier and costlier.
   Microcontroller: microcontrollers are lightweight and cheaper than microprocessor-based
systems.

5. Microprocessor: external devices require more space, and their power consumption is
higher.
   Microcontroller: a microcontroller-based system consumes less power and takes less space.
Q2a. Define compiler or interpreter in programming languages. (2017-18)
Ans.
A compiler is a computer program that translates source code written in a high-level
programming language into machine code or byte code, which is directly understandable and
executable by a computer's processor. Eg:- C++, Java, and C#
An interpreter is a program that executes instructions written in a programming language
directly, line by line, without first compiling the code into machine code. Eg:- Python, Ruby,
and PHP

In short, a compiler transforms an entire set of source code into object code and saves it as
a file before executing it. Conversely, an interpreter converts and executes source code line
by line without saving it, and points out errors along the way.
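
A minimal C sketch of the interpreter idea: the program below directly executes a tiny
invented command language ('+' increments a counter, '-' decrements it, 'p' prints it) one
instruction at a time, generating and saving no machine code:

#include <stdio.h>

/* Tiny interpreter: each command is decoded and executed immediately. */
void interpret(const char *src) {
    int counter = 0;
    for (; *src; src++) {
        switch (*src) {
        case '+': counter++; break;
        case '-': counter--; break;
        case 'p': printf("%d\n", counter); break;
        default:  break; /* ignore anything else */
        }
    }
}

int main(void) {
    interpret("+++p--p"); /* prints 3, then 1 */
    return 0;
}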
Q2b. Differentiate between Compiler and Interpreter? (2022-23)
Ans.
1. Steps of programming with a compiler: program creation; analysis of the language by the
compiler, which throws errors for any incorrect statement; if there are no errors, conversion
of the source code to machine code; linking of the various code files into a runnable program;
finally, running the program.
   Steps of programming with an interpreter: program creation; then execution of the source
statements one by one. Linking of files and generation of machine code are not required.
2. The compiler saves the machine language as machine code on disk; the interpreter does not
save the machine language.
3. Compiled code runs faster than interpreted code.
4. Compilers more often take a large amount of time to analyze the source code; interpreters
take less time.
5. The compiler generates an output file (e.g., .exe); the interpreter does not generate any
output file.
6. Any change in the source program after compilation requires recompiling the entire code;
with an interpreter, a change in the source program does not require retranslation of the
entire code.
7. Compiler errors are displayed together after compiling; interpreter errors are displayed
line by line.
8. The compiler can see the code upfront, which helps run the code faster by performing
optimization; the interpreter works line by line, so optimization is a little slower compared
to compilers.
9. A compiled program does not require the source code for later execution; an interpreted
program requires the source code for later execution.
10. With a compiler, execution of the program takes place only after the whole program is
compiled; with an interpreter, execution happens as every line is checked or evaluated.
11. CPU utilization is higher in the case of a compiler and lower in the case of an interpreter.
12. Compilers are mostly used in production environments; interpreters are mostly used in
programming and development environments.
13. C, C++, C#, etc. are programming languages that are compiler-based; Python, Ruby, Perl,
SNOBOL, MATLAB, etc. are programming languages that are interpreter-based.
Q3. Write about different languages of digital computer. (2017-18)
Ans.

The types of computer languages known to us are:

Low-Level Language
A low-level computer language consists of only 1s and 0s. First- and second-generation
computers were built using such languages. This type of language is easily understood by a
computer, but it is very difficult for humans to understand. Low-level languages are
specifically designed to interact with the computer hardware and are categorized into two
types: machine level language and assembly level language.

1. Machine Level Language


Machine level language is a type of low-level language. This language is believed to be the oldest
computer language. Computers understand only the language of digital electronics, which deals
with the presence and absence of voltages. Two logic conventions can be used within the computer:

Positive logic: the presence of voltage is denoted by 1 and the absence of voltage is denoted by 0.
Negative logic: the presence of voltage is denoted by 0 while the absence of voltage is denoted by 1.
A computer follows one of the two conventions at a time, not both simultaneously. A program can
be written using only 0s and 1s for the computer to understand, and data can also be represented
using only 0s and 1s. Such a program is called a machine language program. A computer can
directly understand a program written in machine language; hence, a machine language program
does not require any translator to convert it from one form to another.
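
As an illustration, a machine language program is nothing more than a sequence of bytes.
The C array below holds the 8085 opcodes for MVI A,32H / ADD B / HLT (the byte values
should be checked against an 8085 opcode table):

#include <stdint.h>

/* A machine language program is just bytes. 8085 opcodes (verify
   against an opcode table): MVI A,32H ; ADD B ; HLT */
const uint8_t program[] = {
    0x3E, 0x32, /* MVI A,32H : load the immediate value 32H into A */
    0x80,       /* ADD B     : A = A + B */
    0x76        /* HLT       : halt the processor */
};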
2. Assembly Level Language
Assembly level language was introduced as an advancement over machine level language. This
computer language uses symbols, popularly known as mnemonics in computer terminology, to
write the instructions. Hence, a program written in assembly level language is more
understandable to humans than one in machine level language. In this language, symbolic names
are used to denote addresses and data. The assembly language code is converted into machine
language code with the help of an assembler so that the computer can understand it (a sketch
of this mapping follows below).
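
A C sketch of the assembler's core job: looking up mnemonics in a table and emitting opcode
bytes. The three-entry table below is an illustrative fragment; a real assembler also handles
operands, labels, and addressing modes.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Tiny mnemonic-to-opcode table (byte values as in the 8085
   example above; verify against a real opcode table). */
struct entry { const char *mnemonic; uint8_t opcode; };

static const struct entry table[] = {
    { "ADD B", 0x80 },
    { "HLT",   0x76 },
    { "NOP",   0x00 },
};

/* Translate one mnemonic line into its machine code byte. */
int assemble_one(const char *line, uint8_t *out) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(line, table[i].mnemonic) == 0) {
            *out = table[i].opcode;
            return 1;
        }
    }
    return 0; /* unknown mnemonic */
}

int main(void) {
    uint8_t byte;
    if (assemble_one("ADD B", &byte))
        printf("ADD B -> %02X\n", (unsigned)byte); /* prints 80 */
    return 0;
}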

High-Level Language
High-level languages are the more advanced stage in the evolution of computer languages.
The main goal of these languages is to make programming easier and less error-prone. They
use words and commands along with symbols and numbers, and are created to be more
user-friendly and easier for humans to understand than low-level languages. They use
keywords similar to English words, making coding more intuitive. Some examples of
high-level programming languages are:
C
C++
Java
JavaScript
Python
C#
PHP
Difference between High-Level and Low-Level Language

1. High-level languages are easily understood by humans, as they use English-like
statements; low-level languages are hard for humans to understand because of their use
of binary numbers, which are easily understood by computers.
2. High-level languages are human-friendly; low-level languages are machine-friendly.
3. High-level programs take longer to execute; low-level program execution time is less.
4. High-level programs are simple to maintain; low-level programs are complex to maintain.
5. Debugging is easy in high-level languages; debugging is hard in low-level languages.
6. Programs in high-level languages are portable and can be used on any computer;
programs in low-level languages are not portable.
7. High-level languages are widely used in today's technology; low-level languages are not
widely used in prevailing technology.
