CSC 221
Machine Language
Machine language is written as strings of binary 1s and 0s. It is the only
language which a computer understands without using a translation program.
A machine language instruction has two parts. The first part is the operation code, which tells
the computer what function to perform; the second part is the operand, which tells the
computer where to find or store the data to be manipulated. A programmer must
write numeric codes for both the instructions and the storage locations of the data.
Note - In computer programming, an operand is a term used to describe any object that is
capable of being manipulated. For example, in "1 + 2" the "1" and "2" are the operands and
the plus symbol is the operator.
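To make the opcode/operand split concrete, here is a small Python sketch that decodes a hypothetical 8-bit instruction word. The 4/4-bit split and the opcode table are illustrative assumptions, not the encoding of any real machine:

```python
# Decode a hypothetical 8-bit machine instruction: the high 4 bits
# are the operation code, the low 4 bits are the operand (a storage
# location). This encoding is invented for illustration only.
OPCODES = {0b0001: "LOAD", 0b0010: "ADD", 0b0011: "STORE"}

def decode(instruction: int) -> tuple[str, int]:
    opcode = (instruction >> 4) & 0b1111   # what to do
    operand = instruction & 0b1111         # where the data is
    return OPCODES[opcode], operand

# 0b0010_0111 -> ADD using storage location 7
print(decode(0b00100111))   # ('ADD', 7)
```

A real processor does the same kind of field extraction in hardware; the programmer working in raw machine language must write these bit patterns by hand.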
Assembly Language
It is a low level programming language that allows a user to write a program using
alphanumeric mnemonic codes, instead of numeric codes, for a set of instructions.
Besides being laborious to write, assembly language programs have another drawback: They only
run on one microprocessor family, sometimes only on a single microprocessor. In other words, an
assembly program written for the Mac won't run on a PC, and vice versa. That's because each
processor knows a different set of operations.
Of course, there must be a reason people write programs in assembly language. Actually, there
are three: speed, program size, and control. Assuming equal skill of the programmers, an
assembly language program is almost always faster than the equivalent high-level program, and in
its finished, executable form, it's usually much smaller (even though the assembly programmer had
to write many more lines of code). And because you can control the microprocessor on a step-by-step
basis, your program gives you exactly the results you want.
Disadvantages –
It is machine dependent
High Level Language
Each statement in a high level language is a macro instruction which is translated into several
machine language instructions.
Compiler: A translator program which translates a high level language program into an
equivalent machine language program. It generates the complete set of machine language
instructions for the whole high level program before execution.
Linker: A program used with a compiler to provide links to the libraries needed for an
executable program. It takes one or more object files generated by a compiler and combines
them into a single executable program.
Interpreter: A translator which processes a high level language program one statement at a
time. It takes a statement, translates it into machine language instructions, and then
immediately executes the result. Its output is the result of program execution.
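The statement-at-a-time behaviour of an interpreter can be sketched in Python. The three-instruction mini-language (SET/ADD/PRINT) is invented for illustration and is not LMC or any real assembly language:

```python
# A minimal interpreter: it takes one statement at a time,
# "translates" it (parses it), and immediately executes the result.
# The tiny SET/ADD/PRINT instruction set is invented for illustration.
def interpret(program: list[str]) -> list[int]:
    acc = 0          # a single accumulator register
    output = []
    for statement in program:               # one statement at a time
        op, _, arg = statement.partition(" ")
        if op == "SET":
            acc = int(arg)
        elif op == "ADD":
            acc += int(arg)
        elif op == "PRINT":
            output.append(acc)              # the program's output
        else:
            raise ValueError(f"unknown statement: {statement}")
    return output

print(interpret(["SET 2", "ADD 3", "PRINT"]))   # [5]
```

A compiler, by contrast, would translate the whole list into machine code first and only then run it; the output of the compiler is a program, while the output of the interpreter is the program's result.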
Advantages –
It is machine independent
It is easier to learn and use
It is easier to maintain and gives fewer errors
Disadvantages –
It lowers efficiency
It is less flexible
To begin our discussion of addressing techniques, let's first define the term. As used
throughout this book, the term addressing refers to the way we tell the 6502 what memory
location we wish to operate on. For example, BASIC has two addressing modes. The first is a
direct mode in which the memory location (address) in question is specified directly.
Understanding Memory Address Modes
There are many ways to locate data and instructions in primary memory and these
methods are called “memory address modes”.
Memory address modes determine the method used within the program to access
data either from the Cache or the RAM.
In this challenge we will focus on four different memory address modes:
Immediate Access
Direct Access
Indirect Access
Indexed Access
We will use Assembly programs to see how these modes can affect the output
or flow of a program. Remember, with LMC (or any assembly language) an
instruction consists of an opcode followed by an operand.
For instance:
Instruction: ADD 7
Opcode (using a mnemonic): ADD
Operand: 7
Memory address modes enable us to provide either a hard coded value or a
memory location for the operand.
Immediate Addressing
Immediate addressing means that the operand is the actual value to be used, not a memory
address. With immediate addressing, an instruction such as ADD 7 would add the value 7
itself to the accumulator. Because no extra memory access is needed to fetch the data, this is
the fastest memory address mode.
Direct Addressing
Direct Addressing is a very simple way of addressing memory – it means that the operand of an instruction
refers directly to a location in memory.
For example:
ADD 7
This instruction would not add the value 7 to the accumulator. Instead it would add the value
currently stored at memory location 7 (which will need to be fetched from the Cache/RAM).
This memory address mode is fairly fast (not as fast as immediate addressing though).
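The contrast between immediate and direct addressing can be simulated in Python. The memory contents below are made-up values chosen only to show the difference:

```python
# Simulated memory: index = address, value = contents.
# Made-up contents: location 7 happens to hold the value 42.
memory = [0, 0, 0, 0, 0, 0, 0, 42]

def add_immediate(acc: int, operand: int) -> int:
    # Immediate: the operand IS the value; no memory access needed.
    return acc + operand

def add_direct(acc: int, operand: int) -> int:
    # Direct: the operand is an address; the value must be fetched.
    return acc + memory[operand]

print(add_immediate(0, 7))   # 7  (the literal value)
print(add_direct(0, 7))      # 42 (the contents of location 7)
```

The same operand, 7, produces different results depending on the addressing mode, which is exactly the distinction the two sections above describe.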
Indirect Addressing
Indirect addressing means that the address of the data is held in an intermediate location so that
the address is first ‘looked up’ and then used to locate the data itself. Fetching the value is a
two-step process: first the operand is used to locate an entry in a lookup table (called a vector
table) where the actual address of the value is stored; then that address is used to fetch the
value itself.
So, indirect addressing mode is a two-step process: the operand is the address of a memory
location that contains the address where the value can be fetched from.
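The two-step lookup of indirect addressing can be sketched as follows; the memory contents are invented for illustration:

```python
# Indirect addressing: the operand points at a memory location that
# itself holds the address of the data, so fetching takes two steps.
# Made-up contents: location 5 holds the address 12; location 12
# holds the actual value 99.
memory = {5: 12, 12: 99}

def add_indirect(acc: int, operand: int) -> int:
    pointer = memory[operand]   # step 1: look up the stored address
    value = memory[pointer]     # step 2: fetch the data from it
    return acc + value

print(add_indirect(0, 5))   # 99
```

Note the cost: two memory accesses per fetch, which is why indirect addressing is slower than direct addressing.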
Indexed Addressing
Indexed addressing means that the final address for the data is determined by adding an offset to a base
address.
This memory address mode is ideal for storing and accessing values in arrays. Arrays are often
stored as a complete block in memory (a block of consecutive memory locations). The array has
a base address, which is the location of its first element; an index is then used that adds an
offset to the base address in order to fetch the specified element within the array.
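A sketch of indexed addressing over an array stored as a consecutive block; the base address and array contents are invented:

```python
# Indexed addressing: final address = base address + index (offset).
# The base address (10) and array contents are made up for illustration.
memory = [0] * 20
BASE = 10                          # base address of the array
for i, v in enumerate([3, 1, 4, 1, 5]):
    memory[BASE + i] = v           # array stored as a consecutive block

def load_indexed(base: int, index: int) -> int:
    return memory[base + index]    # offset added to the base address

print(load_indexed(BASE, 0))   # first element -> 3
print(load_indexed(BASE, 2))   # third element -> 4
```

Stepping the index from 0 upwards walks through the array one element at a time, which is how assembly loops typically traverse arrays.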
Register Addressing
Register addressing mode indicates that the operand data is stored in a register, so the instruction
contains the address of the register and the data is retrieved from the register itself. Because
registers are inside the CPU, no main-memory access is needed, which makes this mode very fast.
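A sketch of register addressing; the register names and their contents are invented for illustration:

```python
# Register addressing: the instruction names a register, and the data
# is read from that register rather than from main memory.
# Invented register file with made-up contents.
registers = {"R0": 0, "R1": 25, "R2": 7}

def add_register(acc: int, reg_name: str) -> int:
    # No main-memory access: the operand names a register directly.
    return acc + registers[reg_name]

print(add_register(0, "R1"))   # 25
```

The instruction's operand field only needs enough bits to identify one of a handful of registers, which also keeps register-addressed instructions short.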
4-bit microprocessors
A 4-bit microprocessor or computer architecture has a data path width, or a maximum
operand width, of 4 bits (a nibble). These architectures or microprocessors typically
have a matching register file with 4-bit-wide registers and 4- to 8-bit-wide addresses.
Most of the first microprocessors during the early 1970s had 4-bit word sizes. Both the Intel
4004, the first commercial microprocessor, and the 4040 had a 4-bit word length, but had 8-
bit instructions. Some of the first microcontrollers such as the TMS1000 made by Texas
Instruments and NEC's μPD751 also had 4-bit words. 4-bit words proved to be very
limiting, and by 1974 there was a shift to larger designs such as 8- and 12-bit
architectures.
The advent of microprocessors was accidental. Intel Corporation founded by Moore and
Noyce in 1968 was initially focused on creating semiconductor memory (DRAMs and
EPROMs) for digital computers. In 1969, Busicom, a Japanese calculator manufacturer,
approached Intel with a design for a small calculator which required 12 custom chips. Ted
Hoff, an Intel engineer, felt that a general-purpose logic device could replace the separate
multiple components. This idea led to the development of the first microprocessor.
Microprocessors made a modest beginning as the main drivers for calculator designs.
Federico Faggin and Stanley Mazor turned Ted Hoff's ideas into hardware at Intel. The result
was the Intel 4000 family, comprising multiple chips: the 4001 (2K ROM), the 4002 (320-bit
RAM), the 4003 (10-bit I/O shift register) and the 4004, a 4-bit central processing unit
(CPU). On November 15, 1971, Intel introduced the 4004 microprocessor to the worldwide
market. It contained 2,300 transistors and was a 4-bit PMOS chip. It was not truly a
general-purpose microprocessor, as it was basically designed for a calculator. At almost the
same time, Texas Instruments developed the 4-bit microprocessor TMS 1000; Texas
Instruments is recognized as the inventor and owner of the microprocessor patent.
8-bit microprocessors
In computer architecture, 8-bit integers, memory addresses, or other data units are those that
are 8 bits (1 octet or 1 Byte) wide. Also, 8-bit CPU and ALU architectures are those that are
based on registers, address buses, or data buses of that size. 8-bit is also a generation of
microcomputers in which 8-bit microprocessors were the norm. In general, an 8-bit
microprocessor means one with an 8-bit data (information) bus.
Federico Faggin and his team at Intel designed a chip for controlling a CRT display,
commissioned by Computer Terminal Corporation, which was later known as Datapoint. The
chip did not meet Datapoint's functional requirement of speed, and they decided not to use it.
The world's first 8-bit general-purpose microprocessor, the 8008, was developed by Intel in
1972. The Intel 8008 was used in the famous Mark-8 computer kit. Realizing the potential of
this product, Intel introduced an improved version of the 8008, known as the 8080, in 1974.
The Intel 8080 really created the microprocessor market. Other notable 8-bit
microprocessors include the Motorola 6800, designed for use in automotive and industrial
applications, the Rockwell PPS-8 and the Signetics 2650. They had innovative and powerful
instruction set architectures.
As integration technology improved, Intel combined the 8224 clock generator and the 8228
system controller chips, which the 8080 required, together with the 8080 microprocessor
itself into a single chip: the Intel 8085. Other improved 8-bit microprocessors include the
Motorola MC6809, designed for high performance, the Zilog Z-80, and the RCA COSMAC,
which was designed and developed for aerospace applications.
Moore's 1975 prediction of exponential growth in the complexity of integrated circuits held
true. He also forecast a change for the next decade, indicating that the pace of complexity
increase would slow to a doubling every two years as design capabilities matured.
As their processing power increased, microprocessors came to dominate as the CPUs of
digital computers. Before the arrival of microprocessors, CPUs were built from individual
SSI chips. A microcomputer can be defined as a digital computer that uses a single-chip
microprocessor as its CPU.
32-bit microprocessors
In computer architecture, 32-bit integers, memory addresses, or other data units are those that
are 32 bits (4 octets or 4 Bytes) wide. Also, 32-bit CPU and ALU architectures are those that
are based on registers, address buses, or data buses of that size. 32-bit microcomputers are
computers in which 32-bit microprocessors are the norm. In general, an n-bit microprocessor
can handle an n-bit word size.
As an n-bit register can store 2^n different values, a 32-bit register can store 2^32 different
values. The range of integer values that can be stored in 32 bits depends on the integer
representation used. The two most common representations for integer data are unsigned
and signed. The range is 0 through 4,294,967,295 (2^32 − 1) for an unsigned binary number,
and −2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1) for a two's-complement
signed number.
One important consequence is that a processor with 32-bit memory addresses can directly
access at most 4 GB of byte-addressable memory, though in practice the limit may be lower.
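The 32-bit ranges quoted above can be checked directly with a short calculation:

```python
# Verify the unsigned and signed (two's-complement) ranges of a
# 32-bit register, and the number of distinct addresses it can hold.
n = 32
unsigned_max = 2**n - 1        # largest unsigned value
signed_min = -(2**(n - 1))     # smallest two's-complement value
signed_max = 2**(n - 1) - 1    # largest two's-complement value

print(unsigned_max)   # 4294967295
print(signed_min)     # -2147483648
print(signed_max)     # 2147483647
print(2**n)           # 4294967296 distinct byte addresses, i.e. 4 GB
```

The same formulas (2^n values, −2^(n−1) through 2^(n−1) − 1 signed) apply to any register width, including the 16-bit case discussed below.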
There were multiple instances of 32-bit microprocessors. As examples, we can consider
following
In 1985, Intel announced the 80386, a 32-bit microprocessor with 275,000 transistors.
It supported multitasking.
The Intel 486 microprocessor, introduced in 1989, was the first to offer a built-in
math co-processor. It had 1.2 million transistors.
Intel Pentium microprocessor with 3.1 million transistors was introduced in the year
1993. It allowed computers to process real-world data like speech, sound, handwriting
and photographic images.
In the year 1997, the 7.5-million transistor Intel Pentium II microprocessor was
designed specifically to process audio, video and graphics data efficiently.
In the year 1999, Intel Celeron processors range designed for the value PC market
segment were released.
Intel Pentium III processors, with 9.5 million transistors, designed for streaming audio,
video and speech recognition applications, advanced imaging and 3D, and Intel Pentium III
Xeon processors for the workstation and server market segments, were introduced in 1999.
Intel Pentium IV processors with more than 42 million transistors introduced from
2000 are used in the present PCs. In such computers, users can deliver TV-like video
via the internet, communicate with real-time video, create professional quality movies
and voice, render 3D graphics in real time, quickly encode music for MP3 players and
can concurrently run several multimedia applications when the system is connected to
the Internet.
Introduced from 2001, Intel Xeon processors are targeted at high-performance and
mid-range dual-processor workstations and at dual- and multiprocessor server
configurations.
16-bit microprocessors
In computer architecture, 16-bit integers, memory addresses, or other data units are those that
are 16 bits (2 octets or 2 Bytes) wide. Also, 16-bit CPU and ALU architectures are those that
are based on registers, address buses, or data buses of that size. 16-bit microcomputers are
computers in which 16-bit microprocessors were the norm.
As an n-bit register can store 2^n different values, a 16-bit register can store 2^16 different
values. The signed range of integer values that can be stored in 16 bits is −32,768 (−2^15)
through 32,767 (2^15 − 1), while the unsigned range is 0 through 65,535 (2^16 − 1). Since
2^16 = 2^6 × 2^10 = 64 × 1024 = 64K, a processor with 16-bit memory addresses can directly
access 64 KB (65,536 bytes) of byte-addressable memory. If a system uses segmentation with
16-bit segment offsets, more can be accessed.
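The 16-bit arithmetic above, including the 64 KB address-space calculation, can be verified the same way:

```python
# 16-bit ranges and the directly addressable memory size.
n = 16
distinct_values = 2**n             # 65536 distinct values
signed_min = -(2**(n - 1))         # -32768
signed_max = 2**(n - 1) - 1        # 32767
unsigned_max = 2**n - 1            # 65535

# 2^16 = 2^6 * 2^10 = 64 * 1024, i.e. 64 KB of byte addresses.
print(distinct_values)             # 65536
print(2**6 * 2**10)                # 65536
print(signed_min, signed_max)      # -32768 32767
```

Segmentation schemes such as the 8086's segment:offset addressing combine a 16-bit segment with a 16-bit offset to reach beyond this 64 KB limit.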
In 1978, Intel introduced the 16-bit microprocessor 8086 (16-bit bus), and in 1979 the 8088
(8-bit bus), which had 29,000 transistors. In 1981, IBM selected the Intel 8088 for their
personal computer (IBM PC). In 1982, Intel released the 16-bit microprocessor 80286
(having 134,000 transistors) to be used as the CPU for advanced technology personal
computers (PC-AT). It was called the Intel 286 and was the first Intel processor that could
run all the software written for its predecessor, the Intel 8088. This backward software
compatibility was important to its great commercial success, and it remains a hallmark of
Intel's family of microprocessors.