
CSC221: MACHINE LANGUAGE PROGRAMMING (2 UNITS) 14th Feb, 2020

Pre-requisites: CSC 101, CSC 102, CSC 113


Course Contents
Introduction to Machine-level programming, addressing techniques, at least 16-bit instruction
set and addressing modes, macros, assembler, segmentation and linkage; assembler
construction, interpretation routines.

OVERVIEW OF MACHINE LANGUAGE, ASSEMBLY AND HIGH LEVEL COMPUTER LANGUAGE

Machine Language

Machine Language is the language written as strings of binary 1's and 0's. It is the only
language which a computer understands without using a translation program.

A machine language instruction has two parts. The first part is the operation code, which tells
the computer what function to perform; the second part is the operand, which tells the
computer where to find or store the data to be manipulated. A programmer needs to write the
numeric codes for the instruction and for the storage location of the data.

Note - In computer programming, an operand is a term used to describe any object that is
capable of being manipulated. For example, in "1 + 2" the "1" and "2" are the operands and
the plus symbol is the operator.
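The exact binary layout of the two parts differs from one processor to another, but as a purely hypothetical illustration, the short C sketch below assumes a 16-bit instruction word whose upper 8 bits hold the operation code and whose lower 8 bits hold the operand, and shows how the two parts can be separated:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Hypothetical layout: 8-bit opcode in the upper byte, 8-bit operand below it. */
        uint16_t instruction = 0x01FE;            /* example instruction word only      */
        unsigned opcode  = instruction >> 8;      /* what operation to perform          */
        unsigned operand = instruction & 0xFFu;   /* where the data is, or its value    */
        printf("opcode = %u, operand = %u\n", opcode, operand);  /* opcode = 1, operand = 254 */
        return 0;
    }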

Machine language example

Below is an example of machine language (binary) for the text "Hello World".

01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100
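The bit patterns above are simply the ASCII codes of the characters. As an illustration only, the small C program below reproduces the same listing by printing each character of the string as an 8-bit binary number:

    #include <stdio.h>

    int main(void) {
        const char *text = "Hello World";
        for (const char *p = text; *p != '\0'; p++) {
            /* Print the 8 bits of each character, most significant bit first. */
            for (int bit = 7; bit >= 0; bit--)
                putchar(((*p >> bit) & 1) ? '1' : '0');
            putchar(' ');
        }
        putchar('\n');
        return 0;
    }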

Disadvantages –

• It is machine dependent, i.e. it differs from computer to computer.
• It is difficult to program and write
• It is prone to errors
• It is difficult to modify
Assembly Language

It is a low-level programming language that allows a user to write a program using
alphanumeric mnemonic codes instead of numeric codes for a set of instructions.

It requires a translator known as an assembler to convert assembly language into machine
language so that it can be understood by the computer. It is easier to remember and write than
machine language. Assembly language is the oldest and simplest class of programming
language, invented in the 1950s soon after the manufacture of the first computers. In assembly
language the instruction codes are made more user-friendly, so that by looking at a program
written in assembly language one can follow its steps more easily than a sequence of 0s and 1s.
An assembly language is a programming language that allows a programmer (a human) to tell
the microprocessor (the chip) in the computer exactly what to do, in terms of the specific
operations the processor knows how to perform. The computer instructions are written as easily
understandable short words called mnemonics. For example, MOV stands for moving data, SUB
stands for subtraction, ADD stands for addition, etc. All instructions are written in capital letters.
A program in assembly language needs software known as an assembler, which converts the
program consisting of mnemonics and data into machine language. Assembly programs are still
not portable; nevertheless, it is easier to write them and to debug them than machine language
programs.
High-level languages (HLL) are more user-friendly. These languages consist of words
called keywords and other syntax text which is easily understandable. An effort is made to make
the language as close to our day-to-day language as possible. However, natural languages cannot
be used as computer languages since they are not precise. In all high-level languages appropriate
words are chosen and each of these words is made to represent a set of computer instructions.
One keyword may translate into a number of machine language instructions. Thus, an HLL also
reduces the number of lines of code that a programmer writes. But programs written in an HLL
have to be converted to machine language; for this we need compilers or interpreters. C is a
compiled language, and compilers are available for almost all computing platforms. It is worthwhile
to note that programs written in machine language are the most efficient in execution because they
do not involve any translation or interpretation, as is the case with all high-level languages.
You can think of the difference between assembly language and a high-level language such as
BASIC, C, or Pascal this way: A program in a high-level language is like saying "point at me," while
the assembly language version is like telling a person to contract the muscles that elevate the
shoulder, then contract the muscles that extend the elbow, and finally contract the muscles that
extend the index finger. This analogy isn't perfect, but it should give you the flavor of the difference.
In assembly, each programming line corresponds directly to an instruction in the processor's
machine language.

Besides being laborious to write, assembly language programs have another drawback: They only
run on one microprocessor family, sometimes only on a single microprocessor. In other words, an
assembly program written for the Mac won't run on a PC, and vice versa. That's because each
processor knows a different set of operations.
Of course, there must be a reason people write programs in assembly language. Actually, there
are three: speed, program size, and control. Assuming equal skill of the programmers, an
assembly language program is almost always faster than the equivalent high-level program, and in
its finished, executable form it is usually much smaller (even though the assembly programmer had
to write many more lines of code). And because you can control the microprocessor on a
step-by-step basis, your program gives you exactly the results you want.

Assembler – It is a computer program which converts or translates assembly language into
machine language. It assembles the machine language program in the main memory of the
computer and makes it ready for execution.

Advantages –

• It is easy to understand and use
• It is easy to locate and correct errors
• It is easier to modify

Disadvantages –

• It is machine dependent

High level Language

It is a machine independent language. It enables a user to write programs in a language which
resembles English words and familiar mathematical symbols. COBOL was the first high
level language developed for business.

Each statement in a high level language is translated into several machine language
instructions.

A compiler is a translator program which translates a high level programming language into
equivalent machine language programs. It compiles a set of machine language instructions
for every high level language program.

Source code: It is the input, or the programming instructions, of a procedural language.


The compiler translates the source code into machine level language which is known as
object code. Object code can be saved and executed as and when desired by the user.

Source Code → Language Translator Program → Object code

Linker: A program used with a compiler to provide links to the libraries needed for an
executable program. It takes one or more object files generated by a compiler and combines
them into a single executable program.
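As a concrete sketch of this pipeline (the file names and commands are only an example, assuming the common GCC toolchain), a C source file is compiled into an object file and then linked with the standard library into an executable:

    /* hello.c - source code written by the programmer                        */
    /* Compile to object code:  gcc -c hello.c        (produces hello.o)      */
    /* Link into an executable: gcc hello.o -o hello  (produces hello)        */
    #include <stdio.h>

    int main(void) {
        printf("Hello from a compiled and linked program\n");
        return 0;   /* the linker supplied the C library code that printf needs */
    }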

Interpreter: It is a translator that executes a high level language program directly.
It takes one statement at a time, translates it into machine language instructions and then
immediately executes the result. Its output is the result of program execution.

Advantages of High level Language –

• It is machine independent
• It is easier to learn and use
• It is easier to maintain and gives fewer errors

Disadvantages –

• It lowers efficiency
• It is less flexible

Note - In computing, an opcode (operation code) is the portion of a machine language
instruction that specifies the operation to be performed.

To begin our discussion of addressing techniques, let's first define the term. As used
here, the term addressing refers to the way we tell the 6502 what memory
location we wish to operate on. For example, BASIC has two addressing modes. The first is a
direct mode in which the memory location (address) in question is specified directly, for
example by writing the numeric address itself, as BASIC's PEEK and POKE statements do.
Understanding Memory Address Modes

There are many ways to locate data and instructions in primary memory and these
methods are called “memory address modes”.

Memory address modes determine the method used within the program to access
data either from the Cache or the RAM.
In this section we will focus on four different memory address modes:

• Immediate Access
• Direct Access
• Indirect Access
• Indexed Access

We will use assembly programs to see how these modes can impact the output
or flow of a program. Remember, with LMC (the Little Man Computer), or with any assembly
language, an instruction consists of an opcode followed by an operand.
For instance:
• Instruction: ADD 7
• Opcode (using a mnemonic): ADD
• Operand: 7
Memory address modes enable us to provide either a hard-coded value or a
memory location for the operand.

So let's recap the differences between these memory address modes.

Immediate Addressing

Immediate addressing means that the data to be used is hard-coded into
the instruction itself. For instance, using assembly language code:

ADD 7

Nothing needs to be fetched from memory. This instruction means that
the value 7 will be added to the value currently stored in the accumulator.
This is the fastest method of addressing as it does not involve fetching
anything from the main memory at all.
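As a rough C analogy (an illustration only, not LMC code), an immediate operand is like a literal constant written directly in the statement; the 7 is part of the compiled instruction stream rather than being fetched from a data location:

    #include <stdio.h>

    int main(void) {
        int accumulator = 10;
        accumulator = accumulator + 7;   /* 7 is immediate: it travels inside the instruction */
        printf("%d\n", accumulator);     /* prints 17 */
        return 0;
    }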

Direct Addressing
Direct Addressing is a very simple way of addressing memory – it means that the operand of an instruction
refers directly to a location in memory.

For example:

ADD 7

This instruction would not add the value 7 to the accumulator. Instead it would add the value
currently stored at memory location 7 (which will need to be fetched from the cache/RAM).

This memory address mode is fairly fast (though not as fast as immediate addressing).
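Continuing the C analogy (illustration only), direct addressing names a memory location and fetches whatever value it holds; here the array memory stands in for main memory and index 7 for address 7:

    #include <stdio.h>

    int main(void) {
        int memory[10] = {0, 0, 0, 0, 0, 0, 0, 42, 0, 0};   /* location 7 holds 42 */
        int accumulator = 10;
        accumulator = accumulator + memory[7];   /* fetch the value stored AT address 7 */
        printf("%d\n", accumulator);             /* prints 52, not 17 */
        return 0;
    }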

Indirect Addressing

Indirect addressing means that the address of the data is held in an intermediate location, so that
the address is first 'looked up' and then used to locate the data itself. Fetching the value is a
two-step process: first the indirect address is used to locate an entry in a lookup table (sometimes
called a vector table) where the actual address of the value is stored.

So, indirect addressing is a two-step process: the operand is an address of a memory
location that contains the address from which the value can be fetched.
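In the same hedged C analogy, indirect addressing takes two steps: the operand names a location whose contents are themselves an address, and the data is then fetched from that second address:

    #include <stdio.h>

    int main(void) {
        int memory[10] = {0};
        memory[7] = 3;     /* location 7 holds an ADDRESS (it points at location 3) */
        memory[3] = 99;    /* location 3 holds the actual data                      */
        int accumulator = 0;
        int address = memory[7];                      /* step 1: look up the address */
        accumulator = accumulator + memory[address];  /* step 2: fetch the data      */
        printf("%d\n", accumulator);                  /* prints 99 */
        return 0;
    }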

Indexed Addressing
Indexed addressing means that the final address for the data is determined by adding an offset to a base
address.

This memory address mode is ideal for storing and accessing values held in arrays. Arrays are often
stored as a complete block in memory (a block of consecutive memory locations). The array has
a base address, which is the location of the first element; an index is then used to add an
offset to the base address in order to fetch the specified element within the array.
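A short C sketch of the same idea (illustrative only): the array name supplies the base address, and the index supplies the offset that is added to it to reach a particular element:

    #include <stdio.h>

    int main(void) {
        int scores[5] = {10, 20, 30, 40, 50};   /* one block of consecutive memory locations */
        int index = 3;
        /* scores is the base address; the index is scaled and added to it as an offset. */
        printf("%d\n", scores[index]);          /* prints 40, the fourth element */
        return 0;
    }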

Register Addressing
Register addressing mode indicates that the operand data is stored in a register itself, so the instruction
contains the address of the register. The data is then retrieved from that register rather than from main memory.
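C has no portable way to name a specific processor register, but its register storage class hints that a variable should be kept in a register rather than in memory; the sketch below (illustrative only, since modern compilers treat the keyword merely as a hint) mirrors the idea that the operand already sits inside the processor:

    #include <stdio.h>

    int main(void) {
        register int counter = 0;   /* ask the compiler to keep counter in a register    */
        for (int i = 0; i < 5; i++)
            counter += i;           /* the operands come from registers, not from memory */
        printf("%d\n", counter);    /* prints 10 */
        return 0;
    }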

4-bit microprocessors
A 4-bit microprocessor or computer architecture has a data path width, or highest
operand width, of 4 bits (a nibble). These architectures or microprocessors typically
have a matching register file with 4-bit-wide registers and 4- to 8-bit-wide addresses.
Most of the first microprocessors during the early 1970s had 4-bit word sizes. Both the Intel
4004, the first commercial microprocessor, and the 4040 had a 4-bit word length but 8-bit
instructions. Some of the first microcontrollers, such as the TMS1000 made by Texas
Instruments and NEC's μPD751, also had 4-bit words. 4-bit words proved to be very
limiting, and by 1974 there was a shift to larger architectures such as 8- and 12-bit
architectures.
The advent of microprocessors was accidental. Intel Corporation, founded by Moore and
Noyce in 1968, was initially focused on creating semiconductor memory (DRAMs and
EPROMs) for digital computers. In 1969 a company called Busicom, a Japanese calculator
manufacturer, approached Intel with a design for a small calculator, which
required 12 custom chips. Ted Hoff, an Intel engineer, felt that a general-purpose logic device
could replace the separate multiple components. This was the first idea leading towards the
development of the first microprocessor. Microprocessors thus made a modest beginning as the
main drivers for calculator designs.
Federico Faggin and Stanley Mazor turned Ted Hoff's ideas into hardware at Intel. The result
was the Intel 4000 family comprising multiple chips: the 4001 (2K ROM), the 4002 (320-bit
RAM), the 4003 (10-bit I/O shift register) and the 4004, a 4-bit central processing unit (CPU).
On November 15, 1971, Intel introduced the 4004 microprocessor to the worldwide market.
It contained 2,300 transistors and was a 4-bit PMOS chip. It was not truly a general-purpose
microprocessor, as it was basically designed for a calculator. At almost the same time, Texas
Instruments also developed the 4-bit TMS 1000 microprocessor. Texas Instruments is recognized
as the inventor and owner of the microprocessor patent.

8-bit microprocessors
In computer architecture, 8-bit integers, memory addresses, or other data units are those that
are 8 bits (1 octet or 1 byte) wide. Also, 8-bit CPU and ALU architectures are those that are
based on registers, address buses, or data buses of that size. 8-bit also names a generation of
microcomputers in which 8-bit microprocessors were the norm. Mainly, an 8-bit
microprocessor means one with an 8-bit data or information bus.
Federico Faggin and his team at Intel designed a chip for controlling a CRT display,
commissioned by Computer Terminal Corporation (the company later known as Datapoint).
The chip did not meet Datapoint's functional requirement of speed, and the company decided
not to use it.
The world's first 8-bit general-purpose microprocessor, the 8008, was developed by Intel in 1972.
The Intel 8008 was used in the famous Mark-8 computer kit. On realizing the potential of this
product, Intel introduced an improved version of the 8008, and the resulting architecture became
known as the 8080 microprocessor in 1974. The Intel 8080 really created the microprocessor
market. Other notable 8-bit microprocessors were the Motorola 6800, which was designed for
use in automotive and industrial applications, the Rockwell PPS-8 and the Signetics 2650; they
had innovative and powerful instruction set architectures.
With the improvement of integration technologies, Intel integrated the 8224 clock generator and
the 8228 system controller chips required by the 8080, along with the 8080 microprocessor
itself, within a single chip: the Intel 8085. Other improved 8-bit microprocessors include the
Motorola MC6809, designed for high performance, the Zilog Z-80, and the RCA COSMAC,
which was designed and developed for aerospace applications.
Moore's 1975 prediction of exponential growth in the complexity of integrated circuits held
true. He also forecast a change for the next decade, indicating that the pace of complexity
increase would slow to a doubling every two years as design capabilities matured.
Microprocessors came to dominate as the CPU of digital computers as their processing power
increased. Before the arrival of microprocessors, CPUs were built from individual SSI chips.
A microcomputer can be defined as a digital computer that uses a single-chip microprocessor
as its CPU.

32-bit microprocessors
In computer architecture, 32-bit integers, memory addresses, or other data units are those that
are 32 bits (4 octets or 4 bytes) wide. Also, 32-bit CPU and ALU architectures are those that
are based on registers, address buses, or data buses of that size. 32-bit microcomputers are
computers in which 32-bit microprocessors are the norm. We know that an n-bit microprocessor
can handle an n-bit word size.
Since an n-bit register can store 2^n different values, a 32-bit register can store 2^32 different
values. The range of integer values that can be stored in 32 bits depends on the integer
representation used. The two most common representations for integer data are unsigned and
signed. The range is 0 through 4,294,967,295 (2^32 − 1) for representation as an unsigned
binary number, and −2,147,483,648 (−2^31) through 2,147,483,647 (2^31 − 1) for
representation as two's complement signed numbers.
One important consequence is that a processor with 32-bit memory addresses can directly
access at most 4 GB of byte-addressable memory, though in practice the limit may be lower
due to other constraints.
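These limits can be checked directly. The short C program below (an illustration, assuming the standard <inttypes.h> header and its fixed-width integer macros) prints the unsigned and signed ranges of a 32-bit integer:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void) {
        printf("unsigned 32-bit range: 0 to %" PRIu32 "\n", UINT32_MAX);
        printf("signed 32-bit range:   %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);
        return 0;
    }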

There were multiple instances of 32-bit microprocessors. As examples, we can consider the
following:
• In 1985, Intel announced the 80386, a 32-bit microprocessor with 275,000 transistors.
It supported multitasking.
• The Intel 486 microprocessor, introduced in 1989, was the first to offer a built-in math
co-processor. It had 1.2 million transistors inside it.
• The Intel Pentium microprocessor, with 3.1 million transistors, was introduced in 1993.
It allowed computers to process real-world data like speech, sound, handwriting
and photographic images.
• In 1997, the 7.5-million-transistor Intel Pentium II microprocessor was
designed specifically to process audio, video and graphics data efficiently.
• In 1999, the Intel Celeron range of processors, designed for the value PC market
segment, was released.
• Intel Pentium III processors, with 9.5 million transistors, designed for streaming audio,
video and speech recognition applications, advanced imaging and 3D, and Intel Pentium III
Xeon processors for the workstation and server market segments, were introduced in 1999.
• Intel Pentium IV processors, with more than 42 million transistors, introduced from
2000, are used in present PCs. With such computers, users can deliver TV-like video
via the internet, communicate with real-time video and voice, create professional-quality
movies, render 3D graphics in real time, quickly encode music for MP3 players and
concurrently run several multimedia applications while the system is connected to
the Internet.
• Introduced from 2001, Intel Xeon processors are targeted at high-performance and
mid-range dual-processor workstations and at dual- and multiprocessor server configurations.

16-bit microprocessors
In computer architecture, 16-bit integers, memory addresses, or other data units are those that
are 16 bits (2 octets or 2 bytes) wide. Also, 16-bit CPU and ALU architectures are those that
are based on registers, address buses, or data buses of that size. 16-bit microcomputers are
computers in which 16-bit microprocessors were the norm.
An n-bit register can store 2^n different values, so a 16-bit register can store 2^16 different
values. The signed range of integer values that can be stored in 16 bits is −32,768 (−2^15)
through 32,767 (2^15 − 1), while the unsigned range is 0 through 65,535 (2^16 − 1). Since
2^16 = 2^6 × 2^10 = 64 × 1024 = 64K, a processor with 16-bit memory addresses can directly
access 64 KB (65,536 bytes) of byte-addressable memory. If a system uses segmentation with
16-bit segment offsets, more can be accessed.
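As a concrete illustration of how segmentation extends the reachable range (using the real-mode 8086 scheme purely as an example), a 16-bit segment value is shifted left by four bits and added to a 16-bit offset, giving a 20-bit physical address and therefore 1 MB of addressable memory:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t segment = 0x1234;
        uint16_t offset  = 0x0010;
        /* 8086 real mode: physical address = segment * 16 + offset */
        uint32_t physical = ((uint32_t)segment << 4) + offset;
        printf("physical address = 0x%05lX\n", (unsigned long)physical);   /* prints 0x12350 */
        return 0;
    }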
In 1978 Intel introduced the 16-bit 8086 microprocessor (16-bit bus), and in 1979 it introduced
the 8088 (8-bit bus), which had 29,000 transistors. In 1981 IBM selected the Intel 8088 for its
personal computer (the IBM PC). In 1982 Intel released the 16-bit 80286 microprocessor (with
134,000 transistors) to be used as the CPU of the advanced technology personal computer
(PC-AT). It was called the Intel 286 and was the first Intel processor that could run all the
software written for its predecessor, the Intel 8088. This backward software compatibility was
important to its great commercial success, and it is important to note that this compatibility
remains a hallmark of Intel's family of microprocessors.
