Fundamentals of IT Unit I Notes


UNIT - 1

1.1 BASIC COMPONENTS OF A COMPUTER

Input Unit

Data and instructions must enter the computer system before any computation can be performed on the supplied data. The input unit that links the external environment with the computer system performs this task. Data and instructions enter input units in forms that depend upon the particular device used. For example, data is entered from a keyboard in a manner similar to typing, and this differs from the way in which data is entered through a mouse, which is another type of input device. However, regardless of the form in which they receive their inputs, all input devices must provide the computer with data that are transformed into the binary codes that the primary memory of the computer is designed to accept. This transformation is accomplished by units called input interfaces. Input interfaces are designed to match the unique physical or electrical characteristics of input devices to the requirements of the computer system.

In short, an input unit performs the following functions.

1. It accepts (or reads) the list of instructions and data from the outside world.
2. It converts these instructions and data into a computer-acceptable format.
3. It supplies the converted instructions and data to the computer system for further processing.

Output Unit

The job of an output unit is just the reverse of that of an input unit. It supplies information and the results of computation to the outside world. Thus it links the computer with the external environment. As computers work with binary code, the results produced are also in binary form. Hence, before the results are supplied to the outside world, they must be converted to human-acceptable (readable) form. This task is accomplished by units called output interfaces.

In short, the following functions are performed by an output unit.

1. It accepts the results produced by the computer, which are in coded form and hence cannot be easily understood by us.
2. It converts these coded results to human-acceptable (readable) form.
3. It supplies the converted results to the outside world.
Storage Unit

The data and instructions that are entered into the computer system through input units have to be stored inside the computer before the actual processing starts. Similarly, the results produced by the computer after processing must also be kept somewhere inside the computer system before being passed on to the output units. Moreover, the intermediate results produced by the computer must also be preserved for ongoing processing. The storage unit, or the primary/main storage, of a computer system is designed to do all these things. It provides space for storing data and instructions, space for intermediate results, and also space for the final results.

In short, the specific functions of the storage unit are to store:

1. All the data to be processed and the instructions required for processing (received from input devices).
2. Intermediate results of processing.
3. Final results of processing, before these results are released to an output device.

1.2 Central Processing Unit (CPU)

The main unit inside the computer is the CPU. This unit is responsible for all events inside the computer. It controls all internal and external devices and performs arithmetic and logical operations. The operations a microprocessor performs are called the "instruction set" of the processor. The instruction set is "hard-wired" in the CPU and determines the machine language for the CPU. The more complicated the instruction set is, the slower the CPU works. Processors differ from one another by their instruction sets. If the same program can run on two different computer brands, they are said to be compatible. Programs written for IBM-compatible computers will not run on Apple computers, because these two architectures are not compatible.

The control unit and the arithmetic and logic unit of a computer system are jointly known as the Central Processing Unit (CPU). The CPU is the brain of any computer system. In a human body, all major decisions are taken by the brain and the other parts of the body function as directed by the brain. Similarly, in a computer system, all major calculations and comparisons are made inside the CPU, and the CPU is also responsible for activating and controlling the operations of the other units of the computer system.

Arithmetic and Logic Unit (ALU)

The arithmetic and logic unit (ALU) of a computer system is the place where the actual execution of the instructions takes place during processing operations. All calculations are performed and all comparisons (decisions) are made in the ALU. The data and instructions stored in the primary storage prior to processing are transferred as and when needed to the ALU, where the processing takes place. No processing is done in the primary storage unit. Intermediate results generated in the ALU are temporarily transferred back to the primary storage until needed at a later time. Data may thus move from primary storage to the ALU and back again to storage many times before the processing is over. After the completion of processing, the final results, which are stored in the storage unit, are released to an output device.

The ALU is the part where the actual computations take place. It consists of circuits that perform arithmetic operations (e.g. addition, subtraction, multiplication, division) over data received from memory, and that are capable of comparing numbers (less than, equal to, or greater than).

While performing these operations, the ALU takes data from a temporary storage area inside the CPU named registers. Registers are a group of cells used for memory addressing, data manipulation and processing. Some of the registers are general purpose and some are reserved for certain functions. A register is a high-speed memory which holds only the data for immediate processing and the results of this processing. If these results are not needed for the next instruction, they are sent back to main memory and the registers are occupied by the new data used in the next instruction.

All activities in the computer system are composed of thousands of individual steps. These steps should follow one another in some order at fixed intervals of time. These intervals are generated by the clock unit. Every operation within the CPU takes place at a clock pulse. No operation, regardless of how simple, can be performed in less time than transpires between ticks of this clock, and some operations require more than one clock pulse. The faster the clock runs, the faster the computer performs. The clock rate is measured in megahertz (MHz) or gigahertz (GHz); larger systems are even faster. In older systems the clock unit is external to the microprocessor and resides on a separate chip. In most modern microprocessors the clock is incorporated within the CPU.

Control Unit

How does the input device know that it is time for it to feed data into the storage unit? How does the ALU know what should be done with the data once it is received? And how is it that only the final results, and not the intermediate results, are sent to the output devices? All this is possible because of the control unit of the computer system. By selecting, interpreting, and seeing to the execution of the program instructions, the control unit is able to maintain order and direct the operation of the entire system. Although it does not perform any actual processing on the data, the control unit acts as a central nervous system for the other components of the computer. It manages and coordinates the entire computer system. It obtains instructions from the program stored in main memory, interprets the instructions, and issues signals that cause the other units of the system to execute them.

The control unit directs and controls the activities of the internal and external devices. It interprets the instructions fetched into the computer, determines what data, if any, are needed and where they are stored, determines where to store the results of the operation, and sends the control signals to the devices involved in the execution of the instructions.
1.3 BUS

In a computer, buses are used to carry data from one location to another. There are three different types of buses:

1) Address Bus
2) Data Bus
3) Control Bus

1.3.1 Address Bus

The address bus is the part of the computer system bus that is dedicated to specifying a physical address. When the computer processor needs to read or write from or to the memory, it uses the address bus to specify the physical address of the individual memory block it needs to access (the actual data is sent along the data bus). More precisely, when the processor wants to write some data to the memory, it will assert the write signal, set the write address on the address bus, and put the data on to the data bus.

Similarly, when the processor wants to read some data residing in the memory, it will assert the read signal and set the read address on the address bus. After receiving this signal, the memory controller will get the data from the specific memory block (after checking the address bus to get the read address) and then place the data of that memory block on to the data bus.

Good to know: The size of the memory that can be addressed by the system determines the width of the address bus, and vice versa. For example, if the width of the address bus is 32 bits, the system can address 2^32 memory blocks (equal to a 4 GB memory space, given that one block holds 1 byte of data).

1.3.2 Data Bus

A data bus simply carries data. Typically, the same data bus is used for both read and write operations. When it is a write operation, the processor puts the data (to be written) on to the data bus. When it is a read operation, the memory controller gets the data from the specific memory block and puts it on to the data bus.

Good to know:

--> The data bus consists of 8, 16, or 32 parallel lines.
--> The data bus is a bidirectional bus, meaning that data can be transferred from the CPU to main memory and vice versa.
--> The number of data lines used in the data bus is equal to the size of the data word being written or read.
--> The data bus also connects the I/O ports and the CPU, so the CPU can write data to, or read data from, the memory or the I/O ports.
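The "good to know" note above is easy to check: the number of addressable memory blocks is 2 raised to the width of the address bus. A minimal Python sketch (illustrative only, not tied to any particular hardware; the function name is our own):

def addressable_bytes(bus_width_bits: int) -> int:
    # One memory block is assumed to hold 1 byte, as in the note above.
    return 2 ** bus_width_bits

print(addressable_bytes(16))           # 65536 bytes (64 KB)
print(addressable_bytes(32))           # 4294967296 bytes
print(addressable_bytes(32) // 2**30)  # 4, i.e. a 4 GB address space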
1.3.3 Control Bus

A control bus manages the information flow between components, indicating whether the operation is a read or a write and ensuring that the operation happens at the right time.

What is the difference between Address Bus and Data Bus?

The data bus is bidirectional, while the address bus is unidirectional: data travels in both directions, but addresses travel in only one direction. The reason for this is that, unlike the data, the address is always specified by the processor.

The width of the data bus is determined by the size of the individual memory block, while the width of the address bus is determined by the size of the memory that is to be addressed by the system.

1.4 Computer Number System
Number systems are the techniques used to represent numbers in the computer system architecture; every value that you save into, or get from, computer memory has a defined number system.

Computer architecture supports the following number systems.

 Binary number system
 Octal number system
 Decimal number system
 Hexadecimal (hex) number system

1) Binary Number System

A binary number system has only two digits, 0 and 1. Every number (value) is represented with 0 and 1 in this number system. The base of the binary number system is 2, because it has only two digits.

2) Octal number system

The octal number system has only eight (8) digits, from 0 to 7. Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6 and 7 in this number system. The base of the octal number system is 8, because it has only 8 digits.

3) Decimal number system

The decimal number system has only ten (10) digits, from 0 to 9. Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 in this number system. The base of the decimal number system is 10, because it has only 10 digits.

4) Hexadecimal number system

A hexadecimal number system has sixteen (16) alphanumeric values, from 0 to 9 and A to F. Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F in this number system. The base of the hexadecimal number system is 16, because it has 16 alphanumeric values. Here A is 10, B is 11, C is 12, D is 13, E is 14 and F is 15.

Number System Conversions

There are three types of conversion:

 Decimal Number System to Other Base [for example: Decimal Number System to Binary Number System]
 Other Base to Decimal Number System [for example: Binary Number System to Decimal Number System]
 Other Base to Other Base [for example: Binary Number System to Hexadecimal Number System]

Decimal Number System to Other Base

To convert a number from the Decimal Number System to any other base is quite easy; you have to follow just two steps:

A) Divide the number (the decimal number) by the base of the target base system (the base in which you want to convert the number: binary (2), octal (8) or hexadecimal (16)), and keep dividing the quotient until it becomes 0, writing down the remainder after each division.
B) Write the remainders in reverse order: the remainder from the first step is the Least Significant Bit (LSB) and the remainder from the last step is the Most Significant Bit (MSB).
Decimal to Binary Conversion

Decimal number is: (12345)10
Binary number is: (11000000111001)2

Decimal to Octal Conversion

Decimal number is: (12345)10
Octal number is: (30071)8

Decimal to Hexadecimal Conversion

Example 1
Decimal number is: (12345)10
Hexadecimal number is: (3039)16

Example 2
Decimal number is: (725)10
Hexadecimal number is: (2D5)16

When a remainder is 10, 11, 12, 13, 14 or 15, convert it to its equivalent hexadecimal digit A, B, C, D, E or F.
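The two-step repeated-division procedure above can be sketched in a few lines of Python (the function name to_base is our own; Python's built-in bin(), oct() and hex() do the same job for bases 2, 8 and 16):

DIGITS = "0123456789ABCDEF"

def to_base(number: int, base: int) -> str:
    # Convert a non-negative decimal integer to base 2, 8 or 16
    # by repeated division, collecting the remainders.
    if number == 0:
        return "0"
    digits = []
    while number > 0:
        number, remainder = divmod(number, base)
        digits.append(DIGITS[remainder])   # first remainder is the LSB
    return "".join(reversed(digits))       # last remainder is the MSB

print(to_base(12345, 2))   # 11000000111001
print(to_base(12345, 8))   # 30071
print(to_base(12345, 16))  # 3039
print(to_base(725, 16))    # 2D5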

Other Base System to Decimal Number System

To convert a number from any other base system to the Decimal Number System, you have to follow just three steps:

A) Determine the base value of the source number system (the one you want to convert from), and also determine the position of each digit, counting from the LSB (first digit's position – 0, second digit's position – 1, and so on).
B) Multiply each digit by the base of the source number system raised to the power of its position (DIGIT * BASE^POSITION).
C) Add the values resulting from step B.

Explanation regarding the examples: each worked example contains the following rows.

A) Row 1 contains the DIGITs of the number that is going to be converted.
B) Row 2 contains the POSITION of each digit in the number system.
C) Row 3 contains the multiplication DIGIT * BASE^POSITION.
D) Row 4 contains the calculated result of step C.
E) Adding each value of step D gives the decimal number.

Binary to Decimal Conversion

Binary number is: (11000000111001)2
Decimal number is: (12345)10

Octal to Decimal Conversion

Octal number is: (30071)8 = 12288 + 0 + 0 + 56 + 1 = 12345
Decimal number is: (12345)10

Hexadecimal to Decimal Conversion

Hexadecimal number is: (2D5)16 = 512 + 208 + 5 = 725
Decimal number is: (725)10
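The same three steps can be sketched directly in Python (the function name to_decimal is our own; the built-in int(text, base) performs exactly this conversion):

def to_decimal(digits: str, base: int) -> int:
    # Steps A-C above: weight each digit by base**position,
    # counting positions from 0 at the LSB, then add everything up.
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit, 16) * base ** position  # int(d, 16) maps 0-9 and A-F
    return value

print(to_decimal("11000000111001", 2))  # 12345
print(to_decimal("30071", 8))           # 12345
print(to_decimal("2D5", 16))            # 725 (512 + 208 + 5)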
Binary Arithmetic

Binary arithmetic is an essential part of all digital computers and many other digital systems.

Binary Addition

Binary addition is the key to binary subtraction, multiplication and division. There are four rules of binary addition:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10

In the fourth case, the binary addition creates a sum of (1 + 1 = 10), i.e. 0 is written in the given column and a carry of 1 goes over to the next column.

Example − Addition: 1010 + 1011 = 10101

Binary Subtraction

Subtraction and borrow: these two words will be used very frequently for binary subtraction. There are four rules of binary subtraction:

0 − 0 = 0
1 − 0 = 1
1 − 1 = 0
0 − 1 = 1, with a borrow of 1 from the next column

Example − Subtraction: 1010 − 0111 = 0011
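The column-by-column procedure, including the carry rule from the fourth case, can be sketched as a short Python function (illustrative only; Python can of course compute int(a, 2) + int(b, 2) directly):

def binary_add(a: str, b: str) -> str:
    # Add two binary strings right to left, applying the four rules.
    a, b = a.zfill(len(b)), b.zfill(len(a))      # pad to equal length
    carry, result = 0, []
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry  # 0, 1, 2 or 3
        result.append(str(total % 2))            # the sum bit for this column
        carry = total // 2                       # the carry to the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(binary_add("1010", "1011"))  # 10101 (10 + 11 = 21)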


Binary Multiplication

Binary multiplication is similar to decimal multiplication. It is simpler than decimal multiplication because only 0s and 1s are involved. There are four rules of binary multiplication:

0 × 0 = 0
0 × 1 = 0
1 × 0 = 0
1 × 1 = 1

Example − Multiplication: 101 × 11 = 1111

Binary Division

Binary division is similar to decimal division; it is carried out by the long-division procedure.

Example − Division: 1111 ÷ 101 = 11
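Because each multiplier bit is either 0 or 1, binary multiplication by hand reduces to shifting and adding partial products. A minimal sketch (the function name is our own):

def binary_multiply(a: str, b: str) -> str:
    # Shift-and-add: for every 1 bit of b, add a copy of a
    # shifted left by that bit's position.
    product = 0
    for position, bit in enumerate(reversed(b)):
        if bit == "1":
            product += int(a, 2) << position  # one partial product
    return bin(product)[2:]

print(binary_multiply("101", "11"))  # 1111 (5 * 3 = 15)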
Complement Arithmetic

Complements are used in digital computers in order to simplify the subtraction operation and for logical manipulations. For each radix-r system (radix r represents the base of the number system) there are two types of complements.

1's complement

The 1's complement of a number is found by changing all 1's to 0's and all 0's to 1's. This is called taking the complement, or the 1's complement. An example of 1's complement is as follows: the 1's complement of 1011000 is 0100111.

2's complement

The 2's complement of a binary number is obtained by adding 1 to the Least Significant Bit (LSB) of the 1's complement of the number:

2's complement = 1's complement + 1

An example of 2's complement is as follows: the 2's complement of 1011000 is 0100111 + 1 = 0101000.
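For a fixed word width, both complements can be sketched in a few lines of Python (the function names and the seven-bit example are our own):

def ones_complement(bits: str) -> str:
    # Flip every bit: 1 -> 0 and 0 -> 1.
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    # 2's complement = 1's complement + 1, kept to the same width.
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return bin(value % (2 ** width))[2:].zfill(width)

print(ones_complement("1011000"))  # 0100111
print(twos_complement("1011000"))  # 0101000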


1.4.2 ASCII (American Standard Code for Information Interchange)

Stands for "American Standard Code for Information Interchange." ASCII character encoding provides a standard way to represent characters using numeric codes. These include upper- and lower-case English letters, numbers, and punctuation symbols.

ASCII uses 7 bits to represent each character. For example, a capital "T" is represented by the number 84 and a lowercase "t" is represented by 116. Other keyboard keys are also mapped to standard ASCII values. For example, the Escape (ESC) key is represented as 27 and the Delete (DEL) key is represented as 127. ASCII codes may also be displayed as hexadecimal values instead of the decimal numbers listed above. For example, the ASCII value of the Escape key in hexadecimal is "1B" and the hexadecimal value of the Delete key is "7F."

Since ASCII uses 7 bits, it only supports 2^7, or 128, values. Therefore, the standard ASCII character set is limited to 128 characters. While this is enough to represent all standard English letters, numbers, and punctuation symbols, it is not sufficient to represent all special characters or characters from other languages. Even Extended ASCII, which supports 8-bit values, or 256 characters, does not include enough characters to accurately represent all languages. Therefore, other character sets, such as Latin-1 (ISO-8859-1), UTF-8, and UTF-16, are commonly used for documents and webpages that require more characters.

It also means you can easily print basic text and numbers on any printer, with the notable exception of PostScript printers. If you are working in the MacWrite word processing application on the Mac and you need to send your file to someone who uses WordStar on the PC, you can save the document as an ASCII file (which is the same as text-only). After you transfer the file to the PC (on a disk or via a cable or modem), the other person will be able to open the file in WordStar.

In ASCII, each character has a number which the computer or printer uses to represent that character. For instance, a capital A is number 65 in the code. Although there are 256 possible characters in the code, ASCII standardizes only 128 characters, and the first 32 of these are "control characters," which are supposed to be used to control the computer and do not appear on the screen. That leaves only enough code numbers for all the capital and lowercase letters, the digits, and the most common punctuation marks.

Another ASCII limitation is that the code doesn't include any information about the way the text should look (its format). ASCII only tells you which characters the text contains. If you save a formatted document as ASCII, you will lose all the font formatting, such as the typeface changes, the italics, the bolds, and even the special characters like ©, TM, or ®. Usually carriage returns and tabs are saved.

Unlike some earlier character encodings that used fewer than 7 bits, ASCII does have room for both the uppercase and lowercase letters and all normal punctuation characters but, as it was designed to encode American English, it does not include the accented characters and ligatures required by many European languages (nor the UK pound sign £). These characters are provided in some 8-bit EXTENDED ASCII character sets, including ISO LATIN 1 or ANSI 1, but not all software can display 8-bit characters, and some serial communications channels still remove the eighth bit from each character. Despite its shortcomings, ASCII is still important as the 'lowest common denominator' for representing textual data, which almost any computer in the world can display.

The ASCII standard was certified by ANSI in 1977, and the ISO adopted an almost identical code as ISO 646.
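Python's built-in ord() and chr() functions expose exactly these code numbers, so the values quoted above are easy to verify:

print(ord("T"))          # 84
print(ord("t"))          # 116
print(ord("A"))          # 65
print(ord("\x1b"))       # 27, the Escape (ESC) control character
print(hex(ord("\x1b")))  # 0x1b, Escape in hexadecimal
print(hex(127))          # 0x7f, Delete (DEL) in hexadecimal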
1.4.3 BINARY-CODED DECIMAL (BCD)

Definition

The binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence.

Basics

BCD, or Binary Coded Decimal, is the number system or code which uses binary digits to represent a decimal number. A decimal number contains 10 digits (0-9), and the equivalent binary number can be found for each of these 10 decimal digits. In the case of BCD, the binary number formed by four binary digits is the equivalent code for the given decimal digit. In BCD we can use the binary numbers from 0000 to 1001 only, which are the binary equivalents of the decimal digits 0-9 respectively. If a number has a single decimal digit, its equivalent Binary Coded Decimal is the respective four binary digits of that decimal digit; if the number contains two decimal digits, its equivalent BCD is the respective eight binary digits of the given decimal number: four for the first decimal digit and the next four for the second decimal digit.

In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display, and faster decimal calculations. Its drawbacks are the increased complexity of circuits needed to implement mathematical operations, and a relatively inefficient encoding: it occupies more space than a pure binary representation. Even though the importance of BCD has diminished, it is still widely used in financial, commercial, and industrial applications.

In BCD, a digit is usually represented by four bits which, in general, represent the values/digits/characters 0-9. Other bit combinations are sometimes used for a sign or other indications.

To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.

Decimal number   Binary number   BCD
0                0000            0000
1                0001            0001
2                0010            0010
3                0011            0011
4                0100            0100
5                0101            0101
6                0110            0110
7                0111            0111
8                1000            1000
9                1001            1001
10               1010            0001 0000
11               1011            0001 0001
12               1100            0001 0010
13               1101            0001 0011
14               1110            0001 0100
15               1111            0001 0101
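A minimal Python sketch of this encoding: each decimal digit becomes its own four-bit nibble, matching the BCD column of the table above (the function name is our own):

def bcd_encode(number: int) -> str:
    # Encode every decimal digit of the number as a 4-bit nibble.
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(bcd_encode(9))    # 1001
print(bcd_encode(10))   # 0001 0000
print(bcd_encode(15))   # 0001 0101
print(bcd_encode(725))  # 0111 0010 0101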
1.4.4 EBCDIC

The EBCDIC (Extended Binary Coded Decimal Interchange Code) is an extended binary code for IBM mainframes, mid-range computers, and peripheral devices that uses 8 bits instead of the original 6-bit format. Although EBCDIC is still used today, more modern encoding forms, such as ASCII and Unicode, exist. While all IBM computers use EBCDIC as their default encoding format, most IBM devices also include support for modern formats, allowing them to take advantage of newer features that EBCDIC does not provide.

How EBCDIC Works

EBCDIC consists of an 8-bit character format that describes how the computer interprets commands. For example, while one bit may control which language the command is in, another bit may control whether a character is interpreted as uppercase or lowercase. While EBCDIC contains basic functions for supporting actual computer properties and has mild support for newer languages, it does not support many features that Unicode and ASCII provide, such as the ability to write in multiple languages.

Applications

EBCDIC is exclusively used on IBM machines such as mainframes, midrange personal computers, and peripheral devices. Since most IBM machines include extensive processing capabilities and some support for modern encoding languages, they are able to keep up with, and even outperform, devices from other brands. However, most machines and operating systems depend on ASCII and Unicode as their default encoding format.

Advantages and Disadvantages

EBCDIC is advantageous because it consists of an 8-bit character language rather than the old 6-bit character language found on punch card encoding systems. This allows EBCDIC to provide IBM machines with support for a wide variety of functions that punch card encoding systems did not provide. However, EBCDIC only allows machines to process English and one other language, and it writes characters from left to right in every language, rather than from right to left as seen in Arabic languages.

1.5 Language Evolution

1.5.1 Generation languages

A generation language may refer to any of the following:

1. The first generation languages, or 1GL, are low-level languages that are machine language.
2. The second-generation languages, or 2GL, are also low-level assembly languages. They are sometimes used in kernels and hardware drivers, but more commonly used for video editing and video games.
3. The third-generation languages, or 3GL, are high-level languages, such as C, C++, Java, JavaScript, and Visual Basic.
4. The fourth-generation languages, or 4GL, are languages that consist of statements similar to statements in a human language. Fourth generation languages are commonly used in database programming and scripts; examples include Perl, PHP, Python, Ruby, and SQL.
5. The fifth-generation languages, or 5GL, are programming languages that contain visual tools to help develop a program. Examples of fifth generation languages include Mercury, OPS5, and Prolog.

1.5.2 Programming Languages

To write a program for a computer, we must use a computer language. A computer language is a set of predefined words that are combined into a program according to predefined rules (syntax). Over the years, computer languages have evolved from machine language to high-level languages.

1.5.2.1 Machine Languages

In the earliest days of computers, the only programming languages available were machine languages. Each computer had its own machine language, which was made of streams of 0s and 1s. Machine language is the only language understood by the computer hardware, which is made of electronic switches with two states: off (representing 0) and on (representing 1).

1.5.2.2 Assembly Languages

The next evolution in programming came with the idea of replacing binary code for instructions and addresses with symbols or mnemonics. Because they used symbols, these languages were first known as symbolic languages. The set of these mnemonic languages were later referred to as assembly languages. The assembly language for our hypothetical computer to replace the machine language in Table 9.2 is shown in Program 9.1. A special program called an assembler is used to translate code in assembly language into machine language.
1.5.2.3 High-level Languages

Although assembly languages greatly improved programming efficiency, they still required programmers to concentrate on the hardware they were using. Working with symbolic languages was also very tedious, because each machine instruction had to be individually coded. The desire to improve programmer efficiency and to change the focus from the computer to the problem being solved led to the development of high-level languages. High-level languages are portable to many different computers, allowing the programmer to concentrate on the application rather than the intricacies of the computer's organization. They are designed to relieve the programmer from the details of assembly language. High-level languages share one characteristic with symbolic languages: they must be converted to machine language. This process is called interpretation or compilation. Over the years, various languages, most notably BASIC, COBOL, Pascal, Ada, C, C++ and Java, were developed.

1.5.3 Characteristics of Programming Languages

A programming language is a notation with which people can communicate algorithms to computers and to one another. Hundreds of programming languages exist. They differ in their degree of closeness to natural or mathematical language on one hand and to machine language on the other. They also differ in the type of problem for which they are best suited. Some of the aspects of high-level languages which make them preferable to machine or assembly language are the following.

1. Ease of Understanding.

A high-level program is generally easier to read, write, and prove correct than an assembly-language program, because a high-level language usually provides a more natural notation for describing algorithms than does assembly language. Even among high-level languages, some are easier to use than others. Part of this has to do with the operators, data structures, and flow-of-control features provided in a language. A good programming language should provide features for modular design of easy-to-understand programs. Subroutines and powerful operators are essential here, and orderly data structures and the ability to create such structures are important, too. A good language should also enable control flow to be specified in a clean, understandable manner.

2. Naturalness.

Much of the understandability of a high-level programming language comes from the ease with which one can express an algorithm in that language. Some languages are clearly more suitable than others in this regard for differing problem domains.

3. Portability.

Users must often be able to run their programs on a variety of machines. Languages such as FORTRAN or COBOL have relatively well-defined "standard versions," and programs conforming to the standard should run on any machine. There are pitfalls that come up unexpectedly, however.

4. Efficiency of Use.

This area covers a number of aspects of both program and language design. One would like to be able to translate source programs into efficient object code, and one would like the translation itself to be efficient. In both cases, the design of the language can affect how easily the computation can be done. But it is often more important that the programmer be able to implement programs in a way that makes efficient use of his time. To the latter end, a high-level programming language should have facilities for defining data structures, macros, subroutines, and the like. The operating system and programming environment can also be as important as the language in reducing programming time.
1.6 Translators

1.6.1 ASSEMBLER, COMPILER AND INTERPRETER

As stated earlier, any program that is not written in machine language has to be translated into machine language before it is executed by the computer. The means used for translation are themselves computer programs. There are three types of translator programs, i.e. assemblers, compilers and interpreters.

Assembler:

An assembler is a computer program which is used to translate a program written in assembly language into machine language. The translated program is called the object program. The assembler checks each instruction for its correctness and generates diagnostic messages if there are mistakes in the program. The various steps of assembling are:

1. Input the source program in assembly language through an input device.
2. Use the assembler to produce the object program in machine language.
3. Execute the program.

Compiler:

A compiler is a program that translates a program written in a HLL into executable machine language. The process of translating a HLL source program into object code is a lengthy and complex process as compared to assembling. Compilers have diagnostic capabilities and prompt the programmer with appropriate error messages while compiling a HLL program. The corrections are to be incorporated in the program, whenever needed, and the program has to be recompiled. The process is repeated until the program is mistake-free and translated to object code. Thus the job of a compiler includes the following:

1. To translate the HLL source program into machine codes.
2. To trace variables in the program.
3. To include linkage for subroutines.
4. To allocate memory for storage of the program and variables.
5. To generate error messages, if there are errors in the program.

Interpreter:

The basic purpose of an interpreter is the same as that of a compiler. With a compiler, the program is translated completely and a directly executable version is generated, whereas an interpreter translates each instruction, executes it, and only then translates the next instruction, and this goes on until the end of the program. In this case, object code is not stored and reused; every time the program is executed, the interpreter translates each instruction afresh. An interpreter also has program diagnostic capabilities. However, it has some disadvantages:

1. Instructions repeated in the program must be translated each time they are executed.
2. Because the source program is translated afresh every time it is used, interpretation is a slow process, i.e. execution takes more time (approximately 20 times slower than a compiler).
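The translate-once versus translate-every-run distinction can be illustrated with a toy mini-language in Python (entirely illustrative, with made-up instructions; real translators are far more involved). "Compiling" turns the instruction list into callable code once, while "interpreting" re-parses every instruction on every run:

program = ["ADD 5", "MUL 3", "SUB 2"]   # a made-up mini-language

def interpret(program, value):
    # Every instruction is re-parsed on every single run.
    for line in program:
        op, arg = line.split()
        if op == "ADD":
            value += int(arg)
        elif op == "MUL":
            value *= int(arg)
        elif op == "SUB":
            value -= int(arg)
    return value

OPS = {"ADD": lambda v, a: v + a,
       "MUL": lambda v, a: v * a,
       "SUB": lambda v, a: v - a}

def compile_program(program):
    # Translate once, up front; the result is a reusable "object program".
    return [(OPS[line.split()[0]], int(line.split()[1])) for line in program]

def run(object_program, value):
    for fn, arg in object_program:
        value = fn(value, arg)
    return value

print(interpret(program, 10))             # 43
print(run(compile_program(program), 10))  # 43, with no re-parsing on later runs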
1.6.2 Source Program vs Object Program

Source programs and object programs are two types of programs found in computer programming. A source program is typically a program with human-readable instructions written by a programmer. An object program is typically a machine-executable program created by compiling a source program.

What is a Source Program?

A source program is code written by a programmer, usually in a higher-level language, which is easily readable by humans. Source programs usually contain meaningful variable names and helpful comments to make them more readable. A source program cannot be directly executed on a machine. In order to execute it, the source program is compiled using a compiler (a program which transforms source programs to executable code). Alternatively, using an interpreter (a program that executes a source program line by line without pre-compilation), a source program may be executed on the fly. Visual Basic is an example of a compiled language, while Java is an example of an interpreted language. Visual Basic source files (.vb files) are compiled to .exe code, while Java source files (.java files) are first compiled (using the javac command) to bytecode (an object code contained in .class files) and then interpreted using the Java interpreter (using the java command). When software applications are distributed, typically they will not include source files. However, if the application is open source, the source is also distributed and the user gets to see and modify the source code as well.

What is an Object Program?

An object program is usually a machine-executable file, which is the result of compiling a source file using a compiler. Apart from machine instructions, it may include debugging information, symbols, stack information, relocation and profiling information. Since object programs contain instructions in machine code, they are not easily readable by humans. But sometimes, object programs refer to an intermediate object between source and executable files. Tools known as linkers are used to link a set of objects into an executable (e.g. in the C language). As mentioned above, .exe files and bytecode files are object files produced when using Visual Basic and Java respectively. .exe files are directly executable on the Windows platform, while bytecode files need an interpreter for execution. Most software applications are distributed with the object or executable files only. Object or executable files can be converted back to their original source files by decompilation. For example, Java .class files (bytecode) can be decompiled using decompiler tools into the original .java files.

What is the difference between a Source Program and an Object Program?

A source program is a program written by a programmer, while an object program is generated by a compiler using one or more source files as input. Source files are written in higher-level languages such as Java or C (so they are easily readable by humans), but object programs usually contain lower-level languages such as assembly or machine code (so they are not human-readable). Source files can be either compiled or interpreted for execution. Decompilers can be used to convert object programs back to their original source file(s). It is important to note that the terms source program and object program are used as relative terms. If you take a program-transformation program (like a compiler), what goes in is a source program and what comes out is an object program. Therefore an object program produced by one tool can become a source file for another tool.
Number System Reference

Number System Conversion

Converting a number from one number system to another:

Rule: any binary number can be converted to its decimal equivalent simply by summing together the weights of the positions in the binary number which contain a 1. For example, (1011)2 = 8 + 0 + 2 + 1 = (11)10.

There are two ways to convert a decimal number to its equivalent binary representation:

1. The reverse of the binary-to-decimal conversion process (optional). The decimal number is simply expressed as a sum of powers of 2, and then 1s and 0s are written in the appropriate bit positions.
2. Repeated division: repeatedly divide the decimal number by 2 and write down the remainder after each division until a quotient of 0 is obtained.

We need the decimal system for the real world (for presentation and input): for example, we use the 10-based numbering system for input and output in a digital calculator. We need the binary system inside the calculator for calculation.

To convert an octal number to decimal, we multiply each octal digit by its positional weight. To convert a decimal number to octal, we use repeated division by 8.

Each hexadecimal digit is converted to its four-bit binary equivalent. In the reverse direction, the bits of the binary number are grouped into groups of four bits and each group is converted to its equivalent hexadecimal digit; zeros are added as needed to complete a four-bit group.

Each octal digit is converted to its three-bit binary equivalent:

Octal digit    0    1    2    3    4    5    6    7
Binary        000  001  010  011  100  101  110  111

Using this table, we can convert any octal number to binary by individually converting each digit, so an octal number can be quickly converted to its binary equivalent. In the reverse direction, the bits of the binary number are grouped into groups of 3 bits starting at the LSB, and then each group is converted to its octal equivalent (see the table above).

Advantages of the octal and hexadecimal systems:

1. Hexadecimal and octal numbers are used as a "shorthand" way to represent strings of bits.
2. Writing long binary numbers is error-prone; in hex and octal there are fewer errors.
3. The octal and hexadecimal number systems are both used in memory addressing and microprocessor technology.
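The grouping rules above are mechanical because 8 = 2^3 and 16 = 2^4. A short Python sketch of the digit-grouping conversions (the helper names are our own):

def octal_to_binary(octal: str) -> str:
    # Each octal digit maps to its 3-bit binary equivalent.
    return "".join(format(int(digit, 8), "03b") for digit in octal)

def binary_to_hex(bits: str) -> str:
    # Pad with leading zeros to a multiple of 4, then map each
    # 4-bit group to its hexadecimal digit.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(format(int(bits[i:i + 4], 2), "X")
                   for i in range(0, len(bits), 4))

print(octal_to_binary("30071"))         # 011000000111001
print(binary_to_hex("11000000111001"))  # 3039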
