Fundamentals of IT Unit I Notes
UNIT - 1

1.1 BASIC COMPONENTS OF A COMPUTER

Input Unit

Data and instructions must enter the computer system before any computation can be performed on the supplied data. The input unit that links the external environment with the computer system performs this task. Data and instructions enter input units in forms that depend upon the particular device used. For example, data is entered from a keyboard in a manner similar to typing, and this differs from the way in which data is entered through a mouse, which is another type of input device. However, regardless of the form in which they receive their inputs, all input devices must provide a computer with data that are transformed into the binary codes that the primary memory of the computer is designed to accept. This transformation is accomplished by units called input interfaces. Input interfaces are designed to match the unique physical or electrical characteristics of input devices to the requirements of the computer system.

Output Unit

The job of an output unit is just the reverse of that of an input unit. It supplies information and results of computation to the outside world. Thus it links the computer with the external environment. As computers work with binary code, the results produced are also in binary form. Hence, before the results are supplied to the outside world, they must be converted to a human acceptable (readable) form. This task is accomplished by units called output interfaces.

In short, the following functions are performed by an output unit:

1. It accepts the results produced by the computer, which are in coded form and hence cannot be easily understood by us.
2. It converts these coded results to a human acceptable (readable) form.
3. It supplies the converted results to the outside world.
Storage Unit

The data and instructions that are entered into the computer system through input units have to be stored inside the computer before the actual processing starts. Similarly, the results produced by the computer after processing must also be kept somewhere inside the computer system before being passed on to the output units. Moreover, the intermediate results produced by the computer must also be preserved for ongoing processing. The Storage Unit, or the primary / main storage, of a computer system is designed to do all these things. It provides space for storing data and instructions, space for intermediate results, and also space for the final results.

In short, the specific functions of the storage unit are to store:

1. All the data to be processed and the instructions required for processing (received from input devices).
2. Intermediate results of processing.
3. Final results of processing, before these results are released to an output device.

1.2 Central Processing Unit (CPU)

The main unit inside the computer is the CPU. This unit is responsible for all events inside the computer. It controls all internal and external devices and performs arithmetic and logical operations. The operations a microprocessor performs are called the "instruction set" of that processor. The instruction set is "hard wired" in the CPU and determines the machine language for the CPU. The more complicated the instruction set is, the slower the CPU works. Processors differ from one another by their instruction sets. If the same program can run on two different computer brands, they are said to be compatible. Programs written for IBM-compatible computers will not run on Apple computers, because these two architectures are not compatible.

The Control Unit and the Arithmetic and Logic Unit of a computer system are jointly known as the Central Processing Unit (CPU). The CPU is the brain of any computer system. In a human body, all major decisions are taken by the brain, and the other parts of the body function as directed by the brain. Similarly, in a computer system, all major calculations and comparisons are made inside the CPU, and the CPU is also responsible for activating and controlling the operations of the other units of the computer system.

Arithmetic and Logic Unit (ALU)

The arithmetic and logic unit (ALU) of a computer system is the place where the actual execution of instructions takes place during processing operations. All calculations are performed and all comparisons (decisions) are made in the ALU. The data and instructions stored in the primary storage prior to processing are transferred as and when needed to the ALU, where processing takes place. No processing is done in the primary storage unit. Intermediate results generated in the ALU are temporarily transferred back to the primary storage until needed at a later time. Data may thus move from primary storage to the ALU and back again to
storage many times before the processing is over. After the completion of processing, the final results, which are stored in the storage unit, are released to an output device.

The arithmetic and logic unit (ALU) is the part where actual computations take place. It consists of circuits that perform arithmetic operations (e.g. addition, subtraction, multiplication, division) over data received from memory, and that can compare numbers (less than, equal to, or greater than).

While performing these operations, the ALU takes data from a temporary storage area inside the CPU named registers. Registers are a group of cells used for memory addressing, data manipulation and processing. Some of the registers are general purpose and some are reserved for certain functions. A register is a high-speed memory which holds only data for immediate processing and the results of this processing. If these results are not needed for the next instruction, they are sent back to main memory, and the registers are occupied by the new data used in the next instruction.

All activities in the computer system are composed of thousands of individual steps. These steps must follow one another at fixed intervals of time. These intervals are generated by the Clock Unit. Every operation within the CPU takes place at a clock pulse. No operation, regardless of how simple, can be performed in less time than transpires between ticks of this clock, and some operations require more than one clock pulse. The faster the clock runs, the faster the computer performs. The clock rate is measured in megahertz (MHz) or gigahertz (GHz); larger systems are even faster. In older systems the clock unit is external to the microprocessor and resides on a separate chip. In most modern microprocessors the clock is incorporated within the CPU.

Control Unit

How does the input device know when it is time to feed data into the storage unit? How does the ALU know what should be done with the data once they are received? And how is it that only the final results, and not the intermediate results, are sent to the output devices? All this is possible because of the control unit of the computer system. By selecting, interpreting, and seeing to the execution of the program instructions, the control unit is able to maintain order and direct the operation of the entire system. Although it does not perform any actual processing on the data, the control unit acts as a central nervous system for the other components of the computer. It manages and coordinates the entire computer system. It obtains instructions from the program stored in main memory, interprets the instructions, and issues signals that cause the other units of the system to execute them.

The control unit directs and controls the activities of the internal and external devices. It interprets the instructions fetched into the computer, determines what data, if any, are needed and where they are stored, decides where to store the results of the operation, and sends the control signals to the devices involved in the execution of the instructions.
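The fetch-decode-execute cycle that the control unit drives can be illustrated with a small sketch. The instruction set, memory layout, and register names below are invented for illustration only; they do not correspond to any real processor.

```python
# Hypothetical machine: the control unit fetches an instruction, decodes it,
# and directs the ALU and memory to execute it, one step at a time.
memory = {
    0: ("LOAD", 10),    # program: copy memory[10] into the accumulator
    1: ("ADD", 11),     #          add memory[11] to the accumulator
    2: ("HALT", None),  #          stop
    10: 2,              # data
    11: 3,
}
acc = 0  # accumulator register inside the ALU
pc = 0   # program counter: address of the next instruction

running = True
while running:
    opcode, operand = memory[pc]  # fetch and decode
    pc += 1
    if opcode == "LOAD":          # execute: move data from storage to the ALU
        acc = memory[operand]
    elif opcode == "ADD":         # execute: arithmetic happens in the ALU
        acc += memory[operand]
    elif opcode == "HALT":
        running = False

print(acc)  # 5
```

Note how no processing happens in `memory` itself: data moves into the accumulator, is operated on, and only the final result would be handed to an output device.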
1.3 BUS

In a computer, buses are used to carry data from one location to another. There are three different types of buses:

1) Address Bus
2) Data Bus
3) Control Bus

1.3.1 Address Bus :

The address bus is the part of the computer system bus that is dedicated to specifying a physical address. When the computer processor needs to read or write from or to the memory, it uses the address bus to specify the physical address of the individual memory block it needs to access (the actual data is sent along the data bus). More precisely, when the processor wants to write some data to the memory, it will assert the write signal, set the write address on the address bus, and put the data on to the data bus. Similarly, when the processor wants to read some data residing in the memory, it will assert the read signal and set the read address on the address bus. After receiving this signal, the memory controller will get the data from the specific memory block (after checking the address bus to get the read address) and then place the data of the memory block on to the data bus.

Good to know : The size of the memory that can be addressed by the system determines the width of the address bus, and vice versa. For example, if the width of the address bus is 32 bits, the system can address 2^32 memory blocks (that is equal to 4 GB of memory space, given that one block holds 1 byte of data).

1.3.2 Data Bus :

Good to know :

--> The data bus is a bidirectional bus, meaning that data can be transferred from the CPU to main memory and vice versa.

--> The number of data lines used in the data bus is equal to the size of the data word being written or read.

--> The data bus also connects the I/O ports and the CPU, so the CPU can write data to or read it from the memory or the I/O ports.

1.3.3 Control Bus : this manages the information flow between components, indicating whether the operation is a read or a write and ensuring that the operation happens at the right time.

The width of the data bus is determined by the size of the individual memory block, while the width of the address bus is determined by the size of the memory that should be addressed by the system.
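The relationship between address-bus width and addressable memory quoted above can be checked with a one-line calculation (assuming, as in the example, that each addressable block holds 1 byte; the function name is ours):

```python
def addressable_bytes(address_bus_width: int) -> int:
    # Each of the 2**width distinct addresses selects one 1-byte block.
    return 2 ** address_bus_width

print(addressable_bytes(32))               # 4294967296 bytes
print(addressable_bytes(32) // (2 ** 30))  # 4 (GB)
```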
1.4 Computer Number System

Number systems are the technique used to represent numbers in the computer system architecture; every value that you are saving into, or getting from, computer memory has a defined number system.

Computer architecture supports the following number systems:

Binary number system
Octal number system
Decimal number system
Hexadecimal (hex) number system

A Hexadecimal number system has sixteen (16) alphanumeric values from 0 to 9 and A to F. Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F in this number system. The base of the hexadecimal number system is 16, because it has 16 alphanumeric values. Here A is 10, B is 11, C is 12, D is 13, E is 14 and F is 15.

Number System Conversions

There are three types of conversion.

Decimal to Hexadecimal Conversion

While converting, the remainders 10, 11, 12, 13, 14, 15 are written as their equivalents A, B, C, D, E, F.

Example 1

Decimal Number is : (12345)10
Binary Number is : (11000000111001)2
Octal Number is : (30071)8
Hexadecimal Number is : (3039)16

Example 2

Decimal Number is : (725)10
Hexadecimal Number is : (2D5)16
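The conversions in the two examples can be reproduced with a short sketch of the repeated-division method (the function name `to_base` is ours, not part of the notes):

```python
DIGITS = "0123456789ABCDEF"

def to_base(n: int, base: int) -> str:
    # Repeatedly divide by the base; the remainders, read in reverse
    # order, are the digits (remainders 10-15 become A-F).
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(12345, 2))   # 11000000111001
print(to_base(12345, 8))   # 30071
print(to_base(12345, 16))  # 3039
print(to_base(725, 16))    # 2D5
```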
Binary Arithmetic

Binary arithmetic is an essential part of all digital computers and many other digital systems.

Binary Addition

Binary addition is a key for binary subtraction, multiplication and division. There are four rules of binary addition:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0, carry 1

Binary Subtraction

Subtraction and Borrow: these two words will be used very frequently for binary subtraction.

Complement Arithmetic

1's complement : the 1's complement of a binary number is obtained by changing every 0 to 1 and every 1 to 0.

2's complement : the 2's complement of a binary number is obtained by adding 1 to its 1's complement.
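As a sketch of the rules above: the 1's complement inverts every bit, and the 2's complement adds 1 to that result. The helpers below work on fixed-width bit strings and are our own illustration, not part of the notes:

```python
def ones_complement(bits: str) -> str:
    # Invert every bit: 0 -> 1, 1 -> 0.
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    # Add 1 to the 1's complement, keeping the same bit width.
    width = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (1 << width)
    return format(value, f"0{width}b")

print(ones_complement("00000101"))  # 11111010
print(twos_complement("00000101"))  # 11111011  (i.e. -5 in 8-bit two's complement)
```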
1.4.2 ASCII (American Standard Code for Information Interchange)

ASCII stands for "American Standard Code for Information Interchange." ASCII character encoding provides a standard way to represent characters using numeric codes. These include upper- and lower-case English letters, numbers, and punctuation symbols.

ASCII uses 7 bits to represent each character. For example, a capital "T" is represented by the number 84 and a lowercase "t" is represented by 116. Other keyboard keys are also mapped to standard ASCII values. For example, the Escape (ESC) key is represented as 27 and the Delete (DEL) key is represented as 127. ASCII codes may also be displayed as hexadecimal values instead of the decimal numbers listed above. For example, the ASCII value of the Escape key in hexadecimal is "1B" and the hexadecimal value of the Delete key is "7F".

It also means you can easily print basic text and numbers on any printer, with the notable exception of PostScript printers. If you are working in the MacWrite word processing application on the Mac and you need to send your file to someone who uses WordStar on the PC, you can save the document as an ASCII file (which is the same as text-only). After you transfer the file to the PC (on a disk or via a cable or modem), the other person will be able to open the file in WordStar.

In ASCII, each character has a number which the computer or printer uses to represent that character. For instance, a capital A is number 65 in the code. Although there are 256 possible characters in the code, ASCII standardizes only 128 characters, and the first 32 of these are "control characters," which are supposed to be used to control the computer and do not appear on the screen. That leaves only enough code numbers for all the capital and lowercase letters, the digits, and the most common punctuation marks.

Another ASCII limitation is that the code doesn't include any information about the way the text should look (its format). ASCII only tells you which characters the text contains. If you save a formatted document as ASCII, you will lose all the font formatting, such as the typeface changes, the italics, the bolds, and even special characters like ©, TM, or ®. Usually carriage returns and tabs are saved.

Unlike some earlier character encodings that used fewer than 7 bits, ASCII does have room for both the uppercase and lowercase letters and all normal punctuation characters. But, as it was designed to encode American English, it does not include the accented characters and ligatures required by many European languages (nor the UK pound sign £). These characters are provided in some 8-bit EXTENDED ASCII character sets, including ISO LATIN 1 or ANSI 1, but not all software can display 8-bit characters, and some serial communications channels still remove the eighth bit from each character. Despite its shortcomings, ASCII is still important as the 'lowest common denominator' for representing textual data, which almost any computer in the world can display.

The ASCII standard was certified by ANSI in 1977, and the ISO adopted an almost identical code as ISO 646.

1.4.3 BINARY-CODED DECIMAL (BCD)

Definition

In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display and faster decimal calculations. Its drawbacks are the increased complexity of circuits needed to implement mathematical operations and a relatively inefficient encoding: it occupies more space than a pure binary representation. Even though the importance of BCD has diminished, it is still widely used in financial, commercial, and industrial applications.

In the case of BCD, the binary number formed by four binary digits is the equivalent code for the given decimal digit. In BCD we can use only the binary numbers from 0000 to 1001, which are the binary equivalents of the decimal digits 0 to 9 respectively. If a number has a single decimal digit, then its equivalent Binary Coded Decimal is the respective four binary digits of that decimal digit; if the number contains two decimal digits, then its equivalent BCD is the respective eight binary digits of the given decimal number: four for the first decimal digit and the next four for the second decimal digit. For example, the decimal digit 4 is 0100 in BCD, so (44)10 is 0100 0100 in BCD.
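The ASCII code values quoted in this section, and the digit-by-digit BCD encoding just described, can both be checked with a short sketch (the helper `to_bcd` is ours):

```python
# ASCII: characters map to small integers; Python's ord() exposes the code.
print(ord("A"))                           # 65
print(ord("T"), ord("t"))                 # 84 116
print(format(27, "X"), format(127, "X"))  # 1B 7F (Escape and Delete in hex)

def to_bcd(n: int) -> str:
    # BCD: each decimal digit becomes its own 4-bit binary group.
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(4))   # 0100
print(to_bcd(44))  # 0100 0100
```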
… some support for modern encoding languages, they are able to keep up with, and even outperform, devices from other brands. However, most machines and operating systems depend on ASCII and Unicode as their default encoding formats.

1.5 Language Evolution

1.5.1 Generation languages

A generation language may refer to any of the following:

4. Fourth-generation languages, or 4GL, are commonly used in database programming and scripts; examples include Perl, PHP, Python, Ruby, and SQL.

5. Fifth-generation languages, or 5GL, are programming languages that contain visual tools to help develop a program. Examples of fifth-generation languages include Mercury, OPS5, and Prolog.

1.5.2 Programming Languages

To write a program for a computer, we must use a computer language. A computer language is a set of predefined words that are combined into a program according to predefined rules (syntax). Over the years, computer languages have evolved from machine language to high-level languages.

1.5.2.1 Machine Languages

In the earliest days of computers, the only programming languages available were machine languages. Each computer had its own machine language, which was made of streams of 0s and 1s. Machine language is the only language understood by the computer hardware, which is made of electronic switches with two states: off (representing 0) and on (representing 1).

1.5.2.2 Assembly Languages

The next evolution in programming came with the idea of replacing binary code for instructions and addresses with symbols or mnemonics. Because they used symbols, these languages were first known as symbolic languages. The set of these mnemonic languages were later referred to as assembly languages. The assembly language for our hypothetical computer to replace the machine language in Table 9.2 is shown in Program 9.1. A special program called an assembler is used to translate code in assembly language into machine language.

1.5.2.3 High-level Languages

Although assembly languages greatly improved programming efficiency, they still required programmers to concentrate on the hardware they were using. Working with symbolic languages was also very tedious, because each machine instruction had to be individually coded. The desire to improve programmer efficiency and to change the focus from the computer to the problem being solved led to the development of high-level languages. High-level languages are portable to many different computers, allowing the programmer to concentrate on the application rather than the intricacies of the computer's hardware.
Over the years, various languages, most notably BASIC, COBOL, Pascal, Ada, C, C++ and Java, were developed.

1.5.3 Characteristics of Programming Languages

2. Naturalness.

Much of the understandability of a high-level programming language comes from the ease with which one can express an algorithm in that language. Some languages are clearly more suitable than others in this regard for differing problem domains.
1.6 Translators

1.6.1 ASSEMBLER, COMPILER AND INTERPRETER

As stated earlier, any program that is not written in machine language has to be translated into machine language before it is executed by the computer. The means used for translation are themselves computer programs. There are three types of translator programs, i.e. Assemblers, Compilers and Interpreters.

Compiler: … the program, whenever needed, and the program has to be recompiled. The process is repeated until the program is mistake-free and translated to an object code. Thus the job of a compiler includes the following:

1. To translate the HLL source program to machine codes.
2. To trace variables in the program.
3. To include linkage for subroutines.
4. To allocate memory for storage of program and variables.
5. To generate error messages, if there are errors in the program.

Interpreter:
1.6.2 Source Program vs Object Program

Source programs and object programs are two types of programs found in computer programming. A source program is typically a program with human-readable instructions written by a programmer. An object program is typically a machine-executable program created by compiling a source program.

What is a Source Program?

A source program is code written by a programmer, usually in a higher-level language, which is easily readable by humans. Source programs usually contain meaningful variable names and helpful comments to make them more readable. A source program cannot be directly executed on a machine. In order to execute it, the source program is compiled using a compiler (a program which transforms source programs into executable code). Alternatively, using an interpreter (a program that executes a source program line by line without pre-compilation), a source program may be executed on the fly. Visual Basic is an example of a compiled language, while Java is an example of an interpreted language. Visual Basic source files (.vb files) are compiled to .exe code, while Java source files (.java files) are first compiled (using the javac command) to bytecode (an object code contained in .class files) and then interpreted using the Java interpreter (using the java command). When software applications are distributed, typically they will not include source files. However, if the application is open source, the source is also distributed, and the user gets to see and modify the source code as well.

What is an Object Program?

Besides machine instructions, object programs may contain symbols, stack information, relocation and profiling information. Since they contain instructions in machine code, they are not easily readable by humans. But sometimes "object program" refers to an intermediate object between source and executable files. Tools known as linkers are used to link a set of objects into an executable (e.g. in the C language). As mentioned above, .exe files and bytecode files are object files produced when using Visual Basic and Java respectively. .exe files are directly executable on the Windows platform, while bytecode files need an interpreter for execution. Most software applications are distributed with the object or executable files only. Object or executable files can be converted back to their original source files by decompilation. For example, Java .class files (bytecode) can be decompiled using decompiler tools into the original .java files.

What is the difference between a Source Program and an Object Program?

A source program is a program written by a programmer, while an object program is generated by a compiler using one or more source files as input. Source files are written in higher-level languages such as Java or C (so they are easily readable by humans), but object programs usually contain lower-level languages such as assembly or machine code (so they are not human readable). Source files can be either compiled or interpreted for execution. Decompilers can be used to convert object programs back to their original source file(s). It is important to note that the terms source program and object program are used as relative terms. If you take a program transformation program (like a compiler), what goes in is a source program and what comes out is an object program. Therefore an object program produced by one tool can become a source file for another tool.
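Python itself offers a compact illustration of the source/object distinction: the interpreter compiles human-readable source into a bytecode object, which a disassembler can turn back into a readable listing, much as a decompiler does. This is an analogy of ours, not the Visual Basic / Java example from the text:

```python
import dis

source = "x = 2 + 3"                     # source program: human readable
obj = compile(source, "<demo>", "exec")  # "object program": Python bytecode

print(type(obj).__name__)  # code -- not meant for human reading
dis.dis(obj)               # disassemble back into a readable listing

namespace = {}
exec(obj, namespace)       # only the compiled object is actually executed
print(namespace["x"])      # 5
```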
Decimal to octal conversion: repeated division by 8.

Binary to hexadecimal conversion: the binary digits are grouped into groups of four bits, and each group is converted to its equivalent hex digit. Zeros are added as needed to complete a four-bit group.

Binary to octal conversion: the bits of the binary number are grouped into groups of 3 bits starting at the LSB, then each group is converted to its octal equivalent (see table).

Advantages of the octal and hexadecimal systems:

1. Hex and octal numbers are used as a "short hand" way to represent strings of bits.
2. Writing long binary numbers is error prone; in hex and octal there are fewer errors.
3. The octal and hexadecimal number systems are both used in memory addressing and microprocessor technology.
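The grouping rules above can be sketched directly (the helper names are ours, and the test value is the binary form of 12345 from the earlier examples):

```python
def bin_to_hex(bits: str) -> str:
    # Pad with leading zeros to a multiple of 4 bits, then map each
    # 4-bit group to one hex digit.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    return "".join(format(int(bits[i:i + 4], 2), "X")
                   for i in range(0, len(bits), 4))

def bin_to_oct(bits: str) -> str:
    # Same idea with 3-bit groups, starting at the LSB.
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    return "".join(str(int(bits[i:i + 3], 2))
                   for i in range(0, len(bits), 3))

print(bin_to_hex("11000000111001"))  # 3039
print(bin_to_oct("11000000111001"))  # 30071
```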