Introduction to Computer Science


Chapter One

Overview of Computers

I. Some Notes about Computers, their Communication and Security

1.1 What is a computer?

A computer is an electronic machine that performs tasks, such as calculations or electronic communication, under the control of a set of instructions called a program. Programs usually reside within the computer and are retrieved and processed by the computer’s electronics. The program results are stored or routed to output devices, such as video display monitors or printers. Computers perform a wide variety of activities reliably, accurately, and quickly.

1.2 Characteristics of computers

Storage capacity
Computers save space and money by storing very large amounts of data. Data on paper that once filled the shelves of a registrar’s office can be held in a single computer with a large storage capacity. This saves space as well as the cost of paper, ink, and shelving. It also makes it easy to keep a backup (copy) of the data somewhere else for security purposes. Nowadays computers can store data in multiples of tens of gigabytes, and you can also install multiple hard disks in a computer to increase its storage capacity.
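To give a feel for these capacities, the sketch below estimates how many pages of plain text fit on disks of various sizes. The figure of roughly 2 KB per typed page is an illustrative assumption, not a value from the text.

```python
# Rough illustration of storage capacity: how many pages of plain text
# fit on storage of various sizes, assuming ~2 KB (about 2,000
# characters) per typed page -- an illustrative figure only.
BYTES_PER_PAGE = 2 * 1024

for label, size_bytes in [("1 MB", 1024**2), ("1 GB", 1024**3), ("40 GB", 40 * 1024**3)]:
    pages = size_bytes // BYTES_PER_PAGE
    print(f"{label}: roughly {pages:,} pages of text")
```

Even a single gigabyte, by this estimate, holds about half a million pages, which is why a single machine can replace shelves of paper records.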

Speed
Modern computers process tasks in very short periods of time; individual operations take nanoseconds or less. As a result, they can carry out highly repetitive activities in a fraction of the time a person would need.

Accuracy
Once well programmed, computers accomplish tasks accurately. The correctness of the set of instructions that drives the system determines the accuracy of its activities.

Reliability
Nowadays computers are used in sensitive areas that demand very high reliability. For example, hospitals use computers in patient diagnosis and in monitoring operations. Computers also monitor industrial processes whose failure could determine the success or failure of a company, and they control nuclear plants, where a malfunction could have catastrophic consequences. In general, computers have become reliable devices even in life-and-death situations.

Note that the failure of computers in sensitive areas also causes very great destruction.

Versatility
Computers can accomplish many different types of tasks, at the same time or at different times; for this reason they are called versatile. For example, you can listen to music while writing some text. Computers can be used for transaction processing, for design, for communication, and so on.

1.3 Uses of Computers

People use computers in many ways. In business, computers track inventories with bar codes
and scanners, check the credit status of customers, and transfer funds electronically. In
homes, tiny computers embedded in the electronic circuitry of most appliances control the
indoor temperature, operate home security systems, tell the time, and turn videocassette
recorders (VCRs) on and off. Computers in automobiles regulate the flow of fuel, thereby
increasing gas mileage. Computers also entertain, creating digitized sound on stereo systems
or computer-animated features from a digitally encoded laser disc. Computer programs, or
applications, exist to aid every level of education, from programs that teach simple addition
or sentence construction to programs that teach advanced calculus. Educators use computers
to track grades and communicate with students; with computer-controlled projection units,
they can add graphics, sound, and animation to their communications (Computer-Aided
Instruction). Computers are used extensively in scientific research to solve mathematical
problems, investigate complicated data, or model systems that are too costly or impractical to
build, such as testing the air flow around the next generation of aircraft. The military employs
computers in sophisticated communications to encode and unscramble messages, and to keep
track of personnel and supplies. Nowadays computers are used almost everywhere.

1.4 Generations of Computers

First Generation (1950s)


In the first computers, CPUs were made of vacuum tubes and electric relays rather than
microscopic transistors on computer chips. These early computers were immense and needed
a great deal of power compared to today’s microprocessor-driven computers. The first
general-purpose electronic computer, the ENIAC (Electronic Numerical Integrator And
Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were
used to build ENIAC’s CPU and input/output circuits. Between 1946 and 1956 all computers
had bulky CPUs that consumed massive amounts of energy and needed continual
maintenance, because the vacuum tubes burned out frequently and had to be replaced. For these reasons, computers of this generation were restricted to scientific and research organizations.

Second Generation (early 1960s)


A solution to the problems posed by vacuum tubes came in 1947, when American physicists
John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new
electronic switching and amplifying device called the transistor. The transistor had the
potential to work faster and more reliably and to consume much less power than a vacuum
tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took
nine years before they were used in a commercial computer. The first
commercially available computer to use transistors in its circuitry was the UNIVAC
(UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.
In the second generation, programming languages such as FORTRAN and COBOL were introduced, and computers came to be used not only in scientific areas but also in business organizations.

Third Generation (late 1960s)


Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments
demonstrated that it was possible to integrate the various components of a CPU onto a single
piece of silicon. These computer chips were called integrated circuits (ICs) because they
combined multiple electronic circuits on the same chip. Subsequent design and
manufacturing advances allowed transistor densities on integrated circuits to increase
tremendously. The first ICs had only tens of transistors per chip compared to the 3 million to
5 million transistors per chip common on today’s CPUs.

In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the
arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information
used in computers. Multiples of bits are used to describe the largest-size piece of data that a
CPU can manipulate at one time.) However, a fully working integrated circuit computer
required additional circuits to provide register storage, data flow control, and memory and
input/output paths. In the third generation, thanks to the introduction of ICs, computers became faster and more sophisticated software was developed.

Fourth Generation (1970 to present)


In fourth generation computers, the number of transistors on a single chip increased enormously. Large-scale integration (LSI), very-large-scale integration (VLSI), and ultra-large-scale integration (ULSI) were introduced during this generation, and the speed of computers rose dramatically.

An integrated circuit is a tiny electronic circuit used to perform a specific electronic function, such
as amplification; it is usually combined with other components to form a more complex
system. It is formed as a single unit by diffusing impurities into single-crystal silicon, which
then serves as a semiconductor material, or by etching the silicon by means of electron
beams. Several hundred identical integrated circuits (ICs) are made at a time on a thin wafer
several centimeters wide, and the wafer is subsequently sliced into individual ICs called
chips. In large-scale integration (LSI), as many as 5000 circuit elements, such as resistors and
transistors, are combined in a square of silicon measuring about 1.3 cm (.5 in) on a side.
Hundreds of these integrated circuits can be arrayed on a silicon wafer 8 to 15 cm (3 to 6 in)
in diameter. Larger-scale integration can produce a silicon chip with millions of circuit
elements. Individual circuit elements on a chip are interconnected by thin metal or
semiconductor films, which are insulated from the rest of the circuit by thin dielectric layers.
Chips are assembled into packages containing external electrical leads to facilitate insertion
into printed circuit boards for interconnection with other circuits or components.
During recent years, the functional capability of ICs has steadily increased, and the cost of
the functions they perform has steadily decreased. This has produced revolutionary changes
in electronic equipment—vastly increased functional capability and reliability combined with
great reductions in size, physical complexity, and power consumption. Computer technology,
in particular, has benefited greatly.

VLSI (very-large-scale integration) is the term for integrated circuits manufactured with
technology that makes it possible to fit hundreds of thousands of components, such as
transistors, on a single integrated circuit, or microchip. Some of the components on some
VLSI microchips may be so small that they cannot be seen without a microscope. Microchips
with even more components on them are called ULSI (ultra-large-scale integration)
integrated circuits.

ULSI (ultra-large-scale integration) is the term used for integrated circuits manufactured with technology that makes it possible to fit over a million components, such as transistors, on a single integrated circuit, or microchip. All components on a ULSI microchip are so small that they cannot be seen without a microscope. Thanks to these scales of integration, today’s computers can accomplish complex tasks very quickly, and sophisticated software has been developed to exploit this capability.

Fifth generation computers (future computers)


Fifth generation computers are characterized by advances in software development. They are expected to communicate with human beings in natural (human) language and to reason like intelligent human beings. These goals are to be achieved by developing sophisticated software.

1.5 Types of Computers

I. With respect to Method of operation

Digital and Analog Computers

Computers can be either digital or analog. Virtually all modern computers are digital. Digital refers to the processes in computers that manipulate binary numbers (0s or 1s), which represent switches that are turned on or off by electrical current. A bit can have the value 0 or the value 1, but nothing in between. Analog refers to circuits or numerical values that have a continuous range. Both 0 and 1 can be represented by analog computers, but so can 0.5, 1.5, or a number like π (approximately 3.14).

A desk lamp can serve as an example of the difference between analog and digital. If the
lamp has a simple on/off switch, then the lamp system is digital, because the lamp either
produces light at a given moment or it does not. If a dimmer replaces the on/off switch, then
the lamp is analog, because the amount of light can vary continuously from on to off and all
intensities in between.

Analog computer systems were the first type to be produced. A popular analog computer
used in the 20th century was the slide rule. To perform calculations with a slide rule, the user
slides a narrow, gauged wooden strip inside a ruler-like holder. Because the sliding is
continuous and there is no mechanism to stop at any exact values, the slide rule is analog.
New interest has been shown recently in analog computers, particularly in areas such as
neural networks. These are specialized computer designs that attempt to mimic neurons of the
brain. They can be built to respond to continuous electrical signals. Most modern computers,
however, are digital machines whose components have a finite number of
states—for example, the 0 or 1, or on or off bits. These bits can be combined to denote
information such as numbers, letters, graphics, sound, and program instructions.
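The idea that the same bits can denote different kinds of information, depending on interpretation, can be sketched in a few lines. Here, an eight-bit pattern is read both as an unsigned integer and as an ASCII character.

```python
# The same pattern of bits can denote a number or a letter, depending
# on how it is interpreted: here, 8 bits read as an unsigned integer
# and then as the ASCII character with that code.
bits = "01000001"

as_number = int(bits, 2)    # binary string -> integer
as_letter = chr(as_number)  # integer -> ASCII character

print(as_number, as_letter)  # 65 A
```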

Hybrid computers are both digital and analog: they can measure continuous flows of data (acting as analog computers) and manipulate discrete values (0s and 1s).

II. With respect to physical size, speed, storage capacity, and price

Microcomputers

A microcomputer is a desktop- or notebook-sized computing device that uses a microprocessor as its central processing unit, or CPU. Microcomputers are also called personal computers (PCs), home computers, small-business computers, and micros. The smallest and most compact are palmtops; laptops are also small (about the size of a briefcase). When they first appeared, they
were considered single-user devices, and they were capable of handling only four, eight, or
16 bits of information at one time. More recently the distinction between microcomputers and
large, mainframe computers (as well as the smaller mainframe-type systems called
minicomputers) has become blurred, as newer microcomputer models have increased the
speed and data-handling capabilities of their CPUs into the 32-bit, multi-user range.

Integrated circuits (ICs) make the microcomputer possible; without them, individual circuits and their components would take up far too much space for a compact computer design. Also called a chip, the typical IC consists of elements such as
resistors, capacitors, and transistors packed on a single piece of silicon. In smaller, more
densely packed ICs, circuit elements may be only a few atoms in size, which makes it
possible to create sophisticated computers the size of notebooks. A typical computer circuit
board features many integrated circuits connected together.

Minicomputers

Minicomputers, the first of which entered general business use in the early 1960s, are now
widespread in commerce and government. Terminals linked to the central processing unit
(CPU) are under the direct control of the individual user rather than centralized staff. In
recent years, however, it is the microcomputer, or personal computer (PC), that has come to
play the principal role in most office workplaces.

A minicomputer is a mid-level computer built to perform complex computations while dealing efficiently with a high level of input and output from users connected via terminals.
Minicomputers also frequently connect to other minicomputers on a network and distribute
processing among all the attached machines. Minicomputers are used heavily in transaction-
processing applications and as interfaces between mainframe computer systems and wide
area networks.

Mainframe computers
Mainframe computers have more memory, speed, and capabilities than workstations and are
usually shared by multiple users through a series of interconnected computers. They control
businesses and industrial facilities and are used for scientific research.

Mainframe computers are large, very expensive, high-speed machines that require trained operators as well as a special temperature-regulated facility to prevent overheating. Use of
these machines today is limited to large organizations with heavy-volume data-processing
requirements. Time-sharing—allowing more than one company to use the same mainframe
for a fee—was instituted to divide the cost of the equipment among several users while
ensuring that the equipment is utilized to the maximum extent.

Mainframes with remote terminals, each with its own monitor, became available in the mid-
1970s and allowed for simultaneous input by many users. With the advent of the
minicomputer, however, a far less expensive alternative became available. The transistor and
microelectronics made manufacture of these smaller, less-complex machines practicable.

Supercomputers

A supercomputer is a computer designed to perform calculations as fast as current technology allows, and it is used to solve extremely complex problems. Supercomputers are used to design
automobiles, aircraft, and spacecraft; to forecast the weather and global climate; to design
new drugs and chemical compounds; and to make calculations that help scientists understand
the properties of particles that make up atoms as well as the behavior and evolution of stars
and galaxies. Supercomputers are also used extensively by the military for weapons and
defense systems research, and for encrypting and decoding sensitive intelligence information.

Supercomputers are different from other types of computers in that they are designed to work
on a single problem at a time, devoting all their resources to the solution of the problem.
Other powerful computers such as mainframes and workstations are specifically designed so
that they can work on numerous problems, and support numerous users, simultaneously.
Because of their high cost—usually in the hundreds of thousands to millions of dollars—
supercomputers are shared resources. Supercomputers are so expensive that usually only
large companies, universities, and government agencies and laboratories can afford them.

Chapter Two

2. Organization of a Computer System

A computer system has two major parts: the computer hardware and the computer software. Hardware is the physical components of the computer system, and software is the set of instructions that drives the hardware to accomplish its tasks.

2.1 Computer Hardware


2.1.1 Input Devices

Input hardware consists of external devices—that is, components outside of the computer's
CPU—that provide information and instructions to the computer. A light pen is a stylus with
a light-sensitive tip that is used to draw directly on a computer's video screen or to select
information on the screen by pressing a clip in the light pen or by pressing the light pen
against the surface of the screen. The pen contains light sensors that identify which portion of
the screen it is passed over. A mouse is a pointing device designed to be gripped by one hand. It has a detection device (usually a
ball) on the bottom that enables the user to control the motion of an on-screen pointer, or
cursor, by moving the mouse on a flat surface. As the device moves across the surface, the
cursor moves across the screen. To select items or choose commands on the screen, the user
presses a button on the mouse. A joystick is a pointing device composed of a lever that
moves in multiple directions to navigate a cursor or other graphical object on a computer
screen. A keyboard is a typewriter-like device that allows the user to type in text and
commands to the computer. Some keyboards have special function keys or integrated
pointing devices, such as a trackball or touch-sensitive regions that let the user's finger
motions move an on-screen cursor.

An optical scanner uses light-sensing equipment to convert images such as a picture or text
into electronic signals that can be manipulated by a computer. For example, a photograph can
be scanned into a computer and then included in a text document created on that computer.
The two most common scanner types are the flatbed scanner, which is similar to an office
photocopier, and the handheld scanner, which is passed manually across the image to be
processed. A microphone is a device for converting sound into signals that can then be
stored, manipulated, and played back by the computer. A voice recognition module is a
device that converts spoken words into information that the computer can recognize and
process.
A modem, which stands for modulator-demodulator, is a device that connects a computer to
a telephone line or cable television network and allows information to be transmitted to or
received from another computer. Each computer that sends or receives information must be
connected to a modem. The digital signal sent from one computer is converted by the modem
into an analog signal, which is then transmitted by telephone lines or television cables to the
receiving modem, which converts the signal back into a digital signal that the receiving
computer can understand.
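The modulation idea can be sketched as a toy frequency-shift keying scheme, in which each bit maps to one of two tone frequencies. Real modems use far more sophisticated schemes, and the frequencies below are illustrative (loosely inspired by the old Bell 103 standard).

```python
# Toy sketch of modulation/demodulation: frequency-shift keying maps
# each bit to one of two tone frequencies. The frequencies are
# illustrative; real modems use more sophisticated schemes.
FREQ_FOR_BIT = {0: 1070, 1: 1270}  # Hz

def modulate(bits):
    """Digital -> 'analog': turn each bit into a tone frequency."""
    return [FREQ_FOR_BIT[b] for b in bits]

def demodulate(tones):
    """'Analog' -> digital: recover each bit from its tone frequency."""
    freq_to_bit = {f: b for b, f in FREQ_FOR_BIT.items()}
    return [freq_to_bit[t] for t in tones]

data = [1, 0, 1, 1]
assert demodulate(modulate(data)) == data  # round trip recovers the bits
```

The sending modem plays the role of `modulate` and the receiving modem the role of `demodulate`, so the round trip mirrors the digital-analog-digital conversion described above.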
Disk drives also help to enter data into the computer system, so they can be considered input devices.

2.1.2 Storage Devices


2.1.2.1 Internal Storage

Computer storage devices may be divided into two broad categories known as internal and
external. Internal memory operates at the highest speed and can be accessed directly by the
central processing unit (CPU)—the main electronic circuitry within a computer that processes
information. Internal memory is contained on computer chips and uses electronic circuits to
store information. External memory consists of storage on peripheral devices that are slower
than internal memories but offer lower cost and the ability to hold data after the computer’s
power has been turned off. External memory uses inexpensive mass-storage devices such as
magnetic hard drives.

Internal memory is also known as random access memory (RAM) or read-only memory
(ROM). Information stored in RAM can be accessed in any order, and may be erased or
written over. Information stored in ROM may also be random-access, in that it may be
accessed in any order, but the information recorded on ROM is usually permanent and cannot
be erased or written over.

RAM
Random access memory is also called main memory because it is the primary memory that
the CPU uses when processing information. The electronic circuits used to construct this
main internal RAM can be classified as dynamic RAM (DRAM), synchronized dynamic
RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all involve different
ways of using transistors and capacitors to store data. In DRAM or SDRAM, the circuit for
each bit consists of a transistor, which acts as a switch, and a capacitor, a device that can
store a charge. To store the binary value 1 in a bit, DRAM places an electric charge on the
capacitor. To store the binary value 0, DRAM removes all electric charge from the capacitor.
The transistor is used to switch the charge onto the capacitor. When it is turned on, the
transistor acts like a closed switch that allows electric current to flow into the capacitor and
build up a charge. The transistor is then turned off, meaning that it acts like an open switch,
leaving the charge on the capacitor. To store a 0, the charge is drained from the capacitor
while the transistor is on, and then the transistor is turned off, leaving the capacitor
uncharged. To read a value in a DRAM bit location, a detector circuit determines whether a
charge is present or absent on the relevant capacitor.
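As a rough sketch of the mechanism just described, the toy class below models one DRAM bit: a capacitor holds the charge, a transistor gates access to it, and a detector decides whether charge is present. The class, leak rate, and detection threshold are invented for illustration; this is not a hardware model.

```python
# Toy model of one DRAM bit: a capacitor stores charge, and a detector
# circuit decides whether charge is present. All numbers are invented
# for illustration.
class DramBit:
    def __init__(self):
        self.charge = 0.0            # capacitor charge: 1.0 = full, 0.0 = empty

    def write(self, value):
        # Transistor on -> charge or drain the capacitor -> transistor off.
        self.charge = 1.0 if value else 0.0

    def leak(self):
        # Capacitors are imperfect: charge slowly leaks away.
        self.charge *= 0.8

    def read(self):
        # Detector circuit: is enough charge present to count as a 1?
        return 1 if self.charge > 0.5 else 0

    def refresh(self):
        # Refresh circuitry reads the value and rewrites it at full strength.
        self.write(self.read())

bit = DramBit()
bit.write(1)
for _ in range(4):
    bit.leak()
print(bit.read())  # 0 -- without refresh, the stored 1 is lost

bit.write(1)
for _ in range(4):
    bit.leak()
    bit.refresh()  # periodic refresh restores the full charge
print(bit.read())  # 1 -- refreshing preserves the stored value
```

Running the two loops shows why DRAM is called dynamic: without the periodic refresh, the leaking capacitor eventually drops below the detection threshold and the stored value is lost.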

DRAM is called dynamic because it is continually refreshed. The memory chips themselves
cannot hold values over long periods of time. Because capacitors are imperfect, the charge
slowly leaks out of them, which results in loss of the stored data. Thus, a DRAM memory
system contains additional circuitry that periodically reads and rewrites each data value. This
replaces the charge on the capacitors, a process known as refreshing memory. The major
difference between SDRAM and DRAM arises from the way in which refresh circuitry is
created. DRAM contains separate, independent circuitry to refresh memory. The refresh
circuitry in SDRAM is synchronized to use the same hardware clock as the CPU. The
hardware clock sends a constant stream of pulses through the CPU’s circuitry. Synchronizing
the refresh circuitry with the hardware clock results in less duplication of electronics and
better access coordination between the CPU and the refresh circuits.

In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value
without the need for refresh. The chief advantage of SRAM lies in its speed. A computer can
access data in SRAM more quickly than it can access data in DRAM or SDRAM. However,
the SRAM circuitry draws more power and generates more heat than DRAM or SDRAM.
The circuitry for an SRAM bit is also larger, which means that an SRAM memory chip holds fewer bits than a DRAM chip of the same size. Therefore, SRAM is used when access speed
is more important than large memory capacity or low power consumption.

The time it takes the CPU to transfer data to or from memory is particularly important
because it determines the overall performance of the computer. The time required to read or
write one bit is known as the memory access time. Current DRAM and SDRAM access times
are between 30 and 80 nanoseconds (billionths of a second). SRAM access times are
typically four times faster than DRAM.

The internal RAM on a computer is divided into locations, each of which has a unique
numerical address associated with it. In some computers a memory address refers directly to
a single byte in memory, while in others, an address specifies a group of four bytes
called a word. Computers also exist in which a word consists of two or eight bytes, or in
which a byte consists of six or ten bits.
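The relationship between address width and memory capacity can be worked out directly: an n-bit address distinguishes 2^n locations, and each location may hold one byte or a multi-byte word. The function below is a hypothetical helper for this arithmetic, not part of any real memory system.

```python
# Sketch of the addressing arithmetic: an n-bit address selects one of
# 2**n locations, and each location holds some number of bytes.
# The function name and figures are illustrative.
def addressable_bytes(address_bits, bytes_per_location=1):
    return (2 ** address_bits) * bytes_per_location

print(addressable_bytes(16))     # 16-bit addresses, byte-addressed: 65,536 bytes
print(addressable_bytes(16, 4))  # same addresses, 4-byte words: 262,144 bytes
```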

When a computer performs an arithmetic operation, such as addition or multiplication, the numbers used in the operation can be found in memory. The instruction code that tells the
computer which operation to perform also specifies which memory address or addresses to
access. An address is sent from the CPU to the main memory (RAM) over a set of wires
called an address bus. Control circuits in the memory use the address to select the bits at the
specified location in RAM and send a copy of the data back to the CPU over another set of
wires called a data bus. Inside the CPU, the data passes through circuits called the data path
to the circuits that perform the arithmetic operation. The exact details depend on the model of
the CPU. For example, some CPUs use an intermediate step in which the data is first loaded
into a high-speed memory device within the CPU called a register.

ROM
Read-only memory is the other type of internal memory. ROM memory is used to store items
that the computer needs to execute when it is first turned on. For example, the ROM memory
on a PC contains a basic set of instructions, called the basic input-output system (BIOS). The
PC uses BIOS to start up the operating system. BIOS is stored on computer chips in a way
that causes the information to remain even when power is turned off.
Information in ROM is usually permanent and cannot be erased or written over easily. A
ROM is permanent if the information cannot be changed—once the ROM has been created,
information can be retrieved but not changed. Newer technologies allow ROMs to be semi-
permanent—that is, the information can be changed, but it takes several seconds to make the
change. For example, flash memory acts like a ROM because values remain stored in
memory, but the values can be changed.

2.1.2.2 External Storage

External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device, such as a computer's hard drive, uses a
surface coated with material that can be magnetized in two possible ways. The surface rotates
under a small electromagnet that magnetizes each spot on the surface to record a 0 or 1. To
retrieve data, the surface passes under a sensor that determines whether the magnetism was
set for a 0 or 1. Optical storage devices such as a compact disc (CD) player use lasers to store
and retrieve information from a plastic disk. Magneto-optical memory devices use a
combination of optical storage and retrieval technology coupled with a magnetic medium.

Magnetic Storage Media

External magnetic media include magnetic tape, a hard disk, and a floppy disk. Magnetic tape
is a form of external computer memory used primarily for backup storage. Like the surface
on a magnetic disk, the surface of tape is coated with a material that can be magnetized. As
the tape passes over an electromagnet, individual bits are magnetically encoded. Computer
systems using magnetic tape storage devices employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes, and helical-scan tapes (similar to video tape).

Another form of magnetic memory uses a spinning disk coated with magnetic material. As
the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the
surface of the disk, reading and writing magnetic spots in concentric circles called tracks.
Magnetic disks are classified as either hard or floppy, depending on the flexibility of the
material from which they are made. A floppy disk is made of flexible plastic with small pieces of a magnetic material embedded in its surface. The read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of rigid metal, with the read-write head flying just above its surface on a cushion of air to prevent wear.

Optical Storage Media

Optical external memory uses a laser to scan a spinning reflective disk in which the presence or absence of nonreflective pits indicates 1s or 0s. This is the same technology employed in the audio CD. Because its contents are permanently stored on the disc when it is manufactured, it is known as compact disc-read-only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written permanently on it by a computer.

Magneto-Optical Media

Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a
magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of
the disk, heating it up slightly. This allows the magnetic write-head to change the physical
orientation of small grains of magnetic material (actually tiny crystals) on the surface of the
disk. These tiny crystals reflect light differently depending on their orientation. By aligning
the crystals in one direction a 0 can be stored, while aligning the crystals in the opposite
direction stores a 1. Another, separate, low-power laser is used to read data from the disk in a
way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they
can be read and written to. They are, however, more expensive than CD-ROMs and are used
mostly in industrial applications. MO devices are not popular consumer products.

2.1.3 Processing Devices

The central processing unit (CPU) is the microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. At a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units, such as the arithmetic and logic unit (ALU), that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer. Registers handle the temporary storage in these activities.

The Pentium microprocessor, manufactured by the Intel Corporation, contains more than three million transistors and is used as the central processing unit in a variety of personal computers.

HOW A CPU WORKS

A CPU is similar to a calculator, only much more powerful. The main function of the CPU is
to perform arithmetic and logical operations on data taken from memory or on information
entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled
by a list of software instructions, called a computer program. Software instructions entering
the CPU originate in some form of memory storage device such as a hard disk, floppy disk,
CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random
access memory (RAM), where each instruction is given a unique address, or memory
location. The CPU can access specific pieces of data in RAM by specifying the address of the
data that it wants.

As a program is executed, data flow from RAM through an interface unit of wires called the
bus, which connects the CPU to RAM. The data are then decoded by a processing unit called
the instruction decoder that interprets and implements software instructions. From the
instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs
calculations and comparisons. Data may be stored by the ALU in temporary memory
locations called registers where it may be retrieved quickly. The ALU performs specific
operations such as addition, multiplication, and conditional tests on the data in its registers,
sending the resulting data back to RAM or storing it in another register for further use. The
control unit (CU) performs the different controlling activities of the CPU units. The different
units of the CPU communicate via the internal buses. There are three types of internal buses:
Address bus, Data bus, and the control bus. During this process, a unit called the program
counter keeps track of each successive instruction to make sure that the program instructions
are followed by the CPU in the correct order.
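As a rough illustration of the fetch-decode-execute cycle described above, the following
Python sketch models a CPU with a program counter and two registers. The instruction names,
register names, and tuple format are invented for illustration; a real CPU works on binary
machine code, not tuples.

```python
# A toy sketch of the fetch-decode-execute cycle. The instruction set
# and registers are invented for illustration only.

def run(program):
    registers = {"A": 0, "B": 0}   # temporary storage (registers)
    pc = 0                          # program counter
    while pc < len(program):
        instruction = program[pc]   # fetch the instruction at this address
        op, *args = instruction     # decode: split opcode from operands
        if op == "LOAD":            # execute: LOAD reg, value
            registers[args[0]] = args[1]
        elif op == "ADD":           # ADD dest, src  (dest = dest + src)
            registers[args[0]] += registers[args[1]]
        pc += 1                     # program counter moves to the next step
    return registers

# Load 2 into A, 3 into B, then add B into A.
print(run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B")]))
# -> {'A': 5, 'B': 3}
```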

Clock Pulses

The CPU is driven by one or more repetitive clock circuits that send a constant stream of
pulses throughout the CPU’s circuitry. The CPU uses these clock pulses to synchronize its
operations. The smallest increments of CPU work are completed between sequential clock
pulses. More complex tasks take several clock periods to complete. Clock pulses are
measured in hertz, or the number of pulses per second. For instance, a 100-megahertz (100-MHz)
processor has 100 million clock pulses passing through it per second. Clock pulses are a
measure of the speed of a processor.
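The relationship between clock frequency and clock period can be worked out directly; the
small Python sketch below does the conversion, using the 100-MHz figure from the text.

```python
# Clock frequency to clock period: a 100-MHz CPU completes 100 million
# clock pulses per second, so each pulse lasts 1/100,000,000 second.
def clock_period_ns(frequency_hz):
    return 1e9 / frequency_hz   # period expressed in nanoseconds

print(clock_period_ns(100_000_000))  # 100 MHz -> 10.0 ns per pulse
```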

2.1.4 Output Devices

Once the CPU has executed the program instruction, the program may request that the
information be communicated to an output device, such as a video display monitor or a flat
liquid crystal display. Other output devices are printers, overhead projectors, videocassette
recorders (VCRs), and speakers.

Input/Output Device is hardware that is used for both providing information to the computer
and receiving information from it. An input/output device thus transfers information in one of
two directions depending on the current situation. A disk drive is an example of an
input/output device. Some devices, called input devices, can be used only for input—for
example, a keyboard, a mouse, a light pen, and a joystick. Other devices, called output
devices, can be used only for output—for example, a printer and a monitor.
Most devices require the installation of software routines called device drivers, which allow
the computer to transmit and receive information to and from the device.

The following figure shows Personal Computer Components. A typical personal computer
has components to display and print information (monitor and laser printer); input commands
and data (keyboard and mouse); retrieve and store information (CD-ROM and disk drives);
and communicate with other computers (modem).

2.1.5 HARDWARE CONNECTIONS


To function, hardware requires physical connections that allow components to communicate
and interact. A bus provides a common interconnected system composed of a group of wires
or circuitry that coordinates and moves information between the internal parts of a computer.
A computer bus consists of two channels, one that the CPU uses to locate data, called the
address bus, and another to send the data to that address, called the data bus. A bus is
characterized by two features: how much information it can manipulate at one time, called
the bus width, and how quickly it can transfer these data.
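The two bus features named above, width and transfer speed, together determine how much data
the bus can move. The following Python sketch estimates this; the 32-bit width and 33 million
transfers per second are illustrative figures, not taken from the text.

```python
# Rough throughput of a bus: width (bits moved per transfer) times the
# transfer rate, converted to bytes. The figures are illustrative only.
def bus_throughput_bytes_per_s(width_bits, transfers_per_s):
    return width_bits * transfers_per_s // 8

# A 32-bit-wide bus performing 33 million transfers per second:
print(bus_throughput_bytes_per_s(32, 33_000_000))  # -> 132000000 bytes/s
```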

A serial connection is a wire or set of wires used to transfer information from the CPU to an
external device such as a mouse, keyboard, modem, scanner, and some types of printers. This
type of connection transfers only one piece of data at a time, and is therefore slow. The
advantage to using a serial connection is that it provides effective connections over long
distances.

A parallel connection uses multiple sets of wires to transfer blocks of information
simultaneously. Most scanners and printers use this type of connection. A parallel connection
is much faster than a serial connection, but it is limited to distances of less than 3 m (10 ft)
between the CPU and the external device.
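The speed difference between the two connection types can be sketched with a simple
calculation: at the same per-wire signalling rate, a parallel connection moves one bit per
wire per tick, while a serial connection moves one bit in total. The rates and data size
below are invented for illustration.

```python
# Serial vs. parallel transfer time at the same per-wire signalling rate.
def transfer_time_s(total_bits, wires, bits_per_wire_per_s):
    return total_bits / (wires * bits_per_wire_per_s)

data_bits = 8_000_000   # one megabyte of data (illustrative size)
print(transfer_time_s(data_bits, 1, 1_000_000))   # serial (1 wire): 8.0 s
print(transfer_time_s(data_bits, 8, 1_000_000))   # parallel (8 wires): 1.0 s
```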

2.2 Computer Software


Software, the computer program instructions that cause the hardware, the machines, to do
work. Software as a whole can be divided into a number of categories based on the types of
work done by programs. System software handles essential but often invisible chores, such as
maintaining disk files and managing the screen, whereas application software performs word
processing, database management, and the like for computer users.

2.2.1 System software


System software is further divided into operating systems and language software.

2.2.1.1 Operating Systems

WHAT IS AN OPERATING SYSTEM?

Operating System (OS), in computer science, the basic software that controls a computer. The
operating system has three major functions: It coordinates and manipulates computer
hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it
organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc,
digital video disc, and tape; and it manages hardware errors and the loss of data.

HOW AN OS WORKS
Operating systems control different computer processes, such as running a spreadsheet
program or accessing information from the computer's memory. One important process is
interpreting commands, enabling the user to communicate with the computer. Some
command interpreters are text oriented, requiring commands to be typed in or to be selected
via function keys on a keyboard. Other command interpreters use graphics and let the user
communicate by pointing and clicking on an icon, an on-screen picture that represents a
specific command. Beginners generally find graphically oriented interpreters easier to use,
but many experienced computer users prefer text-oriented command interpreters.

Operating systems are either single tasking or multitasking. The more primitive single-
tasking operating systems can run only one process at a time. For instance, when the
computer is printing a document, it cannot start another process or respond to new commands
until the printing is completed.

All modern operating systems are multitasking and can run several processes simultaneously.
In most computers, however, there is only one central processing unit (CPU; the
computational and control unit of the computer), so a multitasking OS creates the illusion of
several processes running simultaneously on the CPU. The most common mechanism used to
create this illusion is time-slice multitasking, whereby each process is run individually for a
fixed period of time. If the process is not completed within the allotted time, it is suspended
and another process is run. This exchanging of processes is called context switching. The OS
performs the “bookkeeping” that preserves a suspended process. It also has a mechanism,
called a scheduler, that determines which process will be run next. The scheduler runs short
processes quickly to minimize perceptible delay. The processes appear to run simultaneously
because the user's sense of time is much slower than the processing speed of the computer.
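The time-slice mechanism described above can be sketched in a few lines of Python. Each
process runs for one fixed quantum; if it is not finished, it is suspended and placed at the
back of the queue (a context switch). The process names and run times are made up for
illustration.

```python
# A minimal sketch of time-slice multitasking (round-robin scheduling).
from collections import deque

def round_robin(processes, quantum):
    queue = deque(processes)        # (name, remaining_time) pairs
    timeline = []                   # order in which processes get the CPU
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)
        remaining -= quantum        # run the process for one time slice
        if remaining > 0:           # not finished: suspend and re-queue it
            queue.append((name, remaining))
    return timeline

print(round_robin([("editor", 3), ("printer", 1), ("backup", 2)], 1))
# -> ['editor', 'printer', 'backup', 'editor', 'backup', 'editor']
```

The interleaved timeline is what creates the illusion of simultaneous execution described in
the text.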

Operating systems can use a technique known as virtual memory to run processes that require
more main memory than is actually available. To implement this technique, space on the hard
drive is used to mimic the extra memory needed. Accessing the hard drive is more time-
consuming than accessing main memory, however, so performance of the computer slows.
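A minimal sketch of the idea: only a few pages of a program fit in main memory at once, and
touching a page that is not resident forces a slow transfer from the hard drive (a page
fault). The FIFO eviction policy and the sizes used here are illustrative assumptions.

```python
# A sketch of virtual memory: counting page faults with FIFO eviction.
from collections import deque

def count_page_faults(page_accesses, num_frames):
    resident = deque()              # pages currently held in main memory
    faults = 0
    for page in page_accesses:
        if page not in resident:
            faults += 1             # page fault: fetch from the hard drive
            if len(resident) == num_frames:
                resident.popleft()  # evict the oldest page back to disk
            resident.append(page)
    return faults

# Four distinct pages cycled through three frames: every access faults,
# which is why heavy reliance on virtual memory slows the computer.
print(count_page_faults([1, 2, 3, 4, 1, 2, 3, 4], 3))  # -> 8
```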

CURRENT OPERATING SYSTEMS


1. MS-DOS
MS-DOS is an acronym for Microsoft Disk Operating System. MS-DOS, like other operating
systems, oversees such operations as disk input and output, video support, keyboard control,
and many internal functions related to program execution and file maintenance. MS-DOS is a
single-tasking, single-user operating system with a command-line interface; it is not a
graphical user interface. You are required to type commands at the DOS prompt to accomplish
tasks.

2. WINDOWS

Windows is a personal computer operating system sold by Microsoft Corporation that allows
users to enter commands with a point-and-click device, such as a mouse, instead of a
keyboard. An operating system is a set of programs that control the basic functions of a
computer. The Windows operating system provides users with a graphical user interface
(GUI), which allows them to manipulate small pictures, called icons, on the computer screen
to issue commands. Windows is the most widely used operating system in the world. It is an
extension of and replacement for Microsoft’s Disk Operating System (MS-DOS).

The Windows GUI is designed to be a natural, or intuitive, work environment for the user.
With Windows, the user can move a cursor around on the computer screen with a mouse. By
pointing the cursor at icons and clicking buttons on the mouse, the user can issue commands
to the computer to perform an action, such as starting a program, accessing a data file, or
copying a data file. Other commands can be reached through pull-down or click-on menu
items. The computer displays the active area in which the user is working as a window on the
computer screen. The currently active window may overlap with other previously active
windows that remain open on the screen. This type of GUI is said to include WIMP features:
windows, icons, menus, and pointing device (such as a mouse).

Computer scientists at the Xerox Corporation’s Palo Alto Research Center (PARC) invented
the GUI concept in the early 1970s, but this innovation was not an immediate commercial
success. In 1983 Apple Computer featured a GUI in its Lisa computer. This GUI was updated
and improved in its Macintosh computer, introduced in 1984.
Microsoft began its development of a GUI in 1983 as an extension of its MS-DOS operating
system. Microsoft’s Windows version 1.0 first appeared in 1985. In this version, the windows
were tiled, or presented next to each other rather than overlapping. Windows version 2.0,
introduced in 1987, was designed to resemble IBM’s OS/2 Presentation Manager, another
GUI operating system. Windows version 2.0 included the overlapping window feature. The
more powerful version 3.0 of Windows, introduced in 1990, and subsequent versions 3.1 and
3.11 rapidly made Windows the market leader in operating systems for personal computers,
in part because it was prepackaged on new personal computers. It also became the favored
platform for software development.

In 1993 Microsoft introduced Windows NT (New Technology). The Windows NT operating
system offers 32-bit multitasking, which gives a computer the ability to run several programs
simultaneously, or in parallel, at high speed. This operating system competes with IBM’s
OS/2 as a platform for the intensive, high-end, networked computing environments found in
many businesses.

In 1995 Microsoft released a new version of Windows for personal computers called
Windows 95. Windows 95 had a sleeker and simpler GUI than previous versions. It also
offered 32-bit processing, efficient multitasking, network connections, and Internet access.
Windows 98, released in 1998, improved upon Windows 95.

In 1996 Microsoft debuted Windows CE, a scaled-down version of the Microsoft Windows
platform designed for use with handheld personal computers. Windows 2000, released at the
end of 1999, combined Windows NT technology with the Windows 98 graphical user
interface. In 2001 Microsoft released a new operating system known as Windows XP, the
company’s first operating system for consumers that was not based on MS-DOS.

3. UNIX

UNIX is a powerful multi-user and multitasking operating system. Written in the C language,
UNIX can be installed on virtually any computer.
UNIX was originally developed by Ken Thompson and Dennis Ritchie at AT&T Bell
Laboratories in 1969 for use on minicomputers. In the early 1970s, many universities,
research institutions, and companies began to expand on and improve UNIX. These efforts
resulted in two main versions: BSD UNIX, a version developed at the University of
California at Berkeley, and System V, developed by AT&T and its collaborators.

Many companies developed and marketed their own versions of UNIX in subsequent years.
Variations of UNIX include AIX, a version of UNIX adapted by IBM to run on RISC-based
workstations; A/UX, a graphical version for the Apple Macintosh; XENIX OS, developed by
Microsoft Corporation for 16-bit microprocessors; SunOS, adapted and distributed by Sun
Microsystems, Inc.; Mach, a UNIX-compatible operating system for the NeXT computer; and
Linux, developed by Finnish computer engineer Linus Torvalds with collaborators
worldwide. Linux is an open source operating system. Linux is available for download free of
charge and distributed commercially by companies such as Red Hat, Inc.

UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among
academic computer users. Its popularity is due in large part to the growth of the
interconnected computer network known as the Internet. Software for the Internet was
initially designed for computers that ran UNIX. UNIX and its clones support multitasking
and multiple users. Its file system provides a simple means of organizing disk files and lets
users control access to their files. The commands in UNIX are not readily apparent, however,
and mastering the system is difficult. Consequently, although UNIX is popular for
professionals, it is not the operating system of choice for the general public.

Instead, windowing systems with graphical interfaces, such as Windows and the Macintosh
OS, which make computer technology more accessible, are widely used in personal
computers (PCs). However, graphical systems generally have the disadvantage of requiring
more hardware—such as faster CPUs, more memory, and higher-quality monitors—than do
command-oriented operating systems.

FUTURE TECHNOLOGIES
Operating systems continue to evolve. A recently developed type of OS called a distributed
operating system is designed for a connected, but independent, collection of computers that
share resources such as hard drives. In a distributed OS, a process can run on any computer in
the network (presumably a computer that is idle) to increase that process's performance. All
basic OS functions—such as maintaining file systems, ensuring reasonable behavior, and
recovering data in the event of a partial failure—become more complex in distributed
systems.

Research is also being conducted that would replace the keyboard with a means of using
voice or handwriting for input. Currently these types of input are imprecise because people
pronounce and write words very differently, making it difficult for a computer to recognize
the same input from different users. However, advances in this field have led to systems that
can recognize a small number of words spoken by a variety of people. In addition, software
has been developed that can be taught to recognize an individual's handwriting.

2.2.1.2 Language Software


Language, in computer science, artificial language used to write a sequence of instructions (a
computer program) that can be run by a computer. Similar to natural languages, such as
English, programming languages have a vocabulary, grammar, and syntax. However, natural
languages are not suited for programming computers because they are ambiguous, meaning
that their vocabulary and grammatical structure may be interpreted in multiple ways. The
languages used to program computers must have simple logical structures, and the rules for
their grammar, spelling, and punctuation must be precise.

Programming languages vary greatly in their sophistication and in their degree of versatility.
Some programming languages are written to address a particular kind of computing problem
or for use on a particular model of computer system. For instance, programming languages
such as Fortran and COBOL were written to solve certain general types of programming
problems—Fortran for scientific applications, and COBOL for business applications.
Although these languages were designed to address specific categories of computer problems,
they are highly portable, meaning that they may be used to program many types of
computers. Other languages, such as machine languages, are designed to be used by one
specific model of computer system, or even by one specific computer in certain research
applications. The most commonly used programming languages are highly portable and can
be used to effectively solve diverse types of computing problems. Languages like C,
PASCAL, and BASIC fall into this category.
Programming languages can be classified as either low-level languages or high-level
languages. Low-level programming languages, or machine languages, are the most basic type
of programming languages and can be understood directly by a computer. Machine languages
differ depending on the manufacturer and model of computer. High-level languages are
programming languages that must first be translated into a machine language before they can
be understood and processed by a computer. Examples of high-level languages are C, C++,
PASCAL, and Fortran. Assembly languages are intermediate languages that are very close
to machine language and do not have the level of linguistic sophistication exhibited by other
high-level languages, but must still be translated into machine language.

Machine Languages

In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a
computer can understand directly. An instruction in machine language generally tells the
computer four things: (1) where to find one or two numbers or simple pieces of data in the
main computer memory (Random Access Memory, or RAM), (2) a simple operation to
perform, such as adding the two numbers together, (3) where in the main memory to put the
result of this simple operation, and (4) where to find the next instruction to perform. While all
executable programs are eventually read by the computer in machine language, they are not
all programmed in machine language. It is extremely difficult to program directly in machine
language because the instructions are sequences of 1s and 0s. A typical instruction in a
machine language might read 10010 1100 1011 and mean add the contents of storage register
A to the contents of storage register B.

Assembly Language

Computer programmers use assembly languages to make machine-language programs easier
to write. In an assembly language, each statement corresponds roughly to one machine-language
instruction. An assembly language statement is composed with the aid of easy-to-remember
commands. The command to add the contents of storage register A to the contents of storage
register B might be written ADD B,A in a typical assembly language
statement. Assembly languages share certain features with machine languages. For instance,
it is possible to manipulate specific bits in both assembly and machine languages.
Programmers use assembly languages when it is important to minimize the time it takes to
run a program, because the translation from assembly language to machine language is
relatively simple. Assembly languages are also used when some part of the computer has to
be controlled directly, such as individual dots on a monitor or the flow of individual
characters to a printer.
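The translation from assembly language to machine language amounts to little more than a
table lookup, which is part of why it is relatively simple. The toy assembler below
illustrates this for the ADD B,A example; the bit patterns chosen for the opcodes and
registers are invented and do not correspond to any real CPU's encoding.

```python
# A toy assembler in the spirit of the ADD B,A example above.
# The opcode and register bit patterns are invented for illustration.
OPCODES = {"ADD": "1001", "LOAD": "1010"}
REGISTERS = {"A": "0001", "B": "0010"}

def assemble(statement):
    # "ADD B,A" -> opcode bits followed by bits for each operand register
    mnemonic, operands = statement.split(maxsplit=1)
    bits = [OPCODES[mnemonic]]
    bits += [REGISTERS[r.strip()] for r in operands.split(",")]
    return " ".join(bits)

print(assemble("ADD B,A"))  # -> "1001 0010 0001"
```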

High-Level Languages

High-level languages are relatively sophisticated sets of statements utilizing words and
syntax from human language. They are more similar to normal human languages than
assembly or machine languages and are therefore easier to use for writing complicated
programs. These programming languages allow larger and more complicated programs to be
developed faster. However, high-level languages must be translated into machine language
by another program called a compiler before a computer can understand them. For this
reason, programs written in a high-level language may take longer to execute and use up
more memory than programs written in an assembly language.

Compiler, a computer program that translates source code, the instructions in a program
written by a software engineer, into object code, those same instructions written in a
language the computer’s central processing unit (CPU) can read and interpret. Software
engineers write source code using high-level programming languages that people can
understand. Computers cannot directly execute source code, but need a compiler to translate
these instructions into a low-level language called machine code.

Compilers collect and reorganize (compile) all the instructions in a given set of source code
to produce object code. Object code is often the same as or similar to a computer’s machine
code. If the object code is the same as the machine language, the computer can run the
program immediately after the compiler produces its translation. If the object code is not in
machine language, other programs—such as assemblers, binders, linkers, and loaders—finish
the translation.

Most programming languages, such as C, C++, and Fortran, use compilers, but some, such as
BASIC and LISP, use interpreters. An interpreter analyzes and executes each line of
source code one-by-one. Interpreters produce initial results faster than compilers, but the
source code must be re-interpreted with every use and interpreted languages are usually not
as sophisticated as compiled languages.
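The line-by-line behavior of an interpreter can be sketched as follows. The tiny
assignment-only language is invented for illustration; note that every line is re-analyzed on
each run, which is exactly the overhead a compiler avoids.

```python
# A minimal sketch of interpretation: each line of "source code" is
# analyzed and executed immediately, one line at a time.
def interpret(source):
    variables = {}
    for line in source.splitlines():
        name, expression = line.split("=")
        # Each line is re-analyzed on every run: the per-use cost that
        # makes interpreted programs slower than compiled ones.
        variables[name.strip()] = eval(expression, {}, dict(variables))
    return variables

print(interpret("x = 2\ny = x * 3"))  # -> {'x': 2, 'y': 6}
```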

Most computer languages use different versions of compilers for different types of computers
or operating systems, so one language may have different compilers for personal computers
(PC) and Apple Macintosh computers. Many different manufacturers often produce versions
of the same programming language, so compilers for a language may vary between
manufacturers.

Consumer software programs are compiled and translated into machine language before they
are sold. Some manufacturers provide source code, but usually only programmers find the
source code useful. Thus programs bought off the shelf can be executed, but usually their
source code cannot be read or modified.

2.2.2 Application Software


Application software is developed to meet specific needs of computer users. It needs an
operating system to operate properly. Unlike system software, application software cannot
manage the hardware directly. Such software can be developed in any programming language.
Word processors, spreadsheet manipulators, and database management systems are categorized
under application software.

Application, in computer science, a computer program designed to help people perform a
certain type of work. An application thus differs from an operating system (which runs a
computer), a utility (which performs maintenance or general-purpose chores), and a language
(with which computer programs are created). Depending on the work for which it was
designed, an application can manipulate text, numbers, graphics, or a combination of these
elements. Some application packages offer considerable computing power by focusing on a
single task, such as word processing; others, called integrated software, offer somewhat less
power but include several applications, such as a word processor, a spreadsheet, and a
database program.
CHAPTER THREE

Overview of Data Communication and Computer Networks

3.1 Data communication

3.1.1 Modes of Data Transmission

Simplex

In this mode of data transmission one device is always sender and the other is always
receiver.
Thus data is transmitted in one direction only from the sender to the receiver.

Sender -> Receiver

Half Duplex

We find half duplex transmission when data is transmitted in both directions but only one
device sends data at a time. In this way of communication, a sender first sends data; the
receiver acknowledges it and sends its response back at a different time. At one time a
device can be a sender, and at another time it can be a receiver. Half duplex is thus
transmission of data in both directions at different times.

Sender/Receiver <-> Sender/Receiver (one direction at a time)
Full Duplex
In this mode of transmission data can be transmitted in both directions at the same time. The
two devices can send data to each other simultaneously. This mode of transmission is
important when fast data transfer is required.

3.1.2 Types of data Transmission

Synchronous Transmission
In the synchronous way of transmission, data is transferred at equal intervals of time. The
receiver knows in advance when to receive data. This way of communication needs sophisticated
timing tools. Here the time interval between successive packets is equal.

Asynchronous communication

A form of data transmission in which information is sent and received at irregular intervals,
one character at a time. However, the time interval between successive bits of a character is
equal. Because data is received at irregular intervals, the receiving modem must be signaled
to let it know when the data bits of a character begin and end. This is done by means of start
and stop bits.
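The start and stop bits can be illustrated with a short Python sketch: each character is
framed with a start bit (0) and a stop bit (1) around its data bits, so the receiver can tell
where a character begins and ends. The 8-data-bit format is an assumption for this example.

```python
# A sketch of asynchronous framing with start and stop bits.
def frame(character):
    data_bits = format(ord(character), "08b")   # 8 data bits assumed
    return "0" + data_bits + "1"                # start bit + data + stop bit

def unframe(frame_bits):
    assert frame_bits[0] == "0" and frame_bits[-1] == "1"
    return chr(int(frame_bits[1:-1], 2))        # recover the character

framed = frame("A")
print(framed)            # -> 0010000011  (start, 01000001, stop)
print(unframe(framed))   # -> A
```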

3.2 Computer Networks

A computer network is the set of techniques, physical connections, and computer programs used
to link two or more computers. Network users are able to share files, printers, and other
resources; send electronic messages; and run programs on other computers.

A network has three layers of components: application software, network software, and
network hardware. Application software consists of computer programs that interface with
network users and permit the sharing of information, such as files, graphics, and video, and
resources, such as printers and disks. One type of application software is called client-server.
Client computers send requests for information or requests to use resources to other
computers, called servers, which control data and applications. Another type of application
software is called peer-to-peer. In a peer-to-peer network, computers send messages and
requests directly to one another without a server intermediary.

Network software consists of computer programs that establish protocols, or rules, for
computers to talk to one another. These protocols are carried out by sending and receiving
formatted units of data called packets. Protocols make logical connections between
network applications, direct the movement of packets through the physical network, and
minimize the possibility of collisions between packets sent at the same time.

Network hardware is made up of the physical components that connect computers. Two
important components are the transmission media that carry the computer's signals, typically
on wires or fiber-optic cables, and the network adapter, which accesses the physical media
that link computers, receives packets from network software, and transmits instructions and
requests to other computers. Transmitted information is in the form of binary digits, or bits
(1s and 0s), which the computer's electronic circuitry can process.

NETWORK CONNECTIONS

A network has two types of connections: physical connections that let computers directly
transmit and receive signals and logical, or virtual, connections that allow computer
applications, such as word processors, to exchange information. Physical connections are
defined by the medium used to carry the signal, the geometric arrangement of the computers
(topology), and the method used to share information. Logical connections are created by
network protocols and allow data sharing between applications on different types of
computers, such as an Apple Macintosh and an International Business Machines Corporation
(IBM) personal computer (PC), in a network. Some logical connections use client-server
application software and are primarily for file and printer sharing. The Transmission Control
Protocol/Internet Protocol (TCP/IP) suite, originally developed by the United States
Department of Defense, is the set of logical connections used by the Internet, the worldwide
consortium of computer networks. TCP/IP, based on peer-to-peer application software,
creates a connection between any two computers.
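A minimal sketch of such a logical connection between two programs, here both running on one
machine, using the standard Python socket interface: a server echoes back whatever a client
sends over TCP/IP. The port choice and the message are arbitrary for this example.

```python
# A minimal TCP/IP connection: an echo server and a client on one machine.
import socket
import threading

def echo_server(server_socket):
    connection, _ = server_socket.accept()         # wait for one client
    with connection:
        connection.sendall(connection.recv(1024))  # echo the data back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0 lets the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"hello network")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # -> hello network
```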
3.2.1 Network Topology

 Bus topology

Bus Network, in computer science, a topology (configuration) for a local area network in
which all nodes are connected to a main communications line (bus). On a bus network, each
node monitors activity on the line. Messages are detected by all nodes but are accepted only
by the node(s) to which they are addressed. Because a bus network relies on a common data
“highway,” a malfunctioning node simply ceases to communicate; it doesn't disrupt operation
as it might on a ring network, in which messages are passed from one node to the next. To
avoid collisions that occur when two or more nodes try to use the line at the same time, bus
networks commonly rely on collision detection or token passing to regulate traffic. The end
points of cables in a bus topology should be terminated with terminators; otherwise signal
bounce will occur and affect the whole network. In a bus topology only one device can send
data at a time, so as the number of devices increases, the performance of the network falls.

 Star Topology

Star Network is a local area network in which each device (node) is connected to a
central computer in a star-shaped configuration (topology); commonly, a network
consisting of a central computer (the hub) surrounded by terminals. In a star network,
messages pass directly from a node to the central computer, which handles any
further routing (as to another node) that might be necessary. A star network is reliable
in the sense that a node can fail without affecting any other node on the network. Its
weakness, however, is that failure of the central computer results in a shutdown of
the entire network. And because each node is individually wired to the hub, cabling
costs can be high.

Ring Topology

Ring Network, a local area network in which devices (nodes) are connected in a closed loop,
or ring. Messages in a ring network pass in one direction, from node to node. As a message
travels around the ring, each node examines the destination address attached to the message.
If the address is the same as the address assigned to the node, the node accepts the message;
otherwise, it regenerates the signal and passes the message along to the next node in the
circle. Such regeneration allows a ring network to cover larger distances than star and bus
networks. Because of the closed loop, however, new nodes can be difficult to add. And if a
node goes down in a token ring, the whole network will cease functioning.
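The node-to-node passing rule described above can be sketched as a small simulation: the
message travels in one direction, and each node either accepts it (address match) or
regenerates it onward. The node addresses are invented for illustration.

```python
# A sketch of one-directional message passing around a ring network.
def ring_deliver(nodes, source, destination):
    hops = []
    position = nodes.index(source)
    while True:
        position = (position + 1) % len(nodes)   # pass to the next node
        hops.append(nodes[position])
        if nodes[position] == destination:       # destination address matches
            return hops                          # message accepted here
        # otherwise the node regenerates the signal and passes it on

print(ring_deliver(["A", "B", "C", "D"], "B", "A"))  # -> ['C', 'D', 'A']
```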

3.2.2 Media Of Networks


The medium used to transmit information limits the speed of the network, the effective
distance between computers, and the network topology. Copper wires and coaxial cable
provide transmission speeds of a few thousand bits per second for long distances and about
100 million bits per second (Mbps) for short distances. Optical fibers carry 100 million to 1
billion bits of information per second over long distances. It is also possible to use wireless
communication between computers. There are different issues that should be considered
during cable selection for networks.

One should consider the following before deciding the type of cable to be used for networks.
 The cost of the cable
 The attenuation (how long a segment of cable can transfer data without the signal
being weakened)
 The bandwidth (how many bits can be transferred per second, i.e. its speed of data
transfer)
 Its resistance to different problems

Coaxial cable

Coaxial cable is a common type of network cable. The central copper wire carries the data;
the wire mesh shields it from external electromagnetic interference; the insulator between
the central copper wire and the mesh separates the two conductors; and the external plastic
sheath protects the cable from physical damage. Coaxial cable is a relatively cheap type of
cable. There are two types of coaxial cable:

Thinnet, also called 10Base2.


This type of coaxial cable has a thinner central copper wire. It can transmit data at 10
megabits per second for up to 185 meters, a figure rounded up to 200 meters; that is why it is
called 10Base2: 10 refers to 10 Mbps and 2 to 200 meters. Thinnet is more flexible and easier
to work with.

Thicknet, also called 10Base5.


This type has a thicker central copper wire and can transfer data up to 500 meters at a speed
of 10 megabits per second. The cable is somewhat difficult to bend and hence less easy to work
with, but it can carry data farther than thinnet.
Twisted pair cable
Pairs of wires are twisted together and bundled in a plastic jacket to form twisted pair
cable. The twisting reduces electromagnetic interference by increasing the chance that
induced signals cancel each other out. One problem commonly seen in twisted pair cable is
crosstalk: the mixing of signals from two wires when the insulation between them is damaged.
There are two types of twisted pair cable:

Shielded Twisted pair cable (STP)


This type of twisted pair cable has an additional wire mesh beneath the external plastic
sheath. The mesh reduces electromagnetic interference, so STP can carry data over longer
distances.

Unshielded twisted pair (UTP)


Unlike the shielded type, UTP does not have a wire mesh to reduce interference, so it is more
susceptible to interference and transmits data over shorter distances than STP. UTP cabling is
also called 10BaseT, where T stands for twisted pair; it can transmit data at a speed of 10
Mbps for about 100 meters.

Fiber-Optic Cables (FO)

Fiber-optic cables use specially treated glass that can transmit signals in the form of pulsed
beams of laser light. Fiber-optic cables carry many times more information than copper wires
can, and they can transmit several television channels or thousands of telephone
conversations at the same time. Fiber-optic technology has replaced copper wires for most
transoceanic routes and in areas where large amounts of data are sent. This technology uses
laser transmitters to send pulses of light via hair-thin strands of specially prepared glass
fibers. New improvements promise cables that can transmit millions of telephone calls over a
single fiber. Already fiber optic cables provide the high capacity, "backbone" links necessary
to carry the enormous and growing volume of telecommunications and Internet traffic. Fiber
optic cable can carry data for about 60 miles at gigabit-per-second bandwidths, largely
because it is immune to electromagnetic interference.

3.2.3 Types Of Networks Based On The Geographic Area Coverage

3.2.3.1 Local Area Network

Local Area Network (LAN), collection of interconnected computers that can share data,
applications, and resources, such as printers. Computers in a LAN are separated by distances
of up to a few kilometers and are typically used in offices or across university campuses. A
LAN enables the fast and effective transfer of information within a group of users and
reduces operational costs. LANs are the basic building blocks of other, larger types of
networks. To establish a LAN you need a minimum of two computers. A LAN covers the smallest
geographic area of the network types: it may span a room, a building, or a campus.

3.2.3.2 Metropolitan Area Network (MAN)

This type of network covers a larger geographic area than a LAN. The communicating LANs in a
city can form a MAN; thus a MAN is not limited to a campus but can cover a wider area such as
a city. The communication medium in a MAN can be cable or wireless. In a LAN, because the
computers are near each other, communication may be limited to cable. In a MAN, however, a
city presents many obstacles to cable installation, so wireless communication is also common.

3.2.3.3 Wide Area Network (WAN)


Wide area networks (WANs) are networks that span large geographical areas. Computers can
connect to these networks to use facilities in another city or country. For example, a person in
Los Angeles can browse through the computerized archives of the Library of Congress in
Washington, D.C. The largest WAN is the Internet, a global consortium of networks linked
by common communication programs and protocols (a set of established standards that
enable computers to communicate with each other). The Internet is a mammoth resource of
data, programs, and utilities. American computer scientist Vinton Cerf was largely
responsible for creating the Internet in 1973 as part of the United States Department of
Defense Advanced Research Projects Agency (DARPA). In 1984 the development of Internet
technology was turned over to private, government, and scientific agencies. The World Wide
Web, developed in the 1980s by British physicist Timothy Berners-Lee, is a system of
information resources accessed primarily through the Internet. Users can obtain a variety of
information in the form of text, graphics, sounds, or video. These data are extensively cross-
indexed, enabling users to browse (transfer their attention from one information site to
another) via buttons, highlighted text, or sophisticated searching software known as search
engines.

Internet Topology
The Internet and the Web are each a series of interconnected computer networks. Personal
computers or workstations are connected to a Local Area Network (LAN) either by a dial-up
connection through a modem and standard phone line, or by being directly wired into the
LAN. Other modes of data transmission that allow for connection to a network include T-1
connections and dedicated lines. Bridges and hubs link multiple networks to each other.
Routers transmit data through networks and determine the best path of transmission.

The different LANs and MANs in countries and continents combine to form the WAN. The wide
area network is very important for sharing resources among individuals and organizations in
different countries.
What is the World Wide Web?
World Wide Web (WWW), computer-based network of information resources that combines
text and multimedia. The information on the World Wide Web can be accessed and searched
through the Internet, a global computer network. The World Wide Web is often referred to
simply as “the Web.”

Browser, a program, such as Microsoft Corporation’s Internet Explorer, that
allows a computer to display documents containing text, graphics, photographs, sounds,
videos, and animations. Most browsers are used to view information on the World Wide Web
or on intranets of companies or organizations. Browsers have various buttons that allow users
to print out the contents of a Web site, move backward or forward between sites already
accessed, and use a search engine to find information.

[Figure: a website displayed in Microsoft Internet Explorer, with the protocol, domain name,
and domain extension labeled in the address bar.]

HOW THE WEB WORKS

To access the Web, a user must have a computer connected to the Internet and appropriate
software. The connection between the user's computer and the Internet can consist of a
permanent, dedicated connection or a temporary, dial-up connection. A dial-up connection
uses a modem to send data over the telephone system to another modem. It offers the lowest
cost but requires the user to wait for the connection to be established each time the modem is
used. A permanent connection uses a technology such as Asymmetric Digital Subscriber Line
(ADSL, also known as DSL), a cable modem, or a dedicated leased circuit. It remains in
place and is ready to use at all times. Permanent Internet connections cost more but offer
higher capacity—that is, they can send more data at a faster speed.

Two pieces of software are needed to access the Web: (1) basic communication software that
a computer uses to transfer data across the Internet and (2) a Web application program known
as a browser that can contact a Web site to obtain and display information. Basic
communication software, which is usually built into the computer's operating system, allows
the computer to interact with the Internet. The software follows a set of protocol standards
that are collectively known as TCP/IP (Transmission Control Protocol/Internet Protocol).
Because it is built into the computer's operating system, TCP/IP software remains hidden
from users. Application programs that use the Internet invoke the software automatically.

The second piece of software needed for Web access consists of an application program
known as a Web browser. Unlike basic communication software, a browser is directly visible
to the user. To access the Web, the user must invoke the browser and enter a request. The
browser then acts as a client. The browser contacts a Web server, obtains the requested
information, and displays the information for the user.

Information on the Web is divided into pages, each of which is assigned a short identification
string that is known as a Uniform Resource Locator (URL). A URL encodes three pieces of
information: the protocol a browser should use to obtain the item, the name of a computer on
which the item is located, including its domain name, and the name of the item. For example,
the URL http://encarta.msn.com/reference specifies that a browser should use the Hypertext
Transfer Protocol (HTTP) to obtain the page, that the page can be obtained from a server
running on the computer “encarta.msn.com,” and that the page is named “reference.” The
“.com” is the domain name, an abbreviation for commercial, signifying that the site is
operated by a commercial or for-profit business. Many other domain names exist,
including .edu for Web sites established by educational institutions and .org for nonprofit
organizations.

In 2001 many other unique domain names were created. They comprised .info for
informational sites, .biz for businesses, .name for individuals to register their name for a Web
site or for an e-mail address, .museum for museums, .aero for the aviation industry, .coop for
business cooperatives such as credit unions and electric coops, and .pro for professionals such
as accountants, lawyers, and physicians. As of March 2002, all of these domain name suffixes
were operational, with the exception of .pro.

Only the computer name is required in a URL. If the protocol is omitted, a browser assumes
“http://,” and if the name of an item is omitted, the server chooses a page to send. Thus, the
URL encarta.msn.com, which consists only of a computer name, is also valid.

Before it can obtain information, a browser must be given a URL. A user can enter the URL
manually or click on a selectable link. In each case, once it has been given a URL, the
browser uses the URL to obtain a new page, which it then displays for the user. The URL
associated with a selectable link is not usually visible because the browser does not display
the URL for the user. Instead, to indicate that an item is selectable, the browser changes the
color of the item on the screen and keeps the URL associated with the link hidden. When a
user clicks on an item that corresponds to a selectable link, the browser consults the hidden
information to find the appropriate URL, which the browser then follows to the selected page.
Because a link can point to any page in the Web, the links are known as hyperlinks. See also
Hypermedia.

When a browser uses a URL to obtain a page, the information may be in one of many forms,
including text, a graphical image, video, or audio. Some Web pages are known as active
pages because the page contains a miniature computer program called a script or applet (a
small application program). When a script or applet arrives, the browser runs the program.
For example, a script can make images appear to move on the user's screen or can allow a
user to interact with a mouse, keyboard, or microphone. Active pages allow users to play
games on the Web, search databases, or perform virtual scientific experiments. Active pages
are also used to generate moving advertisements, such as a banner that keeps changing or a
logo that appears to rotate.

The codes that tell the browser on the client computer how to display a Web document
correspond to a set of rules called Hypertext Markup Language (HTML). An HTML
document consists of text with special instructions called tags, which are inserted to tell the
browser how to display the text. The HTML language specifies the exact rules for a
document, including the meaning of each tag. Thus, a person who creates an HTML page is
responsible for inserting tags that cause the browser to display the page in the desired form.
Not all Web pages use HTML. Graphics images are usually encoded using the Graphics
Interchange Format (GIF) or Joint Photographic Experts Group (JPEG) standards. Active
pages are written in a computer programming language such as ECMA Script or Java.
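A browser reads an HTML document as text interleaved with tags. The following sketch uses Python's standard html.parser module to list the tags in a tiny hand-made page; the `TagLister` class name and the sample page are illustrative, not part of any real browser.

```python
# List the start tags a browser-style parser encounters in an HTML document.

from html.parser import HTMLParser

class TagLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        # Called once for each opening tag, in document order.
        self.tags.append(tag)

page = "<html><body><h1>Hello</h1><p>This is <b>bold</b> text.</p></body></html>"
lister = TagLister()
lister.feed(page)
print(lister.tags)   # ['html', 'body', 'h1', 'p', 'b']
```

Each tag is an instruction to the browser (here `h1` for a heading, `b` for bold); the text between the tags is what the user actually reads.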

What is a website?

Web Site, file of information located on a server connected to the World Wide Web (WWW).
The WWW is a set of protocols and software that allows the global computer network called
the Internet to display multimedia documents. Web sites may include text, photographs,
illustrations, video, music, or computer programs. They also often include links to other sites
in the form of hypertext, highlighted or colored text that the user can click on with their
mouse, instructing their computer to jump to the new site.

Every web site has a specific address on the WWW, called a Uniform Resource Locator
(URL). These addresses end in extensions that indicate the type of organization sponsoring
the web site, for example, .gov for government agencies, .edu for academic institutions,
and .com for commercial enterprises. The user’s computer must be connected to the Internet
and have a special software program called a browser to retrieve and read information from a
web site. Examples of browsers include Navigator from the Netscape Communications
Corporation and Explorer from the Microsoft Corporation.
The content presented on a web site usually contains hypertext and icons, pictures that also
serve as links to other sites. By clicking on the hypertext or icons with their mouse, users
instruct their browser program to connect to the web site specified by the URL contained in
the hypertext link. These links are embedded in the web site through the use of Hypertext
Markup Language (HTML), a special language that encodes the links with the correct URL.

Web sites generally offer an appearance that resembles the graphical user interfaces (GUI) of
Microsoft’s Windows operating system, Apple’s Macintosh operating system, and other
graphics based operating systems. They may include scroll bars, menus, buttons, icons, and
toolbars, all of which can be activated by a mouse or other input device.

To find a web site, a user can consult an Internet reference guide or directory, or use one of
the many freely available search engines, such as WebCrawler from America Online
Incorporated. These engines are search and retrieval programs, of varying sophistication, that
ask the user to fill out a form before executing a search of the WWW for the requested
information. The user can also create a list of the URLs of frequently visited web sites. Such
a list helps a user recall a URL and easily access the desired web site. Web sites are easily
modified and updated, so the content of many sites changes frequently. A single website
consists of several pages. The first page of the website is called the home page. You can
click on hyperlinks to display the other pages, or enter their addresses as URLs.
Chapter Four

Computer Security

Computer security is the set of techniques developed to safeguard information and information
systems stored on computers. Potential threats include the destruction of computer hardware
and software and the loss, modification, theft, unauthorized use, observation, or disclosure
of computer data.

Computers and the information they contain are often considered confidential systems
because their use is typically restricted to a limited number of users. This confidentiality can
be compromised in a variety of ways.
For example, people who spread computer viruses and worms can harm computers and
computer data.

4.1 Malicious codes

4.1.1 Viruses

A virus is a self-duplicating computer program that interferes with a computer’s hardware or
operating system (the basic software that runs the computer). Viruses are designed to
replicate themselves while avoiding detection. Like any other computer program, a virus must
be executed for it to function—that is, it must be located in the computer's memory, and the
computer must then follow the virus's instructions. These instructions are called the payload
of the virus. The payload may disrupt or change data files, display an irrelevant or unwanted
message, or cause the operating system to malfunction.

How Infections Occur


Computer viruses activate when the instructions—or executable code—that run programs are
opened. Once a virus is active, it may replicate by various means and try to infect the
computer’s files or the operating system. For example, it may copy parts of itself to floppy
disks, to the computer’s hard drive, into legitimate computer programs, or it may attach itself
to e-mail messages and spread across computer networks by infecting other shared drives.
Infection is much more frequent in PCs than in professional mainframe systems because
programs on PCs are exchanged primarily by means of floppy disks, e-mail, or over
unregulated computer networks.

Viruses operate, replicate, and deliver their payloads only when they are run. Therefore, a
computer that is simply attached to an infected network, or that downloads an infected
program, will not necessarily become infected. Typically a computer user is not likely to
knowingly run potentially harmful computer code. However, viruses often trick the
computer's operating system or the computer user into running the viral program.

Some viruses have the ability to attach themselves to otherwise legitimate programs. This
attachment may occur when the legitimate program is created, opened, or modified. When
that program is run, so is the virus. Viruses can also reside on portions of the hard disk or
floppy disk that load and run the operating system when the computer is started, and such
viruses thereby are run automatically. In computer networks, some viruses hide in the
software that allows the user to log on (gain access to) the system.

With the widespread use of e-mail and the Internet, viruses can spread quickly. Viruses
attached to e-mail messages can infect an entire local network in minutes.

Types of Viruses

There are five categories of viruses: parasitic or file viruses, bootstrap sector, multi-partite,
macro, and script viruses.

Parasitic or file viruses infect executable files or programs in the computer. These files are
often identified by the extension .exe in the name of the computer file. File viruses leave the
contents of the host program unchanged but attach to the host in such a way that the virus
code is run first. These viruses can be either direct-action or resident. A direct-action virus
selects one or more programs to infect each time it is executed. A resident virus hides in the
computer's memory and infects a particular program when that program is executed.

Bootstrap-sector viruses reside on the first portion of the hard disk or floppy disk, known as
the boot sector. These viruses replace either the programs that store information about the
disk's contents or the programs that start the computer. Typically, these viruses spread by
means of the physical exchange of floppy disks.

Multi-partite viruses combine the abilities of the parasitic and the bootstrap-sector viruses,
and so are able to infect either files or boot sectors. These types of viruses can spread if a
computer user boots from an infected diskette or accesses infected files.

Other viruses infect programs that contain powerful macro languages (programming
languages that let the user create new features and utilities). These viruses, called macro
viruses, are written in macro languages and automatically execute when the legitimate
program is opened.

Script viruses are written in script programming languages, such as VBScript (Visual Basic
Script) and JavaScript. These script languages can be seen as a special kind of macro
language and are even more powerful because most are closely related to the operating
system environment. The "ILOVEYOU" virus, which appeared in 2000 and infected an
estimated 1 in 5 personal computers, is a famous example of a script virus.

4.1.2 Worms

Worm is a program that propagates itself across computers, usually by spawning copies of
itself in each computer's memory. A worm might duplicate itself in one computer so often
that it causes the computer to crash. Sometimes written in separate “segments,” a worm is
introduced surreptitiously into a host system either for “fun” or with intent to damage or
destroy information. The term comes from a science-fiction novel and has generally been
superseded by the term virus. Worms can form segments across a network and damage the
network by heavily consuming its resources, such as memory. The segments of a worm across a
network can communicate with one another to strengthen the damage they cause.

4.1.3 Trojan Horses


There are other harmful computer programs that can be part of a virus but are not considered
viruses because they do not have the ability to replicate. These programs fall into three
categories: Trojan horses, logic bombs, and deliberately harmful or malicious software
programs that run within Web browsers, an application program such as Internet Explorer
and Netscape that displays Web sites.

A Trojan horse is a program that pretends to be something else. A Trojan horse may appear to
be something interesting and harmless, such as a game, but when it runs it may have harmful
effects. The term comes from the ancient Greek legend of the Trojan horse, recounted in
Homer’s Odyssey and Virgil’s Aeneid.

4.1.4 Bombs
A bomb infects a computer’s memory, but unlike a virus, it does not replicate itself. A logic
bomb delivers its instructions when it is triggered by a specific condition, such as when a
particular date or time is reached or when a combination of letters is typed on a keyboard. A
logic bomb has the ability to erase a hard drive or delete certain files.

Malicious software programs that run within a Web browser often appear in Java applets and
ActiveX controls. Although these applets and controls improve the usefulness of Web sites,
they also increase a vandal’s ability to interfere with unprotected systems. Because those
controls and applets require that certain components be downloaded to a user’s personal
computer (PC), activating an applet or control might actually download malicious code.

4.2 Techniques to Reduce Security problems


4.2.1 Backup

Storing backup copies of software and data and having backup computer and communication
capabilities are important basic safeguards because the data can then be restored if it was
altered or destroyed by a computer crime or accident. Computer data should be backed up
frequently and should be stored nearby in secure locations in case of damage at the primary
site. Transporting sensitive data to storage locations should also be done securely.
4.2.2 Encryption

Another technique to protect confidential information is encryption: the process of
converting messages or data into a form that cannot be read without decrypting or
deciphering it. The root of the word encryption—crypt—comes from the Greek word kryptos,
meaning “hidden” or “secret.”
Computer users can scramble information to prevent unauthorized users from accessing it.
Authorized users can unscramble the information when needed by using a secret code called
a key. Without the key the scrambled information would be impossible or very difficult to
unscramble. A more complex form of encryption uses two keys, called the public key and the
private key, and a system of double encryption. Each participant possesses a secret, private
key and a public key that is known to potential recipients. Both keys are used to encrypt, and
matching keys are used to decrypt the message. However, the advantage over the single-key
method lies with the private keys, which are never shared and so cannot be intercepted. The
public key verifies that the sender is the one who transmitted it. The keys are modified
periodically, further hampering unauthorized unscrambling and making the encrypted
information more difficult to decipher.
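The single-key idea can be illustrated with a deliberately simple XOR scrambler in Python. This is a toy for illustration only: real systems use vetted algorithms such as AES, and the two-key (public/private) scheme involves far more mathematics than shown here.

```python
# Toy single-key scrambler: the same secret key both scrambles and
# unscrambles the data, as in the single-key scheme described above.
# NOT secure -- for illustration only.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"transfer 100 birr"
key = b"secret"

scrambled = xor_cipher(message, key)
restored = xor_cipher(scrambled, key)   # applying the key again restores it

print(scrambled != message)   # True: unreadable without the key
print(restored == message)    # True
```

Anyone who intercepts `scrambled` but lacks `key` sees only gibberish, which is exactly the property the text describes; the weakness of a single-key scheme is that the key itself must somehow be shared safely.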

4.2.3 Approved users

Another technique to help prevent abuse and misuse of computer data is to limit the use of
computers and data files to approved persons. Security software can verify the identity of
computer users and limit their privileges to use, view, and alter files. The software also
securely records their actions to establish accountability. Military organizations give access
rights to classified, confidential, secret, or top-secret information according to the
corresponding security clearance level of the user. Other types of organizations also classify
information and specify different degrees of protection.

4.2.4 PASSWORDS

Passwords are confidential sequences of characters that allow approved persons to make use
of specified computers, software, or information. To be effective, passwords must be difficult
to guess and should not be found in dictionaries. Effective passwords contain a variety of
characters and symbols that are not part of the alphabet. To thwart imposters, computer
systems usually limit the number of attempts and restrict the time it takes to enter the correct
password.
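The password rules above can be sketched as a simple checker. The word list and the thresholds here are illustrative assumptions, not a real dictionary or an organizational policy:

```python
# Sketch of the password rules in the text: minimum length, mixed character
# classes, and absence from a (tiny, made-up) dictionary of common words.

import string

COMMON_WORDS = {"password", "welcome", "letmein", "secret"}

def is_strong(password: str) -> bool:
    if len(password) < 8:
        return False
    if password.lower() in COMMON_WORDS:
        return False
    has_letter = any(c in string.ascii_letters for c in password)
    has_digit = any(c in string.digits for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return has_letter and has_digit and has_symbol

print(is_strong("password"))     # False: found in the dictionary
print(is_strong("xX7!qp9#Lk"))   # True: long, mixed characters
```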

A more secure method is to require possession and use of tamper-resistant plastic cards with
microprocessor chips, known as “smart cards,” which contain a stored password that
automatically changes after each use. When a user logs on, the computer reads the card's
password, as well as another password entered by the user, and matches these two
respectively to an identical card password generated by the computer and the user's password
stored in the computer in encrypted form. Use of passwords and "smart cards" is beginning to
be reinforced by biometrics, identification methods that use unique personal characteristics,
such as fingerprints, retinal patterns, facial characteristics, or voice recordings.

4.2.5 FIREWALLS

Computers connected to communication networks, such as the Internet, are particularly
vulnerable to electronic attack because so many people have access to them. Using firewall
computers or software placed between the networked computers and the network can protect
these computers. The firewall examines, filters, and reports on all information passing
through the network to ensure its appropriateness. These functions help prevent saturation of
input capabilities that otherwise might deny usage to legitimate users, and they ensure that
information received from an outside source is expected and does not contain computer
viruses.

4.2.6 Intrusion Detection Systems

Security software called intrusion detection systems may be used in computers to detect
unusual and suspicious activity and, in some cases, stop a variety of harmful actions by
authorized or unauthorized persons. Abuse and misuse of sensitive system and application
programs and data, such as password, inventory, financial, engineering, and personnel files,
can be detected by these systems.
4.2.7 Application Safeguards

The most serious threats to the integrity and authenticity of computer information come from
those who have been entrusted with usage privileges and yet commit computer fraud. For
example, authorized persons may secretly transfer money in financial networks, alter credit
histories, sabotage information, or commit bill payment or payroll fraud. Modifying,
removing, or misrepresenting existing data threatens the integrity and authenticity of
computer information. For example, omitting sections of a bad credit history so that only the
good credit history remains violates the integrity of the document. Entering false data to
complete a fraudulent transfer or withdrawal of money violates the authenticity of banking
information. Using a variety of techniques can prevent these crimes. One such technique is
check summing. Check summing sums the numerically coded word contents of a file before
and after it is used. If the sums are different, then the file has been altered. Other techniques
include authenticating the sources of messages, confirming transactions with those who
initiate them, segregating and limiting job assignments to make it necessary for more than
one person to be involved in committing a crime, and limiting the amount of money that can
be transferred through a computer. Together, these application safeguards serve as
anticorruption measures.
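Checksumming can be sketched with Python's standard hashlib. The bank-record strings below are made up for illustration, and SHA-256 stands in for the simple sum the text describes (a cryptographic hash is the more tamper-resistant modern choice):

```python
# Compute a checksum of a file's contents before it is used, recompute it
# afterwards, and flag any difference as a sign the data was altered.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

record = b"balance: 1000.00"
before = checksum(record)

record = b"balance: 9000.00"        # someone alters the data
after = checksum(record)

print(before == after)              # False: the alteration is detected
```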

4.2.8 Disaster Recovery Plans

Organizations and businesses that rely on computers need to institute disaster recovery plans
that are periodically tested and upgraded. This is because computers and storage components
such as diskettes or hard disks are easy to damage. A computer’s memory can be erased, and
flooding, fire, or other forms of destruction can damage the computer’s hardware. Computers,
computer data, and components should be installed in safe and locked facilities.
4.2.9 Anti-viral Tactics

Preparation and Prevention

Computer users can prepare for a viral infection by creating backups of legitimate original
software and data files regularly so that the computer system can be restored if necessary.
Viral infection can be prevented by obtaining software from legitimate sources or by using a
quarantined computer to test new software—that is, a computer not connected to any
network. However, the best prevention may be the installation of current and well-designed
antiviral software. Such software can prevent a viral infection and thereby help stop its
spread.

Virus Detection

Several types of antiviral software can be used to detect the presence of a virus. Scanning
software can recognize the characteristics of a virus's computer code and look for these
characteristics in the computer's files. Because new viruses must be analyzed as they appear,
scanning software must be updated periodically to be effective. Other scanners search for
common features of viral programs and are usually less reliable. Most antiviral software uses
both on-demand and on-access scanners. On-demand scanners are launched only when the
user activates them. On-access scanners, on the other hand, are constantly monitoring the
computer for viruses but are always in the background and are not visible to the user. The on-
access scanners are seen as the proactive part of an antivirus package and the on-demand
scanners are seen as reactive. On-demand scanners usually detect a virus only after the
infection has occurred and that is why they are considered reactive.
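On-demand signature scanning of the kind described above can be sketched as a search for known byte patterns in a file's contents. The signature names and patterns here are invented for illustration; real scanners rely on large, frequently updated signature databases.

```python
# Toy on-demand signature scanner: report which known virus signatures
# (byte patterns) appear in a file's data.

SIGNATURES = {
    "demo-virus-a": b"\xde\xad\xbe\xef",
    "demo-virus-b": b"EVIL_PAYLOAD",
}

def scan(data: bytes):
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

clean = b"just an ordinary document"
infected = b"header...EVIL_PAYLOAD...rest of file"

print(scan(clean))      # []
print(scan(infected))   # ['demo-virus-b']
```

This also shows why scanners must be updated periodically: a virus whose pattern is not in `SIGNATURES` passes undetected, which is the gap heuristics (discussed below in the source) try to close.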

Antivirus software is usually sold as packages containing many different software programs
that are independent of one another and perform different functions. When installed or
packaged together, antiviral packages provide complete protection against viruses. Within
most antiviral packages, several methods are used to detect viruses. Checksumming, for
example, uses mathematical calculations to compare the state of executable programs before
and after they are run. If the checksum has not changed, then the system is uninfected.
Checksumming software can detect an infection only after it has occurred, however. As this
technology is dated and some viruses can evade it, checksumming is rarely used today.
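
The checksumming idea can be sketched in a few lines: record a digest of each executable while the system is known to be clean, then recompute and compare it later. This sketch uses SHA-256 simply as a convenient checksum; the function names are illustrative, not from any particular antivirus product.

```python
import hashlib

def checksum(path):
    """Compute a SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large executables do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, baseline):
    """Compare the file's current checksum against a recorded baseline.

    A mismatch means the executable was modified after the baseline was
    taken -- possibly by a virus appending or overwriting code."""
    return checksum(path) == baseline
```

The weakness noted above is visible in the sketch: `verify` can only report that a change happened after the fact, and a virus that infects a file before the baseline is recorded, or that patches the stored baselines themselves, evades it entirely.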

Most antivirus packages also use heuristics (problem-solving by trial and error) to detect new
viruses. This technology observes a program’s behavior and evaluates how closely it
resembles a virus. It relies on experience with previous viruses to predict the likelihood that a
suspicious file is an as-yet unidentified or unclassified new virus.
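
One simple way to picture heuristic detection is as a weighted score over suspicious behaviors. The rule names and weights below are invented for illustration; a real heuristic engine would extract these traits by actually observing or emulating the program.

```python
# Hypothetical heuristic rules: each suspicious trait carries a weight
# reflecting how strongly it is associated with known viruses.
RULES = {
    "writes_to_boot_sector": 5,
    "modifies_other_executables": 4,
    "hooks_file_open_calls": 3,
    "contains_self_copy_routine": 4,
    "displays_window": 0,  # common in legitimate software; no weight
}

def heuristic_score(traits):
    """Sum the weights of the traits observed in a program."""
    return sum(RULES.get(t, 0) for t in traits)

def classify(traits, threshold=6):
    """Flag a program as suspicious when its score crosses the threshold."""
    return "suspicious" if heuristic_score(traits) >= threshold else "clean"
```

The threshold is the trade-off knob: lowering it catches more unknown viruses but produces more false alarms on legitimate software, which is why heuristic detectors are tuned against experience with previous viruses.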

Other types of antiviral software include monitoring software and integrity-shell software.
Monitoring software is different from scanning software. It detects illegal or potentially
damaging viral activities such as overwriting computer files or reformatting the computer's
hard drive. Integrity-shell software establishes layers through which any command to run a
program must pass. Checksumming is performed automatically within the integrity shell, and
infected programs, if detected, are not allowed to run.

Containment and Recovery

Once a viral infection has been detected, it can be contained by immediately isolating
computers on networks, halting the exchange of files, and using only write-protected disks. In
order for a computer system to recover from a viral infection, the virus must first be
eliminated. Some antivirus software attempts to remove detected viruses, but sometimes with
unsatisfactory results. More reliable results are obtained by turning off the infected computer;
restarting it from a write-protected floppy disk; deleting infected files and replacing them
with legitimate files from backup disks; and erasing any viruses on the boot sector.

Viral Strategies

The authors of viruses have several strategies to circumvent antivirus software and to
propagate their creations more effectively. So-called polymorphic viruses make variations in
the copies of themselves to elude detection by scanning software. A stealth virus hides from
the operating system when the system checks the location where the virus resides, by forging
results that would be expected from an uninfected system. A so-called fast-infector virus
infects not only programs that are executed but also those that are merely accessed. As a
result, running antiviral scanning software on a computer infected by such a virus can infect
every program on the computer. A so-called slow-infector virus infects files only when the
files are modified, so that it appears to checksumming software that the modification was
legitimate. A so-called sparse-infector virus infects only on certain occasions—for example,
it may infect every tenth program executed. This strategy makes it more difficult to detect the
virus.

By using combinations of several virus-writing methods, virus authors can create more
complex new viruses. Many virus authors also tend to adopt new technologies as they
appear, so the antivirus industry must move rapidly to update its antiviral software and
contain outbreaks of such new viruses.
