
Introduction to Computers

Computers and other forms of technology impact our daily lives in a multitude of ways. We encounter computers in stores, restaurants, and other retail establishments. We use computers or smartphones and the Internet regularly to obtain information, experience online entertainment, buy products and services, and communicate with others. Many of us carry a smartphone or other mobile device at all times so we can remain in touch with others on a continual basis and can access Internet information as we need it. We also use these devices to pay for purchases, play online games with others, watch TV and movies, access real-time traffic conditions, and much, much more.

Businesses also use computers extensively, such as to maintain employee and customer records, manage inventories, maintain online stores and other Web sites, process sales, control robots and other machines in factories, and provide business executives with the up-to-date information they need to make decisions. The government uses computers to support our nation's defence systems, for space exploration, for storing and organizing vital information about citizens, for law enforcement and military purposes, and for other important tasks. In short, computers and technology are used in an endless number of ways.

1.1. DEFINITION OF A COMPUTER

Fifty years ago, computers were used primarily by researchers and scientists. Today, computers are an integral part of our lives. Experts call this trend pervasive computing, in which few aspects of daily life remain untouched by computers and computing technology. It is defined as the use of computerized technology through various devices, in various settings, around the clock. With pervasive computing (also referred to as ubiquitous computing), computers are found virtually everywhere, and computing technology is integrated into an ever-increasing number of devices to give those devices additional functionality, such as enabling them to retrieve Internet information or to communicate with other devices on an ongoing basis. Because of the prominence of computers in our society, it is important to understand what a computer is, a little about how a computer works, and the implications of living in a technology-oriented society.

A computer is an electronic device that takes data as input from the user, stores the data, processes it, and gives the result (output). A computer is basically defined as a tool or machine used for processing data to give the required information.

It is capable of:

a) taking in data through the input unit (which provides data to the computer), e.g. keyboard, scanner, camera, mouse;
b) storing the input data on a diskette, hard disk, or other storage medium;
c) processing the data at the central processing unit (CPU); and
d) giving out the result (output) on the screen or Visual Display Unit (VDU), as the sketch below illustrates.
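
As a minimal illustration, the four capabilities above map directly onto a short Python program. This is a hypothetical sketch invented for this text, not a description of any particular machine:

    # A minimal sketch of the input-process-output cycle in Python.
    # The program and its variable names are invented for illustration.

    # a) Input: take data from the user through an input unit (the keyboard)
    first = float(input("Enter the first number: "))
    second = float(input("Enter the second number: "))

    # b) Storage: keep the input data in memory (it could also be saved to disk)
    stored_data = [first, second]

    # c) Processing: the CPU computes a result from the stored data
    total = sum(stored_data)

    # d) Output: display the result on the screen (the VDU)
    print("The sum is", total)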

Let's see how that works in our everyday world.


Data: Almost any kind of fact or set of facts can become computer data, such as the words in a letter to a friend, the numbers in a monthly budget, the images in a photograph, the notes in a song, or the facts stored in an employee record. The term data refers to facts about a person, object, or place that can be processed by a computer system. On its own, data has no proper meaning. Data may be a collection of words, numbers, graphics, or sounds, e.g. name, age, complexion, school, class, height, etc.

Information: Information refers to processed data, a meaningful statement produced by a computer system through computation, summary, or synthesis. When raw facts and figures are processed and arranged in some order, they become information, e.g. the net pay of workers, the examination results of students, the list of successful candidates in an examination or interview, reports, etc. Information is frequently generated to answer some type of question, such as how many of a restaurant's employees work less than 20 hours per week, or how many seats are available on a particular flight from Lagos to Abuja.
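
To make the data-versus-information distinction concrete, here is a small Python sketch that processes raw data into information answering the restaurant question above. The employee records are hypothetical, invented purely for illustration:

    # Raw data: facts that carry no particular meaning on their own.
    employees = [
        {"name": "Ada",   "hours_per_week": 15},
        {"name": "Bayo",  "hours_per_week": 40},
        {"name": "Chidi", "hours_per_week": 18},
        {"name": "Dupe",  "hours_per_week": 25},
    ]

    # Processing turns the data into information: an answer to the question
    # "How many employees work less than 20 hours per week?"
    part_time = [e["name"] for e in employees if e["hours_per_week"] < 20]
    print(len(part_time), "employees work less than 20 hours per week:", part_time)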

1.2. Uses of Computers


1.3. Evolution of Computers

A. The Mechanical Era (1623-1945)

The growth of the computer industry started with the need to perform fast calculations. The manual method of computing was slow and prone to errors, so attempts were made to develop faster calculating devices. The journey that started with the first calculating device, the abacus, has led us today to extremely high-speed calculating devices. Let us first have a look at some early calculating devices.

• Abacus

Computers truly came into their own as great inventions in the last two decades of the 20th century, but their history stretches back more than 2,500 years to the abacus: a simple calculator made from beads and wires, which is still used in some parts of the world today. Addition and multiplication of numbers were done by using the place value of the digits of the numbers and the position of the beads in the abacus. The difference between an ancient abacus and a modern computer seems vast, but the principle, making repeated calculations more quickly than the human brain, is exactly the same.
• Napier’s Bones

This was developed by John Napier in 1617. He devised a set of numbering rods, known as Napier's Bones, with which both multiplication and division could be performed. These numbered rods could multiply any number by a number in the range 2-9. There are ten bones corresponding to the digits 0-9, and a special eleventh bone is used to represent the multiplier. By placing the bones corresponding to the multiplier on the left side and the bones corresponding to the digits of the multiplicand on the right, the product of the two numbers can easily be obtained.

• Pascaline

Blaise Pascal, a French mathematician, invented an adding machine in 1642 that was made up of gears and was used for adding numbers quickly. This machine, also called the Pascaline, was capable of addition and subtraction with carry-transfer capability. It worked on a clockwork mechanism and consisted of numbered toothed wheels, each with a unique place value. Addition and subtraction were performed by the controlled rotation of these wheels.

• Leibnitz’s Calculator

In 1673 Gottfried Leibnitz, a German mathematician, extended the capabilities of Pascal's adding machine to perform multiplication and division as well. Multiplication was done through the repeated addition of numbers using stepped cylinders, each with nine teeth of varying lengths.
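
The arithmetic principle behind Leibnitz's machine, multiplication as repeated addition, can be sketched in a few lines of Python. This illustrates the principle only, not the machine's gearing:

    def multiply_by_repeated_addition(multiplicand, multiplier):
        """Multiply two non-negative integers the way Leibnitz's calculator
        did in principle: by adding the multiplicand over and over again."""
        product = 0
        for _ in range(multiplier):
            product += multiplicand
        return product

    print(multiply_by_repeated_addition(127, 46))  # prints 5842, i.e. 127 x 46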

• Difference Engine

Wilhelm Schickhard, Blaise Pascal, and Gottfried Leibnitz were among the mathematicians who designed and implemented calculators capable of addition, subtraction, multiplication, and division. The first multi-purpose or programmable computing device, however, was probably Charles Babbage's Difference Engine, which tabulated polynomial equations using the method of finite differences. Babbage received some help with the development of the Difference Engine from Ada Lovelace, considered by many to be the first computer programmer for her work and notes on Babbage's engines. Construction began in 1823 but was never completed because of a lack of funding.
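
The method of finite differences that the Difference Engine mechanised can be sketched in Python. The key fact is that for a degree-n polynomial the n-th differences are constant, so once the first value and its differences are known, every further value needs only additions. The function below is an illustrative simplification, not a model of the engine itself:

    def tabulate(initial, count):
        """Tabulate a polynomial by the method of finite differences.

        initial = [p(0), first difference, second difference, ...]; each new
        value is produced with additions alone, as in Babbage's engine."""
        diffs = list(initial)
        values = []
        for _ in range(count):
            values.append(diffs[0])
            # Fold each difference into the level above to step x forward by 1.
            for i in range(len(diffs) - 1):
                diffs[i] += diffs[i + 1]
        return values

    # p(x) = x**2 + x + 41: p(0) = 41, first and second differences are 2 and 2.
    print(tabulate([41, 2, 2], 6))  # [41, 43, 47, 53, 61, 71]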

Fig.: Charles Babbage's Difference Engine

In 1842, Babbage designed a more ambitious machine, called the Analytical Engine, but unfortunately it too was only partially completed. Babbage, together with Ada Lovelace, recognized several important programming techniques, including conditional branches, iterative loops, and index variables. The machine Babbage designed is arguably the first to have been used in computational science.

In 1837, George Scheutz and his son Edvard began work on a smaller version of the Difference Engine, and by 1853 they had constructed a machine that could process 15-digit numbers and calculate fourth-order differences. The US Census Bureau was one of the first organizations to use mechanical computers, employing punch-card equipment designed by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith's company merged with other companies to found the corporation that in 1924 became International Business Machines (IBM).

• Mark 1

In 1944 Prof. Howard Aiken, in collaboration with IBM, constructed an electromechanical computer named the Mark 1, which could multiply two 10-digit numbers in 5 seconds. This machine was based on the concept of Babbage's Analytical Engine and was the first operational general-purpose computer that could execute pre-programmed instructions automatically, without any human intervention.

B. Generations of Computers

The evolution of digital computing is often divided into generations. Each generation is characterised by dramatic improvements over the previous generation in the technology used to build computers, the internal organisation of computer systems, and programming languages. Although not usually associated with computer generations, there has also been a steady improvement in algorithms, including algorithms used in computational science. The following history is organised using these widely recognized generations as mileposts.

The generation a computer belongs to is determined by the technology it uses. With each advancement in generation, the performance of computers improved, not only due to better hardware technology but also due to superior operating systems and other software utilities.

1. First Generation Electronic Computers (1941 – 1956)

First generation computers were characterized by vacuum tubes. A vacuum tube is a delicate glass device that can control and amplify electronic signals. These computers were made using thousands of vacuum tubes and were the fastest calculating devices of their time. They were very large in size, consumed a lot of electricity, and generated a lot of heat.

They relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. It would take operators days or even weeks to set up a new problem. Input was based on punched cards and paper tape, and output was displayed on printouts.

First generation computers used the concept of the 'stored program'. The stored-program concept is the storage of instructions in computer memory, enabling the computer to perform a variety of tasks in sequence or intermittently. The idea was introduced in the late 1940s by John von Neumann, who proposed that a program be electronically stored in binary-number format in a memory device, so that instructions could be modified by the computer as determined by intermediate computational results. One key advantage of this technique is that the computer can easily go back to a previous instruction and repeat it.
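
The sketch below is a toy stored-program machine in Python; its instruction set is invented for illustration and does not correspond to any real computer. Instructions and data share one memory, and the jump instruction lets the machine go back and repeat earlier instructions:

    # A toy stored-program machine: instructions and data live in ONE memory.
    def run(memory):
        acc, pc = 0, 0                         # accumulator and program counter
        while True:
            op, arg = memory[pc]               # fetch the instruction at pc
            pc += 1
            if op == "LOAD":                   # decode and execute
                acc = memory[arg]
            elif op == "ADD":
                acc += memory[arg]
            elif op == "STORE":
                memory[arg] = acc
            elif op == "JUMP_IF_POS":          # go back and repeat instructions
                pc = arg if acc > 0 else pc
            elif op == "HALT":
                return memory

    memory = [
        ("LOAD", 5),          # 0: acc = counter
        ("ADD", 6),           # 1: acc = acc + (-1)
        ("STORE", 5),         # 2: counter = acc
        ("JUMP_IF_POS", 0),   # 3: loop back while the counter is positive
        ("HALT", 0),          # 4: stop
        3,                    # 5: data: the counter
        -1,                   # 6: data: the constant -1
    ]
    print(run(memory)[5])     # prints 0: the loop ran until the counter hit zero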

The Universal Automatic Computer (UNIVAC), the Electronic Discrete Variable Automatic Computer (EDVAC), the Electronic Delay Storage Automatic Calculator (EDSAC), and the Electronic Numerical Integrator and Computer (ENIAC) are examples of first-generation computing devices.

Software technology during this period was very primitive. The first programs were written out in machine code, i.e. programmers directly wrote down the numbers that corresponded to the instructions they wanted to store in memory. These machines employed the stored-program concept, provided a supervisory typewriter for controlling the computer, and used magnetic tapes for storage.

2. Second Generation (1957 – 1962)

The second generation saw several important developments at all levels of computer system design, from the technology used to build the basic circuits to the programming languages used to write scientific applications. First-generation computers were notoriously unreliable, largely because the vacuum tubes kept burning out. The invention of the transistor changed the way computers were built, leading to the second generation of computer technology.

A transistor is a solid-state device that functions as an electronic switch. It regulates current or voltage flow and acts as a switch or gate for electronic signals in an electronic circuit, but at a tiny fraction of the weight, power consumption, and heat output of a vacuum tube. Because transistors are small and can last indefinitely, second-generation computers were much smaller and more reliable than first-generation computers.

Second-generation computers looked much more like the computers we use today. Although they still used punched cards for input, they had printers, tape storage, and disk storage.

In contrast to the first generation's reliance on cumbersome machine language, the second generation saw the development of the first high-level programming languages, which are much easier for people to understand and work with than machine language. The two programming languages introduced during the second generation, Common Business-Oriented Language (COBOL) and Formula Translator (FORTRAN), remain among the most widely used programming languages even today. COBOL is preferred by businesses, while FORTRAN is used by scientists and engineers.

Important commercial machines of this era include the IBM 1620 and 7094. The IBM 1620 was developed for scientific computing and became the computer of choice for university research labs. The Livermore Atomic Research Computer (LARC) and the IBM 7030 (a.k.a. Stretch) were early examples of machines that overlapped memory operations with processor operations and had primitive forms of parallel processing.

3. Third Generation (1963 – 1972)

The development of the Integrated Circuit (IC) brought about the third generation of computers, which incorporated many transistors and electronic circuits on a single wafer or chip of silicon. Essentially, an integrated circuit is a solid-state device on which an entire circuit (transistors and the connections between them) can be created, i.e. a semiconductor device with several transistors built into one physical component. This meant that a single integrated circuit chip, not much bigger than an early transistor, could replace entire circuit boards containing many transistors, again reducing the size of computers. Because of this, these machines gained the name minicomputers: compared to second generation computers, which would occupy entire rooms and buildings, they were quite small and conserved space. The invention of the IC was the greatest achievement of the third generation of computers. The IC was invented by Robert Noyce and Jack Kilby in 1958-59; an IC is a single component containing a number of transistors.

Another key innovation was timesharing. Early second-generation computers were frustrating to use because they could run only one job at a time. Users had to give their punched cards to computer operators, who would run the program and then give the results back to the user. This technique, called batch processing, was time-consuming and inefficient. In timesharing, however, the computer is designed so that it can be used by many people simultaneously. Users access the computer remotely by means of terminals: control devices equipped with a video display and keyboard. In a properly designed timesharing system, users have the illusion that no one else is using the computer.
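
A minimal sketch of the timesharing idea in Python, assuming a simple round-robin policy; the jobs below are invented stand-ins for real user programs:

    from collections import deque

    # Round-robin time slicing: each user's job runs for one short slice in
    # turn, so every user appears to have the machine to themselves.
    jobs = deque([("ada", 3), ("bayo", 2), ("chidi", 4)])  # (user, slices left)

    while jobs:
        user, remaining = jobs.popleft()
        print("running", user + "'s job for one time slice")
        if remaining > 1:
            jobs.append((user, remaining - 1))  # unfinished: back of the queue
        else:
            print(user + "'s job has finished")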

Integrated circuit technology unleashed a period of innovation in the computer industry that is without parallel in history. By the second generation, scientists knew that more powerful computers could be created by building more complex circuits. But because those circuits had to be wired by hand, such computers were too complex and expensive to build. With integrated circuits, new and innovative designs became possible for the first time. With ICs on the scene, it was possible to create smaller, inexpensive computers that more organizations could afford to buy.


The third generation brought huge gains in computational power. Innovations in this era include the use of integrated circuits (ICs), semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique for efficiently designing complex processors, the coming of age of pipelining and other forms of parallel processing, and the introduction of operating systems and time-sharing. The PDP-8, PDP-11, ICL 2900, IBM 360, and IBM 370 are examples of third generation computers.

4. Fourth Generation (1972 – 1984)

The fourth-generation computers emerged with the development of VLSI (Very Large-Scale Integration). Very-large-scale integration is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s, when complex semiconductor and communication technologies were being developed.

Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM, and other glue logic; VLSI lets IC designers put all of these onto one chip. With the help of VLSI technology, the microprocessor came into existence. From then on, the evolution of computing technology has been an ever-increasing miniaturization of the electronic circuitry. Some computers of this generation were the DEC 10, STAR 1000, PDP-11, CRAY-1 (supercomputer), and CRAY X-MP (supercomputer).

5. Fifth-generation Computers (now and the future)

Fifth-generation computers are most commonly defined as those that are based on artificial intelligence, allowing them to think, reason, and learn. Some aspects of fifth-generation computers, such as voice and touch input and speech recognition, are in use today. In the future, fifth-generation computers are expected to be constructed differently than they are today, such as optical computers that process data using light instead of electrons, tiny computers that utilize nanotechnology, or entire general-purpose computers built into desks, home appliances, and other everyday devices.

1.4. Classification of Computers by size

i. Personal Computers

When most people think about computers, they picture a personal computer, or PC. This type of computer is called personal because it is designed for only one person to use at a time. Personal computers fall into several categories that are differentiated from one another by their size. The most common sizes are:

✓ Desktop PC: A computer designed to be used at a desk and seldom moved. This type of computer consists of a large metal box called a system unit that contains most of the essential components, with a separate monitor, keyboard, and mouse that all plug into the system unit.

✓ Notebook PC: A portable computer designed to fold up like a notebook for carrying. The cover opens up to reveal a built-in screen, keyboard, and pointing device, which substitutes for a mouse. This type of computer is sometimes called a laptop.


✓ Tablet PC: A portable computer that consists of a touch-sensitive display screen mounted on a tablet-size plastic frame with a small computer inside. There is no built-in keyboard or pointing device; a software-based keyboard pops up onscreen when needed, and your finger sliding on the screen serves as a pointing device.

✓ Smartphone: A mobile phone that can run computer applications and has Internet access capability. Smartphones usually have a touch-sensitive screen and provide voice calls, text messaging, and Internet access. Many have a variety of location-aware applications, such as a global positioning system (GPS) and mapping program, and a local business guide.
