CSC 101 First Note


Introduction to Computer Science
CSC 101
Mr. Samuel Acheme
Mr. Francis Edoh
COURSE CONTENTS
• Definition of computer science. History of computers and their generations.
• Information processing and its role in society.
• Computer hardware: functional components, modern I/O units.
• Software: operating systems, application packages.
• Information systems.
• Integration and application of ICT in business and other segments of society.
• Usage of MS Office applications and the internet for lab sessions.
• Program development: flowcharts and algorithms.
• Introduction to Computer Programming using Python.
Course Aim and Objectives
The aim of this course is to develop your knowledge and understanding of the underlying principles and applicability of computer science. The objectives include:

• Develop your knowledge of computers and computational analysis.
• Understand the applications of information systems.
• Develop proficiency in the use of basic computer software.
• Build up your capacity to evaluate flowcharts and algorithms.
• Develop your competence in basic computer networking.
• Build up your capacity to write simple programs.
COMPUTER SCIENCE, HISTORY OF COMPUTER

COMPUTER SCIENCE

Computer Science is the study of computers and computational systems. It encompasses both theoretical concepts, such as algorithms and data structures, and practical applications, such as software engineering, hardware design, and network security.
At its core, computer science is concerned with automating processes and solving problems through computational thinking, which involves breaking down tasks into manageable steps that can be executed by a computer.
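This step-by-step decomposition can be illustrated in Python, the language introduced later in this course. The task ("find the average mark") and the function name are invented for illustration:

```python
# Computational thinking: break "find the average mark" into small
# steps that a computer can execute one at a time.

def average_mark(marks):
    total = 0                  # Step 1: start a running total at zero
    for mark in marks:         # Step 2: add each mark to the total
        total = total + mark
    count = len(marks)         # Step 3: count how many marks there are
    return total / count       # Step 4: divide the total by the count

print(average_mark([70, 85, 90]))
```

Each line carries out one small, unambiguous step; together they solve the whole task.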
The Association for Computing Machinery (ACM) defines Computer Science as the study of computers and algorithmic processes, including their principles, their hardware and software designs, their applications, and their impact on society.
What is a Computer?

A computer is an electronic device that can process information (data) according to a set of instructions (program). It can store, retrieve, and process data.
Introduction to Computer Science

A computer is an electronic machine that accepts data as input, stores it until the information is needed, processes the information according to the instructions provided by the user (called a program), and finally returns the results as output to the user.
The computer can store and manipulate large quantities of data at very high speed, but a computer cannot think. A computer makes decisions based on simple comparisons, such as one number being larger than another.
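A minimal Python sketch of such comparison-based decision-making (the function is a hypothetical example, not from the lecture):

```python
# A computer "decides" only by evaluating comparisons and branching.

def larger(a, b):
    if a > b:          # a single comparison drives the decision
        return a
    return b           # otherwise b is at least as large

print(larger(42, 17))
```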
Although the computer can help solve a tremendous variety of problems, it is simply a machine. It cannot solve problems on its own; i.e., a computer cannot do anything without a program.
The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century.
EVOLUTION OF COMPUTER – MANUAL COUNTING DEVICE

1. Abacus (3000 BC)

The earliest known calculating device, consisting of beads on a frame.

The history of computers starts thousands of years ago in Babylonia (Mesopotamia), at the birth of the abacus, a wooden rack holding horizontal wires with beads strung on them. The abacus was a calculating device (or adding machine) used to perform addition and subtraction easily and speedily.
The abacus works on the principle of place-value notation: the location of the bead determines its value. In this way, relatively few beads are required to depict large numbers. The beads are counted, or given numerical values, by shifting them in one direction. The values are erased (freeing the counters for reuse) by shifting the beads in the other direction.
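The place-value principle can be sketched in Python. The rod layout and base are illustrative assumptions, not a model of any particular abacus:

```python
# Place-value notation: each rod (digit position) is worth a power of
# the base, so a few beads per rod can represent very large numbers.

def rods_to_number(rods, base=10):
    """rods lists the bead count on each rod, most significant first."""
    value = 0
    for count in rods:
        value = value * base + count   # shift one place left, add this rod
    return value

print(rods_to_number([4, 0, 7]))   # 4*100 + 0*10 + 7
```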
2. Napier's Bones (1617)

A mechanical calculating device invented by John Napier, used for multiplication and division.

3. Pascaline (1642)

The first mechanical adding machine, invented by Blaise Pascal.

In 1642, at age 18, the French scientist and philosopher Blaise Pascal invented the first practical mechanical calculator, the Pascaline, to help his tax-collector father do his sums. The Pascaline was based on a design described by Hero of Alexandria (1st century AD) for adding up the distance a carriage travelled. The basic principle of his calculator is still used today in water meters and modern-day odometers.
The calculator had spoked metal wheel
dials, with the digits 0 through 9 displayed
around the circumference of each wheel.
To input a digit, the user placed a stylus in
the corresponding space between the
spokes, and turned the dial until a metal
stop at the bottom was reached, similar
to the way a rotary telephone dial is used.
This would display the number in the
boxes at the top of the calculator. Then,
one would simply redial the second
number to be added, causing the sum of
both numbers to appear in boxes at the
top.
Pascaline
This first mechanical calculator, called the
Pascaline, had several disadvantages. Although
it did offer a substantial improvement over
manual calculations, only Pascal himself could
repair the device and it cost more than the
people it replaced! In addition, the first signs of
technophobia emerged with mathematicians
fearing the loss of their jobs due to progress.
4. Difference Engine (1822)
A mechanical computer designed by Charles Babbage to calculate
mathematical tables.

The first person to attempt evolving the calculator into a computer was Charles Babbage. Many regard Babbage as the "father of the computer" because his machines had an input (a way of feeding in numbers), a memory (something to store these numbers while complex calculations were taking place), a processor (the number-cruncher that carried out the calculations), and an output (a printing mechanism)—the same basic components shared by all modern computers.
In 1812, Charles Babbage, a mathematics
professor in Cambridge, England, began to
design an automatic mechanical
calculating machine, which he called a
difference engine. By 1822, he had a
working model to demonstrate with. With
financial help from the British government,
Babbage started fabrication of a difference
engine in 1823.
It was intended to be steam
powered and fully automatic,
including the printing of the
resulting tables, and
commanded by a fixed
instruction program. The
difference engine, although
having limited adaptability
and applicability, was really a
great advance.
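The Difference Engine mechanized the method of finite differences, which lets a machine tabulate a polynomial using only repeated addition. A sketch in Python; the function name and the example polynomial are illustrative, not Babbage's design:

```python
# Method of finite differences: given a polynomial's starting value and
# its initial differences, every further table entry needs only addition.

def tabulate(diffs, n):
    """Extend a polynomial table n steps using only addition.
    diffs = [p(0), first difference, second difference, ...]."""
    diffs = list(diffs)
    table = []
    for _ in range(n):
        table.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]   # cascade the additions upward
    return table

# p(x) = x**2 has value 0, first difference 1, second difference 2
print(tabulate([0, 1, 2], 6))   # [0, 1, 4, 9, 16, 25]
```

Note that no multiplication appears anywhere: the whole table of squares comes out of additions alone, which is exactly what made the engine mechanically feasible.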
5. Analytical Engine (1837)

A general-purpose mechanical computer designed by Charles Babbage, considered the first programmable computer.

Babbage continued to work on the Difference Engine for the next 10 years, but in 1833 he lost interest because he thought he had a better idea — the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine.
The built-in operations were supposed to
include everything that a modern general
– purpose computer would need. The
analytical engine was soon to use
punched cards, which would be read into
the machine from several different
Reading Stations. The machine was
supposed to operate automatically, by
steam power, and require only one person
there. Babbage's computers were never finished.
In 1833 Augusta Ada Byron, Countess of Lovelace, daughter of the poet Lord Byron and an enthusiastic mathematician, met the inventor of the Difference Engine, Charles Babbage, at age 17.

Babbage was more fortunate in receiving help from Augusta Ada Byron. She helped to refine Babbage's ideas for making his machine programmable—and this is why she is still, sometimes, referred to as the world's first computer programmer. Ada suggested to Babbage writing a plan for how the engine might calculate Bernoulli numbers. This is regarded as the first computer program.

A software language developed by the US Department of Defense was named "Ada" in her honour in 1979.
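Lovelace's Note G described a plan for the engine, not modern code. As an illustration only, here is one standard recurrence for Bernoulli numbers in Python; this is not her actual program:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B0..Bn using the standard recurrence (B1 = -1/2 convention)."""
    B = [Fraction(1)]                # B0 = 1
    for m in range(1, n + 1):
        # sum of C(m+1, k) * Bk for k < m, then solve for Bm
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(6))   # 1, -1/2, 1/6, 0, -1/30, 0, 1/42
```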
6. TABULATOR (1890)

In 1890 an American statistician, Herman Hollerith, who worked for the U.S. Census Bureau, built a mechanical tabulator based on punched cards to rapidly tabulate statistics from millions of pieces of data and help compile census data. The tabulator could read the information that had been punched into the cards automatically, without human help. Because of this, reading errors were reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be used as easily accessible memory of almost unlimited size. Furthermore, different problems could be stored on different stacks of cards and accessed when needed.
A census was taken each decade but, by the 1880s,
the population of the United States had grown so much
through immigration that a full-scale analysis of the data
by hand was taking seven and a half years. The
statisticians soon figured out that, if trends continued, they
would run out of time to compile one census before the
next one fell due, hence the development of the tabulator.
The tabulator tallied the entire
census in only six weeks and
completed the full analysis in just
two and a half years. Soon
afterward, Hollerith realized his
machine had other applications,
so he set up the Tabulating
Machine Company in 1896 to
manufacture it commercially. A
few years later, it changed its
name to the Computing-
Tabulating-Recording (C-T-R)
company and then, in 1924,
acquired its present name:
International Business Machines
(IBM).
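The tabulating idea above, counting how many cards fall into each category in a single pass, can be sketched in Python. The "cards" and their occupation values are invented for illustration:

```python
from collections import Counter

# Hypothetical census "cards": each record carries one occupation field,
# read straight off the stack like punched cards fed to the tabulator.
cards = ["farmer", "clerk", "farmer", "teacher", "farmer", "clerk"]

tally = Counter(cards)          # tabulate every card in one pass
print(tally.most_common())      # [('farmer', 3), ('clerk', 2), ('teacher', 1)]
```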
7. ENIAC (1946)
The Electronic Numerical Integrator and Computer, the first electronic
general-purpose computer

The start of World War II produced a large need for computing capacity, especially for the military. New weapons were made for which trajectory tables and other essential data were needed. In 1942, J. Presper Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering of the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electronic Numerical Integrator and Computer).
The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per second, by finding the value of each product from a multiplication table stored in its memory.
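Table-lookup multiplication can be imitated in Python. This is a loose sketch of the idea, digit-by-digit lookup plus addition, not a model of ENIAC's actual circuitry:

```python
# A stored one-digit multiplication table, consulted instead of
# computing each product from scratch.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply(x, d):
    """Multiply x by a single digit d using only table lookups and addition."""
    result, place = 0, 1
    while x:
        x, digit = divmod(x, 10)             # peel off the lowest digit of x
        result += TABLE[(digit, d)] * place  # look up one partial product
        place *= 10
    return result

print(multiply(1234, 7))   # 8638
```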
ENIAC was therefore about 1,000 times faster than the previous generation of relay computers. ENIAC used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched-card I/O, 1 multiplier, 1 divider/square rooter, and 20 adders using decimal ring counters, which served as adders and also as quick-access (0.0002 seconds) read-write register storage.
ENIAC is commonly accepted as the first successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already used basically the same ideas in a simpler vacuum-tube device he had built in the 1930s (known as the Atanasoff-Berry Computer (ABC), though never fully completed) while at Iowa State College.
In 1973 the court ruled in favor of the Atanasoff claim, invalidating the ENIAC patent.
The development of ENIAC was the
starting point of the current
generations of modern computers
considered below.
8. Transistor (1947)

A semiconductor device that replaced vacuum tubes in computers, making them smaller, faster, and more reliable. A transistor can act as a switch or gate for electronic signals. In practice this means we use transistors as electronic switches that turn electronic circuits on or off.
This is a basic function that we use in digital
logic circuits, such as those found in
computers, where we use transistors to
represent the ones and zeros of binary
code.
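A toy Python model of this idea: treating transistors as on/off switches, two switches in series behave like an AND gate and two in parallel like an OR gate. This is a simplification for illustration, not an electrical model:

```python
# Logic gates from "switches": 1 means a closed switch (current flows),
# 0 means an open switch.

def AND(a, b):
    # series: current flows only if both switches are closed
    return 1 if (a == 1 and b == 1) else 0

def OR(a, b):
    # parallel: current flows if either switch is closed
    return 1 if (a == 1 or b == 1) else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```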
9. Integrated Circuit (1958)
A tiny electronic circuit
containing thousands of
transistors, resistors, and
capacitors on a single chip.
10. Personal Computer (1970s)
Smaller, more affordable computers designed for personal use.
11. Microprocessor (1971)
• A single integrated circuit containing the
entire central processing unit (CPU) of a
computer.
12. Graphical User Interface (GUI) (1980s)
A user-friendly interface that uses
icons and menus to interact with a
computer.
13. Internet (1990s)
A global network of interconnected computers that revolutionized
communication and information sharing.
14. Smartphone (2000s)

A mobile phone with advanced features such as a touchscreen, internet connectivity, and apps.
15. Cloud Computing (2000s)

• The practice of using remote servers and the internet to store and access data.
16. Artificial Intelligence (AI) (2010s)
The development of computer systems that can
perform tasks typically requiring human intelligence.
GENERATIONS OF COMPUTER
In discussing the history of the modern computer, we
classify them into generations.
I. First generation: (1942 – 1955) characterized by the
vacuum tube (a.k.a. thermionic valve) as its major
functional electronic component and magnetic drums
for memory.
The high cost of vacuum tubes prevented their use for main memory; instead, some early machines used acoustic delay lines, which stored information in the form of propagating sound waves.
Input was based on punched cards and paper tape, and output was displayed on printouts. First generation computers could solve only one problem at a time. The vacuum tube consumed a lot of power. The vacuum tube was developed by Lee De Forest in 1908.
Features of First Generation Computers
1. They used valves or vacuum tubes as their
main electronic component and magnetic
drums for memory.
2. They were large in size, slow in processing
and had less storage capacity.
3. They consumed lots of electricity and
produced lots of heat.
4. Input was based on punched cards and
paper tape, and output was displayed on
printouts.
5. First generation computers could solve only one problem at a time.
6. Their computing capabilities were limited.
7. They were not so accurate and reliable.
8. They used machine level language for
programming.
9. They were very expensive.
Example: ENIAC, UNIVAC, IBM 650, etc. (Research on them.)
1st Generation (1940s-1950s): Vacuum Tubes
• Key characteristics: Used vacuum tubes for electronic components,
large size, high power consumption, limited storage capacity.
• Examples: ENIAC, UNIVAC
II. Second generation: (1955 – 1964) The second-generation computer used transistors for CPU components, ferrite cores for main memory, and magnetic disks for secondary memory. Transistors simply acted as a light switch, allowing the electronic circuits to either open or close. They used Assembly Language and high-level languages such as FORTRAN (1956), ALGOL (1960) and COBOL (1960 – 1961). An I/O processor was included to control I/O operations.
Transistors are smaller
than Vacuum tubes and
have higher operating
speed. They have no
filament and require no
heating. Manufacturing
cost was also very low.
Thus the size of the
computer got reduced
considerably.
Features of Second Generation Computers
1. Transistors were used instead of Vacuum
Tube.
2. Processing speed was faster than First
Generation Computers (Micro Second)
3. They used punched cards for input and
printouts for output.
4. The memory moved from a magnetic
drum to magnetic core technology.
5. Smaller in size (51 square feet).
6. The input and output devices were faster.

Example: IBM 1400 and 7000 Series, Control Data 3600, etc.
2nd Generation (1950s-1960s):
Transistors
• Key characteristics: Replaced
vacuum tubes with transistors,
smaller size, lower power
consumption, improved
reliability, and increased
storage capacity.
• Examples: IBM 1401, DEC PDP-1
III. Third generation: (1964 – 1975) was a
major breakthrough in computing research
with the advent of the IC (Integrated Circuit)
where hundreds of transistors could be
integrated onto a tiny silicon chip. So it is
quite obvious that the size of the computer
got further reduced. Some of the computers
developed during this period were IBM-360,
ICL-1900, IBM-370, and VAX-750.
Higher-level languages such as BASIC (Beginners All-purpose Symbolic Instruction Code) were developed during this period. Computers of this generation were small in size and low in cost, with large memory and very high processing speed.
Features of Third Generation Computers
1. They used Integrated Circuit (IC) chips in place of transistors.
2. Semiconductor memory devices were used.
3. The size was greatly reduced, the processing speed was high, and they were more accurate and reliable.
4. Keyboards and monitors were used instead of punched cards and printouts.
5. The computers were interfaced with an operating system, which allowed them to solve many problems at a time.
6. Minicomputers were introduced in this generation.
7. They used high-level languages for programming.

Example: IBM 360, IBM 370, etc.
3rd Generation (1960s-1970s):
Integrated Circuits
• Key characteristics: Introduced
integrated circuits (ICs), significantly
reducing size and cost, improving
performance and reliability.
• Examples: IBM System/360, DEC
PDP-11
IV. Fourth generation: (1975 – 1990)
An IC (Integrated Circuit) containing about 100 components is called LSI (Large Scale Integration), and one which has more than 1000 such components is called VLSI (Very Large Scale Integration). This generation used large-scale integrated circuits (LSICs) built on a single silicon chip, called microprocessors. Due to the development of the microprocessor it became possible to place a computer's entire central processing unit (CPU) on a single chip. These computers are called microcomputers. Later, very large-scale integrated circuits (VLSICs) replaced LSICs. Thus the computer which occupied a very large room in earlier days can now be placed on a table.
The personal computer (PC) that you see today is a fourth generation computer. Main memory used fast semiconductor chips of up to 4 Mbit capacity. Hard disks were used as secondary memory. Keyboards, dot matrix printers, etc. were developed. Operating systems such as MS-DOS, UNIX and Apple's Macintosh were available. Object-oriented languages such as C++ were developed.
Features of Fourth Generation Computers

1. They used the microprocessor (VLSI) as their main switching element.
2. They are also called microcomputers or personal computers.
3. Their size varies from desktop to laptop or palmtop.
4. They have a very high speed of processing; they are highly accurate, reliable, diligent and versatile.
5. They have very large storage capacity.

Example: IBM PC, Apple Macintosh, etc.
4th Generation (1970s-1980s):
Microprocessors
• Key characteristics: Developed
microprocessors, single chips containing
entire CPUs, leading to the rise of personal
computers.
• Examples: Intel 4004, Apple II, IBM PC
V. Fifth generation: (1991 – till date)
Fifth generation computers use ULSI (Ultra-Large Scale Integration) chips, in which millions of transistors are placed on a single IC. 64-bit microprocessors have been developed during this period. Data-flow and EPIC architectures for these processors have been developed. Both RISC and CISC designs are used in modern processors. Memory chips and flash memory up to 1 GB, hard disks up to 600 GB, and optical disks up to 50 GB have been developed. Fifth generation digital computers are artificial intelligence systems.

Fifth generation computers also focus on connectivity: the method of connecting computers, known as networking. Fifth generation computing devices, based on artificial intelligence, are still in development, though some applications, such as voice recognition, are being used today.
The use of parallel processing
and superconductors is
helping to make artificial
intelligence a reality.
Quantum computation and
molecular and
nanotechnology will radically
change the face of
computers in years to come.
The goal of fifth-generation
computing is to develop
devices that respond to
natural language input and
are capable of learning and
self-organization.
THANK YOU.
