
Summary notes for JSS 3

Computer Studies
Week 1-12

Technologies of Different Information Ages

What is Technology?
The term 'technology' is very broad, and everyone has their own way of defining and understanding it.

In this chapter, we shall describe technology as the use of materials, tools, techniques, art, craft, or skills to make life easier
or more pleasant and work more productive. For example, if someone skillfully puts some tools and materials together so that
communication becomes possible over a distance, that simply means that he has invented a simple technology for
communication.

What is Information Technology?


Information Technology can therefore be described as any technology developed/invented that helps to produce,
manipulate, store and communicate information. This is essential for the development and progress of any country.
From the earliest ages to the present day, several technologies have been developed to help in the development of the various
ages. They are as follows:

8 different information ages in the field of technology


1. Stone age
2. Copper age
3. Bronze age (the Copper, Bronze and Iron Ages are sometimes grouped together as the Metal Age)
4. Iron age
5. The middle ages
6. The industrial age
7. Electronic age
8. Information age

THE STONE AGE


This was the prehistoric period during which stone was widely used to make implements with a sharp edge, a point or a
percussion surface. Stones were carved and shaped for different purposes, one of which was to provide information. In those
days, stones were carved as symbols on walls and other places to provide information.

A variety of stone tools

THE COPPER AGE


The Chalcolithic period, or Copper Age, was an era of transition between the stone tool-using farmers of the Neolithic and
the metal-obsessed civilizations of the Bronze Age. The Copper Age was really a phenomenon of the eastern Mediterranean
regions, and occurred from roughly 3500 to 2300 BCE.
THE BRONZE AGE
The Bronze Age was a period of human history after the Stone Age and before the Iron Age (roughly 3300 BC to 1200 BC,
varying by region), when bronze was used widely to make tools, weapons, and other implements.

The Bronze Age succeeded the Copper Age.

THE IRON AGE


The Iron Age was a time in early human history when people began to use tools and weapons made of iron. The Iron Age
started and ended at different times in different places. In this age, tools were mainly made of iron and steel. Tools like hoes
and cutlasses were made for agriculture and for war. For the first time villages were fortified, warfare was conducted on
horseback and in horse-drawn chariots, and alphabetic writing based on the Phoenician script became widespread. This
period witnessed many changes in society, including different agricultural practices, religious beliefs and artistic styles.

THE MIDDLE AGES


This age experienced major technological advances, which included the manufacture of gunpowder, the mechanical clock,
measuring instruments like the barometer and thermometer, building techniques and the gearing system. Horses were also
used more efficiently during this period. Feather or quill pens were important writing tools in this age (used by craftsmen and
artists to create beautiful calligraphy and manuscripts).

Mechanical clock

A Barometer
THE INDUSTRIAL AGE
The Industrial Revolution, which took place from the 18th to the 19th centuries, was a period during which rural societies in
Europe and America became industrial and urban. Prior to the Industrial Revolution, which began in Britain in the late
1700s, manufacturing was often done in people's homes, using hand tools or basic
machines. Industrialization marked a shift to powered, special-purpose machinery,
factories and mass production. The iron and textile industries, along with the
development of the steam engine, played central roles in the Industrial Revolution,
which also saw improved systems of transportation, communication and banking.
While industrialization brought about an increased volume and variety of
manufactured goods and an improved standard of living for some, it also resulted
in often grim employment and living conditions for the poor and working classes.
This period replaced an economy based on manual labour with one dominated by
industry and machine manufacture. The development of metal machine tools further boosted the manufacture of production
machinery for other industries.

ELECTRONIC AGE
The Information Age (also known as the Computer Age, Digital Age, or New Media Age) is a period in human history
characterized by the shift from the traditional industry that the Industrial Revolution brought through industrialization to an
economy based on information and computerization. The onset of the Information Age is associated with the Digital
Revolution, just as the Industrial Revolution marked the onset of the Industrial Age.

We are presently in the Electronic Age, in which modern technologies are used. In this age, a vast amount of electronic
equipment is in use, including televisions, radios, video cameras and computers. It covers the period from the advent of the
personal computer to the development of the internet.

The computer is now acknowledged as the most important part of human existence. It is one of the tools needed for success
and progress today.
HISTORY AND DEVELOPMENT OF COMPUTERS

Definition of Early Counting Devices: These are the devices used in the early ages for counting and performing simple
arithmetic calculations. They include fingers and toes, pebbles and grains, cowries, sticks, tallies, stones, writing on the wall,
etc.

Early counting devices can also be defined as those devices used to perform arithmetic operations before the advent of
modern civilization.

Some of them, such as cowries, were also used as legal tender for the purchase of goods in Africa, in addition to other counting needs.

People learned how to count well back in the Stone Age. At this early period, counting devices used were fingers and toes,
stones, wooden sticks, tallies, pebbles and grains, cowries, and notch sticks.

These devices were used in counting, and in the performance of simple arithmetic calculations such as addition, subtraction,
and multiplication.
Even now, some of these early counting devices are still in use in some areas. However, these simple ways of counting were
difficult to use for large counts because of their awkward nature. The methods were used until the invention of the abacus.

EXAMPLES OF EARLY COUNTING DEVICES

1. Fingers and toes – These were used in the early days to keep count of the days. They were also used for trading
purposes. The feet were used for the measurement of land.

2. Pebbles and grains – Pebbles are small stones, and they were used for counting. However, the use of stones in counting is
very awkward. Grains like rice, beans and maize were also used for counting in the early days.

3. Cowries – These are the shells of a brightly coloured sea mollusc; each has a glossy, patterned, domed shell with a long,
narrow opening. They were used as money and for other counting needs.

4. Stones – People made calculations by moving stones. To add numbers, stones were added; to subtract, stones were
taken away.

5. Sticks – These include the canes, clubs, and shaped woods that were used for measuring land areas, and for other
counting and measuring needs.
6. Writing on the wall – At an early age, man learned to use objects like charcoal, mud, limestone, and chalk to write strokes
on the wall for counting. This practice led to the development of writing in figures and letters.
Problems of early counting devices

1. They cannot be used to count large numbers


2. They are limited in scope and cannot go beyond a particular number
3. There is a problem of non-accuracy when used to count large numbers
4. Using them to count large numbers could bring confusion as counting progresses
Mechanical counting and calculating devices are devices that involve the use of physical force to operate them.

Abacus

When trading between countries became important, people needed more sophisticated devices. The Abacus device was
invented to replace the traditional method of counting.

The abacus device is an instrument used for counting as far back as 500 B.C. to make calculations easier, and to suit the
various number systems.

In the beginning, the abacus was just a board with stones or sticks. On the surface of the abacus, there were parallel notches
or grooves.

People made calculations by moving stones, sticks or bones. To add numbers, stones were added; to subtract, stones were
taken away. For multiplication, double summing was done; for division, double subtraction was performed.

There are different types of abacus, including the Chinese abacus, called the Suanpan, invented in the 6th century; the
Roman abacus, named Calculi or Abaculi; and the Japanese abacus, called the Soroban, which came into use in the 16th and
17th centuries.
In China, beads replaced the stones and were threaded onto wires or strings.
The Roman abacus was made of bronze, stone, ebony, or coloured glass.

Astrolabes

Astrolabes are devices that can be used to calculate the location of celestial objects, among other things. Some can be very
complex; the Antikythera mechanism (dated about 100 BCE) can be viewed as an extreme form of an astrolabe, although its
construction is much closer to that of many early mechanical computers.

Looking for elements of a computer in the astrolabe, we again see the input, output, and memory devices combined.

An astrolabe consists of several disks (or spheres, in some versions), one or more pointers, graduations or scales engraved
around the perimeters and elsewhere, and tables of various sorts. Many included sights. The construction allowed one to
move the pieces around to predict or check the positions of the sun, stars, and other things in the sky. Many versions
allowed several different upper disks to be traded in and out with the same base, allowing the same astrolabe to be used at
different latitudes.

Again, programmability is in the physical construction of the astrolabe. Exchangeable parts allow a certain amount of what
we now call re-programmability.

Clocks

I'm afraid someone will call foul when I call clocks computers, but let's be real here.

Maybe they should be classed separately, but they also share the input, processing, output model. You set the clock as one
input, then you wind it, install the battery, plug it in, or fill the water or sand tank, as another. Then it free runs to
continuously calculate the current time. Output, of course, is the face (usually). Alarm clocks have additional inputs, and
recording clocks have additional outputs. There is memory (as in read-only) in the cogs and circuits which keep the output
time accurate and readable in the 12:60:60 base time numbering system we traditionally use.

Slide Rule

In 1620, William Oughtred developed a counting device called the slide rule. This invention was made possible by the
invention of logarithms and Napier's bones.
This device makes use of a cursor, which is moved up and down various scales to perform multiplication and division using
the principles of logarithms. Thus, the device was an early forerunner of today's pocket calculator.
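The slide rule works because multiplying two numbers is the same as adding their logarithms: log(a × b) = log(a) + log(b). The short sketch below (in Python, with made-up example numbers) simply checks that idea; it illustrates the principle, not the slide rule itself.

```python
import math

# A slide rule adds lengths that are proportional to logarithms:
# log10(a * b) = log10(a) + log10(b), so adding log scales multiplies numbers.
a, b = 25, 4
log_sum = math.log10(a) + math.log10(b)   # what the slide rule "adds"
product = 10 ** log_sum                   # reading the answer back off the scale
print(product)                            # approximately 100.0, i.e. 25 * 4
```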

Blaise Pascal Machine

In 1642 Blaise Pascal invented the first calculating machine when he was 19 years old. This machine was developed to assist
his father’s work as a government auditor of accounts.

The machine consisted of cogged wheels, gears, and dials. Each wheel was divided into ten sections, representing the digits,
and the machine allowed a carry from one wheel to the next.

This principle is still in use today. Odometers in cars use Pascal’s wheel principle to keep track of the number of kilometers
traveled.

The machine had input, processing, and output devices. The Pascal machine was only capable of addition.
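A minimal sketch (in Python, with an invented add_one helper) of the carry principle described above: each wheel holds a digit from 0 to 9, and when it rolls past 9 it passes a carry to the wheel on its left, exactly as in a car odometer.

```python
def add_one(wheels):
    """Advance an odometer-style counter by one.
    wheels[0] is the rightmost (units) wheel; each wheel holds a digit 0-9."""
    i = 0
    while i < len(wheels):
        wheels[i] += 1
        if wheels[i] < 10:      # no roll-over, so we are done
            return wheels
        wheels[i] = 0           # roll over and carry to the next wheel
        i += 1
    return wheels               # the counter wrapped past its capacity

counter = [9, 9, 0]             # the wheels read 099
print(add_one(counter))         # [0, 0, 1] -> the wheels now read 100
```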
Napier’s bones

In 1617 John Napier, a Scottish mathematician, invented Napier’s bones. These were rods on which numbers were marked.

These numbers enable the user to easily work out the answers to a restricted set of multiplication tables. The numbers to be
multiplied are positioned on the top row and the left column.

The answer is obtained at the intersection of these two. Napier later invented tables of logarithms, which enabled
multiplication and division to be carried out very simply by addition and subtraction.

Gottfried Leibniz Machine

A famous German mathematician, Gottfried Von Leibniz made the most significant contribution to the mechanical calculator
in 1671 when he invented the Leibniz calculating machine.

The machine could perform the 4 arithmetic operations. It used wheels with teeth on them, termed "stepped
wheels", which allowed long multiplication and division to be done.

The process of multiplication involved repeated addition. Unfortunately, Leibniz’s machine was unreliable, as were most of
the early calculators.

Because of this problem, mechanical calculators were not popular for many years, and it was not until the late nineteenth
century that they became widely used in business.
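A tiny hedged sketch (in Python) of the idea the notes attribute to the Leibniz machine: multiplication carried out as repeated addition. The function name and example values are chosen only for illustration.

```python
def multiply_by_repeated_addition(a, b):
    """Multiply two whole numbers by adding 'a' to a running total 'b' times."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_by_repeated_addition(7, 6))   # 42, the same as 7 * 6
```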

Joseph Jacquard Loom

The Jacquard loom is a mechanical loom, invented by Joseph Marie Jacquard in 1801. The loom simplifies the process of
manufacturing textiles with complex patterns such as brocade, damask, and matelasse.

In 1725, the French weaver Basile Bouchon constructed a weaving loom that could be controlled by holes in a roll of paper. The
holes allowed some needles in the loom to be engaged, while others were held back.

The loom was, therefore “programmed” by the placement of the holes in the roll of paper to produce a particular pattern.
However, in Bouchon’s loom, someone had to be employed to control the needles and decide which would be used for each
line of weave in the fabric.

But Joseph-Marie Jacquard improved upon Bouchon’s design by developing a loom that used a punched card to control each
line of the weave.  Over 1000 needles could be controlled at one time, and very intricate designs were easily created.
Electro-Mechanical Counting Devices
Electro-mechanical counting devices are devices that involve the use of both physical force and electric power to operate them.
They can also be defined as counting devices that can be operated both electrically and mechanically.

Charles Babbage Analytical machine

Charles Babbage was a mathematics professor at Cambridge University in England. After several unsuccessful attempts
at building a mechanical calculating machine, Babbage developed the analytical engine in 1834.

Babbage’s designs were similar to the general design of modern-day computers, including a central arithmetic unit for
calculating, called a mill, an area for retaining numbers, called a store, and sophisticated methods for input and output.

While working on his analytical engine, Babbage began a lengthy correspondence with poet Lord Byron’s daughter, Ada
Augusta, Countess of Lovelace. Lady Lovelace became fascinated with Babbage’s ideas, and in her analysis of his analytical
engine, she developed the essential ideas of programming, such as “branching” to perform decisions and repetitions.

Because of her work in this area, she is considered to be the first computer programmer. The programming language “Ada”
is named after her.

Philip Emeagwali

Philip Emeagwali is a Nigerian-born engineer and computer scientist/geologist. He is called the Bill Gates of Africa. He
invented the world's fastest computer. He was one of two winners of the 1989 Gordon Bell Prize, a prize from the IEEE,
for his use of a Connection Machine supercomputer to help analyze petroleum fields.

He programmed the Connection Machine to compute a world record 3.1 billion calculations per second using 65,536
processors to simulate oil reservoirs. He has submitted over 41 inventions to the U.S. Patent and Trademark Office.

Gottfried Leibniz machine: In 1671 the German mathematician-philosopher Gottfried Wilhelm von Leibniz designed a
calculating machine called the Step Reckoner. (It was first built in 1673.) The Step Reckoner expanded on Pascal's ideas and
did multiplication by repeated addition and shifting.

Leibniz was a strong advocate of the binary system. Binary numbers are ideal for machines because they require only two
digits, which can easily be represented by the on and off states of a switch. When computers became electronic, the binary
system was particularly appropriate because an electrical circuit is either on or off. This meant that on could represent true,
off could represent false, and the flow of current would directly represent the flow of logic.
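A short illustration (in Python, with an arbitrary example number) of why two digits map so neatly onto switches: every binary digit is either 1 (on) or 0 (off).

```python
# Each binary digit (bit) matches one switch: 1 = on, 0 = off.
number = 13
bits = bin(number)        # '0b1101'
print(bits)

# Reading the bits back as switch states, from the 2^0 place upwards:
for position, bit in enumerate(reversed(bits[2:])):
    state = "on" if bit == "1" else "off"
    print(f"switch for 2^{position} is {state}")
```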
Schickard's Calculating Clock: This machine preceded the Pascaline and Gottfried Leibniz's Stepped Reckoner by about 20
years. It was called the "Calculating Clock" because it used the gears and cogs used in clocks. The Calculating Clock used a
direct gear drive and rotating wheels to make calculations. As one wheel made a complete turn, the wheel to its left rotated
one-tenth of a turn. Schickard reportedly constructed one prototype of the machine and then tried constructing a Calculating
Clock for the famous mathematician Johannes Kepler, but a fire destroyed the machine destined for Kepler around 1624. The
location of the original prototype is unknown. It could add and subtract up to six-digit numbers, and it would ring a bell
whenever there was an overflow in its capacity. For more complex problems, Napier's bones were set on it. It could also
calculate astronomical tables.

Joseph Marie Jacquard loom: Joseph Marie Charles, nicknamed Jacquard (French: [ʒakaʁ]; 7 July 1752 – 7 August 1834), was a
French weaver and merchant. He played an important role in the development of the earliest programmable loom (the
"Jacquard loom"), which in turn played an important role in the development of other programmable machines, such as the
punched-card machines later used by IBM in developing the modern-day computer.

Images of Mechanical and Electro-Mechanical Counting Devices


Electronic counting devices and modern computers are those counting devices that are operated electronically.

Calculators: An electronic calculator is typically a portable electronic device used to perform calculations, ranging from
basic arithmetic to complex mathematics.
The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s,
especially after the Intel 4004, the first microprocessor, was developed by Intel for the Japanese calculator
company Busicom. They later came into common use within the petroleum industry (oil and gas).
Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in
printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the
end of that decade, prices had dropped to the point where a basic calculator was affordable to most people, and they became
common in schools.
Herman Hollerith Punch Cards

The rest of the nineteenth century witnessed the design of more complicated mechanical devices. By 1890, an American,
Dr. Herman Hollerith, had made an outstanding and important invention: punched cards.

The machine was used to process information obtained in the census of the population carried out in the United States in
1890. With this machine, he was able to achieve in three years what would have taken seven years to do manually.
Hollerith used Jacquard’s punched-card idea to feed personal statistics into his machine.  Holes in the punched cards stood
for a person’s age, sex, state, and other similar information.  There was one card for each person.

 As each card was fed into the machine, a set of metal pins were brought down on the card.   The pins passed through any
holes punched in the card, which completed an electrical circuit that turned a counter dial.

To sell the machine, Hollerith formed his own company in 1896, which later merged with several other companies to form the
Computing-Tabulating-Recording Company (CTR) in 1911. CTR later became International Business Machines (IBM).

John Von Neumann Machine

In 1945, the Hungarian-born American mathematician John von Neumann undertook a study of computation. In this study,
he demonstrated that a computer could have a simple, fixed structure, yet be able to execute any kind of computation if
given properly programmed control, without the need for hardware modification.
Von Neumann contributed a new understanding of how practical fast computers should be organized and built; these ideas,
often referred to as the stored-program technique, became fundamental for future generations of high-speed digital
computers and were universally adopted.

The principal feature of a von Neumann machine is that the program and any data are both stored together, usually in a
slow-to-access storage medium such as a hard disk, and transferred as required to a faster, and more volatile storage
medium (RAM) for execution or processing by a central processing unit (CPU).

Since this is practically how all present-day computers work, Neumann is termed the father of the modern computer.
The term "von Neumann architecture" is rarely used now, but it was common parlance in the computing profession
through to the early 1970s.

Von Neuman Architecture


Before Neumann’s idea, programs were viewed as essentially part of the machine, and hence different from the data the
machine operated on. A common approach was to input the program by some physical means, such as wiring a plugboard,
and then feeding in the data for the program to act upon.
As a result of Neumann’s discovery, computing and programming became faster, more flexible, and more efficient, with the
instructions in subroutines performing far more computational work.
In 1945, von Neumann proposed the stored program concept in his report on the EDVAC. He did it together with the computer
pioneers J. Presper Eckert, John Mauchly, Arthur Burks, and Herman Goldstine, who were working on plans for the EDVAC.

According to the original papers proposing the new architecture, a von Neumann computer has five parts: an arithmetic-logic
unit, a control unit, a memory, some form of input/output, and a bus that provides a data path between these parts. Such a
computer operates by performing the following sequence of steps:
1. Fetch the next instruction from memory at the address in the program counter.
2. Add the length of the instruction to the program counter.
3. Decode the instruction using the control unit and carry it out (execute it).
4. Go back to step 1.
Von Neumann computers have some drawbacks. In particular, they carry out instructions one after another, in a single linear
sequence, and they spend a lot of time moving data to and from the memory. This slows the computer. This problem is
called the von Neumann bottleneck.
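A minimal toy sketch (in Python) of the fetch-decode-execute cycle listed above. The three instruction names, the tuple memory layout and the single accumulator are invented purely for illustration; a real von Neumann machine stores both instructions and data as binary numbers in the same memory.

```python
# Program and data share one memory; the CPU repeats:
# fetch -> advance the program counter -> decode and execute.
memory = [
    ("LOAD", 5),     # put the value 5 into the accumulator
    ("ADD", 3),      # add 3 to the accumulator
    ("HALT", None),  # stop the machine
]

program_counter = 0
accumulator = 0

while True:
    instruction, operand = memory[program_counter]  # 1. fetch the next instruction
    program_counter += 1                            # 2. advance the program counter
    if instruction == "LOAD":                       # 3. decode and execute it
        accumulator = operand
    elif instruction == "ADD":
        accumulator += operand
    elif instruction == "HALT":
        break                                       # 4. otherwise, go back to step 1

print(accumulator)   # 8
```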
MODERN MACHINES

1. The EDVAC computer, when it was finally constructed in 1952, followed von Neumann's design. But the first von
Neumann computer to be constructed and operated was the Manchester Mark I.

Manchester Mark 1
This machine was designed and built at Manchester University in England. It ran its first program in 1948. The computer had
a 96-word memory and executed an instruction in 1.2 milliseconds. Today, the computer you are using is born out of von
Neumann’s idea.
2. EDSAC: The Electronic Delay Storage Automatic Calculator was an early British computer. Inspired by John von Neumann's
seminal First Draft of a Report on the EDVAC, the machine was constructed by Maurice Wilkes and his team at the University
of Cambridge Mathematical Laboratory in England.

3. UNIVAC: The UNIVAC I was the first general-purpose electronic digital computer design for business applications
produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC.
4. ENIAC was the first electronic general-purpose digital computer. It was Turing-complete, and able to solve "a large class of
numerical problems" through reprogramming.

COMPUTER GENERATIONS

The generations of computers describe the different stages or periods in which different categories of computers were
invented, and the technologies and features these computers exhibited. Initially, the term generation was used to distinguish
between varying hardware technologies. But nowadays, a generation includes both hardware and software, which together
make up an entire computer system.

Following are the main five generations of computers:

S/NO   GENERATION           PERIOD                 TECHNOLOGY
1      First Generation     1945-1955              Vacuum tube based
2      Second Generation    1956-1963              Transistor based
3      Third Generation     1964-1971              Integrated Circuit based
4      Fourth Generation    1971-1980              VLSI microprocessor based
5      Fifth Generation     1980-present/beyond    ULSI microprocessor based (Artificial Intelligence)

FIRST GENERATION

The first generation of computers used vacuum tubes as the basic components for memory and for the circuitry of the
CPU (Central Processing Unit).

In this generation, mainly batch processing operating systems were used. Punched cards, paper tape and magnetic tape
were used as input and output devices.

Computers of this generation were programmed in machine language.

FEATURES OF THE 1st GENERATION OF COMPUTER


The main features of First Generation computers are:

1. They used Vacuum tube technology,

2. They were very costly and unreliable

3. Generate lots of heat

4. Huge in size

5. Non-portable

6. Consumed a lot of electricity

7. Slow input and output devices

8. Used machine language (binary codes)

Examples of computers in this generation were:

1. ENIAC - Electronic Numerical Integrator And Computer

2. EDVAC - Electronic Discrete Variable Automatic Computer

3. UNIVAC - UNIVersal Automatic Computer

4. IBM-701

5. IBM-650

SECOND GENERATION

The invention of transistors marked the beginning of the second generation. Transistors were cheaper and took
the place of the vacuum tubes of the first generation of computers. In this generation, the memory of the computer became
larger, with magnetic tape and magnetic disks used as storage devices.

In this generation, assembly language and high-level programming languages like FORTRAN and COBOL were used.
They used batch processing and multiprogramming operating systems.

Vacuum tubes and transistors

FEATURES OF THE 2nd GENERATION OF COMPUTER

The main features of Second Generation are:

1. Use of transistors

2. Reliable as compared to First generation computers

3. Smaller size as compared to First generation computers

4. Generate less heat as compared to First generation computers

5. Consumed less electricity as compared to First generation computers

6. Faster than first generation computers

7. Still very costly

8. Support assembly and high level languages

Examples of computers in this generation were:

1. IBM 1620

2. IBM 7094

3. CDC 1604

4. CDC 3600

5. UNIVAC 1108

THIRD GENERATION
The period of the third generation was 1964-1971.

The third generation of computers is marked by the use of Integrated Circuits (ICs) in place of transistors. The IC was
invented by Jack Kilby in 1958. This development made computers smaller in size, more reliable and affordable for individuals.

This generation of computers used high-level languages (FORTRAN II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68,
etc.).

FEATURES OF THE 3rd GENERATION OF COMPUTER

The main features of Third Generation are:

1. IC used

2. More reliable

3. Smaller size

4. Generate less heat

5. Faster

6. Less maintenance

7. Consumed less electricity

8. Support high-level language

Examples of computers in this generation were:

1. IBM-360 series

2. Honeywell-200 series

3. PDP (Programmed Data Processor)

4. IBM-370

5. UNIVAC 1100

FOURTH GENERATION

The fourth generation of computers is marked by the use of Very Large Scale Integrated (VLSI) circuits. A VLSI circuit is
a microprocessor chip containing about 5,000 transistors and other circuit elements on a single chip, which made it
possible to build the microcomputers of the fourth generation.
Computers of this generation were also equipped with ROM (Read Only Memory), which stores programs that
cannot be changed.

Fourth generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to
the personal computer (PC) revolution.

The main features of Fourth Generation are:

1. VLSI microprocessor technology used

2. Very cheap

3. Portable and reliable

4. Introduction of Personal Computers

5. They generated less heat

6. Very small size

7. The internet was introduced

8. Development of computer networks

9. Computers became easily available

10. They used high level languages.

Examples of computers in this generation were:

1. IBM 5100PC

2. INTEL 8080, 80286,80386, 80486

3. Pentium I, II, III, IV.

FIFTH GENERATION

In the fifth generation, the VLSI technology became ULSI (Ultra Large Scale Integrated) microprocessors technology.

This generation is based on multi-processing hardware and AI (Artificial Intelligence) software.

Artificial Intelligence is an emerging branch of computer science which aims to make computers think like human beings. This
approach is referred to as KNOWLEDGE INFORMATION PROCESSING SYSTEM.

High-level languages like C, C++, Java, .NET, etc., are used in this generation.

The main features of Fifth Generation are:


1. ULSI microprocessors technology

2. Development of artificial intelligence

3. Advancement in information Processing

4. Advancement in computer technology

5. More user friendly interfaces with multimedia features

6. Availability of very powerful and compact computers at cheaper rates

Examples of computers in this generation are:

1. Desktop

2. Laptop

3. NoteBook

4. UltraBook

5. ChromeBook

Data and information

What is data?

Data can be defined as a collection of raw facts represented in the form of numbers, letters or words about an event, activity
or something.

Data are also facts and figures that can be processed by a computer.

What is information?

Information is organized or classified data which has some meaningful value to the receiver.
Types of data

Data consists of various types, namely (a short illustrative sketch in code follows this list):

1. Numeric data – this is data represented in the form of numbers or figures. E.g. 246, 20, 900

2. Alphabetic data (labels) – this consists of letters, names and places, e.g. Port Harcourt, Adeolu, and the letters A–Z.
Labels are also called strings.

3. Alpha-numeric data – this is a combination of numbers and letters, e.g. a school address: Jesuit Memorial College,
P.M.B. 18095, Port Harcourt.

4. Audio data – these are also known as voice data. They are usually sent into a computer with a microphone.

5. Graphic data – these are also called video or visual data. They are usually multimedia types such as pictures, images,
diagrams, etc.
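The sketch below (in Python, with invented example values) shows how each type of data above might appear inside a program; audio and graphic data are normally kept in sound and image files.

```python
# Illustrative values only, matching the data types listed above.
numeric_data = 246                                            # numbers or figures
alphabetic_data = "Port Harcourt"                             # letters and names (a string/label)
alphanumeric_data = "Jesuit Memorial College, P.M.B. 18095"   # letters mixed with numbers
audio_data_file = "recording.wav"                             # voice data is stored in sound files
graphic_data_file = "diagram.png"                             # pictures and diagrams are stored in image files

print(type(numeric_data), type(alphabetic_data))              # <class 'int'> <class 'str'>
```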

Sources of data

Sources of data refer to how data is obtained. The source is determined by the nature of the data, the time available and the
cost of obtaining the data.

Data can be obtained by:

1. Interviewing

2. Observing

3. Document analysis

4. Survey

Qualities of good information

For information to be meaningful, it must possess the following characteristics:

1. Timely

2. Accurate
3. Meaningful

4. Relevant

5. Comprehensive

6. Economical

7. Suitable

8. Reliable

Classification of information

Information can be classified based on the following;

1. Form in which the information exists

2. Time of occurrence

3. Frequency of occurrence

Based on the form in which the information exists

1. Written information – this is information written down on a medium, usually paper, e.g. magazines, books and
newspapers.

2. Oral information – information passed by talking or speaking.

3. Visual information – information passed through pictures, images, videos and charts.

4. Sensory information – information received by hearing, feeling, smelling or touching.

Based on the time of occurrence

1. Historical information – related to past events

2. Present information – related to present events such as current issues.

3. Future information – relating to a future activity such as weather forecast.

Based on frequency of occurrence

1. Continuous information

2. Hourly information

3. Daily information

4. Monthly information

5. Termly information

6. Annual information

How data is processed into information

Data is processed into information through a combination of steps called data processing steps (a short illustrative sketch in code follows the steps below).

DATA PROCESSING STEPS

1. Data origination: this means putting together the original data that will be entered into the information system. This is also
called data collection.

2. Data preparation: once the data have been checked, the source documents are sorted. Sorting means arranging the
data in a particular order. The order could be alphabetical or numerical.

3. Data input: this is entering the sorted or prepared data into the processor. This is usually done by computer operators.

4. Data processing: calculations are done at this stage. This is the stage at which arithmetical or logical operations are
performed on the data. Here, a set of new data is generated from the calculations.

5. Information output: this is the stage where documents are printed and reports are written.
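A small illustrative sketch (in Python) of the five steps applied to a made-up class list; the names and scores are invented for the example.

```python
# 1. Data origination (collection): the raw facts gathered on source documents.
raw_scores = [("Ada", 68), ("Chidi", 74), ("Bola", 59)]

# 2. Data preparation: sorting the records into alphabetical order.
prepared = sorted(raw_scores)

# 3. Data input: entering the sorted data into the processor.
entered = list(prepared)

# 4. Data processing: performing a calculation on the data.
average = sum(score for _, score in entered) / len(entered)

# 5. Information output: producing a report for the user.
print("Class average:", average)
```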

[Diagram: data processing steps – data origination → data preparation → data input → data processing → information output, with storage.]

Information processing and the need for computers

Information needs to be stored in a meaningful and efficient manner in order to make it available whenever it is needed.
Computers are needed for information processing for the following reasons:

1. Increased accuracy: computers do not make mistakes, provided there is no mistake in the input. Errors can only
occur if there is an error in the data entered into the system; the computer works on the principle of GIGO (Garbage In,
Garbage Out). Therefore, results obtained with an electronic data processing method are very reliable (see the short sketch
after this list).

2. Efficiency in storage facilities: the computer can process and store a very large volume of data within a very short
time. Examples are CD ROMs which can store the equivalent of a shelf of books in the library.

3. Speed: the computer processes data at a very fast speed that can never be matched using manual or mechanical
methods. This leads to fast results, thereby improving organisational or institutional efficiency.

4. Complexity: the computer is capable of handling complex data that could not have been handled manually.

5. Consistency: the computer is very consistent in its operation. It does not get tired like human beings.
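A short hedged sketch (in Python, with an invented read_age function) of the GIGO idea from point 1: if garbage is stopped at the input stage, the results that come out can be trusted.

```python
def read_age(text):
    """Reject garbage before it enters the system (Garbage In, Garbage Out)."""
    if not text.isdigit():
        raise ValueError("Age must be entered as a whole number")
    age = int(text)
    if not 0 < age < 130:
        raise ValueError("Age is outside a sensible range")
    return age

print(read_age("14"))     # accepted: 14
# read_age("fourteen")    # would be rejected, so garbage never gets processed
```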

Computer Ethics

Definitions:

• Ethics are the moral rules, standards or principles of behavior for deciding between right and wrong. Ethics are rules
and standards that relate to professional conduct of individuals.

• Computer ethics – these are the rules and standards guiding computer professionals and users in the use of the
computer system. They are the rules adopted and practiced by computer professionals.
Common computer ethics

Computer ethics can be categorized into 4 different types, namely:

Environmental ethics

This refers to environmental factors that affect the use of computers. It relates to the following;

1. Location of the computer room in a convenient place.

2. Ensuring appropriate ventilation by installing air conditioners.

3. Ensure that the computer room is neat and tidy.

4. Provision of a thermometer to help keep track of the temperature of the room.

Maintenance ethics

This relates to efforts to ensure a good working condition as well as improve the life span of the computer system. These
efforts include:

• Ensure continuous power supply by providing Uninterruptible Power System (UPS) to curb sudden power failure to
the computer system.

• Avoid overcrowding the computer room and also while using a computer.

• Ensure that the computer system is dust-free.

• Periodic check and preventive maintenance should be carried out.

Moral ethics

This relates to moral standards in the usage of the computer. They include:

• No fighting in the computer laboratory

• No playing with the computer system

• Students should adhere to instructions given by the teachers.

• Students should have a genuine purpose before gaining access to the computer laboratory.

Safety ethics

These are the rules that help to avert danger or harm while using a computer. They include the following:

• Provision of fire extinguishers to handle fire outbreak.

• Fire alarms should be provided to alert users of any fire outbreak

• Provision of voltage stabilisers to control the input voltage to computer systems.

• There should be proper management of cables and wires and also appropriate lighting to prevent unnecessary
accidents.

Responsible use of the computer

 These are the precautions taken by computer users to ensure maximum performance from the computer system.
The following are the ways a computer can be used responsibly;
 Avoid dropping liquid into the computer system.
 Use a dust cover to prevent the accumulation of dust into the sensitive parts of the computer system.
 Protect the system from power problems with the use of a voltage stabilizer, surge protectors and an Uninterruptible
Power Supply (UPS).
 Unplug the computer system from the power source when not in use.
 Obey the laid down rules and regulations in a computer laboratory.

Responsible use of the internet

The internet should be used responsibly as follows;

• Regular and frequent check of the e-mail box

• Prompt and polite response to emails.

• Avoid copying files indiscriminately from the internet to help prevent virus infections on storage devices.

• Do not give out personal information to anyone over the internet.

• Avoid visiting websites that promote occult and pornographic information.

Areas of misuse of computers and internet

• Invasion of other people’s privacy

• Online fraud

• Plagiarism

• Pornography

• Software piracy

• Cyber war

• Virus infection to computer systems

APPLICATIONS OF ICT
What is ICT?
Information and communication technology is the process of creating, acquiring, storing, processing and communicating
information with the use of computers and other information and communication technology devices.
ICT is a very important tool in our daily activities and it has impacted on how we live. The discoveries and inventions in
science and technology have improved the speed of communication. ICT is helping man to fulfil his needs with the use of
available tools.
Applications of ICT in Everyday Life
ICT has affected the following areas of human endeavor in day-to-day activities:
 Education
 Banking
 Industry
 Commerce
 Medicine
 Agriculture
 Transportation
 Entertainment
 Engineering
 Security
 In Education.
ICT has impacted on education in the following aspects;
1. E-learning: This is utilizing electronic technologies to access educational curriculum outside of a traditional
classroom. It involves delivering lessons/courses via the internet to somewhere other than the classroom where the
teacher is teaching.
2. E-library: It is also called digital library.
A digital or electronic library is a collection of documents in organized electronic form, available on the Internet or on
CD-ROM (compact-disk read-only memory) disks. Depending on the specific library, a user may be able to access
magazine articles, books, papers, images, sound files, and videos.
An Electronic Library System enables users to obtain open digitized data from anywhere in the world by online
access.
3. Research: This involves finding useful information on the internet. Both teachers and students use the internet to
search for information on a daily basis.
4. Television broadcast is also a communication medium used to educate students, farmers, sportsmen, etc.
5. Difficult experiments, advanced surgery for medical students, etc. can be viewed through the use of virtual reality
technology.
6. LCD projectors can be used for effective teaching/training.
7. Manpower problems and human mistakes can be avoided by the use of online examinations, Computer Based Tests
(CBT), etc.

Virtual Reality

Impact of ICT on Learners


 Motivates learners
 Learning process can be anywhere and anytime
 Students use interactive whiteboard in classroom

Impact of ICT on Teachers


Teacher has access to:
 Lesson plans
 Network of teachers
 Information resource

 In Banking
Banking in general has changed from the traditional banking which is manual to electronic Banking that enables 24-hour
access to banking services.
Electronic banking services are a range of banking and other services or facilities that use electronic equipment and include:
 Online banking.
 mobile banking
 SMS banking
 Use ATM and debit card services
 Electronic Fund Transfer services
 Point of sales banking (POS)
 E-statements
 In Industry
1. Robots are used in manufacturing to help to improve productivity, consistency (in terms of final finish) and to reduce
overall running costs. Robots generally make the factory a much safer environment for workers.
Before robots, manufacturing in factories was carried out by people. People are not very fast at making things and
they also sometimes make mistakes. The automobile manufacturing industry is a good example: these days cars are
designed, crash-tested, tested for functionality and photographed for glossy preview shots without a prototype even
being built – all thanks to IT.
2. Super computers are used in aerospace research.

Robots in Automobile Industries


 In Commerce
1. Electronic commerce (E-commerce): In the past, distribution of goods, buying and selling were done manually. But
now, they are done electronically/online.
Also, trading globally was slow, late and expensive. These days, global trading is comparatively fast and cheap.
2. Various means of Advertisement: Advertising was mostly done by word of mouth. Today, there are different and
interesting multimedia for advertising.

 In Medicine
1. Use of computerized medical equipment such as CT scanners, ultrasound machines, ECG machines, etc.
2. Consulting (channelling) doctors over the internet
3. Telemedicine
4. Conducting research
5. Educating the health workforce

 In Agriculture
1. Use of computerized agricultural equipment: computerized feeders for dairy cows, computerized milk processing
systems, computer-controlled greenhouses.
2. Using Internet and email to disseminate information
3. Managing and analyzing information of research and experiments.

 In Transportation
1. Using Air traffic control systems
2. Booking tickets over internet/online ticket reservation
3. Controlling traffic jam using computerized traffic controlling systems
4. Using GPS (Global Positioning System) in travelling.

 In Engineering
1. Using CAD (Computer Aided Design) for design and analyses of things like construction plan, vehicles and machines,
etc.
2. Using CAM (Computer Aided Manufacturing) to control the machines that turn designs into finished parts.
3. Using robotics in manufacturing plants.

 In Entertainment
1. Games
2. Movies
3. Music
4. Animation
5. Sports
6. Social

 In Security
1. Using computers to analyze crimes.
2. Using robotics to remove landmines.
3. Using computerized air defense system.

THE COMPUTER SYSTEM week 1

The computer system is a collection of component elements that include all functional parts of a computer and all peripheral
devices that work together to perform a task.
The components of a computer system
 Hardware
 Software
 Peopleware

HARDWARE COMPONENTS
Computer hardware is the collection of physical elements that constitutes a computer system. They are divided into 2 main
parts:
1. The system unit
2. Peripherals

 The system unit is the case that contains all the electronic components of any computer system. It houses various
hardware devices like the motherboard, hard drive, CD ROM drive, power supply unit, the processor (CPU), power
cables, cooling fan, data signal cables, internal speakers etc.

 The peripherals are mainly the input/output devices that are connected to the system unit and they perform various
functions. However most peripheral devices are essential elements of a complete and useful computer system.

Components of a Typical Personal Computer


The personal computer consists of the following basic components;
1. The Input unit – used to input commands and data. E.g.: mouse, keyboard, digital camera
2. The Output unit – used to display, print processed data (information). E.g. monitor, printer.
3. The Memory unit – used to store and retrieve data or information temporarily or permanently, e.g. RAM (Random
Access Memory) and ROM (Read Only Memory)
4. The Storage unit – used to store data/information. It can be classified into internal storage and external storage
5. The Processing unit – used for processing data into information. It consists of the Arithmetic and Logic Unit (ALU) and the
control unit.

Converting raw data to information

S/No.  Operation             Description
1      Take Input            The process of entering data and instructions into the computer system.
2      Store Data            Saving data and instructions so that they are available for processing as and when required.
3      Processing Data       Performing arithmetic and logical operations on data in order to convert them into useful information.
4      Output Information    The process of producing useful information or results for the user, such as a printed report or visual display.
5      Control the Workflow  Directs the manner and sequence in which all of the above operations are performed.
The Central Processing Unit

This is also referred to as the processor (also known as the ‘brain’ of the computer). It is a small chip with millions of
components fitted together. The CPU is the hardware within the computer system that carries out the instructions of a
computer program by performing the basic arithmetic, logical, control and input/output operations. It consists of the
following parts;
1. The Arithmetic and Logic Unit (ALU) –this is where all arithmetic and logical operations are performed.
2. The Control Unit –this unit controls the operations of all parts of the computer system.
3. The Memory Unit–this unit can store instructions, data and intermediate results. It is known as main memory or RAM and
its size affects speed and capability.
CPUs come in different speeds. Basically, the faster the CPU, the faster the system will perform.

SOFTWARE COMPONENTS
This is the non-physical part of the computer system. These are the written instructions, programs and codes that the
computer reads, understands and executes. It consists of 2 main parts;
 System software – a collection of programs designed to operate, control and extend the processing capabilities of the
computer, e.g. operating systems such as Windows
 Application software – products designed to satisfy a particular need or event, e.g. MS Word,
MS Excel, etc.

PEOPLE WARE
This is also referred to as computer professionals. This is the human factor that operates the computer system. The
computer system functions through a combination of three factors, namely: Hardware, Software and People ware.
Computers operate using a combination of hardware and software. However, without user interaction, most computers
would be useless machines. While peopleware can mean many different things, it always refers to the people who develop
computers or use computer systems.

COMPUTER SOFTWARE WEEK 2


Software is a program that enables a computer to perform a specific task. It acts as a communication link between the user
and the computer. It is the non-physical part of the computer system. They are the written instructions, programs and codes
that the computer reads, understands and executes.

Software consists of 2 main parts;

Software

System Software Application Software


TYPES OF SOFTWARE

1. System software – a collection of programs designed to manage, operate and control the processing capabilities of
the computer, e.g. operating systems such as Windows. It enables other software to run properly by interfacing with the
hardware and with other software.

System software can be classified into:


 Bootstrap loader - This is a program that loads the operating system (OS) and other programs of a computer
into the main memory.
 Operating system - Software that controls the execution of computer programs and also manages the computer
resources. It is a set of routines that governs the operation of a computer.
 Utility program - A program that performs a specific task related to the management of computer functions,
resources, or files. This includes password protection, memory management, virus protection, and file compression.
It is also a program that determines how a computer will communicate with a peripheral device. It is also known as
service program.
 Translator- A program that translates another program written in a high-level language into machine language
so that it can be executed.
 Executive - A program that controls the execution of other programs. It is a supervisory program.
 Device Driver – a program that creates communication between the system and every new device attached to the
computer system.

2. Application software -These are programs which enables a user to perform a task. They are products designed to
satisfy a particular need of a particular event. E.g. Word processor, spreadsheet packages etc.

Application Software

Application software can be classified into 2 main groups:

1. General Purpose Application or Users’ Objectives – These are off-the-shelf software that accomplish a broad
range of tasks as opposed to custom software which accomplishes tasks specific to user requirements. General purpose
applications are available in standalone versions or are bundled together to make up application suites.
Application suites such as MS Office, Apache Open Office, iWork, WPS Office, CorelDraw Graphics Suite, and Adobe Creative
Suite are bundles of applications with different functionality. They complement each other to make complete productive
packages for the office, school, and home.

A typical suite includes at least a word processor, presentation app, database app, and graphics app. Corel and Adobe suites,
however, favour graphics and video editing applications.

i. Word processing application software
ii. Programming application software
iii. Graphic application software
iv. Database management system software
v. Presentation application software
vi. Learning application software
vii. Architectural/engineering application software
viii. Entertainment application software
ix. Data processing application software

2. Custom made/Tailor-made/Bespoke software – Custom software is tailor-made to provide specific features and
tools. They perform specific requested functions and may contain borrowed features from off-the-shelf applications. Overall,
they are meant to maximize productivity and provide cordial interfaces for users while cutting out the excess features of
general purpose software.

Custom applications are tweaked to suit the changing demands of the client organization. Tweaks may include adaptations
to evolving business trends and removal of obsolete features.

Custom software can be customized to create:

 Security and client identification systems.


 Consumer application portals.
 Attendance rosters.
 Custom receipts and invoices.
 Stock management applications.
 Student enrollment, performance, and records tools.

Organizations and schools tend to favor custom applications because they work with multiple users and attend to multiple
clients.

The ownership rights of an application also remain with the client, giving him/her absolute authority to use or sell the
application.

An application can be customized to run on traditional computing setups or inside browsers. Popular examples of software
in this category include:

 School Management Information System (SMIS).


 Point of Sale (POS).
 Electronic registration software for schools.
OPERATING SYSTEM week 3 & 4

Operating system – usually referred to as OS, is a software that controls the execution of computer programs and also
manages the computer resources. It is a set of routines that governs the operation of a computer.
An operating system could also be referred to as a software program that enables the computer hardware to communicate
and operate with the computer software.
Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen,
keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers. Without
a computer operating system, a computer and software programs would be useless.
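As a small illustration of one of those basic tasks (keeping track of files and directories), the sketch below asks the operating system for the current folder and its contents using Python's standard os module.

```python
import os

# The operating system keeps track of files and directories for us.
current_directory = os.getcwd()          # where are we on the disk?
entries = os.listdir(current_directory)  # which files and folders are here?

print("Current directory:", current_directory)
for name in entries:
    print(" -", name)
```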

Examples of operating systems

1. DOS (Disk Operating System)


2. UNIX operating systems
3. LINUX operating systems (Ubuntu, Kubuntu, Fedora, Debian, etc.)
4. WINDOWS operating systems
5. MACINTOSH operating systems (MAC OS)
6. iOS (Apple’s iOS)
7. ANDROID operating systems
8. BLACKBERRY operating systems
9. SYMBIAN OS
10. Some Linux based OS, etc.

Versions of WINDOWS Operating Systems


Windows 1.0 (1985), Windows 3.1 (1992), Windows 95 (1995), Windows XP (2001), Windows Vista (2006), Windows 7 (2009),
Windows 8 (2012) and Windows 10 (2015)


Properties of an operating system

The following are properties of a good operating system:

1. It must be affordable
2. It should be understandable
3. It must be efficient
4. It must have the ability to evolve
5. It must be convenient

Types of operating systems

1. Command Line Interface (CLI) – a command line interface is an older style of operating system where users type in
commands using the keyboard.
Command line interfaces do not make use of images, icons or graphics; all the user sees is a plain black screen. E.g. DOS,
UNIX
2. GUI -Short for Graphical User Interface, a GUI Operating System contains graphics and icons and is commonly
navigated by using a computer mouse. Examples of GUI Operating Systems are: Windows OS, Linux, and all mobile OS

3. Multi-user multi-tasking -This operating system allows for multiple users to use the same computer at the same time
and multiple software processes to run at the same time. Examples of operating systems that would fall into this category
are: Linux, UNIX, and Windows 2000

4. Multiprocessing -An operating system capable of supporting and utilizing more than one computer processor.
Examples of operating systems that would fall into this category are: Linux, UNIX, and Windows XP

5. Network – an operating system that is capable of allowing multiple computers/users to share resources on a
network. It is usually used in a client/server network structure. Examples of operating systems in this category are: Novell, Linux,
UNIX, Windows NT, and Windows 2000

6. Multithreading – an operating system that allows different parts of a program, called threads, to run simultaneously
(a small illustrative sketch follows this list). Examples of operating systems that would fall into this category are: Linux, UNIX, and Windows XP

7. Single-user, single task - As the name implies, this operating system is designed to manage the computer so that one user can effectively do one thing at a time. E.g. MS-DOS
8. Single-user, multi-tasking-This is the type of operating system most people use on their desktop and laptop
computers today.
 Microsoft's Windows and Apple's Mac OS platforms are both examples of operating systems that will let a single user
have several programs in operation at the same time. For example, it's entirely possible for a Windows user to be
writing a note in a word processor while downloading a file from the Internet while printing the text of an e-mail
message.
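
As a rough illustration of multi-tasking and multithreading (not tied to any particular operating system named above), the minimal Python 3 sketch below starts two threads in one program; the operating system shares the processor between them so that both counters run at the same time. It assumes only that Python with its standard threading module is installed.

import threading
import time

def count_down(name, n):
    # Each thread runs this function independently of the other.
    for i in range(n, 0, -1):
        print(name, "counting:", i)
        time.sleep(0.5)   # pause so the two threads visibly interleave

# Two threads of the same program; the operating system schedules both.
thread_a = threading.Thread(target=count_down, args=("Thread A", 3))
thread_b = threading.Thread(target=count_down, args=("Thread B", 3))
thread_a.start()
thread_b.start()
thread_a.join()   # wait for both threads to finish before the program ends
thread_b.join()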

The operating system at the simplest level does 2 main things


1. It manages the hardware and software resources of the system. Such resources include the processor, memory, disk space, etc.
2. It provides a stable consistent way for applications to deal with the hardware without having to know all the details
of the hardware.

FUNCTIONS OF AN OPERATING SYSTEM

Below are the main functions of an operating system:

 Resource Management/Allocation
 User interface
 Memory Management.
 Process Management.
 Device management
 File Management.
 Communication/network Management.
 Security Management.

1. Resource manager – The first task, managing the hardware and software resources, is very important, as various
programs and input methods compete for the attention of the central processing unit (CPU) and demand memory,
storage and input/output (I/O) bandwidth for their own purposes. In this capacity, the operating system plays the
role of the good parent, making sure that each application gets the necessary resources while playing nicely with all
the other applications, as well as husbanding the limited capacity of the system to the greatest good of all the users
and applications.

2. User interface- creates a platform where the system interfaces with the user to perform the user’s requests.

3. Memory manager – (in charge of the main memory). Memory management is an important function of the operating system, because different programs and their data occupy memory at the same time. Without an operating system to keep them apart, the programs could interfere with one another and the system would not work properly. The operating system checks every request for memory space, confirms that it is valid, and then allocates memory space that is not already taken up.

4. Process manager – The CPU can perform only one task at a time. If there are many tasks, the operating system decides which task should get the CPU, keeping track of the status of each process and handling tasks as they enter the system.

5. Device management: Special programs called drivers allow hardware peripherals to communicate with the OS. Much of a driver's function is to tell the operating system that a peripheral device has been plugged in and needs to be used.
Driver programs function in different ways. Most run when the device is required, and function much the same as any other process. The operating system will frequently assign high-priority memory blocks to drivers so that the hardware resource can be released and readied for further use as quickly as possible.

6. File manager – keeps track of every type of file on the system (data files, program files, compilers, and so on) and checks access permissions so that each user can only see the files he or she is allowed to see.

7. Network manager – provides a way for users to share hardware and software resources and controls each user's access to those resources.

8. Security management – the OS has a number of built-in tools to protect the system against security threats, including the use of virus-scanning utilities and setting up a firewall to block suspicious network activity.
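
As a small sketch of how an ordinary program relies on these operating system services instead of touching the hardware directly, the Python 3 example below (assuming Python is installed) asks the OS for the program's process identifier and for a listing of files, using the standard os module:

import os

# Process management: the OS assigns this running program a process ID.
print("Process ID of this program:", os.getpid())

# File management: the OS locates the current directory and lists its files.
print("Current working directory:", os.getcwd())
print("Files in this directory:", os.listdir("."))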
COMPUTER PROBLEM SOLVING SKILLS week 5

Introduction:
Solving problems using a computer involves creating and using software.
Software consists of programs written in computer programming languages. These languages are designed in such a way that they can easily be converted into the language the computer itself understands.
Definition of terms

A computer language: This is a notation by which we communicate with the computer. It can also be the system of
instructions or commands upon which the computer operates.

A computer program: This is a set of instructions in a computer language that enables the computer to perform some given
tasks.

Computer programming language: This is the language used to communicate instructions to the computer. It is used to write the series of steps required to carry out specific tasks in the computer.

Every programming language has its own grammar and set of rules, called syntax, that governs the way in which instructions given to the computer must be written.

Types of programming language


There are two main types of programming languages;
1. Low-level languages
2. High-level languages

1. Low level languages are machine oriented languages that are closer to the computer than to the human user. They
are referred to as low because they are very close to how different hardware elements of a computer communicate
with each other.
There are 2 categories of low-level languages;
a) Machine language
b) Assembly language
i. Machine language is the only language that is directly understood by the computer. It doesn’t need to be
translated as it consists of strings of 0’s and 1’s which is the way data is represented in the computer.
ii. Assembly language is a low-level language which has instructions written in set of symbols and letters usually
called mnemonics. A translator program called an assembler is required to translate programs written in
assembly language to machine language for the computer to understand and execute.
2. High level languages are programming languages that are humanly understandable. They are closer to the human
user than to the machine. They are the languages used by programmers to write programs to perform specific tasks.
They are however not dependent on the design of the CPU. Examples of high level languages are C++, Fortran, Java
and Python.
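
As a small illustration, the lines below show what a high-level language looks like, written in Python (one of the languages named above). The instructions read almost like plain English, and a translator must still convert them into machine language before the CPU can run them. The values used are only an example.

# Calculate and display the total cost of some items.
price = 250          # price of one item
quantity = 4         # number of items bought
total = price * quantity
print("Total cost:", total)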

Translators:
High-level languages require a translator to convert programs written in human-readable form into machine language so that the computer can execute them. There are 2 types of translators;
i. A compiler
ii. An interpreter
a. A compiler is a computer program that translates a program written in a high-level language into the machine language of a computer. The high-level program is referred to as 'the source code.' The compiler translates the whole program from source code to machine code before it is executed by the computer. The machine code produced is however machine dependent.
b. An interpreter is a computer program that simulates a computer that understands a high-level language. This means that the interpreter translates and executes the source code line by line during execution. Every time you want to run the program, the code has to be interpreted again line by line, because no compiled machine code is saved for reuse.
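
The toy sketch below, written in Python, imitates the idea of line-by-line interpretation: each "source" line is translated and run immediately, and nothing is saved for later runs. Python's built-in exec() function stands in for the translate-and-run step here; a real interpreter is far more elaborate, so treat this only as an illustration.

# A pretend program, stored as lines of source code.
source_lines = [
    "x = 5",
    "y = 7",
    "print('sum =', x + y)",
]

variables = {}                 # holds the program's variables while it runs
for line in source_lines:
    exec(line, variables)      # translate and execute this one line immediately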
Stages involved in writing computer programs

1. Problem identification – this is the first and most important stage in writing a computer program. It involves identifying a particular problem and developing a program to solve it. Certain factors need to be analyzed at this stage; they include the language to be used, the type of report required, the type of data expected, the stages of processing, the type of hardware and the needs of the user.
2. Problem analysis – this is the stage where a step-by-step procedure of solving the problem is written out usually in
English language with mathematical symbols.
3. Symbolic representation – this is a pictorial representation of the step-by-step procedure and it is called a flowchart.

Flowchart symbols

 Terminator (start/stop)
 Input/Output
 Process
 Decision
 Flow lines
 Connector

A Simple Flow Chart


4. Coding – this is the stage where the instructions to be carried out are written using a programming language that is convenient (a short example is given after this list).
5. Preparing data – this is the stage where relevant data for the program are prepared to be processed for a desired output.
6. Testing or running – the collected data are fed into the computer at this stage. Such data are used to perform tests on the program, and any errors found are corrected, a process usually referred to as debugging.
7. Documentation – this is an on-going stage, as all the steps involved from the beginning of the program are written down.
8. Maintenance – programs developed to solve a problem need to be updated to accommodate certain modifications. For instance, a program written previously could be updated and corrected for present-day use.
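
As a short example of the coding stage (stage 4 above), here is a small Python program that follows a simple flowchart of the kind sketched earlier – Start, Input, Decision, Output, Stop – to find the larger of two numbers. The variable names are chosen only for illustration.

# Start (Terminator)
first = int(input("Enter the first number: "))    # Input/Output
second = int(input("Enter the second number: "))  # Input/Output

if first > second:                                # Decision
    print("The larger number is", first)          # Input/Output
else:
    print("The larger number is", second)         # Input/Output
# Stop (Terminator)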

WEEK 11
SAFETY MEASURES IN THE USE OF COMPUTER
What is Ergonomics?
Ergonomics is the study of people's efficiency in their working environment. It is the process of designing or arranging
workplaces, products and systems so that they fit the people who use them.
Most people have heard of ergonomics and think it is something to do with seating or with the design of car controls and instruments – and it is – but it is so much more. Ergonomics applies to the design of anything that involves people – workspaces, sports and leisure, health and safety.

Computer ergonomics is the science of equipment design and how specific equipment usage and placement can reduce a
user's discomfort and increase productivity.

Some equipment is designed with special attention to ergonomics, such as ergonomic keyboards and ergonomic chairs.

Ergonomic Keyboards

Safety Tips While Using the Computers


1. Adjust your chair: make sure your chair is adjusted to allow you to sit in a natural, comfortable position.
2. Keep the keyboard at a comfortable height: Many desks have a keyboard tray that can keep the keyboard at a better
height. You can also buy an ergonomic keyboard that is designed to minimize wrist strain.
3. Keep the mouse close to the keyboard: If possible, place the mouse right next to the keyboard. If the mouse is too
far away, it may be uncomfortable or awkward to reach for it.
4. Place the monitor at a comfortable distance: The ideal position for a monitor is 20 to 40 inches away from your eyes. It should also be at eye level or slightly lower.
5. Avoid clutter: The computer area can quickly become cluttered with paper, computer accessories, and other items.
By keeping this area as uncluttered as possible, you can improve your productivity and prevent strain and injury.
6. Take frequent breaks: It's important to take breaks while you're working at your computer. To avoid eye strain, you
should look away from the monitor every once in a while. You can also stand up and walk around to avoid sitting in
the same position for long periods of time.

Other Safety Measures


1. Using an anti-glare protector. This will help prevent Computer Vision Syndrome (CVS).
2. Good positioning of monitor base.
3. Proper Illumination of the computer room.
4. Maintaining a dust-free Environment.
5. Keeping liquids away from computers

WEEK 12
GRAPHICS PACKAGES
Definition
Graphics packages are computer software used in the production of images, drawings, pictures and other graphic related
jobs. They are frequently used for creating logos, charts, editing pictures and also for colour separation.

Uses of Graphics Packages


Graphic packages can be used for:
1. Drawing straight lines and curves
2. Filling a shape with colour
3. Editing images from input devices such as scanners and digital cameras.
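
As an illustration of the first two uses, the short sketch below uses Python's built-in turtle module (not one of the packages listed below) to draw a straight line and fill a shape with colour. It assumes a standard Python installation with a display available.

import turtle

pen = turtle.Turtle()

# 1. Draw a straight line.
pen.forward(100)

# 2. Draw a square and fill it with colour.
pen.fillcolor("red")
pen.begin_fill()
for _ in range(4):
    pen.forward(50)
    pen.left(90)
pen.end_fill()

turtle.done()   # keep the drawing window open until it is closed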

Examples of graphics packages include:


 Ms Paint,
 CorelDraw
 Instant Artist
 Adobe Photoshop
 Print artist
 Logo graphics
 Harvard graphics
MS - Paint

FEATURES OF GRAPHICS PACKAGES


The general features of graphics packages include the following:
 Title bar: this is the horizontal space found at the top of the graphics environment window. It contains the file name
as well as buttons for closing and resizing the window.
 Tool bar: this is a block of icons that contains all the tools needed to perform a graphic task.
 Menu bar: this contains the commands used to carry out tasks. They include File, Edit, View, Image, Colours and Help.
 Printable area: this is the part of the window used for drawing and printing. For any image to be printed, it must be
in the printable area.
 Colour palette: this tool enables different colours to be selected for use in a drawing.
 Status bar: this shows the position and status of the cursor. It displays the page number, line number etc.
