Class Notes
DESIGNED BY R. GOTORA
Information Technology
Definition – It is an organized combination and use of hardware, software, telecommunications,
database management and other information processing technologies used in a computer-based
information system.
Information technology transforms data into a variety of useful information products specifically by
the use of a computer.
Computer
⇒ It is a device that has the ability to accept data, internally store and execute a program of instructions, perform mathematical, logical and manipulative operations on the data and report on the results.
⇒ Put simply, it is a machine that accepts data (input) and processes it into useful information
(output).
Computer System
⇒ It is an interrelated system of input, processing, output, storage and control components
⇒ Thus a computer system consists of input and output devices, primary and secondary storage devices, the central processing unit, the control unit within the CPU and other peripherals
The terms data and information are loosely used interchangeably in ordinary discussions. The terms,
however, are different in their usage in the field of information systems.
Data – it is the complete range of facts, events, transactions, opinions, judgments that exist both
within and outside the organization. Data are raw facts from which information is produced.
Information – it is part of the total data available which is appropriate to the requirements of a
particular user or group of users. It is processed data upon which a user may rely for decision.
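The distinction can be illustrated with a short sketch (the figures and branch names are hypothetical): individual transaction records are data; the per-branch totals derived from them are information a particular user, such as a sales manager, can rely on.

```python
# Hypothetical raw data: individual sales transactions (date, branch, amount).
raw_data = [
    ("2024-01-05", "Harare", 120.00),
    ("2024-01-06", "Bulawayo", 75.50),
    ("2024-01-06", "Harare", 210.25),
]

# Processing turns the raw facts into information: total sales per branch,
# which is what this particular user actually needs for a decision.
totals = {}
for date, branch, amount in raw_data:
    totals[branch] = totals.get(branch, 0.0) + amount

print(totals)  # {'Harare': 330.25, 'Bulawayo': 75.5}
```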
Computer Hardware- refers to the physical components of a computer both mechanical and
electronic.
[Diagram: components of a computer system]
- Processor
  - Control Unit: interprets stored instructions in sequence and issues commands to all elements of the computer.
  - Arithmetic & Logic Unit: performs arithmetic and logic functions.
- Input: data and instructions.
- Output: the results of processing.
- Main Memory: holds data, instructions and results of processing.
- Auxiliary Storage (backing/secondary storage): supplements main memory.
Key: arrows in the original diagram distinguish data flow from command/signal flow between these components.
Input Devices
These facilitate communication between the user and the computer, and allow the insertion of data into the computer for processing.
(b) Keyboard
⇒ A keyboard is laid out like a typewriter
⇒ It allows data to be typed in by the pressing of the relevant keys
⇒ The pressed key or instruction is displayed or executed
⇒ This is the most commonly used input device
(c) Mouse
⇒ It is a hand held pointing device electronically connected to the computer which is
used to control the cursor or a pointer on the screen through the rolling motion of a
ball on a flat surface.
⇒ The cursor or pointer on the video screen moves in the same direction as the
movement of the mouse.
⇒ When the pointer is on the required menu item (icon) a button is clicked to select that
item.
(d) Light Pen
⇒ It is a pen-shaped device that uses photoelectric circuitry to enter data into the
computer through a special video screen
⇒ A user can write on the video display
⇒ The high sensitive pen enables the computer to calculate the coordinates of the
points on the screen touched by the light pen
⇒ The handwriting or graphic is digitized, accepted as input and displayed on the VDU
⇒ Input therefore is directly onto the screen
(e) Touch Screens
⇒ Use an inlaid screen to accept input through the act of physically touching the
screen
⇒ The computer senses the selected position and executes the instruction accordingly
⇒ This device works best with menu-driven applications
The following are the main factors to be considered when deciding on the data capture systems:
(a) Costs of the system – costs must be kept low
(b) Accuracy – should have detection and correction procedures for errors
(c) Time – turnaround time in capturing data should be short
(d) Reliability – The system of capture should be free from breakdown
(e) Flexibility – the system must cater for different types of data
(f) Volume – a system should suit the volume of data to be captured
(g) Existing equipment – a system that uses existing equipment would be most preferred.
(h) User friendliness – a system should render itself easily to the user
Output Devices
There are two forms of output devices: those that produce hardcopy (permanent) output and those that produce softcopy.
Printers are hardcopy devices while the VDU is a softcopy device.
Hardcopies are needed when copies are to be taken away from the computer to be sent to a user of the
information thereon, or to be filed away or even as legal documentation.
Computers, therefore, can produce a number of different documents e.g. reports, sales invoices, payrolls, or graphics.
Types of Printers
The following is an illustration showing the types of printers and their sub types and examples of
these.
Printers fall into two broad types: impact and non-impact, each with sub-types.

Line Printers
These produce a complete line of text in a single printing operation. These are suitable for bulk printing.
i. Drum Printers.
• They employ columns of complete characters embossed around the circumference of a rapidly
rotating drum.
• Every print position is capable of being occupied by any character
• A print hammer situated at each print position forces the paper against the drum through the ribbon (interposed between the paper and the drum) when the appropriate character is in position.
• These are expensive to buy and maintain
• The print quality is poor (especially if the print hammers are mistimed)
• They do not allow for change of fonts.
• They are also very noisy
Character Printers
These are also known as serial printers. They print one character at a time across the page, using one (or two) print head(s).
There are two categories of character printers: impact and non-impact printers.
Impact printers – they form characters and graphics on the paper by pressing a printing element
(such as print wheel or cylinder) and an inked ribbon against paper e.g. a dot matrix printer. Multiple
copies can be used through the use of a carbonized paper.
Non-impact printers - do not use force and are quieter than impact printers. They use specially
treated paper and can form characters by laser, thermal (heat) or electrochemical processes. They
produce higher quality of print than impact printers. They, however, cannot produce multiple copies.
(i) Dot Matrix – the print head consists of a matrix of tiny tubes containing needles. Each character is formed from a square or rectangular array of dots. The needles are fired onto the
printer ribbon in a pattern corresponding to the shape of the character required. Each character
is printed by the repeated horizontal movement of the print head. The quality of the print
depends on the dots in the matrix (most common are the 7 rows by 9 columns matrices). These
printers are cheap to purchase and maintain but do not produce good print quality.
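As a rough illustration of how a character emerges from an array of dots, the sketch below prints the letter H from a simplified 5 x 7 grid (real print heads commonly used 7 x 9 matrices; the pattern here is invented):

```python
# Simplified 5x7 dot pattern for the letter 'H'.
# 'X' marks a needle fired against the ribbon; '.' marks no dot.
H_PATTERN = [
    "X...X",
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
    "X...X",
]

# Each row of dots is one horizontal pass of the print head.
lines = ["".join("#" if c == "X" else " " for c in row) for row in H_PATTERN]
print("\n".join(lines))
```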
(ii) Daisy Wheel – Uses a rotatable wheel consisting of a number of flexible metal or plastic
spokes (usually 96) at the end of which is a mirror image of a character. During printing the
wheel is rotated until the required character comes into line with the print hammer which then
hits the character and the ribbon onto the paper and the paper is printed. They are cheap and
offer better print quality than Dot Matrix Printers. However, they are slower and much noisier.
(iii) Laser Printers (non-impact) – They can print an almost unlimited number of fonts and can even mix different fonts in the same line. They can produce high quality print, and can be used to print logos, illustrations and graphics. They are very quiet during printing. However, they are very expensive to purchase and maintain.
Speech Output
Some computers are capable of producing speech. The computer can actually speak out from a stored
digital representation of either words or other sounds. A person's voice can be stored and reproduced
later.
This method of output is of great use to the blind, or in poor light. Advanced forms of speech output
are being used to answer telephones, answer some inquiries from an organization's callers etc.
Computer Output on Microfilm (COM)
It is the recording of computer output onto microfilm or microfiche (microforms). Recording onto the
microfilms and microfiches can be done on-line or via a magnetic tape in order to convert the
computer's digital representation of data into microforms. The recording is a way of data storage in
which the computer stores the data in a reduced (compacted) way on the film or fiche. The stored data
can be retrieved later through the use of a reader and editing can be done. Microforms are much easier
to store than ordinary hardcopies and last much longer. Accessing and finding the required information
can be made easier by indexing the film or fiches.
COM is most suitable where large amounts of data are processed but are to be used much later e.g. in a government registry department (birth registrations, important national statistics, for example population censuses).
Graph Plotters
These are used to plot graphs, maps and other forms of graphic onto a medium usually larger than the
size of a normal paper. They can print in different colours.
The Central Processing Unit (CPU)
It is the unit of the computer system that includes the circuits that control the interpretation and execution of instructions. It is the most important component of a computer system.
Functions of the processor
To control the use of the memory to store data and instructions
To control the sequence of operations
To give instructions to all parts of the computer to carry out processing
The CPU can be subdivided into two major subunits; the control unit (CU) and the arithmetic logic
unit (ALU). The primary (main) memory is an extension of the CPU and assists the CPU in its
operations.
Operational features
The memory has uniquely addressable storage locations that are easily accessible to the CU.
Random Access - it is possible to fetch data from the locations in main storage in any order and
time taken to access the location does not depend on the position of the location.
Volatility - the main memory can be volatile or non-volatile depending on its physical characteristics
Details of single location - Each location consists of tiny devices that can take two states
(on/off). The two states of each device are used to represent binary (0 - off, 1 - on). Each
location in the main memory holds a unit of data called a word. Some computers had locations
holding 8 binary digits and were therefore said to have an 8-bit word. Other computers have 16
bit storage locations, while others tend to have 32 bit locations.
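The effect of word size can be sketched in a few lines: an n-bit location can hold 2 to the power n distinct bit patterns, so a larger word represents a larger range of values in a single location.

```python
# Number of distinct values an n-bit storage location can represent.
for bits in (8, 16, 32):
    print(f"{bits}-bit word: {2 ** bits} possible values")
# An 8-bit word holds 256 patterns (0-255 unsigned), a 16-bit word 65,536,
# and a 32-bit word over four thousand million.
```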
The Random Access Memory (RAM)
This forms the largest part of the Main Memory and is often used to measure the power of a computer.
It is used for temporary storage of data and programs during processing. RAM contains user data and
application programs being processed. Data may be read from RAM and data can also be written
onto and stored on RAM. RAM contents are volatile i.e. stored data is lost and the contents disappear
if the power is interrupted or when computer is switched off.
Storage capacity on RAM
The number of storage locations in RAM dictates the storage capacity or size of the computer. Storage on a computer is quoted in kilobytes (Kb) or megabytes (Mb).
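By convention 1 Kb = 1024 bytes and 1 Mb = 1024 Kb, so capacities convert as in this small sketch (the 64 Mb machine is a hypothetical example):

```python
KB = 1024        # bytes in a kilobyte
MB = 1024 * KB   # bytes in a megabyte

ram = 64 * MB    # a hypothetical machine with 64 Mb of RAM
print(ram, "bytes =", ram // KB, "Kb =", ram // MB, "Mb")
```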
PROM - Programmable Read Only Memory - can be programmed by the user, the data and
instructions are held permanently once the PROM is programmed.
EPROM - Erasable Programmable Read Only Memory - It is like PROM but can be erased and
reprogrammed. The EPROM must be removed from the computer in order to be erased thus
the complete program has to be reinserted.
EAROM - Electrically Alterable Read Only Memory - It can be read, erased and written on by
electrical methods without removing it from the computer.
Floppy Disk Drive - These work in conjunction with floppy or magnetic diskettes. They have a
narrow slot where the diskette is inserted. The slot has a push button or lever which must be closed
when the diskette has been inserted. The process of closing engages a turn table which rotates the disk
and so brings the read/write head into contact with the disk.
Magnetic Tape
This is similar to the kind found for audio or video tapes
It is a film coated with iron oxide
Portions of the tape are magnetized to represent bits
It uses separate read/write heads to transfer data from the tape to the main memory and to record.
Vacuum columns in the unit absorb the force of sudden starts and stops to prevent tape
snatches.
They store data in sequence, so the tape has to be read sequentially until the required data is reached; this means access is slower.
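The cost of sequential access can be sketched in a few lines (the records here are invented): to read one record, every record before it on the tape must be passed over first.

```python
# A hypothetical tape of 1000 records, stored strictly in sequence.
tape = [(f"rec{i:03d}", i * 10) for i in range(1000)]

def read_from_tape(key):
    """Sequential search: move over each record in turn until the key is found."""
    passed = 0
    for k, value in tape:
        passed += 1
        if k == key:
            return value, passed
    return None, passed

value, records_passed = read_from_tape("rec500")
print(value, records_passed)  # 5000 501 -- 501 records read to reach one
```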
[Diagram: magnetic disk unit: disks mounted on a spindle, with read/write heads carried on access arms]
COMPUTER SYSTEMS
Computers can be classified as general purpose or special purpose. General-purpose computers are
used to perform a variety of applications and the most common in business while special purpose
computers are used for specific or limited applications e.g. military or scientific research.
Microcomputers
They are also called Personal Computers (PCs) or Desktop Computers.
These are relatively small and inexpensive.
They consist of a single processor on a chip
The system is normally made up of the microprocessor, keyboard, VDU one or two floppy disk
drives, a printer and a hard disk drive.
It has a hard disk capacity of 20Mb to 520Mb
May use a colour or monochrome cathode ray tube (CRT)
Have capabilities for networking.
They are single user.
They occupy little space.
They are capable of multiprogramming.
They are compatible with a wide range of software.
PCs come in a variety of sizes: notebooks (handheld), laptops, desktops and floor standing.
Minicomputers
Hardware features
- Support magnetic tape storage
- They are multi-user, with more than 100 users at a time for super minis
- Have multiple hard disks, both fixed and exchangeable
- Can be upgraded when necessary
- They do not require a special environment to work so can allow for decentralisation.
- They are less expensive than the mainframe systems
- They have bigger capacities than microcomputers; some have 32 bit microprocessors.
Mainframe computers
These are large, powerful computers with a lot of processing capabilities.
They are suitable for medium-size to large corporations.
They can also be linked together to form a powerful and flexible system.
Hardware Features
- Similar to minicomputers but have several large processors and high processing speeds of
up to 200 million instructions per second (mips)
- They have massive amounts of storage power.
- They can use high speed line printers
- They have a large number of magnetic disks and tape units with large capabilities
- They are multi-user and multi processing
- They have improved reliability
- Their performance may be enhanced by slotting a smaller system, like a minicomputer
between the terminal and the main processor - the front end processor (FEP)
- Both processors run concurrently with the FEP passing on partially processed data to the
main frame for further processing.
- They, however, are expensive to buy and maintain, they need special and very expensive
software and they also require a special environment.
- They can be used for large corporations (such as large international banks) and government
departments
Supercomputer Systems
These are extremely powerful mainframe computer systems. They are specifically designed for high-speed numeric calculations. They can process hundreds of millions of instructions per second (mips).
They can be used by government research agencies, national weather forecasting, spacecraft
construction and navigation.
A microprocessor is a type of integrated circuit (IC). It has two distinct characteristics - word size and speed of operation.
Word size - the number of bits dealt with at the same time, some processors are 8 bit, others even 32
bit. The larger the word size the more powerful a computer system is. So some physically bigger
systems may have smaller word sizes and hence less power.
Speed of the microprocessor - this is the clock rate, the rate at which data bits are moved around inside the processor, measured in megahertz (MHz). (The rate of data transfer over a communication line is a separate measure, called the baud rate.) Systems with higher clock speeds tend to be more powerful even though they may be physically small.
1. The Word Processor - it is a computer used to produce office documents, usually text. It has very limited memory and processing capabilities. It is cheap to buy.
2. Home Computer - it is a cheap computer that is used for domestic purposes e.g. programmes
for games and controlling home finances.
3. Personal Computer - it is a microcomputer that is usually for use by one person in an office or
at home.
4. Workstation - a computer terminal (PC or desktop) designed to support the work of one
person. It can be high-powered or have other superior capabilities to PCs or ordinary desktops
e.g. capacity to do calculations, graphics and other advanced logical operations.
5. Laptop - it is a small computer with a flat screen that a user can place on his lap. It is
portable and has an in-built rechargeable battery that can support it when there is no power
from the mains. It can be carried in a briefcase.
6. Embedded Computers - These are computers in other devices that cannot be accessed directly
e.g. those in petrol pumps, ATMs, vending machines, cellphones and elevators.
SOFTWARE
Software refers to computer programs that control the workings of the computer hardware, along with
the program documentation used to explain the programs to the user.
Systems Software
Application Software
Development Software
SYSTEMS SOFTWARE
It is a collection of programs that interact with the computer hardware and application software
programs creating a layer of insulation between the two. Systems Software contains instructions
which:
(a) Manage a computer system’s hardware components to coordinate them so that they work
efficiently
(b) Schedule the computer’s time to make the best use of that time.
The Operating System
⇒ It is an integrated system of programs that manages the operations of the CPU, controls the input, output and storage resources and activities of a computer system.
⇒ The primary purpose of the operating system is to maximise the productivity of a computer
system. It minimises the amount of user intervention required during data entry and processing. It
helps application programs perform common operations such as entering data, saving, retrieving
files, printing and displaying output.
Functions of the Operating System
(a) User Interfacing – an Operating System allows a user to communicate with the computer in loading programs, accessing files and accomplishing tasks through command driven, menu driven or graphical user interfaces. In command driven interfaces the user types brief commands; in menu driven interfaces the user selects choices from menus of options; in graphical user interfaces the user selects icons and other visual elements with a pointing device.
(b) Operating Environment Management – Use of GUI enables the user to connect to other
separate application packages so that they can communicate and work together and share data
files. Operating environment packages provide icon displays and support the use of some input
devices to allow the running and output of several programs to be displayed at the same time.
The Operating System allows for multitasking – i.e. where several programs or tasks can be
processed at the same time.
(c) Resource Management – Resource management programs of the operating system manage
the hardware resources of a computer system including the CPU, memory, secondary storage
devices and input/output peripherals. For example a memory program keeps track of where
data and programs are stored. They subdivide memory into sections and swap parts of
programs and data between main memory and secondary storage devices. This operation then
can provide virtual memory capability i.e. the real memory capacity in main memory is larger
than the capacity of its normal memory circuits.
(d) File Management – The file management programs in the operating system control the
creation, deletion and access of data and programs. The programs also keep track of the
physical location of files on secondary storage units. They maintain directories of information
about the location characteristics of files stored on a computer system’ s secondary storage
devices.
(e) Task Management – The task management programs of an operating system manage the
accomplishment of computing tasks as needed by the user. They give each task a slice of the
CPU’s time and interrupt the CPU operations to substitute other tasks. Task management may
involve multitasking – where several computing tasks can occur at the same time.
Multitasking may be in the form of multiprogramming (several programs are running at the
same time). The operating system allows for time sharing – where the computing tasks of
several users can be processed at the same time. Multitasking depends on the computing power
of the CPU if too many programs are running concurrently the system may be overloaded or
processing slowed down. Example of multitasking: printing and typing at the same time, word
processing and financial analysis, browsing the internet and word processing.
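The time-sharing idea behind task management can be sketched as a simple round-robin scheduler: each task receives a fixed slice of CPU time in turn, and unfinished tasks rejoin the back of the queue (the task names and slice counts here are invented):

```python
from collections import deque

# Each task is (name, units of CPU time still needed).
tasks = deque([("print_job", 3), ("typing", 1), ("spreadsheet", 2)])
SLICE = 1   # one unit of CPU time per turn
order = []  # the sequence in which tasks actually got the CPU

while tasks:
    name, remaining = tasks.popleft()
    order.append(name)                   # the task runs for one slice
    remaining -= SLICE
    if remaining > 0:
        tasks.append((name, remaining))  # unfinished: back of the queue

print(order)
```

The interleaved order shows several tasks making progress "at the same time", which is how printing and typing appear simultaneous to the user.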
OS/2
⇒ It is called the Operating System /2
⇒ It is an operating system developed by IBM, first introduced in 1987; the OS/2 Warp version (1994) was the latest at the time of writing.
⇒ It provides graphical user interfaces (GUIs), multitasking, virtual memory and
telecommunications.
UNIX
⇒ It was originally developed by AT&T but now being offered by many other vendors.
⇒ It is a multitasking, multi-user, network-managing operating system
⇒ Because of its portability, it can be used on mainframes, midrange computers and microcomputers.
⇒ It is a popular choice for network servers.
Language translators
Utility programs
Control programs
Communication programs
Non-machine languages must be converted into machine language to be executed by the CPU. This is
done by systems software called language translators.
A language translator converts a program written in a high-level programming language (the source code) into the equivalent machine language program (the object code). The translator converts the commands given in a human-readable language into the form the computer has been programmed to understand before executing the instructions.
Interpreter
This is a language translator that converts each statement in a program into machine language and executes it, one statement at a time.
Compiler
This language translator translates a complete program into a complete machine language program. The result is a program in machine language that can be run in its entirety. With a compiler, program execution is a two-stage process: first, the compiler translates the program into machine language; second, the machine language program is executed.
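The two-stage process can be sketched using Python's built-in compile() purely for illustration: a real compiler emits machine code, but the translate-whole-program-then-run shape is the same.

```python
# Hypothetical one-line source program.
source = "result = 6 * 7"

# Stage 1: translate the whole program into an executable form at once.
object_code = compile(source, "<demo>", "exec")

# Stage 2: run the translated program in its entirety.
namespace = {}
exec(object_code, namespace)
print(namespace["result"])  # 42
```

An interpreter, by contrast, would translate and execute each statement as it reached it, without producing a stored object program.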
Utility programs
It is a standard set of routines that assist the operation of the computer system by performing some
frequently required processes such as to sort and merge sets of data, copy and keep track of computer
jobs being run.
Control programs
These are designed to manage the general functions of the processor, memory and terminal interface.
The programs are arranged in a hierarchy at the top of which is a kernel or executive program (the supervisor) that controls the running of the other programs. In microcomputers the supervisor is held in ROM while on larger computer systems it is held on backing store. When the computer is switched on the supervisor
is loaded into main memory, the other programs are kept on disk and are transferred to main memory
when they are needed.
The job scheduler selects, initiates, terminates, queues and sequences the jobs that demand the use of
the processor and main memory.
The Input/Output manager has the responsibility of managing the interface with terminals and
backing store in response to the requirements of any applications program being executed.
Communications Programs
These support network computer systems by allowing different types of hardware to be linked and to
communicate with each other. The programs may help to select the best transmission medium for the
message, coding and sending the data.
Virtual Memory (Virtual Storage)
This refers to a system for extending the capacity of main memory for running large application or utility programs. The operating system separates programs into sections, some of which are put into backing store. The locations of these sections (addresses) and the part of the program being executed are held in main memory; the sections are called in and processed when required and then returned to backing storage. The sections of the programs are called pages and are said to page in from backing store and page out when being replaced by other pages. The execution of virtual storage is therefore called paging.
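Paging can be sketched with a toy model (frame count and page reference string invented): main memory holds only a few pages, and referencing a page not in memory pages it in, paging out the oldest resident page.

```python
MEMORY_FRAMES = 3   # main memory holds only three pages at once
memory = []         # pages currently in main memory, oldest first
page_faults = 0     # how often a page had to be fetched from backing store

def reference(page):
    """Access a page, paging it in (and another out) if necessary."""
    global page_faults
    if page in memory:
        return                 # already in main memory: no fault
    page_faults += 1
    if len(memory) == MEMORY_FRAMES:
        memory.pop(0)          # page out the oldest page to backing store
    memory.append(page)        # page in the referenced page

for p in [1, 2, 3, 1, 4, 1, 2]:
    reference(p)

print(memory, page_faults)
```

Real operating systems use cleverer replacement policies than oldest-first, but the page-in/page-out mechanism is the same idea.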
APPLICATION SOFTWARE
⇒ Applications are programs designed to help the user carry out specific tasks for example to
manipulate figures or write texts.
⇒ This also consists of programs written to solve particular user-oriented problems. It applies the
power of the computer to give individuals, groups and organisations the ability to solve problems
and perform specific activities or tasks e.g. Accounts receivable, accounts payable, automatic teller
machines, inventory control, library operations and control, invoicing etc
Spreadsheets
- A spreadsheet package is used to perform calculations that have been entered onto a grid.
- Formulae are entered into the grid using the figures, if the figures change; the results of the
formulae are updated automatically.
- It is also possible to filter (select only the required data), sort or perform other forms of data
manipulations.
- It is possible to produce graphs, charts and other forms of comparison using the entered figures
from the spreadsheet.
- Popular examples of spreadsheet packages are: Microsoft Excel, Lotus 1-2-3, Quattro Pro
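The automatic-recalculation idea can be sketched in a few lines (cell names and figures invented): figures live in cells, formulas are stored against cells, and a formula's result updates when a figure changes.

```python
cells = {"A1": 100, "A2": 250}
formulas = {"A3": lambda c: c["A1"] + c["A2"]}  # A3 = A1 + A2

def value(ref):
    """Return a cell's figure, or evaluate its formula on demand."""
    return formulas[ref](cells) if ref in formulas else cells[ref]

before = value("A3")
cells["A1"] = 500      # change one of the figures...
after = value("A3")    # ...and the formula result updates automatically
print(before, after)   # 350 750
```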
Databases
- A database is an organised store of information, for example an address book, list of employees,
list of students, customers or items of assets.
- A database package is used to store records.
- Data can be sorted, filtered for separate viewing.
- Calculations and comparisons between data items can be done.
- Popular database packages are: Microsoft Access, Lotus Approach, Paradox, dBase IV, DataEase.
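Filtering and sorting records, as a database package does, can be sketched over a small invented table of employee records:

```python
# A hypothetical table of records.
employees = [
    {"name": "Moyo", "dept": "Sales", "salary": 900},
    {"name": "Ncube", "dept": "IT", "salary": 1200},
    {"name": "Dube", "dept": "Sales", "salary": 1100},
]

# Filter: select only the Sales department records for separate viewing.
sales = [e for e in employees if e["dept"] == "Sales"]

# Sort: order the selection by salary, highest first.
sales.sort(key=lambda e: e["salary"], reverse=True)

names = [e["name"] for e in sales]
print(names)  # ['Dube', 'Moyo']
```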
Graphics
- These are applications designed solely for designing graphs and text charts/posters.
- They are often used to produce slides for use on overhead projectors, or presentations on
computer-projectors.
- Various types of charts are also produced
- Examples of graphics packages are: Microsoft PowerPoint, Lotus Freelance, Harvard Graphics,
Corel Draw
Desktop Publishing
- Desktop Publishing (DTP) applications give users powerful and versatile page design capabilities.
- The user can incorporate text and graphics on very exact page layouts.
- These applications produce magazines, catalogues, invitation cards, business cards and other
sophisticated documents.
- The application links up well with other applications as the user can import text and graphics from
the other applications.
- Examples of DTP packages are: Microsoft Publisher, PageMaker, Ventura and FrameMaker.
Suites
- Many applications are grouped together into suites which users can purchase as one product
containing, for example, Word-Processing, a Spreadsheet, Graphics application, Desktop
Publishing, Database application.
- Performance - it must have efficiency in both response time and run time.

Sources of Software
Software may be obtained in one of three ways:
In-house
Off-the-shelf
From a contractor.
In-house Development
This approach requires a development team from within the organisation. The team usually comprises programmers and analysts. The team members should be of high calibre, highly trained and reliable.
Advantages
⇒ Internal professionals understand operations better and therefore can produce an accurate solution.
⇒ The Software usually meets user requirements.
⇒ Management are in total control of the development process
⇒ More flexibility - there is more flexibility in making modifications.
⇒ Problem specificity - in-house developed software can give an organisation software programs that
are easily tailored to a unique problem or task.
Disadvantages
⇒ Time and costs of developing the program may be greater than other options
⇒ In-house staff may lack the expertise needed for the project
Off-the Shelf
This is software that can be purchased, leased, or rented from a software company that develops
programs and sells them to many computer users and organisations. Applications such as financial
accounting, business graphics and pay roll may be bought.
Advantages
⇒ Cheaper - the software company is able to spread the software development cost over a large
number of customers, hence reducing the cost any one customer must pay.
⇒ Less risky - the software is existing, hence you can analyse the features and performance of the
package.
⇒ The program is a well-tried and tested product with few errors.
⇒ Less time - Off-the -shelf software is often installed quickly and easily.
⇒ The package is well documented
⇒ The packages require little maintenance
⇒ There is continued support from the supplier through upgrades.
Disadvantages
⇒ The organisation might need to pay for the features that are not required and never used.
⇒ The package may be for general needs and therefore not ideal for the user.
⇒ The user has no direct control over the package.
From a Contractor
This involves contracting out software development to a software house - better known as outsourcing - especially where off-the-shelf packages are not available.
Advantages
⇒ Software houses employ professionals and this may benefit the organisation
⇒ Transfer of skills to the existing professionals in an organisation
⇒ Organization can get support from the contractor.
Disadvantages
All software programs (systems and application) are written in coding schemes called programming
languages. The primary function of a programming language is to provide instructions to the
computer system so that it can perform a processing activity to achieve an Objective or solve a
problem. Program code is the set of instructions that signal the CPU to perform circuit-switching
operations.
Programming languages
Low-level languages
High-level languages
Low-level languages
In machine languages programmers wrote their instructions in binary code (0 and 1), telling the CPU
exactly which circuits to switch on (1) and off (0). Machine language is considered a low-level language because it operates at the level of the computer's own circuitry. It is the language of the CPU and the only language capable of directly instructing the CPU.
Because machine-language programming is extremely difficult, very few programs are actually
written in machine language.
The other disadvantage of machine languages is that they are machine specific.
All languages beyond the first generation are called symbolic languages- they use symbols easily
understood by humans, allowing the programmer to focus
on structuring a problem solution rather than on the complexities of coding specific computer
programs.
Assembly Languages
The commands are written in simple mnemonics (abbreviated forms) instead of binary coding, for example A for ADD, MV for MOVE. It is therefore easier to work with assembly coding than with machine code.
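What an assembler does, translating mnemonics into binary machine codes, can be sketched with an invented opcode table (the mnemonics A and MV follow the examples above; the binary codes are made up):

```python
# Hypothetical opcode table mapping mnemonics to 4-bit machine codes.
OPCODES = {"A": "0001", "MV": "0010", "ST": "0011"}

# A tiny assembly program, one mnemonic per instruction.
program = ["A", "MV", "A"]

# "Assembly": look each mnemonic up and emit its machine code.
machine_code = [OPCODES[m] for m in program]
print(machine_code)  # ['0001', '0010', '0001']
```

Real assemblers also handle operands, labels and addresses, but the core job is this one-to-one translation.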
High-level Languages (Third Generation Languages, 3GLs)
These use more symbolic code. They are problem oriented. They instruct the computer step by step on how to complete an operation. The instructions are English-like and have to be translated into machine code by a compiler or interpreter.
Fourth Generation Languages (4GLs)
⇒ They are at a higher level than 3GLs. They demand fewer lines of code as compared to 3GLs.
⇒ They are easy to learn and their statements are close to natural language. Being easy they are used
to develop simple systems.
⇒ It emphasises what output results are desired more than how programming statements are to be
written.
⇒ Many managers and executives with little or no training in computers and programming are using
fourth generation languages for programming.
Fifth Generation Languages (5GLs)
⇒ These are used to create programs for artificial intelligence and expert systems.
⇒ They are sometimes called natural languages because they use English-like syntax.
⇒ They allow programmers to communicate with the computer using
normal sentences.
DATA COMMUNICATION
Refers to the means and methods whereby data is transferred between processing locations through the
use of communication systems.
Communication systems are defined as systems for creating, delivering and receiving electronic
messages. A communication system comprises:
- A device to send the message
- The channel or communication medium
- A device to receive the message
There are various modes and codes of data transmission signals from the sending to the receiving
device.
Analogue transmission
Analogue signals are continuous sine waves: the channel carries a signal that varies continuously, for
example between +5 and -5 volts. The number of cycles per second is the frequency of the signal and
is expressed in units called hertz (Hz). The human voice forms oscillating patterns of changes in air
pressure; these vibrations act on the telephone microphone and are converted to electrical voltage
patterns that reflect the characteristics of the speech. Analogue transmission is used to transmit voice
or data as analogue signals, as in telephone systems and radio transmission.
[Diagram: analogue signal - a sine wave oscillating between +5 V and -5 V over time]
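A minimal numeric sketch of such a signal (the 1 Hz frequency and 5 V amplitude are simply the figures used above):

```python
import math

def analogue_sample(t, freq_hz=1.0, amplitude=5.0):
    """Voltage of a continuous sine-wave signal at time t (in seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

# Sample one cycle of a 1 Hz signal at quarter-cycle intervals:
# the voltage swings 0 -> +5 -> 0 -> -5 -> 0.
print([round(analogue_sample(t / 4), 2) for t in range(5)])
```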
Digital Transmission
This is the sending of data as discrete pulses of electricity, with the digital symbol 1 representing a
pulse switched on and 0 a pulse switched off. Most computer systems use digital signals to represent
the bits that make up bytes. The number of signal changes per second is called the baud rate.
[Diagram: digital signal - a square wave alternating between 1 (on) and 0 (off) over time]
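The bit-to-pulse idea can be sketched by expanding a byte into its eight on/off pulses:

```python
def to_pulses(byte_value):
    """Expand a byte (0-255) into its eight pulses, most significant bit first."""
    return [(byte_value >> bit) & 1 for bit in range(7, -1, -1)]

print(to_pulses(ord("A")))  # the letter 'A' (65) -> [0, 1, 0, 0, 0, 0, 0, 1]
```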
Modes of Transmission
These are the ways in which devices can communicate via the channel or transmission media.
Simplex transmission
Transmission takes place only in one direction. These are not suitable for long distance transmission
because there is need for acknowledgement or error messages. It is used in the computer - printer
communication. This is also used in radio and television transmission.
[Diagram: Sender → Receiver (one direction only)]
Half Duplex
Messages can be sent both ways but only one way at a time. The channel alternately sends and
receives data but these are not done at the same time. The same device is used for both sending and
receiving. This is used in two-way radio communication.
[Diagram: Sender ⇄ Receiver (one direction at a time)]
Full Duplex
This permits simultaneous transmission of messages in both directions. Sending and receiving can be
done at the same time using the same devices. This is the mode used in modern telephone/cellular
transmission.
[Diagram: Sender ⇄ Receiver (both directions simultaneously)]
Synchronous Transmission is faster and less expensive in that characters are blocked together
and sent as one message, allowing a fuller message to be transmitted in each burst.
Protocols - There is need for there to be a way of signalling the start and end of the message by
the use of data transmission protocols. The use of the parity bits is one method, use of "roger",
"over" are protocols in two-way radio communication.
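As a sketch of the parity-bit method (even parity assumed): the sender appends a bit so that the frame holds an even number of 1s, and the receiver flags any frame in which a single bit has flipped in transit:

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s in the frame is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(frame):
    """True if the received frame still holds an even number of 1s."""
    return sum(frame) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 0, 1])
print(check_even_parity(frame))   # True - frame arrived intact
frame[2] ^= 1                     # simulate one bit corrupted in transit
print(check_even_parity(frame))   # False - error detected
```

Note that a single parity bit detects an odd number of flipped bits only; two errors cancel out, which is why stronger checks are used on noisy channels.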
Switching Alternatives
Circuit Switching
When a call is made the communication channel is opened and kept open until the communication
session is complete.
Message Switching
Each message is sent to the receiver if a route is available. The messages are sent in blocks one at a
time. The message may be stored for later transmission if the route is not available, sometimes this is
called store-and-forward transmission. The message is delivered when the route becomes available or
upon demand from the receiver.
Packet Switching
This involves sub-dividing the message into groups called packets. Each packet is then sent to the
destination separately via the fastest route. At the destination the packets are put in sequential order
and delivered to the receiver. Sometimes, when there is no route open, the packets are stored and then
forwarded once a route opens, so these are also store-and-forward systems. Packet switching thus puts
the network to fuller and better use.
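The split-route-reassemble idea can be sketched as follows (the packet size and the character-offset sequence numbers are arbitrary choices for the example):

```python
import random

def to_packets(message, size):
    """Sub-divide a message into (sequence number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Put arriving packets back into sequential order and rebuild the message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("HELLO WORLD", 4)
random.shuffle(packets)      # packets may arrive out of order via different routes
print(reassemble(packets))   # HELLO WORLD
```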
Communication Channels
a. Telephone Systems - telephone lines are used with online systems. The lines can be divided
into leased lines and dial-up service lines.
The amount of data transmitted depends on the capacity of the communication channel, which is
classified by its bandwidth. Bandwidth is the frequency range of the channel and determines the
maximum transmission rate of the channel, which can be measured in bits per second (bps).
Narrowband - these channels offer the lowest transfer rates, supporting transmission
through telegraph lines.
Voiceband - these are low-speed analogue channels offering rates between 300 and 9600 bps.
Wideband - these offer the highest transfer rates, with data communication through coaxial
cables.
c. Satellite Transmission - [Diagram: signal relayed from the source, up to a satellite above the
earth, and down to the destination]
This form of transmission can carry large amounts of data over wide areas and is in much use
in wide-area television broadcasting. Although transmission is of high quality, setting up the
system is very expensive.
d. Radio Transmission - This form of transmission uses radio waves. Transportation and taxi
companies use it for easier communication, as do the police and the army.
COMPUTER NETWORKS
A network is a number of computers connected through some channel so that they may share
resources and allow access to them by users from other points.
Advantages of networking
a. Resources can be shared e.g. printers, computer files and programmes.
b. More even distribution of processing and other work by the computers and users.
c. More economic and fuller use of computers.
d. Allow for the provision of local facilities without loss of central control.
e. Mutual support and a spirit of cooperation exist.
Disadvantages of networking.
a. There could be congestion at the shared resources.
b. Control of information and confidentiality may be lost.
c. The costs of the system may be considerable.
WAN hardware
Hosts - These provide users with processing, software and access. The host is usually a mainframe
computer with microcomputers connected to it.
Front End Processors (FEP) & Back End Processors (BEP) - These are minicomputers placed in
front of (FEP) or behind (BEP) the main system CPU. A FEP assists the main CPU by accepting
input and performing preliminary operations on it before forwarding it to the mainframe CPU for
further processing; a BEP sits after the main CPU to assist with the output activities. They generally
manage communications, so the main system CPU can concentrate on processing work without
having to handle input and output activities.
Modems - Modem is short for Modulator-Demodulator. Telephone lines, the common link media
between computers, use analogue signals whereas computers use digital signals, so there is need to
convert the signals. From the sending computer, the digital signal is modulated into an analogue
signal for transmission over the telephone link. At the receiving end the analogue signal is
demodulated (converted back) into a digital signal.
Modulation can be done using a number of methods. Amplitude Modulation (AM) modifies the
amplitude of the carrier wave to represent the binary digits 0 and 1. Frequency Modulation (FM)
adjusts the frequency to represent the bits 0 and 1. Phase Shift Modulation shifts the phase of the
wave by fixed amounts so that 0 and 1 correspond to different phase shifts.
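A bare-bones sketch of the amplitude method (the voltage levels and threshold are invented for the example): each bit sets the carrier's amplitude, and the receiver recovers the bit by comparing the received amplitude against a threshold:

```python
def modulate_am(bits, high=5.0, low=1.0):
    """Amplitude modulation sketch: bit 1 -> high-amplitude carrier, bit 0 -> low."""
    return [high if bit else low for bit in bits]

def demodulate_am(amplitudes, threshold=3.0):
    """Recover the bits by thresholding the received amplitudes."""
    return [1 if a > threshold else 0 for a in amplitudes]

sent = [1, 0, 1, 1, 0]
print(demodulate_am(modulate_am(sent)) == sent)  # True - round trip preserved
```

FM and phase-shift modulation follow the same pattern, varying frequency or phase instead of amplitude.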
Terminals - These are the microcomputers connected to the system on which users can work to
send data or access information. Terminals may be dumb or intelligent. Dumb terminals have limited
memory and intelligence, while intelligent terminals have processing and memory capabilities so
that they can process data themselves.
Multiplexor - it is a device that subdivides one large channel so that many users can use it at the
same time. There are two types of multiplexors: time division and frequency division.
Time Division Multiplexors (TDM) - It slices multiple incoming signals into small time
intervals that are then transmitted over a channel and then split by another TDM at the
receiving end.
[Diagram: terminals → multiplexor → single channel → multiplexor → host]
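The time-division idea can be sketched with three character streams (equal stream lengths are assumed to keep the example short):

```python
def tdm_transmit(streams):
    """Interleave one character from each stream per time slot (round robin)."""
    return [stream[slot] for slot in range(len(streams[0])) for stream in streams]

def tdm_receive(slots, n_streams):
    """De-multiplex: every n-th slot belongs to the same original stream."""
    return ["".join(slots[i::n_streams]) for i in range(n_streams)]

line = tdm_transmit(["ab", "cd", "ef"])
print(line)                   # ['a', 'c', 'e', 'b', 'd', 'f']
print(tdm_receive(line, 3))   # ['ab', 'cd', 'ef']
```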
Terminal Controllers/Concentrators - these are used to connect many terminals to a single line, so
that the individual terminals do not communicate directly with the FEP.
Protocol Converters - Because of the diversity of technology in communication, WANs use
various devices, channels, modes and codes. To allow communication between and among all
these, protocol converters are used to translate signals from one system to another. Typical
protocols include HTTP (HyperText Transfer Protocol), used in Internet communications, and
TCP/IP.
WAN Software
In order for the WAN hardware to operate there should be appropriate software. This includes
telecommunications access programmes, which handle messages between the host's memory and the
remote devices (this programme could reside in the FEP), and the network control programme,
which runs the network off the host by checking whether terminals have messages to send and by
editing incoming messages.
Qualities of a good network:
1. Reliability of the network - there should be minimal breakdown and errors in the system.
2. Response time of the Network - there should be less time spent on picking the required
signals when required.
WAN Topology
[Diagram: terminals → multiplexor → modem → telephone line → modem → multiplexor → host]
LOCAL AREA NETWORKS
Network Workstations - these are usually microcomputers from which a user can work to enter or
receive data from the network. These could be dumb terminals or diskless workstations.
File Servers - These are computer systems attached to a network that control access to and otherwise
manage hard disks, allowing the workstations to share disk space, programmes and data, and that
control the network.
Print Servers - these are computers that have the duty of managing the printer resources, that is
allocating print time, and carrying out other print routines.
Cabling - these connect the different computer systems and devices in the network. Sometimes LANs
use radio waves and thus are wireless.
The Network Operating System - These are programmes that manage the operations of the network,
allowing the user to install the network and the application software, analyse and solve network
problems, manage the sending, routing and delivery of messages, and do other routine and
housekeeping duties for the network.
LAN Topologies
Topologies are ways in which a network can be structured or arranged depending on the resources, the
needs and the structure of the given organisation.
1. Star Topology - there is a central node - the file server (which could be a minicomputer or
microcomputer) to which all other computers in the system are connected. All the processing
and storage are done by the host (central computer). There is no direct interterminal
communication.
The network is suitable for use in offices and homes.
Advantages
a. It is easy to trouble shoot
b. It is economic and offers intensive use of the terminals
c. Requires a single intelligent computer; the terminals can be dumb, thereby saving on the
cost of the network.
d. Each terminal has direct and quicker access to the host
Disadvantages
a. It is prone to failure - failure of the central node means loss and breakdown of the whole system.
b. The cabling may be costly
c. Viruses can be transmitted easily within the network
[Diagram: star topology - terminals connected directly to a central host]
2. Bus Topology
All the computers are connected to a linear transmission medium called a bus through the use of a tap.
The tap creates a link from the computer up to the medium and the network.
Advantages
a. Costs of setting up and maintaining the network are low.
b. Each terminal can communicate directly with any other on the network.
Disadvantages
a. There is a risk of network malfunction.
b. They are more troublesome than the star topology.
c. Troubleshooting is more difficult.
d. There is a risk of data collision.
3. Ring Topology
It is made up of a series of nodes connected to each other to form a closed loop. Each node can accept
data transmitted to it from an immediate neighbour and forward data not addressed to it to the next
neighbour. Messages are transmitted in packets or frames. Each node has an address, and a packet is
delivered to the node bearing the required address.
Disadvantages
a. There is poor response time
b. There is a risk of data interception, so there should be data encryption.
c. Unwanted data may continue to circulate the network, thus reducing traffic carrying capacity
of the network.
4. Mesh Topology
This topology combines features of the ring and the star topologies. Bi-directional links are established between the nodes.
This offers better communication and reduces the risk of data collision because of the existence of
alternative routes. The network has quick response time and is very accurate. However, the costs of the
cabling are quite considerable.
5. Tree Topology
The nodes are connected to form a hierarchy. Messages are passed along the branches until they reach
their destinations. These networks do not need a centralised computer to control communications.
They are also very easy to troubleshoot and repair because the branches can be separated and serviced
separately.
THE INTERNET
The Internet is a worldwide network of computer systems. Millions of computer networks in different
parts of the world are connected by telephone lines, cables, radio and microwave links and modems.
Because the telephone system is not yet fully digitalised, there is need to convert the predominantly
digital computer signal to analogue and to reconvert it to digital. This is done by the modem (short for
Modulator-Demodulator), which sits between the computer and the telephone line. Modems may be
external or internal to the PC.
There are many organizations that offer internet services. These are called Internet Service Providers
(ISPs) and they usually charge a monthly fee for the connection. Some ISPs do not charge a fee for the
service.
If you want to explore the Internet, a web browser is required. A web browser contains programmes
that assist in the surfing of the internet. The most used web browser is Microsoft Internet Explorer.
Web site – a collection of related web pages held on a web server and identified by a common
domain name.
Web address – the unique address (URL) that identifies a web site or page on the Internet.
Electronic Mail (E-mail)
Every user of e-mail has a unique address. E-mail
addresses contain an @, for example, [email protected] . One can send and receive messages the same
way an ordinary letter is sent and received. E-mail messages are sent from user to user on a computer
network, with messages being stored in the recipient’s mailbox or inbox. The next time the user logs on,
he is told that there is a new message; the messages can then be read, printed or replied to.
E-mail allows for the sending of attachments. These are files that contain greater detail and are based
on a particular application package like Microsoft Word or Microsoft Excel. One can even send a CV, an
assignment or other document using e-mail.
E-mail makes use of an existing internet connection and software. However, there is a misconception
in the minds of many students that e-mail and internet are one and the same thing. Internet provides
many facilities and the e-mail is only one of them.
Advantages of e-mail
1. Speed - Messages are received instantly
- Provides certainty of delivery of mail
- Reading, sending replies and redirecting messages is faster.
2. Time - Less time is spent on the phone waiting to be put through, finding people
unobtainable, holding the line because it is engaged, and leaving messages and
having to call again.
3. Flexibility - Anyone with a PC at home can send and receive messages out of office
hours.
- Messages can be sent and received at any computer that is linked to the network.
Disadvantages of e-mail
1. Delay – if a recipient takes long to log on, the speed advantage is lost
- If there is a problem with the recipient’s server, one may not know immediately if
the message has been received or not.
The Intranet
It is a network of computers, usually within a company, that uses e-mail and browser software but is
not part of the internet. Employees can use the Intranet to access information related to the company
such as training, social activities, job opportunities and product information. It enables employees to
share information.
E-commerce
It involves the internet transactions of goods and services to businesses and consumers and can
include:
- retail – you can buy virtually anything on the Internet, e.g. books
- banking – there are online banking facilities that allow the payment of bills and
access of balances.
- Travel arrangements – airline and rail tickets can be bought and sold on the internet
and bookings made thereon.
Consumers connect to the online service and can then order goods and pay for these using a credit or
debit card.
DATA PROCESSING
It is the process of collecting data and converting it into information. It may be manual (where only
human effort is used), semi-manual where human effort is aided by an electronic device or
mechanical/electronic, where computers replace human effort.
1. Centralised Processing
All processing is done by one central computer to which the other points are connected.
Disadvantages
a. The whole system is disturbed when the computer is down.
b. Users of the system have little control over the data even their own.
c. Loss of data by the processing computer may mean loss of data at other points.
d. Needs specialists to set up and maintain the system and its environment.
e. Processing of data may be slow due to congestion at the central computer.
2. Distributed Processing
[Diagram: host computer linked to processing points 1-6]
Advantages
a. Data lost at any point may be recovered from the host
b. Faster processing of data
c. Processing may occur even if the host computer is down
d. Users have some control over their data
e. Problems can be solved at the different points
Disadvantages
a. The system needs expensive equipment and has high maintenance costs.
b. Management and control are difficult.
c. There is data duplication at the different points.
d. There is no uniformity of data.
e. There is no overall control.
3. Batch processing
A central computer system normally processes all transactions against a central data base and produces
reports at intervals. In batch processing transactions are accumulated over a period of time and
processed periodically. A batch system handles the inputting, processing, job queuing and transmission
of the data from the remote sites with little or no user intervention. Batch processing involves:
a. Data gathering from source documents at the remote site through the remote job entry (RJE)
b. recording the transaction data onto some medium e.g. magnetic tape or disk.
c. Sorting the transactions into transaction files
d. Processing of the data
e. Updating the master file.
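The steps above can be sketched in miniature (records are dicts keyed by account number; the field names are assumptions for the example):

```python
def batch_update(master, transactions):
    """Sort the accumulated transactions, then apply them to the master file in one run."""
    for txn in sorted(transactions, key=lambda t: t["account"]):
        master[txn["account"]]["balance"] += txn["amount"]
    return master

master = {1: {"balance": 100}, 2: {"balance": 50}}
batch = [{"account": 2, "amount": -20},     # transactions accumulated over the period
         {"account": 1, "amount": 30}]
print(batch_update(master, batch))          # {1: {'balance': 130}, 2: {'balance': 30}}
```

Between runs the master file is untouched, which is why batch-maintained master files are often out of date.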
Advantages
a. It is economical when large amounts of data are being processed
b. Suitable where reports and information are needed periodically.
c. Reduced risk of error due to the time involved in validation and verification.
Disadvantages
a. Master files are often out of date
b. Immediate updated responses cannot be made.
4. On-line Processing
It is the input of transactions while the input device is connected directly to the main CPU of the
system. There is no need to accumulate the transactions into a batch first.
Advantages
a. Items can be input much more easily and quickly.
b. Many errors are dealt with by the operator at entry
c. Time is saved because a batch does not have to be produced first.
d. User can recognise anomalies in the data being entered
Disadvantages
a. The system may be more expensive than batch processing.
b. Sometimes accuracy of data depends on the operator who might fail to detect or prevent some
errors.
c. Sometimes source documents are not used, e.g. in the case of telephone orders.
Real-time Processing
Data is fed directly into the computer system from on-line terminals, without separate storing and
sorting steps (these are done on-line).
Advantages
Processing is instantaneous
Files and databases are always up to date
Disadvantages
The systems are expensive and complex to develop and put up.
Data with errors may be processed with damaging effect.
Hardware costs are high, there is need for on-line terminals, more CPU power, large on-line
storage and back up facilities.
Time-sharing
The CPU's time is divided into slices that are allocated to the users in turn.
Advantages
a. Each user is given a chance
Disadvantages
The user may not require a service at the time his slice is given - this results in too much excess
capacity at some periods.
The Data Processing Cycle
[Diagram: data acquisition (from source documents) → input → processing ↔ storage → output]
Stage 1
Data acquisition - this is the collection of data from source documents for input into the computer
system.
Stage 2
Input/Capture - This is the putting of the acquired data into the system e.g. through typing, scanning,
or other forms of input.
Validation - the quality of the data is checked before it is entered or processed and errors detected and
eliminated.
Verification - data is checked for mistakes in copying so that it is correct.
There may be garbage in, garbage out (GIGO), meaning that information systems will produce
erroneous output if provided with erroneous input data or instructions. To avoid GIGO, common
validation checks are done:
a. Checking data reasonableness, e.g. pregnancy for a three year old would be unreasonable.
b. Checking data consistency - e.g. it would be inconsistent to record a pregnant male.
c. Checking ranges and limits, e.g. it would be impossible to have 30 hours worked by one worker
in a day.
d. Timeliness - that data is not out of date.
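The checks above can be sketched as a single validation routine (the field names and limits are assumptions for the example):

```python
def validate(record):
    """Run reasonableness, consistency and range checks on one input record."""
    errors = []
    if not 0 <= record["age"] <= 120:
        errors.append("range: age out of limits")
    if record.get("pregnant") and record["age"] < 10:
        errors.append("unreasonable: pregnancy at this age")
    if record.get("pregnant") and record["sex"] == "M":
        errors.append("inconsistent: pregnant male")
    if record["hours_worked_today"] > 24:
        errors.append("range: more than 24 hours in a day")
    return errors

print(validate({"age": 3, "sex": "F", "pregnant": True, "hours_worked_today": 2}))
# ['unreasonable: pregnancy at this age']
```

A record is entered or processed only when the returned error list is empty; anything else is reported back for correction.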
Stage 3
Processing - this is the key part of the cycle where data is converted into information. This is where
calculations and other logical and manipulative operations on the data are done. Specific applications
are used to process the data e.g. word processing, spreadsheet, payroll packages etc
Stage 4
Storage - the data and processed results are held on storage media for later use.
Stage 5
Output - The processed information is eventually displayed for use by the user through the various
output systems - printers, VDUs, sound cards & speakers.
The job involving the capture, processing and output of data and information involves a number of
people. Sometimes there may be one person to do this and often there may be a department solely
responsible for data processing or for Information technology.
The data processing department interacts with all other departments from where it gets data and for
whom it processes and eventually provides the information to.
The main functions of a DP department are:
a. Design and install a custom built system of data capture and processing.
b. Provision of advice to the organisation on matters regarding the processing of information e.g.
the selection of the correct devices for input, process, storage and output.
c. Provides advice on the installation of a package or information system
d. Manage the organisation's data processing resources.
A typical DP/IT department
Data Processing Manager
[Organisation chart: computer operators (CO) reporting up to the Data Processing Manager]
2. System Analyst
The title and function of this person may vary from organisation to organisation, or as an organisation
develops from one stage to another. As the computer is applied to different organisational problems,
the duties of the analyst may change.
A business analyst would require less skill than the system designer or the technical analyst. These
personnel deal with more or less the same task but with differing depth.
Main Duties
a. Examine the feasibility of potential computer applications and to consider all the various
approaches to computerisation that are available.
b. To perform the proper analysis of user systems and requirements.
c. Develop a cost benefit analysis in conjunction with the users.
d. Design systems which take into account not only the computer procedures but also the clerical
and other procedures around the computer system.
e. To specify the checks and controls to be incorporated into the system, in conjunction with the
audit staff.
f. To specify the most appropriate processing techniques to be used e.g. micro, mini or
mainframe, batch or real time processing.
g. To ensure that there is proper communication and clear instructions at each stage of the project
e.g. programme specification, file set up, operating instructions, print out volumes.
h. To ensure the system is properly set up and documented.
i. To ensure a proper environment for system testing and pilot running and parallel running of the
system as may be appropriate.
3. Programmer
After design, the programmer comes in. He uses the programme specifications produced by the
analyst/system designer to develop the programme. The programme specifications may consist of
file and record layouts, field descriptions, and report and screen layouts.
A flow chart or diagram indicating the main logical steps in the proposed program may be made.
Duties
a. To reach an understanding of what each programme is expected to do and to clarify any
problems with the analyst or systems designer.
b. To design the structure of the programme in accordance with the installation's standards.
c. To produce a working, efficient programme using the installation's standards within the
budgeted time and funds.
d. To test programmes thoroughly both as a unit and in relation to other programmes.
e. To provide the required programme documentation.
Once the programme is in place, the maintenance programmer would take the responsibility of
correcting any subsequent problems and recommend any improvements.
4. Systems Programmer
He specialises in non-application programmes, e.g. operating systems and data communications software.
Duties
a. Liaising with the computer supplier to keep abreast of operating system changes.
b. Supporting systems analysts and programmers with queries on system software performance
and features.
c. Assisting the programmer to interpret and resolve problems which appear to be caused by the
system software rather than application software.
5. Application programmer
He writes programmes or adapts software packages to carry out specific tasks or operations for the
computer users, e.g. a sales analysis programme for the marketing department.
Duties
a. To discuss the programme specification with the analyst.
b. To write the source program modules.
c. To test the programme and debug it.
d. To maintain programmes correcting errors, making improvements & modifications to allow for
changing business methods, circumstances or equipment.
e. Encode the procedure detailed by the analyst in a language suitable for the specified computer.
f. To liaise with the analyst and other users on the logical correctness of the programme.
6. Computer Operator
S/he operates the mainframe or minicomputer and is responsible for the efficient running of the
computer equipment; if this is not ensured, efficient running time of the computer may be lost.
Duties
a. Collecting files and programs required for a computer run from the library.
b. Loading magnetic tapes and disks into drives.
c. Putting stationery into the printer.
d. Carrying out routine maintenance such as cleaning the tapes and read/write heads.
e. Ensure the equipment is running efficiently and reporting any faults to the technicians.
f. Replacement of computer accessories e.g. toner cartridges, ribbons, ink.
g. Switching the computer on/off.
8. Database Administrator
This is a person responsible for planning, designing and maintaining the organisation's database.
This person relates to the management, system analysts, programmers and other stakeholders in the
organisation. He needs to have adequate managerial and technical abilities to suit the job. He
therefore must have a sound knowledge of the structure of the database and the DBMS.
Duties
a. Ensure that the database meets the needs of the organisation.
b. Ensure facilities for the recovery of data
c. Ensure the functioning of report generation systems from the DBMS
d. The DBA is also responsible for the documentation of the DBMS through designing and
availing the data dictionary and user manuals, giving direction on such matters as the general
use of the database, access to information, deletion of records from the system, and the general
validation and verification of data.
Duties of the personnel overlap and depending on the size of the organisation or the IT department
some duties are done by one person.
File Concepts
The purpose of a computer file is to hold data required for providing information. A computer file is a
collection of related records. Records consist of fields and the fields are made up of characters. A
character is the smallest element of a file. A character may be a letter of the alphabet, a digit or of a
special form (symbols).
Logical files show what data items are contained and what processing may be done while physical
files are viewed in terms of how data is stored on storage media.
1. Master Files – They hold permanent data for use in applications such as stock control, credit
control. Usually much of the data items in these files do not change frequently or with each
transaction e.g name, address or date of birth.
2. Transaction files – These are also called movement files. They hold temporary records of
values. They are used to update the master file and are overwritten after the updating of the
master file.
3. Look up files – they are reference files from which such information as prices list and mailing
list can be obtained.
4. Archive files – these are files that are used to store information that has not been in use in the
recent past and would not be in use in the near future – so are used to store historical data.
File Structures
This is the way records are stored on the storage device or medium, that is, how the files are
arranged. The arrangement also affects the way the files will be accessed.
1. Sequential Files – the records are stored and accessed in sequence, i.e. one after another. Access
to a record depends on its position on the storage medium: the medium is wound (spooled)
through to the relevant record. This is the structure of filing on magnetic tapes. It is most
suitable where all the records on the file are being considered, e.g. in the preparation of a
payroll, but would be inefficient where the selection of one record is necessary, e.g. access to
one employee on the payroll.
2. The Direct File Structure – The records are randomly stored. Access and storage of the records
do not depend on the physical position of the record in the file. This is the form of file structure
on magnetic disks, floppy disks and optical disks. Each record is given a specific disk address
by which it is recognised and accessed; this is the structure used to store and access records and
files on Automatic Teller Machine (ATM) systems. Access to records is much faster. However,
several records may generate the same address (a collision); this problem is dealt with by
conflict-resolution mechanisms in the computer and filing systems.
3. Indexed Sequential File Structure – This combines the features of sequential and direct file
structures. Records are stored sequentially on a direct access medium like the hard disk and
each record occupies an addressable location identifiable by the unique disk address. An index
is developed to keep track of the records and their physical locations on the storage medium.
The records can be stored and accessed sequentially starting from the beginning moving
through the records one at a time or can be stored and accessed directly e.g. the way the cell
phone’s phone book is structured.
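The two access paths of an indexed sequential file can be sketched like this (records are (key, value) pairs, and the in-memory dict stands in for the disk index):

```python
class IndexedSequentialFile:
    """Records kept in key order for sequential reads, plus an index for direct access."""
    def __init__(self, records):
        self.records = sorted(records)   # sequential storage, ordered by key
        self.index = {key: pos for pos, (key, _) in enumerate(self.records)}

    def sequential_read(self):
        """Walk the records front to back, one at a time."""
        return [value for _, value in self.records]

    def direct_read(self, key):
        """Jump straight to one record via its index entry."""
        return self.records[self.index[key]][1]

f = IndexedSequentialFile([(3, "Chipo"), (1, "Tendai"), (2, "Rudo")])
print(f.sequential_read())   # ['Tendai', 'Rudo', 'Chipo']
print(f.direct_read(2))      # Rudo
```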
On all the types and structures discussed above a user may add or remove a file or record, modify
contents of the file or record, view the contents of the data and create reports as may be necessary.
These operations on the data can be done on-line i.e. files and records updated as the transactions
are being entered from a device connected onto the main CPU of the system.
DATABASES
A database is a single organised collection of structured data. It is a collection of related files that
are stored in a uniform way for ease of access. It can also be defined as a collection of logically
related records or files, previously stored separately, brought together so that a common pool of
data records is formed.
Data Independence - data can be defined and described separately from the application
programme. Where there is no data independence a change in any record would then necessitate
the changing of the programme to access the file.
Data Redundancy - The same data element appears in a number of files, serving the same purpose
in each; the duplicated copies usually lie unused.
Data Inconsistency - this is when redundant data is not updated accurately so much that there are
differences in the data elements on the different files.
Disadvantages of Databases
a. Concurrence problems - where more than one user accesses and attempts to update the same
record at the same time - file or record locking is used to prevent this.
b. Ownership problems - sometimes some individuals tend to own the data and thus refuse access
by other individuals or departments in the organisation.
c. Resources problem - with database extra resources are needed e.g. more workstations and other
devices.
d. Security problems - there is increased exposure to unauthorized entry into the data this could
be reduced by the use of regularly changed passwords and by physically denying access to
unauthorized users.
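The concurrence problem in (a), and the locking that prevents it, can be sketched in Python. The account name and amounts are assumed for illustration; the point is that the lock forces simultaneous updates to apply one after the other instead of overwriting each other.

```python
# Record locking sketch: without the lock, two users reading the same
# balance at once could each write back their own result and lose an update.
import threading

balance = {"acc1": 100}
lock = threading.Lock()

def deposit(account, amount):
    with lock:  # only one thread may edit the record at a time
        current = balance[account]
        balance[account] = current + amount

threads = [threading.Thread(target=deposit, args=("acc1", 10)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all five deposits are applied; none are lost
```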
A Database Management System (DBMS) is a complex system of software that constructs, expands
and maintains the database. It provides a controlled interface between the database and the user.
Data repositories - these are an extension of the data dictionary which provide a directory of
component parts of the dB and other information resources in the organisation.
Data languages - a definition language is needed to place the data in the data dictionary;
manipulation commands such as sort, get and find are then used to work with the data.
Teleprocessing Monitor - this is software that controls and manages communication between
remote terminals and the central system, e.g. to and from sales points in a large departmental store.
Applications Development Software - this is a set of development software used to help the user
programmer to develop database software.
Security Software - this is a set of software used to minimize unauthorized access to the database.
Archiving and Recovery Systems - these systems are used to store backups of the original record
so that if the original database is damaged the information can still be recovered.
Report Writers - these allow the user to obtain reports from the data quicker and easier.
Records in the database may be set in different ways depending on the relationships between the
records themselves. These relationships can be represented in entity-relationship (E-R) diagrams.
1. One-to-one relationship (1:1) - one record is related to one other record, e.g. a single parent
record to one child record. This could be where one sales representative deals with one
customer, for example.
2. One-to-many relationship (1:N) - one parent record relates to many child records. Viewed
from the other side this is a many-to-one (M:1) relationship, e.g. many sales representatives
relating to one customer, or many lecturers relating to one student.
3. Many-to-many relationship (M:N) - two or more parent records relate to two or more child
records.
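The relationship types above can be sketched as simple Python structures. The representative, customer, lecturer and student names are assumed purely for illustration.

```python
# One-to-one: each sales representative deals with exactly one customer.
rep_to_customer = {"rep1": "custA", "rep2": "custB"}

# One-to-many: one parent record linked to many child records.
customer_orders = {"custA": ["order1", "order2", "order3"]}

# Many-to-many: lecturers and students, each side linked to many of the other.
lecturer_students = {
    "lect1": {"stud1", "stud2"},
    "lect2": {"stud1"},  # stud1 is taught by two lecturers
}
```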
A replicated dB is one that has been copied and is kept at different geographical locations. This allows
for easier recovery if one site fails and reduces data transmission costs, as there would be little long
distance transmission; access is thus also faster. It is, however, expensive in terms of computer resources.
A partitioned dB is one that is split into segments and each segment distributed to the relevant
location. This reduces transmission costs and provides a faster turnaround time in input, processing
and output. There is reduced data inconsistency and redundancy. However, there is need for more
computer resources that are costly.
DATA SECURITY
This refers to measures to reduce unauthorized access to, use and destruction of an organisation’s data
and data resources.
Data is a valuable resource like any other asset of an organization. Data like money can be stolen and
exchanged for some value. Organisations have some pieces of data that are confidential and these need
to be secure. Every organization needs to take security of hardware, software and data seriously
because the consequences of breaches of security can be extremely damaging to a business. This may
lead to loss of production, cashflow problems, loss of customers and reputation.
Threats to security come from outside (external) and inside (internal) the organisation
Securing data entails making sure that the computers are in the right environment, there are right
software measures to reduce loss or theft of data.
Security of Equipment
There is need to look after the computer hardware well to avoid loss of data or the computers
themselves.
1. Ventilation – the room has to be adequately ventilated. If ventilation is poor the computer
may overheat and thus fail to operate properly.
2. Power supply – the power supply should be of the right voltage and come from a safe socket
outlet. Power cables should not cross the room, to avoid interfering with free movement.
3. Use of Uninterruptible Power Supplies (UPS) – in the event of unanticipated power loss or a
power surge there should be a standby power alternative so that the user's information is not lost.
4. Carpet – carpets are good dust absorbers and thus reduce dust in the room; dust interferes
with the operation of electronic equipment.
5. Curtains – Curtains reduce the amount of light getting to the screens and other computer
equipment. Light damages screens.
6. Lockable doors – doors should be lockable to avoid unauthorized access to the computers or
theft or vandalism of the computer systems. The key should be kept with some responsible
person.
7. Metal Bars and Shutters – fit the room with metal bars and shutters.
8. An alarm system – an alarm system may be installed to warn of an intrusion.
9. ID badges – all users should use ID badges for access to the room or building.
10. Security guards – have a twenty-four-hour guard for the room.
11. Attach computers permanently to desks using clamps to avoid theft of the computer(s).
12. Record all equipment serial numbers for use if equipment is stolen.
Security of Data
Accidental Loss
To reduce the risk of accidental loss, data should be backed up – there should be a saved copy of
the original file that is kept on a different medium and in a different place.
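A minimal Python sketch of the backup idea follows. The paths are illustrative only; a real backup would target genuinely separate media kept in a different place.

```python
# Backup sketch: copy the original file into a backup location, keeping
# file metadata such as timestamps.
import os
import shutil

def back_up(original_path, backup_dir):
    """Save a copy of the original file into the backup directory."""
    os.makedirs(backup_dir, exist_ok=True)
    destination = os.path.join(backup_dir, os.path.basename(original_path))
    shutil.copy2(original_path, destination)  # copy2 preserves metadata
    return destination
```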
Deliberate Damage
Viruses
A computer virus is a harmful program that copies itself onto other programmes and destroys them or
interferes with their proper functioning.
Viruses are transmitted from computer to computer. They copy themselves quickly. Effects of viruses
may be mild to severe. Some viruses are harmless and computers may operate normally.
Characteristics of viruses
COMPUTERS AND HEALTH
Computers and computer equipment can have negative health effects on users. Research is still being
carried out to determine the extent of the effects on the health of users.
(a) Repetitive Strain Injury (RSI)
This refers to disorders that affect the hands, wrists, arms, shoulders or neck of computer
operators. It is inflammation of the joints caused by making the same small movements
over and over again. This can cause pain, numbness, swelling and the inability to lift or grip
objects. In some cases, operators have become permanently disabled.
To reduce RSI
- make sure the desk and chair are at suitable heights
- sit at a comfortable distance from the keyboard
- make sure that lower arms are horizontal and wrists straight when using a keyboard or
mouse.
- Use a wrist rest if necessary so that you do not rest your wrists on the edge of the table
or keyboard.
- Take frequent breaks to stretch your arms and fingers
(b) Eyestrain
Using the VDU for long periods at a time may affect a user's eyes and in some instances cause
headaches.
(c) Posture
Sitting incorrectly or without right support at a computer terminal for long periods may result
in back, neck and upper arm pains.
(d) Moving Equipment
It may be necessary to move a computer; if this is not done properly it may result in injury to the
worker. There may also be problems relating to power cables and the falling over of computer
equipment that is not set up properly.
(e) Radiation
Some computer equipment produces radiation that may have long-term effects on users or on unborn
babies.
The systems development life cycle is a project management technique that divides complex projects
into smaller, more easily managed segments or phases. Segmenting projects allows managers to verify
the successful completion of project phases before allocating resources to subsequent phases.
Software development projects typically include initiation, planning, design, development, testing,
implementation, and maintenance phases. However, the phases may be divided differently depending
on the organization involved. For example, initial project activities might be designated as request,
requirements-definition, and planning phases, or initiation, concept-development, and planning phases.
Note: Examiners should focus their assessments of development, acquisition, and maintenance
activities on the effectiveness of an organization’s project management techniques. Reviews should be
centered on ensuring the depth, quality, and sophistication of a project management technique are
commensurate with the characteristics and risks of the project under review.
INITIATION PHASE
Careful oversight is required to ensure projects support strategic business objectives and resources are
effectively implemented into an organization's enterprise architecture. The initiation phase begins
when an opportunity to add, improve, or correct a system is identified and formally requested through
the presentation of a business case. The business case should, at a minimum, describe a proposal’s
purpose, identify expected benefits, and explain how the proposed system supports one of the
organization’s business strategies. The business case should also identify alternative solutions and
detail as many informational, functional, and network requirements as possible.
The presentation of a business case provides a point for managers to reject a proposal before they
allocate resources to a formal feasibility study. When evaluating software development requests (and
during subsequent feasibility and design analysis), management should consider input from all
affected parties. Management should also closely evaluate the necessity of each requested functional
requirement. A single software feature approved during the initiation phase can require several design
documents and hundreds of lines of code. It can also increase testing, documentation, and support
requirements. Therefore, the initial rejection of unnecessary features can significantly reduce the
resources required to complete a project.
If provisional approval to initiate a project is obtained, the request documentation serves as a starting
point to conduct a more thorough feasibility study. Completing a feasibility study requires
management to verify the accuracy of the preliminary assumptions and identify resource requirements
in greater detail.
The feasibility support documentation should be compiled and submitted for senior management or
board study. The feasibility study document should provide an overview of the proposed project and
identify expected costs and benefits in terms of economic, technical, and operational feasibility. The
document should also describe alternative solutions and include a recommendation for approval or
rejection. The document should be reviewed and signed off on by all affected parties. If approved,
management should use the feasibility study and support documentation to begin the planning phase.
PLANNING PHASE
The planning phase is the most critical step in completing development, acquisition, and maintenance
projects. Careful planning, particularly in the early stages of a project, is necessary to coordinate
activities and manage project risks effectively. The depth and formality of project plans should be
commensurate with the characteristics and risks of a given project.
DESIGN PHASE
The design phase involves converting the informational, functional, and network requirements
identified during the initiation and planning phases into unified design specifications that developers
use to script programs during the development phase. Program designs are constructed in various
ways. Using a top-down approach, designers first identify and link major program components and
interfaces, then expand design layouts as they identify and link smaller subsystems and connections.
Using a bottom-up approach, designers first identify and link minor program components and
interfaces, then expand design layouts as they identify and link larger systems and connections.
Contemporary design techniques often use prototyping tools that build mock-up designs of items such
as application screens, database layouts, and system architectures. End users, designers, developers,
database managers, and network administrators should review and refine the prototyped designs in an
iterative process until they agree on an acceptable design. Audit, security, and quality assurance
personnel should be involved in the review and approval process.
Management should be particularly diligent when using prototyping tools to develop automated
controls. Prototyping can enhance an organization’s ability to design, test, and establish controls.
However, employees may be inclined to resist adding additional controls, even though they are
needed, after the initial designs are established.
Designers should carefully document completed designs. Detailed documentation enhances a
programmer’s ability to develop programs and modify them after they are placed in production. The
documentation also helps management ensure final programs are consistent with original goals and
specifications.
Organizations should create initial testing, conversion, implementation, and training plans during the
design phase. Additionally, they should draft user, operator, and maintenance manuals.
Application controls include policies and procedures associated with user activities and the automated
controls designed into applications. Controls should be in place to address both batch and on-line
environments. Standards should address procedures to ensure management appropriately approves and
controls overrides. Refer to the IT Handbook’s "Operations Booklet" for details relating to operational
controls.
Designing appropriate security, audit, and automated controls into applications is a challenging task.
Often, because of the complexity of data flows, program logic, client/server connections, and network
interfaces, organizations cannot identify the exact type and placement of the features until interrelated
functions are identified in the design and development phases. However, security, integrity, and
reliability requirements should be considered as early in the project as possible.
Standards should be in place to ensure end users, network administrators, auditors, and security
personnel are appropriately involved during initial project phases. Their involvement enhances a
project manager's ability to define and incorporate security, audit, and control requirements. The same
groups should be involved throughout a project’s life cycle to assist in refining and testing the features
as projects progress.
Application control standards enhance the security, integrity, and reliability of automated systems by
ensuring input, processed, and output information is authorized, accurate, complete, and secure.
Controls are usually categorized as preventative, detective, or corrective. Preventative controls are
designed to prevent unauthorized or invalid data entries. Detective controls help identify unauthorized
or invalid entries. Corrective controls assist in recovering from unwanted occurrences.
Input Controls
Automated input controls help ensure employees accurately input information, systems properly
record input, and systems either reject, or accept and record, input errors for later review and
correction.
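The input-control behaviour described above can be sketched in Python. The field names and validation rules here are assumed purely for illustration; the point is that invalid entries are rejected and retained for later review and correction rather than silently recorded.

```python
# Input control sketch: validate each entry before recording it; reject and
# log invalid entries so they can be reviewed and corrected later.
accepted, rejected = [], []

def validate(entry):
    errors = []
    if not entry.get("account", "").isdigit():
        errors.append("account must be numeric")
    if not (0 < entry.get("amount", 0) <= 10_000):
        errors.append("amount out of range")
    return errors

def input_record(entry):
    errors = validate(entry)
    if errors:
        rejected.append((entry, errors))  # kept for review and correction
    else:
        accepted.append(entry)

input_record({"account": "12345", "amount": 50})
input_record({"account": "12A45", "amount": 50})  # rejected: bad account
```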
Processing Controls
Automated processing controls help ensure systems accurately process and record information and
either reject, or process and record, errors for later review and correction. Processing includes merging
files, modifying data, updating master files, and performing file maintenance.
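A master-file update with a simple processing control might look like the following Python sketch. Account numbers and amounts are assumed for illustration; a transaction for an unknown account is flagged and kept for later review instead of being lost.

```python
# Processing control sketch: apply a transaction file to a master file,
# recording (not discarding) any transaction that cannot be processed.
master = {"1001": 500, "1002": 300}
transactions = [("1001", +50), ("9999", +10), ("1002", -100)]

errors = []
for account, amount in transactions:
    if account not in master:
        errors.append((account, amount))  # detective control: flag for review
    else:
        master[account] += amount
```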
Output Controls
Automated output controls help ensure systems securely maintain and properly distribute processed
information.
DEVELOPMENT PHASE
The development phase involves converting design specifications into executable programs. Effective
development standards include requirements that programmers and other project participants discuss
design specifications before programming begins. The procedures help ensure programmers clearly
understand program designs and functional requirements.
Programmers use various techniques to develop computer programs. The large transaction-oriented
programs associated with financial institutions have traditionally been developed using procedural
programming techniques. Procedural programming involves the line-by-line scripting of logical
instructions that are combined to form a program.
Organizations should complete testing plans during the development phase. Additionally, they should
update conversion, implementation, and training plans and user, operator, and maintenance manuals.
Development Standards
Development standards should be in place to address the responsibilities of application and system
programmers. Application programmers are responsible for developing and maintaining end-user
applications. System programmers are responsible for developing and maintaining internal and open-
source operating system programs that link application programs to system software and
subsequently to hardware. Managers should thoroughly understand development and production
environments to ensure they appropriately assign programmer responsibilities.
Development standards should prohibit a programmer's access to data, programs, utilities, and systems
outside their individual responsibilities. Library controls can be used to manage access to, and the
movement of programs between, development, testing, and production environments. Management
should also establish standards requiring programmers to document completed programs and test
results thoroughly. Appropriate documentation enhances a programmer's ability to correct
programming errors and modify production programs.
Coding standards, which address issues such as the selection of programming languages and tools, the
layout or format of scripted code, and the naming conventions of code routines and program libraries,
are outside the scope of this document. However, standardized, yet flexible, coding standards enhance
an organization’s ability to decrease coding defects and increase the security, reliability, and
maintainability of application programs. Examiners should evaluate an organization’s coding
standards and related code review procedures.
Library Controls
Libraries are collections of stored documentation, programs, and data. Program libraries include
reusable program routines or modules stored in source or object code formats. Program libraries allow
programmers to access frequently used routines and add them to programs without having to rewrite
them.
Version Controls
Library controls facilitate software version controls. Version controls provide a means to
systematically retain chronological copies of revised programs and program documentation.
Development version control systems, sometimes referred to as concurrent version systems, assist
organizations in tracking different versions of source code during development. The systems do not
simply identify and store multiple versions of source code files. They maintain one file and identify
and store only changed code. When a user requests a particular version, the system recreates that
version. Concurrent version systems facilitate the quick identification of programming errors. For
example, if programmers install a revised program on a test server and discover programming errors,
they only have to review the changed code to identify the error.
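The delta-storage idea described above can be sketched in Python. The file contents, version numbers and delta format are all assumed for illustration: only the changed lines are stored, and a requested version is recreated from the base plus its recorded changes.

```python
# Version control sketch: keep one base file plus only the changed lines,
# and recreate any requested version on demand.
base = ["print('hello')", "x = 1", "print(x)"]

# Each delta records (line_number, new_text); a line number past the end
# of the file means the line was added.
deltas = {
    2: [(1, "x = 2"), (3, "print('done')")],  # version 2's changes vs base
}

def recreate(version):
    lines = list(base)
    if version == 1:
        return lines
    for line_no, text in deltas[version]:
        if line_no < len(lines):
            lines[line_no] = text  # changed line
        else:
            lines.append(text)     # added line
    return lines
```

Because only the delta is stored, reviewing a faulty revision means reviewing just the changed lines, as the passage above notes.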
Software Documentation
Organizations should maintain detailed documentation for each application and application system in
production. Thorough documentation enhances an organization’s ability to understand functional,
security, and control features and improves its ability to use and maintain the software. The
documentation should contain detailed application descriptions, programming documentation, and
operating instructions. Standards should be in place that identify the type and format of required
documentation such as system narratives, flowcharts, and any special system coding, internal controls,
or file layouts not identified within individual application documentation.
Management should maintain documentation for internally developed programs and externally
acquired products. In the case of acquired software, management should ensure (either through an
internal review or third-party certification) prior to purchase, that an acquired product’s documentation
meets their organization's minimum documentation standards. For additional information regarding
acquired software distinctions (open/closed code) refer to the "Escrowed Documentation" discussion
in the "Acquisition" section.
Examiners should consider access and change controls when assessing documentation activities.
Change controls help ensure organizations appropriately approve, test, and record software
modifications. Access controls help ensure individuals only have access to sections of documentation
directly related to their job functions.
TESTING PHASE
The testing phase requires organizations to complete various tests to ensure the accuracy of
programmed code and the proper functioning of the developed system.
If organizations use effective project management techniques, they will complete test plans while
developing applications, prior to entering the testing phase. Weak project management techniques or
demands to complete projects quickly may pressure organizations to develop test plans at the start of
the testing phase. Test plans created during initial project phases enhance an organization’s ability to
create detailed tests. The use of detailed test plans significantly increases the likelihood that testers
will identify weaknesses before products are implemented.
Testing groups are comprised of technicians and end users who are responsible for assembling and
loading representative test data into a testing environment. The groups typically perform tests in
stages, either from a top-down or bottom-up approach. A bottom-up approach tests smaller
components first and progressively adds and tests additional components and systems. A top-down
approach first tests major components and connections and progressively tests smaller components
and connections. The progression and definitions of completed tests vary between organizations.
Bottom-up tests often begin with functional (requirements based) testing. Functional tests should
ensure that expected functional, security, and internal control features are present and operating
properly. Testers then complete integration and end-to-end testing to ensure application and system
components interact properly. Users then conduct acceptance tests to ensure systems meet defined
acceptance criteria.
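The bottom-up progression described above can be sketched in Python. The functions and figures are assumed for illustration: the smallest component is tested first, then the component that integrates it.

```python
# Bottom-up testing sketch: unit-test the smallest component first, then
# test the component that builds on it.
def interest(balance, rate):
    """Smallest unit: one month's interest on a balance."""
    return round(balance * rate / 12, 2)

def month_end(balance, rate):
    """Integrates the unit above: applies the interest to the balance."""
    return round(balance + interest(balance, rate), 2)

# Functional (requirements-based) test of the smallest component first
assert interest(1200, 0.12) == 12.00

# Then an integration test of the combined behaviour
assert month_end(1200, 0.12) == 1212.00
```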
Testers often identify program defects or weaknesses during the testing process. Procedures should be
in place to ensure programmers correct defects quickly and document all corrections or modifications.
Correcting problems quickly increases testing efficiencies by decreasing testers’ downtime. It also
ensures a programmer does not waste time trying to debug a portion of a program without defects that
is not working because another programmer has not debugged a defective linked routine.
Documenting corrections and modifications is necessary to maintain the integrity of the overall
program documentation.
Organizations should review and complete user, operator, and maintenance manuals during the testing
phase. Additionally, they should finalize conversion, implementation, and training plans.
IMPLEMENTATION PHASE
The implementation phase involves installing approved applications into production environments.
Primary tasks include announcing the implementation schedule, training end users, and installing the
product. Additionally, organizations should input and verify data, configure and test system and
security parameters, and conduct post-implementation reviews. Management should circulate
implementation schedules to all affected parties and should notify users of any implementation
responsibilities.
After organizations install a product, pre-existing data is manually input or electronically transferred to
a new system. Verifying the accuracy of the input data and security configurations is a critical part of
the implementation process.
Management should conduct post-implementation reviews at the end of a project to validate the
completion of project objectives and assess project management activities. Management should
interview all personnel actively involved in the operational use of a product and document and address
any identified problems.
Management should analyze the effectiveness of project management activities by comparing, among
other things, planned and actual costs, benefits, and development times. They should document the
results and present them to senior management. Senior management should be informed of any
operational or project management deficiencies.
MAINTENANCE PHASE
The maintenance phase involves making changes to hardware, software, and documentation to support
its operational effectiveness. It includes making changes to improve a system’s performance, correct
problems, enhance security, or address user requirements. To ensure modifications do not disrupt
operations or degrade a system’s performance or security, organizations should establish appropriate
change management standards and procedures.
Emergency changes may address an issue that would normally be considered routine; however,
because of security concerns or processing problems, the changes must be made quickly. Emergency
change controls should include the same procedures as routine change controls. Management should
establish abbreviated request, evaluation, and approval procedures to ensure they can implement
changes quickly. Detailed evaluations and documentation of emergency changes should be completed
as soon as possible after changes are implemented. Management should test routine and, whenever
possible, emergency changes prior to implementation.
Software patches are similar in complexity to routine modifications. This document uses the term
"patch" to describe program modifications involving externally developed software packages.
However, organizations with in-house programming may also refer to routine software modifications
as patches. Patch management programs should address procedures for evaluating, approving, testing,
installing, and documenting software modifications. However, a critical part of the patch management
process involves maintaining an awareness of external vulnerabilities and available patches.
Maintaining accurate, up-to-date hardware and software inventories is a critical part of all change
management processes. Management should carefully document all modifications to ensure accurate
system inventories. (If material software patches are identified but not implemented, management
should document the reason why the patch was not installed.)
Management should coordinate all technology related changes through an oversight committee and
assign an appropriate party responsibility for administering software patch management programs.
Quality assurance, security, audit, regulatory compliance, network, and end-user personnel should be
appropriately included in change management processes. Risk and security review should be done
whenever a system modification is implemented to ensure controls remain in place.
DISPOSAL PHASE
The disposal phase involves the orderly removal of surplus or obsolete hardware, software, or data.
Primary tasks include the transfer, archiving, or destruction of data records. Management should
transfer data from production systems in a planned and controlled manner that includes appropriate
backup and testing procedures. Organizations should maintain archived data in accordance with
applicable record retention requirements. They should also archive system documentation in case it
becomes necessary to reinstall a system into production. Management should destroy data by
overwriting old information or degaussing (demagnetizing) disks and tapes. Refer to the IT
Handbook’s “Information Security Booklet” for more information on disposal of media.
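Destroying data by overwriting can be sketched in Python. This is an illustrative sketch only: it replaces a file's bytes with zeros before deleting it, so the old contents cannot simply be undeleted, though real media disposal may still require degaussing or physical destruction as noted above.

```python
# Disposal sketch: overwrite the file's old information with zeros,
# flush it to disk, then delete the file.
import os

def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)  # replace old contents with zeros
        f.flush()
        os.fsync(f.fileno())     # force the zeros onto the storage medium
    os.remove(path)
```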
1. The Internet
Definition
The Internet is a worldwide system of interconnected computer networks through which users can
communicate and share information.
Facilities
∗ Bulletin boards ∗ Magazines and newspapers ∗ Web television
∗ Music ∗ Software downloading (shareware)
∗ Discussion groups ∗ Shopping malls
∗ Libraries ∗ Research
2. Telecommuting
This involves working from home while connected to the office through computer networks. This
means the workers do not need to travel to workplaces. This has the advantage of saving time to the
worker in terms of travel especially in highly congested cities. It also saves costs of fuel to the worker.
However, traditional supervision methods do not apply.
3. Teleconferencing
This is also known as confra-vision. It is a facility through which people in distant places can hold a
conference-like discussion while seeing each other on computer screens. The biggest advantage of this
form of technology is the saving of conference costs like venue hire, travel and subsistence. There is
also the benefit of body language that the people using this technology have.
5. Connectivity
∗ Increased use of networks
∗ Use of shared databases
9. Globalization
∗ Role of IT in Globalisation