Management Information Systems 2
DATA RESOURCES
Data
Data, the raw material for information, is defined as groups of non-random
symbols that represent quantities, actions, objects, etc. In information systems,
data items are formed from characters that may be alphabetic, numeric, or
special symbols.
Data items are organized for processing purposes into data structures, file
structures and databases. Data relevant to information processing and decision-
making may also be in the form of text, images or voice.
Information
Information is data that has been processed into a form that is meaningful to the
recipient and is of real or perceived value in current or prospective actions or
decisions. It is important to note that data for one level of an information system
may be information for another. For example, data input to the management level
is information output of a lower level of the system such as operations level.
Information resources are reusable: when retrieved and used, information does
not lose value; it may indeed gain value through the credibility added by use.
Much of the information that organizations or individuals prepare has value other
than in decision-making. The information may also be prepared for motivation and
background building.
Reliability: Reliable information can be depended on. In many cases, the
reliability of information depends on the reliability of the data collection
method. In other instances, reliability depends on the source of the information.
Accuracy: Information should be correct, precise and without error. In some
cases inaccurate information is generated because inaccurate data is fed
into the transformation process (this is commonly called garbage in, garbage
out, or GIGO).
Consistency: Information should not be self-contradictory.
Completeness: Complete information contains all the important facts. For
example, an investment report that does not contain all the costs is not
complete.
Economical: Information should always be relatively economical to produce.
Decision makers must always balance the value of information against the cost
of producing it.
Flexibility: Flexible information can be used for a variety of purposes.
Data Processing
Data processing may be defined as those activities concerned with the
systematic recording, arranging, filing, processing and dissemination of facts
relating to the physical events occurring in the business. Data processing can also
be described as the activity of manipulating raw facts to generate a meaningful
assembly of data, described as information. Data processing activities include
data collection, classification, sorting, adding, merging, summarizing, storing,
retrieval and dissemination.
The black box model is an extremely simple principle of a machine: irrespective
of how a machine operates internally, any machine takes an input, operates on it,
and then produces an output.

Input -> Processing -> Output

In dealing with digital computers, this data consists of numerical data, character
data and special (control) characters.
The basic processing activities include:
Record: bring facts into a processing system in usable form
Classify: data with similar characteristics are placed in the same category
or group
Sort: arrangement of data items in a desired sequence
Calculate: apply arithmetic functions to data
Summarize: condense data or put it in a briefer form
Compare: perform an evaluation in relation to some known measures
Communicate: the process of sharing information
Store: hold processed data for continuing or later use
Retrieve: recover data previously stored
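The basic activities above can be sketched in Python; the sales records and
figures below are invented for illustration:

```python
# A minimal sketch of several basic processing activities applied to
# hypothetical sales records (all names and figures are illustrative).
records = [
    {"region": "East", "amount": 120},
    {"region": "West", "amount": 80},
    {"region": "East", "amount": 40},
]

# Classify: group records with similar characteristics (same region).
classified = {}
for rec in records:
    classified.setdefault(rec["region"], []).append(rec)

# Sort: arrange records in a desired sequence (descending amount).
by_amount = sorted(records, key=lambda r: r["amount"], reverse=True)

# Calculate and summarize: apply arithmetic to condense the data.
total = sum(rec["amount"] for rec in records)

print(sorted(classified))      # ['East', 'West']
print(by_amount[0]["amount"])  # 120
print(total)                   # 240
```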
Information processing
This is the process of turning data into information by making it useful to some
person or process.
Computer files
A file is a collection of related data or information that is normally maintained on a
secondary storage device. The purpose of a file is to keep data in a convenient
location where they can be located and retrieved as needed. The term computer file
suggests organized retention on the computer that facilitates rapid, convenient
storage and retrieval.
As defined by their functions, two general types of files are used in computer
information systems: master files and transaction files.
Master files
Master files contain information to be retained over a relatively long time period.
Information in master files is updated continuously to represent the current status
of the business.
Transaction files
Transaction files hold records of individual business events. For example, records
containing data on customer orders are entered into transaction files. These
transaction files are then processed to update the master files; this is known as
posting transaction data to the master file. For each customer transaction record,
the corresponding master record is accessed and updated to reflect the last
transaction and the new balance. At this point, the master file is said to be
current.
Accessing Files
Files can be accessed
Sequentially - start at first record and read one record after another until end
of file or desired record is found
o known as sequential access
o only possible access for serial storage devices
Directly - read desired record directly
o known as random access or direct access
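The two access methods can be contrasted with a short Python sketch over a
file of fixed-length records; the 8-byte record size and file name are
assumptions made for the demo:

```python
import os
import tempfile

# Sketch contrasting sequential and direct access on fixed-length records.
# The 8-byte record size is an assumption for this demo.
RECORD_SIZE = 8
path = os.path.join(tempfile.mkdtemp(), "records.dat")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"REC{i:05d}".encode())  # each record is exactly 8 bytes

# Sequential access: start at the first record and read one after another.
with open(path, "rb") as f:
    first_two = [f.read(RECORD_SIZE).decode() for _ in range(2)]

# Direct (random) access: seek straight to record 3 without reading 0-2.
with open(path, "rb") as f:
    f.seek(3 * RECORD_SIZE)
    third = f.read(RECORD_SIZE).decode()

print(first_two)  # ['REC00000', 'REC00001']
print(third)      # REC00003
```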
File Organization
Files need to be properly arranged and organised to facilitate easy access and
retrieval of the information. Types of file organisation (physical method of storage)
include:
Serial
Sequential
Indexed-Sequential
Random
All file organisation types apply to direct access storage media (disk, drum etc.)
A file on a serial storage media (e.g. tape) can only be organised serially
Serial Organization
Each record is placed in turn in the next available storage space
A serial file must be accessed sequentially implying
o good use of space
o high access time
Usually used for temporary files, e.g. transaction files, work files, spool files
Note: The method of accessing the data on a file is different from its
organisation
o E.g. sequential access of a randomly organised file
o E.g. direct access of a sequential file
Sequential organization
Records are organised in ascending sequence according to a certain key
Sequential files are accessed sequentially, one record after the next
Suitable
o for master files in a batch processing environment
o where a large percentage of records (high hit-rate) are to be accessed
Not suitable for online access requiring a fast response as file needs to be
accessed sequentially
Indexed Sequential
One of the most commonly used methods of file organisation
File is organised sequentially and contains an index
Used on direct access devices
Used in applications that require sequential processing of large numbers of
records but occasional direct access of individual records
Increases processing overheads with maintenance of the indices
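A rough Python sketch of the indexed-sequential idea, assuming a sparse index
over records kept in key order (the key values and index gap below are
invented):

```python
import bisect

# Sketch of an indexed-sequential file: records are kept in key order,
# and a sparse index maps every Nth key to its position, so direct access
# does not have to scan from the start. The layout values are illustrative.
records = [(k, f"data-{k}") for k in range(0, 100, 5)]  # sequential by key
INDEX_GAP = 4
index = [(records[i][0], i) for i in range(0, len(records), INDEX_GAP)]

def lookup(key):
    # Use the sparse index to find the nearest preceding indexed key,
    # then scan sequentially from there (as an ISAM-style access would).
    keys = [k for k, _ in index]
    start = index[bisect.bisect_right(keys, key) - 1][1]
    for k, value in records[start:]:
        if k == key:
            return value
        if k > key:
            break
    return None

print(lookup(45))  # data-45
print(lookup(7))   # None
```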
Random organization
Records are stored in a specific location determined by a randomising
algorithm
o function (key) = record location (address)
Records can be accessed directly without regard to physical location
Used to provide fast access to any individual record,
e.g. airline reservations, online banking
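The randomising-algorithm idea can be sketched as a hash function mapping a
record key directly to a bucket address; the bucket count and reservation
records below are invented:

```python
# Sketch of random organisation: a hashing function maps a record key
# directly to a bucket (storage address), so any record can be fetched
# without regard to physical order. The bucket count is illustrative.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def address(key):
    return hash(key) % NUM_BUCKETS  # function(key) = record location

def store(key, record):
    buckets[address(key)].append((key, record))

def fetch(key):
    for k, record in buckets[address(key)]:
        if k == key:
            return record
    return None

store("FLT-101", "LHR -> JFK, seat 12A")  # e.g. an airline reservation
store("FLT-202", "NBO -> DXB, seat 3C")
print(fetch("FLT-101"))  # LHR -> JFK, seat 12A
```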
Data redundancy
o duplicate data in multiple data files
Redundancy leads to inconsistencies
o in data representation e.g. refer to the same person as client or
customer
o inconsistent values of data items across multiple files
Data isolation
o multiple files and formats
Program-data dependence
o tight relationship between data files and specific programs used to
maintain files
Lack of flexibility
o Need to write a new program to carry out each new task
Lack of data sharing and availability
Integrity problems
o Integrity constraints (e.g. account balance > 0) become part of
program code
o Hard to add new constraints or change existing ones
Concurrent access by multiple users difficult
o Concurrent access needed for performance
o Uncontrolled concurrent accesses can lead to inconsistencies
o E.g. two people reading a balance and updating it at the same time
Security problems
A record includes many data items, each of which is a separate cell in the table.
Each column in the table is a field; it is a set of values for a particular variable, and
is made up of all the data items for that variable. Examples include phone book,
library catalogue, hospital patient records, and species information.
A database is an organized collection of one or more related data files. The way
the database organizes data depends on the type of database, called its data
model, which may be hierarchical, network or relational.
Program and the database
Transaction and the database
Program and data field
User and transaction
User and data field
Most DBMS have internal security features that interface with the operating system
access control mechanism/package, unless it was implemented in a raw device. A
combination of the DBMS security features and security package functions is often
used to cover all required security functions. This dual security approach however
introduces complexity and opportunity for security lapses.
DBMS architecture
Data elements required to define a database are called metadata. There are three
types of metadata: conceptual schema metadata, external schema metadata and
internal schema metadata. If any one of these elements is missing from the data
definition maintained within the DBMS, the DBMS may not be adequate to meet
users' needs. A data definition language (DDL) is a component used for creating
the schema representation necessary for interpreting and responding to users'
requests.
Data dictionary and directory systems (DD/DS) have been developed to define and
store in source and object forms all data definitions for external schemas,
conceptual schemas, the internal schema and all associated mappings. The data
dictionary contains an index and description of all the items stored in the database.
The directory describes the location of the data and access method. Some of the
benefits of using DD/DS include:
Enhancing documentation
Providing common validation criteria
Facilitating programming by reducing the needs for data definition
Standardizing programming methods
Database structure
The common database models are:
Hierarchical database model
Network database model
Relational database model
Object-oriented model
A hierarchical structure has only one root. Each parent can have numerous
children, but a child can have only one parent. Subordinate segments are retrieved
through the parent segment. Reverse pointers are not allowed. Pointers can be set
only for nodes on a lower level; they cannot be set to a node on a predetermined
access path.
[Diagram: a hierarchical structure with a Computer Department root and two
subordinate segments, Manager (Development) and Manager (Operation).]
The network structure is more flexible, yet more complex, than the hierarchical
structure. Data records are related through logical entities called sets. Within a
network, any data element can be connected to any item. Because networks allow
reverse pointers, an item can be an owner and a member of the same set of data.
Members are grouped together to form records, and records are linked together to
form a set. A set can have only one owner record but several member records.
[Diagram: a network structure in which records for John, Jane and Mary are
linked through sets to course records Comp. 101, Comp. 201, Comp. 301 and
Comp. 401.]
. 301
Relational database technology separates data from the application and uses a
simplified data model. Based on set theory and relational calculations, a relational
database models information in a table structure with columns and rows. Columns,
called domains or attributes, correspond to fields. Rows or tuples are equal to
records in a conventional file structure. Relational databases use normalization
rules to minimize the amount of information needed in tables to satisfy users'
structured and unstructured queries to the database.
Example relational tables (attributes per table):
Department: Department ID, Department Name, Department Type, Manager ID
Manager: Manager ID, Manager Name, Department ID
Staff: Staff ID, Surname, Firstname, Initial, Date of birth, Job Title, Manager ID
PC: PC ID, Hard disk size, RAM, CPU, Applications, Staff ID
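A minimal sketch of the relational model using Python's built-in SQLite, with
simplified versions of the Department and Staff tables (the column names and
sample rows are invented):

```python
import sqlite3

# Simplified Department and Staff tables; a join relates their tuples
# through the shared department ID. All sample rows are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Department (dept_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE Staff (staff_id INTEGER PRIMARY KEY, surname TEXT, "
            "dept_id INTEGER REFERENCES Department(dept_id))")
con.execute("INSERT INTO Department VALUES (1, 'Development')")
con.execute("INSERT INTO Staff VALUES (10, 'Mwangi', 1), (11, 'Otieno', 1)")

# A structured query: which staff belong to the Development department?
rows = con.execute(
    "SELECT s.surname FROM Staff s "
    "JOIN Department d ON s.dept_id = d.dept_id "
    "WHERE d.name = 'Development' ORDER BY s.surname").fetchall()
print([r[0] for r in rows])  # ['Mwangi', 'Otieno']
```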
Database administrator
Coordinates the activities of the database system. Duties include:
Schema definition
Storage structure and access method definition
Schema and physical organisation modification
Granting user authority to access the database
Specifying integrity constraints
Acting as liaison with users
Monitoring performance and responding to changes in requirements
Security definitions
Large volumes of data are concentrated into files that are physically very
small
The processing capabilities of a computer are extensive, and enormous
quantities of data are processed without human intervention.
Data can easily be lost from a database through equipment malfunction,
corrupt files or loss during copying; data files are also susceptible to theft,
floods, etc.
Unauthorized people can gain access to data files and read classified data on
files
Information on a computer file can be changed without leaving any physical
trace of change
Database systems are critical in competitive advantage to an organization
f. Encryption: coding of data by a special algorithm that renders them
unreadable without decryption
g. Journaling: maintaining log files of all changes made
h. Database repair
4) Development controls: when a database is being developed, there should be
controls over the design, development and testing, e.g.
a. Testing
b. Formal technical review
c. Control over changes
d. Controls over file conversion
5) Document standards: standards are required for documentation such as:
a. Requirement specification
b. Program specification
c. Operations manual
d. User manual
6) Legal issues
a. Escrow agreements: legal contracts concerning software
b. Maintenance agreements
c. Copyrights
d. Licenses
e. Privacy
7) Other controls including
a. Hardware controls such as device interlocks which prevent input or
output of data from being interrupted or terminated, once begun
b. Data communication controls e.g. error detection and correction.
Database recovery is the process of restoring the database to a correct state in the
event of a failure.
Terminology
Multiprogramming
Multiprogramming is a rudimentary form of parallel processing in which several
programs are run at the same time on a uniprocessor. Since there is only one
processor, there can be no true simultaneous execution of different programs.
Instead, the operating system executes part of one program, then part of another,
and so on. To the user it appears that all programs are executing at the same time.
Multiprocessing
Multiprocessing is the coordinated (simultaneous execution) processing of programs
by more than one computer processor. Multiprocessing is a general term that can
mean the dynamic assignment of a program to one of two or more computers
working in tandem or can involve multiple computers working on the same program
at the same time (in parallel).
Multitasking
In a computer operating system, multitasking is allowing a user to perform more
than one computer task (such as the operation of an application program) at a time.
The operating system is able to keep track of where you are in these tasks and go
from one to the other without losing information. Microsoft Windows 2000, IBM's
OS/390, and Linux are examples of operating systems that can do multitasking
(almost all of today's operating systems can). When you open your Web browser
and then open Word at the same time, you are causing the operating system to do
multitasking.
Multithreading
Multithreading is the ability of a single process to manage several threads of
execution at once, the threads sharing the process's memory. It is easy to
confuse multithreading with multitasking or multiprogramming, which are
somewhat different ideas.
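A small Python sketch of multithreading: several threads of one process share
the same counter, and a lock serialises the read-modify-write so that
concurrent updates stay consistent:

```python
import threading

# Minimal multithreading sketch: four threads share one counter. The lock
# prevents the inconsistent concurrent updates described earlier (e.g. two
# updates to a balance interleaving).
counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10000):
        with lock:  # serialise the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```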
SYSTEMS THEORY AND ORGANIZATIONS
1. Systems concepts
A system is a set of interacting components that work together to accomplish
specific goals. For example, a business is organized to accomplish a set of specific
functions. Any situation that involves the handling or manipulation of materials
or resources of any kind, whether human, financial or informative, may be
structured and represented in the form of a system.
l) Feedback Recycles outputs as subsequent inputs, or measures outputs to
assess effectiveness.
1.2 Classification of systems
Each system can be characterized along a wide range of various characteristics.
A complex system has many elements that are highly related and interconnected.
A closed system has no interaction with the environment. This is a system that
neither transmits information to the outside world nor receives any information
from the outside world. It is mainly a scientific concept (e.g. physics experiments).
Deterministic systems vs probabilistic systems
Deterministic systems operate in a predictable manner; examples include
thermostats and computer programs. In probabilistic systems, however, it is not
possible to determine the next state of the system even if the current state is
known; such systems depend on a probability distribution. An example is a
doctor's diagnostic system.
[Diagram: a system shown as Input -> Process -> Output, surrounded by its
Environment.]
Inputs
These provide the system with what it needs to operate. It may include machines,
manpower, raw materials, money or time.
Processes
Include policies, procedures, and operations that convert inputs into outputs.
Outputs
These are the results of processing and may include information in the right
format, conveyed at the right time and place, to the right person.
Systems Boundary
A system boundary defines the system and distinguishes it from its environment.
Subsystems
A subsystem is a unit within a system that shares some or all of the characteristics
of that system. Subsystems are smaller systems that make up a super-system or
supra-system. All systems are part of larger systems.
Environment
This is the world surrounding the system, of which the system is a subsystem.
2. Systems are hierarchical, that is, the parts and sub-systems are made up of
other smaller parts. For example, a payroll system is a subsystem of the
Accounting System, which is a sub of the whole organisation. One system is a
sub of another.
3. The parts of a system constitute an indissoluble whole so that no part can be
altered without affecting other parts. Many organisational problems arise once
this principle is flouted or ignored. Changes to one department could create
untold adverse effects on others (ripple effects): for example, changing a
procedure in one department, such as admissions, could affect the type of data
captured and processed by faculties.
4. The sub-systems should work towards the goals of their higher systems and
should not pursue their own objectives independently. When subsystems pursue
their own objectives, a condition of sub-optimality arises, and with this the falling
of the organisation is close at hand!
5. Organisational systems contain both hard and soft properties. Hard properties
are those that can be assessed in some objective way, e.g. the amount of PAYE
tax for a tax code, or the size of a product; these are quantifiable.
c) It recognizes the fact that conflicts can arise within a system, and that such
conflicts can lead to sub-optimization and that, ultimately, can even mean
that an organization does not achieve its goals.
d) It allows the individual to recognize that he/she is a subsystem within a
larger system, and that the considerations of systems concept apply to
him/her, also.
e) Given the above factors, it is clear that information-producing systems must
be designed to support the goals of the total system, and that this must be
borne in mind throughout their development.
Systems theory concepts
Comparator
Sub-optimization: occurs when the objectives of one element or subsystem
conflict with the objectives of the whole system.
Equifinality: certain results may be achieved with different initial conditions
and in different ways. In open systems the same final state can be reached
from several starting points; one result can have different causes; there is
more than one way to achieve the objective.
Goal-seeking: systems attempt to stabilize at a certain point.
Holism: the analysis of a system is considered from the point of view of the
whole system and not of individual subsystems. Subsystems are studied in
the context of the entire system.
3. Organizations
An organization is a group created and maintained to achieve specific objectives.
Examples include:
A hospital with objectives dealing with human care.
A local authority with objectives concerned with providing services to the
local community.
A commercial company with objectives including earning profits,
providing a return for shareholders and so on.
SYSTEM ANALYSIS AND DESIGN
1. Introduction to system analysis and design
System analysis and design is a series of processes for analysing and designing
computer-based information systems. Systems design allows a development team
to see roughly what the system will look like and how it will work. An important
result of systems analysis and design is application software, that is, software
designed to support a specific organizational function or process.
Tools: computer programs that make it easy to use and benefit from the
techniques and to faithfully follow the guidelines of the overall development
methodology.
c) Design: the process of determining how the system will accomplish its
purpose.
d) Implementation involves creating the system and putting it into use.
e) Maintenance involves monitoring and changing an information system
throughout its life.
System analysts use the system analysis and design process to develop new
systems. They study the organization’s present systems and suggest actions to be
taken after doing preliminary investigations.
2. Project Management
The key competencies that project managers must develop are known as
knowledge areas and include:
Scope management
Time management
Cost management
Quality management
Human resources management
Communications management
Risk management
Procurement management and
Integration management
The project stakeholders are the people involved in or affected by project activities
(including project sponsor, project team, support staff, customers, users, suppliers
and even opponents to the project).
A project life cycle is a collection of project phases, which includes:
1. Concept
2. Development
3. Implementation
4. Close-out
The first two phases relate to project feasibility and the last two phases focus on
delivering the work and are often called project acquisition.
It is important not to confuse project life cycle with product life cycle. The project
life cycle applies to all projects regardless of the products being produced. On the
other hand product life cycle models vary considerably based on the nature of the
product. For information systems a systems development life cycle (SDLC) is used.
SDLC is a framework for describing the phases involved in developing and
maintaining information systems.
Project identification and selection
(i) Identify potential development projects
Team organization
Establishing management procedures
Identifying scope: scope defines the boundaries of a project, that is, what
part of the business is to be studied, analysed, designed, constructed,
implemented and ultimately improved
Identifying alternatives
Feasibility/risk analysis and strategic assessment
Feasibility is the measure of how beneficial or practical the development of
an information system will be to an organization. Feasibility analysis is the
process by which feasibility is measured.
PERT (Program Evaluation and Review Technique) and CPM (Critical Path
Method)
A PERT chart is a graphical network model that depicts a project's tasks and the
relationships between those tasks. It was developed in the late 1950s to plan and
control large weapons development projects for the US Navy. It is a project
management tool used to schedule, organize, and coordinate tasks within a
project. PERT depicts task, duration, and dependency information.
Critical Path Method (CPM), which was developed for project management in the
private sector at about the same time, has become synonymous with PERT, so that
the technique is known by any variation on the names: PERT, CPM, or CPM/PERT.
Diagram Symbols
[Diagram: each event node is drawn as a circle showing the event number, its
earliest time and its latest time. An arrow from X to Y means task X must be
completed before task Y can start. A dummy activity Z is drawn as a dotted line;
it is required to avoid two activities having the same head and tail events.]
CPM
CPM models the activities and events of a project as a network. Events that
signify the beginning or ending of activities are depicted as nodes, and
activities are depicted as arcs or lines between the nodes. The following is an
example of a CPM network diagram:
Activity Listing

Activity  Precedence  Duration (Weeks)
A         -           2
B         -           3
C         A           1
D         A           3
E         B           3
F         B           4
G         C           3
H         G           1
I         D, E        2
J         F           2
Network Diagram
[Network diagram omitted: activities A to J drawn between numbered event
nodes, with the durations from the activity listing above.]
1. Specify the Individual Activities
From a work breakdown structure, a listing can be made of all the activities in the
project. This listing can be used as the basis for adding sequence and duration
information in later steps.
The critical path can be identified by determining the following four parameters for
each activity:
EST - earliest start time: the earliest time at which the activity can start
given that its precedent activities must be completed first.
EFT - earliest finish time, equal to the earliest start time for the activity plus
the time required to complete the activity.
LFT - latest finish time: the latest time at which the activity can be
completed without delaying the project.
LST - latest start time, equal to the latest finish time minus the time required
to complete the activity.
Slack is the amount of time that an activity can be delayed past its earliest start or
earliest finish without delaying the project. The slack time for an activity is the
time between its earliest and latest start time, or between its earliest and latest
finish time.
The critical path is the path through the project network in which none of the
activities have slack, that is, the path for which EST=LST and EFT=LFT for all
activities in the path. A delay in the critical path delays the project. Similarly, to
accelerate the project it is necessary to reduce the total time required for the
activities in the critical path.
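The forward and backward passes can be sketched in Python using the activity
listing above (activities A to J with the stated precedences and durations in
weeks):

```python
# Forward/backward pass over the activity listing (durations in weeks).
# The dict lists every activity after its predecessors, so plain
# insertion order serves as a topological order.
activities = {
    "A": ([], 2), "B": ([], 3), "C": (["A"], 1), "D": (["A"], 3),
    "E": (["B"], 3), "F": (["B"], 4), "G": (["C"], 3), "H": (["G"], 1),
    "I": (["D", "E"], 2), "J": (["F"], 2),
}

# Forward pass: EST = max EFT of predecessors; EFT = EST + duration.
est, eft = {}, {}
for name, (preds, dur) in activities.items():
    est[name] = max((eft[p] for p in preds), default=0)
    eft[name] = est[name] + dur
duration = max(eft.values())

# Backward pass: LFT = min LST of successors; LST = LFT - duration.
lst, lft = {}, {}
for name in reversed(list(activities)):
    succs = [s for s, (preds, _) in activities.items() if name in preds]
    lft[name] = min((lst[s] for s in succs), default=duration)
    lst[name] = lft[name] - activities[name][1]

# Critical path: activities with zero slack (EST = LST).
critical = [n for n in activities if est[n] == lst[n]]
print(duration)  # 9
print(critical)  # ['B', 'F', 'J']
```

For this listing the project takes 9 weeks and the critical path is B, F, J; activity A, for example, has a slack of 2 weeks.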
[Network diagram omitted. Convention: each node shows the node (event)
number N, its earliest event start time (ES) and its latest event start time (LS),
computed from the activity listing above.]
As the project progresses, the actual task completion times will be known and the
network diagram can be updated to include this information. A new critical path
may emerge, and structural changes may be made in the network if project
requirements change.
CPM Limitations
a) CPM was developed for complex but fairly routine projects with minimal
uncertainty in the project completion times. For less routine projects there is
more uncertainty in the completion times, and this uncertainty limits the
usefulness of the deterministic CPM model. An alternative to CPM is the PERT
project-planning model, which allows a range of durations to be specified for
each activity.
Complex projects require a series of activities, some of which must be
performed sequentially and others that can be performed in parallel with
other activities. This collection of series and parallel tasks can be modeled as
a network.
In 1957 the Critical Path Method (CPM) was developed as a network model
for project management. CPM is a deterministic method that uses a fixed
time estimate for each activity.
b) While CPM is easy to understand and use, it does not consider the time
variations that can have a great impact on the completion time of a complex
project.
The PERT chart may have multiple pages with many sub-tasks.
The milestones generally are numbered so that the ending node of an activity has
a higher number than the beginning node. Incrementing the numbers by 10 allows
for new ones to be inserted without modifying the numbering of the entire diagram.
The activities in the above diagram are labeled with letters along with the expected
time required to complete the activity.
2. Determine Activity Sequence
This step may be combined with the activity identification step since the activity
sequence is evident for some tasks. Other tasks may require more analysis to
determine the exact order in which they must be performed.
Optimistic time - generally the shortest time in which the activity can be
completed. It is common practice to specify optimistic times to be three
standard deviations from the mean so that there is approximately a 1%
chance that the activity will be completed within the optimistic time.
Most likely time - the completion time having the highest probability. Note
that this time is different from the expected time.
Pessimistic time - the longest time that an activity might require. Three
standard deviations from the mean are commonly used for the pessimistic
time.
PERT assumes a beta probability distribution for the time estimates. For a beta
distribution, the expected time for each activity can be approximated using the
following weighted average:
Expected time = (Optimistic + 4 x Most likely + Pessimistic) / 6
This expected time may be displayed on the network diagram.
To calculate the variance for each activity completion time: if three standard
deviation times were selected for the optimistic and pessimistic times, then there
are six standard deviations between them, so the variance is given by:
[(Pessimistic - Optimistic) / 6]²
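Both formulas can be wrapped in a small Python helper; the three time
estimates below are invented:

```python
# PERT estimates for one activity: the beta-distribution weighted average
# for expected time, and the variance from the optimistic/pessimistic range.
def pert_estimates(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Invented estimates, in weeks: optimistic 2, most likely 4, pessimistic 8.
expected, variance = pert_estimates(2, 4, 8)
print(round(expected, 2))  # 4.33
print(variance)            # 1.0
```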
If activities outside the critical path speed up or slow down (within limits), the total
project time does not change. The amount of time that a non-critical path activity
can be delayed without delaying the project is referred to as slack time.
If the critical path is not immediately obvious, it may be helpful to determine the
following four quantities for each activity:
These times are calculated using the expected time for the relevant activities. The
earliest start and finish times of each activity are determined by working forward
through the network and determining the earliest time at which an activity can
start and finish considering its predecessor activities. The latest start and finish
times are the latest times that an activity can start and finish without delaying the
project. LS and LF are found by working backward through the network. The
difference in the latest and earliest finish of each activity is that activity's slack.
The critical path then is the path through the network in which none of the
activities have slack.
The variance in the project completion time can be calculated by summing the
variances in the completion times of the activities in the critical path. Given this
variance, one can calculate the probability that the project will be completed by a
certain date assuming a normal probability distribution for the critical path. The
normal distribution assumption holds if the number of activities in the path is large
enough for the central limit theorem to be applied.
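Under the normal-distribution assumption, the completion probability can be
sketched with the standard normal CDF, expressed via the error function; the
mean, variance and target date below are invented figures:

```python
import math

# Probability that the project finishes by a target date, assuming the
# critical-path total is normally distributed. Mean, variance and target
# are illustrative values (e.g. a 9-week mean with variance 1.0).
def completion_probability(mean, variance, target):
    z = (target - mean) / math.sqrt(variance)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

p = completion_probability(mean=9, variance=1.0, target=10)
print(round(p, 3))  # 0.841
```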
Since the critical path determines the completion date of the project, the project
can be accelerated by adding the resources required to decrease the time for the
activities in the critical path. Such a shortening of the project sometimes is referred
to as project crashing.
Make adjustments in the PERT chart as the project progresses. As the project
unfolds, the estimated times can be replaced with actual times. In cases where
there are delays, additional resources may be needed to stay on schedule and the
PERT chart may be modified to reflect the new situation.
Benefits of PERT
The activities that have slack time and that can lend resources to critical
path activities.
Activity start and end dates.
Limitations
The following are some of PERT's weaknesses:
Gantt Chart
A Gantt chart is a simple horizontal bar chart that depicts project tasks against a
calendar. Each bar represents a named project task. The tasks are listed vertically
in the left hand column. The horizontal axis is a calendar timeline. The Gantt chart
was first conceived by Henry L. Gantt in 1917, and is the most commonly used
project scheduling and progress evaluation tool.
Gantt charts give a clear, pictorial model of the project. They are simple and
require very little training to understand and use. They show progress and can be
used for resource planning. They also show interrelationships and critical path.
Gantt Chart Methodology
List vertically all tasks to be performed
Tasks identified by number and name
Horizontally indicate duration, resources and any other relevant information
Graphical part shows horizontal bar for each task connecting start and end
duration
Relationships can be shown by lines joining tasks, though this can make the diagram cluttered
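The methodology above can be sketched as a minimal text-mode Gantt rendering: tasks listed vertically, each with a horizontal bar spanning its duration. The task names and weeks are illustrative:

```python
# Illustrative task list: (number, name, start_week, end_week).
tasks = [
    (1, "Convert files", 0, 5),
    (2, "Prepare training", 5, 8),
    (3, "Conduct training", 8, 10),
    (4, "Run parallel systems", 10, 16),
]

def gantt_row(num, name, start, end, width=16):
    # One chart row: task label on the left, bar on the calendar timeline.
    bar = " " * start + "#" * (end - start) + " " * (width - end)
    return f"{num}. {name:<22}|{bar}|"

chart = "\n".join(gantt_row(*t) for t in tasks)
print(chart)
```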
Example of a Gantt Chart (File conversion)
[Figure: a Gantt chart plotting four tasks (Convert Files, Prepare Training, Conduct Training, Run Parallel Systems) against a timeline of weeks 0 to 16.]
1. Preliminary survey/study
2. Feasibility study
3. Facts finding and recording
4. Analysis
5. System design
6. System development
7. System implementation
This stage involves determining whether there is a need to change the existing
system or business procedures. It may arise from management requests for a change
to the existing system, to give the organization a competitive advantage or to
improve staff morale. The user department should be involved in the definition of
the problem. The problem to be solved should be specific, clear and precise; it
should not be so broad as to cause ambiguities in its solution.
Terms of reference
This is a document prepared by the steering committee to act as a reference
document throughout the system development stages. Its contents include:
Project title
Subject of study
Purpose of study
Personnel
Departments
Sections affected or involved during the system implementation
Available resources and constraints that the analyst and the project leader
should consider
The project's estimated duration and schedule
Steering committee
It is formed by two or three people to oversee the system development project
from its initiation to its completion. It comprises the system analyst as the
project leader and a representative of the user department. They should
understand all the processing objectives and procedures within the affected
department. A management representative and an accountant or auditor may be
incorporated to advise initially on the financial aspects of the project.
The functions of the steering committee include:
a) To study the current processing procedures that may need to be
improved.
b) To prepare a problem statement in the form of terms of reference.
c) To coordinate system development activities throughout the
development life cycle.
d) To interface the project development team with organizational
management.
e) To resolve conflict that may arise during system development.
f) To direct, control and monitor the system development progress.
Technical Feasibility
Technical questions are those that deal with equipment and software e.g.
determination of whether the new system can be developed using the current
computer facilities within the company. Technical feasibility is thus aimed at
evaluation of the following:
v. Determination of the need for telecommunication equipment in the new
system to improve both data capture and preparation activities.
vi. The inputs, outputs, files and procedures that the proposed system
should have as compared to the outputs, files and procedures for the
current system.
vii. Determination of whether training is necessary for the employees before
the new system is implemented and the relevant skills required.
viii. Suggesting a suitable method of developing the new system, methods of
running it when it becomes operational and ways of implementing it.
Social Feasibility
This is also known as operational feasibility. It deals mainly with the effect of
the system on the people within the company. Social feasibility is evaluated
alongside technical feasibility, so that the social implications of every
alternative technical solution to the problem are assessed. Areas to consider
include:
Group relationships
Salary levels
Job titles and job descriptions
Social costs to be evaluated e.g. cost of user training, consultancy that
may be engaged during development of the new system, job
improvements and salary changes.
Legal Feasibility
The new system's legal implications should be evaluated, e.g. whether the
computer should be insured or whether the stored data should be registered with
the government registrar before use. Any copyright restrictions should be
assessed before the new system is implemented. Generally, any legal aspects
associated with the new system should be assessed, and adequate measures taken
to protect the interests of the user company.
Economic Feasibility
Economic feasibility is aimed at determining whether or not to continue with
the project, depending on whether the project is economically viable. The
system's benefits and estimated implementation cost should be determined before
any further resources are spent on the project.
A cost benefit analysis (CBA) is carried out to determine whether the new system is
economically viable.
i. Measurable (tangible) benefits: those that can be quantified in monetary
terms, e.g. an increase in working capital as a result of purchasing computer
systems, or a reduction of delays in decision making obtained through improved
procedures, e.g. invoicing and credit control procedures.
ii. Intangible benefits: those that cannot be quantified in monetary terms, or
that are difficult or impossible to quantify. They are clearly desirable but
very difficult to evaluate in money terms, e.g. improved customer satisfaction,
better information, improved organizational image, increased staff morale, and
a competitive advantage to the organization.
Cost Analysis: Costs are expenses or expenditure which are incurred by a system.
These may include equipment cost, development cost, operation cost and software
cost. During cost analysis one should consider both the new and the existing
system. The cost of retaining and operating the existing system should be
compared to the cost of introducing and running the computerized information
system.
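The comparison described above can be sketched as a simple payback calculation. All figures below are hypothetical, purely to illustrate how the running-cost saving is set against the one-off development cost:

```python
# Hypothetical annual running costs and one-off development cost.
existing_annual_cost = 120_000   # manpower, materials, overheads, intangibles
proposed_annual_cost = 70_000    # includes service contracts, insurance, media
development_cost = 150_000       # consultancy, training, team allowances

# Annual saving from switching, and the simple payback period in years.
annual_saving = existing_annual_cost - proposed_annual_cost
payback_years = development_cost / annual_saving
```

With these figures the new system pays for itself in 3 years; a fuller CBA would also discount future cash flows.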
a) The cost of running the existing system. This is calculated from the past
records. The items to consider include:
i. Manpower cost, which is extracted from budgets and payroll reports
ii. Material cost, which includes consumables e.g. stationery, work in
progress and current stock
iii. Operational cost, e.g. the equipment cost expressed in terms of unit rate.
Others to consider include the duration the project takes to complete
and the initial replacement cost.
iv. Overhead costs, which are expenses incurred by the company on behalf of
all departments, e.g. rent, electricity, telephone bills etc. These can
easily be extracted from the departments or centres to which they are
allocated.
v. The intangible costs of the existing system, e.g. loss of sales as a
result of inappropriate stock levels, or loss of bank interest as a result
of an improper credit control system.
b) The cost of operating the proposed system: this is likely to include all the
areas covered above, i.e. manpower, materials, overheads and the intangible
costs. However, there are additional costs associated with computerized
systems, e.g. service contracts for the computer system, insurance of the
computer system, cost of data transmission, and cost of consumables like
printer cartridges, ribbons etc. All these costs should be evaluated or
estimated as accurately as possible.
c) The cost of new system development: this includes the cost incurred for any
consultancy services that may have been hired during development. Allowances
given to the system development team members fall under this category. The
overall effects of system development and implementation should be
determined and any associated costs established. These estimates are based
on both the time and the activities involved in the project. Staff training,
recruitment and retrenchment costs should also be considered under system
development cost.
Introduction: gives a general description of the existing system, the people
contacted during the study and the purpose of the report.
Description of the alternative proposed systems in terms of the inputs,
outputs, files processed, response time etc.
Quantification to justify the cost of running the proposed system.
The recommendation by the analyst on the most cost-effective alternative
solution.
The author of the report.
System analyst's recommendations on the new system, indicating whether to
commit further resources.
If the decision is to continue with the project, its development plan should be
given.
The report is submitted to the management for approval. After approval a more
detailed survey is conducted on the existing system mostly to establish its
weaknesses and strengths. This is called fact-finding or fact gathering.
This involves collection of information about the existing system on which to
base analysis, in order to determine whether users' current needs are being met.
The following are some of the activities that are looked at:
Fact-finding techniques
a) Use of questionnaires
Questionnaires are appropriate when:
- the questions to be asked are simple and straightforward and require
direct answers
- limited information is required from a large number of people
- they are used as a means to verify facts found using other methods.
b) Interviewing
This is a direct face-to-face conversation between the system analyst (the
interviewer) and users (interviewees). The analyst obtains answers to the
questions he asks the interviewee, and gets the interviewee's suggestions and
recommendations that may assist during the design of the proposed system.
Interviews serve the following purposes:
Acts as a method of fact-finding to gather facts about the existing system.
Used for verifying facts gathered through other methods.
Used for clarifying facts gathered through other methods.
Used to get the user involved in the development of the new system.
Interviews are appropriate in the following circumstances:
When the analyst wishes to seek direct answers, opinions, suggestions and
detailed information
When the analyst wishes to verify the validity of facts collected through
other techniques
When an immediate response is required
Guidelines for observation include:
Gathered facts should be recorded
Those to be observed should be notified and the purpose of the exercise
explained
The analyst should be objective and avoid personal opinions; he should keep
an open mind
The analyst should also record ordinary events
Advantages of this method are:
It is comparatively cheap compared to other techniques
It is a faster method of fact finding, especially when the documents to be
considered are few
Disadvantages of this method are:
Time consuming if the documents are many or if they are not within the same
locality
Unavailability of relevant documents makes this method unreliable
Its success depends on the expertise of the analyst
Most of the documents or information obtained may be outdated
e) Sampling
Disadvantages include:
The sample may not be representative enough, which may lead to incorrect
and biased conclusions
The expertise of the analyst is required since sampling involves a lot of
mathematical computation
3.4 Analysis
System analysis involves evaluation of the current system using the gathered
facts or information. One should evaluate whether the current and projected user
needs are being met; if not, a recommendation of what is to be done should be
given. Analysis involves a detailed assessment of the components of the existing
system and the requirements of the new system.
Determination of the resources required for the intended system
Determine capabilities required in the system to meet information needs of
the organization
Once all the facts are analysed and documented, a formal report called a
statement of requirements is written.
It helps the analyst or system developer to gain understanding of the
existing system.
It allows the analyst or system developer to record existing system
information in a standard form to aid design of a new system. It also
facilitates understanding of the system by the user staff.
Enables the analyst or developer to define existing system procedure
into a logical model
Helps the analyst to write or produce statement of requirements,
which guides the development team throughout subsequent stages of
the development life cycle.
The objective of system design is to put a logical structure of the real system
in a form that can be interpreted by people other than the designer. The analyst
should derive a logical model of the way the existing system works, on the
assumption that the existing system provides a good guide to what is required of
the new system. This model describes what is required, as distinct from how the
new system is to achieve the given requirements.
There are two types of design: logical design and physical design.
Logical Design
A logical design produces specifications of the major features of the new
system, which meet the system's objectives. The delivered product of the logical
design is a blueprint of the new system. It includes the requirements of the
system components:
o Outputs (reports, displays, timings, frequencies etc)
o Inputs (dialogs, forms, screens etc)
o Storage (requirement for data to be stored in databases)
o Procedures (to collect, transform and output data)
o Controls (requirements for data integrity, security, data recovery procedures)
Note: The logical design of the system is viewed in terms of what processes
(i.e. procedures), inputs, outputs, storage and controls the new system should
have.
Physical Design
It takes the logical design blueprint and produces the program specifications,
physical files or database, and user interface for the selected or targeted
hardware and software. Physical design is mostly concerned with how the current
or future system works, in terms of how the programs are written, how files are
organized on storage media, and how tasks are carried out through the user
interface with the system.
Standards: Standards may drive the design tasks in a specified direction.
Screen design
In order to produce an effective screen dialogue, several factors should be
considered:
i) The hardware on which the interface will be implemented, e.g. a VDU,
minicomputers etc. This should include the hardware's ability to use
graphics, colour or touch-sensitive panels.
ii) The software pre-loaded on the hardware, e.g. the pre-loaded operating
system, whether it is DOS-based or Windows-based.
iii) Memory capacity, i.e. RAM and hard disk, in order to determine the size of
the application to be developed
Interface techniques
It is important to consider the characteristics of the proposed system when
designing the input and output interfaces. A modern screen dialogue interface
will typically apply a combination of three techniques: form filling, menu
selection and a WIMP interface.
Form filling
It is an effective form of interface where data capture involves verification.
Some of the guidelines for form filling include:
o Forms should be easy to fill
o Information should be logically organized or grouped, e.g. headings,
identification, instructions, body, totals and any comments
o Should be clear and attractive
o Should ensure accurate completion
Menu interface (selection)
It is designed to allow the operator to select increasingly detailed options or
choices, with each menu screen covering a different function. This is made
clearer by the use of colour screens and a WIMP interface. The bottom option is
usually reserved for the error message or for a help facility.
Coupling: a measure of the strength of the bond between programs.
Ideally, a program should have few dependencies on other programs in the
system, so that any amendment to the program has little or no impact on
other programs in the system. Programs should thus be developed in
modules.
Systems are broken down into modules because smaller programs (modules) are
easier to specify, test and modify than larger programs. Errors, or the impact
of a change, are therefore confined to fewer lines of code when modules are
used. Programmers can also choose the area of the system that interests them to
code, which is motivating. A large number of small programs also makes
rescheduling work easier, i.e. a program can be assigned to someone else to
write if the first programmer is taking longer than estimated.
For a system to be broken down into modules, a good system specification should
be prepared.
System specification
It is a document prepared at the design stage of system development life cycle. It
represents the conceptual system or logical system. This is a system in paper form.
Its contents are:
i) Introduction of existing system i.e. details of its objectives and a brief
description of how these objectives are met.
ii) Description of the proposed system i.e. details of its objectives and a
description of how the objectives are to be met.
iii) Justification of proposed system as a solution to the problem specified in
terms of reference. Costs and benefits justification for the proposed
system should be shown.
iv) Comparison of both existing and proposed system in terms of inputs and
outputs i.e. specification of frequency, volume, timings etc.
v) Proposed system file descriptions: This should include file names,
organization methods, access methods, nature of the system, storage
media, record structures and file activities.
vi) Proposed system control specifications i.e. error handling procedures,
recovery procedures, in built controls both hardware and software
related.
(a) Programming
This activity involves translation of system specification into program code. A
programmer should integrate user requirements into the computer system.
Programming standards should be adhered to e.g. use of a standard programming
language. Decomposition of a program into smaller units or modules should be
implemented as per the design specifications. It is important that the programming
team work in cooperation to improve the quality of programs produced.
(b) Testing
Generally all programs should be tested before system conversion. There are two
major program-testing techniques: white box and black box testing.
Black box methods are based on the inputs to and outputs from a program; they
do not emphasize the internal structure of the program.
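Black box testing can be illustrated as follows: test cases are derived purely from the specification of inputs and expected outputs, without inspecting the function's internals. The invoice-total function and its tax rate are hypothetical, used only to show the technique:

```python
# Hypothetical function under test (its internals are irrelevant to the tester).
def invoice_total(quantity, unit_price, tax_rate=0.16):
    if quantity < 0 or unit_price < 0:
        raise ValueError("negative input")
    return round(quantity * unit_price * (1 + tax_rate), 2)

# Black-box test cases: input -> expected output, taken from the specification.
assert invoice_total(10, 5.0) == 58.0     # normal case
assert invoice_total(0, 99.0) == 0.0      # boundary case: zero quantity
try:
    invoice_total(-1, 5.0)                # invalid input must be rejected
    raised = False
except ValueError:
    raised = True
assert raised
```

White box testing would instead choose cases to exercise each internal branch of the function.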
(vi) Configuration review: conducted to ensure that each element of the
software is properly developed, i.e. that each module's configuration is
proper.
(vii) Recovery testing: conducted to force the software to fail in a number of
ways and verify that recovery is properly performed.
(viii) Security testing: attempts to verify that the protection mechanisms
built into the system work.
(ix) Stress testing: designed to confront a program with abnormal situations
and abnormal quantities of resources, e.g. a large volume of transaction
inputs, to see how the program copes with such abnormality.
(x) Performance testing: conducted to evaluate the software's performance,
e.g. run time, response time, quality of output etc.
(xi) Acceptance testing: this is carried out by software users and management
representatives for the following reasons:
To discover software errors not yet detected
To discover the actual and exact demands of the system
To discover if any major changes required by the system can be
adopted.
iii. Recorder: acts as the secretary of the team and ensures that all agreed
actions pointed out are noted and followed up.
iv. Reviewers: they receive in advance the materials being walked through as a
working model. They walk through the proposed system and check whether
it falls short of the required quality.
v. User representatives: confirm their understanding and satisfaction of what
they will do with the system when it becomes operational. The
representatives may be senior managers, auditors etc.
(c) Documentation
Software documentation is a description of software or a system after its
development; a software product therefore comprises code and documentation.
Documentation includes a wide range of technical and non-technical manuals,
books, descriptions and diagrams relating to the use and operation of the
produced software. It is vital in software engineering to allocate adequate
time to documentation throughout development.
There are two major reasons why software engineers dislike producing
documentation:
(i) They do not see the need for it. This may indicate that one is new to
the profession and has not yet had time to appreciate the benefits of
documentation, or that one is so wrapped up in the pressures of the
moment that long-range goals have become obscured.
(ii) They do not feel capable of doing it. Sometimes this feeling of
inadequacy derives from an inability to talk about technical subjects
with non-technical people.
(1) Flowcharts
A flowchart is a diagrammatic representation that illustrates the sequence of
operations performed to get to the solution of a problem. Flowcharts facilitate
communication between system analysts, system designers and programmers.
Guidelines for drawing a flowchart
Flowcharts are usually drawn using some standard symbols. Some standard
symbols used in drawing flowcharts are:
Alternate process
Magnetic tape
Magnetic disk
Flow lines
Display
Document
Multiple documents
Stored data
In drawing a proper flowchart, all necessary requirements should be listed out in
logical order
The flowchart should be clear, neat and easy to follow. There should not be any
room for ambiguity in understanding the flowchart
The usual direction of the flow of a procedure or system is from left to right or
top to bottom
Only one flow line should come out from a process symbol
Only one flow line should enter a decision symbol, but two or three flow lines,
one for each possible answer, should leave the decision symbol
Only one flow line is used in conjunction with the termination symbol
Write within the standard symbols briefly. As necessary, you can use the
annotation symbol to describe data or computational steps more clearly.
Types of flowcharts
Flowcharts are of three types:
System flowcharts
Run flowcharts
Program flowcharts
System Flowcharts
A system flowchart describes the data flow for a data processing system. It
provides a logical diagram of how the system operates, representing the flow of
documents and the operations performed in the data processing system. It also
reflects the relationship between inputs, processing and outputs. The following
are features shown by system flowcharts:
The sources from which data is generated and device used for this purpose
Various processing steps involved
The intermediate and final output prepared and the devices used for their
storage
The figure below is a sample of system flowchart for the following algorithm (step
by step instructions on how to perform a certain task):
Prompt the user for the centigrade temperature.
Store the value in C
Set F to 32+(9C/5)
Print the value of C, F
Stop
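The algorithm above can be written out directly in code (the input prompt is replaced by a parameter so the conversion can be exercised without user interaction):

```python
# The flowcharted algorithm: F = 32 + (9C / 5).
def centigrade_to_fahrenheit(c):
    return 32 + (9 * c / 5)

c = 100                          # centigrade temperature (step 1 and 2)
f = centigrade_to_fahrenheit(c)  # step 3
print(c, f)                      # step 4: prints 100 212.0
```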
System Flowchart
Run Flowcharts
Run flowcharts are used to represent the logical relationship of computer
routines along with inputs, master files, transaction files and outputs.
The figure below illustrates a run flowchart.
Run Flowchart
Program Flowcharts
A program flowchart represents, in detail, the various steps to be performed within
the system for transforming the input into output. The various steps are logical/
arithmetic operations, algorithms, etc. It serves as the basis for discussions and
communication between the system analysts and the programmers. Program
flowcharts are quite helpful to programmers in organizing their programming
efforts. These flowcharts constitute an important component of documentation for
an application.
The figure represents a program flowchart for finding the sum of first five natural
numbers (i.e. 1,2,3,4,5).
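The logic of that program flowchart, a loop accumulating a running total, corresponds to the following code:

```python
# Sum the first five natural numbers, as in the flowchart:
# initialize total, loop from 1 to 5, add each number, then print.
total = 0
for n in range(1, 6):
    total += n
print(total)  # prints 15
```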
Program Flowchart
(2) Data flow diagram
A Data Flow Diagram (DFD) is a graphical representation of a system's data and
how processes transform that data. Unlike flowcharts, DFDs do not give detailed
descriptions of modules; they graphically describe a system's data and how the
data interacts with the system.
Components of DFD
DFDs are constructed using four major components:
External entities
Data stores
Processes and
Data flows
(i) External Entities
External entities represent the sources of data input to the system and the
destinations of system data. External entities can be thought of as data stores
outside the system. They are represented by squares.
(ii) Data Stores
Data stores represent stores of data within the system, for example computer
files or databases. An open-ended box represents a data store: data at rest, or
a temporary repository of data.
(iii) Process
A process represents an activity in which data is manipulated by being stored,
retrieved or transformed in some way. In other words, a process transforms
input data into output data. Circles stand for processes that convert data into
information.
(iv) Data Flows
Data flow represents the movement of data from one component to another. An
arrow identifies a data flow: data in motion. It is a pipeline through which
information flows. Data flows are generally shown as one-way only; data flows
between external entities are shown as dotted lines.
Physical and Logical DFD
A logical DFD of any information system is one that models what occurs without
showing how it occurs. An example is illustrated below.
It is clear from the figure that orders are placed, orders are received, the
location of ordered parts is determined, and delivery notes are dispatched along
with the order. It does not, however, tell us how these things are done or who
does them: are they done by computers or manually, and if manually, who does
them?
A physical DFD shows how the various functions are performed and who performs
them. An example is illustrated below:
In contrast to the logical DFD, the figure shows the actual devices and people
that perform the functions. Thus there is an "order processing clerk", an
"entry into computer file" process and a "run locate program" process to locate
the parts ordered. DFDs that show how things happen, i.e. the physical
components, are called physical DFDs.
Typical processes that appear in physical DFDs are methods of data entry, specific
data transfer or processing methods.
Difference between Flowcharts and DFD
A program flowchart contains boxes that describe computations, decisions,
iterations and loops. It is important to keep in mind that data flow diagrams
are not program flowcharts and should not include control elements. A good DFD
should:
Have no data flows that split up into a number of other data flows
Have no crossing lines
Not include flowchart loops of control elements
Not include data flows that act as signals to activate processes.
This is a matrix representation of the logic of a decision. It specifies the possible
conditions and the resulting actions. It is best used for complicated decision logic. It
consists of the following parts:
Condition stubs
o Lists condition relevant to decision
Action stubs
o Actions that result from a given set of conditions
Rules
o Specify which actions are to be followed for a given set of conditions
An indifferent condition is a condition whose value does not affect which action is
taken for two or more rules.
The condition stub contains a list of all the necessary tests in a decision
table. In the lower left-hand corner of the decision table we find the action
stub, where one may note all the processes desired in a given module; thus the
action stub contains a list of all the processes involved in a decision table.
The upper right corner provides space for the condition entries: all possible
permutations of yes and no responses related to the condition stub. The yes and
no possibilities are arranged as vertical columns called rules. Rules are
numbered 1, 2, 3 and so on. We can determine the number of rules in a decision
table by the formula: number of rules = 2^n, where n is the number of yes/no
conditions.
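The rule count can be checked by enumerating the yes/no permutations directly. The condition names below are illustrative:

```python
from itertools import product

# Illustrative yes/no conditions for a decision table.
conditions = ["Salaried?", "Hours worked < 40?", "Hours worked = 40?"]

# Each rule is one permutation of Y/N answers across all conditions.
rules = list(product("YN", repeat=len(conditions)))

# A complete limited-entry decision table has 2 ** n rules.
assert len(rules) == 2 ** len(conditions)
```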
Example: decision table for a payroll system (S = salaried, H = hourly)

Rules                1     2     3     4     5     6
Condition stubs:
Employee type        S     H     S     H     S     H
Hours worked        <40   <40   =40   =40   >40   >40
The decision tree defines the conditions as a sequence of left-to-right tests.
A decision tree helps to show the paths that are possible in a design following
an action or decision by the user. A decision tree turns a decision table into
a diagram. The tool is read from left to right; each decision results in a
fork, and all branches end with an outcome. Each node corresponds to a numbered
choice in the legend, and all possible actions are listed on the far right.
Legend
1) Salaried?
2) Hours Worked < 40?
3) Hours Worked = 40?

Reading the tree from left to right:
1) Salaried? Yes: pay base salary. No: go to 2).
2) Hours worked < 40? Yes: pay hourly wage and issue absence report. No: go to 3).
3) Hours worked = 40? Yes: pay hourly wage. No: pay hourly wage and pay overtime.
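The payroll decision tree above maps directly onto nested tests in code: each numbered choice in the legend becomes a condition, and each branch ends in a payment action.

```python
# The payroll decision tree as code; each test matches a legend entry.
def payroll_action(salaried, hours_worked):
    if salaried:                      # 1) Salaried?
        return "Pay base salary"
    if hours_worked < 40:             # 2) Hours worked < 40?
        return "Pay hourly wage; absence report"
    if hours_worked == 40:            # 3) Hours worked = 40?
        return "Pay hourly wage"
    return "Pay hourly wage; pay overtime"
```

Each path through the function corresponds to one rule in the equivalent decision table.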
(1) Structured English
A user may acquire the hardware and software directly from a manufacturer and
developer respectively. He may also purchase them from an intermediate supplier.
Whichever way, carefully controlled purchasing procedures should be followed. The
procedures should include invitation to tender and comparative analysis to
determine the appropriate supplier of the required hardware and software.
The invitation to tender (ITT) is issued to a range of suppliers. It sets out
specifications for the required equipment and software, and should explain how
the hardware will be used and the time scale for implementation. It sets the
performance criteria required for the new system.
Contents of ITT
The ITT includes background information about the company together with an
indication of the purpose of the system. This includes:
While all the above features are necessary, it is important to decide on the
financing methods. These may include:
(1) Benchmark tests: test how long it takes a machine to run through a
particular set of programs. They are carried out to compare the
performance of software/hardware against preset criteria such as
processing speed, response times and user friendliness of the equipment.
(2) Simulation tests: use synthetic programs written specifically for
testing purposes. These programs incorporate routines designed to test
a variety of situations. Other features or factors include:
Software factors
Software contracts
Software contracts include the costs, purpose and capacity of the software. The
following are covered in software contracts:
Warranty terms
Support available
Arrangement for upgrades
Maintenance arrangements
Delivery period/time especially for written software
Performance criteria
Ownership
Software licensing
Software licensing covers the following:
Number of users that can install and use the software legally
Whether the software can be copied without infringing copyright
Whether it can be altered without the developer's consent
Circumstances under which the licence can be terminated
Limitation of liability e.g. if the user commits fraud using the software
Obligation to correct errors or bugs if they exist in the software
Hardware factors
Custom-built hardware is rarely necessary. Most hardware consists of standard,
compatible, off-the-shelf components. This is cheaper, easier to maintain, and
ensures compatibility with the equipment of your organization and of your
partners and clients.
The system analysis and design should have precisely determined what sort of
hardware is needed - down to the make and model.
Factors to consider:
Reputation for support (e.g. phone support, onsite visits, website help)
Reputation for reliability, honesty, permanence (very important!)
Knowledge of the equipment
Geographic location - can you get to them easily if you need to?
Ability to offer onsite support or repair
Prices: cheap and affordable
Installation
Software and hardware installation is done by the supplier's technicians or by
a person appointed by the user organization, to avoid the risks associated with
improper installation of the equipment. The system analyst and other
development team members may be called in to assist where appropriate.
User training
It is important that the system users be trained to familiarize themselves with the
hardware and the system before the actual changeover.
The aims of user training are:
File conversion
This involves changing the existing files into a form suitable for the new
system when it becomes operational. It may require that the analyst create the
files from scratch if no computer-based files exist. Where computer-based files
exist, they should be converted to a form relevant to the new system:
(i) Record manually the existing data i.e. the old master files
(ii) Transfer the recorded data to special form required by the new system
(iii) Insert any new data into the file i.e. update the file already in the new form
(form should include data contents and their corresponding formats and
layouts)
(iv) Transcribe the completed form into a medium or storage relevant for the
new system
(v) Validate the file contents to ensure that they are error free before they can
be used in the new system
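Steps (ii) to (v) above can be sketched in miniature: old master-file records are mapped into the layout the new system expects, then validated before use. The record fields and format rules here are hypothetical examples, not taken from any particular system.

```python
# Old master file as extracted from the existing system (illustrative data).
old_master = [
    {"acc": "001", "name": "J. Otieno", "bal": "1500"},
    {"acc": "002", "name": "A. Mwangi", "bal": "-40"},
]

def convert(record):
    """Steps (ii)-(iv): transfer a record into the new system's layout."""
    return {
        "account_no": record["acc"].zfill(6),   # new fixed-width format
        "holder": record["name"].upper(),
        "balance": float(record["bal"]),
    }

def validate(record):
    """Step (v): ensure file contents are error-free before use."""
    return record["account_no"].isdigit() and record["balance"] >= 0

new_master = [convert(r) for r in old_master]
errors = [r for r in new_master if not validate(r)]
print(f"{len(new_master)} converted, {len(errors)} record(s) need correction")
```

The key point is that validation happens after conversion but before the new system goes live, so errors are corrected outside the production system.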
System change-over
Involves changing or switching from existing system to the new developed system.
The following methods may be used:
Direct changeover
The old system ceases its operation and the new system commences operation the
next day. The old system is made redundant in all its aspects. The method is
applicable in the following circumstances:
Parallel changeover
This is a method where new and old systems are allowed to run side by side or
simultaneously until it is proved beyond reasonable doubt that the new system is
working and all the benefits are realized. It is suitable when the new system is
sophisticated and a very careful changeover is required or when the development
team has little confidence in the new system and where there are more staff to
cope with the operations of both system running in parallel.
Phased changeover
The method involves implementation of a system on step-by-step approach. This
implies that only a portion of the system is implemented initially. Other portions
are implemented in phases. For example if it has modules for finance, production
and human resource management, then the finance module is implemented first,
then the production and lastly the human resource management module.
Pilot changeover
It involves installation of new system but using it only in one part of the
organization on an experimental basis. E.g. a bank wishing to computerize its
operations may install a computerized system on one branch on experimental
basis. When the system is proved to be successful, it is transferred to another
branch and after some time to another etc until the entire bank is computerized.
Any refinement that ought to be done on the system should be done before it is
installed in the next branch.
Advantages are:
Allows a new system to be implemented quickly with minimum costs
Allow training of personnel on the new system during implementation
They cause minimum interruption to company operations during systems
implementation
The peak demands are lighter on the end user and the operational
environment
They are less costly
The risks associated with errors and system failure are minimized
a) Comparison of the actual system performance against the anticipated
performance objectives. This involves assessment of system running costs,
benefits etc. as they compare with the estimates.
b) The staffing needs and whether they are more or less than anticipated
c) Any delays in the processing and effects of such delays
d) Effectiveness of the inbuilt security procedures in the system
e) The error rates for input data
f) The output i.e. whether it is correct, timely and distributed correctly to
the relevant users
Evaluation of a system should be carried out after completion of every stage of
SDLC. There are three types of evaluation.
Formative (feedback) evaluation
It produces information that is fed back into the development cycle to improve the
product under development. It serves the needs of those who are involved in
the development process.
Summative evaluation
It is done after the system development project is completed. It provides
information about efficiency of the product to the decision makers who adopt it.
Documentation evaluation
It is performed just before and after hardware and software installation and also
after system changeover. It is carried out to assess general functionality of a
system after settling down.
The quality of program produced
The operational cost of the system
The savings made as a result of the system
The impact of the system on users and their job
Quality and completeness of the system documentation
NB: The system post-implementation review team writes a report that indicates
specific areas within the system that need improvement. This report, called the
post-implementation review report, acts as a reference document during system
maintenance.
The process of the system maintenance should be controlled by the system
analyst.
a) Corrective maintenance
b) Perfective maintenance
c) Adaptive maintenance
d) Preventive maintenance
e) Replacive maintenance
Corrective maintenance
It is usually a change effected in a system in response to a detected problem
or error. Its objective is to ensure that the system remains functional. It
basically involves the removal of errors from the newly developed system, for
example a failure in parts of the system.
Perfective maintenance
It is a change made to perfect a system, i.e. to improve its performance in
terms of response time to user requests, or to amend the system interface to
make the system more user friendly.
Adaptive maintenance
Involves changing a system to take account of a change in its functional
environment.
Preventive maintenance
Carried out on a system to ensure that it can withstand stress. It helps in ensuring
data and software integrity.
Replacive maintenance
It is carried out when a system becomes almost unmaintainable, e.g. due to lack
of documentation, poor design or age.
4.1 Data-Oriented System Development
Data-oriented system development (DOSD) focuses on and recognizes the need for
management and staff to have access to data to facilitate and support decisions.
Users need and want data so they can derive information from it. Inherent in DOSD
systems is the development of an accessible database of information that will
provide the basis for ad hoc reporting.
4.3 Prototyping
Prototyping, also known as heuristic development, is the process of creating a
system through controlled trial and error. It is a method, primarily using faster
development tools such as 4GLs that allows a user to see a high level view of the
workings of the proposed system within a short period of time.
The initial emphasis during development of the prototype is usually placed on the
reports and screens, which are the system aspects most used by the end users.
This allows the end users to see a working model of the proposed system within a
short time.
i. Build the model to create the design. Then, based on that model, develop
the system with all the processing capabilities needed.
ii. Gradually build the actual system that will operate in production using a 4GL
that has been determined to be appropriate for the system being built.
The problem with the first approach is that there can be considerable pressure to
implement an early prototype. Often, users observing a working model cannot
understand why the early prototype has to be refined further. The fact that the
prototype has to be expanded to handle transaction volumes, terminal networks,
backup and recovery procedures, as well as provide for auditability and control is
not often understood.
A potential risk with prototyped systems is that the finished system will have poor
controls. By focusing mainly on what the user wants and what the user uses,
system developers may miss some of the controls that come out of the traditional
system development approach, such as: backup/recovery, security and audit trails.
Change control often becomes much more complicated with prototyped systems.
Changes in designs and requirements happen so quickly that they are seldom
documented or approved and can escalate to a point of being unmaintainable.
i. The concept definition stage defines the business functions and data subject
areas that the system will support and determines the system scope.
ii. The functional design stage uses workshops to model the systems data and
processes and to build a working prototype of critical system components.
iii. The development stage completes the construction of the physical database
and application system, builds the conversion system and develops user aids
and deployment work plans.
iv. The deployment stage includes final user testing and training, data
conversion and the implementation of the application system.
Advantages:
Improved requirements determination.
Large productivity gains have been realized when developing certain types
of applications.
Enables end users to take a more active role in the systems development
process.
Many can be used for prototyping.
Some have new functions such as graphics, modeling, and ad hoc
information retrieval.
Disadvantages:
It is not suited to large transaction-oriented applications or applications with
complex updating requirements.
Standards for testing and quality assurance may not be applied.
Proliferation of uncontrolled data and "private" information systems.
charts and diagrams. They may have any of the following tools: screen and report
generators, data dictionaries, reporting facilities, code generators, and
documentation generators. These tools can greatly increase the productivity of the
systems analyst or designer by:
Enforcing a standard.
Improving communication between users and technical specialists.
Organizing and correlating design components and providing rapid access to
them via a design repository or library.
Automating the tedious and error-prone portions of analysis and design.
Automating testing and version control.
5. Application Packages
An application software package is a set of prewritten, pre-coded application
software programs that are commercially available for sale or lease. Packages
range from very simple programs to very large and complex systems
encompassing hundreds of programs. Packages are normally used for one of the
following three reasons:
Disadvantages of packages:
There are high conversion costs for systems that are sophisticated and
already automated.
Packages may require extensive customization or reprogramming if they
can't easily meet unique requirements. This can inflate development costs.
A system may not be able to perform many functions well in one package
alone. For example, a human resources system may have a good
compensation module but very little capacity for manpower planning or
benefits administration.
Impact if the vendor no longer supports/supplies the package
6. Software reengineering
This is a methodology that addresses the problem of aging or legacy software.
It seeks to upgrade such software by extracting the logic of the existing
system, thereby creating a new system without starting from scratch. The
techniques involved are:
Reverse engineering.
Revision of design and program specifications.
Forward engineering.
7. Reverse engineering
Extracts the business specifications from older software. Reverse engineering tools
analyse the existing program code, file, and database descriptions to produce a
structured specification of the system. This specification can be combined with new
specifications to provide the basis of the new system.
8. Terminology
Object-oriented programming
Object-oriented programming (OOP) is a concept that changed the rules of
computer program development: it is organized around "objects" rather than
"actions", around data rather than logic. Historically, a program has been
viewed as a logical procedure that takes input data, processes it, and produces
output data. The programming challenge was seen as how to write the logic, not
how to define the data. Object-oriented programming takes the view that what we
really care about are the objects we want to manipulate rather than the logic
required to manipulate them. Examples of objects range from human beings
(described by name, address, and so forth) to buildings and floors (whose
properties can be described and managed) down to the little widgets on your
computer desktop (such as buttons and scroll bars).
The first step in OOP is to identify all the objects you want to manipulate and how
they relate to each other, an exercise often known as data modelling. Once you've
identified an object, you generalize it as a class of objects and define the kind of
data it contains and any logic sequences that can manipulate it. Each distinct logic
sequence is known as a method. A real instance of a class is called an "object" or,
in some environments, an "instance of a class." The object or class instance is what
you run in the computer. Its methods provide computer instructions and the class
object characteristics provide relevant data. You communicate with objects - and
they communicate with each other - with well-defined interfaces called messages.
C++ and Java are the most popular object-oriented languages today. The Java
programming language is designed especially for use in distributed applications on
corporate networks and the Internet.
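The class/object/method/message ideas above can be shown in a short sketch. Python is used here for brevity; the same structure applies in C++ or Java. The Account class and its fields are hypothetical examples, not drawn from the text.

```python
# A class generalizes objects of the same kind; its methods are the
# logic sequences that manipulate the data each instance carries.
class Account:                       # the class (generalized object)
    def __init__(self, holder, balance=0.0):
        self.holder = holder         # data the object contains
        self.balance = balance

    def deposit(self, amount):       # a method (distinct logic sequence)
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# A real instance of the class is the "object" you actually run.
acc = Account("J. Otieno", 100.0)
acc.deposit(50.0)    # calling a method is "sending a message" to the object
acc.withdraw(30.0)
print(acc.balance)   # → 120.0
```

Note how the data (holder, balance) and the logic that manipulates it (deposit, withdraw) live together in the object, which is the central idea of OOP.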
INFORMATION SYSTEMS
1. Introduction
An information system is a set of interrelated components that collect, manipulate,
process and transform data into information and provide feedback to meet a
specified objective. A computer based information system is an information system
that uses computer technology to perform input, processing and output activities.
Due to the massive computerization of manual information systems, computer
based information systems are simply referred to as information systems. They are
the subject of discussion in this chapter.
For each functional area in the organization, four levels of organizational hierarchy
can be identified: the operational level, knowledge level, management level and
strategic level. Different types of information systems serve each of these levels.
TYPES OF INFORMATION SYSTEMS
[Diagram: types of information systems mapped to the organizational levels they
serve, e.g. operational-level systems used by operational managers]
People These use the system to fulfil their informational needs. They include
end users and operations personnel such as computer operators, systems
analysts, programmers, information systems management and data
administrators.
Computer Software Refers to the instructions that direct the operation of the
computer hardware. It is classified into system and application software.
Databases Contain all data utilized by application software. An individual set
of stored data is referred to as a file. Physical storage media evidence the
physical existence of stored data, that is: tapes, disk packs, cartridges, and
diskettes.
Transaction processing
Management reporting
Decision support
v. Process interactive support applications The information system contains
applications to support planning, analysis and decision making. The mode of
operation is interactive, with the user responding to questions, requesting
data and receiving results immediately in order to alter inputs until a
solution or satisfactory result is achieved.
1. Management reporting
This is the function involved in producing outputs for users. These outputs are
mainly reports to management for planning, control and monitoring purposes.
Major outputs of an information system include:
i. Transaction documents or screens
ii. Preplanned reports
iii. Preplanned inquiry responses
iv. Ad hoc reports and ad hoc inquiry responses
v. User-machine dialog results
2. Decision support
Types of decisions
a) Structured/programmable decisions
These decisions tend to be repetitive and well defined e.g. inventory replenishment
decisions. A standardized pre-planned or pre-specified approach is used to make
the decision and a specific methodology is applied routinely. Also the type of
information needed to make the decision is known precisely. They are
programmable in the sense that unambiguous rules or procedures can be specified
in advance. These may be a set of steps, flowchart, decision table or formula on
how to make the decision. The decision procedure specifies information to be
obtained before the decision rules are applied. They can be handled by low-level
personnel and may be completely automated.
It is easy to provide information systems support for these types of decisions.
Many structured decisions can be made by the system itself, e.g. rejecting a
customer order if the customer's credit with the company is less than the total
payment for the order. Yet managers must be able to override these system
decisions, because managers have information that the system doesn't have,
e.g. the customer order is not rejected because alternative payment
arrangements have been made with the customer.
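The order-rejection rule described above is simple enough to sketch: the structured part is automated, while the manager retains an override. The function name and figures are illustrative assumptions.

```python
def decide_order(credit_limit, order_total, manager_override=False):
    """Structured decision: accept the order only if the customer's
    credit covers the total payment, unless a manager overrides the
    rule (e.g. alternative payment arrangements exist)."""
    if manager_override:
        return "accept"
    return "accept" if credit_limit >= order_total else "reject"

print(decide_order(500, 800))                         # rejected by the rule
print(decide_order(500, 800, manager_override=True))  # manager overrides
```

This illustrates why structured decisions suit automation: the rule is unambiguous and can be specified in advance, yet the override path preserves the manager's judgment.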
In other cases the system may make only part of the decision required for a
particular activity e.g. it may determine the quantities of each inventory item to be
reordered, but the manager may select the most appropriate vendor for the item
on the basis of delivery lead time, quality and price.
Examples of such decisions include: inventory reorder formulas and rules for
granting credit. Information systems requirements include:
o Clear and unambiguous procedures for data input
o Validation procedures to ensure correct and complete input
o Processing input using decision logic
o Presentation of output so as to facilitate action
b) Semi-structured/semi-programmable decisions
The information requirements and the methodology to be applied are often
known, but some aspects of the decision still rely on the manager: e.g. selecting
the location to build a new warehouse. Here the information requirements for
the decision such as land cost, shipping costs are known, but aspects such as
local labour attitudes or natural hazards still have to be judged and evaluated
by the manager.
c) Unstructured/non-programmable decisions
These decisions tend to be unique e.g. policy formulation for the allocation of
resources. The information needed for decision-making is unpredictable and no
fixed methodology exists. Multiple alternatives are involved and the decision
variables as well as their relationships are too many and/or too complex to fully
specify. Therefore, the manager's experience and intuition play a large part in
making the decision.
In addition there are no pre-established decision procedures either because:
The decision is too infrequent to justify organizational preparation cost
of procedure or
The decision process is not understood well enough, or
The decision process is too dynamic to allow a stable pre-established
decision procedure.
Information systems requirements for support of such decisions are:
Access to data and various analysis and decision procedures.
Data retrieval must allow for ad hoc retrieval requests
Interactive decision support systems with generalized inquiry and
analysis capabilities.
Example: Selecting a CEO of a company.
A transaction is any business related exchange, such as a sale to a client or a
payment to a vendor. Transaction processing systems process and record
transactions as well as update records. They automate the handling of data about
business activities and transactions. They record daily routine transactions such as
sales orders from customers, or bank deposits and withdrawals. Although they are
the oldest type of business information system around and handle routine tasks,
they are critical to business organizations. For example, what would happen if
a bank's system that records deposits and withdrawals and maintains account
balances disappeared?
TPS are vital for the organization, as they gather all the input necessary for
other types of systems. Think of how one could generate a monthly sales report
for middle management or critical marketing information for senior managers
without TPS. TPS provide the basic input to the company's database. A failure
in TPS often means disaster for the organization. Imagine what happens when an
airline reservation system fails: all operations stop and no transaction can be
carried out until the system is up and running again. Long queues form in front
of ATMs and tellers when a bank's TPS crashes.
Characteristics of TPS:
TPS are large and complex in terms of the number of system interfaces with
the various users and databases and usually developed by MIS experts.
TPS control the collection of specific data in specific formats and in
accordance with the rules, policies, and goals of the organisation
(standard format).
They accumulate information from the internal operations of the business.
They are general in nature, applied across organisations.
They are continuously evolving.
A management reporting system (MRS) takes the relatively raw data available
through a TPS and converts it into a meaningful aggregated form that managers
need to conduct their responsibilities. MRS generate information for monitoring
performance (e.g. productivity information) and maintaining coordination (e.g.
between purchasing and accounts payable).
The main input to an MRS is data collected and stored by transaction processing
systems. An MRS further processes transaction data to produce information
useful for specific purposes. Generally, all MRS outputs are pre-programmed by
information systems personnel. Outputs include:
Characteristics of MRS
MIS professionals, rather than end users, usually design MRS, using
life-cycle-oriented development methodologies.
They are large and complex in terms of the number of system interfaces with
the various users and databases.
MRS are built for situations in which information requirements are reasonably
well known and are expected to remain relatively stable. This limits the
informational flexibility of MRS but ensures that a stable informational
environment exists.
They do not directly support the decision making process in a search for
alternative solutions to problems. Information gained through MRS is used in
the decision making process.
They are oriented towards reporting on the past and the present, rather than
projecting the future, though they can be manipulated to do predictive
reporting.
MRS have limited analytical capabilities. They are not built around elaborate
models, but rather rely on summarisation and extraction from the databases
according to the given criteria.
A decision support system (DSS) provides an interactive environment in which
decision makers can quickly manipulate data and models of business operations.
A DSS might be used, for example, to help a management
team decide where to locate a new distribution facility. This is a non-routine,
dynamic problem. Each time a new facility must be built, the competitive,
environmental, or internal contexts are most likely different. New competitors or
government regulations may need to be considered, or the facility may be needed
due to a new product line or business venture.
DSS have less structure and predictable use. They are user-friendly and highly
interactive. Although they use data from the TPS and MIS, they also allow the
inclusion of new data, often from external sources such as current share prices or
prices of competitors.
Top executives need ESS because they are busy and want information quickly and
in an easy to read form. They want to have direct access to information and want
their computer set-up to directly communicate with others. They want structured
forms for viewing and want summaries rather than details.
ES use artificial intelligence technology.
It attempts to codify and manipulate knowledge rather than information
ES may expand the capabilities of a DSS in support of the initial phase of the
decision making process. It can assist the second (design) phase of the decision
making process by suggesting alternative scenarios for "what if" evaluation.
It assists a human in the selection of an appropriate model for the decision
problem. This is an avenue for an automatic model management; the user of
such a system would need less knowledge about models.
ES can simplify model-building; in particular, simulation models lend
themselves to this approach.
ES can provide an explanation of the result obtained with a DSS. This would be
a new and important DSS capability.
ES can act as tutors. In addition ES capabilities may be employed during DSS
development; their general potential in software engineering has been
recognised.
for documents), but also enable them to test the product without having to build
physical prototypes.
Architects use CAD software to create, modify, evaluate and test their designs;
such systems can generate photo-realistic pictures, simulating the lighting in rooms
at different times of the day, perform calculations, for instance on the amount of
paint required. Surgeons use sophisticated CAD systems to design operations.
Financial institutions use knowledge work systems to support trading and portfolio
management with powerful high-end PCs. These allow managers to get
instantaneous analysed results on huge amounts of financial data and provide
access to external databases.
Workflow systems are rule-based programs - (IF this happens THEN take this
action)- that coordinate and monitor the performance of a set of interrelated
tasks in a business process.
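The IF-this-THEN-that pattern described above can be shown as a tiny rule table that a workflow engine evaluates against the current state of a task. The event names and actions are illustrative assumptions.

```python
# A workflow engine in miniature: each rule pairs a condition on the
# current document state with the action to take next.
rules = [
    (lambda doc: doc["status"] == "submitted", "route to reviewer"),
    (lambda doc: doc["status"] == "approved",  "notify requester"),
    (lambda doc: doc["status"] == "rejected",  "return for revision"),
]

def next_action(doc):
    for condition, action in rules:
        if condition(doc):       # IF this happens...
            return action        # ...THEN take this action
    return "no action"

print(next_action({"status": "submitted"}))
```

Separating the rules from the engine is the design point: the business process can be changed by editing the rule table without rewriting the coordination logic.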
(viii) Electronic Funds Transfer (EFT)
EFT is the exchange of money via telecommunications without currency actually
changing hands. EFT refers to any financial transaction that transfers a sum of
money from one account to another electronically. Usually, transactions originate
at a computer at one institution (location) and are transmitted to a computer at
another institution (location), with the monetary amount recorded in the
respective organizations' accounts. Because of the potentially high volume of
money being
exchanged, these systems may be in an extremely high-risk category. Therefore,
access security and authorization of processing are important controls.
Security in an EFT environment is extremely important. Security includes methods
used by the customer to gain access to the system, the communications network
and the host or application-processing site. Individual customer access to the EFT
system is generally controlled by a plastic card and a personal identification
number (PIN). Both items are required to initiate a transaction.
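The two-item access rule, card plus PIN, can be sketched as follows. The card number, PIN handling, and storage scheme here are simplified placeholders for illustration, not a real EFT security design.

```python
import hashlib

def pin_hash(pin):
    """Store only a hash of the PIN, never the PIN itself (simplified)."""
    return hashlib.sha256(pin.encode()).hexdigest()

# Hypothetical register mapping card numbers to PIN hashes.
accounts = {"4000123412341234": pin_hash("4321")}

def authorize(card_no, pin):
    """Both items are required to initiate a transaction: a recognized
    card AND the matching PIN."""
    stored = accounts.get(card_no)
    return stored is not None and stored == pin_hash(pin)

print(authorize("4000123412341234", "4321"))  # both items correct
print(authorize("4000123412341234", "9999"))  # wrong PIN, refused
```

Real EFT systems add many further controls (encryption in transit, transaction limits, audit logging), which is why access security is singled out in the text as critical.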
Officers in ICT department
IT Manager/Director
Systems analysts
Programmers- system and applications
Database administrator
Network administrator
Librarian
Support staff- hardware, software technicians
Data entry clerks
The number of people working in the ICT department and what they do will depend
on:
The size of the computing facility. Larger computers are operated on a shift
work basis.
The nature of the work. Batch processing systems tend to require more staff.
Whether a network is involved. This requires additional staff.
How much software development and maintenance is done in-house rather than
sourced externally.
The information technology staff may be categorized into various sections whose
managers are answerable to the information technology manager. The
responsibilities of the information technology manager include:
Structure of ICT department
[Organization chart: the ICT Director/Manager at the top, with section managers
reporting to them]
Functional structure for information services department
[Chart: functional areas include development support, production control and
support, network management, technology management, capacity management, the
development centre and other support functions]
The sections that make up the ICT department and their functions are discussed
below:
1) Development section
System Analysis Functions include:
System investigations.
System design.
System testing.
System implementation.
System maintenance.
2) Operations section
Duties include:
Planning procedures, schedules and staff timetables.
Contingency planning.
Supervision and coordination of data collection, preparation, control and
computer room operations.
Liaison with the IT manager and system development manager.
a) Data preparation
Data preparation staff are responsible for converting data from source
documents into computer-sensible form.
Duties are:
Correctly entering data from source documents and forms.
Keeping a record of data handled.
Reporting problems with data or equipment.
b) Data control
Data control staff are generally clerks. Duties include:
Receiving incoming work on time.
Checking and logging incoming work before passing it to the data
preparation staff.
Dealing with errors and queries on processing.
Checking and distributing output.
Duties include:
Control of work progress as per targets.
Monitoring machine usage.
Arranging for maintenance and repairs.
Computer operators
Controls and operates hardware in the computer room.
Duties include:
Starting up equipment.
Running programs.
Loading peripherals with appropriate media.
Cleaning and simple maintenance.
Files librarian
Keeps all files organized and up to date. Typical duties are:
Keeping records of files and their use.
Issuing files for authorized use.
Storing files securely.
Database management
This section is headed by the database administrator, who is responsible for
the planning, organization and control of the database. His functions include:
Coordinating database design.
Controlling access to the database for security and privacy.
Establishing back-up and recovery procedures.
Controlling changes to the database.
Selecting and maintaining database software.
Meeting with users to resolve problems and determine changing
requirements.
Network management
This section is headed by the network administrator/controller/manager, whose
functions include:
Assignment of user rights.
Creating and deleting of users.
Training of users.
Conflict resolution.
Advising managers on planning and acquisition of communication
equipment.
Efficiency is the ratio of what is produced to what is consumed. It ranges from
0 to 100%. Systems can be compared by how efficient they are.
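The ratio can be written out directly; the figures below are illustrative.

```python
def efficiency(output_produced, input_consumed):
    """Efficiency as the ratio of what is produced to what is consumed,
    expressed as a percentage in the range 0-100."""
    if input_consumed <= 0:
        raise ValueError("input consumed must be positive")
    return 100.0 * output_produced / input_consumed

# Comparing two systems given the same resources (illustrative figures):
print(efficiency(80, 100))   # → 80.0
print(efficiency(90, 100))   # → 90.0
```

The second system produces more from the same input, so it is the more efficient of the two.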
3. General technological trends
General trends within computing systems include:
Object oriented environment and document management
Networked computing
Mobile commerce
Integrated home computing
The Internet
Intranets and extranets
Optical networks
These are systems that maintain records concerning the flow of funds in the firm
and produce financial statements, such as balance sheets and income statements.
They are among the earliest systems to be computerized.
5.1 OPERATIONAL-LEVEL ACCOUNTING IS
The general ledger subsystem ties all the other financial accounting system
subsystems together and provides managers with:
Periodic accounting reports and statements, such as income statement and
balance sheet
Support for budgeting
Creation of general ledger accounts and definition of the organization's
fiscal period
Production of a list of accounts maintained by the financial accounting
system.
The fixed assets subsystem maintains records of equipment, property, and the
other long-term assets an organization owns. The records include:
The general ledger subsystem uses this information to maintain up-to-date
balances in the various long-term asset accounts of the organization. The
subsystem also may maintain and process data on the gain or loss on the sale of
fixed assets and prepare special income tax forms for fixed assets required by the
federal government.
The accounts payable subsystem provides data directly to the general ledger
subsystem and receives data from the purchase order subsystem.
Typical outputs are
Cheques to creditors
Schedule of accounts payable
The inventory control subsystem provides input to the general ledger subsystem
and receives input from the purchase order and the sales order subsystems. The
basic purpose of the subsystem is to
Keep track of inventory levels
Keep track of inventory costs for the organization
The purchase order subsystem provides information to the accounts payable and
inventory subsystems.
Tactical accounting and financial information systems support management
decision making by providing managers with:
Budgeting Systems
The budgeting system permits managers to
Track actual revenues
Track actual expenses
Compare these amounts to expected revenues and expenses
Compare current budget amounts to those of prior fiscal periods
Compare current budget amounts to other divisions
Compare current budget amounts to other departments
Compare current budget amounts to industry-wide data.
Comparisons of budget data against such standards allow managers to assess how
they use their resources to achieve their goals.
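The budget comparisons above can be sketched in a few lines; the line items and figures are invented for illustration:

```python
# Hypothetical budget vs. actual figures for one fiscal period.
budget = {"revenue": 120_000, "salaries": 60_000, "supplies": 15_000}
actual = {"revenue": 112_500, "salaries": 63_200, "supplies": 14_100}

def variances(budget, actual):
    """Actual minus budget for each line item; a positive expense variance is an overrun."""
    return {item: actual[item] - budget[item] for item in budget}

for item, diff in variances(budget, actual).items():
    print(f"{item}: {diff:+,}")
```

The same function serves every comparison the text lists: swap in a prior period, another division, or industry-wide figures as the `budget` argument.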
The information supplied by a cash flow report helps the manager make decisions
about investing, purchasing, and borrowing money. By simulating many different
possible business conditions, the manager is able to make more informed decisions
about the use of or need for cash for the short term. In short, the manager can
study various reallocations of the resources of a department, division, or other unit.
Capital Budgeting Systems manage information about
The planned acquisition
The disposal of major plant assets during the current year.
The manager may compare the various capital spending plans using three
commonly used evaluation tools: net present value, internal rate of return,
and payback period.
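Two of the three evaluation tools named above can be sketched briefly; the cash flows are invented, and IRR is omitted because it requires an iterative root-finder:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay at time 0 (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Periods until the cumulative cash flow first turns non-negative (None if never)."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

flows = [-10_000, 4_000, 4_000, 4_000, 4_000]  # a hypothetical asset purchase
print(round(npv(0.10, flows), 2))  # 2679.46 -> positive NPV at a 10% discount rate
print(payback_period(flows))       # 3
```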
Different inflation rates
Spreadsheet Software
Spreadsheet software packages provide a versatile tool for financial managers.
Spreadsheet software allows the manager to design partially completed tables or
forms, called templates, which contain the headings and names of the items in the
spreadsheet. The templates also contain the formulas used to calculate column or
row totals, column or row averages, and other statistical quantities on the values
entered into the template.
5.6 COMPUTERIZED AUDITING SOFTWARE
A number of computerized auditing programs are available to assist auditors when
they evaluate or monitor a computerized accounting system. Generalized audit
software:
Telemarketing Systems
Use of the telephone to sell products and services, or telemarketing systems, has
become a common and important means by which organizations improve the
productivity of their sales forces. The telephone allows salespeople to initiate
contacts, offer products and services, or follow up on sales without travel cost or
travel time. It also lets salespeople reach many more customers in a given time
period than they could have through other means.
6.2 TACTICAL MARKETING INFORMATION SYSTEMS
A great deal of the data that tactical marketing information systems utilize is
collected by operational financial information systems. Tactical marketing
information systems often combine operational-level financial data with other data
to support managers in tactical decision making.
services to meet those customers' needs, and forecasting sales for the market
segments and products.
Regardless of type, sales forecasts are usually based on more than historical data;
they are not merely projections of past trends. Sales forecasts are also based on
assumptions about the activities of the competition, governmental action, shifting
customer demand, demographic trends, and a variety of other pertinent factors,
including even the weather.
A package of form letters that salespeople can use or adapt for use
The ability to keep customer lists
The ability to merge letters with customer lists for large mailings
File support features may include the ability to record and store information about
potential and current customers.
Salesperson support software often includes a calendar module to help salespeople
manage their meetings and customer appointments and a tickler file module to
ensure that they follow through on their promises to customers at the appointed
time.
Telemarketing Software
Telemarketing Software provides computer support for identifying customers and
calling them from disk-based telephone directories or from customer files
maintained on a database. The packages may allow you to
Make notes about the telephone calls you make
Generate follow-up letters to the customer
View a customer file while a call to that customer is in progress
These are systems that supply data to operate, monitor and control the production
process.
7.1 OPERATIONAL PRODUCTION IS
to the organization. Inventory management and control systems use information
from operational information systems, such as the shipping and receiving systems,
purchasing systems, and order entry systems.
Just-in-Time Systems
The just-in-time (JIT) system is not a tactical information system, but a tactical
approach to production. The just-in-time approach was created by the Toyota Motor
Company of Japan and has generated many advantages to organizations,
especially those that do repetitive manufacturing. The purpose of the approach is
to eliminate waste in the use of equipment, parts, space, workers' time, and
materials, including the resources devoted to inventories. The basic philosophy of
JIT is that operations should occur just when they are required to maintain the
production schedule. To assure a smooth flow of operations in that environment,
sources of problems must be eradicated.
Product Design and Development Information Systems
Many tactical decisions must be made to design and develop a product, especially
a new product. The design engineering team usually depends on product
specification information derived from customer surveys, target population
analysis, or other marketing research information systems. Teams may use other
computerized systems for designing new products as well.
Plant design
Designing and laying out a manufacturing plant requires large amounts of diverse
information about the proposed plant including:
Automated Materials Handling Software
Automated materials handling (AMH) software tracks, controls, and otherwise
supports the movement of raw materials, work-in-process, and finished goods from
the receiving docks to the shipping docks. AMH software combines with various
materials handling equipment, including conveyors, pick-and-place robots, and
automated guided vehicles, to get this job done.
integrate current manufacturing hardware and software products into systems that
provide computer-integrated manufacturing, or CIM.
o ATM systems
o Cash vault automation
o Cheque processing and verification systems
o EDI systems
o EFT systems
o Document processing systems
o Voice response systems
o Cash management systems
o General ledger systems
o Image processing systems
o Payroll processing systems
o Online banking systems
Online banking uses today's computer technology to give customers the option of
bypassing the time-consuming, paper-based aspects of traditional banking in order
to manage finances more quickly and efficiently.
The advent of the Internet and the popularity of personal computers presented
both an opportunity and a challenge for the banking industry. For years, financial
institutions have used powerful computer networks to automate millions of daily
transactions; today, often the only paper record is the customer's receipt at the
point of sale. Now that its customers are connected to the Internet via personal
computers, banks envision similar economic advantages by adapting those same
internal electronic processes to home use.
Banks view online banking as a powerful value-added tool to attract and retain
new customers while helping to eliminate costly paper handling and teller
interactions in an increasingly competitive banking environment. Today, most
national banks, many regional banks and even smaller banks and credit unions
offer some form of online banking (at least in developed countries), variously
known as PC banking, home banking, electronic banking or Internet banking. Those
that do are sometimes referred to as "brick-to-click" banks, both to distinguish
them from "brick-and-mortar" banks that have yet to offer online banking and
from online or "virtual" banks that have no physical branches or tellers whatsoever.
The challenge for the banking industry has been to design this new service channel
in such a way that its customers will readily learn to use and trust it. Most of the
large banks now offer fully secure, fully functional online banking for free or for a
small fee. Some smaller banks offer limited access or functionality; for instance,
you may be able to view your account balance and history but not initiate
transactions online. As more banks succeed online and more customers use their
sites, fully functional online banking likely will become as commonplace as
automated teller machines.
Virtual banks
Virtual banks are banks without bricks; from the customer's perspective, they exist
entirely on the Internet, where they offer pretty much the same range of services
and adhere to the same federal regulations as your corner bank.
Virtual banks pass the money they save on overhead like buildings and tellers
along to the customer in the form of higher yields, lower fees and more generous
account thresholds. The major disadvantage of virtual banks revolves around ATMs.
Because they have no ATMs of their own, virtual banks typically charge the same
surcharge that a brick-and-mortar bank would if a customer used another bank's
automated teller. Likewise, many virtual banks won't accept deposits via ATM; a
customer has to either deposit the check by mail or transfer money from another
account.
compatible with money managing programs such as Quicken and Microsoft
Money.
These are systems that deal with recruitment, placement, performance evaluation,
compensation and career development of the firm's employees.
Operational human resource information systems provide the manager with data to
support routine and repetitive human resource decisions. Several operational-level
information systems collect and report human resource data. These systems
include information about the organization's positions and employees and about
governmental regulations.
Position Control Systems
A job is usually defined as a group of identical positions. A position, on the other
hand, consists of tasks performed by one worker. The purpose of a position control
system is to identify each position in the organization, the job title within which the
position is classified, and the employee currently assigned to the position.
Reference to the position control system allows a human resource manager to
identify the details about unfilled positions.
outputs provide managers with the basis for many tactical human resource
decisions.
Negotiating with craft, maintenance, office, and factory unions requires information
gathered from many of the human resource information systems. The human
resource team conducting the negotiations needs to be able to obtain numerous ad
hoc reports that analyze the organization's and the union's positions within the
framework of both the industry and the current economic situation. It is also
important that the negotiating team be able to receive ad hoc reports on a very
timely basis because additional questions and tactics will occur to the team while
they are conducting labor negotiations.
Training Software
Many training software packages are available for all types and sizes of computers
to provide on-line training for employees. They include:
10. Important definitions
1.1 Security goals
These are circumstances that have the potential to cause loss or harm, i.e.
circumstances that have the potential to bring about exposures.
Human error
Disgruntled employees
Dishonest employees
Greedy employees who sell information for financial gain
Outsider access – hackers, crackers, criminals, terrorists, consultants,
ex-consultants, ex-employees, competitors, government agencies, spies
(industrial, military etc.), disgruntled customers
Acts of God/natural disasters – earthquakes, floods, hurricanes
Foreign intelligence
Accidents, fires, explosions
Equipment failure
Utility outage
Water leaks, toxic spills
Viruses – these are programmed threats
1.4 Vulnerability
A vulnerability is a weakness within the system that can potentially lead to loss or
harm. The threat of natural disasters has instances that can make the system
vulnerable. If a system has programs that have threats (erroneous programs) then
the system is vulnerable.
1.5 Security controls
These include:
1. Administrative controls – they include:
a. Policies – a policy can be seen as a mechanism for controlling
security
b. Administrative procedures – may be put in place by an organization
to ensure that users only do that which they have been authorized
to do
c. Legal provisions – serve as security controls and discourage some
forms of physical threat
d. Ethics
2. Logical security controls – measures incorporated within the system to
provide protection from adversaries who have already gained physical
access
3. Physical controls – any mechanism that has a physical form, e.g. lockups
4. Environmental controls
Risk analysis
Security planning – a security plan identifies and organizes the security
activities of an organization.
Security policy
Risk analysis
Security policy
Security failures can be costly to business. Losses may be suffered as a result of
the failure itself or costs can be incurred when recovering from the incident,
followed by more costs to secure systems and prevent further failure. A well-
defined set of security policies and procedures can prevent losses and save money.
Management support and commitment – management should approve and
support formal security awareness and training.
Access philosophy – access to computerized information should be based on
a documented need-to-know, need-to-do basis.
Compliance with relevant legislation and regulations
Access authorization – the data owner or manager responsible for the
accurate use and reporting of the information should provide written
authorization for users to gain access to computerized information.
Reviews of access authorization – like any other control, access controls
should be evaluated regularly to ensure they are still effective.
Security awareness – all employees, including management, need to be
made aware on a regular basis of the importance of security. A number of
different mechanisms are available for raising security awareness, including:
o Distribution of a written security policy
o Training on a regular basis of new employees, users and support staff
o Non-disclosure statements signed by employees
o Use of different media in promulgating security e.g. company
newsletter, web page, videos etc.
o Visible enforcement of security rules
o Simulate security incidents for improving security procedures
o Reward employees who report suspicious events
o Periodic audits
These controls may consist of edit tests, totals, reconciliations and identification
and reporting of incorrect, missing or exception data. Automated controls should
be coupled with manual procedures to ensure proper investigation of exceptions.
Input authorization
Input authorization verifies that all transactions have been authorized and
approved by management. Authorization of input helps ensure that only authorized
data is entered into the computer system for processing by applications.
Authorization can be performed online at the time when the data is entered into
the system. A computer-generated report listing the items requiring manual
authorization also may be generated. It is important that controls exist throughout
processing to ensure that authorized data remains unchanged. This can be
accomplished through various accuracy and completeness checks incorporated into
an applications design.
Batch header forms are a data preparation control. All input forms should be clearly
identified with the application name and transaction codes. Where possible, pre-
printed and pre-numbered forms with transaction identification codes and other
constant data items are recommended. This would help ensure that all pertinent
data has been recorded on the input forms and can reduce data recording/entry
errors.
example, the total number of units ordered in the batch of invoices agrees
with the total number of units processed.
Total documents – verification that the total number of documents in the
batch equals the total number of documents processed. For example, the
total number of invoices in a batch agrees with the total number of invoices
processed.
Hash totals – verification that the total of a predetermined numeric field
present in all documents in a batch agrees with the same total computed for
the documents processed. The field totalled typically has no business meaning
of its own (for example, a sum of invoice numbers).
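The batch controls above can be sketched together; the invoice batch and its field names are invented for the example:

```python
# A hypothetical batch of invoices prepared for entry.
batch = [
    {"invoice_no": 1041, "units": 12},
    {"invoice_no": 1042, "units": 5},
    {"invoice_no": 1043, "units": 8},
]

def batch_controls(batch):
    return {
        "total_documents": len(batch),                      # document count
        "total_units": sum(d["units"] for d in batch),      # batch total of a business field
        "hash_total": sum(d["invoice_no"] for d in batch),  # field with no business meaning
    }

header = batch_controls(batch)          # recorded on the batch header form
# ...after processing, the same totals are recomputed and must agree:
assert batch_controls(batch) == header
```

A dropped or duplicated invoice changes the document count; a mis-keyed invoice number changes the hash total even when the count still agrees.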
Data conversion error corrections are needed during the data conversion process.
Errors can occur due to duplication of transactions and inaccurate data entry.
These errors can, in turn, greatly impact the completeness and accuracy of the
data. Corrections to data should be processed through the normal data conversion
process and should be verified, authorized and re-entered to the system as a part
of normal processing.
o Logging of errors
o Timely corrections
o Upstream resubmission
o Approval of corrections
o Suspense file
o Error file
o Validity of corrections
Anticipation – the user anticipates the receipt of data
Transmittal log – this log documents transmission or receipt of data
Cancellation of source documents – procedures to cancel source
documents, for example, by punching with holes or marks, to avoid
duplicate entry
Edit controls are preventative controls that are used in a program before data is
processed. If an edit control is not in place or does not work correctly, the
preventative control measures do not work effectively. This may cause the
processing of inaccurate data.
Range check – data should be within a predetermined range of values. For
example, product type codes range from 100 to 250. Any code outside this
range should be rejected as an invalid product type.
Validity check – programmed checking of the data's validity in accordance
with predetermined criteria. For example, a payroll record contains a field for
marital status; the acceptable status codes are M or S. If any other code is
entered, the record should be rejected.
Reasonableness check – input data is matched to predetermined reasonable
limits or occurrence rates. For example, in most instances, a bakery usually
receives orders for no more than 20 crates. If an order for more than 20
crates is received, the computer program should be designed to print the
record with a warning indicating that the order appears unreasonable.
Table look-ups – input data complies with predetermined criteria maintained
in a computerized table of possible values. For example, the input clerk
enters a city code of 1 to 10. This number corresponds with a computerized
table that matches the code to a city name.
Existence check – data is entered correctly and agrees with valid
predetermined criteria. For example, a valid transaction code must be
entered in the transaction code field.
Key verification – the keying-in process is repeated by a separate individual
using a machine that compares the original keystrokes to the repeated keyed
input. For example, the worker number is keyed twice and compared to
verify the keying process.
Check digit – a numeric value that has been calculated mathematically is
added to data to ensure that the original data have not been altered or an
incorrect but valid value substituted. This control is effective in detecting
transposition and transcription errors. For example, a check digit is added to
an account number so it can be checked for accuracy when it is used.
Completeness check – a field should always contain data and not zeros or
blanks. A check of each byte of that field should be performed to determine
that some form of data, not blanks or zeros, is present. For example, a
worker number on a new employee record is left blank. This is identified as a
key field and the record would be rejected, with a request that the field be
completed before the record is accepted for processing.
Duplicate check – new transactions are matched to those previously input to
ensure that they have not already been entered. For example, a vendor
invoice number agrees with previously recorded invoices to ensure that the
current order is not a duplicate and, therefore, the vendor will not be paid
twice.
Logical relationship check – if a particular condition is true, then one or more
additional conditions or data input relationships may be required to be true
for the input to be considered valid. For example, the date of engagement of
an employee may be required to be more than sixteen years past his or her
date of birth.
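A few of the edit controls above can be sketched as small validation functions. The ranges and codes follow the text's examples; the modulus-10 check-digit scheme is one common choice, assumed for illustration since the text fixes none:

```python
def range_check(product_code):
    """Product type codes must fall in the range 100-250."""
    return 100 <= product_code <= 250

def validity_check(marital_status):
    """Only the status codes M or S are acceptable."""
    return marital_status in {"M", "S"}

def completeness_check(field):
    """A key field must contain real data, not blanks or zeros."""
    stripped = field.strip()
    return stripped != "" and set(stripped) != {"0"}

def with_check_digit(account_number):
    """Append a simple modulus-10 check digit (an assumed scheme for illustration)."""
    digit_sum = sum(int(d) for d in str(account_number))
    return account_number * 10 + (10 - digit_sum % 10) % 10

assert range_check(175) and not range_check(300)
assert validity_check("M") and not validity_check("X")
assert not completeness_check("   ")
print(with_check_digit(1234))  # 12340
```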
are processing control techniques that can be used to address the issues of
completeness and accuracy of accumulated data.
Manual recalculations – a sample of transactions may be recalculated
manually to ensure that processing is accomplishing the anticipated task.
Editing – an edit check is a program instruction or subroutine that tests for
accurate, complete and valid input and updates in an application.
Run-to-run totals – run-to-run totals provide the ability to verify data values
through the stages of application processing. Run-to-run total verification
ensures that data read into the computer was accepted and then applied to
the updating process.
Programmed controls – software can be used to detect and initiate
corrective action for errors in data and processing. For example, if the
incorrect file or file version is provided for processing, the application
program could display messages instructing that the proper file and version
be used.
Reasonableness verification of calculated amounts – application programs
can verify the reasonableness of calculated amounts. The reasonableness
can be tested to ensure appropriateness to predetermined criteria. Any
transaction that is determined to be unreasonable may be rejected pending
further review.
Limit checks on calculated amounts – an edit check can provide assurance,
through the use of predetermined limits, that calculated amounts have not
been keyed incorrectly. Any transaction exceeding the limit may be rejected
for further investigation.
Reconciliation of file totals – reconciliation of file totals should be performed
on a routine basis. Reconciliation may be performed through the use of a
manually maintained account, a file control record or an independent control
file.
Exception reports – an exception report is generated by a program that
identifies transactions or data that appear to be incorrect. These items may
be outside a predetermined range or may not conform to specified criteria.
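Run-to-run totals in particular can be sketched simply; the record layout is invented for the example:

```python
def control_totals(records):
    """Record count and amount total carried from one processing run to the next."""
    return len(records), round(sum(r["amount"] for r in records), 2)

input_run = [{"amount": 100.00}, {"amount": 250.50}, {"amount": 75.25}]
expected = control_totals(input_run)  # taken at the end of the edit run

updated_run = list(input_run)  # the update run; nothing should be dropped or altered
assert control_totals(updated_run) == expected  # totals verified run-to-run
```

If the update run dropped a record or changed an amount, the tuple comparison would fail and the discrepancy would be caught before the next stage.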
File controls should ensure that only authorized processing occurs to stored data.
Types of controls over data files are:
Before and after image reporting – computer data on a file prior to and after
a transaction is processed can be recorded and reported. The before and
after image makes it possible to trace the impact transactions have on
computer records.
Maintenance error reporting and handling – control procedures should be in
place to ensure that all error reports are properly reconciled and corrections
are submitted on a timely basis. To ensure segregation of duties, error
corrections should be properly reviewed and authorized by personnel who
did not initiate the transaction.
Source documentation retention – source documentation should be retained
for an adequate time period to enable retrieval, reconstruction or verification
of data. Policies regarding the retention of source documentation should be
enforced. Originating departments should maintain copies of source
documentation and ensure that only authorized personnel have access.
When appropriate, source documentation should be destroyed in a secure,
controlled environment.
Internal and external labelling – internal and external labelling of removable
storage media is imperative to ensure that the proper data is loaded for
processing. External labels provide the basic level of assurance that the
correct data medium is loaded for processing. Internal labels, including file
header records, provide assurance that the proper data files are used and
allow for automated checking.
Version usage – it is critical that the proper version of a file, such as the date
and time of the data, be used, as well as the correct file, in order for the
processing to be correct. For example, transactions should be applied to the
most current database while restart procedures should use earlier versions.
Data file security – data file security controls prevent unauthorized users
who may have access to the application from improperly altering stored data
files. These controls do not provide assurances relating to the validity of the
data itself.
One-for-one checking – individual documents agree with a detailed listing of
documents processed by the computer. It is necessary to ensure that all
documents have been received for processing.
Pre-recorded input – certain information fields are pre-printed on blank input
forms to reduce initial input errors.
Transaction logs – all transaction input activity is recorded by the computer.
A detailed listing including date of input, time of input, user ID and terminal
location can then be generated to provide an audit trail. It also permits
operations personnel to determine which transactions have been posted.
This will help to decrease the research time needed to investigate
exceptions and decrease recovery time if a system failure occurs.
File updating and maintenance authorization – proper authorization for file
updating and maintenance is necessary to ensure that stored data are
adequately safeguarded, correct and up-to-date. Application programs may
contain access restrictions in addition to overall system access restrictions.
The additional security may provide levels of authorization in addition to
providing an audit trail of file maintenance.
Parity checking – data transfers in a computer system are expected to
be made in a relatively error-free environment. However, when
programs or vital data are transmitted, additional controls are needed.
Transmission errors are controlled primarily by error-detecting or
error-correcting codes. The former are used more often because error-
correcting codes are costly to implement and are unable to correct all
errors.
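Parity checking, the simplest error-detecting code mentioned above, can be sketched for a single byte:

```python
def parity_bit(byte):
    """Even parity: the bit that makes the total number of 1 bits even."""
    return bin(byte).count("1") % 2

sent = 0b1011001              # four 1 bits, so the parity bit is 0
bit = parity_bit(sent)
received = sent ^ 0b0000100   # a single-bit transmission error flips one bit
assert parity_bit(received) != bit  # the mismatch reveals the error
```

A parity bit detects any odd number of flipped bits but cannot locate or correct them, which is why the text notes that error-correcting codes are costlier.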
Output controls provide assurance that the data delivered to users will be
presented, formatted and delivered in a consistent and secure manner. Output
controls include the following:
Logging and storage of negotiable, sensitive and critical forms in a secure place –
negotiable, sensitive or critical forms should be properly logged and secured
to provide adequate safeguards against theft or damage. The form log should
be routinely reconciled to the inventory on hand, and any discrepancies should
be properly researched.
Computer generation of negotiable instruments, forms and signatures – the
computer generation of negotiable instruments, forms and signatures should be
properly controlled. A detailed listing of generated forms should be compared to
the physical forms received. All exceptions, rejections and mutilations should be
accounted for properly.
Report distribution – output reports should be distributed according to
authorized distribution parameters, which may be automated or manual.
Operations personnel should verify that output reports are complete and that
they are delivered according to schedule. All reports should be logged prior to
distribution.
Data integrity testing is a series of substantive tests that examines accuracy,
completeness, consistency and authorization of data holdings. It employs
testing similar to that used for input control. Data integrity tests will indicate
failures in input or processing controls. Controls for ensuring the integrity of
accumulated data on a file can be exercised by checking data on the file
regularly. When this checking is done against authorized source documentation,
it is usual to check only a portion of the file at a time. Since the whole file is
regularly checked in cycles, the control technique is often referred to as cyclical
checking. Data integrity issues can be identified by testing whether the data
conform to the following definitions.
(i) Domain integrity – this testing is aimed at verifying that the data
conform to definitions; that is, that the data items are all in the correct
domains. The major objective of this exercise is to verify that edit and
validation routines are working satisfactorily. These tests are field-level
based and ensure that each data item has a legitimate value in the correct
range or set.
(ii) Relational integrity – these tests are performed at the record level
and usually involve calculating and verifying various calculated fields,
such as control totals. Examples of their use would be in checking aspects
such as payroll calculations or interest payments. Computerized data
frequently have control totals built into various fields and, by the nature of
these fields, they are computed and would be subject to the same type of
tests. These tests will also detect direct modification of sensitive data, i.e.
if someone has bypassed application programs, as these types of data
are often protected with control totals.
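Both integrity tests can be sketched on a toy payroll file; the field names, ranges and rounding tolerance are assumptions for illustration:

```python
payroll = [
    {"emp_no": 101, "hours": 40, "rate": 12.50, "gross": 500.00},
    {"emp_no": 102, "hours": 35, "rate": 10.00, "gross": 350.00},
]

def domain_ok(rec):
    """Field-level (domain) test: each item lies in its legitimate range."""
    return 0 <= rec["hours"] <= 80 and rec["rate"] > 0

def relational_ok(rec):
    """Record-level (relational) test: a calculated field agrees with its components."""
    return abs(rec["gross"] - rec["hours"] * rec["rate"]) < 0.005

assert all(domain_ok(r) and relational_ok(r) for r in payroll)
```

A record whose `gross` no longer equals `hours * rate` fails the relational test even when every field individually passes its domain test, which is how direct modification of a protected field is caught.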
This is a function implemented at the operating system level and usually also
made available at the application level by the operating system. It controls access to the
system and system resources so that only authorized accesses are allowed, e.g.
It is a form of logical access control, which involves protection of resources from
users who have physical access to the computer system.
The access control reference monitor model has a reference monitor, which
intercepts all access attempts. It is always invoked when the target object is
referenced and decides whether to deny or grant requests as per the rules
incorporated within the monitor.
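The reference monitor model described above can be sketched as a single choke-point function; the subjects, objects and rules are invented for the example:

```python
# Illustrative rule base: (subject, object) -> permitted access modes.
RULES = {
    ("alice", "payroll.dat"): {"read"},
    ("bob", "payroll.dat"): {"read", "write"},
}

def reference_monitor(subject, obj, access):
    """Invoked on every reference; grants or denies per the incorporated rules."""
    return access in RULES.get((subject, obj), set())

assert reference_monitor("bob", "payroll.dat", "write")
assert not reference_monitor("alice", "payroll.dat", "write")  # denied
assert not reference_monitor("eve", "payroll.dat", "read")     # unknown subject
```

The essential property is that every access attempt passes through this one function and anything not explicitly granted is denied by default.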
3.1 Identification
Involves establishing the identity of the subject (Who are you?). Identification can use:
- ID, full name
- Workstation ID, IP address
- Magnetic card (requires a reader)
- Smart card (inbuilt intelligence and computation capability)
They are quite effective when thresholds are sensible (there is a substantial
difference between two different people) and the physical condition of the person is
normal (comparable to the time when the reference measurement was first made).
However, they require expensive equipment and are still rare. Buyers are also
deterred by fears of impersonation or the belief that the devices will be difficult to
use. In addition, users dislike being measured.
3.2 Authentication
Involves verifying the identity of the subject (Are you who you say you are? Prove it!).
Personal authentication may involve:
- Something you know: password, PIN, code phrase
- Something you have: keys, tokens, cards, smart cards
- Something you are: fingerprints, retina patterns, voice patterns
- The way you work: handwriting (signature), keystroke patterns
- Something you know: question about your background, favourite
colour, pet name etc.
3.3 Authorization
Involves determining the access rights to various system objects/resources. The
security requirement to be addressed is protection against unauthorized access
to system resources. There is a need to define an authorization policy as well as
implementation mechanisms. An authorization policy defines the activities permitted or
prohibited within the system. Authorization mechanisms implement the
authorization policy and include directories of access rights, access control lists
(ACLs) and access tickets or capabilities.
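An access control list, one of the mechanisms just listed, can be sketched as a per-object table; the object, subjects and rights are invented for the example:

```python
# ACL: each object carries a list of (subject, rights) entries.
acl = {
    "ledger.db": [("finance_mgr", {"read", "update"}),
                  ("auditor", {"read"})],
}

def authorized(subject, obj, right):
    """True if the object's ACL grants this subject the requested right."""
    return any(s == subject and right in rights
               for s, rights in acl.get(obj, []))

assert authorized("auditor", "ledger.db", "read")
assert not authorized("auditor", "ledger.db", "update")
```

An ACL stores the rights with the object; a capability (access ticket) stores them with the subject instead, so the same check is made but the table is keyed the other way around.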
4. Logical security
Logical access into the computer can be gained through several avenues. Each
avenue is subject to appropriate levels of access security. Methods of access
include the following:
4.1 Logical access issues and exposures
Technical exposures
independently and travel from machine to machine across network
connections. Worms may also have portions of themselves running on many
different machines.
7. Logic bombs are similar to computer viruses, but they do not self-replicate.
The creation of logic bombs requires some specialized knowledge, as it
involves programming the destruction or modification of data at a specific
time in the future. However, unlike viruses or worms, logic bombs are very
difficult to detect before they blow up; thus, of all the computer crime
schemes, they have the greatest potential for damage. Detonation can be
timed to cause maximum damage and to take place long after the departure
of the perpetrator. The logic bomb may also be used as a tool of extortion,
with a ransom being demanded in exchange for disclosure of the location of
the bomb.
8. Trap doors are exits out of an authorized program that allow insertion of
specific logic, such as program interrupts, to permit a review of data during
processing. These holes also permit insertion of unauthorized logic.
9. Asynchronous attacks occur in multiprocessing environments where data
move asynchronously (one character at a time with a start and stop signal)
across telecommunication lines. As a result, numerous data transmissions
must wait for the line to be free (and flowing in the proper direction) before
being transmitted. Data that is waiting is susceptible to unauthorized
accesses called asynchronous attacks. These attacks, which are usually very
small pinlike insertions into cable, may be committed via hardware and are
extremely hard to detect.
10. Data leakage involves siphoning or leaking information out of the
computer. This can involve dumping files to paper or can be as simple as
stealing computer reports and tapes.
11. Wire-tapping involves eavesdropping on information being transmitted
over telecommunications lines.
12. Piggybacking is the act of following an authorized person through a
secured door or electronically attaching to an authorized telecommunication
link to intercept and possibly alter transmissions.
13. Shut down of the computer can be initiated through terminals or
microcomputers connected directly (online) or indirectly (via dial-up lines) to the
computer. Usually, only individuals who know a high-level system logon-ID can
initiate the shutdown process. This security measure is effective only
if proper security access controls are in place for the high-level logon-ID and
the telecommunications connections into the computer. Some systems have
proven to be vulnerable to shutting themselves down under certain
conditions of overload.
14. Denial of service is an attack that disrupts or completely denies service to
legitimate users, networks, systems or other resources. The intent of any
such attack is usually malicious in nature and often takes little skill because
the requisite tools are readily available.
Viruses
Viruses are a significant and very real logical access issue. The term virus is a
generic term applied to a variety of malicious computer programs. Traditional
viruses attach themselves to other executable code, infect the user's computer,
replicate themselves on the user's hard disk and then damage data, the hard disk
or files. Viruses usually attack four parts of the computer:
Executable program files
File-directory system that tracks the location of all the computer's files
Boot and system areas that are needed to start the computer
Data files
Computer viruses are a threat to computers of any type. Their effects can range
from the annoying but harmless prank to damaged files and crashed networks. In
today's environment, networks are the ideal way to propagate viruses through a
system. The greatest risk is from electronic mail (e-mail) attachments from friends
and/or anonymous people through the Internet. There are two major ways to
prevent and detect viruses that infect computers and network systems: policy and
procedure controls and technical controls.
Some of the policy and procedure controls that should be in place are:
Build any system from original, clean master copies. Boot only from original
diskettes whose write protection has always been in place.
Allow no disk to be used until it has been scanned on a stand-alone machine
that is used for no other purpose and is not connected to the network.
Update virus software scanning definitions frequently
Write-protect all diskettes with .EXE or .COM extensions
Have vendors run demonstrations on their machines, not yours
Enforce a rule of not using shareware without first scanning the shareware
thoroughly for a virus
Commercial software is occasionally supplied with a Trojan horse (viruses or
worms). Scan before any new software is installed.
Insist that field technicians scan their disks on a test machine before they
use any of their disks on the system
Ensure that the network administrator uses workstation and server anti-virus
software
Ensure that all servers are equipped with an activated current release of the
virus detection software
Create a special master boot record that makes the hard disk inaccessible
when booting from a diskette or CD-ROM. This ensures that the hard disk
cannot be contaminated by the diskette or optical media
Consider encrypting files and then decrypting them before execution
Ensure that bridge, router and gateway updates are authentic, as fake
updates are a very easy way to place and hide a Trojan horse.
Backups are a vital element of anti-virus strategy. Be sure to have a sound
and effective backup plan in place. This plan should account for scanning
selected backup files for virus infection once a virus has been detected.
Educate users so they will heed these policies and procedures
Review anti-virus policies and procedures at least once a year
Prepare a virus eradication procedure and identify a contact person.
Technical means
Technical methods of preventing viruses can be implemented through hardware
and software means.
The following are hardware tactics that can reduce the risk of infection:
Use workstations without floppy disks
Use boot virus protection (i.e. built-in firmware based virus protection)
Use remote booting
Use a hardware based password
Use write protected tabs on floppy disks
Software is by far the most common anti-virus tool. Anti-virus software should
primarily be used as a preventative control. Unless updated periodically, anti-virus
software will not be an effective tool against viruses.
The best way to protect the computer against viruses is to use anti-viral software.
There are several kinds. Two types of scanners are available:
One checks to see if your computer has any files that have been infected with
known viruses
The other checks for atypical instructions (such as instructions to modify
operating system files) and prevents completion of the instruction until the user
has verified that it is legitimate.
Once a virus has been detected, an eradication program can be used to wipe the
virus from the hard disk. Sometimes eradication programs can kill the virus without
having to delete the infected program or data file, while other times those infected
files must be deleted. Still other programs, sometimes called inoculators, will not
allow a program to be run if it contains a virus.
checkers take advantage of the fact that executable programs and boot
sectors do not change very often, if at all.
Computer crime can be performed with absolutely nothing physically being taken
or stolen. Simply viewing computerized data can provide an offender with enough
intelligence to steal ideas or confidential information (intellectual property).
Committing crimes that exploit the computer and the information it contains can be
damaging to the reputation, morale and very existence of an organization. Loss of
customers, embarrassment to management and legal actions against the
organization can be a result.
Legal repercussions: there are numerous privacy and human rights laws an
organization should consider when developing security policies and procedures.
These laws can protect the organization but can also protect the perpetrator
from prosecution. In addition, not having proper security measures could
expose the organization to lawsuits from investors and insurers if a significant
loss occurs from a security violation. Most companies also must comply with
industry-specific regulatory agencies.
Sabotage: some perpetrators are not looking for financial gain. They merely
want to cause damage due to dislike of the organization or for self-gratification.
Logical access violators are often the same people who exploit physical exposures,
although the skills needed to exploit logical exposures are more technical and
complex.
Access control software generally processes access requests in the following way:
Identification of users: users must identify themselves to the access control
software, for example with a name and account number
Authentication: users must prove that they are who they claim to be.
Authentication is a two-way process where the software must first verify the
validity of the user and then proceed to verify prior-knowledge information.
For example, users may provide the following information:
o Remembered information such as name, account number and
password
o Possessed objects such as a badge, plastic card or key
o Personal characteristics such as fingerprint, voice and signature
Passwords should not be displayed in any form: not on a computer screen
when entered, not on computer reports, not in index or card files and not
written on pieces of paper taped inside a person's desk. These are the first
places a potential perpetrator will look.
Passwords should be changed periodically. The best method is for the
computer system to force the change by notifying the user prior to the
password expiration date.
Passwords must be unique to an individual. If a password is known to more
than one person, the responsibility of the user for all activity within their
account cannot be enforced.
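Forcing periodic change, as recommended above, amounts to comparing the password's age against an expiry interval and warning the user shortly before the deadline. A sketch, assuming a hypothetical 90-day policy with a 7-day warning window:

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)      # assumed policy interval
WARN_BEFORE = timedelta(days=7)   # notify the user ahead of expiry

def password_status(last_changed, today):
    # Return "expired", "warn" (prompt the user to change soon) or "ok".
    age = today - last_changed
    if age >= MAX_AGE:
        return "expired"
    if age >= MAX_AGE - WARN_BEFORE:
        return "warn"
    return "ok"
```

The system would run this check at each login and force a change once the status reaches "expired".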
Users enter this password along with a password they have memorized to gain
access to the system. This technique involves something you have (a device
subject to theft) and something you know (a personal identification number). Such
devices gain their one time password status because of a unique session
characteristic (e.g. ID or time) appended to the password.
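The one-time quality described above comes from mixing a session characteristic (here, a time window) into the code, so a captured code is useless once the window passes. A sketch using an HMAC over the window counter (the window numbering and 6-character code length are illustrative assumptions):

```python
import hashlib
import hmac

def one_time_code(shared_secret, time_step):
    # time_step is the current time-window number (e.g. seconds since
    # epoch divided by 30); the code is an HMAC of that counter, so it
    # changes every window.
    msg = time_step.to_bytes(8, "big")
    return hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()[:6]
```

The token device and the host share the secret; both compute the code for the current window and the host compares the two.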
4) Biometric security access control
This control restricts computer access based on a physical feature of the user, such
as a fingerprint or eye retina pattern. A reader is used to interpret the individual's
biometric features before permitting computer access. This is a very effective
access control because it is difficult to circumvent, though traditionally it has been
used very little as an access control technique. However, due to advances in
hardware efficiency and storage, this approach is becoming a more viable option
as an access control mechanism. Biometric access controls are also the best means
of authenticating a user's identity based on something you are.
5) Terminal usage restraints
Terminal security: this security feature restricts the number of
terminals that can access certain transactions based on the
physical/logical address of the terminal.
Terminal locks: this security feature prevents a computer terminal
from being turned on until a key lock is released with a physical key
or card key.
6) Dial-back procedures
When a dial-up line is used, access should be restricted by a dial-back mechanism.
Dial-back interrupts the telecommunications dial-up connection to the computer by
dialling back the caller to validate user authority.
7) Restrict and monitor access to computer features that bypass
security
Generally, only system software programmers should have access to these
features:
Bypass Label Processing (BLP): BLP bypasses the computer's reading of the
file label. Since most access control rules are based on file names (labels),
this can bypass access security.
System exits: this system software feature permits the user to perform
complex system maintenance, which may be tailored to a specific
environment or company. System exits often operate outside the computer
security system and thus their use is not restricted or reported.
Special system logon-IDs: these logon-IDs are often provided with the
computer by the vendor. The names can be easily determined because they
are the same for all similar computer systems. Their passwords should be
changed immediately upon installation to secure them.
Many computer systems can automatically log computer activity initiated through a
logon-ID or computer terminal. This is known as a transaction log. The information
can be used to provide a management/audit trail.
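A transaction log of this kind is conceptually just an append-only record of who did what, where and when. A minimal in-memory sketch (a real system would write to a protected log file; the field names are assumptions):

```python
import datetime

audit_trail = []  # stands in for a protected, append-only log file

def log_access(logon_id, terminal, action):
    # Append one record per attempted action, stamped with the time.
    audit_trail.append({
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "logon_id": logon_id,
        "terminal": terminal,
        "action": action,
    })
```

Management or auditors can later filter such records by logon-ID or terminal to reconstruct an activity trail.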
9) Data classification
Computer files, like documents, have varying degrees of sensitivity. By assigning
classes or levels of sensitivity to computer files, management can establish
guidelines for the level of access control that should be assigned. Classifications
should be simple, such as high, medium and low. End-user managers and the
security administrator can then use these classifications to help determine who
should be able to access what.
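With simple ordered levels such as high, medium and low, the access decision reduces to comparing a user's clearance against a file's classification. A sketch of that comparison (the three-level scheme follows the text; everything else is illustrative):

```python
# Ordered sensitivity levels, lowest to highest.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def may_access(user_clearance, file_classification):
    # A user may access a file only if their clearance is at least
    # as high as the file's sensitivity level.
    return LEVELS[user_clearance] >= LEVELS[file_classification]
```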
data should be recorded on removable hard drives, which are more easily secured
than fixed or floppy disks. Software can also be used to control access to
microcomputer data. The basic software approach restricts access to program and
data files with a password system. Preventative controls such as encryption
become more important for protecting sensitive data in the event that a PC or
laptop is lost, stolen or sold.
5. Physical security
Exposures that exist from accidental or intentional violation of these access paths
include:
Unauthorized entry
Damage, vandalism or theft to equipment or documents
Copying or viewing of sensitive or copyrighted information
Alteration of sensitive equipment and information
Public disclosure of sensitive information
Abuse of data processing resources
Blackmail
Embezzlement
Possible perpetrators
The most likely source of exposure is from the uninformed, accidental or
unknowing person, although the greatest impact may be from those with malicious
or fraudulent intent.
Bolting door locks: these locks require the traditional metal key to gain
entry. The key should be stamped "Do not duplicate".
device that then activates the door locking mechanism. Electronic door locks
have the following advantages over bolting and combination locks:
Bonded personnel: all service contract personnel, such as cleaning
people and off-site storage services, should be bonded. This does not
improve physical security but limits the financial exposure of the
organization.
Deadman doors: this system uses a pair of (two) doors, typically found in
entries to facilities such as computer rooms and document stations. For the
second door to operate, the first entry door must close and lock, with only
one person permitted in the holding area. This reduces risk of piggybacking,
when an unauthorized person follows an authorized person through a
secured entry.
Computer terminal locks: these locks secure the device to the desk, prevent
the computer from being turned on or disengage keyboard recognition,
preventing use.
Segregation of responsibilities
A traditional security control is to ensure that there are no instances where one
individual is solely responsible for setting, implementing and policing controls and,
at the same time, responsible for the use of the systems. The use of a number of
people, all responsible for some part of information system controls or operations,
allows each to act as a check upon another. Since no employee is performing all
the steps in a single transaction, the others involved in the transaction can monitor
for accidents and crime.
Duties that are typically segregated include:
Systems development
Management of input media
Operating the system
Management of documentation and file archives
Distribution of output
Hiring practices: to ensure that the most effective and efficient staff are
chosen and that the company is in compliance with legal requirements.
Practices include:
o Background checks
o Confidentiality agreements
o Employee bonding to protect against losses due to theft
o Conflict of interest agreements
o Non-compete agreements
Employee handbook: distributed to all employees upon hiring, it should
explain items such as:
o Security policies and procedures
o Company expectations
o Employee benefits
o Disciplinary actions
o Performance evaluations etc.
Promotion policies: should be fair and understood by employees, based
on objective criteria considering performance, education, experience and
level of responsibility.
Training: should be provided on a fair and regular basis.
Scheduling and time reporting: proper scheduling provides for a more
efficient operation and use of computing resources.
Employee performance evaluations: employee assessment must be a
standard and regular feature for all IS staff.
Required vacations: ensure that, at least once a year, someone other
than the regular employee will perform a job function. This reduces
the opportunity to commit improper or illegal acts.
Job rotation: provides an additional control (to reduce the risk of
fraudulent or malicious acts), since the same individual does not perform
the same tasks all the time.
Termination policies: policies should be structured to provide adequate
protection for the organization's computer assets and data. They should
address:
o Voluntary termination
o Immediate termination
o Return of all access keys, ID cards and badges to prevent easy
physical access
o Deletion of assigned logon-ID and passwords to prohibit system
access
o Notification to other staff and facilities security to increase
awareness of the terminated employee's status
o Arrangement of the final pay routines to remove the employee
from active payroll files
o Performance of a termination interview to gather insight on the
employee's perception of management
o Return of all company property
o Escort from the premises.
7. Network security
Communication networks (wide area or local area networks) generally include
devices connected to the network, and programs and files supporting the network
operations. Control is accomplished through a network control terminal and
specialized communications software.
A terminal identification file should be maintained by the communication
software to check the authentication of a terminal when it tries to send or
receive messages.
Data encryption should be used where appropriate to protect messages
from disclosure during transmission.
Some common network management and control software include Novell NetWare,
Windows NT, UNIX, NetView, NetPass etc.
The LAN security provisions available depend on the software product, product
version and implementation. Commonly available network security administrative
capabilities include:
To minimize the risk of unauthorized dial-in access, remote users should
never store their passwords in plain text login scripts on notebooks and
laptops. Furthermore, portable PCs should be protected by physical keys
and/or basic input output system (BIOS) based passwords to limit access
to data if stolen.
Client/server risks and issues
Since the early 1990s, client/server technology has become one of the
predominant ways many organizations have processed production data and
developed and delivered mission critical products and services.
a) Disclosure
It is relatively simple for someone to eavesdrop on a conversation taking place
over the Internet. Messages and data traversing the Internet can be seen by other
machines, including e-mail files, passwords and, in some cases, keystrokes as they
are being entered in real time.
b) Masquerade
A common attack is a user pretending to be someone else to gain additional
privileges or access to otherwise forbidden data or systems. This can involve a
machine being reprogrammed to masquerade as another machine (such as by
changing its Internet Protocol (IP) address). This is referred to as spoofing.
c) Unauthorized access
Many Internet software packages contain vulnerabilities that render systems
subject to attack. Additionally, many of these systems are large and difficult to
configure, resulting in a large percentage of unauthorized access incidents.
d) Loss of integrity
Data or messages traversing the Internet may be altered, accidentally or
maliciously, so that what is received is no longer what was sent.
e) Denial of service
Denial of service attacks occur when a computer connected to the Internet is
inundated (flooded) with data and/or requests that must be serviced. The machine
becomes so tied up with dealing with these messages that it becomes useless for
any other purpose.
It is difficult to assess the impact of the threats described above, but in generic
terms the following types of impact could occur:
Loss of income
Increased cost of recovery (correcting information and re-establishing
services)
Increased cost of retrospectively securing systems
Loss of information (critical data, proprietary information, contracts)
Loss of trade secrets
Damage to reputation
Legal and regulatory non-compliance
Failure to meet contractual commitments
7.5 Encryption
Encryption is the process of converting a plaintext message into a secure coded
form of text called cipher text that cannot be understood without converting back
via decryption (the reverse process) to plaintext again. This is done via a
mathematical function and a special encryption/decryption password called the
key.
The limitations of encryption are that it cannot prevent loss of data and that
encryption programs can be compromised. Therefore encryption should be
regarded as an essential but incomplete form of access control that should be
incorporated into an organization's overall computer security program.
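The plaintext-to-ciphertext-and-back cycle can be illustrated with a deliberately toy cipher. The XOR scheme below is for illustration only; real systems use vetted algorithms such as AES, never this:

```python
def xor_cipher(data, key):
    # Toy illustration: XOR each byte with the repeating key. Applying
    # the same operation twice with the same key restores the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"transfer $500", b"k3y")
recovered = xor_cipher(ciphertext, b"k3y")  # decryption reuses the key
```

The point is the shape of the process: the same key turns plaintext into an unreadable form and back again.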
b) Asymmetric or public key system
Asymmetric encryption systems use two keys, which work together as a
pair. One key is used to encrypt data, the other is used to decrypt data.
Either key can be used to encrypt or decrypt, but once one key has been
used to encrypt data, only its partner can be used to decrypt the data
(even the key that was used to encrypt the data cannot be used to
decrypt it). Generally, with asymmetric encryption, one key is known only
to one person (the secret or private key) while the other key is known by
many people (the public key). A common form of asymmetric encryption
is RSA (named after its inventors Rivest, Shamir and Adleman).
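The key-pair relationship described above can be demonstrated with textbook-sized RSA numbers (the classic demonstration primes p = 61 and q = 53; real keys are thousands of bits long):

```python
# Public key (n, e); private key (n, d). With p = 61 and q = 53:
# n = p*q = 3233, and d is chosen so that e*d = 1 (mod (p-1)*(q-1)).
n, e, d = 3233, 17, 2753

def encrypt(m):
    # Anyone holding the public key (n, e) can encrypt.
    return pow(m, e, n)

def decrypt(c):
    # Only the private-key holder (who knows d) can decrypt.
    return pow(c, d, n)
```

Swapping the roles of e and d gives the signing direction used later by digital signatures.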
Companies should build firewalls to protect their networks from attacks. In order to
be effective, firewalls should allow individuals on the corporate network to access
the Internet and at the same time stop hackers or others on the Internet from
gaining access to the corporate network to cause damage.
Firewalls are hardware and software combinations that are built using routers,
servers and a variety of software. They should sit in the most vulnerable point
between a corporate network and the Internet and they can be as simple or
complex as system administrators want to build them.
There are many different types of firewalls, but many enable organizations to:
Block access to particular sites on the Internet
Prevent certain users from accessing certain servers or services
Monitor communications between internal and external networks
Eavesdrop and record all communications between an internal network and
the outside world to investigate network penetrations or detect internal
subversions.
Encrypt packets that are sent between different physical locations within an
organization by creating a virtual private network over the Internet.
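Blocking sites and restricting services, as listed above, is usually expressed as an ordered rule table evaluated top-down with a default of deny. A minimal sketch (the rule table itself is hypothetical):

```python
import ipaddress

# Hypothetical rules, checked top-down; the first match wins.
RULES = [
    {"src": "10.0.0.0/8", "port": 80, "action": "allow"},
    {"src": "any", "port": 23, "action": "deny"},  # block telnet
]

def decide(src_ip, port):
    for rule in RULES:
        in_net = rule["src"] == "any" or \
            ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        if in_net and rule["port"] == port:
            return rule["action"]
    return "deny"  # default-deny: anything unmatched is dropped
```

The default-deny final line is the common design choice: traffic is forbidden unless a rule explicitly permits it.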
7.7 Intrusion detection systems (IDS)
Intrusion or intruder detection is the identification of and response to malicious
activities. An IDS is a tool that aids in the detection of such attacks. An IDS detects
attack patterns and issues an alert. There are two types of IDS: network-based and
host-based.
Network-based IDSs identify attacks within the network that they are monitoring
and issue a warning to the operator. If a network-based IDS is placed between the
Internet and the firewall, it will detect all attack attempts, whether or not they
penetrate the firewall. If the IDS is placed between the firewall and the corporate
network, it will detect those attacks that get past the firewall, i.e. it will detect
actual intruders. An IDS is not a substitute for a firewall, but complements the
function of a firewall.
Host-based IDSs are configured for a specific environment and will monitor various
internal resources of the operating system to warn of a possible attack. They can
detect the modification of executable programs and the deletion of files, and can
issue a warning when an attempt is made to use a privileged command.
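At its simplest, the pattern detection an IDS performs is a matter of matching events against a library of known attack signatures. A minimal sketch (the signature strings are invented examples):

```python
# Hypothetical signature library of suspicious patterns.
SIGNATURES = ["failed login", "rm -rf /", "' OR '1'='1"]

def inspect(event):
    # Return the matched signature (i.e. raise an alert) or None.
    for sig in SIGNATURES:
        if sig in event:
            return sig
    return None
```

Real IDSs also use anomaly detection, flagging deviations from a learned baseline of normal behaviour rather than fixed strings.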
Fire
Natural disasters earthquake, volcano, hurricane, tornado
Power failure
Power spike
Air conditioning failure
Electrical shock
Equipment failure
Water damage/flooding even with facilities located on upper floors of high-
rise buildings, water damage is a risk, typically occurring from broken water
pipes
Bomb threat/attack
Are backup media protected from damage due to temperature extremes, the
effects of magnetic fields and water damage?
f) Strategically locating the computer room: to reduce the risk of flooding, the
computer room should not be located in the basement. If located in a multi-
storey building, studies show that the best location for the computer room, to
reduce the risk of fire, smoke and water damage, is the 3rd, 4th, 5th or 6th floor.
g) Regular inspection by the fire department: to ensure that all fire detection
systems comply with building codes, the fire department should inspect the
system and facilities annually.
h) Fireproof walls, floors and ceilings surrounding the computer room: walls
surrounding the information processing facility should contain or block fire
from spreading. The surrounding walls should have at least a two-hour fire
resistance rating.
i) Electrical surge protectors: these electrical devices reduce the risk of
damage to equipment due to power spikes. Voltage regulators measure the
incoming electrical current and either increase or decrease the charge to
ensure a consistent current. Such protectors are typically built into the
uninterruptible power supply (UPS) system.
j) Uninterruptible power supply (UPS) system/generator: a UPS system
consists of a battery or petrol powered generator that interfaces between
the electrical power entering the facility and the electrical power entering
the computer. The system typically cleanses the power to ensure wattage
into the computer is consistent. Should a power failure occur, the UPS
continues providing electrical power from the generator to the computer for
a certain length of time. A UPS system can be built into a computer or can be
an external piece of equipment.
k) Emergency power-off switch: there may be a need to shut off power to the
computer and peripheral devices, such as during a computer room fire or
emergency evacuation. Two emergency power-off switches should serve this
purpose, one in the computer room, the other near, but outside, the
computer room. They should be clearly labelled, easily accessible for this
purpose and yet still secured from unauthorized people. The switches should
be shielded to prevent accidental activation.
l) Power leads from two substations: electrical power lines that feed into the
facility are exposed to many environmental hazards: water, fire, lightning,
cutting due to careless digging etc. To reduce the risk of a power failure due
to these events that, for the most part, are beyond the control of the
organization, redundant power lines should feed into the facility. In this way,
interruption of one power line does not adversely affect electrical supply.
m) Wiring placed in electrical panels and conduit: electrical fires are always a
risk. To reduce the risk of such a fire occurring and spreading, wiring should
be placed in fire-resistant panels and conduit. This conduit generally lies
under the fire-resistant raised computer room floor.
n) Prohibitions against eating, drinking and smoking within the information
processing facility: food, drink and tobacco use can cause fires, build-up of
contaminants or damage to sensitive equipment especially in case of liquids.
They should be prohibited from the information processing facility. This
prohibition should be overt, for example, a sign on the entry door.
o) Fire-resistant office materials: wastebaskets, curtains, desks, cabinets and
other general office materials in the information processing facility should be
fire resistant. Cleaning fluids for desktops, console screens and other office
furniture/fixtures should not be flammable.
p) Documented and tested emergency evacuation plans: evacuation plans
should emphasize human safety, but should not leave information
processing facilities physically unsecured. Procedures should exist for a
controlled shutdown of the computer in an emergency situation, if time
permits.
9. Computer ethics
Although ethical decision-making is a thoughtful process based on one's own
fundamental principles, we need codes of ethics and professional conduct for the
following reasons:
resources or unnecessary expenditure of human resources such as the
time and effort required to purge systems of computer viruses.
o Be honest and trustworthy: the honest computing professional will not
make deliberately false or deceptive claims about a system or system
design, but will instead provide full disclosure of all pertinent system
limitations and problems. A computing professional also has a duty to be honest
about his or her qualifications and about any circumstance that may lead to a
conflict of interest.
o Be fair and take action not to discriminate: the values of equality,
tolerance and respect for others and the principles of equal justice govern
this imperative.
o Honour property rights including copyrights and patents: violation of
copyrights, patents, trade secrets and the terms of license agreement is
prohibited by the law in most circumstances. Even when software is not
so protected, such violations are contrary to professional behaviour.
Copies of software should be made only with proper authorization.
Unauthorized duplication of materials must not be condoned.
o Give proper credit for intellectual property: computing professionals are
obligated to protect the integrity of intellectual property. Specifically, one
must not take credit for others' ideas or work, even in cases where the
work has not been explicitly protected by copyright, patent etc.
o Respect the privacy of others: computing and communication technology
enables the collection and exchange of personal information on a scale
unprecedented in the history of civilization. Thus there is increased
potential for violating the privacy of individuals and groups. It is the
responsibility of professionals to maintain the privacy and integrity of
data describing individuals. This includes taking precautions to ensure the
accuracy of data, as well as protecting it from unauthorized access or
accidental disclosure to inappropriate individuals. Furthermore,
procedures must be established to allow individuals to review their
records and correct inaccuracies.
o Honour confidentiality: the principle of honesty extends to issues of
confidentiality of information whenever one has made an explicit promise
to honour confidentiality or, implicitly, when private information not
directly related to the performance of one's duties becomes available. The
ethical concern is to respect all obligations of confidentiality to
employers, clients, and users unless discharged from such obligations by
requirements of the law or other principles of this code.
o Strive to achieve the highest quality, effectiveness and dignity in both the
process and product of professional work.
o Acquire and maintain professional competence
o Know and respect existing laws pertaining to professional work
o Accept and provide appropriate professional review
o Give comprehensive and thorough evaluations of computer systems and
their impacts, including analysis of possible risks.
o Honour contracts, agreements and assigned responsibilities
o Improve public understanding of computing and its consequences
o Access computing and communication resources only when authorized to
do so
Software engineers shall commit themselves to making the analysis, specification, design,
development, testing and maintenance of software a beneficial and respected profession. In
accordance with their commitment to the health, safety and welfare of the public, software
engineers shall adhere to the following eight principles.
10. Terminology
Digital signature
A digital signature (not to be confused with a digital certificate) is an electronic signature
that can be used to authenticate the identity of the sender of a message or the signer of a
document, and possibly to ensure that the original content of the message or document
that has been sent is unchanged. Digital signatures are easily transportable, cannot be
imitated by someone else, and can be automatically time-stamped. The ability to ensure
that the original signed message arrived means that the sender cannot easily repudiate it
later.
A digital signature can be used with any kind of message, whether it is encrypted or not,
simply so that the receiver can be sure of the sender's identity and that the message
arrived intact. A digital certificate contains the digital signature of the certificate-issuing
authority so that anyone can verify that the certificate is real.
How it works
Assume you were going to send the draft of a contract to your lawyer in another town. You
want to give your lawyer the assurance that it was unchanged from what you sent and that
it is really from you.
a) You copy-and-paste the contract (it's a short one!) into an e-mail note.
b) Using special software, you obtain a message hash (mathematical summary) of the
contract.
c) You then use a private key that you have previously obtained from a public-private
key authority to encrypt the hash.
d) The encrypted hash becomes your digital signature of the message. (Note that it will
be different each time you send a message.)
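The four steps above can be sketched with a toy RSA signature. This is an illustrative sketch only: the tiny primes, the SHA-256 hash reduced modulo n, and the function names are assumptions chosen for demonstration, not a real signing scheme.

```python
import hashlib

# Toy RSA key pair (illustrative only -- real keys use far larger primes,
# and these small values offer no security whatsoever).
p, q = 61, 53
n = p * q          # 3233, the public modulus
e = 17             # public exponent
d = 2753           # private exponent: (e * d) mod ((p-1)*(q-1)) == 1

def message_hash(text):
    """Step b: obtain a mathematical summary (hash) of the message.
    Reduced modulo n only so it fits this toy modulus."""
    digest = hashlib.sha256(text.encode()).digest()
    return int.from_bytes(digest, "big") % n

def sign(text):
    """Step c: encrypt the hash with the private key -> the digital signature."""
    return pow(message_hash(text), d, n)

def verify(text, signature):
    """The receiver decrypts the signature with the public key and compares
    it with a freshly computed hash of the received message."""
    return pow(signature, e, n) == message_hash(text)

contract = "I agree to the terms of this contract."
sig = sign(contract)
assert verify(contract, sig)                      # message arrived intact
assert not verify(contract + " (edited)", sig)    # any change breaks the signature
```

Note that, as step d) says, the signature differs for every message, because it is computed from the message's own hash.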
Digital Certificate
A digital certificate is an electronic "credit card" that establishes your credentials when
doing business or other transactions on the Web. It is issued by organizations known as
certification authorities (CAs). It contains your name, a serial number, expiration dates, a copy
of the certificate holder's public key (used for encrypting messages and digital signatures),
and the digital signature of the certificate-issuing authority so that a recipient can verify
that the certificate is real. Some digital certificates conform to a standard, X.509. Digital
certificates can be kept in registries so that authenticating users can look up other
users' public keys.
DATA COMMUNICATION AND COMPUTER NETWORKS
Computers communicate with each other and with other equipment through cable
and air. There are five kinds of communication channels used for cable or air connections:
- Telephone lines
- Coaxial cable
- Fiber-optic cable
- Microwave
- Satellite
Coaxial cable
Coaxial cable is a high-frequency transmission cable that replaces the multiple
wires of telephone lines with a single solid copper core. It has over 80 times the
transmission capacity of twisted pair. It is often used to link parts of a computer
system in one building.
Fibre-optic cable
Fibre-optic cable transmits data as pulses of light through tubes of glass. It has
over 26,000 times the transmission capacity of twisted pair. A fibre-optic tube can
be half the diameter of a human hair. Fibre-optic cables are immune to electronic
interference and more secure and reliable. Fibre-optic cable is rapidly replacing
twisted-pair telephone lines.
Microwave
Microwaves transmit data as high-frequency radio waves that travel in straight
lines through air. Microwaves cannot bend with the curvature of the earth. They
can only be transmitted over short distances. Microwaves are a good medium for
sending data between buildings in a city or on a large college campus. Microwave
transmission over longer distances is relayed by means of dishes or antennas
installed on towers, high buildings or mountaintops.
Satellite
Satellites are used to amplify and relay microwave signals from one transmitter on
the ground to another. They orbit about 22,000 miles above the earth at a precise
point and speed that keeps them stationary relative to the earth, and can be used
to send large volumes of data. Bad
weather can sometimes interrupt the flow of data from a satellite transmission.
INTELSAT (INternational TELecommunication SATellite consortium), owned by 114
governments forming a worldwide communications system, offers many satellites
that can be used as microwave relay stations.
digital form. This process is known as digitisation and has the following
advantages:
2.1 Modem
A modem is a hardware device that converts computer signals (digital signals) to
telephone signals (analog signals) and telephone signals (analog signals) back to
computer signals (digital signals).
The process of converting digital signals to analog is called modulation while the
process of converting analog signals to digital is called demodulation.
(Diagram: computer - modem - telephone line - modem - computer)
The speed with which modems transmit data varies. Communications speed is
typically measured in bits per second (bps). The most popular speeds for
conventional modems are 33.6 kbps (33,600 bps) and 56 kbps (56,000 bps). The
higher the speed, the faster you can send and receive data.
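As a quick illustration of what these speeds mean in practice, the following sketch (the function name and the 1 MB file size are assumptions) estimates transfer times:

```python
def transfer_time(size_bytes, speed_bps):
    """Seconds needed to move size_bytes over a line running at speed_bps.
    Assumes 8 bits per byte and ignores protocol overhead."""
    return size_bytes * 8 / speed_bps

# A 1 MB file over a 56 kbps modem versus a 33.6 kbps modem:
print(round(transfer_time(1_000_000, 56_000), 1))    # 142.9 seconds
print(round(transfer_time(1_000_000, 33_600), 1))    # 238.1 seconds
```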
Types of modems
a) External modem
An external modem stands apart from the computer. It is connected by a cable
to the computer's serial port. Another cable is used to connect the modem to
the telephone wall jack.
b) Internal modem
An internal modem is a plug-in circuit board inside the system unit. A telephone
cable connects this type of modem to the telephone wall jack.
c) Wireless modem
A wireless modem is similar to an external modem. It connects to the
computer's serial port, but does not connect to telephone lines. It uses
wireless technology to send and receive data through the air.
Simplex communication - data travels in one direction only, e.g. point-of-sale
terminals.
Half-duplex communication - data flows in both directions, but not
simultaneously, e.g. an electronic bulletin board.
Full-duplex communication - data is transmitted back and forth at the same
time, e.g. mainframe communications.
Protocols
Protocols are sets of communication rules for exchange of information. Protocols
define speeds and modes for connecting one computer with another computer.
Network protocols can become very complex and therefore must adhere to certain
standards. The first set of protocol standards was IBM's Systems Network
Architecture (SNA), which only works with IBM's own equipment.
Data has to arrive intact in order to be used. Two techniques are used to detect
and correct errors.
a) Forward error control - additional redundant information is transmitted with
each character or frame so that the receiver can not only detect when errors
are present, but can also determine where the error has occurred and thus
correct it.
b) Feedback (backward) error control - only enough additional information is
transmitted so that the receiver can identify that an error has occurred. An
associated retransmission control scheme is then used to request that
another copy of the information be sent.
Block sum check - an extension of the parity check in that an additional set
of parity bits is computed for a block of characters (or frame). The set of
parity bits is known as the block (sum) check character.
Cyclic Redundancy Check (CRC) - the CRC or frame check sequence (FCS) is
used for situations where bursts of errors may be present (parity and block
sum checks are not effective at detecting bursts of errors). A single set of
check digits is generated for each frame transmitted, based on the contents
of the frame, and appended to the tail of the frame.
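These checks can be sketched in a few lines. This is a simplified illustration (real link layers operate on frames at the bit level), with the frame contents chosen arbitrarily:

```python
import binascii

def parity_bit(byte):
    """Even parity: the added bit is 1 when the character has an odd
    number of 1-bits, so every transmitted character ends up with an
    even count of 1-bits."""
    return bin(byte).count("1") % 2

def block_check(frame):
    """Block (sum) check character: XOR accumulates a per-column parity
    bit over every character in the frame."""
    bcc = 0
    for byte in frame:
        bcc ^= byte
    return bcc

frame = b"HELLO"
assert parity_bit(ord("H")) == 0     # 0x48 has two 1-bits: already even
assert block_check(frame) == 0x42    # the check character appended to the frame
crc = binascii.crc32(frame)          # a CRC of the kind used in an FCS
```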
Recovery
When errors are too severe to ignore, a recovery plan is needed to obtain the
data again, typically by requesting retransmission.
Security
What are you concerned about if you want to send an important message?
Did the receiver get it?
o Denial of service
Is it the right receiver?
o Receiver spoofing
Is it the right message?
o Message corruption
Did it come from the right sender?
o Sender spoofing
Network management
This involves configuration, provisioning, monitoring and problem-solving.
3. Computer networks
Server - a node that shares resources with other nodes. May be called a file
server, printer server, communication server, web server, or database server.
Network Operating System (NOS) - the operating system of the network that
controls and coordinates the activities between computers on a network, such
as electronic communication and sharing of information and resources.
Distributed processing - computing power is located and shared at different
locations. Common in decentralized organizations (each office has its own
computer system but is networked to the main computer).
Host computer - a large centralized computer, usually a minicomputer or
mainframe.
Metropolitan Area Networks (MAN)
A MAN is a computer network that may be citywide. This type of network may be
used as a link between office buildings in a city. The use of cellular phone systems
expands the flexibility of a MAN by linking car phones and portable phones
to the network.
Wide Area Networks (WAN)
A WAN is a computer network that may be countrywide or worldwide. It normally
connects networks over a large physical area, such as in different buildings, towns
or even countries. A modem connects a LAN to a WAN when the WAN connection is
an analogue line.
For a digital connection a gateway connects one type of LAN to another LAN, or
WAN, and a bridge connects a LAN to similar types of LAN. This type of network
typically uses microwave relays and satellites to reach users over long distances.
The widest of all WANs is the Internet, which spans the entire globe.
WAN technologies
WAN technologies determine how data gets from one computer to another across the Internet.
3.3 Configurations
A computer network configuration is also called its topology. The topology is the
method of arranging and connecting the nodes of a network. There are five
principal network topologies:
a) Star
b) Bus
c) Ring
d) Hierarchical (hybrid)
e) Completely connected (mesh)
Star network
In a star network there are a number of small computers or peripheral devices
linked to a central unit called a main hub. The central unit may be a host computer
or a file server. All communications pass through the central unit and control is
maintained by polling. This type of network can be used to provide a time-sharing
system and is common for linking microcomputers to a mainframe.
Advantages:
It is easy to add new nodes and to remove existing ones
A node failure does not bring down the entire network
It is easier to diagnose network problems through a central hub
Disadvantages:
If the central hub fails the whole network ceases to function
It costs more to cable a star configuration than other topologies (more cable
is required than for a bus or ring configuration).
Bus network
In a bus network each device handles its own communications control. There is no
host computer; however, there may be a file server. All communications travel
along a common connecting cable called a bus. It is a common arrangement for
sharing data stored on different microcomputers. It is not as efficient as a star
network for sharing common resources, but is less expensive. The distinguishing feature is that
all devices (nodes) are linked along one communication line - with endpoints -
called the bus or backbone.
Advantages:
Reliable in very small networks as well as easy to use and understand
Requires the least amount of cable to connect the computers together and
therefore is less expensive than other cabling arrangements.
Is easy to extend. Two cables can be easily joined with a connector, making
a longer cable for more computers to join the network
A repeater can also be used to extend a bus configuration
Disadvantages:
Heavy network traffic can also slow a bus considerably. Because any
computer can transmit at any time, bus networks do not coordinate when
information is sent. Computers interrupting each other can use a lot of
bandwidth
Each connection between two cables weakens the electrical signal
The bus configuration can be difficult to troubleshoot. A cable break or
malfunctioning computer can be difficult to find and can cause the whole
network to stop functioning.
Ring network
In a ring network each device is connected to two other devices, forming a ring.
There is no central file server or computer. Messages are passed around the ring
until they reach their destination. Often used to link mainframes, especially over
wide geographical areas. It is useful in a decentralized organization called a
distributed data processing system.
Advantages:
Ring networks offer high performance for a small number of workstations or
for larger networks where each station has a similar workload
Ring networks can span longer distances than other types of networks
Ring networks are easily extendable
Disadvantages
Relatively expensive and difficult to install
Failure of one component on the network can affect the whole network
It is difficult to troubleshoot a ring network
Adding or removing computers can disrupt the network
Hierarchical (hybrid) network
In a hierarchical network several computers are linked to a central host computer;
these computers are in turn hosts to smaller computers or peripheral devices,
combining features of the other topologies.
Advantages:
Improves sharing of data and programs across the network
Offers reliable communication between nodes
Disadvantages:
Difficult and costly to install and maintain
Difficult to troubleshoot network problems
Completely connected (mesh) configuration
Is a network topology in which devices are connected with many redundant
interconnections between network nodes.
Advantages:
Yields the greatest amount of redundancy (multiple connections between the
same nodes), so that in the event one of the nodes fails, network traffic can
be redirected through another node.
Network problems are easier to diagnose
Disadvantages
The cost of installation and maintenance is high (more cable is required than
any other configuration)
A client/server network environment is one in which one computer acts as the
server and provides data distribution and security functions to other computers
that are independently running various applications. An example of the simplest
client/server model is a LAN whereby a set of computers is linked to allow
individuals to share data. LANs (like other client/server environments) allow users
to maintain individual control over how information is processed.
Numerous protocols are involved in transferring a single file even when two
computers are directly connected. The large task of transferring a piece of data is
broken down into distinct subtasks. There are multiple ways to accomplish each
task (individual protocols). The tasks are well described so that they can be used
interchangeably without affecting the overall system.
Application Layer
o Takes care of the needs of the specific application
o HTTP: send request, get a batch of responses from a bunch of different
servers
o Telnet: dedicated interaction with another machine
Transport Layer
o Makes sure data is exchanged reliably between the two end systems
o Needs to know how to identify the remote system and package the data
properly
Application Layer
o User application protocols
Transport Layer
o Transmission control protocol
o Data reliability and sequencing
Internet Layer
o Internet Protocol
o Addressing, routing data across Internet
Network Access Layer
o Data exchange between host and local network
o Packets, flow control
o Network dependent (circuit switching, Ethernet etc)
Physical Layer
o Physical interface, signal type, data rate
Data is passed from the top layer of the transmitter to the bottom, then up from the
bottom layer to the top on the recipient. However, each layer on the transmitter
communicates directly with the recipient's corresponding layer. This creates a
virtual data flow between layers. The data sent can be termed a data packet or
data frame.
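This pass down and back up the stack can be sketched as encapsulation: each layer adds its own header on the way down and removes it on the way up. The bracketed headers below are purely illustrative, not real protocol headers.

```python
layers = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def transmit(data):
    """Top to bottom on the transmitter: each layer wraps the data,
    so the physical layer's header ends up outermost."""
    for layer in layers:
        data = f"[{layer}]{data}"
    return data

def receive(frame):
    """Bottom to top on the recipient: each layer strips the header
    added by its corresponding (peer) layer on the transmitter."""
    for layer in reversed(layers):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), "expected header from peer layer"
        frame = frame[len(prefix):]
    return frame

assert receive(transmit("hello")) == "hello"   # the virtual data flow round-trips
```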
(Diagram: the seven OSI layers - Application, Presentation, Session, Transport,
Network, Data Link and Physical - shown on both the transmitter and the
recipient, with a virtual data flow between corresponding layers.)
1. Application Layer
This layer provides network services to application programs such as file transfer
and electronic mail. It offers user level interaction with network programs and
provides user application, process and management functions.
2. Presentation Layer
The presentation layer uses a set of translations that allow the data to be
interpreted properly. It may have to carry out translations between two systems if
they use different presentation standards such as different character sets or
different character codes. It can also add data encryption for security purposes. It
basically performs data interpretation, format and control transformation. It
separates what is communicated from data representation.
3. Session Layer
The session layer provides an open communications path to the other system. It
involves setting up, maintaining and closing down a session (a communication time
span). The communications channel and the internetworking should be transparent
to the session layer. It manages (administration and control) sessions between
cooperating applications.
4. Transport Layer
If data packets need to go out of a network, the transport layer routes them
through the interconnected networks. Its task may involve splitting up data for
transmission and reassembling it after arrival. It performs the tasks of end-to-end
packetization, error control, flow control, and synchronization. It offers network
transparent data transfer and transmission control.
5. Network Layer
The network layer routes data frames through a network. It performs the tasks of
connection management, routing, switching and flow control over a network.
6. Data Link Layer
The data link layer ensures that the transmitted bits are received in a reliable way.
This includes adding bits to define the start and end of a data frame, adding extra
error detection/correction bits and ensuring that multiple nodes do not try to
access a common communications channel at the same time. It has the tasks of
maintaining and releasing the data link, synchronization, error and flow control.
7. Physical Layer
The physical link layer defines the electrical characteristics of the communications
channel and the transmitted signals. This includes voltage levels, connector types,
cabling, data rate etc. It provides the physical interface.
The main types of cables used in networks are twisted-pair, coaxial and fibre-optic.
Twisted-pair and coaxial cables transmit electric signals, whereas fibre-optic cables
transmit light pulses. Twisted-pair cables are not shielded and thus interfere with
nearby cables. Public telephone lines generally use twisted-pair cables. In LANs
they are generally used up to bit rates of 10 Mbps and with maximum lengths of
100m.
Coaxial cable has a grounded metal sheath around the signal conductor. This limits
the amount of interference between cables and thus allows higher data rates.
Typically they are used at bit rates of 100 Mbps for maximum lengths of 1 km.
The highest specification of the three cables is fibre-optic. This type of cable allows
extremely high bit rates over long distances. Fibre-optic cables do not interfere
with nearby cables and give greater security, more protection from electrical
damage by external equipment and greater resistance to harsh environments; they
are also safer in hazardous environments.
Most modern networks have a backbone, which is a common link to all the
networks within an organization. This backbone allows users on different network
segments to communicate and also allows data into and out of the local network.
A gateway connects two networks of dissimilar type. Routers operate rather like
gateways and can connect either two similar networks or two dissimilar networks.
The key operation of a
gateway, bridge or router is that it only allows data traffic through itself when the
data is intended for another network which is outside the connected network. This
filters traffic and stops traffic not intended for the network from clogging up the
backbone. Modern bridges, gateways and routers are intelligent and can determine
the network topology. A spanning-tree bridge allows multiple network segments
to be interconnected. If more than one path exists between individual segments
then the bridge finds alternative routes. This is useful in routing frames away from
heavy traffic routes or around a faulty route.
Fax machines
Fax machines convert images to signals that can be sent over a telephone line to a
receiving machine. They are extremely popular in offices. They can scan the image
of a document and print the image on paper. Microcomputers use fax/modem
circuit boards to send and receive fax messages.
Voice messaging systems
Voice messaging systems are computer systems linked to telephones that convert
human voice into digital bits. They resemble conventional answering machines and
electronic mail systems. They can receive large numbers of incoming calls and
route them to appropriate voice mailboxes which are recorded voice messages.
They can forward calls and deliver the same message to many people.
Shared resources
Shared resources are communication networks that permit microcomputers to
share expensive hardware such as laser printers, chain printers, disk packs and
magnetic tape storage. Several microcomputers linked in a network make shared
resources possible. The connectivity capabilities of shared resources provide the
ability to share data located on a computer.
Online services
Online services are business services offered specifically for microcomputer users.
Well-known online service providers are America Online (AOL), AT&T WorldNet,
CompuServe, Africa Online, Kenyaweb, UUNET, Wananchi Online and Microsoft
Network. Typical online services offered by these providers are:
Teleshopping - a database that lists prices and descriptions of products. You place
an order, charge the purchase to a credit card, and the merchandise is delivered by
a delivery service.
Home banking - banks offer this service so you can use your microcomputer to pay
bills, make loan payments, or transfer money between accounts.
Investing - investment firms offer this service so you can access current prices of
stocks and bonds. You can also place buy and sell orders.
Travel reservations - travel organizations offer this service so you can get
information on airline schedules and fares, order tickets, and charge them to a credit card.
Internet access - you can get access to the World Wide Web.
Internet
The Internet is a giant worldwide network. The Internet started in 1969 when the
United States government funded a major research project on computer
networking called ARPANET (Advanced Research Project Agency NETwork). When
on the Internet you move through cyberspace.
Communicating
o Communicating on the Internet includes e-mail, discussion groups
(newsgroups), and chat groups
o You can use e-mail to send or receive messages to people around the
world
o You can join discussion groups or chat groups on various topics
Shopping
- Shopping on the Internet is called e-commerce
- You can window shop at cyber malls called web storefronts
- You can purchase goods using checks, credit cards or electronic cash
called electronic payment
Researching
- You can do research on the Internet by visiting virtual libraries and
browse through stacks of books
- You can read selected items at the virtual libraries and even check out
books
Entertainment
- There are many entertainment sites on the Internet such as live
concerts, movie previews and book clubs
- You can also participate in interactive live games on the Internet
You get connected to the Internet through a computer. Connection to the Internet
is referred to as access to the Internet. Using a provider is one of the most common
ways users can access the Internet. A provider is also called a host computer and is
already connected to the Internet. A provider provides a path or connection for
individuals to access the Internet.
There are three widely used providers:
(i) Colleges and universities - colleges and universities provide free access
to the Internet through their Local Area Networks.
(ii) Internet Service Providers (ISPs) - ISPs offer access to the Internet for a
fee. They are more expensive than online service providers.
(iii) Online Service Providers - provide access to the Internet and a variety of
other services for a fee. They are the most widely used source of Internet
access and are less expensive than ISPs.
Connections
There are three types of connections to the Internet through a provider:
o Direct or dedicated
o SLIP and PPP
o Terminal connection
Direct or dedicated
This is the most efficient access method to all functions on the Internet. However it
is expensive and rarely used by individuals. It is used by many organizations such
as colleges, universities, service providers and corporations.
SLIP and PPP
This type of connection uses a high-speed modem and a standard telephone line.
With a SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol)
connection, your computer becomes part of a client/server network and can
communicate directly with other computers on the Internet.
Terminal connection
This type of connection also uses a high-speed modem and standard telephone
line. Your computer becomes part of a terminal network with a terminal connection.
With this connection, your computer's operations are very limited because it only
displays communication that occurs between provider and other computers on the
Internet. It is less expensive than SLIP or PPP but not as fast or convenient.
Internet protocols
TCP/IP
The standard protocol for the Internet is TCP/IP. TCP/IP (Transmission Control
Protocol/Internet Protocol) are the rules for communicating over the Internet.
Protocols control how the messages are broken down, sent and reassembled. With
TCP/IP, a message is broken down into small parts called packets before it is sent
over the Internet. Each packet is sent separately, possibly travelling through
different routes to a common destination. The packets are reassembled into correct
order at the receiving computer.
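The break-down and reassembly steps can be sketched as follows; this is a simplified illustration (the function names and packet size are assumptions, and real TCP/IP packets carry headers, checksums and addresses as well):

```python
import random

def packetize(message, size):
    """Break a message into numbered packets of at most `size` characters."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort by sequence number and rejoin: the order of arrival
    does not matter, only the sequence numbers do."""
    return "".join(data for _, data in sorted(packets))

msg = "With TCP/IP a message is broken into packets before it is sent."
packets = packetize(msg, 8)
random.shuffle(packets)   # packets may travel different routes and arrive out of order
assert reassemble(packets) == msg
```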
Internet services
Telnet
Telnet allows you to connect to another computer (host) on the Internet
With Telnet you can log on to the computer as if you were a terminal
connected to it
There are hundreds of computers on the Internet you can connect to
Some computers allow free access; some charge a fee for their use
A browser is special software used on a computer to access the web
The software provides an uncomplicated interface to the Internet and web
documents
It can be used to connect you to remote computers using Telnet
It can be used to open and transfer files using FTP
It can be used to display text and images using the web
Two well-known browsers are:
o Netscape Communicator
o Microsoft Internet Explorer
Typically the first web page on a website is referred to as the home page. The
home page presents information about the site and may contain references and
connections to other documents or sites called hyperlinks. Hyperlink connections
may contain text files, graphic images, audio and video clips. Hyperlink
connections can be accessed by clicking on the hyperlink.
Applets and Java
Web pages contain links to special programs called applets written in a
programming language called Java.
Java applets are widely used to add interest and activity to a website.
Applets can provide animation, graphics, interactive games and more.
Applets can be downloaded and run by most browsers.
Search tools
Search tools developed for the Internet help users locate precise information. To
access a search tool, you must visit a web site that has a search tool available.
There are two basic types of search tools available:
- Indexes
- Search engines
Indexes
Indexes are also known as web directories
They are organized by major categories, e.g. health, entertainment,
education etc.
Each category is further organized into subcategories
Users can continue to search subcategories until a list of relevant
documents appears
The best-known search index is Yahoo
Search engines
Search engines are also known as web crawlers or web spiders
They are organized like a database
Key words and phrases can be used to search through a database
Databases are maintained by special programs called agents, spiders or bots
Widely used search engines are Google, HotBot and AltaVista.
Web utilities
Web utilities are programs that work with a browser to increase your speed,
productivity and capabilities. These utilities can be included in a browser. Some
utilities are free on the Internet while others are available for a nominal
charge. There are two categories of web utilities:
Plug-ins
Helper applications
Plug-ins
A plug-in is a program that automatically loads and operates as part of your
browser.
Many websites require plug-ins for users to fully experience web page
contents
Some widely used plug-ins are:
o Shockwave from Macromedia - used for web-based games, live
concerts and dynamic animations
o QuickTime from Apple - used to display video and play audio
o Live-3D from Netscape - used to display three-dimensional graphics
and virtual reality
Helper applications
Helper applications are also known as add-ons. They are independent
programs that can be executed or launched from your browser. The
four most common types of helper applications are:
Discussion groups
There are several types of discussion groups on the Internet:
Mailing lists
Newsgroups
Chat groups
Mailing lists
In this type of discussion groups, members communicate by sending messages to a
list address. To join, you send your e-mail request to the mailing list subscription
address. To cancel, send your email request to unsubscribe to the subscription
address.
Newsgroups
Newsgroups are the most popular type of discussion group. They use a special
network of computers called UseNet. Each UseNet computer maintains the newsgroup
listing. There are over 10,000 different newsgroups organized into major topic
areas. Newsgroup organization hierarchy system is similar to the domain name
system. Contributions to a particular newsgroup are sent to one of the UseNet
computers. UseNet computers save messages and periodically share them with
other UseNet computers. Interested individuals can read contributions to a
newsgroup.
Chat groups
Chat groups are becoming a very popular type of discussion group. They allow
direct live communication (real time communication). To participate in a chat
group, you need to join by selecting a channel or a topic. You communicate live
with others by typing words on your computer. Other members of your channel
immediately see the words on their computers and they can respond. The most
popular chat service is called Internet Relay Chat (IRC), which requires special chat
client software.
Instant messaging
Instant messaging is a tool to communicate and collaborate with others. It allows
one or more people to communicate with direct live communication. It is similar
to chat groups, but it provides greater control and flexibility. To use instant
messaging, you specify a list of friends (buddies) and register with an instant
messaging server e.g. Yahoo Messenger. Whenever you connect to the Internet,
special software will notify your messaging server that you are online. It will notify
you if any of your friends are online and will also notify your buddies that you are
online.
E-mail addresses
The most important element of an e-mail message is the address of the person who
is to receive the letter. The Internet uses an addressing method known as the
Domain Name System (DNS). The system divides an address into three parts:
(i) User name - identifies a unique person or computer at the listed domain
(ii) Domain name - refers to a particular organization
(iii) Domain code - identifies the geographical or organizational area
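The three parts can be pulled out of an address with a short sketch; the function name and the sample address are illustrative assumptions:

```python
def parse_address(address):
    """Split an e-mail address of the form username@domain.code
    into the three DNS parts described above."""
    user, _, host = address.partition("@")       # everything before the @
    domain, _, code = host.rpartition(".")       # split host at its last dot
    return {"user name": user, "domain name": domain, "domain code": code}

parts = parse_address("president@whitehouse.gov")   # a sample address
assert parts == {"user name": "president",
                 "domain name": "whitehouse",
                 "domain code": "gov"}
```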
Almost all ISPs and online service providers offer e-mail service to their customers.
The main standards that relate to the protocols of email transmission and reception
are:
Simple Mail Transfer Protocol (SMTP) - which is used with the TCP/IP
suite. It has traditionally been limited to text-based electronic messages.
Multipurpose Internet Mail Extensions (MIME), which allows the transmission
and reception of mail that contains various types of data, such as speech,
images and motion video. It is a newer standard than SMTP and uses much
of its basic protocol.
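As a rough sketch of the difference between the two standards, Python's standard library can build a multipart MIME message that carries more than plain text. The addresses and server name below are placeholders, not real hosts:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_mime_message(sender: str, recipient: str, body: str) -> MIMEMultipart:
    """Build a multipart message: MIME lets one e-mail carry several
    typed parts (text, images, audio), unlike text-only SMTP mail."""
    msg = MIMEMultipart()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "MIME demonstration"
    msg.attach(MIMEText(body, "plain"))  # further typed parts could follow
    return msg

msg = build_mime_message("sender@example.com", "recipient@example.com", "hello")
# Actual transmission still goes over SMTP, e.g. (hypothetical server):
# import smtplib; smtplib.SMTP("mail.example.com").send_message(msg)
```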
Connecting to the global network, however, exposes an organization to certain risks:
The possible use of the Internet for unproductive applications (by employees).
The possible connection of non-friendly users from the global connection into
the organization's local network.
For these reasons, many organizations have shied away from connection to the
global network and have set up intranets and extranets.
Firewalls are often used to protect organizational networks from external threats.
Intranets
Intranets are in-house, tailor-made networks for use within the organization and
provide limited access (if any) to outside services and also limit the external traffic
(if any) into the intranet. An intranet might have access to the Internet but there
will be no access from the Internet to the organization's intranet.
[Figure: WWW browsers]
[Figure: A firewall]
Extranets
Extranets (external intranets) allow two or more companies to share parts of their
intranets related to joint projects. For example, two companies may be working on a
common project; an extranet would allow them to share files related to the
project.
Extranets allow other organizations, such as suppliers, limited access to the
organization's network.
The purpose of the extranet is to increase efficiency within the business and
to reduce costs.
Firewalls
A firewall (or security gateway) is a security system designed to protect
organizational networks. It protects a network against intrusion from outside
sources. Firewalls may be categorized as those that block traffic or those that
permit traffic.
It consists of hardware and software that control access to a company's
intranet, extranet and other internal networks.
It includes a special computer called a proxy server, which acts as a
gatekeeper.
All communications between the company's internal networks and the outside
world must pass through this special computer.
The proxy server decides whether to allow a particular message or file to
pass through.
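The gatekeeper role of the proxy server can be illustrated with a toy rule check. Real firewalls inspect far more (protocol state, payload, direction of traffic); the addresses and port numbers here are invented for the example:

```python
def make_gatekeeper(blocked_hosts: set, allowed_ports: set):
    """Return a decision function that permits or blocks a message,
    mimicking the block-traffic / permit-traffic categories above."""
    def decide(source_host: str, port: int) -> str:
        if source_host in blocked_hosts:
            return "block"          # known-hostile source address
        if port not in allowed_ports:
            return "block"          # service not exposed to the outside
        return "permit"
    return decide

decide = make_gatekeeper(blocked_hosts={"198.51.100.7"},
                         allowed_ports={80, 443})
# decide("203.0.113.5", 443) -> "permit"
# decide("198.51.100.7", 443) -> "block"
```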
4. Information superhighway
Information superhighway is a name first used by US Vice President Al Gore for the
vision of a global, high-speed communications network that will carry voice, data,
video and other forms of information all over the world, and that will make it
possible for people to send e-mail, get up-to-the-minute news, and access business,
government and educational information. The Internet is already providing many of
these features, via telephone networks, cable TV services, online service providers
and satellites.
5. Terminology
Multiplexors/concentrators
Are devices that use several communication channels at the same time. A
multiplexor allows a physical circuit to carry more than one signal at a time when
the circuit has more capacity (bandwidth) than the individual signals require. It
transmits and receives messages and controls the communication lines to allow
multiple users access to the system. It can also link several low-speed lines to one
high-speed line to enhance transmission capabilities.
Cluster controllers
Are the communications terminal control units that control a number of devices
such as terminals, printers and auxiliary storage devices. In such a configuration
devices share a common control unit, which manages input/output operations with
a central computer. All messages are buffered by the terminal control unit and then
transmitted to the receivers.
Protocol converters
Are devices used to convert from one protocol to another such as between
asynchronous and synchronous transmission. Asynchronous terminals are attached
to host computers or host communication controllers using protocol converters.
Asynchronous communication techniques do not allow easy identification of
transmission errors; therefore, slow transmission speeds are used to minimize the
potential for errors. It is desirable to communicate with the host computer using
synchronous transmission if high transmission speeds or rapid response is needed.
Multiplexing
Multiplexing is sending multiple signals or streams of information on a carrier at the
same time in the form of a single, complex signal and then recovering the separate
signals at the receiving end. Analog signals are commonly multiplexed using
frequency-division multiplexing (FDM), in which the carrier bandwidth is divided
into sub-channels of different frequency widths, each carrying a signal at the same
time in parallel. Digital signals are commonly multiplexed using time-division
multiplexing (TDM), in which the multiple signals are carried over the same channel
in alternating time slots. In some optical fiber networks, multiple signals are carried
together as separate wavelengths of light in a multiplexed signal using dense
wavelength division multiplexing (DWDM).
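A minimal sketch of time-division multiplexing: several low-speed streams are interleaved into fixed, alternating time slots on one shared channel, then recovered by slot position at the far end. The stream contents are invented sample data:

```python
def tdm_multiplex(streams):
    """Time-division multiplexing sketch: interleave several input
    streams into alternating, fixed time slots on one shared channel."""
    frame = []
    longest = max(len(s) for s in streams)
    for slot in range(longest):
        for s in streams:
            # an idle (None) slot when a stream has nothing to send
            frame.append(s[slot] if slot < len(s) else None)
    return frame

def tdm_demultiplex(frame, n_streams):
    """Recover each original stream from its fixed slot position."""
    return [[x for x in frame[i::n_streams] if x is not None]
            for i in range(n_streams)]

# Three slow streams share one fast channel:
channel = tdm_multiplex([["a1", "a2"], ["b1", "b2"], ["c1"]])
# channel is ['a1', 'b1', 'c1', 'a2', 'b2', None]
```

FDM would instead separate the signals by frequency band rather than by time slot; the recovery-by-position idea is what this sketch shows.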
Circuit-switched
Circuit-switched is a type of network in which a physical path is obtained for and
dedicated to a single connection between two end-points in the network for the
duration of the connection. Ordinary voice phone service is circuit-switched. The
telephone company reserves a specific physical path to the number you are calling
for the duration of your call. During that time, no one else can use the physical
lines involved.
Packet-switched
Packet-switched describes the type of network in which relatively small units of
data called packets are routed through a network based on the destination address
contained within each packet. Breaking communication down into packets allows
the same data path to be shared among many users in the network. This type of
communication between sender and receiver is known as connectionless (rather
than dedicated). Most traffic over the Internet uses packet switching and the
Internet is basically a connectionless network.
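Packet switching can be sketched as breaking a message into small, self-addressed units that travel (and may arrive) independently; sequence numbers let the receiver reassemble them. The destination address is a placeholder:

```python
def packetize(message: str, dest: str, size: int = 4):
    """Break a message into small packets; each carries the destination
    address so the network can route it independently of the others."""
    return [{"dest": dest, "seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    # Packets may arrive out of order; sequence numbers restore order.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize("hello, world", "10.0.0.9")  # placeholder destination
```

Because each packet is self-contained, many users' packets can share the same data path, which is the connectionless behaviour the paragraph above describes.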
Virtual circuit
A virtual circuit is a circuit or path between points in a network that appears to be a
discrete, physical path but is actually a managed pool of circuit resources from
which specific circuits are allocated as needed to meet traffic requirements.
A permanent virtual circuit (PVC) is a virtual circuit that is permanently available to
the user just as though it were a dedicated or leased line continuously reserved for
that user. A switched virtual circuit (SVC) is a virtual circuit in which a connection
session is set up for a user only for the duration of a connection. PVCs are an
important feature of frame relay networks and SVCs are proposed for later
inclusion.
VSAT
VSAT (Very Small Aperture Terminal) is a satellite communications system that
serves home and business users. A VSAT end user needs a box that interfaces
between the user's computer and an outside antenna with a transceiver. The
transceiver receives or sends a signal to a satellite transponder in the sky. The
satellite sends and receives signals from an earth station computer that acts as a
hub for the system. Each end user is interconnected with the hub station via the
satellite in a star topology. For one end user to communicate with another, each
transmission has to first go to the hub station which retransmits it via the satellite
to the other end user's VSAT. VSAT handles data, voice, and video signals.
CURRENT TRENDS IN INFORMATION TECHNOLOGY
1. Electronic commerce
Electronic commerce (e-commerce) is the buying and selling of goods and services
over the Internet. Businesses on the Internet that offer goods and services are
referred to as web storefronts. Electronic payment to a web storefront can include
check, credit card or electronic cash.
Web storefronts are also known as virtual stores. This is where shoppers can go to
inspect merchandise and make purchases on the Internet. A web storefront creation
package is a new type of program that helps businesses create virtual stores. Web
storefront creation packages (also known as commerce servers) do the following:
Allow visitors to register, browse, place products into virtual shopping carts
and purchase goods and services.
Calculate taxes and shipping costs and handle payment options
Update and replenish inventory
Ensure reliable and safe communications
Collect data on visitors
Generate reports to evaluate the site's profitability
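The tax-and-shipping step that a commerce server performs can be sketched as below. The tax rate and shipping charge are arbitrary illustrative figures, not those of any real package:

```python
def checkout_total(cart, tax_rate=0.16, shipping_per_item=2.50):
    """Toy commerce-server step: total a shopping cart, then add
    tax and per-item shipping. Rates are illustrative placeholders."""
    subtotal = sum(item["price"] * item["qty"] for item in cart)
    tax = round(subtotal * tax_rate, 2)
    shipping = shipping_per_item * sum(item["qty"] for item in cart)
    return round(subtotal + tax + shipping, 2)

# A hypothetical virtual shopping cart:
cart = [{"price": 10.00, "qty": 2}, {"price": 5.00, "qty": 1}]
# subtotal 25.00 + tax 4.00 + shipping 7.50 -> 36.50
```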
Person-to-person sites
The owner of the site provides a forum for buyers and sellers to gather. The owner
typically facilitates transactions rather than taking part in them. Buyers and
sellers on this type of site must be cautious.
1.3 Electronic payment
The greatest challenge for e-commerce is how to pay for the purchases. Payment
methods must be fast, secure and reliable. Three basic payment methods now in
use are:
(i) Checks
After an item is purchased on the Internet, a check for payment is sent in the
mail
It requires the longest time to complete a purchase
It is the most traditional and safest method of payment
(ii) Credit card
Credit card number can be sent over the Internet at the time of purchase
It is a faster and a more convenient method of paying for Internet purchases
However, credit card fraud is a major concern for buyers and sellers
Criminals known as carders specialize in stealing, trading and using credit
card numbers stolen over the Internet.
(iii) Electronic cash
Electronic cash is also known as e-cash, cyber cash or digital cash
It is the Internet's equivalent of traditional cash
Buyers purchase e-cash from a third party such as a bank that specializes in
electronic currency
Sellers convert e-cash to traditional currency through a third party
It is more secure than using a credit card for purchases
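A toy model of the e-cash flow just described: a third party issues tokens to buyers and converts them back to traditional currency for sellers. Real schemes add cryptographic blinding and fraud checks; this only shows the issue/redeem round trip, and the class and method names are invented for the example:

```python
import secrets

class ECashBank:
    """Toy third-party e-cash issuer: buyers purchase tokens, sellers
    redeem them for traditional currency."""
    def __init__(self):
        self._outstanding = {}  # token -> face value

    def issue(self, value: float) -> str:
        token = secrets.token_hex(8)   # unguessable serial number
        self._outstanding[token] = value
        return token

    def redeem(self, token: str) -> float:
        # A token converts to currency exactly once (no double spending).
        return self._outstanding.pop(token, 0.0)

bank = ECashBank()
t = bank.issue(20.0)   # buyer purchases $20 of e-cash
# bank.redeem(t) returns 20.0 once; a second redeem returns 0.0
```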
The EDI process is a hybrid process of systems software and application systems.
EDI system software can provide utility services used by all application systems.
These services include transmission, translation and storage of transactions
initiated by or destined for application processing. EDI is an application system in
that the functions it performs are based on business needs and activities. The
applications, transactions and trading partners supported will change over time
and the co-mingling of transactions, purchase orders, shipping notices, invoices
and payments in the EDI process makes it necessary to include application
processing procedures and controls in the EDI process.
invoices or material release schedules, the proper controls and edits need to be
built within each companys application system to allow this communication to
take place.
3. Outsourcing practices
Outsourcing is a contractual agreement whereby an organization hands over
control of part or all of the functions of the information systems department to an
external party. The organization pays a fee and the contractor delivers a level of
service that is defined in a contractually binding service level agreement. The
contractor provides the resources and expertise required to perform the agreed
service. Outsourcing is becoming increasingly important in many organizations.
The specific objectives for IT outsourcing vary from organization to organization.
Typically, though, the goal is to achieve lasting, meaningful improvement in
information systems through corporate restructuring to take advantage of a
vendor's competencies.
Reasons for embarking on outsourcing include:
A desire to focus on the business's core activities
Pressure on profit margins
Increasing competition that demands cost savings
Flexibility with respect to both organization and structure
Business risks associated with outsourcing are hidden costs, contract terms not
being met, service costs not being competitive over the period of the entire
contract, obsolescence of vendor IT systems and the balance of power residing with
the vendor. Some of the ways that these risks can be reduced are:
By establishing a measurable partnership with shared goals and rewards
Utilization of multiple suppliers, or withholding a piece of business as an
incentive
Formalization of a cross-functional contract management team
Contract performance metrics
Periodic competitive reviews and benchmarking/benchtrending
Implementation of short-term contracts
Outsourcing is the term used to encompass three quite different levels of external
provision of information systems services. These levels relate to the extent to
which the management of IS, rather than the technology component of it, has
been transferred to an external body. These are time-share vendors, service
bureaus and facilities management.
3.1 Time-share vendors
These provide online access to an external processing capability that is usually
charged for on a time-used basis. Such arrangements may merely provide for the
host processing capability onto which the purchaser must load software.
Alternatively the client may be purchasing access to the application. The storage
space required may be shared or private. This style of provision of the pure
technology gives a degree of flexibility, allowing ad hoc but processor-intensive
jobs to be economically feasible.
3.2 Service bureaus
These provide an entirely external service that is charged by time or by the
application task. Rather than merely accessing some processing capability, as with
time-share arrangements, a complete task is contracted out. What is contracted for
is usually only a discrete, finite and often small, element of overall IS.
The specialist and focused nature of this type of service allows a bureau to be
cost-effective at the tasks it performs, since the mass coverage allows up-to-date,
efficiency-oriented facilities ideal for routine processing work. The specific nature of
the tasks done by service bureaus tends to make them slow to respond to change, and
so this style of contracting out is a poor choice where fast-changing data is
involved.
3.3 Facilities management (FM)
This may be the semi-external management of IS provision. In the physical sense
all the IS elements may remain (or be created from scratch) within the client's
premises, but their management and operation become the responsibility of the
contracted body. FM contracts provide for management expertise as well as
technical skills. An FM deal is the legally binding equivalent of an internal service level
agreement. Both specify what service will be received, but they differ significantly:
with an FM contract legal redress is possible, unlike when internal IS fails to
deliver. For most organizations it is this certainty of delivery that makes FM
attractive.
FM deals are increasingly appropriate for stable IS activities in those areas that
have long been automated so that accurate internal versus external cost
comparisons can be made. FM can also be appealing for those areas of high
technology uncertainty since it offers a form of risk transfer. The service provider
must accommodate unforeseen changes or difficulties in maintaining service
levels.
4. Software houses
A software house is a company that creates custom software for specific clients.
They concentrate on the provision of software services. These services include
feasibility studies, systems analysis and design, development of operating systems
software, provision of application programming packages, tailor-made
application programming, specialist system advice etc. A software house may offer
a wide range of services or may specialize in a particular area.
6. Data warehousing
A data warehouse is a subject-oriented, integrated, time-variant, non-volatile
collection of data in support of management's decision-making process.
Alignment with enterprise right-sizing objectives: as the enterprise becomes
flatter, emphasis on and reliance on distributed decision support will
increase.
7. Data Mining
This is the process of discovering meaningful new correlations, patterns, and trends
by digging into (mining) large amounts of data stored in warehouses, using artificial
intelligence and statistical and mathematical techniques.
Industries that are already taking advantage of data mining include retail, financial,
medical, manufacturing, environmental, utilities, security, transportation, chemical,
insurance and aerospace industries. Most organizations engage in data mining to:
Discover knowledge the goal of knowledge discovery is to determine
explicit, previously hidden relationships, patterns or correlations from data stored
in an enterprise's database. Specifically, data mining can be used to perform:
The Internet does not create new crimes but causes problems of enforcement and
jurisdiction. The following discussion shows how a country like England deals with
computer crime through legislation and may offer a point of reference for other
countries.
Hacking
Gaining unauthorized access to computer programs and data. This was not criminal
in England prior to the Computer Misuse Act 1990.
The Act is not a comprehensive statute for computer crime and does not generally
replace the existing criminal law. It does, however, create three new offences.
Cyberstalking
Using a public telecommunication system to harass another person may be an
offence under the Telecommunications Act 1984. Pursuing a course of harassing
conduct is an offence under the Protection From Harassment Act 1997.
8.2 Intellectual property rights
These are legal rights associated with creative effort or commercial reputation or
goodwill.
Rights differ according to the subject matter being protected, the scope of
protection and the manner of creation. They broadly include:
Copyright Computer programs are protected as literary works. Literal copying is the
copying of program code, while non-literal copying is judged on objective similarity
and look and feel. Copyright protects most material on the Internet, raising issues
such as linking (problems caused by deep links), framing (displaying a website
within another site), caching and service provider liability.
Registered designs
Trademarks A trademark is a sign that distinguishes goods and services from
each other. Registration gives a partial monopoly over the right to use a certain mark.
Most legal issues of trademarks and information technology have arisen from
the Internet, such as:
o Meta tags use of a trademarked name in a meta tag by someone not
entitled to use it may be infringement.
o Search engines sale of keywords that are also trademarked names to
advertisers may be infringement
o Domain names involves hijacking and cybersquatting of trademarked
domain names
Design rights
Passing off
Law of confidence
Rights in performances
In 1994 an MIT student was indicted for placing commercial software on a website for
copying purposes. The student was accused of wire fraud and the interstate
transportation of stolen property. The case was thrown out on a technicality,
since the student did not benefit from the arrangement and did not download the
software himself. His offence also did not come under any existing law.
Software publishers estimate that more than 50% of the software in the US is
pirated, and 90% in some foreign countries. In the US, software companies can
copyright software and thus control its distribution. It is illegal to make copies
without authorization.
Reverse Engineering
Interfaces are often incomplete, obscure and inaccurate, so developers must look
at what the code really does. Reverse engineering is often a necessity for reliable
software design. Companies doing reverse engineering must not create competing
products. Courts have allowed reverse engineering under certain restrictions.
Copying in transmission
In store-and-forward networks, a network node receives data in transmission, stores
it and forwards it to the next node until it reaches its destination. Every node gets
a copy; who archives them? Are the intermediate copies a violation of copyright? If
users email pictures or documents which contain trademarks or copyrighted
materials, do email copies on servers put the server's company in jeopardy?
Liability for online information involves defective information and defamation.
Where a person acts on information given over the Internet and suffers a loss
because the information was inaccurate, will anyone be liable? Two problems arise.
First, a person who puts information on the Internet will only be liable if he
owes a duty of care to the person who suffers the loss. Second, damage caused in this
way will normally be pure economic loss, which cannot usually be claimed for in
delict (tort). However, there is a limited exception to this general principle in
respect of negligent misstatement. This applies where, according to Hedley Byrne & Co v
Heller & Partners:
The person giving the advice/information represented himself as an expert,
The person giving the advice/information knew (or should have known) that
the recipient was likely to act on it, and
The person giving the advice/information knew (or should have known) that
the recipient of information was likely to suffer a loss if the information was
given without sufficient care.
Can an Internet Service Provider be liable for defective information placed by
someone else? An ISP may be regarded as a publisher. Traditional print publishers
have been held not to be liable for inaccurate information contained in the books
they publish, but an ISP may be liable if it is shown that it had been warned that
the information was inaccurate and did nothing to remove it.
Defamatory statements may be published on the WWW, in newsgroups and by
email. The author of the statements will be liable for defamation, but may be
difficult to trace or not worth suing. However, employers and Internet service providers may be
liable. Defamation is a delict (tort) and employers are vicariously liable for delicts
committed by their employees in the course of their employment. Many employers
try to avoid the possibility of actionable statements being published by their staff
by monitoring email and other messages. Print publishers are liable for defamatory
statements published by them, whether they were aware of them or not. ISPs could
be liable in the same way.
9. Terminology
Data Mart
A data mart is a repository of data gathered from operational data and other
sources that is designed to serve a particular community of knowledge workers. In
scope, the data may derive from an enterprise-wide database or data warehouse or
be more specialized. The emphasis of a data mart is on meeting the specific
demands of a particular group of knowledge users in terms of analysis, content,
presentation, and ease-of-use. Users of a data mart can expect to have data
presented in terms that are familiar.
In practice, the terms data mart and data warehouse each tend to imply the
presence of the other in some form. However, most writers using the term seem to
agree that the design of a data mart tends to start from an analysis of user needs
and that a data warehouse tends to start from an analysis of what data already
exists and how it can be collected in such a way that the data can later be used.
A data warehouse is a central aggregation of data (which can be distributed
physically); a data mart is a data repository that may derive from a data warehouse
or not and that emphasizes ease of access and usability for a particular designed
purpose. In general, a data warehouse tends to be a strategic but somewhat
unfinished concept; a data mart tends to be tactical and aimed at meeting an
immediate need. In practice, many products and companies offering data
warehouse services also tend to offer data mart capabilities or services.