Management Information System CPA

The document provides an introduction to Information and Communication Technology (ICT), detailing its components, roles in various sectors, and the evolution of computer systems. It covers the definition of computers, their applications, advantages, characteristics, and the historical development of computing devices. Additionally, it outlines the generations of computers, highlighting advancements in technology and processing capabilities over time.

STUDY TEXT 1

CHAPTER 1
INTRODUCTION TO INFORMATION COMMUNICATION
TECHNOLOGY (ICT)

SYNOPSIS
Introduction
Overview of Computer Systems
Overview of Components of Information Communication Technology
Information Communication Technology Personnel and Information Communication Structure
Role of ICT in Business Environments
Information Centers

INTRODUCTION
Information and Communications Technology (ICT) is often used as an extended synonym for
information technology (IT), but it is a more specific term that stresses the role of unified
communications and the integration of telecommunications (telephone lines and wireless signals),
computers, and the necessary enterprise software, middleware, storage and audio-visual systems
that enable users to access, store, transmit and manipulate information.

The phrase ICT had been used by academic researchers since the 1980s, but it became popular after it
was used in a report to the UK government by Dennis Stevenson in 1997 and in the revised National
Curriculum for England, Wales and Northern Ireland in 2000.

The term ICT is now also used to refer to the convergence of audio-visual and telephone networks
with computer networks through a single cabling or link system. There are large economic incentives
(huge cost savings due to elimination of the telephone network) to merge the audio-visual, building
management and telephone network with the computer network system using a single unified system
of cabling, signal distribution and management.

The term Info-communications is used in some cases as a shorter form of information and
communication(s) technology. In fact info-communications is the expansion of telecommunications
with information processing and content handling functions on a common digital technology base.

OVERVIEW OF COMPUTER SYSTEMS

What is a computer?
A computer is an information-processing machine. It may also be defined as a device that works
under the control of stored programs automatically accepting, storing and processing data to produce
information that is the result of that processing.
A computer is an electronic device capable of executing instructions, developed based on algorithms
stored in its memory, to process data fed to it and produce the required results faster than a human
being could.

The Merriam-Webster Dictionary defines it as:
"one that computes; specifically: a programmable electronic device that can store, retrieve, and
process data"

The forms of information processed include:

 Data – e.g. invoices, sales ledger and purchase ledger, payroll, stock controls etc.
 Text – widely available in many offices with microcomputers
 Graphics – e.g. business graphs, symbols
 Images – e.g. pictures
 Voice – e.g. telephone
Processing includes creating, manipulating, storing, accessing and transmitting.

Why use computers?


Use of computers has become a necessity in many fields. Computers have revolutionized the way
businesses are conducted. This is due to the advantages that computer systems offer over manual
systems.

Some of the advantages of using computers include:

a) Speed
Computers have higher processing speeds than other means of processing, measured as number of
instructions executed per second.

b) Accuracy
Computers are not prone to errors: as long as the programs are correct, they will always give correct
output. A computer is designed in such a way that many of the inaccuracies that could arise from
malfunctioning equipment are detected and their consequences avoided in a way that is completely
transparent to the user.

c) Consistency
Given the same data and the same instructions computers will produce exactly the same answer every
time that particular process is repeated.

d) Reliability
Computer systems are built with fault tolerance features, meaning that failure of one of the
components does not necessarily lead to failure of the whole system.

e) Memory capability
A computer has the ability to store and access large volumes of data.

f) Processing capability
A computer has the ability to execute millions of instructions per second.

Computer Application Areas



Some of the areas that computers are used include:

a. Communication
Digital communication, which uses computers, is popular and is being adopted worldwide in place of
analogue communication, which uses the telephony system. Computers have also enhanced
communication through e-mail, electronic data interchange, electronic funds transfer, the
Internet etc.

b. Banking
The banking sector has incorporated computer systems in such areas as credit analysis, fund transfers,
customer relations, automated teller machines, home banking, and online banking.

c. Organizational management
The proliferation of management information systems has greatly aided the processes of managerial
planning, controlling, directing as well as decision-making. Computers are used in organization for
transaction processing, managerial control as well as decision-support. Other specific areas where
computer systems have been incorporated include sales and marketing, accounting, customer service
etc.

d. Science, research and engineering


Computers are used:

 as research tools for complex computations


 for simulation e.g. outer-space simulations, flight simulations
 as diagnostic and monitoring tools,
 computerized maps using global positioning satellite (GPS) technology
 for modern mass production methods in the auto industry using computer driven technology

e. Education
Computers incorporate databases of information that are useful in organizing and disseminating
educational resources. E-learning and virtual or distributed classrooms have given the teaching
industry a global reach to students. Computers are also used for scoring uniform tests done in
schools, for school administration and for computer-aided instruction.

f. Management of information materials


The Internet has massive reference material on virtually every learning area. Computer systems have
enabled the efficient running of libraries for information storage and retrieval.

g. Manufacturing and production


Computer aided design (CAD), computer integrated manufacturing (CIM), process control systems
among other technologies are computer systems that have revolutionized the production industry.

h. Entertainment
Use of computers in the entertainment industry has increased tremendously over the years. Computers
enable high-quality storage of motion pictures and music files using high-speed and efficient digital
storage devices such as CDs, VCDs and DVDs. The Internet is also a great source of entertainment
resources. Computer games have also become a major source of entertainment.


i. Retailing
Computers are used in point of sale systems and credit card payment systems as well as stock
inventories.

j. Home appliances
Computers (especially embedded computers or microprocessors) are included in household items for
reasons of economy and efficiency of such items. Major appliances such as microwave ovens, clothes
washers, refrigerators and sewing machines are making regular use of microprocessors.

k. Reservation systems
Guest booking, accommodation and bills accounting using computers in hotels have made the process
to be more efficient and faster. Airline computer reservation systems have also enhanced and
streamlined air travel across major airlines. Major players in the industry have also adopted online
reservation systems.

l. Health care and medicine


Computers have played such an important role in the growth and improvement of health care that the
use of computers in medicine has become a medical specialty in itself. Computers are used in such areas
as maintenance of patient records, medical insurance systems, medical diagnosis and patient monitoring.

Characteristics of Computers
Some of the characteristics of computers include:
1. Speed – a computer is a very fast machine. It can perform in a few seconds the amount of
   work that a human being could do in a year working day and night on nothing else.
2. Accuracy – computer accuracy is consistently high.
3. Diligence – computers are free from monotony, tiredness and lack of concentration. They can
   therefore work for hours without making an error. For example, if 10 million calculations are
   to be done, a computer will do the ten-millionth calculation with exactly the same speed and
   accuracy as the first.
4. Versatility – a computer performs various tasks with ease. It can search for a letter, the next
   moment prepare an electricity bill, then write a report and then do an arithmetic calculation,
   all with equal ease.
5. Power of remembering – a computer can store and recall any information due to its secondary
storage capability.
6. No intelligence Quotient (IQ) – a computer cannot make its own decisions and has to be
instructed on what to do.
7. No feelings – computers are devoid of emotions. They have no feelings or instincts and none
possesses the equivalent of a human heart and soul.

History of Computers
Earliest Forms of Computing Devices:


Computer technology in its original form developed as an attempt to have a device that could handle
complex mathematical calculations faster and with ease. The two devices that were used in the early
ages of civilization were:

a) The abacus
It has several vertical threads (or poles) each with a number of beads. The position of each thread
represents a value. For instance, if the values are the decimal system, then thread values (from the
right) are ones, tens, hundreds, and so on. If using the binary system, then the values are ones, twos,
fours, and so on.

The abacus is a digital device: it works only with discrete values.

The abacus is still used today in China and Japan.

b) The slide rule


The slide rule is analog. This means that it can measure an entity whose value changes continuously,
and it is capable of handling values at any position in its range. This is the kind of change we get
in temperature, or in the speed of an accelerating car.

The slide rule is like two rulers placed side by side. As one ruler slides over the other a required
mathematical calculation is given from the values marked on the rulers. The position of the sliding
device provides the answer to the calculation that is being done.

The Inventions Towards The Modern Computer:

1. Pascal's Cogs and Wheels - Devised by Blaise Pascal in 1642 to assist his father in his business.

2. Leibniz's "Stepped Reckoner" – An improvement on Pascal's work, though Leibniz had not seen
the actual machine made by Pascal. It could perform more functions, including multiplication and
division.

3. The Analytical Engine – Invented by an Englishman, Charles Babbage, in the 1830s.
Although Babbage died before the completion of his work, the Analytical Engine provided the basic
components that make up the modern computer.

4. The ENIAC (Electronic Numerical Integrator and Computer) – This was the first truly modern
computer. It was put together in 1946 by a team of American scientists (J. Presper Eckert and John
W. Mauchly). Its successors followed the stored-program design described by John von Neumann,
in which the programs that control the computer are held in its memory.

The evolution of computerization in business may be summarised as:

 1870s: Development of the typewriter allows speedier communication and less copying.
 1920s: Spread of the telephone enables both wide-area and local communication in real time.
This marks the beginning of telecommunication.
 1930s: Scientific management becomes available to analyse and rationalise office work.
 1940s: Mathematical techniques developed in World War II (operations research) are applied to
the decision making process.


 1950s: Introduction of copying facilitates cheap and faster document production, and the (limited)
introduction of Electronic Data Processing (EDP) speeds up large scale transaction processing.
 1960s: Emergence of Management Information Systems (MIS) provides background within which
office automation can develop.
 1970s: Setting up of telecommunication networks allows distant communication between
computer systems. There is widespread use of word processors for text editing and formatting,
and advances in personal computing with the emergence of the PC. Use of spreadsheets begins.
 1980s: Development of office automation technologies that combine data, text, graphics and voice.
Development of DSS, EIS and widespread use of personal productivity software.
 1990s: Advanced groupware and integrated packages combine most of the office work – clerical
and operational as well as managerial.
 2000s: Widespread use of the Internet and related technology in many spheres of organisations,
including electronic commerce (e-commerce), e-learning and e-health.

Landmark Inventions
 500 B.C. - counting table with beads
 1150 in China - ABACUS - beads on wires
 1642 Adding machine - Pascal
 1822 Difference machine/Analytic Engine - design by Babbage
 1890 Hollerith punched card machine - for the U.S. census
 1944 Mark I (Harvard) - large-scale electromechanical computer
 1946 ENIAC (Penn) - first general-purpose electronic computer
 1951 UNIVAC - first commercial computer; 1954 first installation
 1964 IBM - first all-purpose computer (business + scientific)
 1973 HP-65, hand-held, programmable ‘calculator’
 ~1975 Altair, Intel - first Micro-computer; CPU on a “chip”

Generation of Computers
The division of computers into generations is based on the fundamental technology employed. Each new
generation is characterized by greater speed, larger memory capacity and smaller overall size than the
previous one.

i. First Generation Computers (1946 – 1957)


 Used vacuum tubes to construct computers.
 These computers were large in size and writing programs on them was difficult.
 The following are the major drawbacks of first-generation computers:
o The operating speed was quite slow.
o Power consumption was very high.
o It required large space for installation.
o The programming capability was quite low.
o Cumbersome to operate – switching between programs, input and output

ii. Second Generation Computers (1958 - 1964)


 Replaced vacuum tubes with transistors.


 The transistor is smaller, cheaper and dissipates less heat than a vacuum tube.
 The second generation also saw the introduction of more complex arithmetic and logic
units, the use of high – level programming languages and the provision of system
software with the computer.
 Transistors are smaller than vacuum tubes and have a higher operating speed. They have
no filament and require no heating, and manufacturing cost was also lower. Thus the size
of the computer was reduced considerably.
 It is in the second generation that the concepts of the Central Processing Unit (CPU),
memory, programming languages and input and output units were developed. Programming
languages such as COBOL and FORTRAN were developed during this period.

iii. Third Generation Computers (1965 - 1971)


 Used integrated circuits.
 Although the transistor technology was a major improvement over vacuum tubes,
problems remained. The transistors were individually mounted in separate packages
and interconnected on printed circuit boards by separate wires. This was a complex,
time consuming and error-prone process.
 The early integrated circuits are referred to as small-scale integration (SSI). Computers
of this generation were smaller in size and lower in cost, and had larger memory and much
higher processing speed.

iv. Fourth Generation Computers (1972 - Present)


 Employ Large Scale Integrated (LSI) and Very Large Scale Integrated (VLSI) circuit
technology to construct computers. Over 1,000 components can be placed on a single
integrated-circuit chip.

v. Fifth Generation Computers


 These are the computers of the 1990s onwards.
 They use Very Large Scale Integration (VLSI) and Ultra Large Scale Integration (ULSI)
circuit technology; over 10,000 components can be incorporated on a single integrated chip.
 Speed is extremely high in fifth-generation computers, and they can perform parallel
processing. The concept of artificial intelligence has been introduced to allow the
computer to make its own decisions.

Summary of Computer Generations


The following table summarises the effect of technology on the main components of a computer
system (Baer 1984). The size values present an order of magnitude figure (followed by typical values
in bytes of storage)

Technology                    | FIRST                  | SECOND                     | THIRD                                            | FOURTH
Processor technology          | Vacuum tube            | Transistor                 | SSI, LSI                                         | LSI, VLSI, ULSI
Processor structure           | Uniprocessor           | Multifunction units        | Minicomputers                                    | Microcomputers, workstations on LANs
Mainframe speed               | 1                      | 100                        | 2000                                             | 1000
Microprocessor speed          | –                      | –                          | 1                                                | 10
Control                       | hardwired              | hardwired                  | hardwired & microprogram                         | hardwired & microprogram
Primary memory                | Vacuum tube            | Core                       | Semiconductor                                    | Semiconductor, 64K to 256K
Primary memory speed          | 1                      | 10                         | 200                                              | 2000
Primary memory size (bytes)   | 200                    | 4000                       | 64K - 1M                                         | 1M - 40M
Secondary memory & I/O paths  | drum                   | tape, channels             | fixed-head & movable-arm disks, asynchronous I/O | extended I/O, optical disk
Secondary memory speed        | 1                      | 10                         | 500                                              | 5000
Secondary memory size (bytes) | 1K - 5K                | 100K - 64M                 | 10M - 500M                                       | 500M - 5000M
Memory hierarchy              | –                      | experimental paging systems | segmentation & paging, caches                   | segmentation & paging, caches

Classification of Computers
Computers can be classified in different ways as shown below:
i. Classification by processing
This is by how the computer represents and processes the data.
a) Digital computers process data represented in the form of discrete values (e.g. 0, 1, 2) by
operating on it in steps. They are used for both business data processing and scientific
purposes, since digital computation results in greater accuracy.


b) Analog computers process data represented by continuously varying physical quantities and
output physical magnitudes in the form of smooth graphs. They are used for scientific,
engineering and process-control purposes.

c) Hybrid computers are computers that have the combined features of digital and analog
computers. They offer an efficient and economical method of working out special problems in
science and various areas of engineering.

ii. Classification by purpose
This is a classification by the use to which the computer is put.

a) Special purpose computers are used for a certain specific function e.g. in medicine, engineering,
manufacturing.

b) General-purpose computers can be used for a wide variety of tasks e.g. accounting, word
processing

iii. Classification by generation


This is a time-based classification coinciding with technological advances.
The computers are categorized as First generation through to Fifth generation.

a) First generation. Computers of the early 1940s. Used a circuitry of wires and vacuum tubes.
Produced a lot of heat, took a lot of space, were very slow and expensive. Examples are LEO 1
and UNIVAC 1.

b) Second generation. Computers of the early 1950s. Made use of transistors and thus were
smaller and faster. (200KHz). Examples include the IBM system 1000.

c) Third generation. Computers of the 1960s. Made use of Integrated Circuits. Speeds of up to
1MHz. Examples include the IBM system 360.

d) Fourth generation. Computers of the 1970s and 1980s. Used Large Scale Integration (LSI)
technology. Speeds of up to 10MHz. Examples include the IBM 4000 series.

e) Fifth generation. Computers of the 1990s. Use Very Large Scale Integration (VLSI)
technology and have speeds up to 400MHz and above.

iv. Classification by power and size/configuration

Super computers
They are the largest and most expensive computers, using multiple processors and superior
technology. A supercomputer can process trillions of instructions per second. This kind of
computer is not used as a PC at home or by a student in college; governments use supercomputers
for heavy computational jobs, and industries use them for designing their products. In many
Hollywood movies they are used for animation, and they are also helpful for forecasting weather
worldwide. They extend the von Neumann design into a multiple-processor system with parallel
processing: a task is broken down and shared among processors for faster execution. They are
used for complex tasks requiring a great deal of computational power.

Mainframe computers
A mainframe is another giant computer, after the supercomputer, that can also process millions of
instructions per second and access billions of data items. Mainframes are physically very large,
with a very high capacity of main memory. They are commonly used by big hospitals, airline
reservation companies and other huge organisations that prefer mainframes for their capability of
retrieving data on a huge scale. They can be linked to smaller computers and can handle hundreds
of users; they are also used in space exploration. The term mainframe was originally used for the
earliest computers because they were big in size, though today it refers to large computers in
general. A large number of peripherals can be attached to them. They are expensive to install.

Minicomputers
They are smaller than mainframes but bigger than microcomputers. They support concurrent users
and can be used as servers in companies. They are slower and less costly than mainframe
computers, but more powerful, reliable and expensive than microcomputers.

Micro computers
They use advanced microchip technology: large-scale integration packs several physical components
into a small, thumb-sized integrated circuit, hence the reduced size. The microcomputer is the
smallest of the classes above. Microcomputers are usually called personal computers, since they
are designed to be used by individuals. A microcomputer can be a desktop, laptop, notebook or
even a palmtop.

i. Notebook computer
It is an extremely lightweight personal computer. Notebook computers typically weigh less than 6
pounds and are small enough to fit easily in a briefcase. Notebook computers use a variety of
techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display
screen.

ii. Desktop Computer


It is an independent personal computer made especially for use on a desk in an office or home.
The term is used mainly to distinguish this type of personal computer from portable computers
and laptops, and also from other types of computers such as servers or mainframes.

iii. Laptop
A small portable computer light enough to carry comfortably, with a flat screen and keyboard that fold
together. Laptops are battery-operated, often have a thin, backlit or sidelit LCD display screen, and
some models can even mate with a docking station to perform as a full-sized desktop system back at
the office. Advances in battery technology allow laptop computers to run for many hours between
charges, and some models have a set of business applications built into ROM. Today's high-end
(advanced) laptops provide all the capabilities of most desktop computers.

iv. Palmtop
It is a small computer that literally fits in your palm. Compared to full-size computers, palmtops are
severely limited, but they are practical for certain functions such as phone books and calendars.
Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs.
Because of their small size, most palmtop computers do not include disk drives. However, many contain
PCMCIA slots in which you can insert disk drives, modems, memory and other devices. Nowadays
palmtops are being integrated into mobile phones as multipurpose devices.

Data Representation in Computers


Data exists as electrical voltages in a computer. Since electricity can exist in 2 states, on or off, binary
digits are used to represent data. Binary digits, or bits, can be “0” or “1”. The bit is the basic unit of
representing data in a digital computer.

A bit is either a 1 or a 0. These correspond to two electronic/magnetic states of ON (1) and OFF (0) in
digital circuits which are the basic building blocks of computers. All data operated by a computer and
the instructions that manipulate that data must be represented in these units. Other units are a
combination of these basic units. Such units include:

 1 byte (B) = 2^3 bits = 8 bits – usually used to represent one character, e.g. 'A'
 1 kilobyte (KB) = 2^10 bytes = 1,024 bytes (usually considered as 1,000 bytes)
 1 megabyte (MB) = 2^20 bytes = 1,048,576 bytes (usually considered as 1,000,000 bytes / 1,000 KB)
 1 gigabyte (GB) = 2^30 bytes = 1,073,741,824 bytes (usually considered as 1,000,000,000 bytes / 1,000 MB)
 1 terabyte (TB) = 2^40 bytes = 1,099,511,627,776 bytes (usually considered as one trillion bytes / 1,000 GB)

Bit patterns (the pattern of 1s or 0s found in the bytes) represent various kinds of data:

 Numerical values (using the binary number system)


 Text/character data (using the ASCII coding scheme)
 Program instructions (using the machine language)
 Pictures (using such data formats as gif, jpeg, bmp and wmf)
 Video (using such data formats as avi, mov and mpeg)
 Sound/music (using such data formats as wav, au and mp3)

Computer data is represented using number systems and one of several character coding schemes.

Character Coding Schemes


(i) ASCII – American Standard Code for Information Interchange


ASCII (American Standard Code for Information Interchange) is the most common format for text
files in computers and on the Internet. In an ASCII file, each alphabetic, numeric, or special character
is represented with a 7-bit binary number (a string of seven 0s or 1s). 128 possible characters are
defined.

Unix and DOS-based operating systems use ASCII for text files. Windows NT and 2000 use a newer
code, Unicode. IBM's S/390 systems use a proprietary 8-bit code called EBCDIC. Conversion
programs allow different operating systems to change a file from one code to another. ASCII was
developed by the American National Standards Institute (ANSI).
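The 7-bit codes described above can be inspected with a short Python sketch (illustrative; the sample characters are our own choice):

```python
# Each ASCII character corresponds to a 7-bit code point (0-127).
for ch in "A9$":
    code = ord(ch)              # numeric ASCII code point
    bits = format(code, "07b")  # the same value as a 7-bit binary string
    print(ch, code, bits)
```

For instance, 'A' is code 65, or 1000001 in binary, which fits comfortably in seven bits.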

(ii) EBCDIC
EBCDIC is a binary code for alphabetic and numeric characters that IBM developed for its larger
operating systems. It is the code for text files that is used in IBM's OS/390 operating system for its
S/390 servers and that thousands of corporations use for their legacy applications and databases. In an
EBCDIC file, each alphabetic or numeric character is represented with an 8-bit binary number (a
string of eight 0s or 1s). 256 possible characters (letters of the alphabet, numerals, and special
characters) are defined.

(iii) Unicode
Unicode is an entirely new idea in setting up binary codes for text or script characters. Officially called
the Unicode Worldwide Character Standard, it is a system for "the interchange, processing, and display
of the written texts of the diverse languages of the modern world." It also supports many classical and
historical texts in a number of languages.
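The contrast with ASCII can be made concrete in Python, whose strings are sequences of Unicode code points (the sample characters are our own):

```python
# ASCII stops at code point 127; Unicode assigns code points to
# the characters of many scripts beyond that range.
for ch in "Aé€":
    print(ch, ord(ch), f"U+{ord(ch):04X}")  # character, decimal value, U+ notation
```

'A' (65) is within ASCII; 'é' (233) and '€' (8364, U+20AC) are not, and need Unicode.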

Number Systems
(i) Decimal system (base 10)
This is the normal human numbering system, where all numbers are represented using base 10. The
decimal system consists of 10 digits, namely 0 to 9. This system is not used by the computer for
internal data representation. The position of a digit represents its relation to a power of ten.
E.g. 45780 = (0×10^0) + (8×10^1) + (7×10^2) + (5×10^3) + (4×10^4)
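The positional expansion can be verified programmatically; a small Python sketch (illustrative):

```python
# Rebuild 45780 from its digits, weighting each by a power of ten.
n = 45780
digits = [int(d) for d in str(n)]   # [4, 5, 7, 8, 0]
total = sum(d * 10 ** p for p, d in enumerate(reversed(digits)))
print(total)  # 45780
```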

(ii) Binary system (base 2)


This is the system that is used by the computer for internal data representation whereby numbers are
represented using base 2. Its basic units are 0 and 1, which are referred to as BITs (Binary digits). 0
and 1 represent two electronic or magnetic states of the computer that are implemented in hardware.
The implementation is through use of electronic switching devices called gates, which like a normal
switch are in either one of two states: ON (1) or OFF (0).

The information supplied by a computer as a result of processing must be decoded in the form
understandable to the user.

E.g. the number 15 in decimal is represented as 1111 in the binary system:

1111 = (1×2^0) + (1×2^1) + (1×2^2) + (1×2^3) = 1 + 2 + 4 + 8 = 15
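The same positional rule applies with base 2; a Python sketch (illustrative):

```python
# Expand 1111 (base 2) digit by digit, then cross-check with int().
value = sum(int(b) * 2 ** p for p, b in enumerate(reversed("1111")))
print(value)            # 15
print(int("1111", 2))   # Python's built-in base conversion agrees
```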

(iii) Octal system (base 8)


Since binary numbers are long and cumbersome, more convenient representations combine groups of
three bits into octal (base 8) digits, or groups of four bits into hexadecimal digits. In the octal
number system there are only eight possible digits, 0 to 7. This system is popular with
microprocessors because numbers represented in octal can be used directly for input and output
operations. Complex binary numbers with many 1s and 0s can be conveniently handled in base eight:
the binary digits are grouped into threes, and each group represents an individual octal digit.

For example, the binary number 10001110011 can be handled as the octal number 2163.

That is: 010  001  110  011
          2    1    6    3
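The group-by-threes procedure can be sketched in Python (illustrative; the variable names are our own):

```python
# Convert a binary string to octal by grouping bits in threes from the right.
bits = "10001110011"
padded = bits.zfill((len(bits) + 2) // 3 * 3)           # left-pad to a multiple of 3
groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
octal = "".join(str(int(g, 2)) for g in groups)         # each group is one octal digit
print(groups, octal)  # ['010', '001', '110', '011'] 2163
```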

(iv) Hexadecimal (base 16)


The hexadecimal number system is similar to the octal system except that the base is 16, so there
must be 16 digits. The sixteen symbols used are the decimal digits 0 to 9 and the letters A to F.
Hexadecimal numbers are used because complex binary notations can be simplified by grouping the
binary digits into groups of four, each group representing one hexadecimal digit. For example,
the binary number 0001 0010 1010 0000 can be handled in base 16 as 12A0.

That is: 0001  0010  1010  0000
           1     2     A     0
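The group-by-fours procedure works the same way; a Python sketch (illustrative):

```python
# Convert a binary string to hexadecimal by grouping bits in fours.
bits = "0001001010100000"
groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
hex_digits = "".join(format(int(g, 2), "X") for g in groups)  # one hex digit per group
print(groups, hex_digits)  # ['0001', '0010', '1010', '0000'] 12A0
```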

Storage Capacity

All of the data and programs used by a computer are represented as bits within the main memory. The storage of these bits is made more manageable by grouping them together in groups of eight. In fact, the term byte, rather than bit, is widely used when referring to memory and file sizes.

When file sizes become particularly large it becomes cumbersome to describe them in terms of bytes
because the file may be in the order of, say, 2578 bytes or 456,347 bytes. As the computer is a two-
state machine it is convenient to express the capacity of memory and backing store in powers of 2.
Consequently, the following table represents the hierarchy of memory capacity


Unit of memory     Composed of                                   Typical files
1 bit              can be 1 or 0
1 byte             8 bits                                        1 character
1 kilobyte (Kb)    2^10 = 1,024 bytes                            half an A4 page
1 megabyte (Mb)    2^20 = 1,048,576 bytes (1,024 Kb)             500 A4 pages
1 gigabyte (Gb)    2^30 = 1,073,741,824 bytes (1,024 Mb)         500,000 A4 pages
1 terabyte (Tb)    2^40 = 1,099,511,627,776 bytes (1,024 Gb)     enormous!

The units above are used to measure the capacity of both the main memory and the backing store.
However, the capacity of backing store devices is much larger than that of main memory.

At the time of writing this unit memory is measured in terms of megabytes and gigabytes (currently
up to 3 Gb of RAM), whereas a typical hard disk has a capacity in the order of 80 Gb. No doubt these
figures will seem low in future years.
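The table above suggests a simple conversion routine. This sketch (an illustration, using the binary 1024-based convention of the table and its unit abbreviations) expresses a raw byte count in the largest sensible unit:

```python
# Express a byte count using the binary (1024-based) units from the
# table above. Unit labels follow the text's abbreviations.
def human_readable(num_bytes: float) -> str:
    units = ["bytes", "Kb", "Mb", "Gb", "Tb"]
    for unit in units[:-1]:
        if num_bytes < 1024:
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f} {units[-1]}"

print(human_readable(456_347))     # 445.7 Kb -- the cumbersome figure above
print(human_readable(1_048_576))   # 1.0 Mb
```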

Computer Structure
The diagram below shows the components used in a typical computer system. It is a simple
representation of how a computer works and is often referred to as the ‘four box diagram’

                MEMORY
                  |
INPUT  ------> PROCESSOR ------> OUTPUT
                  |
            BACKING STORE

 Input devices – Enter programs and data into the computer system.


 Central Processing Unit (CPU) – This is the part of the computer that processes data. It
consists of the control unit and the arithmetic and logic unit.
 Main Memory – Temporary storage to hold programs and data during execution/ processing.
 Control Unit – Controls execution of programs.
 Arithmetic Logic Unit (ALU) – Performs actual processing of data using program
instructions.
 Output devices – Displays information processed by the computer system.


 Storage devices – Permanent storage of data and programs before and after it is processed by
the computer system.
 Communication devices – Enable communication with other computers.

When your computer is switched off all programs and data are held on backing store media such as
hard drives, floppy disks, zip disks and CD-R/W. Once the computer is switched on, the operating
system is loaded from the backing store into main memory (RAM). The computer is now ready to run
programs.

When the user opens a word processor file both the application program and the file itself are loaded
into the main memory. The user may then edit the document by typing on the keyboard. It is the
processor that controls the timing of operations and runs the word- processing program, allowing the
user to add new text.

Once the editing is complete, the user saves the file to the backing store, and this overwrites the original file (unless a new file name is used). If there is a power failure, or the user does not save the document to the backing store, the changes will be lost.

Throughout this process the document is output to the monitor so that the user can see what is happening. The user may wish to obtain a hardcopy of the document by using the mouse (input) to instruct the processor (process) to make a printout (output).

Hardware
Refers to the physical, tangible computer equipment and devices, which provide support for major
functions such as input, processing (internal storage, computation and control), output, secondary
storage (for data and programs), and communication.

Hardware categories
A computer system is a set of integrated devices that input, output, process, and store data and
information. Computer systems are currently built around at least one digital processing device. There
are five main hardware components in a computer system: the central processing unit (CPU); primary
storage (main memory); secondary storage; and input and output devices.

Basic elements of hardware


The basic elements that make up a computer system are as follows:
 Input devices
 Output devices
 Processing devices

Input Devices
Most computers cannot accept data in forms customary to human communication such as speech or
hand-written documents. It is necessary, therefore, to present data to the computer in a way that provides
easy conversion into its own electronic pulse-based forms. This is commonly achieved by typing data
using the keyboard or using an electronic mouse or any other input device.

Keyboard

It can be connected to a computer system through a terminal. A terminal is a form of input and output
device. A terminal can be connected to a mainframe or other types of computers called a host computer
or server. There are four types of terminals namely dumb, intelligent, network and Internet.

 Dumb Terminal
- Used to input and receive data only.
- It cannot process data independently.
- A terminal used by an airline reservation clerk to access a mainframe computer
for flight information is an example of a dumb terminal
 Intelligent Terminal
- Includes a processing unit, memory, and secondary storage.
- It uses communications software and a telephone hookup or other
communications link.
- A microcomputer connected to a larger computer by a modem or network link is
an example of an intelligent terminal.
 Network Terminal
- Also known as a thin client or network computer.
- It is a low cost alternative to an intelligent terminal.
- Most network terminals do not have a hard drive.
- This type of terminal relies on a host computer or server for application or system
software.
 Internet Terminal
- Is also known as a web terminal.
- It provides access to the Internet and displays web pages on a standard television
set.
- It is used almost exclusively in the home.

Direct data entry devices – Direct entry creates machine-readable data that can go directly to the
CPU. It reduces human error that may occur during keyboard entry. Direct entry devices include
pointing, scanning and voice-input devices.

Pen input devices e.g. Lightpen


Pen input devices are used to select or input items by touching the screen with the pen. Light pens accomplish this by using a light-sensitive cell at the tip of the pen. When the light pen is placed against the monitor, it closes a photoelectric circuit that identifies the spot for entering or modifying data. Engineers who design microprocessor chips or airplane parts use light pens.

Touch sensitive screen inputs


Touch-sensitive screens, or touch screens, allow the user to execute programs or select menu items by touching a portion of a special screen. Behind the plastic layer of the touch screen are crisscrossed invisible beams of infrared light; touching the screen with a finger activates actions or commands. Touch screens are often used in ATMs, information centres, restaurants and stores. They are popularly used at gas stations for customers to select the grade of gas or request a receipt at the pump, as well as in fast-food restaurants to allow clerks to enter orders easily.


Scanning Devices
Scanning devices, or scanners, can be used to input images and character data directly into a computer.
The scanner digitises the data into machine-readable form. The scanning devices used in direct-entry
include the following:
 Image Scanner – converts images on a page to electronic signals.
 Fax Machine – converts light and dark areas of an image into format that can be sent
over telephone lines.
 Bar-Code Readers – photoelectric scanner that reads vertical striped marks printed
on items.
 Character and Mark Recognition Devices – scanning devices used to read marks
on documents.

Character and Mark Recognition Device Features


They can be used by mainframe computers or powerful microcomputers. There are three kinds of
character and mark recognition devices:

a. Magnetic-ink character recognition (MICR)


Magnetic ink character recognition, or MICR, readers are used to read the numbers printed at the
bottom of checks in special magnetic ink. These numbers are an example of data that is both machine
readable and human readable. The use of MICR readers increases the speed and accuracy of processing
checks.

b. Optical-character recognition (OCR)


Reads special preprinted characters, such as those on utility and telephone bills.

c. Optical-mark recognition (OMR)


Reads marks on tests – also called mark sensing. Optical mark recognition readers are often used for test scoring since they can read the location of marks on what is sometimes called a mark sense document. This is how standardised tests such as the KCPE, SAT or GMAT are scored.

Voice–input devices
Voice-Input Devices can also be used for direct input into a computer. Speech recognition can be used
for data input when it is necessary to keep your hands free. For example, a doctor may use voice
recognition software to dictate medical notes while examining a patient. Voice recognition can also
be used for security purposes to allow only authorized people into certain areas or to use certain
devices.

 Voice-input devices convert speech into a digital code.


 The most widely used voice-input device is the microphone.
 A microphone, sound card, and software form a voice recognition system.


Note:
Point-of-sale (POS) terminals (electronic cash registers) use both keyboard and direct entry.
 Keyboard Entry can be used to type in information.
 Direct Entry can be used to read special characters on price tags.

Point-of-sale terminals can use wand readers or platform scanners as direct entry devices.
 Wand readers or scanners reflect light on the characters.
 Reflection is changed by photoelectric cells to machine-readable code.
 Encoded information on the product’s barcode, e.g. the price, appears on the terminal’s digital display.

Output Devices
The output devices covered this sub-section include:

a. cathode-ray tube (CRT) monitors


b. LCD panels
c. inkjet printers
d. laser printers
e. loudspeakers

a. CRT monitors
CRT monitors comprise a sealed glass tube that has no air inside it. An electron gun at one end fires a
stream of tiny electrons at the screen located at the other end. The image is made by illuminating
particles on the screen.

Accuracy

The main factors are the refresh rate, the number of pixels and also the physical size of the monitor.
What is really important is what the refresh rate will be at the maximum desired resolution. To keep it
simple, every pixel or dot on the screen is refreshed or redrawn many times every second. If this
flicker can be detected it can cause eyestrain and image quality is simply not the same as if it were
flicker-free. The industry standard for flicker-free images is 75 Hz as very few people can detect
flicker at or above 75 Hz. Most flicker-free monitors offer a refresh rate of 85 Hz. Those that use
higher rates do not offer any additional advantage and could even be considered counter-productive.

Resolution

A monitor image is made up of pixels, or picture elements. Pixels are either illuminated or not; the
pattern they show is what makes up the image.

A 17" monitor may have a maximum resolution of 1280 × 1024. Not only does this ratio (5:4) cause image distortion, but text is simply too small to read at so high a resolution on this size of monitor. A 17" monitor should use either an 800 × 600 or 1024 × 768 resolution, which have the desired 4:3 ratio. A 15" monitor should use a 640 × 480 or 800 × 600 resolution (both 4:3).


b. LCD panels
Applying a voltage across an LCD material changes the alignment and light-polarising properties of
its molecules so that they can be used in conjunction with polarising filters to create an electronic
shutter that will either let light pass or stop it passing. Thus, the LCD display works by allowing
different amounts of white backlight through an active filter.

The red, green and blue of each pixel are achieved by filtering the white light that is allowed through.

LCD stands for Liquid Crystal Display. Most modern LCD panels use TFT (Thin Film Transistor) technology, so the two terms are often used interchangeably.

Accuracy

The main factors are the refresh rate, the number of pixels and the physical size of the LCD monitor.
The refresh rate is set at an industry standard of 75 Hz.

Resolution

Like the CRT monitor this is based on the pixel array. Different screen modes can be selected but the
maximum resolution is often 1280×1024.

The number of bits allocated to represent each pixel is called the colour depth. The colour depth can
be as high as 24 bits, which allows more than 16 million different colours to be represented. It is
difficult to imagine any more than 16 million colours so 24-bit colour depth is often referred to as true
colour.
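The relationship between colour depth and the number of representable colours is simply a power of two, which a one-line check (illustrative only) makes concrete:

```python
# Each pixel stored with d bits can take 2**d distinct values,
# i.e. that many colours.
def colours(depth_bits: int) -> int:
    return 2 ** depth_bits

print(colours(8))    # 256
print(colours(24))   # 16777216 -- the "more than 16 million" true-colour figure
```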

Typical uses

LCD monitors are lightweight, compact and can require little power to run compared to CRT
monitors. They are ideal for use in laptops, tablets and palmtops. Full size LCD monitors for desktop
systems are becoming very popular.

c. Inkjet printers
These work by spraying a fine jet of ink, which is both heated and under pressure, onto paper. Most have a black cartridge and either a single colour cartridge or separate cyan, magenta and yellow cartridges.

Accuracy

The quality of the printed image is measured by the number and spacing of the dots of ink on the page. The image resolution is generally measured in dots per inch (dpi); the higher the dpi, the better the quality or sharpness of the printed image. The vertical and horizontal resolutions may differ depending on the number of nozzles on the print head and the distance moved. A typical resolution is 2880 × 1440 dpi.

Speed

The major factor here tends to be the mode of communication with the computer. Often this figure is
given in terms of pages per minute for black and white or colour, e.g. black and white 10 ppm and
colour 6 ppm.

Typical uses


Home, office and business. These printers are ideal for the occasional presentation and for livening up mostly-text documents with some colour.

They are also good for creative home projects such as invitations, birth announcements and personal greeting cards.

d. Laser printers
These operate by using a laser beam to trace the image of the page layout onto a photosensitive drum.
This image then attracts toner by means of an electrostatic charge. The toner is fused to the paper by
heat and pressure.

Accuracy

Determined by the dpi. A typical laser printer can print from 600 to 2400 dpi, which produces very
high quality images.

Speed

A laser printer needs to have all the information about a page in its memory before it can start
printing. If the page has a lot of detail then it will take longer to print. One way to speed up a printer is
to add more internal RAM. Once the first page has printed the rest normally follow directly. Like
inkjet printers, speeds are given in terms of pages per minute, e.g. 14 ppm for black and white, 8 Mb
RAM.

e. Loudspeakers
There are two types of speaker systems used on computers: those that are inbuilt and those that are
external. Most computers will have a speaker (or two) incorporated in the case or perhaps the monitor.
The purpose of inbuilt speaker systems is limited to producing a sound from the computer and nothing
more; the quality is poor.

Multimedia computers are intended to produce good sound quality that is comparable to hi-fi systems.
They include ‘active speakers’, which have their own power supply and usually have an amplifier. A
good quality system will include a sub-woofer and five speakers to produce surround sound.

Accuracy/Quality

This can be measured as the comparison between the original sound and that produced by the
computer. Speakers are only one component of sound quality; the formats of the sound tracks and
type of soundcard also have a significant effect.

If we consider the sound produced from a pre-recorded CD or DVD movie then active systems can be
as good as a professional hi-fi system.

Processing Devices
(i) The CPU (Central Processing Unit)
The CPU (Central Processing Unit) controls the processing of instructions. The CPU produces
electronic pulses at a predetermined and constant rate. This is called the clock speed. Clock speed is
generally measured in megahertz, that is, millions of cycles per second


It consists of:

a. Control Unit (CU) – The electronic circuitry of the control unit accesses program instructions, decodes them and coordinates instruction execution in the CPU.
The main functions of the control unit are:

1. To control the timing of operations within the processor

2. To send out signals that fetch instructions from the main memory

3. To interpret these instructions

4. To carry out instructions that are fetched from the main memory

In general the control unit is responsible for the running of programs that are loaded into the main
memory.

b. Arithmetic and Logic Unit (ALU) – Performs mathematical calculations and logical comparisons.
The main functions of the ALU are:

1. To perform arithmetic calculations (addition, subtraction, multiplication, division)

2. To perform logic functions involving branching, e.g.

IF...THEN

c. Registers – These are high-speed storage circuits that hold the instruction and the data while the processor is executing the instruction.
d. Bus – This is a highway connecting internal components to each other.

(ii) Main Memory


Primary storage, also called main memory, although not part of the CPU, is closely related to it. Main memory holds program instructions and data before and after execution by the CPU; all instructions and data pass through main memory locations. Memory is located physically close to the CPU to decrease access time, that is, the time it takes the CPU to retrieve data from memory. Although the overall trend has been toward decreased memory access time, memory has not advanced as quickly as processors. Memory access time is often measured in nanoseconds, or billionths of a second.

(iii) Output
Results are taken from main storage and fed to an output device. This may be a printer, in which case
the information is automatically converted to a printed form called hard copy or to a monitor screen for
a soft copy of data or information.

Secondary Storage Devices


When a computer is switched off, data has to be stored on a secondary storage device so that it can be loaded back in at a later date. Current backing store devices fall into two categories: magnetic and optical. We will examine the following devices in turn.

Magnetic storage devices/media:
a. floppy drive
b. hard drive
c. zip drive
d. magnetic tape

Optical storage devices/media:
e. CD-ROM
f. CD-R
g. CD-RW
h. DVD-ROM
i. rewritable DVD (DVD-R and DVD-RW)

Random (direct) and serial access devices

Random access is where the system can go straight to the data it requires. A disk is a random-access
medium. To read data stored on the disk, the system simply has to have the address on the disk where
the data is stored, and the read head can go directly to that location and begin the transfer. This makes
a disk drive a faster method of data storage and data access than a tape drive, which uses serial access.

An alternative to direct access is sequential access (serial access), in which a data location is found by
starting at one place and seeking through every successive location until the data is found.
Historically, tape storage is associated with sequential access, and disk storage is associated with
direct access.
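The difference between the two access patterns can be sketched with an in-memory byte stream standing in for the storage medium (purely illustrative; real disk and tape hardware differ in detail):

```python
import io

def serial_find(stream: io.BytesIO, value: int) -> int:
    """Serial access: read from the start, one byte at a time,
    until the wanted byte (or end of stream) is found.
    Returns the number of single-byte reads performed."""
    stream.seek(0)
    reads = 0
    while True:
        b = stream.read(1)
        reads += 1
        if b == bytes([value]) or b == b"":
            return reads

store = io.BytesIO(bytes(range(100)))  # stand-in for a backing store

# Direct (random) access: jump straight to the known address -- one read.
store.seek(42)
direct = store.read(1)

# Serial access needs 43 single-byte reads to reach the same data.
serial_reads = serial_find(store, 42)
print(direct == bytes([42]), serial_reads)  # True 43
```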

Magnetic and Optical Storage

In magnetic disk storage, data is stored by magnetising the surface of flat, circular plates that constantly rotate at high speed (typically 60 to 120 revolutions per second). A read/write head floats on a cushion of air a fraction of a millimetre above the surface of the disk. The drive is inside a sealed unit because even a speck of dust could cause the heads to crash.

Optical storage is any storage method in which data is written and read with a laser for archival or
backup purposes. Typically, data is written to optical media, such as CDs and DVDs. For several
years, proponents have spoken of optical storage as a near-future replacement for both hard drives in
personal computers and tape backup in mass storage.

Optical media is more durable than tape and less vulnerable to environmental conditions. On the other
hand, it tends to be slower than typical hard drive speeds, and to offer lower storage capacities.
According to OSTA (Optical Storage Technology Association), current optical speeds are
approaching those of hard drives. A number of new optical formats, such as Blu-ray and UDO (ultra
density optical), use a blue laser to dramatically increase capacities.

a. Floppy drive/disk


A floppy disk is a small disk that the user can remove from the floppy disk drive. The disk is made
from circular plastic plates coated in ferric oxide. When the disk is formatted or initialised, the surface
of the disk is divided into tracks and sectors on which data is stored as magnetic patterns.

Type of Access

Direct/random

Speed of data access

Floppy disks are relatively slow to access because they rotate far more slowly than hard disks, at only
six revolutions per second, and only start spinning when requested. The access speed is about 36 Kb
per second.

Capacity

High-density disks hold 1.44 Mb of data (enough to store about 350 pages of A4 text). A floppy disk
needs to be formatted before it can be used but most disks are now sold already formatted.

Functions

Floppy disks used to be a convenient means of storing small files and of transferring files from one computer to another. Many single files are now larger than 1.44 Mb, mainly due to graphics and video (jpeg and mpeg), making the floppy disk an unsuitable medium for anything but small files.

New USB flash drives (32 Mb to 2 Gb), which can be inserted into a USB port, are making the floppy
disk drive redundant to the extent that some computers are now sold without a floppy disk drive.

b. Hard Disk
A hard disk is a rigid disk with a magnetised surface. The surface is divided into tracks and sectors on
which data is stored magnetically. The data is read by a read/write head fixed to an arm that moves
across the surface of the disk. Hard disks are usually sealed in a protective container to prevent dust
corrupting the data.

Type of access

Random/direct

Speed of data transfer

Hard disks rotate at much higher speeds than floppy disks, reaching speeds of up to 7200 rotations per
minute. This means that the fastest hard disk can transfer data from disk to computer at the rate of 22
Mb per second. Some can even manage higher transfer rates in short bursts of up to 33 Mb per
second.

Capacity

Measured in gigabytes, the standard amount for a desktop computer is currently 80 Gb but it is
possible to purchase hard disks with a capacity of 250 Gb

Functions


The hard drive is used in all computer systems: stand-alone, network and mainframe. It has become
an essential component of the modern computer, particularly with the increase in video editing, which
demands a great deal of storage space. A typical hard disk will store:

• the operating system


• applications
• user files
c. Zip drive
A zip drive is a removable-media storage device that stores computer data magnetically. It is durable and portable, and a 100 Mb zip disk can hold the equivalent of about 70 floppy disks.

Type of access

Direct/random

Speed of data access

This depends on the connection type. The USB 1.0 transfer rate is 0.9 Mb/s, the USB 2.0 transfer rate is 7.3 Mb/s and the Firewire rate is 7.3 Mb/s.

Capacity

Older zip drives take 100 Mb disks, but 250 Mb has become the standard and the latest devices hold a
massive 750 Mb. The newer drives can also read all previous zip media.

Functions

Good for storing large files on a portable medium, particularly photo images, which tend to be large,
desktop publishing files and video. Often used to back up data.

As with floppy disks, USB flash drives are likely to make zip drives (especially the smaller capacity
ones) obsolete.

d. Magnetic tape
For almost as long as computers have existed, magnetic tape has been the back-up medium of choice.
Tape is inexpensive, well understood and easy to remove and replace. But as hard drives grew larger
and databases became massive data warehouses, tape had to change to store more data and do it faster.
From large reel-to-reel mainframe tape, focus shifted to the speed and convenience of digital audio
tapes (DATs). Tape is a sequential medium so data has to be read from it in order.

Modern systems use cassettes. Some of these are even smaller than an audio cassette but hold more data than the huge reels.

Type of access

Serial

Speed of data access

Access speeds have traditionally been slow due to the serial access to the data; however, a data transfer rate of between 0.92 Mb/s and 30 Mb/s is possible.


Capacity

Magnetic tape comes in a wide range of sizes, from 10 Gb to 500 Gb. Compressed data tapes can hold
up to a massive 1300 Gb of data on a single tape.

Functions

Magnetic tape can be used for permanent storage. Tapes are often used to make a copy of hard disks
for back-up (security) reasons. This is automatically done overnight and is suitable for network or
mainframe backups.

e. CD-ROM drive
The term CD-ROM is short for compact disk read-only memory. CD-ROM disks can only be used to
read information stored on them – the user cannot save data to a CD-ROM disk. CD-ROM writers use
a high-powered laser to store data by making tiny pits in the surface of the CD-ROM disk.

The pattern of these pits is read by a sensor in the CD-ROM drive that detects light reflected off the
surface of the disk. The patterns are then turned into binary numbers.

Type of access

Direct/random

Speed of data access

The speed varies from drive to drive. The original CD drives read data at a rate of 150 Kb per second. Rather than quoting speed in Kb/s, the norm has been to express it as a multiple of 150 Kb/s. The latest 56-speed drives read data at a rate of 56 × 150 Kb/s, i.e. 8.4 Mb/s.

Manufacturers quote the highest speeds achieved by their drives during tests in ideal conditions but
these speeds are often not achieved during typical use.

Capacity

The capacity of CD-ROM disks ranges from 650 Mb to 700 Mb of data. With compression the
capacity can be up to 1.3 Gb.

f. CD-RW drives
CD-Rewritable (CD-RW) drives let you burn, or write, CD-R and CD-RW media with your favourite
music or photos or just to back up data. The most important feature to look for is the drive’s record
speed, which tells you how long you’ll spend waiting for it to finish burning a CD.

Type of access

Direct/random

Speed of data access


Three numbers are usually used to rate drive speed: record speed, rewrite speed and read speed (usually in that order). The highest number listed is often for reading; the lowest is for rewriting. The record speed is frequently the same as or less than the read speed. Note that a drive with a 48× record speed could theoretically burn a CD in half the time a 24× drive requires, but in practice the speed difference is less pronounced.

g. CD-R (media)
Compact disk recordable (CD-R) is also known as write once, read many. This is a bit of a misnomer, as it is in fact possible to write to a CD-R in multiple sessions until the disk is finalised. Once finalised, the disk cannot be written to again. There are several different formats of CD-R and some formats will not work in standard CD-ROM drives. The write process is irreversible.

Type of access

Direct/random

Speed of data access

These disks are burned for the CD-ROM drive, so access speeds are measured in multiples of 150 Kb/s. The latest speed is 56× read.

Capacity

Normally 700 Mb but up to 1.3 Gb with compression. The capacity can also be given as the time to
record music onto the CD until full, e.g. 80 minutes.

Typical uses

Some of the uses of CD-R include:

• Distribution of a finished product/program


• Permanent backing storage and archiving
• Encyclopedias
h. CD-RW (media)
These are the same size and shape as other CD media but you can write and rewrite a CD-RW.

Speed of data access

This is really the read speed, which is the same as for a CD-R. However, we also have to consider the initial write speed to a blank CD (for example, 52×) and the re-write speed to a used CD (for example, 32×). Generally the re-write speed is the slower of the quoted speeds.

Capacity

Same as a CD-R, i.e. 700 Mb

Typical uses


• Portable media for transferring large files to another computer


• Back-up of hard drive (drive image)
• Storing photos/movie files (large size)
i. DVD-ROM drive (digital versatile disk)
These disks are the same size (12 cm) and composition (polycarbonate) as CDs, but store more
information as a consequence of smaller track spacing and smaller ‘lands and pits’ (bits).

Speed of access to data

The data transfer rate from a DVD-ROM disk at 1× speed is roughly equivalent to that of a 9× CD-ROM drive (the 1× CD-ROM data transfer rate is 150 Kb/s, or 0.146 Mb/s). The DVD physical spin rate is about three times faster than that of a CD (that is, 1× DVD spin = 3× CD spin). A drive listed as ‘16×/40×’ reads a DVD at 16 times normal speed, or a CD at 40 times normal speed.

Typical uses

• Encyclopedias
• Games
• Movies
DVD-RW combination drive

There are currently two main versions of rewritable DVDs: DVD-RW and DVD+RW. There is little
difference between the two other than speed of access to data. Modern DVD-RW drives allow access
to both types of disks. DVD-RW drives write DVD-R, DVD-RW, CD-R, and CD-RW disks.

Speed of access to data

The time it takes to burn a DVD depends on the speed of the recorder and the amount of data. The playing time of the video may have little to do with the recording time, since half an hour at high data rates can take more space than an hour at low data rates. A 2× recorder, running at 22 Mb/s, can write a full 4.7 Gb DVD in about 30 minutes; a 4× recorder can do it in about 15 minutes.
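The quoted burn times follow from dividing capacity by transfer rate. This sketch assumes, as is conventional for DVDs, that the 4.7 Gb capacity is in decimal gigabytes and the 22 Mb/s rate is in megabits per second; under those assumptions it reproduces the ballpark figures:

```python
# Estimate DVD burn time from capacity and transfer rate.
# Assumes decimal gigabytes and megabits per second (see lead-in).
def burn_minutes(capacity_gbytes: float, rate_mbit_per_s: float) -> float:
    bits = capacity_gbytes * 1_000_000_000 * 8      # bytes -> bits
    seconds = bits / (rate_mbit_per_s * 1_000_000)  # bits / (bits per second)
    return seconds / 60

print(round(burn_minutes(4.7, 22)))  # 28 -- the text's "about 30 minutes" at 2x
print(round(burn_minutes(4.7, 44)))  # 14 -- "about 15 minutes" at 4x
```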

DVD-R (media)

There are six different formats of DVD and this one allows the user to record in a single session or in
multiple sessions until the disk is complete. DVD-R is compatible with most DVD drives and players.

Speed of data access

This depends on the drive being used but a typical speed is 40×(CD),i.e. 6 Mb s–1

Capacity

Normally 4.7 Gb

A major problem with DVD is the format of data. There are several different data formats that are not
compatible with each other. In other words, a DVD+R/RW drive cannot write a DVD-R or DVD-RW
disk, and vice versa (unless it is a combo drive that writes both formats). Very roughly, DVD-R and


DVD+R disks work in about 85% of existing drives and players, while DVD-RW and DVD+RW
disks work in around 70%. The situation is steadily improving.

Interface
An interface is a hardware device that is needed to allow the processor to communicate with an
external or internal device such as a printer, modem or hard drive. Sometimes the interface is a board
in the computer and sometimes it is a connection to a port.

The reason that an interface is required is that there are differences in characteristics between the
peripheral device and the processor. Those characteristics include:

• Data conversion
• Speed of operation
• Temporary storage of data.

Data Conversion

The commonest example of data conversion is when the peripheral accepts an analogue signal that
must be converted into digital for the processor to comprehend it.
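Analogue-to-digital conversion means sampling a continuous signal and rounding each sample to one of a fixed set of discrete levels. A minimal sketch of a hypothetical 3-bit converter:

```python
# Quantise analogue samples in the range 0.0 .. 1.0 into 3-bit digital
# levels (0 .. 7) - a deliberately tiny, hypothetical converter.
LEVELS = 8  # a 3-bit converter has 2**3 = 8 discrete levels

def to_digital(sample: float) -> int:
    # Map the continuous value to the nearest of the 8 discrete steps.
    return min(LEVELS - 1, round(sample * (LEVELS - 1)))

analogue = [0.0, 0.1, 0.52, 0.97]
print([to_digital(s) for s in analogue])  # [0, 1, 4, 7]
```

A real interface uses many more levels (a 16-bit audio converter has 65,536), but the principle is the same.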

Speed of operation

The speed of operation of peripheral devices tends to be in terms of pages per minute, frames per
second or megabytes per second; however, the processor works at a rate in line with its internal clock,
which is much faster. The speed of the internal operations is measured in gigahertz, and a processor
will typically work at 2.8 GHz, i.e. 2,800,000,000 cycles per second. This difference in the speed of
operation between the processor and devices requires an interface between the two devices as the
processor can deliver data much faster than the peripheral device can handle.

Data storage

In older computer systems the processor would stand idle while the printer was finishing a print job.
One way around this problem is to have the data held temporarily in transit between the processor and
the printer. Interfaces are used to hold this data, thus releasing the processor; the data is held in a
‘buffer’. Keyboard characters entered by the user are stored in the keyboard buffer while they are
being processed.
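The buffering idea can be sketched with a simple queue: the processor deposits data quickly and is released, while the slower device drains the buffer at its own pace (a toy illustration, not a real device driver):

```python
from collections import deque

# A print buffer: the processor enqueues pages instantly and is freed,
# while the printer dequeues them later at its own (much slower) rate.
buffer = deque()

def processor_send(pages):
    # Fast path: the processor just drops data into the buffer and moves on.
    for page in pages:
        buffer.append(page)

def printer_drain():
    # Slow path: the printer removes one item at a time when it is ready.
    printed = []
    while buffer:
        printed.append(buffer.popleft())
    return printed

processor_send(["page 1", "page 2", "page 3"])
print(printer_drain())  # pages emerge in the order they were buffered
```

The keyboard buffer mentioned above works the same way: keystrokes queue up until the processor is ready to read them.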

One of the important considerations when purchasing a portable CD-RW drive is the type of interface
it uses. There are four interface options for portable drives: parallel port, PC card, USB 2.0 and IEEE
1394 Firewire.

Most users favour USB 2.0 and Firewire because of their high connection speeds and flexibility.

Types of interfaces include IDE, SCSI, serial, parallel, PCI, USB and Firewire
Storage capacity abbreviations

 KB - kilobyte - 1,000 (thousand) bytes
 MB - megabyte - 1,000,000 (million) bytes
 GB - gigabyte - 1,000,000,000 (billion) bytes
 TB - terabyte - 1,000,000,000,000 (trillion) bytes
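These multipliers translate directly into code. The sketch below assumes the decimal (powers of 1,000) convention used in the table:

```python
# Decimal storage units, matching the table above (1 KB = 1,000 bytes).
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def to_bytes(value: float, unit: str) -> int:
    return round(value * UNITS[unit])

print(to_bytes(4.7, "GB"))  # 4700000000 - a DVD's capacity in bytes
```

Note that memory chips are often quoted in binary units instead (1 KB = 1,024 bytes); the decimal convention here follows the table.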

Communication devices
There are two types of communication devices
a) Modem
b) Fax/modem

a. Modem
Modems allow computers (digital devices) to communicate via the phone system (based on analog
technology). A modem turns the computer's digital data into an analog signal, sends it over the phone
line, and another modem at the other end of the line turns the analog signal back into digital data.

b. Fax/modem

It is a basic digital/analog modem enhanced with fax transmission hardware that enables faxing of
information from computer to another fax/modem or a fax machine (NOTE: a separate scanner must
be connected to the computer in order to use the fax/modem to transfer external documents)

Computer Memory
Memory capability is one of the features that distinguish a computer from other electronic devices.
Like the CPU, memory is made of silicon chips containing circuits holding data represented by on or
off electrical states, or bits. Eight bits together form a byte. Memory is usually measured in megabytes
or gigabytes.

A kilobyte is roughly 1,000 bytes. Specialized memories, such as cache memories, are typically
measured in kilobytes, while primary memory and secondary storage capacities today run to
megabytes (millions of bytes) or gigabytes of space.

Types of Memory


The main memory of a computer is composed of ROM and RAM.


Read Only Memory (ROM) is used to store a small part of the operating system called the bootstrap
loader. When your computer is switched on, the bootstrap loader examines the backing store devices
to find the operating system. Once found it is loaded into RAM.

1. RAM (Random Access Memory)/RWM (Read Write Memory) – Also referred to as main
memory, primary storage or internal memory. Its content can be read and changed, and it is the
working area for the user. It is used to hold programs and data during processing. RAM chips are
volatile, that is, they lose their contents if power is disrupted. Typical sizes of RAM include 32MB,
64MB, 128MB, 256MB and 512MB.

a. EDO (Extended Data Out) –It is a type of random access memory (RAM) chip that
improves the time to read from memory on faster microprocessors such as the Intel Pentium.
EDO RAM was initially optimized for the 66 MHz Pentium. For faster computers, different
types of synchronous dynamic RAM (SDRAM) are recommended

b. DRAM (Dynamic RAM) – It is a type of random-access memory that stores each bit
of data in a separate capacitor within an integrated circuit. The capacitor can be either charged
or discharged; these two states are taken to represent the two values of a bit, conventionally
called 0 and 1. Since capacitors leak charge, the information eventually fades unless the
capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic
memory as opposed to SRAM and other static memory.

c. SDRAM (Synchronous DRAM) – It is dynamic random access memory (DRAM) that is
synchronized with the system bus. Classic DRAM has an asynchronous interface, which
means that it responds as quickly as possible to changes in control inputs. SDRAM has a
synchronous interface, meaning that it waits for a clock signal before responding to control
inputs and is therefore synchronized with the computer's system bus. The clock is used to
drive an internal finite state machine that pipelines incoming commands. The data storage
area is divided into several banks, allowing the chip to work on several memory access
commands at a time, interleaved among the separate banks. This allows higher data access
rates than an asynchronous DRAM.

2. ROM (Read Only Memory) – Its contents can only be read and cannot be changed. ROM
chips are non-volatile, so the contents aren't lost if the power is disrupted. ROM provides permanent
storage for unchanging data and instructions, such as data from the computer maker. It is used to hold
instructions for starting the computer, called the bootstrap program.

PROM: the settings must be programmed into the chip. After they are programmed, PROM behaves
like ROM – the circuit states can’t be changed. PROM is used when instructions will be permanent,
but they aren’t produced in large enough quantities to make custom chip production (as in ROM) cost
effective. PROM chips are, for example, used to store video game instructions.

Instructions are also programmed into erasable programmable read-only memory. However, the
contents of the chip can be erased and the chip can be reprogrammed.

EPROM chips are used where data and instructions don’t change often, but non-volatility and
quickness are needed. The controller for a robot arm on an assembly line is an example of EPROM
use.

a) PROM (Programmable Read Only Memory) – It is written onto only once using special
devices. Used mostly in electronic devices such as alarm systems.
b) EPROM (Erasable Programmable Read Only Memory) –Can be written onto more than
once.
3. Cache Memory - Cache memory is high-speed memory that a processor can access more quickly
than RAM. Frequently used instructions are stored in cache since they can be retrieved more quickly,
improving the overall performance of the computer. Level 1 (L1) cache is located on the processor;
Level 2 (L2) cache is located between the processor and RAM.
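The benefit of a cache comes from checking a small, fast store before falling back to the larger, slower one. A dictionary-based sketch (the `slow_fetch` function is a hypothetical stand-in for a main-memory access):

```python
cache = {}

def slow_fetch(address):
    # Hypothetical stand-in for a slow main-memory (RAM) access.
    return address * 2  # pretend this is the value stored at the address

def read(address):
    # Check the small, fast cache first; fall back to slow memory on a miss.
    if address in cache:
        return cache[address]       # cache hit: fast path
    value = slow_fetch(address)     # cache miss: slow path
    cache[address] = value          # keep it for the next access
    return value

read(10)         # first access: a miss, fetched from "RAM" and cached
print(read(10))  # 20 - second access is a hit, served from the cache
```

Real caches also have limited size and eviction policies; this sketch shows only the hit/miss logic.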

Software
Software is a program prepared and tested by one or a group of programmers and systems analysts to
perform a specified task. Software is simply a set of instructions that causes a computer to perform
one or more tasks. The set of instructions is often called a program or, if the set is particularly large
and complex, a system. Computers cannot do any useful work without instructions from software;
thus a combination of software and hardware (the computer) is necessary to do any computerized
work. A program must tell the computer each of a set of tasks to perform, in a framework of logic,
such that the computer knows exactly what to do and when to do it. Data are raw facts and ideas that
have not been processed, while information is data that has been processed so as to be useful to the
user.

Classification of Software


SOFTWARE
• System software
   o Operating system
   o Service programs
      - Utilities
      - Development programs
      - Communication programs
• Application software
   o General/ready-made applications
   o Special/tailor-made applications

Software can be broadly classified into

1) system software and


2) application software

1) System software
It consists of programs that control the operations of the computer and enable the user to make
efficient use of it. They coordinate computer activities and optimize use of computers. They are used
to control the computer and to develop and run application programs. Examples of jobs done by
system software are management of computer resources, defragmentation, etc. System software can
be divided into:

(i) Operating system


It is the most important program that runs on a computer and controls its operation. It performs basic
tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping
track of files and directories on the disk, and controlling peripheral devices such as disk drives and
printers. In general, the operating system supervises and directs all the software components and the
hardware components. A sophisticated operating system can handle multiple processors, and many
users and tasks simultaneously. Examples of computer operating systems are UNIX, Microsoft
Windows 95/98, Windows NT, Windows 2000, Windows XP, Windows Vista and Linux.

(ii) Service programs


They are programs designed for general support of the processes of a computer; a computer system
provides utility programs to perform the tasks needed by most users. Service programs can further be
divided into:

 Utilities – They perform a variety of tasks that maintain or enhance the computer's operating
system. Utility programs are generally fairly small; each type has a specific job to do. Below
are some descriptions of utilities.
 Anti-virus applications protect your computer from the damage that can be caused by
viruses and similar programs
 Compression utilities make files smaller for storage (or sending over the Internet)
and then return them to normal size.
 Data recovery utilities attempt to restore data and files that have been damaged or
accidentally deleted.
 Disk defragmenters reorganize the data stored on disks so that it is more efficiently
arranged.
 Firewalls prevent outsiders from accessing your computer over a network such as the
Internet.
 Development programs are used in the creation of new software. They comprise sets of
software tools that allow programs to be written and tested. Knowledge of an appropriate
programming language is assumed. Tools used here are:
 Text editor – allows one to enter and modify program statements.
 Assembler – translates assembly language programs (which are processor-specific)
into machine language.
 Compiler – makes it possible for the programmer to convert source code to object
code, which can be stored and run on different computers.
 Interpreter – converts and executes source program statements one by one, without
the program being compiled first.
 Libraries – commonly used parts or portions of a program which can be called or
included in the programmer's code without having to recode that portion.
 Diagnostic utilities – used to detect bugs in the logic of a program during program
development.
 Communication programs – programs that make it possible to transmit data.
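As a taste of what a compression utility from the list above does, Python's standard `zlib` module shrinks repetitive data and restores it exactly:

```python
import zlib

# Repetitive data compresses well; compression must be lossless, so
# decompressing returns exactly the original bytes.
original = b"backup backup backup backup backup " * 100
compressed = zlib.compress(original)

print(len(original), "->", len(compressed))     # far fewer bytes to store
assert zlib.decompress(compressed) == original  # the round trip is exact
```

Commercial compression utilities add file formats, archives and user interfaces, but the underlying lossless round trip is the same idea.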

2) Application software
These are programs that enable users to do their jobs, e.g. typing, record keeping, production of
financial statements, drawing, and statistics.

Application software are divided into

a) General /ready-made software


b) Tailor made/special purpose software

a) General /ready-made software

General/ready-made software is developed to perform a variety of tasks, usually determined by the
user. Such software can be customized by the user to achieve specific goals, e.g. MS Office, which is
a suite of programs performing a variety of tasks: word processing for producing documents, a
database for storing, retrieving and manipulating data, and spreadsheets for various calculations.
General purpose programs are discussed below.

i. Word processing applications


Writing tasks previously done on typewriters with considerable effort can now be easily completed
with word-processing software. Documents can be easily edited and formatted. Revisions can be
made by deleting (cutting), inserting, moving (cutting and pasting), and copying data. Documents can
be stored (saved) and opened again for revisions and/or printing. Many styles and sizes of fonts are
available to make the document attractive. Example: MS Word, Word Pad etc.

ii. Spreadsheet applications


Spreadsheet software permits performance of an almost endless variety of quantitative tasks such as
budgeting, keeping track of inventory, preparing financial reports, or manipulating numbers in any
fashion, such as averaging each of ten departmental monthly sales over a six-month period. A
spreadsheet contains cells, the intersection of rows and columns. Each cell contains a value keyed in
by the user. Cells also contain formulas with many capabilities, such as adding, multiplying, dividing,
subtracting, averaging, or even counting. An outstanding feature is a spreadsheet's ability to
recalculate automatically. If one were preparing a budget, for example, and wanted to change a
variable such as an increase in salary or a change in amount of car payments, the formulas would
automatically recalculate the affected items and the totals. Example: Excel, Lotus1-2-3 etc.
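Automatic recalculation can be modelled by storing a formula rather than a fixed result, so a total is recomputed from current inputs whenever it is read (a toy model with invented cell names, not real spreadsheet code):

```python
# Cells hold plain values; a "formula" is a function over the cell store,
# so it always reflects the current inputs.
cells = {"salary": 3000, "car_payment": 400}

def total(store):
    # A formula cell: recomputed from current inputs every time it is read.
    return store["salary"] + store["car_payment"]

print(total(cells))     # 3400
cells["salary"] = 3200  # change one variable...
print(total(cells))     # 3600 - the total has "recalculated" automatically
```

A real spreadsheet adds dependency tracking so only affected cells are recomputed, but the formula-over-values principle is the same.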

iii. Database software


A database contains a list of information items that are similar in format and/or nature. An example is
a phone book that lists a name, address, and phone number for each entry. Once stored in a database,
information can be retrieved in several ways, using reports and queries. For example, all the names
listed for a given area code could be printed out and used for a commercial mailing to that area.
Examples of database software include Ms Access, Dbase, Oracle etc.
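The phone-book example can be sketched as a query over uniformly structured records (the names and numbers here are invented for illustration):

```python
# Every record has the same fields - the essence of a database table.
phone_book = [
    {"name": "Achieng", "area_code": "020", "number": "555-0101"},
    {"name": "Mutua",   "area_code": "041", "number": "555-0102"},
    {"name": "Wanjiru", "area_code": "020", "number": "555-0103"},
]

def query_by_area(records, area_code):
    # Retrieve every entry in a given area code, e.g. for a mailing list.
    return [r["name"] for r in records if r["area_code"] == area_code]

print(query_by_area(phone_book, "020"))  # ['Achieng', 'Wanjiru']
```

Database software generalizes this: indexes make such lookups fast, and a query language expresses the filter.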

iv. Presentation software: for making slide shows.


It allows users to create visual presentations. A speaker may use presentation software to organize a
slide show for an audience. Text, graphics, sound, and movies can easily be included in the
presentation. An added feature is that the slide show may be enhanced by handouts with two to six
slides printed on a page. The page may be organized to provide space for notes to be written in by the
audience as the presentation ensues. An example of this is PowerPoint. Preparation of the
presentation is simplified by the use of 'wizards' that walk the user through its creation.

v. Desktop publishing software


This software permits the user to prepare documents by using both word-processing devices and
graphics. Desktop publishing software uses word-processing software, with all its ease of entering and
revising data, and supplements it with sophisticated visual features that stem from graphics software.
For example, one can enhance a printed message with virtually any kind of illustration, such as
drawings, paintings, and photographs. Examples of desktop publishing software are PageMaker,
CorelDRAW, and MS Publisher.

vi. Multimedia applications for creating video and music


It allows users to create and play back images, audio, video, etc. Examples: RealPlayer, Media Player, etc.


vii. Activity management programs like calendars and address books


NB: Nowadays most of the general purpose software is being sold as a complete software
suite such as Microsoft office or Lotus SmartSuite. These suites offer four or more software
products packaged together at a much lower price than buying the packages separately.
b) Tailor made/special purpose software

A tailor-made computer system refers to a computer application developed by in-house IT personnel
or an outside software house according to the specific user requirements of a firm. Such systems are
developed for a given purpose, e.g. a payroll system, a stock control system, etc.

Sources of Application Software

Proprietary Software
Proprietary software is computer software licensed under exclusive legal right of the copyright
holder with the intent that the licensee is given the right to use the software only under certain
conditions, and restricted from other uses, such as modification, sharing, studying, redistribution,
or reverse engineering.
Some of the advantages of proprietary software include:
a) You can get exactly what you need in terms of reports, features etc.
b) Being involved in development offers a further level in control over results.
c) There is more flexibility in making modifications that may be required to counteract a new
initiative by a competitor or to meet new supplier or customer requirements. A merger with
another firm or an acquisition will also necessitate software changes to meet new business
needs.


Some of the disadvantages of proprietary software include:


a) It can take a long time and significant resources to develop required features.
b) In house system development staff may become hard pressed to provide the required level of
on-going support and maintenance because of pressure to get on to other new projects.
c) There is more risk concerning the features and performance of the software that has yet to be
developed.

Off-the-shelf Software


Commercial-Off-The-Shelf Software (COTS) is pre-built software usually from a 3rd party vendor.
COTS can be purchased, leased or even licensed to the general public. Better, faster and cheaper
software applications are what organizations are currently looking for.
Some of the advantages of off-the-shelf software include:
a) The initial cost is lower since the software firm is able to spread the development costs
over a large number of customers.
b) There is lower risk that the software will fail to meet basic business needs. You can
analyse the existing features and performance of the package.
c) Package is likely to be of high quality since many customer firms have tested the software
and helped identify many of its bugs.

Some of the disadvantages of off-the-shelf software include:


a) An organization may have to pay for features that are not required and never used.
b) The software may lack important features, thus requiring future modifications or
customisation. This can be very expensive because users must adopt future releases of the
software.
c) Software may not match current work processes and data standards.

Programming Languages
Programming languages are collections of commands, statements and words that are combined using
a particular syntax, or rules, to write both systems and application software. This results in meaningful
instructions to the CPU.

Generations of programming languages


Machine Language (1st Generation Languages)
A machine language consists of binary digits, that is, zeroes and ones. Instructions and addresses are
written in binary (0,1) code. Binary is the only “language” a CPU can understand. The CPU directly
interprets and executes this language, therefore making it fast in execution of its instructions. Machine
language programs directly instructed the computer hardware, so they were not portable. That is, a
program written for computer model A could not be run on computer model B without being rewritten.
All software in other languages must ultimately be translated down to machine language form. The
translation process makes the other languages slower.
An advantage of machine language includes:


 The only advantage is that programs in machine language run very fast because no translation
program is required by the CPU.

Disadvantages of machine language include:


 It is very difficult to program in machine language. The programmer has to know details of
hardware to write program.
 The programmer has to remember a lot of codes to write a program, which results in program
errors.
 It is difficult to debug the program.

Assembly Language (2nd Generation languages)


It uses symbols and codes instead of binary digits to represent program instructions. It is a symbolic
language meaning that instructions and addresses are written using alphanumeric labels, meaningful
to the programmer.

The resulting programs still directly instructed the computer hardware. For example, an assembly
language instruction might move a piece of data stored at a particular location in RAM into a particular
location on the CPU. Therefore, like their first generation counterparts, second generation programs
were not easily portable.

Assembly languages were designed to run in a small amount of RAM. Furthermore, they are low-level
languages; that is the instructions directly manipulate the hardware. Therefore, programs written in
assembly language execute efficiently and quickly. As a result, more systems software is still written
using assembly languages.

The language has a one to one mapping with machine instructions but has macros added to it. A macro
is a group of multiple machine instructions, which are considered as one instruction in assembly
language. A macro performs a specific task, for example adding, subtracting etc. A one to one mapping
means that for every assembly instruction there is a corresponding single or multiple instructions in
machine language.
An assembler is used to translate the assembly language statements into machine language.
Advantages of assembly language include:
 The symbolic programming of Assembly Language is easier to understand and saves a lot of
time and effort of the programmer.
 It is easier to correct errors and modify program instructions.
 Assembly language has almost the same efficiency of execution as machine language,
because there is a one-to-one translation between an assembly language program and its
corresponding machine language program.

Disadvantages of assembly language include:


 One of the major disadvantages is that assembly language is machine dependent. A program
written for one computer might not run in other computers with different hardware
configuration.
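The one-to-one translation from mnemonics to machine codes is exactly what an assembler automates. A toy sketch (the opcode table is invented for an imaginary CPU):

```python
# Hypothetical opcode table for an imaginary CPU: each mnemonic maps to
# exactly one machine-language bit pattern (the one-to-one translation).
OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

def assemble(program):
    machine_code = []
    for line in program:
        mnemonic, operand = line.split()
        # Translate the symbolic name and the decimal operand into binary.
        machine_code.append(OPCODES[mnemonic] + format(int(operand), "04b"))
    return machine_code

print(assemble(["LOAD 1", "ADD 2", "STORE 3"]))
# ['00010001', '00100010', '00110011']
```

Machine dependence is visible here too: the opcode table belongs to one CPU, so the same source would assemble differently for different hardware.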


High-level languages (3rd generation languages)


Third generation languages are easier to learn and use than were earlier generations. Thus programmers
are more productive when using third generation languages. For most applications, this increased
productivity compensates for the decrease in speed and efficiency of the resulting programs.
Furthermore, programs written in third generation languages are portable; that is, a program written to
run on a particular type of computer can be run with little or no modification on another type of
computer. Portability is possible because third generation languages are “high-level languages”; that
is instructions do not directly manipulate the computer hardware.

Third generation languages are sometimes referred to as "procedural" languages, since program
instructions must still give the computer detailed instructions of how to reach the desired result.
High-level languages incorporate greater use of symbolic code. Their statements are more English-like,
for example print, get, while. They are easier to learn, but the resulting programs are slower in
execution. Examples include Basic, Cobol, C and Fortran. They must first be compiled (translated
into corresponding machine language statements) through the use of compilers.

Advantages of high level languages include:


 Higher-level languages have a major advantage over machine and assembly languages: they
are easy to learn and use.
 They are portable.

Fourth Generation Languages (4GLs)


Fourth generation languages are even easier to use, and more English-like, than are third generation
languages. Fourth generation languages are sometimes referred to as “non-procedural”, since
programs tell the computer what it needs to accomplish, but do not provide detailed instructions as to
how it should accomplish it. Since fourth generation languages concentrate on the output, not
procedural details, they are more easily used by people who are not computer specialists, that is, by
end users.

Many of the first fourth generation languages were connected with particular database management
systems. These languages were called query languages since they allow people to retrieve information
from databases. Structured query language, SQL, is a current fourth generation language used to access
many databases. There are also some statistical fourth generation languages, such as SAS or SPSS.
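SQL's non-procedural style, stating what to retrieve rather than how, can be tried with Python's built-in `sqlite3` module (the table and data are invented for illustration):

```python
import sqlite3

# A small in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [("Otieno", "Accounts"), ("Njeri", "IT"),
                  ("Kamau", "Accounts")])

# The SQL statement says WHAT to retrieve; the engine decides HOW.
rows = conn.execute(
    "SELECT name FROM staff WHERE dept = 'Accounts' ORDER BY name").fetchall()
print([name for (name,) in rows])  # ['Kamau', 'Otieno']
```

Nothing in the query mentions loops, indexes or search order; those procedural details are left entirely to the database engine.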

Some fourth generation languages, such as Visual C++, Visual Basic, or PowerBuilder are targeted to
more knowledgeable users, since they are more complex to use. Visual programming languages, such
as visual basic, use windows, icons, and pull down menus to make programming easier and more
intuitive.

Object Oriented Programming


First, second, third and fourth generation programming languages were used to construct programs that
contained procedures to perform operations, such as draw or display, on data elements defined in a
file.


Object oriented programs consist of objects, such as a time card, that include descriptions of the data
relevant to the object, as well as the operations that can be done on that data. For example, included in
the time card object, would be descriptions of such data such as employee name, hourly rate, start time,
end time, and so on. The time card object would also contain descriptions of such operations as
calculate total hours worked or calculate total pay.
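The time-card object described above can be sketched as a class that bundles the data with the operations on it (the field names and figures are assumptions for illustration):

```python
class TimeCard:
    # Data relevant to the object...
    def __init__(self, employee, hourly_rate, start_hour, end_hour):
        self.employee = employee
        self.hourly_rate = hourly_rate
        self.start_hour = start_hour
        self.end_hour = end_hour

    # ...bundled with the operations that act on that data.
    def total_hours(self):
        return self.end_hour - self.start_hour

    def total_pay(self):
        return self.total_hours() * self.hourly_rate

card = TimeCard("A. Otieno", hourly_rate=500, start_hour=8, end_hour=17)
print(card.total_hours(), card.total_pay())  # 9 4500
```

The key design point is encapsulation: the pay calculation lives with the data it needs, instead of in a separate procedure operating on a file.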

Language translators
Although machine language is the only language the CPU understands, it is rarely used anymore since
it is so difficult to use. Every program that is not written in machine language must be translated into
machine language before it can be executed. This is done by a category of system software called
language translation software. These are programs that convert the code originally written by the
programmer, called source code, into its equivalent machine language program, called object code.
There are two main types of language translators: interpreters and compilers.

Interpreters
While a program is running, interpreters read, translate, and execute one statement of the program at a
time. The interpreter displays any errors immediately on the monitor. Interpreters are very useful for
people learning how to program or debugging a program. However, the line-by-line translation adds
significant overhead to the program execution time leading to slow execution.

Compilers
A compiler uses a language translation program that converts the entire source program into object
code, known as an object module, at one time. The object module is stored and it is the object module
that executes when the program runs. The program does not have to be compiled again until changes
are made in the source code.
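The contrast can be felt with a toy source language: an interpreter translates and executes one statement at a time, while a compiler translates the whole program once into an object module that is what actually runs (a sketch only; the "language" here is invented):

```python
# A toy source language: every statement is "PRINT <number>".
source = ["PRINT 1", "PRINT 2", "PRINT 3"]

def interpret(program):
    # Interpreter: translate AND execute one statement at a time.
    results = []
    for line in program:
        _, arg = line.split()          # translate this one statement...
        results.append(int(arg))       # ...and execute it immediately
    return results

def compile_program(program):
    # Compiler: translate the whole program up front into an "object
    # module" (here just a list of integers) that can be run repeatedly
    # without any re-translation.
    return [int(line.split()[1]) for line in program]

object_module = compile_program(source)
print(interpret(source))   # [1, 2, 3]
print(object_module)       # [1, 2, 3] - same result, translated only once
```

Running the interpreter again repeats the per-line translation cost; running the object module does not, which is why compiled programs execute faster.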

Software trends and issues


Open source software is coming onto the scene. This is software that is freely available to anyone and
can be easily modified. The use of open source software has increased dramatically due to the World
Wide Web; users can download the source code from web sites. Open source software is often more
reliable than commercial software because there are many users collaborating to fix problems. The
biggest problem with open source software is the lack of formal technical support. However, some
companies address this by packaging open source software with various add-ons and selling it with
support. An example of this is the Red Hat Linux operating system.

OVERVIEW OF COMPONENTS OF INFORMATION COMMUNICATION TECHNOLOGY
Information technology has been evolving rapidly during the last half of the 20th century, particularly
since the 1960s and 1970s. The current era has acquired the name "information era".

It has revolutionized the media and modes of computing, storing and communicating information.


Man's infinite capacity for invention and desire for discovery, exploration and research has led to
rapid growth of technologies, and thereby of information technology. The information explosion has
created problems for the proper processing and dissemination of information, which can only be
solved with the aid of this information technology.

ICT facilitates innovation, the free flow of information, creative expression, and effective
management. The use of IT in education has increased tremendously because it provides enhanced
satisfaction, cost effectiveness, faster and simpler programmes, rapid responses and easier operational
procedures.

Information Technology

The term "Information Technology" in English is derived from the French word 'Informatique' and
the Russian 'Informatika', which encompass the notion of information handling. IT is a new science
of collecting, storing, processing and transmitting information.

The term "Information Technology" is a combination of two words: information and technology.
Information means knowledge; it can be a bit, a paragraph or a page. IT is the science of information
handling, particularly the use of computers to support the communication of knowledge in technical,
economic and social fields.

According to the ALA Glossary, Information Technology is the application of computers and other
technologies to the acquisition, organisation, storage, retrieval and dissemination of information.

According to UNESCO, IT comprises the scientific, technological and engineering disciplines and
the management techniques used in information handling and processing; their applications;
computers and their interaction with men and machines; and associated social, economic and cultural
matters.

Components of Information Technology


Technological change is becoming a driving force in our society. Information technology is a generic
term used for a group of technologies. James William (1982) identified the following six major new
technologies as most relevant in modern library and information systems:

 Micro, mini and large-scale computers
 Processors, memory and input/output channels
 Mass storage technologies
 Data communication, networking and distributed processing
 Data entry, display and response
 Software
These technologies can also be grouped into three major areas:

 Computer Technology,
 Communication Technology and
 Reprographic, Micrographic and Printing Technologies
Computer Technology

The widespread use of computer technology has brought dramatic developments in the information
transmission process in every field of human life. Highly sophisticated information services, ranging
from elaborate abstracting and indexing services to computerized databases in almost all scientific
disciplines, are in wide use all over the world.

Current developments in computer technology include minicomputers, microcomputers, personal
computers, portable computers, supercomputers, speaking computers with IQs, seeing robots,
microchip technology, artificial intelligence, software developments, CD-ROM technology,
machine-readable databases, etc.

Communication Technology

Audio Technology

Due to tremendous improvements and inventions, older gramophone records are now dwindling and
much more sophisticated cassettes and tape recorders are emerging. The outmoded AM (Amplitude
Modulation) radio receivers are being replaced by modern FM (Frequency Modulation) receivers.
Thus, the new audio technology can be used in libraries and information centers for a wide variety of
purposes, such as recreation.

Audio-Visual Technology

Motion pictures, television and the videodisc are the main contributions of this technology.

Videodisc is a new medium containing prerecorded information, which allows the user to reproduce this information in the form of images on the screen of a television receiver at will. Videodisc technology offers high-quality storage, image stability and speed of recall.

Facsimile Transmission (Fax)

Facsimile transmission has been boosted by the adoption of methods of data compression made possible by compact, reliable and inexpensive electronics. During the initial stages, the average speed of facsimile transmission was about 3.4 minutes per page. Because this technology was slow, it was replaced by micro-facsimile. Satellite communication and fiber optics have increased the potential of facsimile transmission.

Electronic Mail

E-mail is the electronic transmission and receiving of messages, information, data files, letters or documents by means of point-to-point systems or computer-based message systems.
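As an illustrative sketch, Python's standard email library can be used to assemble such a computer-based message; the addresses and content below are invented for the example, and a mail server would still be needed to actually transmit it.

```python
from email.message import EmailMessage

# Assemble a simple point-to-point message: headers identify the
# endpoints, and the body carries the document being transmitted.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Monthly report"
msg.set_content("The monthly report is attached.")

print(msg["Subject"])  # Monthly report
```

Sending would then be a matter of handing `msg` to an SMTP client connected to a real server.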

Reprographic, Micrographic and Printing Technologies

The technology of reprography made a big impact on the document delivery system. Most of the
research libraries have reprographic machines and provide photocopy of any document on demand.
Using reprographic and micrographic techniques, we can condense the bulky archives and newspapers
and solve the storage problems. They also serve the purpose of preservation, help in resource sharing and save the time of users.


Microforms

Microform is a term for all types of micro-documents, whether transparent or opaque, in roll or sheet form. The varieties of microforms are microfilm, microfiche, ultrafiche, micro-opaques, cards, and computer output microfilm/microfiche (COM).

Roll-film (microfilm)

It is a continuous strip of film with images arranged in sequence. It is available in 100-foot rolls of 35 mm width.

Microfiche

It is a flat film holding a large number of images arranged in rows and columns. A standard-sized microfiche of 4 x 6 inches accommodates 98 pages.

Printing Technology

Thousands of years ago, people recognized the necessity of keeping records of their daily activities. Paper was invented, and the arts of writing and record keeping developed. At present, lasers and computers have entered the field of printing.

Computer printers fall into three categories:

a) line printers,
b) dot matrix printers, and
c) laser printers.
Laser printers are the most popular today.

Conclusion

New information technology will enable information services to carry out consolidation and synthesis of scientific information on a very large scale. Given the tremendous advantages and advancement of information technology, it can be concluded that digital learning will give learners a new learning experience. No doubt ICT will supplement the traditional educational system, but it will not replace it.

INFORMATION COMMUNICATION TECHNOLOGY PERSONNEL


AND INFORMATION COMMUNICATION STRUCTURE

Information Communication Technology Personnel


ICT (information and communications technology - or technologies) is an umbrella term that includes
any communication device or application, encompassing: radio, television, cellular phones, computer
and network hardware and software, satellite systems and so on, as well as the various services and
applications associated with them, such as videoconferencing and distance learning. ICTs are often
spoken of in a particular context, such as ICTs in education, health care, or libraries. The term is
somewhat more common outside of the United States.


According to the European Commission, the importance of ICTs lies less in the technology itself than
in its ability to create greater access to information and communication in underserved populations.
Many countries around the world have established organizations for the promotion of ICTs, because it
is feared that unless less technologically advanced areas have a chance to catch up, the increasing
technological advances in developed nations will only serve to exacerbate the already-existing
economic gap between technological "have" and "have not" areas. Internationally, the United Nations
actively promotes ICTs for Development (ICT4D) as a means of bridging the digital divide.

ICT has become an integral and accepted part of everyday life for many people. ICT is increasing in
importance in people’s lives and it is expected that this trend will continue, to the extent that ICT
literacy will become a functional requirement for people’s work, social, and personal lives.

ICT includes the range of hardware and software devices and programmes such as personal
computers, assistive technology, scanners, digital cameras, multimedia programmes, image editing
software, database and spreadsheet programmes. It also includes the communications equipment
through which people seek and access information including the Internet, email and video
conferencing.

The use of ICT in appropriate contexts in education can add value in teaching and learning, by
enhancing the effectiveness of learning, or by adding a dimension to learning that was not previously
available. ICT may also be a significant motivational factor in students’ learning, and can support
students’ engagement with collaborative learning.

ICT staff are responsible for the development, management and support of the ICT infrastructure in the organisation, including the internal and external electronic communication networks:

a. wide area networks (WANs) and local area networks (LANs) that link the operational systems within healthcare organisations
b. the hardware, e.g. desktop computers and printers
c. software systems, e.g. email systems, applications and systems used for pathology reports and patient administration.

Some of the functions of ICT department include:


a) Development, on-going operation and maintenance of information systems
b) Advisor to ICT users throughout the organisation
c) Catalyst for improving operations through system enhancements/ new systems development
d) Co-ordinating systems integration in the organisation.
e) Establishing standards, policy, and procedures relating to ICT.
f) Evaluating and selecting hardware and software
g) Co-ordinating end-user education.

Officers in ICT department include:


 IT Manager/Director
 Systems analysts


 Programmers – systems and applications
 Database administrator
 Network administrator
 Librarian
 Support staff – hardware and software technicians
 Data entry clerks

The number of people working in the ICT department and what they do will depend on:
 The size of the computing facility. Larger computers are operated on a shift work basis.
 The nature of the work. Batch processing systems tend to require more staff.
 Whether a network is involved. This requires additional staff.
 How much software development and maintenance is done in-house rather than sourced externally.

The information technology staff may be categorized into various sections whose managers are
answerable to the information technology manager. The responsibilities of the information technology
manager include:
 Giving advice to managers on all issues concerning the information technology department.
 Determining the long-term IT policy and plans of the organization.
 Liaison with external parties like auditors and suppliers.
 Setting budgets and deadlines.
 Selecting and promoting IT staff.


Structure of ICT Department

ICT Director
  Manager
     System Development: analyst, programmer
     Operational Support: data clerk, librarian
     System Support: network administrator, database administrator

Functional Structure for Information Services Department

Management of Information Services
  Systems Development: systems analysis & design; application program development; development center
  Operations Services: computer operations and data center; data entry; production control and support
  Technical Services: user services and information center; network management; technology management; capacity management; other support


The sections that make up the ICT department and their functions are discussed below:

1) Development section
System Analysis Functions include:
 System investigations.
 System design.
 System testing.
 System implementation.
 System maintenance.

Programming Functions include:


 Writing programs.
 Testing programs.
 Maintenance of programs.

System programmers write and maintain system software. Application programmers write programs or customize software to carry out specific tasks.

2) Operations section
Duties include:
 Planning procedures, schedules and staff timetables.
 Contingency planning.
 Supervision and coordination of data collection, preparation, control and computer room
operations.
 Liaison with the IT manager and system development manager.

The operations section also does:

a) Data preparation
Data preparation staff are responsible for converting data from source documents into computer-sensible form.

Duties are:
 Correctly entering data from source documents and forms.
 Keeping a record of data handled.
 Reporting problems with data or equipment.

b) Data control
Data control staff are generally clerks. Duties include:
 Receiving incoming work on time.
 Checking and logging incoming work before passing it to the data preparation staff.
 Dealing with errors and queries on processing.
 Checking and distributing output.


Computer room manager


Duties include:
 Control of work progress as per targets.
 Monitoring machine usage.
 Arranging for maintenance and repairs.
Computer operators

Computer operators control and operate the hardware in the computer room.

Duties include:

• Starting up equipment.
• Running programs.
• Loading peripherals with appropriate media.
• Cleaning and simple maintenance.

Files librarian

The files librarian keeps all files organized and up to date. Typical duties are:

• Keeping records of files and their use.


• Issuing files for authorized use.
• Storing files securely.

3) System Support Section

This section is charged with responsibility for database and network management.

Database management

The database administrator is responsible for database management: the planning, organization and control of the database. His functions include:

• Coordinating database design.


• Controlling access to the database for security and privacy.
• Establishing back-up and recovery procedures.
• Controlling changes to the database.
• Selecting and maintaining database software.
• Meeting with users to resolve problems and determine changing requirements.

Network management

The network administrator/controller/manager is responsible for network management. Functions


include:

• Assignment of user rights.


• Creating and deleting of users.


• Training of users.
• Conflict resolution.
• Advising managers on planning and acquisition of communication equipment.

Evaluating effectiveness and efficiency of ICT departments

It is important to measure how a system, organization or department performs, mainly its efficiency and effectiveness.

Efficiency is the ratio of what is produced to what is consumed and ranges from 0 to 100%; effectiveness is the extent to which a system achieves its intended goals. Systems can be compared by how efficient they are.
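As a simple illustration, the efficiency ratio can be computed directly; the figures below are invented for the sketch.

```python
def efficiency(output_units: float, input_units: float) -> float:
    """Efficiency as the ratio of what is produced to what is consumed,
    expressed as a percentage in the range 0-100."""
    if input_units <= 0:
        raise ValueError("input must be positive")
    return min(100.0, 100.0 * output_units / input_units)

# e.g. a help desk that resolves 45 tickets against a budget of 60 staff-hours
print(efficiency(45, 60))  # 75.0
```

Two departments can then be compared by computing this ratio for each over the same period.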

Information Communication Structure


The way businesses convey information, whether internally or externally, varies from one company to
another. Sometimes it is simply a reflection of the way a company is organized or due to its industry.
For example, information flows in a small business far differently than in a major corporation.
Nevertheless, communication in any organization will flow in several recognized ways.

a. Independent
The independent structure has no established standards of communication. It is considered flexible
and a product of individual activity. Thus, it is the mode for professionals who own their own offices
and function as their own entities, like attorneys and physicians. As a result, communication is viewed
in a more fragmented way. In the business world, this structure is almost exclusively confined to
independent professionals.

b. Matrix
The matrix structure of a business is based on group work within departments. In other words, each
department is assigned a task, and that department is responsible for completing that task. The result
is a form of business communication that also tends to be somewhat fragmented, but only if the departments fail to communicate with one another. Within the departments, communication is more effective, because the task at hand requires keeping one another informed.

c. Entrepreneurial
The entrepreneurial business structure is most common within small businesses. Here, leadership
(whether one or more) communicates decisions to individual employees. Consequently, results are
achieved more quickly because decision makers readily convey their decisions to the employees
responsible for carrying them out.

d. Pyramid
The pyramid structure is seen most frequently in large companies with multiple departments. The
decisions of company heads are passed down through the chain of command: to department heads,
supervisors, managers, and so forth. Inversely, information about company activities often flows from
employees up through managers, supervisors, department heads and to company heads.


e. Communication Channels
Regardless of the type of business structure, all information passed from one person to another
follows some type of communication channel. Communication channels can be either lateral or vertical. Lateral (horizontal) communication is within a department or between departments among employees of the same level; vertical communication flows from one level of an organization to another. The organization's overall business structure will play a role in the communication channels it develops.

ROLE OF ICT IN BUSINESS ENVIRONMENT


From the market cannibalization of old media by new media through to the deployment of Radio Frequency Identification (RFID) tagging in aircraft maintenance, businesses know that Information Communication Technology (ICT) can transform operations or make them obsolete. The challenges of adopting ICT include sustaining current operations, overcoming incumbency, market dynamics, risk management and funding the transition.

From the implementation of mainframes and desktops through to cloud computing and smartphones, business has adapted to changes in Information Communication Technology (ICT). Whilst what a business needs to do changes slowly (the need to be customer-centric and make a profit), how a business operates (the use of ICT to better service customers) has brought significant, rapid change. It is the change in how a business operates, including ICT, that allows a business to remain competitive.

Although ICT has significantly helped businesses to lower costs, improve service, and standardise processes and operations, the adoption of ICT and the resulting business changes have not always been
smooth. Some businesses have failed to make changes, others have missed opportunities, and others
are reluctant to change due to risk and/or the need to overcome incumbency. The business change
around the adoption of ICT starts with an appreciation of the business impacts of changes in ICT.

ICT is Business
Irrespective of an individual technology or changes in a technology, common requirements for ICT
within the business environment include:

 ICT is not an adjunct to business: ICT is business;
 ICT is present at the business table;
 ICT is managed and operated as a utility infrastructure to service needs;
 ICT is the assembly line for knowledge workers;
 ICT shows the business the opportunities, markets and transformation that ICT brings;
 ICT provides the knowledge utility for real-time decision making to support business.

Command & Control

Changes in ICT, the availability of information and the speed with which decisions need to be made are changing the command and control structure within businesses. Even if decision makers had all of the information needed at the right time, they would struggle to find the time to make all of the decisions. The emerging trend is to use ICT to allow for decentralised decision making within frameworks for delivery. The changes in ICT are driving empowerment and problem
solving at source. Such changes place a premium on strategy and planning, with a culture of
empowerment to manage outcomes and behaviours. Underpinning such a structure are distributed
operations with the ability to adapt to changes, to self-heal and create an emergent behaviour.
Changes include:

a. People – Leaders with visions and strategy and the ability to implement and manage such
environments. The assurance to support empowered operations is required, together with
decision making at source. The required strategies, communication and skilling of staff to
work within such structures are necessary.
b. Process – Adoption of distributed operation business models and the use of frameworks and
tools such as enterprise risk management to ensure delivery.
c. Information – Access to information is key to success, with knowledge being a utility that
underpins business.

Transaction Processing

As more transactions are processed by ICT without intervention, the skill set required is changing.
Proactive problem solvers are required when things go wrong and to manage exceptions, and to
engage with customers to manage expectations. With routine transactions processed by ICT, more
skilled resources with excellent communication skills and increasing specialisation are required to
address complicated and high worth transactions. A veneer of generalists to work across the resulting
silos is also required. Changes include:

a. People – More skilled resources with critical thinking and proactive problem solving are
required. A premium is placed on the professional or soft skills.
b. Process – Successful processes are engineered from the customer view to deliver outcomes and
work across the silos of a business.
c. Information – Access to information in context integrated with work-flow is required.

Collaboration

Meeting customer needs and delivery of outcomes increasingly requires collaboration across
interacting dependencies. Permanent staff, casuals, contractors, out-sourcers, and off-shore resources
are increasingly coming together to work across the globe in collaborative teams to address issues as
they arise. The freeing up of staff from routine transaction processing further reinforces the project
nature of roles. Changes include:

a. People – Such environments place a premium on effective communication, coordination and


organisational skills and the ability to operate to strategy.
b. Process – Such environments require management that allows for agility and adaptation, and the use of process to deliver outcomes without process being an end in itself.
c. Information – Integrated communication and knowledge sharing is required in such
environments.


Changing Markets

The increasing use of ICT means that products come to market faster, with a decreasing time in the
market with offerings being more easily copied and innovated. Changes to the business model like
the use of the “value of free” or the use of “how to” are being accommodated. Revision of the sales
process to include webinars and podcasts, the need for sticky messages, and the role of sales as
the trusted adviser in an ocean of choice (solution selling) are all impacting businesses. Changes
include:

a. People
Ability to respond to change and challenges is required, together with the ability to listen and problem
solve. The empowerment of an educated and skilled workforce that is trusted to deliver in such an
environment is required.

b. Process
Within dynamic markets, processes need to respond and accommodate change whilst assuring
delivery.

c. Information
The cross-silo management of knowledge is required.

Creativeness, Conversations & Confidence


Changes in ICT create a business environment of global reach with local service. Access to information across devices and channels is required, and customer service is about having
conversations with customers to solve problems. The fostering and nurturing of analytical thinking
and creativity and innovation is required, with a willingness to respond quickly to mistakes and
failures. Changes include:

a. People – Ability to work across channels where and when the opportunity presents is
required. Flexibility and professionalism of skilled resources allowing for critical thinking
and innovation ensures delivery.
b. Process – The ability to deliver across channels and devices is necessary, with a tight
integration of information to process.
c. Information – Access to information to facilitate conversation and interaction is required.

INFORMATION CENTERS
ICT has shaped the global arena and revolutionized the way we transact business at the local and international level. Competition for business has become cut-throat; our customers are much more informed and their level of expectation is very high; government funding has dwindled over the last few years; the environment has become very fluid and dynamic. In the current setup, only those businesses that are innovative, dynamic and technology-savvy have a chance of surviving the complexity and strong turbulence. Quality has become a key concern for organizations and customers, and this cannot be gainsaid.


In realization of these global developments and related initiatives in this key area of technology, businesses have embraced ICT as a driving force for the attainment of their goals and objectives. Beyond the enviable achievements in expanding ICT infrastructure and providing more access, technology has enabled businesses to participate effectively in the global arena.

An information center is a "center designed specifically for storing, processing, and retrieving information for dissemination at regular intervals, on demand or selectively, according to the express needs of users".

10-Step Guide to Promoting the Information Center


Promoting the Information Center to an internal audience is often left by the wayside. The pressures
of daily work may mean there is little time to plan a campaign for advertising its services. As the
Information Center continues to play an increasingly strategic role in the organisation's activities,
helping users understand what the Information Center can do for them is crucial.

Ten simple steps for success:

1. Where are you now?

Before any campaign can be developed, you need to look at the current situation. What perceptions
exist? Are they fair? Are there misconceptions which need to be overcome?

Measuring the perceived value of the Information Center will help you plan your strategy. You will
know what you want to achieve through the promotional activities, and how to measure their success.

There are three main ways you can obtain this information:

 Interviews with staff who deal with the Information Center


 Company-wide questionnaire
 General background research

2. Identify audiences

It is likely that the Information Center will have different target audiences, with different
expectations, information needs and perceptions. These should be segmented into identifiable groups
so you can communicate more effectively with them. These groups may be split by function, e.g.
marketing, sales, finance, human resources etc., according to those who are familiar with the
Information Centre, and use its services a great deal, through to late adopters, who are not yet aware
of what it can do for them.

3. Set objectives

Once you have identified the current perceptions, you will know what you want to achieve. Setting
objectives will lay down clear goals which can be used in the future to assess the success of your
campaign. Objectives might include:

 Raise general awareness of activities


 Communicate the availability of resources


 Communicate personalities and contact points
 Promote successes and new information available.
 Generate new business for the Information Centre

4. Set tactics

Consideration needs to be given as to how you will actually achieve these objectives. There are a
number of tactics which can be used, and which may vary depending on the audiences you have
identified. These could include:

 Advertising
This may be most effective for new recruits, or those who rarely use the Information Center.
Induction visits, electronic bulletin boards, email, brochures and flyers could all be used to advertise
the Information Center's services.

 Demonstrations
Seminars, quarterly briefings, and workshops are all easy, interactive promotional tools. These offer
good opportunities to deepen the knowledge users have about the Information Center. Guest speakers
in particular topic areas will enable you to showcase an example of how you have assisted them with
a particular business project.
 Newsletters
Newsletters featuring information updates, new research findings and case studies can all be used to
push out positive messages.

 Alerting
Alerts on users' systems could be set up to bring new information to their attention in real time. This
could be done by function or department, so that numerous alerts don't have to be set up all around the
organisation.
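A minimal sketch of such alerting is a publish/subscribe arrangement: each department subscribes once to its topic, and any new information published to that topic reaches its subscribers as it arrives. The department name and message below are invented for the example.

```python
from collections import defaultdict

# topic -> list of handler callables; each handler is one subscriber
subscribers = defaultdict(list)

def subscribe(topic, handler):
    """Register a department's handler for a topic (done once)."""
    subscribers[topic].append(handler)

def publish(topic, message):
    """Push a new piece of information to every subscriber of the topic."""
    for handler in subscribers[topic]:
        handler(message)

received = []
subscribe("finance", received.append)   # the finance department's alert inbox
publish("finance", "New market research report available")
print(received)  # ['New market research report available']
```

Subscribing by department or function, as the text suggests, keeps the number of alert registrations small.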
5. Set timescales

Consider over what timescales you will run your first campaign. It is important not to expect results
overnight. Set a period of time for each stage of the plan, and then the time over which you will begin
and spread your activity.

6. Evaluate criteria and gain feedback

Decide how you will measure the success of your campaign. A follow-up survey after the first
quarter, and then every six months will help you know if you are nearing your goal. As well as the
surveys, be sure to include opportunity for brainstorming, round table discussions and debate in your
workshops and seminars. This will throw up new ideas about how the Information Center and users
can work better together, what new information needs users may have, and how they can help you. If
the internal events programme is a great success, it may be worth setting up a user committee which
meets on a regular basis.

7. Communicate success


Be positive. If the Information Center has completed any successful projects, gained some new and
exclusive information, or taken on a new recruit, this should be communicated. Your aim should be to
keep what you do, who does it and with what results at the forefront of users' minds. Take advantage
of monthly newsletters, a bulletin board on an intranet site, or speaker opportunities at company
events.

8. Develop new tools

Look at new ways of researching and delivering information to help you develop a better service and
become known as a centre of excellence. Keep investigating new technologies or emerging
approaches to research and present your findings and innovations at workshops or briefings, to help
raise support for your work and create organisational buy-in to the importance of inward investment.

9. See the other side

Time spent shadowing key personnel in other departments will help you build contacts and
relationships. It will also enable you to better understand the business priorities of your users and their
information needs. In order that this does not prove a costly drain on resources in the information
team, one person could be assigned to shadow another in a particular function, with shadowing done
in rotation.

10. Keep doing it

The key to effective promotion is persistence. A successful campaign is one that develops over time.
It is not a one-off exercise. As the relationship between the Information Center and users matures, the
promotion will become easier and users will approach you with ideas, questions and feedback.

Information Technology as Driving Force for Innovation


The last two decades have seen great strides in information technology. The development of information technology has changed the way business is conducted. One of the striking points about information technology is innovation. Information technology has been a driving force for product, service and process innovation.

Innovation in Last Decades

It has brought forward capabilities which previously were considered the stuff of science fiction. Information technology has supported the miniaturization of electronic circuits, making many products portable, for example computers, phones, etc. Information technology has helped development in communication technology by making it affordable. The penetration rate of mobile phones is higher than ever before, with greater coverage and ever-lowering cost.

The concept of big data has become reality, with development of high memory storage devices.

Function of Information Technology

Information technology is a network of devices, connected with each other, which process data into useful and meaningful information. Information technology therefore has six broad functions around which innovation is driven. The six broad functions are as follows:


a) Capture
It is defined as a process to obtain information in a form which can be further manipulated. This input
of information may be through keyboard, mouse, picture, etc.

b) Transmit
It is defined as a process through which captured information is sent from one system to another. This system could be within the same geographical boundary or otherwise. For example, radio, TV, email, telephone, fax, etc.

c) Store
It is defined as a process through which captured information is kept in a safe and secure manner and can be accessed later when required, for example on a hard disk, USB drive, etc.

d) Retrieval
It is defined as a process through which stored information can be called upon when required. For
example, RAM, hard disk, USB, etc.

e) Manipulation
It is defined as a process through which captured and stored information can be transformed. This transformation could be the arrangement of data, calculation, presentation, etc., carried out, for example, by computer software.

f) Display
It is defined as a process of projecting the information. For example, computer screen, printer, etc.
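The six functions can be sketched together as a toy pipeline in Python; the record format, names and values below are invented purely for illustration.

```python
import json

def capture(raw: str) -> dict:
    """Capture: obtain information in a form that can be manipulated."""
    name, score = raw.split(",")
    return {"name": name, "score": int(score)}

def transmit(record: dict) -> str:
    """Transmit: serialise the record for sending between systems."""
    return json.dumps(record)

store = {}  # Store/Retrieve: a dictionary standing in for a disk

def manipulate(record: dict) -> dict:
    """Manipulation: transform the retrieved information."""
    record["grade"] = "pass" if record["score"] >= 50 else "fail"
    return record

def display(record: dict) -> str:
    """Display: project the information for the user."""
    return f"{record['name']}: {record['grade']}"

record = capture("Alice,72")               # capture from "input"
store["alice"] = transmit(record)          # store the transmitted form
retrieved = json.loads(store["alice"])     # retrieval
print(display(manipulate(retrieved)))      # Alice: pass
```

Each step corresponds to one of the six functions above, showing how they chain data into meaningful output.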

Innovation and Information Technology


The last two decades of development and evolution in information technology have centred on these six functions, and the innovation driven by information technology has been their by-product. Some of the significant developments achieved are as follows:

a) Portability
Advances in information technology have made portability of all electronic gadgets possible.

b) Speed
Computing is now done at the speed at which earlier generations of supercomputers worked.

c) Miniaturization
Another innovation is in the form of hand-held computing devices and information systems, like GPS units, smartphones, iPads, etc.

d) Connectivity
Information technology has transformed communication capability.

e) Entertainment
Proliferation of multimedia and digital information has been tremendous.


f) User Interface
Advancement in information technology has changed the way users interact with computing devices.
The advent of the touch screen has made computing intuitive and interactive.
From the above cases there can be no doubt that information technology is the driving force behind
today's innovation.

Terminology
Multiprogramming
Multiprogramming is a rudimentary form of parallel processing in which several programs are run at
the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous
execution of different programs. Instead, the operating system executes part of one program, then part
of another, and so on. To the user it appears that all programs are executing at the same time.

Multiprocessing
Multiprocessing is the coordinated (simultaneous execution) processing of programs by more than one
computer processor. Multiprocessing is a general term that can mean the dynamic assignment of a
program to one of two or more computers working in tandem or can involve multiple computers
working on the same program at the same time (in parallel).

Multitasking
In a computer operating system, multitasking is allowing a user to perform more than one computer
task (such as the operation of an application program) at a time. The operating system is able to keep
track of where you are in these tasks and go from one to the other without losing information. Microsoft
Windows 2000, IBM's OS/390, and Linux are examples of operating systems that can do multitasking
(almost all of today's operating systems can). When you open your Web browser and then open a word
processor at the same time, you are causing the operating system to do multitasking.

Multithreading
It is easy to confuse multithreading with multitasking or multiprogramming, which are somewhat
different ideas.

Multithreading is the ability of a program or an operating system process to manage its use by more
than one user at a time, and even to manage multiple requests by the same user, without having to run
multiple copies of the program on the computer.
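As an illustration of the distinction, the following sketch shows one program handling several concurrent requests with threads rather than with multiple copies of itself. The worker function and the request list are hypothetical.

```python
# Minimal multithreading sketch: one program serves several "users" at once.
# The lock protects the shared results dict from concurrent writes.
import threading

results = {}
lock = threading.Lock()

def handle_request(user, n):
    # each thread computes a sum for its user, simulating independent work
    total = sum(range(n))
    with lock:
        results[user] = total

threads = [threading.Thread(target=handle_request, args=(u, n))
           for u, n in [("alice", 10), ("bob", 100), ("carol", 1000)]]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # all three requests handled by a single program
```

Note that in CPython the global interpreter lock limits true parallelism for CPU-bound work; the point here is only the single-program, multiple-requests structure.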

REVISION EXERCISES
1. Explain an overview of computer systems
2. Discuss in detail computer structures
3. Discuss the function of a control unit
4. Differentiate between data and information
5. What are the features of the Random Access Memory and Read Only Memory?


6. Provide a brief description of a computer system.


7. List the most important secondary storage media. What are the strengths and limitations of
each?
8. Discuss the various types of application that run on the desktop of a computer.
9. What are the major types of software? How do they differ in terms of users and uses?
10. Discuss the different types of computer generation
11. Discuss how virtual memory concept is implemented indicating its key objective.
12. A software engineer requires a range of software utilities. Explain the usefulness of any three
such utilities.
13. Discuss the various inputs and output devices.
14. Distinguish between serial, parallel and massively parallel processing
15. Explain the backing storage devices that a computer users may use.
16. Discuss the various components of information technology
17. Discuss in detail an effective information communication structure
18. What are some of the roles ICT in business.
19. What is an information center and how can be an effective information center be created.
20. Discuss the relationship between Information technology and innovation
21. What are the four different types of semiconductor memory and where are they used?

22. Describe multiprogramming, virtual storage, time-sharing, and multiprocessing. Why are they
important for the operation of an information system?
23. Explain two major advantages of multiprogramming.


CHAPTER 2
INTRODUCTION TO SYSTEMS DEVELOPMENT
SYNOPSIS
Introduction……………………………………………………. 58
Role Of Management in Systems Development……………….. 59
Systems Development Approach………………………………. 60
Systems Development Life Cycle……………………………… 64
Rapid Applications Development……………………………… 75
Business Process Re-Engineering……………………………… 77
Systems Development Constraints…………………………….. 80

INTRODUCTION
Software systems are developed in order to support the activities that occur in some (class of)
business domain(s). As a direct consequence, concepts from the business domain(s) are bound to play
an important role in the deliverables that are produced in the course of system development, such as
requirements and design documents, the constructed system, as well as the manuals for using the
system. When, for instance, developing a software system to assist in the handling of claims in the
context of a health insurance company, concepts such as “claim”, “treatment”, “processing of claims”,
“policy”, etc., are bound to play a crucial role. During system development, requirements on the
software system are likely to be expressed in terms of these concepts, while the design of the system
is bound to comprise a class or entity type “claim” and “policy” and some activity/process “claim
processing”. Needless to say that these concepts will even be reflected in the (user) manuals of the
system.

The concepts in the business domain are not the only concepts that play a role during system
development. The software system will be implemented using several forms of technologies and pre-
existing infrastructures. This gives rise to an additional class of concepts: the concepts from the
implementation domain. These concepts deal with the mapping of the concepts from the business
domain to the technological infrastructure underlying the software system. Examples of such concepts
would be: “claim queue handler”, “claim scheduler”, etc. Some of the concepts in the implementation
domain are likely to be application dependent while others will be of a more infrastructural/generic
nature. In this article we mainly focus on concepts that are native to the business domain.

In sum, one could state that during system development, a lot of “concept handling” occurs. At times
we may engage in it without explicitly realizing we do so. Concept handling may occur while eliciting
software needs, or during the design and realization of the implementation of the system and its documentation. Business
domain concepts are introduced, evolved and retired for different reasons. Initially they are introduced
with the aim of scoping and understanding the business domain for which the software system is to be
built.


During requirements engineering as well as the design and realization of the system, additional
insights may be gained into the structure and nature of the business domain. These insights are bound
to lead to the evolution of the concepts used thus far.

It is our belief that one should not just handle concepts, but rather consciously manage them. We
regard the proper management of concepts during system development as an essential cornerstone for
the development of systems that indeed fit the needs of the business domain. With the notion of
concept management we refer to: the deliberate activity of introducing, evolving and retiring
concepts, where deliberate hints at the goal-driven nature of the management of the concepts.

ROLE OF MANAGEMENT IN SYSTEM DEVELOPMENT


It is management's responsibility to ensure that systems thinking is utilized throughout the systems
development process.

Management complements the SDLC when it comes to project quality. It provides a method of
managing these unique project efforts, which increases the odds of attaining cost, schedule and quality
goals.

The primary benefits of a good management process will:

i. Provide consistency of success with regard to time, cost, and quality objectives
ii. Ensure customer expectations are met
iii. Collect historical information/data for future use
iv. Provide a method of thought for ensuring all requirements are addressed through a
comprehensive work definition process
v. Reduce Risks associated with the project
vi. Minimize scope creep by providing a process for managing changes
Without strong management support, adverse circumstances can undermine our ability to satisfy the
customer and to meet our project and product objectives. Management that is willing to intervene,
when asked to, will further increase the probability of successfully delivering a quality product.

The Role of Management Information Systems in Decision Making


Management information systems help people make and share important business decisions.
Management information systems combine hardware, software and network products in an integrated
solution that provides managers with data in a format suitable for analysis, monitoring, decision-
making and reporting. The system collects data, stores it in a database and makes it available to users
over a secure network.

a. Information Access
Managers need rapid access to information to make decisions about strategic, financial, marketing and
operational issues. Companies collect vast amounts of information, including customer records, sales
data, market research, financial records, manufacturing and inventory data, and human resource
records. However, much of that information is held in separate departmental databases, making it
difficult for decision makers to access data quickly. A management information system simplifies and
speeds up information retrieval by storing data in a central location that is accessible via a network.
The result is decisions that are quicker and more accurate.


b. Data Collection
Management information systems bring together data from inside and outside the organization. By
setting up a network that links a central database to retail outlets, distributors and members of a
supply chain, companies can collect sales and production data daily, or more frequently, and make
decisions based on the latest information.

c. Collaboration
In situations where decision-making involves groups, as well as individuals, management information
systems make it easy for teams to make collaborative decisions. In a project team, for example,
management information systems enable all members to access the same essential data, even if they
are working in different locations.

d. Interpretation
Management information systems help decision-makers understand the implications of their
decisions. The systems collate raw data into reports in a format that enables decision-makers to
quickly identify patterns and trends that would not have been obvious in the raw data. Decision-
makers can also use management information systems to understand the potential effect of change. A
sales manager, for example, can make predictions about the effect of a price change on sales by
running simulations within the system and asking a number of “what if the price was” questions.
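A toy version of such a “what if the price was” simulation might look like the following. The linear demand model and the elasticity figure of -1.5 are illustrative assumptions, not real market data or any particular MIS product's method.

```python
# Toy what-if price simulation of the kind an MIS reporting tool might run.
# The demand model and the elasticity value are illustrative assumptions.

def projected_units(base_units, base_price, new_price, elasticity=-1.5):
    """Linear-elasticity estimate of unit sales at a new price."""
    pct_price_change = (new_price - base_price) / base_price
    pct_demand_change = elasticity * pct_price_change
    return base_units * (1 + pct_demand_change)

def projected_revenue(base_units, base_price, new_price):
    """Revenue implied by the projected unit sales at the new price."""
    return round(projected_units(base_units, base_price, new_price) * new_price, 2)

# "What if the price was 90, 100, or 110?" starting from 1000 units at 100
for price in (90, 100, 110):
    print(price, projected_revenue(1000, 100, price))
```

Running several candidate prices side by side is exactly the kind of pattern a decision-maker would scan for in the resulting report.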

e. Presentation
The reporting tools within management information systems enable decision-makers to tailor reports
to the information needs of other parties. If a decision requires approval by a senior executive, the
decision-maker can create a brief executive summary for review. If managers want to share the
detailed findings of a report with colleagues, they can create full reports and provide different levels
of supplementary data.

SYSTEM DEVELOPMENT APPROACHES


Every software development methodology approach acts as a basis for applying specific frameworks
to develop and maintain software. Several software development approaches have been used since the
origin of information technology. These are

a) Waterfall: a linear framework


b) Prototyping: an iterative framework
c) Incremental: a combined linear-iterative framework
d) Spiral: a combined linear-iterative framework
e) Rapid application development (RAD): an iterative framework

a. Waterfall development
The waterfall model is a sequential development approach, in which development is seen as flowing
steadily downwards (like a waterfall) through the phases of requirements analysis, design,
implementation, testing (validation), integration, and maintenance. The first formal description of the
method is often cited as an article published by Winston W. Royce in 1970 although Royce did not
use the term "waterfall" in this article.

The basic principles are:



• Project is divided into sequential phases, with some overlap and splash back
acceptable between phases.
• Emphasis is on planning, time schedules, target dates, budgets and implementation of
an entire system at one time.
• Tight control is maintained over the life of the project via extensive written
documentation, formal reviews, and approval/signoff by the user and information
technology management occurring at the end of most phases before beginning the
next phase.

The waterfall model is a traditional engineering approach applied to software engineering. It has been
widely blamed for several large-scale government projects running over budget, over time and
sometimes failing to deliver on requirements due to the Big Design Up Front approach. Except when
contractually required, the Waterfall model has been largely superseded by more flexible and versatile
methodologies developed specifically for software development

b) Prototyping
Prototyping is the process of creating an incomplete model of the future full-featured system, which
can be used to let the users have a first idea of the completed program or allow the clients to evaluate
the program.

The process of prototyping involves the following steps:

i) Identify basic requirements.


ii) Develop initial prototype.
iii) Review: The customers, including end-users, examine the prototype and provide
feedback for additions or changes.
iv) Revise and enhance the prototype: Using the feedback, both the specifications and the
prototype can be improved. If changes are introduced, then a repetition of steps 3 and 4 may
be needed.
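The four steps above can be sketched as an iterative loop. Everything here is hypothetical: `gather_feedback` stands in for real user review sessions, and the feature names are invented for illustration.

```python
# Sketch of the prototyping cycle: build, review, revise, repeat until
# the users have no further feedback. All names are hypothetical.

def build_prototype(requirements):
    """Step 2: develop a (very simplified) prototype from the requirements."""
    return {"features": list(requirements)}

def gather_feedback(prototype):
    """Step 3: review. Here, users ask for 'search' until it appears."""
    return [] if "search" in prototype["features"] else ["search"]

requirements = ["login", "reports"]        # step 1: identify basic requirements
prototype = build_prototype(requirements)  # step 2: initial prototype
while True:
    feedback = gather_feedback(prototype)  # step 3: review
    if not feedback:
        break
    requirements.extend(feedback)          # step 4: revise and enhance
    prototype = build_prototype(requirements)

print(prototype["features"])  # → ['login', 'reports', 'search']
```

The loop terminates only when a review round produces no new requests, mirroring the repetition of steps 3 and 4 described above.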

Types of Prototyping

System prototypes are of various kinds. However, all the methods are in some way based on two
major types of prototyping:

a) Throwaway Prototyping
Throwaway or rapid prototyping refers to the creation of a model that will eventually be discarded
rather than becoming part of the finally delivered system. After preliminary requirements gathering is
accomplished, a simple working model of the system is constructed to visually show the users what
their requirements may look like when they are implemented into a finished system. The most
obvious reason for using throwaway prototyping is that it can be done quickly.

b) Evolutionary Prototyping
Evolutionary prototyping (also known as breadboard prototyping) is quite different from throwaway
prototyping. The main goal when using evolutionary prototyping is to build a very good prototype in a
structured manner so that we can refine it or make further changes to it. The reason for this is that the
evolutionary prototype, when built, forms the heart of the new system, and the improvements and
further requirements will be built on to it. It is not discarded or removed like the throwaway
prototype. When developing a system using evolutionary prototyping, the system is continually
refined and rebuilt.

c) Incremental Prototyping
The final product is built as separate prototypes. At the end the separate prototypes are merged in an
overall design.

Advantages of Prototyping

Prototyping has the following advantages:

i) Reduced Time and Costs

Prototyping can improve the quality of requirements and specifications provided to developers. Early
determination of what the user really wants can result in faster and less expensive software.

ii) Improved and Increased User Involvement


Prototyping requires user involvement and allows them to see and interact with a prototype; allowing
them to provide better and more complete feedback and specifications. Since users know the problem
better than anyone, the final product is more likely to satisfy the users desire for look, feel and
performance.

iii) The designer and implementer can obtain feedback from the users early in the project
development.

iv) The client and the contractor can compare that the developing system matches with the system
specification, according to which the system is built.

v) It also gives the engineer some idea about the accuracy of initial project estimates and whether the
deadlines can be successfully met.

Disadvantages of Prototyping

Some of the disadvantages of prototyping include

i) Insufficient Analysis
Since a model has to be created, developers will not properly analyse the complete project. This may
lead to a poor prototype and a final project that will not satisfy the users.

ii) User Confusion of Prototype and Finished System


Users can begin to think that a prototype, intended to be thrown away, is actually a final system that
merely needs to be finished or polished. Users can also become attached to features that were included
in a prototype for consideration and then removed from the specification for a final system.

iii) Excessive Development Time of the Prototype


A key property to prototyping is the fact that it is supposed to be done quickly. If the developers
forget about this fact, they will develop a prototype that is too complex.


iv) Expense of Implementing Prototyping


The start-up costs of building a development team focused on prototyping may be high. Many
companies have to train the team for this purpose, which incurs extra expense.

c) Incremental development
Various methods are acceptable for combining linear and iterative systems development
methodologies, with the primary objective of each being to reduce inherent project risk by breaking a
project into smaller segments and providing more ease-of-change during the development process.

The basic principles are:

• A series of mini-waterfalls are performed, where all phases of the waterfall are completed for
a small part of a system, before proceeding to the next increment, or
• Overall requirements are defined before proceeding to evolutionary, mini-waterfall
development of individual increments of a system, or
• The initial software concept, requirements analysis, and design of architecture and system
core are defined via waterfall, followed by iterative prototyping, which culminates in
installing the final prototype, a working system.

d) Spiral development
The spiral model is a software development process combining elements of both design and
prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. It is a
meta-model, a model that can be used by other models.

The basic principles are:

• Focus is on risk assessment and on minimizing project risk by breaking a project into smaller
segments and providing more ease-of-change during the development process, as well as
providing the opportunity to evaluate risks and weigh consideration of project continuation
throughout the life cycle.
• "Each cycle involves a progression through the same sequence of steps, for each part of the
product and for each of its levels of elaboration, from an overall concept-of-operation
document down to the coding of each individual program."
• Each trip around the spiral traverses four basic quadrants:
(1) determine objectives, alternatives, and constraints of the iteration;
(2) evaluate alternatives; Identify and resolve risks;
(3) develop and verify deliverables from the iteration; and
(4) plan the next iteration.
• Begin each cycle with an identification of stakeholders and their win conditions, and end each
cycle with review and commitment.
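The four quadrants of each spiral cycle can be sketched as a simple loop over iterations. The cycle count is an arbitrary example; the quadrant descriptions are taken from the list above.

```python
# Each trip around the spiral traverses the same four quadrants.
# This sketch just records the traversal order for a given number of cycles.

QUADRANTS = [
    "determine objectives, alternatives, and constraints",
    "evaluate alternatives; identify and resolve risks",
    "develop and verify deliverables",
    "plan the next iteration",
]

def spiral(cycles):
    """Return the (cycle, quadrant) steps a project would pass through."""
    return [(c, q) for c in range(1, cycles + 1) for q in QUADRANTS]

for cycle, quadrant in spiral(2):
    print(f"cycle {cycle}: {quadrant}")
```

Two cycles yield eight steps, one pass through each quadrant per cycle, which is the essence of the model's "progression through the same sequence of steps".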

e) Rapid application development


Rapid application development (RAD) is a software development methodology, which involves
iterative development and the construction of prototypes. Rapid application development is a term
originally used to describe a software development process introduced by James Martin in 1991.

The basic principles are:


• Key objective is for fast development and delivery of a high quality system at a relatively low
investment cost.
• Attempts to reduce inherent project risk by breaking a project into smaller segments and
providing more ease-of-change during the development process.
• Aims to produce high quality systems quickly, primarily via iterative Prototyping (at any
stage of development), active user involvement, and computerized development tools. These
tools may include Graphical User Interface (GUI) builders, Computer Aided Software
Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation
programming languages, code generators, and object-oriented techniques.
• Key emphasis is on fulfilling the business need, while technological or engineering excellence
is of lesser importance.
• Project control involves prioritizing development and defining delivery deadlines or
“timeboxes”. If the project starts to slip, emphasis is on reducing requirements to fit the
timebox, not in increasing the deadline.
• Generally includes joint application design (JAD), where users are intensely involved in
system design, via consensus building in either structured workshops, or electronically
facilitated interaction.
• Active user involvement is imperative.
• Iteratively produces production software, as opposed to a throwaway prototype.
• Produces documentation necessary to facilitate future development and maintenance.
• Standard systems analysis and design methods can be fitted into this framework.

SYSTEM DEVELOPMENT LIFE CYCLE


The SDLC process was designed to ensure end-state solutions meet user requirements in support of
business strategic goals and objectives. In addition, the SDLC also provides a detailed guide to help
Program Managers with all aspects of IT system development, regardless of the system size and
scope.

The System Development Life Cycle (SDLC) is a series of seven steps that a project team works through
in order to conceptualize, analyze, design, construct, implement, maintain and finally retire an
information technology system. Adhering to an SDLC increases efficiency and accuracy and reduces the risk of product failure.

The SDLC contains a comprehensive checklist of the rules and regulations governing IT systems, and
is one way to ensure system developers comply with all applicable government regulations, because
the consequences of not doing so are high and wide ranging. This is especially true in the post 9/11
environment where larger amounts of information are considered sensitive in nature, and are shared
among commercial, international, federal, state, and local partners

Overview

The systems development life cycle (SDLC) is a process used by a systems analyst to develop an
information system, including requirements, validation, training, and user (stakeholder) ownership. The SDLC aims to produce a high
quality system that meets or exceeds customer expectations, reaches completion within time and cost
estimates, works effectively and efficiently in the current and planned Information Technology
infrastructure, and is inexpensive to maintain and cost-effective to enhance. "Systems Development


Computer systems are complex and often (especially with the recent rise of service-oriented
architecture) link multiple traditional systems potentially supplied by different software vendors. To
manage this level of complexity, a number of SDLC models or methodologies have been created,
such as "waterfall"; "spiral"; "Agile software development"; "rapid prototyping"; "incremental"; and
"synchronize and stabilize".

SDLC can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such
as XP and Scrum, focus on lightweight processes which allow for rapid changes along the
development cycle. Iterative methodologies, such as Rational Unified Process and dynamic systems
development method, focus on limited project scope and expanding or improving products by
multiple iterations. Sequential or big-design-up-front (BDUF) models, such as Waterfall, focus on
complete and correct planning to guide large projects and risks to successful and predictable results.
Other models, such as Anamorphic Development, tend to focus on a form of development that is
guided by project scope and adaptive iterations of feature development.

In project management a project can be defined both with a project life cycle (PLC) and an SDLC,
during which slightly different activities occur. According to Taylor (2004) "the project life cycle
encompasses all the activities of the project, while the systems development life cycle focuses on
realizing the product requirements". SDLC is used during the development of an IT project; it
describes the different stages involved in the project, from the drawing board through to the
completion of the project.

History
The systems life cycle (SLC) is a methodology used to describe the process for building information
systems, intended to develop information systems in a very deliberate, structured and methodical way,
reiterating each stage of the life cycle. The systems development life cycle, according to Elliott &
Strachan & Radford (2004), "originated in the 1960s, to develop large scale functional business
systems in an age of large scale business conglomerates. Information systems activities revolved
around heavy data processing and number crunching routines".

Several systems development frameworks have been partly based on SDLC, such as the structured
systems analysis and design method (SSADM) produced for the UK government Office of
Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life
cycle approaches to systems development have been increasingly replaced with alternative approaches
and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional
SDLC".

The seven-step process contains a procedural checklist and the systematic progression required to
evolve an IT system from conception to disposition. The following descriptions briefly explain each
of the seven phases of the SDLC:

1. Conceptual Planning
This phase is the first step of any system's life cycle. It is during this phase that a need to acquire or
significantly enhance a system is identified, its feasibility and costs are assessed, and the risks and
various project-planning approaches are defined. Roles and responsibilities for the Asset Manager,
Sponsor's Representative, System Development Agent (SDA), System Support Agent (SSA), and
other parties in SDLC policy are designated during this stage and updated throughout the system's life
cycle.

2. Planning and Requirements Definition.


This phase begins after the project has been defined and appropriate resources have been committed.
The first portion of this phase involves collecting, defining and validating functional, support and
training requirements. The second part is developing initial life cycle management plans, including
project planning, project management, Configuration Management (CM), support, operations, and
training management.

3. Design.
During this phase, functional, support and training requirements are translated into preliminary and
detailed designs. Decisions are made to address how the system will meet functional requirements. A
preliminary (general) system design, emphasizing the functional features of the system, is produced as
a high-level guide. Then a final (detailed) system design is produced that expands the design by
specifying all the technical detail needed to develop the system.

4. Development and Testing


During this phase, systems are developed or acquired based on detailed design specifications. The
system is validated through a sequence of unit, integration, performance, system, and acceptance
testing. The objective is to ensure that the system functions as expected and that sponsor's
requirements are satisfied. All system components, communications, applications, procedures, and
associated documentation are developed/acquired, tested, and integrated. This phase requires strong
user participation in order to verify thorough testing of all requirements and to meet all business
needs.

5. Implementation
During this phase, the new or enhanced system is installed in the production environment, users are
trained, data is converted (as needed), the system is turned over to the sponsor, and business processes
are evaluated. This phase includes efforts required to implement, resolve system problems identified
during the implementation process, and plan for sustainment.

6. Operations and Maintenance


The system becomes operational during this phase. The emphasis during this phase is to ensure that
sponsor needs continue to be met and that the system continues to perform according to
specifications. Routine hardware and software maintenance and upgrades are performed to ensure
effective system operations. User training continues during this phase, as needed, to acquaint new
users to the system or to introduce new features to current users. Additional user support is provided,
as an ongoing activity, to help resolve reported problems.

7. Disposition
This phase represents the end of the system's life cycle. It provides for the systematic termination of a
system to ensure that vital information is preserved for potential future access and/or reactivation. The
system, when placed in the Disposition Phase, has been declared surplus and/or obsolete and has been
scheduled for shutdown. The emphasis of this phase is to ensure that the system (e.g., equipment,
parts, software, data, procedures, and documentation) is packaged and disposed of in accordance with
appropriate regulations and requirements.


Systems Analysis and Design


The Systems Analysis and Design (SAD) is the process of developing Information Systems (IS) that
effectively use hardware, software, data, processes, and people to support the company's businesses
objectives. System Analysis and Design can be considered the meta-development activity, which
serves to set the stage and bound the problem. SAD can be leveraged to set the correct balance among
competing high-level requirements in the functional and non-functional analysis domains. System
Analysis and Design interacts strongly with distributed Enterprise Architecture, Enterprise I.T.
Architecture, and Business Architecture, and relies heavily on concepts such as partitioning,
interfaces, personae and roles, and deployment/operational modeling to arrive at a high-level system
description. This high level description is then further broken down into the components and modules
which can be analyzed, designed, and constructed separately and integrated to accomplish the
business goal. SDLC and SAD are cornerstones of full-lifecycle product and system planning.

Object-oriented analysis
Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem domain),
to develop a conceptual model that can then be used to complete the task. A typical OOA model
would describe computer software that could be used to satisfy a set of customer-defined
requirements. During the analysis phase of problem-solving, a programmer might consider a written
requirements statement, a formal vision document, or interviews with stakeholders or other interested
parties. The task to be addressed might be divided into several subtasks (or domains), each
representing a different business, technological, or other areas of interest. Each subtask would be
analyzed separately. Implementation constraints, (e.g., concurrency, distribution, persistence, or how
the system is to be built) are not considered during the analysis phase; rather, they are addressed
during object-oriented design (OOD).

The conceptual model that results from OOA will typically consist of a set of use cases, one or more
UML class diagrams, and a number of interaction diagrams. It may also include some kind of user
interface mock-up.
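The structural part of such a conceptual model can be sketched in code as well as in a class diagram. The following is a minimal Python illustration using an invented order-taking domain (the Customer, Order and LineItem classes are assumptions made for this sketch, not taken from the text); like a conceptual model, it records concepts and their relationships while staying free of implementation details such as persistence or concurrency.

```python
# A minimal sketch of an OOA-style conceptual model for a hypothetical
# order-taking problem domain. Class names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Customer:
    name: str


@dataclass
class LineItem:
    product: str
    quantity: int


@dataclass
class Order:
    # An Order belongs to one Customer and holds many LineItems,
    # mirroring the associations a UML class diagram would show.
    customer: Customer
    items: list = field(default_factory=list)

    def add_item(self, product, quantity):
        self.items.append(LineItem(product, quantity))


order = Order(Customer("Acme Ltd"))
order.add_item("Widget", 3)
print(len(order.items))  # 1
```

Nothing here says how orders are stored or transmitted; those concerns belong to the design phase.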

The input for object-oriented design is provided by the output of object-oriented analysis. Realize that
an output artifact does not need to be completely developed to serve as input of object-oriented
design; analysis and design may occur in parallel, and in practice the results of one activity can feed
the other in a short feedback cycle through an iterative process. Both analysis and design can be
performed incrementally, and the artifacts can be continuously grown instead of completely
developed in one shot. Some typical input artifacts for object-oriented design are:

i. Conceptual model
The conceptual model is the result of object-oriented analysis; it captures concepts in the problem
domain. The conceptual model is explicitly chosen to be independent of implementation details, such as
concurrency or data storage.

ii. Use case


Use case is a description of sequences of events that, taken together, lead to a system doing something
useful. Each use case provides one or more scenarios that convey how the system should interact with
the users called actors to achieve a specific business goal or function. Use case actors may be end
users or other systems. In many circumstances use cases are further elaborated into use case diagrams.
Use case diagrams are used to identify the actor (users or other systems) and the processes they
perform.
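Before it is drawn as a diagram, a use case can be captured as structured data: a name, its actors, and the scenario steps. The Python sketch below records a hypothetical "Withdraw Cash" use case; the actor names and steps are invented purely for illustration.

```python
# A use case captured as structured data. The "Withdraw Cash" scenario,
# its actors and its steps are hypothetical examples.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    actors: list          # end users or other systems
    main_scenario: list   # the ordered events of one scenario


withdraw_cash = UseCase(
    name="Withdraw Cash",
    actors=["Account Holder", "Banking System"],
    main_scenario=[
        "Actor inserts card and authenticates",
        "Actor requests an amount",
        "System dispenses cash and records the transaction",
    ],
)
print(withdraw_cash.actors)
```

Each element maps directly onto what a use case diagram shows: the actors and the process they perform.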

iii. System Sequence Diagram


System Sequence diagram (SSD) is a picture that shows, for a particular scenario of a use case, the
events that external actors generate, their order, and possible inter-system events.

iv. User interface documentations (if applicable)


It’s a document that shows and describes the look and feel of the end product's user interface. It is not
mandatory to have this, but it helps to visualize the end-product and therefore helps the designer.

v. Relational data model (if applicable)


A data model is an abstract model that describes how data is represented and used. If an object
database is not used, the relational data model should usually be created before the design, since the
strategy chosen for object-relational mapping is an output of the OO design process. However, it is
possible to develop the relational data model and the object-oriented design artifacts in parallel, and
the growth of an artifact can stimulate the refinement of other artifacts.
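As a concrete, simplified illustration of the object-relational mapping mentioned above, the sketch below maps a hypothetical Customer class onto a relational table using Python's built-in sqlite3 module. The class and the table layout are assumptions invented for this example, not a prescribed mapping strategy.

```python
# A minimal object-relational mapping sketch: object attributes are mapped
# onto relational columns and back. Customer and the customers table are
# hypothetical.
import sqlite3


class Customer:
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")


def save(c):
    # Object -> row: each attribute becomes a column value.
    conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)",
                 (c.customer_id, c.name))


def load(customer_id):
    # Row -> object: the row is rebuilt into a Customer instance.
    row = conn.execute("SELECT id, name FROM customers WHERE id = ?",
                       (customer_id,)).fetchone()
    return Customer(*row)


save(Customer(1, "Acme Ltd"))
print(load(1).name)  # Acme Ltd
```

Choosing this mapping (one class per table, one attribute per column) is itself a design decision, which is why the text treats the mapping strategy as an output of the OO design process.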

Management and Control

The SDLC phases serve as a programmatic guide to project activity and provide a flexible but
consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC
phase objectives are described in this section with key deliverables, a description of recommended
tasks, and a summary of related control objectives for effective management. It is critical for the
project manager to establish and monitor control objectives during each SDLC phase while executing
projects. Control objectives help to provide a clear statement of the desired result or purpose and
should be used throughout the entire SDLC process. Control objectives can be grouped into major
categories (domains), and relate to the SDLC phases as shown in the figure.

To manage and control any SDLC initiative, each project will be required to establish some degree of
a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the
project. The WBS and all programmatic material should be kept in the "project description" section of
the project notebook. The WBS format is mostly left to the project manager to establish in a way that
best describes the project work.

There are some key areas that must be defined in the WBS as part of the SDLC policy. The following
diagram describes three key areas that will be addressed in the WBS in a manner established by the
project manager.

Work breakdown structured organization

The upper section of the work breakdown structure (WBS) should identify the major phases and
milestones of the project in a summary fashion. In addition, the upper section should provide an
overview of the full scope and timeline of the project and will be part of the initial project description
effort leading to project approval. The middle section of the WBS is based on the seven systems
development life cycle (SDLC) phases as a guide for WBS task development. The WBS elements
should consist of milestones and "tasks" as opposed to "activities" and have a definitive period
(usually two weeks or more). Each task must have a measurable output (e.g., a document, a decision, or
an analysis). A WBS task may rely on one or more activities (e.g., software engineering, systems
engineering) and may require close coordination with other tasks, either internal or external to the
project. Any part of the project needing support from contractors should have a statement of work
(SOW) written to include the appropriate tasks from the SDLC phases. The development of an SOW
does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC
process that may be conducted by external resources such as contractors.
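A WBS of this kind can be pictured as a simple nested structure of phases and tasks. The Python sketch below uses invented phase names, tasks and durations purely for illustration; each task carries a measurable output and a definite period, as required above.

```python
# A work breakdown structure as nested data: phases at the top, tasks
# beneath them. All names and durations are hypothetical.
wbs = {
    "Initiation": [
        {"task": "Project charter", "output": "document", "weeks": 2},
    ],
    "Requirements Analysis": [
        {"task": "Requirements statement", "output": "document", "weeks": 4},
        {"task": "Feasibility decision", "output": "decision", "weeks": 2},
    ],
}

# Summing task durations gives one input to the project timeline overview
# that the upper section of the WBS must provide.
total_weeks = sum(t["weeks"] for tasks in wbs.values() for t in tasks)
print(total_weeks)  # 8
```

A real WBS would carry more phases and dependencies between tasks; the point is only that phases, tasks, outputs and periods are the minimum information each element records.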

Baselines in the SDLC

Baselines are an important part of the systems development life cycle (SDLC). These baselines are
established after four of the five phases of the SDLC and are critical to the iterative nature of the
model. Each baseline is considered a milestone in the SDLC.

 Functional baseline: established after the conceptual design phase.


 Allocated baseline: established after the preliminary design phase.
 Product baseline: established after the detail design and development phase.
 Updated product baseline: established after the production construction phase.

SDLC Objectives

When we plan to develop, acquire or revise a system we must be absolutely clear on the objectives of
that system. The objectives must be stated in terms of the expected benefits that the business expects
from investing in that system.

The objectives define the expected return on investment.

An SDLC has three primary business objectives:

a) Ensure the delivery of high quality systems;


b) Provide strong management controls;
c) Maximize productivity.
In other words, the SDLC should ensure that we can produce more function, with higher quality, in
less time, and in a predictable manner.

a. Ensure High Quality


Judging the quality of a wine or a meal is a subjective process. The results of the evaluation reflect the
tastes and opinions of the taster. But we need a more rigorous, objective approach to evaluating the
quality of systems. Therefore, before we can ensure that a system has high quality, we must know
what quality is in a business context.

The primary definition of quality in a business context is the return on investment (ROI) achieved by
the system. The business could have taken the money spent on developing and running the system and
spent it on advertising, product development, staff raises or many other things. However, someone
made a decision that if that money was spent on the system it would provide the best return or at least
a return justifying spending the money on it.


This ROI can be the result of such things as: operational cost savings or cost avoidance; improved
product flexibility resulting in a larger market share; and/or improved decision support for strategic,
tactical and operational planning. In each case the ROI should be expressed quantitatively, not
qualitatively. Qualitative objectives are almost always poorly defined reflections of incompletely
analyzed quantitative benefits.

The SDLC must ensure that these objectives are well defined for each project and used as the primary
measure of success for the project and system.

The business objectives provide the contextual definition of quality. There is also an intrinsic
definition of quality. This definition of quality centers on the characteristics of the system itself: is it
zero-defect, is it well-structured, is it well-documented, is it functionally robust, etc. These
characteristics are obviously directly linked to the system's ability to provide the best possible ROI.
Therefore, the SDLC must ensure that these qualities are built into the system. However, how far you
go in achieving intrinsic quality is tempered by the need to keep contextual quality (i.e., ROI) the
number one priority. At times there are trade-offs to be made between the two. Within the constraints
of the business objectives, the SDLC must ensure that the system has a high degree of intrinsic
quality.

b. Provide Strong Management Control


The essence of strong management controls is predictability and feedback. Projects may last for many
months or even years. Predictability is provided by being able to accurately estimate, as early as
possible, how long a project will take, how many resources it will require and how much it will cost.
This information is key to determining if the ROI will be achieved in a timely manner or at all. The
SDLC must ensure that such planning estimates can be put together before there have been any
significant expenditures of resources, time and money on the project. The feedback process tells us
how well we are doing in meeting the plan and the project's objectives. If we are on target, we need
that verified. If there are exceptions, these must be detected as early as possible so that corrective
actions can be taken in a timely manner. The SDLC must ensure that management has timely,
complete and accurate information on the status of the project and the system throughout the
development process.

c. Maximize Productivity
There are two basic definitions of productivity. One centers on what you are building; the other is
from the perspective of how many resources, how much time and how much money it takes to build
it. The first definition of productivity is based on the return on investment (ROI) concept. What value
is there in doing the wrong system twice as fast? It would be like taking a trip to the wrong place in a
plane that was twice as fast. You might have been able to simply walk to the correct destination.
Therefore, the best way to measure a project team's or system department's productivity is to measure
the net ROI of their efforts. The SDLC must not just ensure that the expected ROI for each project is
well defined. It must ensure that the projects being done are those with the maximum possible ROI
opportunities of all of the potential projects.

Even if every project in the queue has significant ROI benefits associated with it, there is a practical
limit to how large and how fast the systems organization can grow. We need to make the available
staff as productive as possible with regard to the time, money and resources required to deliver a
given amount of function. The first issue we face is the degree to which the development process is
labor-intensive. Part of the solution lies in automation. The SDLC must be designed in such a way as
to take maximum advantage of computer-aided software engineering (CASE) tools.

The complexity of the systems and the technology they use has required increased specialization.
These specialized skills are often scarce. The SDLC must delineate the tasks and deliverables in such
a way as to ensure that specialized resources can be brought to bear on the project in the most
effective and efficient way possible.

One of the major wastes of resources on a project is having to do things over. Scrap and rework
occurs due to such things as errors and changes in scope. The SDLC must ensure that scrap and
rework is minimized. Another activity that results in non-productive effort is the start-up time for new
resources being added to the project. The SDLC must ensure that start-up time is minimized in any
way possible. A final opportunity area for productivity improvements is the use of off-the-shelf
components. Many applications contain functions identical to those in other applications. The SDLC
should ensure that if useful components already exist, they can be re-used in many applications.

What we have identified so far are the primary business objectives of the SDLC and the areas of
opportunity we should focus on in meeting these objectives. What we must now do is translate these
objectives into a set of requirements and design points for the SDLC.

SDLC Requirements

The requirements for the SDLC fall into five major categories:

- Scope
- Technical Activities
- Management Activities
- Usability
- Installation Guidance
The scoping requirements bound what types of systems and projects are supported by the SDLC. The
technical and management activities define the types of tasks and deliverables to be considered in the
project. The usability requirements address the various ways in which the SDLC will be used by the
team members and what must be considered in making the SDLC easy to use in all cases. The
installation requirements address the needs associated with phasing the SDLC into use, possibly piece
by piece, over time.

Scope Requirements

The SDLC must be able to support various project types, project sizes and system types.

Project Types

There are five project types that the SDLC must support:

- New Development
- Rewrites of Existing Systems
- Maintenance
- Package Selection
- System Conversions


New Development

A totally new system development effort implies that there is no existing system. You have a blank
sheet of paper and total latitude in defining its requirements and design. In reality this is a rather rare
occurrence.

Rewrites

In a rewrite there is an existing system but the current design has degenerated to become so poorly
structured that it is difficult to maintain or add any significant new features. Therefore, a new system
will be created to take its place. However, there is a necessity to retain a high degree of functional
compatibility with the existing system. Thus, you might go from a batch system to an on-line system,
from a centralized system to a distributed system, etc., but the core business (i.e., logical) functions
remain the same.

Maintenance

Here we must be careful to make the distinction between a management definition of maintenance and
a technical definition. From a management perspective, some organizations call any project of under
six person months, or some similar resource limit, that affects an existing system a maintenance
project. Some even reduce this to just the effort required to fix errors and comply with regulatory
changes. This can also be called "zero-based maintenance", after zero-based budgeting, since anything
over that is discretionary. The rest is called development.

We prefer to use a technical definition of maintenance to mean any incremental improvements to an
existing system, regardless of the size of the changes. The rationale for this is that the techniques and
tools required to go into an existing system differ from those where there is a blank sheet of paper.

Package Selection

Package selection involves evaluating, acquiring, tailoring and installing third party software.

System Conversions

A system conversion involves translating a system to run in a new environment. This includes
conversions to a new language, a new operating system, a new computer, new disk drives, a new
DBMS, etc. In doing the translation, the system is not redesigned. It is ported over to the new
environment on a one-to-one basis to the extent possible.

In reality, projects are often a blend of these various project types. For example, a package installation
may also require maintenance changes to interfacing systems, developing some new code, converting
other code to run on a compatible configuration and rewriting portions of some systems. The SDLC
must handle each project type and any blend of them.

Project Sizes

Projects come in many sizes. Some may last as short as a day, staffed by only one person. Others may
last many years, staffed by hundreds of people scattered across many development locations. The
types and degree of the management controls, such as project check-points and status reporting,
change depending on the size of the effort. The SDLC must accommodate the full range of project
sizes without burdening the small project nor oversimplifying to the detriment of the large project.

System Types

The SDLC must be able to support:

- Batch systems, on-line systems, real time systems.


- Mainframe, client-server, web, PC, imbedded systems.
- Centralized systems and distributed systems.
- Stand-alone systems and integrated systems.
- Automated systems and manual systems.

The SDLC must support each of these and any combination of them. The SDLC actually needs to
support the full range of combinations and permutations of the various project types, project sizes and
system types. It must do this in a single lifecycle. Creating a unique lifecycle for each possible
combination of the above would result in literally billions of SDLC's. (We leave the computation up
to the reader.)

Technical Activities

The technical activities fall into a number of major categories:

- system definition (analysis, design, coding)


- testing
- system installation (e.g., data conversion, training)
- production support (e.g., problem management)
- evaluating alternatives
- defining releases
- reconciling information across multiple phases
- reconciling to a global view
- defining the project's technical strategy

In addressing each of these topics we will need to distinguish what tasks must be performed from how
they might be performed. This distinction is important since the "how-to" is dependent on the specific
software engineering techniques selected and the available CASE tools. However, the "what" should
be generic and stable, regardless of the techniques and tools.

System Definition

In defining the requirements for supporting analysis, design and coding we must consider three
aspects of the problem: system components, the categories of requirements and system views.

System Components

Regardless of the techniques being used, any system can be said to be composed of ten basic
component types:


a) Use Cases
b) Functions
c) Triggers
d) Data Stores
e) Data Flows
f) Data Elements
g) Processors
h) Data Storage
i) Data Connections
j) Actors/External Entities

a) Use Cases are an ordered set of processes, initiated by a specific trigger (e.g., transaction, end
of day), which accomplish a meaningful unit of work from the perspective of the user.
b) Functions are context-independent processes that transform data and/or determine the state of
entities.
c) Triggers are the events that initiate Use Cases. There are three types of triggers: time triggers,
state triggers and transaction triggers.
d) Data stores are data at rest.
e) Data flows are data in movement between two processes, a process and a data store, etc.
f) Data elements are the atomic units within data flows and data stores.
g) Processors are the components which execute the processes and events (i.e., computers and
people).
h) Data storage is the repository in which the data stores reside (e.g., disks, tapes, filing
cabinets).
i) Data connections are the pipelines through which the data flows travel (e.g., communications
network, the mail).
j) Actors/External entities are people or systems outside the scope of the system under
investigation but with which it must interface.

Each of these components has many properties or attributes which are needed to fully describe it.
For example, in describing a process we can state its algorithm, who or what executes it, where it
takes place, when it takes place, how much information it must process, etc. In a given project and
for a given component, the properties which must be gathered/defined may vary. The SDLC must
allow for this flexibility versus an all-or-nothing approach.
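To make the inventory concrete, the Python sketch below (with names invented for illustration) lists the ten component types and attaches a project-specific property set to a single function, showing that the properties gathered per component can vary from project to project.

```python
# The ten basic component types of any system, plus a sample property set
# for one hypothetical function. All specific names are illustrative.
COMPONENT_TYPES = [
    "use case", "function", "trigger", "data store", "data flow",
    "data element", "processor", "data storage", "data connection",
    "actor/external entity",
]

# Only the properties this particular project chooses to capture:
validate_order = {
    "type": "function",
    "algorithm": "check stock level against quantity ordered",
    "executed_by": "order server",     # who or what executes it
    "volume": "500 transactions/day",  # how much it must process
}

assert validate_order["type"] in COMPONENT_TYPES
print(len(COMPONENT_TYPES))  # 10
```

Another project might record where and when the function executes instead; the SDLC only requires that whatever properties are needed can be attached, not that every property is always gathered.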

Strengths and weaknesses


Few people in the modern computing world would use a strict waterfall model for their systems
development life cycle (SDLC), as many modern methodologies have superseded this thinking. Some
will argue that the SDLC no longer applies to models like Agile computing, but it is still a term
widely used in technology circles. The SDLC practice has advantages in traditional models of
software development, which lend themselves more to a structured environment. The disadvantages
of using the SDLC methodology appear when there is a need for iterative development (e.g., web
development or e-commerce), where stakeholders need to review the software being designed on a
regular basis. Instead of viewing the SDLC from a strength-or-weakness perspective, it is far more
important to take the best practices from the SDLC model and apply them to whatever may be most
appropriate for the software being designed.

A comparison of the strengths and weaknesses of SDLC:

Strengths and Weaknesses of SDLC

Strengths:
1. Control.
2. Monitor large projects.
3. Detailed steps.
4. Evaluate costs and completion targets.
5. Documentation.
6. Well defined user input.
7. Ease of maintenance.
8. Development and design standards.
9. Tolerates changes in MIS staffing.

Weaknesses:
1. Increased development time.
2. Increased development cost.
3. Systems must be defined up front.
4. Rigidity.
5. Hard to estimate costs; project overruns.
6. User input is sometimes limited.

An alternative to the SDLC is rapid application development, which combines prototyping, joint
application development and implementation of CASE tools. The advantages of RAD are speed,
reduced development cost, and active user involvement in the development process.

RAPID APPLICATION DEVELOPMENT


RAD is a methodology that enables organizations to develop strategically important systems faster
while reducing development costs and maintaining quality. This is achieved by using a series of proven
application development techniques, within a well-defined methodology. These techniques include the
use of:
 Small, well trained development teams
 Evolutionary prototypes
 Integrated power tools that support modelling, prototyping and component re-usability.
 A central repository
 Interactive requirements and design workshops.
 Rigid limits on development time frames.

RAD supports the analysis, design, development and implementation of individual application
systems. However, RAD does not support the planning or analysis required to define the information
needs of the enterprise as a whole or of a major business area of the enterprise. RAD provides a means
for developing systems faster while reducing cost and increasing quality. This is done by:
i. Automating large portions of the system development life cycle,
ii. Imposing rigid limits on development time frames and
iii. Re-using existing components.

The RAD methodology has four major stages:


i. The concept definition stage defines the business functions and data subject areas that the
system will support and determines the system scope.
ii. The functional design stage uses workshops to model the system’s data and processes and to
build a working prototype of critical system components.
iii. The development stage completes the construction of the physical database and application
system, builds the conversion system and develops user aids and deployment work plans.
iv. The deployment stage includes final user testing and training, data conversion and the
implementation of the application system.

Joint Application Design (JAD)


A structured process in which users, managers and analysts work together for several days in a series
of intensive meetings to specify or review system requirements. Aims to develop a shared
understanding of what the information system is supposed to do.

End-user development
End-user development refers to the development of information systems by end users with minimal or
no assistance from professional systems analysts or programmers. This is accomplished through
sophisticated "user-friendly" software tools and gives end-users direct control over their own
computing.

Advantages of end-user development include:


 Improved requirements determination.
 Large productivity gains have been realized when developing certain types of applications.
 Enables end users to take a more active role in the systems development process.
 Many can be used for prototyping.
 Some have new functions such as graphics, modeling, and ad hoc information retrieval.

Disadvantages of end-user development include:


 It is not suited to large transaction-oriented applications or applications with complex updating
requirements.
 Standards for testing and quality assurance may not be applied.
 Proliferation of uncontrolled data and "private" information systems.

End-user development is suited to solving some of the backlog problem because the end-users can
develop their needed applications themselves. It is suited to developing low-transaction systems. End-
user development is valuable for creating systems that access data for such purposes as analysis
(including the use of graphics in that analysis) and reporting. It can also be used for developing simple
data-entry applications.

Computer Aided Software Engineering (CASE)


CASE stands for Computer-Aided Software Engineering: the automation of the steps and
methodologies for systems analysis and development. CASE tools reduce the repetitive work that
developers do. They usually have graphical tools for producing charts and diagrams, and may include
any of the following:
a) screen and report generators,


b) data dictionaries,
c) reporting facilities,
d) code generators, and
e) documentation generators.

These tools can greatly increase the productivity of the systems analyst or designer by:
 Enforcing a standard.
 Improving communication between users and technical specialists.
 Organizing and correlating design components and providing rapid access to them via a design
repository or library.
 Automating the tedious and error-prone portions of analysis and design.
 Automating testing and version control

BUSINESS PROCESS RE-ENGINEERING


Business Process Reengineering involves changes in structures and in processes within the business
environment. The entire technological, human, and organizational dimensions may be changed in
BPR.

Information Technology plays a major role in Business Process Reengineering as it provides office
automation, it allows the business to be conducted in different locations, provides flexibility in
manufacturing, permits quicker delivery to customers and supports rapid and paperless transactions.
In general it allows an efficient and effective change in the manner in which work is performed.

What is the Business Process Re-engineering

The globalization of the economy and the liberalization of the trade markets have formulated new
conditions in the market place which are characterized by instability and intensive competition in the
business environment. Competition is continuously increasing with respect to price, quality and
selection, service and promptness of delivery.

Removal of barriers, international cooperation and technological innovations cause competition to
intensify. All these changes impose the need for organizational transformation, where the entire
processes, organization climate and organization structure are changed. Hammer and Champy provide
the following definitions:

Reengineering is the fundamental rethinking and radical redesign of business processes to achieve
dramatic improvements in critical contemporary measures of performance such as cost, quality,
service and speed.

Process is a structured, measured set of activities designed to produce a specified output for a
particular customer or market. It implies a strong emphasis on how work is done within an
organization.

Business processes are characterized by three elements:

a) the inputs (data such as customer inquiries, or materials),


b) the processing of the data or materials (which usually goes through several stages and may
involve unnecessary stops that turn out to be time- and money-consuming), and
c) the outcome (the delivery of the expected result). The problematic part of the process is the
processing.
Business process reengineering mainly intervenes in the processing part, which is reengineered in
order to become less time- and money-consuming.

The term "Business Process Reengineering" has, over the past couple of years, gained increasing
circulation. As a result, many find themselves faced with the prospect of having to learn, plan,
implement and successfully conduct a real Business Process Reengineering endeavor, whatever that
might entail within their own business organization. Hammer and Champy (1993) define business
process reengineering (BPR) as the fundamental rethinking and radical redesign of the business
processes to achieve dramatic improvements in critical, contemporary measures of performance, such
as cost, quality, service and speed.

It is a new management approach reflecting the practices and experiences of managers, and providing
a source of practical feedback to management science. It represents a response to:

 Failure of business processes to meet customer needs and deliver customer satisfaction.
 The challenge to organizational politics.
 The gap between the strategic decision made in the boardroom and the day-to-day practice of
the business.
 The disappointment following the application of information technology to businesses during
the 1980s. This resulted in business failures because senior managers failed to align IT
strategy with corporate objectives.
BPR is not confined to manufacturing process and has been applied to a wide range of administrative
and operational activities.

There are a number of principles that have been identified for BPR including:

1. Processes should be designed to achieve desired outcomes rather than focus on tasks.
Removal of job demarcation and emphasize multi-skilling.
2. People who use the output should perform the process themselves. For example, a company
could set up a database of approved suppliers; this would allow the personnel who actually
require supplies to order them themselves, using on-line technology, thereby eliminating the
need for a separate purchasing department.
3. Incorporate information processing into the real work that produces the information- avoid
separate data gathering processes or operations.
4. Geographically dispersed resources should be treated as if they were centralized for example
economies of scale through central negotiation of supply contracts, without losing the benefits
of decentralization e.g. flexibility and responsiveness.
5. Link parallel activities rather than integrate the results. This would involve for example, co-
ordination between teams working on different aspects of a single process.
6. Empowerment: 'doers' should be allowed to be self-managing. Put the decision point where
the work is performed.
7. Capture information only once. Ideally only at its source.

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 79

Some of the advantages of business process reengineering include:

1. BPR revolves around customer needs and helps to give appropriate focus to the business.
2. It provides cost advantages that assist the organization's competitive position.
3. It encourages a long-term strategic view of operational processes by asking radical questions about how things are done and how they could be improved.
4. It focuses on entire processes, so the exercise can streamline activities throughout the organization.
5. It can help eliminate unnecessary activities and thereby reduce organizational complexity.

Some of the disadvantages of business process reengineering include:

1. It requires far-reaching and long term commitment by management and staff. Securing this is
not an easy task.
2. It is sometimes incorrectly seen as a one-off cost-cutting exercise, when in fact cost cutting is not its primary aim and it should be an ongoing process. This view can create hostility, as employees see it as a threat to job security.
3. It is also sometimes used to make small, incremental changes when it should be used to make radical ones.

Objectives of BPR
When applying the BPR management technique to a business organization the implementation team
effort is focused on the following objectives:

1. Customer focus
Customer service oriented processes aiming to eliminate customer complaints.

2. Speed
Dramatic compression of the time it takes to complete key business processes. For instance, if a process had an average cycle time of 5 hours before BPR, after BPR the average cycle time might be cut to half an hour.

3. Compression
Cutting major costs and capital requirements throughout the value chain. By reorganizing its processes, a company develops transparency at the operational level, reducing cost. For instance, a decision to buy a large amount of raw material at a 50% discount may be connected to eleven cross-checks across the organizational structure, from cash flow and inventory to production planning and marketing. These checks become easy to implement within cross-functional teams, optimizing decision making and cutting operational cost.

4. Flexibility
Adapting processes and structures to changing conditions and competition. Being closer to the customer, the company can develop mechanisms to rapidly spot weak points and adapt to new market requirements.


5. Quality
Obsession with superior service and value to customers. The level of quality is controlled and monitored by the processes themselves, and does not depend mainly on the person serving the customer.

6. Innovation
Leadership through imaginative change, providing the organization with competitive advantage.

7. Productivity
Drastic improvement in effectiveness and efficiency.

SYSTEM DEVELOPMENT CONSTRAINTS


In today's world, technologies are developing at a fast pace. Information systems face constant change, and new technologies appear ever more often. The advancement of technologies for producing an Information System (IS) is good but at the same time introduces constraints. The major constraints on a new Information System (IS) development project are scope, time and budget (cost). These constraints are also known as the project management triangle.

Project scope is the work that needs to be accomplished to deliver the specified features and functions of a project. Put another way, the project scope is the set of goals that must be fulfilled in order to complete the project. Without a proper project scope, the development team can go off track and produce a final deliverable that is not what the client intended. This results in a loss of time and resources for the company that develops the application; cost also increases, since the delivered system must be recoded to suit the client's needs.

Time is the measurement used in project scheduling to estimate the project duration. It can be expressed in days, weeks, months or years depending on the complexity and size of the project. The project duration should be carefully planned and enough time should be allowed for each stage. Without allocating adequate time, the system produced may not be of optimum quality, because the development team may need to rush to finish within the time frame, resulting in a poorly designed and coded system that is prone to errors and bugs.

Budget (cost) provides a forecast of revenues and expenditures for the project. By evaluating the budget, profit and loss can also be estimated. This helps in deciding whether to undertake the project or


not. Budget estimation is a difficult task, since it involves analytical skill and numerical estimation. Without a proper budget plan, development can overspend, leading to higher production cost and a lower profit margin.

With effective project management, one can balance these constraints. Project management helps to plan every stage of a project carefully from start to end, and there are tools and techniques that help to plan and analyse the project before the actual work starts.

Firstly, project management provides a standard methodology. The methodology becomes a standard practice or framework for all the projects the development team undertakes. Having a standardized practice improves development productivity, because team members get used to the development approach and become familiar with it after working on several projects. It also helps the development team stick to the agreed scope without going off track.

Secondly, project management also offers a shorter implementation time, because effective project management takes control of each process in the project development. Proper planning of time in project scheduling ensures that the project duration is carefully crafted to avoid wasting resources. This results in a shorter implementation time for the deployment of the Information System (IS). Tools and techniques used to plan the duration of projects include the critical path method and project scheduling charts, among others.
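
The critical path method mentioned here can be sketched briefly. The sketch below is illustrative only: the task names, durations and dependencies are invented, not taken from any real project.

```python
# Minimal critical path calculation: each task maps to
# (duration_in_days, list_of_prerequisite_tasks).
tasks = {
    "analysis": (5, []),
    "design":   (7, ["analysis"]),
    "coding":   (10, ["design"]),
    "testing":  (4, ["coding"]),
    "training": (3, ["design"]),            # runs in parallel with coding
    "deploy":   (1, ["testing", "training"]),
}

def critical_path(tasks):
    """Return (project duration, tasks on the critical path)."""
    finish = {}   # earliest finish time of each task
    via = {}      # the predecessor that constrains each task's start

    def earliest_finish(name):
        if name not in finish:
            duration, preds = tasks[name]
            start, via[name] = 0, None
            for p in preds:
                if earliest_finish(p) > start:
                    start, via[name] = earliest_finish(p), p
            finish[name] = start + duration
        return finish[name]

    end = max(tasks, key=earliest_finish)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = via[node]
    return finish[end], list(reversed(path))

duration, path = critical_path(tasks)
print(duration)  # 27 (days)
print(path)      # ['analysis', 'design', 'coding', 'testing', 'deploy']
```

Tasks off the critical path (here, training) have slack; delaying any task on the path delays the whole project.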

Apart from that, project management also allows costs to be planned fairly accurately. Tools and techniques such as the COCOMO model, payback period, NPV and IRR help to draft and analyse the likely cost and profit of the system to be produced. With a budget estimate, the company can decide whether it is capable of taking the project under its wing, and at the same time avoid monetary losses and mishaps.
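
Two of the appraisal techniques named above, payback period and NPV, are simple enough to sketch directly. The cash-flow figures below are invented for illustration.

```python
def payback_period(initial_cost, annual_inflows):
    """Years until cumulative inflows recover the initial cost
    (interpolating within the final year for a fractional answer)."""
    remaining = initial_cost
    for year, inflow in enumerate(annual_inflows, start=1):
        remaining -= inflow
        if remaining <= 0:
            return year + remaining / inflow
    return None  # never paid back within the horizon

def npv(rate, initial_cost, annual_inflows):
    """Net present value: discount each year's inflow back to today."""
    pv = -initial_cost
    for year, inflow in enumerate(annual_inflows, start=1):
        pv += inflow / (1 + rate) ** year
    return pv

cost = 100_000
inflows = [40_000, 40_000, 40_000, 40_000]

print(payback_period(cost, inflows))    # 2.5 (years)
print(round(npv(0.10, cost, inflows)))  # 26795 — positive, so accept
```

A positive NPV at the chosen discount rate suggests the project adds value; payback period ignores the time value of money but is easy to communicate.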

In conclusion, project management is an effective method that helps the development team overcome the major constraints in system development. Project management should be applied to projects of all scales, big or small.

REVISION EXERCISES
1. Discuss the role of management in system development.
2. Explain the role of management information system in decision making.
3. Discuss the various system development approaches.
4. Discuss the various types of prototypes.
5. What are some of the advantages and disadvantages of prototyping?
6. What is an application software package? What are the advantages and disadvantages of
developing information systems based on software packages?
7. What is meant by system development life cycle?
8. Discuss the various phases of system development life cycle.
9. What are the main objectives of system development life cycle?
10. What are some of the strengths and weakness of system development life cycle?
11. What are the advantages and disadvantages of building an information system using the
traditional systems life cycle?

12. Define rapid application development.


13. Discuss the key players in rapid application development
14. What do we mean by end-user development? What are its advantages and disadvantages?
15. What are software reengineering and reverse engineering? How can they help system
builders?
16. Explain the term business process re-engineering.
17. Discuss the principles of business process re-engineering.
18. What are some of the advantages and disadvantages of business process re-engineering?
19. What are some of the objectives of business process re-engineering?
20. Discuss the various constraints in system development.
21. What is rapid application development (RAD)? What system-building tools and methods can
be used in RAD?
22. What is the difference between object-oriented software development and traditional
structured methodologies?
23. What is CASE? How can it help system builders?


CHAPTER 3
INFORMATION SYSTEMS IN AN ENTERPRISE
SYNOPSIS
Introduction……………………………………………………. 83
Types of Information Systems…………………………………. 89
Systems in a Functional Perspective…………………………… 96
Enterprise Applications and the Business Process Integration… 100

INTRODUCTION
An enterprise information system is generally any kind of computing system that is of "enterprise
class". This means typically offering high quality of service, dealing with large volumes of data and
capable of supporting some large organization ("an enterprise").

Enterprise information systems provide a technology platform that enables organizations to integrate
and coordinate their business processes. An enterprise information system provides a single system
that is central to the organization and that ensures information can be shared across all functional
levels and management hierarchies. Enterprise systems create a standard data structure and are
invaluable in eliminating the problem of information fragmentation caused by multiple information
systems within an organization.

A typical enterprise information system would be housed in one or more data centers, would run
enterprise software, and could include applications that typically cross organizational borders such as
content management systems.

The word enterprise can have various connotations. Frequently the term is used only to refer to very large organizations. However, the term may be used to mean virtually anything, by virtue of its having become the latest corporate-speak buzzword.

There are many kinds of software systems, and information is a constant in all of them.

Information is not only a constant element in the systems we are going to focus on, but their fundamental element. The relevance of information in these systems follows from their function: managing this ungraspable element that we call information.

That is why the main problems these systems have to solve relate to information representation and persistence, data reception and transmission, and the devices that help us transmit and communicate this information.

Then, what is an information system? We can define an information system as a set of components (or elements) that operate together in order to capture, process, store and distribute information. This information is generally used for decision making, co-ordination, control and analysis in an organisation. In many cases, the system's basic aim is the management of that information.


An information system can further be defined as a coordinated network of components which act together to produce, distribute and/or process information. An important attribute of computer-based information systems is precision, which may not apply to other types of systems.

System
In a system, a network of components works towards a single objective; if there is a lack of co-ordination among components, the result is counterproductive. A system may have the following features:

a. Adaptability
Some systems are adaptive to the external environment, while others are not. For example, the anti-lock braking system in a car reacts to road conditions, whereas the car's music system is independent of whatever else is happening to the car.

b. Limitation
Every system has pre-defined limits or boundaries within which it operates. These limits or boundaries can be set by law or by the current state of technology.

Information
Information is often loosely defined as data. However, raw data is not in itself information: data acquires meaning and significance only once it is processed and interpreted into information. Information is represented with data, symbols and letters.

Information has following properties:

 Objective: One of the key properties of information is its objectivity. Objective information is a key component of any modern scientific research.
 Subjective: A set of information that is useful to one branch of science may be abstract or irrelevant to another; information is therefore also subjective.
 Temporary: Information is temporary; it changes with every update to the database.

Representation of Information

Information is represented with the help of data: numbers, letters or symbols. Information is perceived in the way it is represented. The decimal system and the binary system are two ways of representing information; the binary circuits of computers are designed to operate under two states (0, 1).

Organization of Information

The way in which information is organized directly affects the way the information is managed and retrieved.

The simplest way of organizing information is the linear model. In this form, data is structured one item after another, for example on magnetic tapes, music tapes, etc.

In the binary tree model, data is arranged in an inverted tree format in which each branch splits into two.


The hierarchy model is derived from the binary tree model. In this model, a branch can assume multiple values; for example, the UNIX operating system uses this model for its file system.

The hypertext model is another way of organizing information; the World Wide Web is an example of this model.

The random access model is another way of organizing information. This model is used for optimum utilization of available computer storage space: data is stored at specified locations under the direction of the operating system.
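
The linear and hierarchy models above can be sketched with ordinary data structures. The directory and file names below are invented for illustration; `None` marks a leaf (a file).

```python
# Linear model: items stored one after another, as on a magnetic tape.
linear = ["record1", "record2", "record3"]

# Hierarchy model: each branch may hold many children, as in a UNIX
# file system. Nested dicts give the inverted-tree layout.
filesystem = {
    "home": {"alice": {}, "bob": {}},
    "etc": {"hosts": None},
}

def find(tree, name, path=""):
    """Walk the hierarchy depth-first; return the path to `name`, or None."""
    for key, child in tree.items():
        here = path + "/" + key if path else key
        if key == name:
            return here
        if isinstance(child, dict):
            hit = find(child, name, here)
            if hit:
                return hit
    return None

print(find(filesystem, "hosts"))  # 'etc/hosts'
```

Retrieval reflects the organization: the linear model must be scanned from the start, while the hierarchy is navigated branch by branch.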

Networking Information

Information is networked through a network topology: the layout of all the connected devices, which gives a virtual shape or structure to the network. The physical layout may not be representative of the network topology. The basic types of topology are bus, ring, star, tree and mesh.

These topologies are constructed and managed with the help of hubs, switches, bridges, routers, brouters and gateways.

Securing Information

Security of information, as well as of the information system itself, is critical. Data back-up is one way in which information can be secured. Security management for networks and information systems differs across setups such as home, small business, medium business, large business, school and government.

For most businesses, there are a variety of requirements for information. Senior managers need
information to help with their business planning. Middle management need more detailed information
to help them monitor and control business activities. Employees with operational roles need
information to help them carry out their duties.

As a result, businesses tend to have several "information systems" operating at the same time. This
revision note highlights the main categories of information system and provides some examples to
help you distinguish between them.

Features
If we go further, we may wonder about the main features of information systems. Let's analyse them:

a) They manage huge amounts of persistent data (concretely, the data they store).
b) They manage concurrent access to information by many users (these users produce and consume the data the system manages).
c) Information system graphic interfaces are, to some extent, defined in relation to the kind of information the system manages (certainly in many data-entry screens and reports).
d) Information systems can integrate with many other enterprise applications.


The Changing Face of Business Environment


The last decade has seen rapid development in information technology and its application. This has changed the way we look at the world as well as the way business is conducted. Both business and trade have gained under the wave of information technology, with improvements in efficiency, productivity and the bottom line. Productivity improvement has facilitated speedy and accurate production in large volumes. The Indian financial sector has also benefited from advancements in information technology.

Business and Information Technology


The current global and competitive business environment constantly demands innovation: the existing knowledge base becomes obsolete, and processes must be improved continuously. The learning curve is always being tested, and every company strives to stay ahead of it. This shift in the way business is conducted has created the new reality of ever-shortening product and service life cycles. More and more companies are coming out with customized products and finding ways to differentiate themselves from the competition.

A recent survey highlighted that the changes in the business environment can be summarized as follows:

Globalization and the opening up of markets have not only increased competition but have also allowed companies to operate in markets previously considered forbidden.

The inclusion of information technology as an integral part of the business environment has ensured that companies are able to process, store and retrieve huge amounts of data at ever-dwindling cost.

Globalization has encouraged the free movement of capital, goods and services across countries.

Characteristics of Business Environment


To understand the business environment and the drivers of change, it is first important to study its characteristics. They are as follows:

i. Business environments are complex and dynamic, because they depend on factors such as political, economic, legal, technological and social conditions for sustenance.
ii. The business environment affects companies in different industries in its own unique way. For example, importers may favor a lower exchange rate while exporters may favor a higher one.
iii. When the business environment changes, some fundamental effects are short term while others are felt over a period of time.

Business Process Outsourcing


Business Process Outsourcing (BPO) involves contracting one or more front-end (customer-related) or back-end (finance, HR, accounting, etc.) activities of a company to a third-party service provider. The number of jobs in the BPO industry has increased exponentially in the last decade. BPO is one of the new faces of the business environment.


Outsourcing has helped companies reduce their overhead expenses, improve productivity, shorten innovation cycles, penetrate new markets and improve customer experience. India has seen tremendous growth in the BPO industry in functions like customer care, finance/accounts, payroll, high-end financial services, human resources, etc.

Emerging Trends
The recent explosion of information technology has produced a few significant emerging trends, for example mobile platforms for doing business, cloud computing, and technology for handling large volumes of data.

These fresh technologies and platforms offer numerous opportunities for companies to gain strategic business advantage and stay ahead of the competition. Companies need to develop new plans so as to maintain flexibility and deliver products and services that satisfy customers.

Functions of an Information System


The functions of an information system can be generally classified into those functions involved in:

a) Transaction processing
b) Management reporting

a) Transaction processing
Major processing functions include:

i. Process transactions
Activities such as making a purchase or a sale, or manufacturing a product. A transaction may be internal to the organization or involve an external entity. Performance of a transaction requires records to:

 Direct a transaction to take place
 Report, confirm or explain its performance
 Convey it to those needing a record for background information or reference.

ii. Maintain master files


Many processing activities require the operation and maintenance of a master file, which stores relatively permanent or historical data about organizational entities. For example, processing an employee paycheck requires data items such as rate of pay and deductions. Transactions, when processed, update data items in the master file to reflect the most current information.
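
A minimal sketch of the paycheck example: transactions update the master file so it reflects the most current information. The employee IDs, names and pay rates below are invented for illustration.

```python
# Master file: relatively permanent data about each employee.
master = {
    "E001": {"name": "A. Otieno", "rate": 20.0, "ytd_pay": 0.0},
    "E002": {"name": "B. Wanjiru", "rate": 25.0, "ytd_pay": 0.0},
}

# Transactions: hours worked, captured by the TPS this period.
transactions = [("E001", 40), ("E002", 35), ("E001", 5)]

def process(master, transactions):
    """Apply each transaction, updating the master file in place."""
    for emp_id, hours in transactions:
        record = master[emp_id]
        record["ytd_pay"] += hours * record["rate"]

process(master, transactions)
print(master["E001"]["ytd_pay"])  # 900.0  (45 hours at rate 20.0)
print(master["E002"]["ytd_pay"])  # 875.0  (35 hours at rate 25.0)
```

Note how the relatively permanent items (name, rate) stay fixed while the transactions roll the cumulative item (year-to-date pay) forward.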

iii. Produce reports


Reports are significant products of an information system. Scheduled reports are produced on a regular
basis. An information system should also be able to produce special reports quickly based on ‘ad hoc’
or random requests.


iv. Process inquiries


Other outputs of the information system are responses to inquiries using the databases. These may be
regular or ad hoc inquiries. Essentially inquiry processing should make any record or item in the
database easily accessible to authorized personnel.

v. Process interactive support applications


The information system contains applications to support planning, analysis and decision making. The mode of operation is interactive, with the user responding to questions, requesting data and receiving results immediately, altering inputs until a solution or satisfactory result is achieved.

b) Management reporting
This is the function involved in producing outputs for users. These outputs are mainly as reports to
management for planning, control and monitoring purposes. Major outputs of an information system
include:
i. Transaction documents or screens
ii. Preplanned reports
iii. Preplanned inquiry responses
iv. Ad hoc reports and ad hoc inquiry responses
v. User-machine dialog results

Types of decisions
a) Structured/programmable decisions
These decisions tend to be repetitive and well defined e.g. inventory replenishment decisions. A
standardized pre-planned or pre-specified approach is used to make the decision and a specific
methodology is applied routinely. Also the type of information needed to make the decision is known
precisely. They are programmable in the sense that unambiguous rules or procedures can be specified
in advance. These may be a set of steps, flowchart, decision table or formula on how to make the
decision. The decision procedure specifies information to be obtained before the decision rules are
applied. They can be handled by low-level personnel and may be completely automated.
It is easy to provide information systems support for these types of decisions. Many structured decisions can be made by the system itself, e.g. rejecting a customer order if the customer's credit with the company is less than the total payment for the order. Yet managers must be able to override these system decisions, because managers have information that the system does not have, e.g. the customer order is not rejected because alternative payment arrangements have been made with the customer.

In other cases the system may make only part of the decision required for a particular activity, e.g. it may determine the quantities of each inventory item to be reordered, but the manager may select the most appropriate vendor for the item on the basis of delivery lead time, quality and price.

Examples of such decisions include inventory reorder formulas and rules for granting credit. Information systems requirements include:

o Clear and unambiguous procedures for data input
o Validation procedures to ensure correct and complete input
o Processing input using decision logic

o Presentation of output so as to facilitate action
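
The credit-check rule described above is simple enough to express directly. The function below is an invented sketch: the field names, rule and override flag are illustrative, not from any real system.

```python
def accept_order(order_total, credit_available, manager_override=False):
    """Structured decision: reject the order if available credit is below
    the order total, unless a manager overrides the rule (e.g. because
    alternative payment arrangements have been made)."""
    if manager_override:
        return True
    return credit_available >= order_total

print(accept_order(5_000, 8_000))                         # True  — within credit
print(accept_order(5_000, 2_000))                         # False — rejected by rule
print(accept_order(5_000, 2_000, manager_override=True))  # True  — manager override
```

The rule itself is unambiguous and fully automatable; the override parameter captures the point made above that managers must be able to overrule the system.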

b) Semi-structured/semi-programmable decisions
The information requirements and the methodology to be applied are often known, but some aspects of the decision still rely on the manager's judgment, e.g. selecting the location for a new warehouse. Here the information requirements for the decision, such as land cost and shipping costs, are known, but aspects such as local labour attitudes or natural hazards still have to be judged and evaluated by the manager.
c) Unstructured/non-programmable decisions
These decisions tend to be unique e.g. policy formulation for the allocation of resources. The
information needed for decision-making is unpredictable and no fixed methodology exists. Multiple
alternatives are involved and the decision variables as well as their relationships are too many and/or
too complex to fully specify. Therefore, the manager’s experience and intuition play a large part in
making the decision.
In addition there are no pre-established decision procedures, either because:
 The decision is too infrequent to justify the organizational cost of preparing a procedure, or
 The decision process is not understood well enough, or
 The decision process is too dynamic to allow a stable pre-established decision procedure.
Information systems requirements for support of such decisions are:
 Access to data and various analysis and decision procedures.
 Data retrieval must allow for ad hoc retrieval requests
 Interactive decision support systems with generalized inquiry and analysis capabilities.
Example: Selecting a CEO of a company.

TYPES OF INFORMATION SYSTEMS


In today's information and communication age, there is constant reference to information systems and the management of information systems. In the digital age, data storage and retrieval are done through various systems and interfaces.

The major types of systems include:

1. Transaction Processing Systems (TPS)
2. Management Information Systems (MIS)
3. Decision Support Systems (DSS)
4. Executive Support Systems (ESS)
5. Expert Systems

1. Transaction Processing System (TPS)


A transaction is any business related exchange, such as a sale to a client or a payment to a vendor.
Transaction processing systems process and record transactions as well as update records. They
automate the handling of data about business activities and transactions. They record daily routine
transactions such as sales orders from customers, or bank deposits and withdrawals. Although they are
the oldest type of business information system around and handle routine tasks, they are critical to business organizations. For example, what would happen if a bank's system that records deposits and withdrawals and maintains account balances disappeared?

TPS are vital for the organization, as they gather all the input necessary for other types of systems.
Think of how one could generate a monthly sales report for middle management or critical marketing
information to senior managers without TPS. TPS provide the basic input to the company’s database.
A failure in TPS often means disaster for the organization. Imagine what happens when an airline reservation system fails: all operations stop and no transactions can be carried out until the system is up and running again. Long queues form in front of ATMs and tellers when a bank's TPS crashes.

Transaction processing systems were created to maintain records and do simple calculations faster,
more accurately and more cheaply than people could do the tasks.

Some of the characteristics of TPS include:


 TPS are large and complex in terms of the number of system interfaces with the various users and databases, and are usually developed by MIS experts.
 TPS control the collection of specific data in specific formats and in accordance with the rules, policies and goals of the organisation (standard format).
 They accumulate information from the internal operations of the business.
 They are general in nature, applied across organisations.
 They are continuously evolving.

The goal of TPS is to improve transaction handling by:


 Speeding it up
 Using fewer people
 Improving efficiency and accuracy
 Integrating with other organizational information systems
 Providing information that was not available previously

Examples: airline reservation systems, automated teller machines (ATMs), order processing systems, registration systems, payroll systems and point-of-sale systems.

2. Management Reporting System (MRS)

Management Reporting Systems (MRS) formerly called Management information systems (MIS)
provide routine information to decision makers to make structured, recurring and routine decisions,
such as restocking decisions or bonus awards. They focus on operational efficiency and provide
summaries of data. An MRS takes the relatively raw data available through a TPS and converts it into a meaningful, aggregated form that managers need to carry out their responsibilities. They generate


information for monitoring performance (e.g. productivity information) and maintaining coordination
(e.g. between purchasing and accounts payable).

The main input to an MRS is data collected and stored by transaction processing systems. An MRS further processes transaction data to produce information useful for specific purposes. Generally, all MRS outputs have been pre-programmed by information systems personnel. Outputs include:

a) Scheduled Reports
These were originally the only reports provided by early management information systems. Scheduled
reports are produced periodically, such as hourly, daily, weekly or monthly. An example might be a
weekly sales report that a store manager gets each Monday showing total weekly sales for each
department compared to sales this week last year or planned sales.

b) Demand Reports
These provide specific information upon request. For instance, if the store manager wanted to know
how weekly sales were going on Friday, and not wait until the scheduled report on Monday, she could
request the same report using figures for the part of the week already elapsed.

c) Exception Reports
These are produced to describe unusual circumstances. For example, the store manager might receive
a report for the week if any department’s sales were more than 10% below planned sales.
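
The store manager's exception report can be sketched in a few lines. The department names and sales figures below are invented; the rule lists only departments more than 10% below planned sales.

```python
planned = {"grocery": 50_000, "clothing": 30_000, "electronics": 40_000}
actual  = {"grocery": 48_000, "clothing": 24_000, "electronics": 41_000}

def exception_report(planned, actual, threshold=0.10):
    """Return only the departments whose shortfall exceeds the threshold,
    mapped to the shortfall as a percentage of plan."""
    exceptions = {}
    for dept, plan in planned.items():
        shortfall = (plan - actual[dept]) / plan
        if shortfall > threshold:
            exceptions[dept] = round(shortfall * 100, 1)
    return exceptions

print(exception_report(planned, actual))  # {'clothing': 20.0}
```

Grocery (4% below plan) and electronics (above plan) are filtered out, which is the point of an exception report: the manager sees only the unusual circumstances.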

Some of the characteristics of MRS include:


 MIS professionals, rather than end users, usually design MRS, using life-cycle-oriented development methodologies.
 They are large and complex in terms of the number of system interfaces with the various users and databases.
 MRS are built for situations in which information requirements are reasonably well known and are expected to remain relatively stable. This limits the informational flexibility of MRS but ensures that a stable informational environment exists.
 They do not directly support the decision-making process as a search for alternative solutions to problems; information gained through MRS is used in the decision-making process.
 They are oriented towards reporting on the past and the present rather than projecting the future, although they can be adapted to do predictive reporting.
 MRS have limited analytical capabilities. They are not built around elaborate models, but rather rely on summarisation and extraction from the databases according to given criteria.

3. Decision Support System (DSS)


Decision support systems provide problem-specific support for non-routine, dynamic and often
complex decisions or problems. DSS users interact directly with the information systems, helping to
model the problem interactively. DSS basically provide support for non-routine decisions or problems
and an interactive environment in which decision makers can quickly manipulate data and models of
business operations. A DSS might be used for example, to help a management team decide where to
locate a new distribution facility. This is a non-routine, dynamic problem. Each time a new facility
must be built, the competitive, environmental, or internal contexts are most likely different. New

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 92

competitors or government regulations may need to be considered, or the facility may be needed due
to a new product line or business venture.

When the structure of a problem or decision changes, or the information required to address it is
different each time the decision is made, then the needed information cannot be supplied by an MIS,
but must be interactively modelled using a DSS. DSS provide support for analytical work in semi-
structured or unstructured situations. They enable managers to answer ‘What if’ questions by providing
powerful modelling tools (with simulation and optimization capabilities) and to evaluate alternatives
e.g. evaluating alternative marketing plans.

DSS have less structure and predictable use. They are user-friendly and highly interactive. Although
they use data from the TPS and MIS, they also allow the inclusion of new data, often from external
sources such as current share prices or prices of competitors.

DSS components include:


a) Database (usually extracted from MIS or TPS)
b) Model Base
c) User Dialogue/Dialogue Module
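As a rough illustration of how a DSS model base supports ‘What if’ analysis, the sketch below evaluates a toy annual-cost model for a distribution facility under a baseline scenario and an alternative one. All parameter names and figures are invented assumptions, not a real DSS.

```python
# A toy "model base": a what-if cost model for locating a distribution
# facility. Every figure here is an illustrative assumption.
def annual_cost(rent_per_sqm, sqm, wage, staff, shipping_per_unit, units):
    # Total yearly cost = rent + payroll + outbound shipping.
    return rent_per_sqm * sqm + wage * staff + shipping_per_unit * units

# Baseline scenario versus "what if shipping costs rise 20%?".
base = annual_cost(120, 5000, 30000, 25, 2.50, 400000)
what_if = annual_cost(120, 5000, 30000, 25, 2.50 * 1.2, 400000)
print(base, what_if)
```

Interactively changing one parameter and re-evaluating the model is the essence of the what-if support the text describes.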

4. Executive information system (EIS) / Executive Support Systems (ESS)


EIS provide a generalized computing and communication environment to senior managers to support
strategic decisions. They draw data from the MIS and allow communication with external sources of
information. But unlike DSS, they are not designed to use analytical models for specific problem
solving. EIS are designed to facilitate senior managers’ access to information quickly and effectively.

ESS have menu-driven, user-friendly interfaces, interactive graphics to help visualization of the situation
and communication capabilities that link the senior executives to the external databases they require.

Top executives need ESS because they are busy and want information quickly and in an easy to read
form. They want to have direct access to information and want their computer set-up to directly
communicate with others. They want structured forms for viewing and want summaries rather than
details.

5. Expert System (ES)


It is an advanced DSS that provides expert advice by asking users a sequence of questions dependent
on prior answers that lead to a conclusion or recommendation. It is made of a knowledge base (database
of decision rules and outcomes), inference engine (search algorithm), and a user interface. ES use
artificial intelligence technology. It attempts to codify and manipulate knowledge rather than
information.
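A minimal sketch of the components named above (a knowledge base of decision rules plus an inference engine that searches them) might look as follows. The rules and facts are invented for illustration; real expert systems use far richer knowledge representations.

```python
# Minimal expert-system sketch: a knowledge base of IF-THEN rules and a
# forward-chaining inference engine. All rule and fact names are invented.
RULES = [
    ({"revenue_falling", "costs_rising"}, "margin_at_risk"),
    ({"margin_at_risk"},                  "recommend_cost_review"),
]

def infer(facts, rules=RULES):
    """Repeatedly fire rules whose conditions are satisfied until no new
    conclusion can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"revenue_falling", "costs_rising"})))
# ['costs_rising', 'margin_at_risk', 'recommend_cost_review', 'revenue_falling']
```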

ES may expand the capabilities of a DSS in support of the initial phase of the decision making process.
It can assist the second (design) phase of the decision making process by suggesting alternative
scenarios for "what if" evaluation. It assists a human in the selection of an appropriate model for the
decision problem. This is an avenue for automatic model management; the user of such a system
would need less knowledge about models. ES can simplify model building; simulation models in
particular lend themselves to this approach. ES can provide an explanation of the result obtained with a DSS.

This would be a new and important DSS capability. ES can act as tutors. In addition ES capabilities
may be employed during DSS development; their general potential in software engineering has been
recognised.

Other Information Systems


These are special purpose information systems. They are more recent types of information systems that
cannot be characterized as one of the types discussed above.

(i) Office Automation Systems (OAS)


Office automation systems support general office work for handling and managing documents and
facilitating communication. Text and image processing systems evolved from word processors to
desktop publishing, enabling the creation of professional documents with graphics and special layout
features. Spreadsheets, presentation packages like PowerPoint, personal database systems and note-taking
systems (appointment book, notepad, card file) are part of OAS.
In addition, OAS include communication systems for transmitting messages and documents (e-mail)
and teleconferencing capabilities.

(ii) Artificial Intelligence Systems


Artificial intelligence is a broad field of research that focuses on developing computer systems that
simulate human behaviour, that is, systems with human characteristics. These characteristics include
vision, reasoning, learning and natural language processing.
Examples: Expert systems, Neural Networks, Robotics.

(iii) Knowledge Based Systems/ Knowledge Work Systems (KWS)


Knowledge Work Systems support highly skilled knowledge workers in the creation and integration
of new knowledge in the company. Computer Aided Design (CAD) systems used by product designers
not only allow them to easily make modifications without having to redraw the entire object (just like
word processors for documents), but also enable them to test the product without having to build
physical prototypes.
Architects use CAD software to create, modify, evaluate and test their designs; such systems can
generate photo-realistic pictures, simulating the lighting in rooms at different times of the day, perform
calculations, for instance on the amount of paint required. Surgeons use sophisticated CAD systems to
design operations. Financial institutions use knowledge work systems to support trading and portfolio
management with powerful high-end PCs. These allow managers to get instantaneous analysed results
on huge amounts of financial data and provide access to external databases.
Workflow systems are rule-based programs (IF ‘this happens’ THEN ‘take this action’) that
coordinate and monitor the performance of a set of interrelated tasks in a business process.
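The IF-THEN rule style just described can be illustrated with a toy routing step from a hypothetical invoice-approval workflow; the thresholds and action names are made up for the example.

```python
# Sketch of one rule-based workflow step: IF 'this happens' THEN 'take
# this action'. Thresholds and action names are illustrative only.
def route_invoice(amount):
    if amount > 10000:
        return "escalate_to_finance_director"
    elif amount > 1000:
        return "manager_approval"
    else:
        return "auto_approve"

print(route_invoice(5000))  # manager_approval
```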


(iv) Geographic Information Systems (GIS)


Geographic information systems include digital mapping technology used to store and manipulate data
relative to locations on the earth. An example is a marketing GIS database. A GIS is different from a
Global Positioning System (GPS). The latter is a satellite-based system that allows accurate location
determination.
(v) Virtual Reality Systems
Virtual reality systems include 3-dimensional simulation software, where often the user is immersed
in a simulated environment using special hardware (such as gloves, data suits or head mounted
displays). Sample applications include flight simulators, interior design or surgical training using a
virtual patient.
(vi) E-Commerce/E-Business Systems
E-Commerce involves business transactions executed electronically between parties. Parties can be
companies, consumers, public sector organizations or governments.
(vii) Enterprise Resource Planning (ERP) systems
ERP systems are a set of integrated programs that handle most or all of an organization’s key business
processes at all its locations in a unified manner. Different ERP packages have different scopes. They
often coordinate planning, inventory control, production and ordering. Most include finance and
manufacturing functions, but many are now including customer relationship management, distribution,
human resource as well as supply chain management. ERP systems are integrated around a common
database. Some well-known ERP vendors are ORACLE, SAP and PeopleSoft.
For instance a manufacturing company may prepare a demand forecast for an item for the next month.
The ERP system would then check the existing inventory of that item to see if there is enough on hand to meet
the demand. If not, the ERP system schedules production of the shortfall, ordering additional raw
material and shipping materials if necessary.
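The inventory check described in that example can be sketched as a simple function; the names and quantities are illustrative and do not reflect any real ERP package.

```python
# Sketch of the ERP logic described above: compare forecast demand with
# on-hand inventory and schedule production of any shortfall.
def plan_production(forecast_demand, on_hand):
    shortfall = max(0, forecast_demand - on_hand)
    if shortfall == 0:
        return {"action": "none", "quantity": 0}
    # Not enough stock: schedule production (and, in a full ERP,
    # raw-material orders and shipping) for the shortfall.
    return {"action": "schedule_production", "quantity": shortfall}

print(plan_production(1200, 1500))  # {'action': 'none', 'quantity': 0}
print(plan_production(1200, 700))   # {'action': 'schedule_production', 'quantity': 500}
```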
(viii) Electronic Funds Transfer (EFT)
EFT is the exchange of money via telecommunications without currency actually changing hands. EFT
refers to any financial transaction that transfers a sum of money from one account to another
electronically. Usually, transactions originate at a computer at one institution (location) and are
transmitted to a computer at another institution (location) with the monetary amount recorded in the
respective organization’s accounts. Because of the potential high volume of money being exchanged,
these systems may be in an extremely high-risk category. Therefore, access security and authorization
of processing are important controls.
Security in an EFT environment is extremely important. Security includes methods used by the
customer to gain access to the system, the communications network and the host or application-
processing site. Individual customer access to the EFT system is generally controlled by a plastic card
and a personal identification number (PIN). Both items are required to initiate a transaction.
(ix) Automated Teller Machine (ATM)
An ATM is a specialized form of point-of-sale terminal designed for unattended use by a customer
of a financial institution. These customarily allow a range of banking and debit operations, especially
financial deposits and cash withdrawals. ATMs are usually located in uncontrolled areas and utilize


unprotected telecommunications lines for data transmissions. Therefore the system must provide high
levels of logical and physical security for both the customer and the machinery.

Recommended internal control guidelines for ATMs include the following:

 Review measures to establish proper customer identification and maintenance of their confidentiality
 Review file maintenance and retention systems to trace transactions
 Review and maintain exception reports to provide an audit trail
 Review daily reconciliation of ATM transactions.

Information Systems versus Information Technology


It is often observed that the terms information system and information technology are used
interchangeably. In a literal sense, information technology is a subset of information systems.
Information systems consist of people, processes, machines and information technology. The great
advancement in information systems is due to development in information technology and
introduction of computers.

Information System
An information system can be defined as a coordinated network of components which act
together towards producing, distributing and/or processing information. An important characteristic of
computer-based information systems is precision, which may not apply to other types.

In any given organization, information systems can be classified based on the usage of the information.
Therefore, information systems in business can be divided into operations support systems and
management support systems.

Information Technology
Every day, knowingly or unknowingly, everyone is utilizing information technology. It has grown
rapidly and covers many areas of our day-to-day life, like movies, mobile phones, the internet, etc.

Information technology can be broadly defined as the integration of computers with
telecommunications equipment for storing, retrieving and manipulating data. According to the Information
Technology Association of America, information technology is defined as “the study, design,
development, application, implementation, support or management of computer-based information
systems.”

Information technology greatly enhances the performance of the economy; it provides an edge in solving
social issues as well as making information systems affordable and user-friendly.

Information technology has brought big changes to our daily lives, be it in education, life at home, the
workplace, communication and even the functioning of government.


Comparison of Information System and Information Technology


Information systems and information technology are similar in many ways but at the same time they
are different. The following are some aspects of information systems as well as information technology.

 Origin
Information systems have been in existence since the pre-mechanical era, in the form of books, drawings, etc.
However, the origin of information technology is mostly associated with the invention of computers.

 Development
Information systems have undergone a great deal of evolution, i.e. from manual record keeping to
current cloud storage systems. Similarly, information technology is seeing constant change, with
ever-faster processors and constantly shrinking storage devices.

 Business Application
Businesses have used information systems in forms ranging from manual books of accounts to
modern accounting software such as Tally. The mode of communication has also undergone big
changes, for example from the letter to e-mail. Information technology has helped drive efficiency
across organizations with improved productivity and precision manufacturing.

Future of Information System and Information Technology


Information technology has shown exponential growth in the last decade, leading to more
sophisticated information systems. Today’s information technology has tremendously improved the
quality of life. Modern medicine has benefited the most, with better information systems built on the
latest information technology.

Information systems have been known to mankind in one form or the other as a resource for decision
making. However, with the advent of information technology, information systems have become
sophisticated and their usage has proliferated across all walks of life. Information technology has
helped manage large amounts of data and turn them into useful and valuable information.

SYSTEM IN A FUNCTIONAL PERSPECTIVE


Information systems can be classified by the specific organizational function they serve as well as by
organizational level. We now describe typical information systems that support each of the major
business functions and provide examples of functional applications for each organizational level.

Sales and Marketing Systems


The sales and marketing function is responsible for selling the organization’s products or services.
Marketing is concerned with identifying the customers for the firm’s products or services,
determining what customers need or want, planning and developing products and services to meet
their needs, and advertising and promoting these products and services. Sales is concerned with
contacting customers, selling the products and services, taking orders, and following up on sales.
Sales and marketing information systems support these activities.


The table below shows that information systems are used in sales and marketing in a number of ways.
At the strategic level, sales and marketing systems monitor trends affecting new products and sales
opportunities, support planning for new products and services, and monitor the performance of
competitors. At the management level, sales and marketing systems support market research,
advertising and promotional campaigns, and pricing decisions. They analyze sales performance and
the performance of the sales staff. At the operational level, sales and marketing systems assist in
locating and contacting prospective customers, tracking sales, processing orders, and providing
customer service support.

System                     Description                                    Organizational Level

Order processing           Enter, process and track orders                Operational
Pricing analysis           Determine prices for products and services     Management
Sales trend forecasting    Prepare five-year sales forecasts              Strategic

Manufacturing and Production Systems


The manufacturing and production function is responsible for actually producing the firm’s goods and
services. Manufacturing and production systems deal with the planning, development, and
maintenance of production facilities; the establishment of production goals; the acquisition, storage,
and availability of production materials; and the scheduling of equipment, facilities, materials, and
labor required to fashion finished products. Manufacturing and production information systems
support these activities.

The table below shows some typical manufacturing and production information systems arranged by
organizational level. Strategic-level manufacturing systems deal with the firm’s long-term
manufacturing goals, such as where to locate new plants or whether to invest in new manufacturing
technology. At the management level, manufacturing and production systems analyze and monitor
manufacturing and production costs and resources. Operational manufacturing and production
systems deal with the status of production tasks.

System                     Description                                        Organizational Level

Machine control            Control the action of machines and equipment       Operational
Production planning        Decide when and how many products to produce       Management
Facilities location        Decide where to locate new production facilities   Strategic

Finance and Accounting Systems


The finance function is responsible for managing the firm’s financial assets, such as cash, stocks,
bonds, and other investments, to maximize the return on these financial assets. The finance function is
also in charge of managing the capitalization of the firm (finding new financial assets in stocks,
bonds, or other forms of debt). To determine whether the firm is getting the best return on its
investments, the finance function must obtain a considerable amount of information from sources
external to the firm.

The accounting function is responsible for maintaining and managing the firm’s financial records—
receipts, disbursements, depreciation, payroll—to account for the flow of funds in a firm. Finance and
accounting share related problems—how to keep track of a firm’s financial assets and fund flows.
They provide answers to questions such as these: What is the current inventory of financial assets?
What records exist for disbursements, receipts, payroll, and other fund flows?

The table below shows some of the typical finance and accounting information systems found in large
organizations. Senior management uses finance and accounting systems to establish long-term
investment goals for the firm and to provide long-range forecasts of the firm’s financial performance.
Middle management uses systems to oversee and control the firm’s financial resources. Operational
management uses finance and accounting systems to track the flow of funds in the firm through
transactions, such as pay-checks, payments to vendors, securities reports, and receipts.

System                     Description                     Group

Accounts receivable        Track money owed to the firm    Operational management
Budgeting                  Prepare short-term budgets      Middle management
Profit planning            Plan long-term profits          Senior management

Human Resources Systems


The human resources function is responsible for attracting, developing, and maintaining the firm’s
workforce. Human resources information systems support activities such as identifying potential
employees, maintaining complete records on existing employees, and creating programs to develop
employees’ talents and skills.

Human resources systems help senior management identify the manpower requirements (skills,
educational level, types of positions, number of positions, and cost) for meeting the firm’s long-term
business plans. Middle management uses human resources systems to monitor and analyze the
recruitment, allocation, and compensation of employees. Operational management uses human
resources systems to track the recruitment and placement of the firm’s employees.

Examples of Human Resources Information Systems

System                     Description                                                    Group

Training and development   Track employee training, skills and performance appraisals     Operational management
Compensation analysis      Monitor the range and distribution of employee wages           Middle management
Human resource planning    Plan the long-term labour force needs of the organization      Senior management

Internal Technology Framework: 7S Framework


In the modern age of cutting-edge technology and continuous innovation, product life cycles are ever
shortening. There is constant pressure on companies to differentiate themselves from the competition
and earn customer satisfaction. In such a business environment, it is essential that the internal
organization network is strong and efficient enough to deal with any kind of change.


The 7S framework introduced by McKinsey is one of the ways through which analysis can be done to
determine the efficiency of an organization in meeting its strategic objectives.

The 7S model is used to study and suggest areas within a company which need improvement, to
examine the effects of a change in strategy, and to check internal alignment with every merger and acquisition.

7S Framework

The 7S framework consists of seven factors which affect organizational effectiveness. These factors
are strategy, organizational structure, IT systems, shared values, employee skills, management style
and staff. They can be broadly categorized into hard elements (strategy, structure, systems) and soft
elements (shared values, skills, style and staff). The hard elements are the ones which are under the
direct control of management. The soft elements are not in the direct control of management and are
driven by the internal culture.

The 7 factors as per the framework can be defined as follows:

 Strategy: It is defined as the action plan working towards the organization’s defined objective.
 Structure: It is defined as the design of organization-employee interaction to meet the defined
objective.
 Systems: It is defined as the information systems in which the organization has invested to fulfil its
defined objective.
 Staff: It is defined as the workers employed by the organization.
 Style: It is defined as the approach adopted by the leadership to interact with employees,
suppliers and customers.
 Skills: It is defined as the characteristics of the employees associated with the organization.
 Shared Values: It is the central piece of the whole 7S framework, the concept on the basis of
which the organization has decided to achieve its objective.

Usage of 7S Framework
The basis of the 7S framework is that, for an organization to meet its objective, it is essential that all
seven elements are in sync and mutually balancing. The model is used to identify which of the seven
factors need to be rebalanced to align with change in the organization.

The 7S framework is helpful in identifying the pain points which create a hurdle to organizational
growth.

Technology and 7S Framework

In the digital age, technology and technology-driven information systems are both game changers as
far as meeting organizational objectives is concerned. Companies are moving towards automation,
cloud computing, etc. This has made technology the central nervous system of the organization.

The 7S framework is applicable across all industries and companies. It is one of the premier models
used to measure organizational effectiveness. In this challenging environment, the strategy of an
organization is constantly evolving, and it is essential for the organization to look back upon its seven
elements to identify the sources hampering its growth.


An organization can use the 7S framework to assess its position relative to its existing strategy.

ENTERPRISE APPLICATION AND THE BUSINESS PROCESS INTEGRATION
The development of technology over the years has led to most systems within an organisation existing
in heterogeneous environments. That is to say, different applications were developed in varying
languages, operate on different hardware and are available on numerous platforms. The problem lay in
the fact that, when implementing systems, decisions on the technology employed differed from
department to department and also had some dependence on the latest trends. What emerges is that
these systems serve only departmental needs. Information and process sharing across an
organisation is not accommodated. These types of systems are known as ‘stovepipes’.

Each of these stovepipe systems held independent data; it was recognised that customer information
and the sharing of this information across departments was extremely valuable to an enterprise.
Allowing the disparate systems to interoperate became increasingly important and necessary. As
organisations grew, so too did the desire to integrate key systems with clients and vendors.

Research has shown that during software development, a third of the time is dedicated to the problem
of creating interfaces and points of integration for existing applications and data stores. Clearly, the
idea and pursuit of application integration is not something new. What is new are the approach and the
ideas that Enterprise Application Integration (EAI) encompasses and the techniques it uses. In order
for it to be a success and a realistic solution, applying EAI requires the involvement of the entire
enterprise: business processes, applications, data, standards and platforms.

Business Process
The focus here is on combining the tasks, procedures, required input and output information and the
tools needed at each stage of a process. It is imperative that an enterprise identifies all processes that
contribute to the exchange of data within the organisation. This allows organizations to streamline
operations, reduce costs and improve responsiveness to customer demands.

Application
The aim here is on taking one application’s data and/or functionality and merging them with that of
another application. This can be realised in a number of ways. For example, business-to-business
integration, web integration, or building websites that are capable of interacting with numerous
systems within the business.

Data and Standards


This addresses the need to have a global standard by which data can be shared and distributed across
an enterprise’s network of systems. Without this format, the two aforementioned integrations would
not be viable. To achieve this, all data and its location must be specified, recorded, and a metadata
model built.


Platform
This provides a secure and reliable means for a corporation’s heterogeneous systems to communicate
and transfer data from one application to another without running into problems.

There are two types of logical integration architecture that EAI employs:

a) Direct Point-to-point and


b) Middleware-based integration.

a) Point-to-point Integration
When dealing with very few applications, this method is certainly adequate. Point-to-point integration
is usually pursued because of its ease and speed of implementation. It must be stressed though, that
the efficiency of this method deteriorates as you try and integrate more systems. So, although to begin
with you only have a few systems, consideration must go into the future; scalability is a huge concern.

b) Middleware
An intermediate layer provides generic interfaces through which the integrated systems are able to
communicate. Middleware performs tasks such as routing and passing data. Each of the interfaces
defines a business process provided by an application. Adding or replacing applications will not
affect the other applications.

Overview
Enterprise application integration is an integration framework composed of a collection of
technologies and services which form a middleware to enable integration of systems and applications
across the enterprise.

Supply chain management applications (for managing inventory and shipping), customer relationship
management applications (for managing current and potential customers), business intelligence
applications (for finding patterns from existing data from operations), and other types of applications
(for managing data such as human resources data, health care, internal communications, etc.) typically
cannot communicate with one another in order to share data or business rules. For this reason, such
applications are sometimes referred to as islands of automation or information silos. This lack of
communication leads to inefficiencies, wherein identical data are stored in multiple locations, or
straightforward processes cannot be automated.

Enterprise Application Integration (EAI) is the process of linking such applications within a single
organization together in order to simplify and automate business processes to the greatest extent
possible, while at the same time avoiding having to make sweeping changes to the existing
applications or data structures. In the words of the Gartner Group, EAI is the “unrestricted sharing of
data and business processes among any connected application or data sources in the enterprise.”

One large challenge of EAI is that the various systems that need to be linked together often reside on
different operating systems, use different database solutions and different computer languages, and in
some cases are legacy systems that are no longer supported by the vendor who originally created


them. In some cases, such systems are dubbed "stovepipe systems" because they consist of
components that have been jammed together in a way that makes it very hard to modify them in any
way.

Improving Connectivity
If integration is applied without following a structured EAI approach, point-to-point connections grow
across an organization. Dependencies are added on an impromptu basis, resulting in a complex
structure that is difficult to maintain. This is commonly referred to as spaghetti, an allusion to the
programming equivalent of spaghetti code. For example:

The number of connections needed to have fully meshed point-to-point connections with n points is
given by n(n − 1)/2. Thus, for ten applications to be fully integrated point-to-point, 10 × 9 / 2, or 45,
point-to-point connections are needed.
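The count above can be checked with a one-line function implementing n(n − 1)/2:

```python
# Number of links in a fully meshed point-to-point integration of n
# applications: each of the n applications connects to the other n - 1,
# and each link is shared by two applications, giving n(n - 1) / 2.
def point_to_point_links(n):
    return n * (n - 1) // 2

print(point_to_point_links(10))  # 45 connections for ten applications
```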

However, EAI is not just about sharing data between applications; it focuses on sharing both business
data and business processes. For middleware analysts, attending to EAI involves looking at a system of
systems, which involves large-scale interdisciplinary problems with multiple, heterogeneous,
distributed systems that are embedded in networks at multiple levels. One of the biggest mistakes that
organizations make in solving this problem is focusing excessively on low-level, bottom-up IT
approaches, often driven from development-oriented technical teams. In contrast, a paradigm shift is
emerging to start EAI rationalization efforts with effective top-down, business-oriented analysis found
in disciplines such as Enterprise Architecture, Business Architecture and Business Process
Management. The business-oriented approach can enable a cohesive business integration strategy
which is supported by, instead of dictated by, technical and data integration strategies.

Purposes
EAI can be used for different purposes:

i. Data integration
It ensures that information in multiple systems is kept consistent. This is also known as enterprise
information integration (EII).

ii. Vendor independence


It extracts business policies or rules from applications and implements them in the EAI system, so that
even if one of the business applications is replaced with a different vendor's application, the business
rules do not have to be re-implemented.

iii. Common façade


An EAI system can front-end a cluster of applications, providing a single consistent access interface
to these applications and shielding users from having to learn to use different software packages.

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 103

Integration Patterns
There are two patterns that EAI systems implement:

1. Mediation (intra-communication)
Here, the EAI system acts as the go-between or broker between multiple applications. Whenever an
interesting event occurs in an application (for instance, new information is created or a new
transaction completed) an integration module in the EAI system is notified. The module then
propagates the changes to other relevant applications.

2. Federation (inter-communication)
In this case, the EAI system acts as the overarching facade across multiple applications. All event
calls from the 'outside world' to any of the applications are front-ended by the EAI system. The EAI
system is configured to expose only the relevant information and interfaces of the underlying
applications to the outside world, and performs all interactions with the underlying applications on
behalf of the requester.

Both patterns are often used concurrently. The same EAI system could be keeping multiple
applications in sync (mediation), while servicing requests from external users against these
applications (federation).
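A mediation broker of the kind described above can be sketched in a few lines (a toy illustration assuming a simple publish/subscribe interface; the class and method names are invented, not any particular EAI product's API):

```python
# Minimal sketch of the mediation pattern: a broker is notified of an
# event in one application and propagates it to every other registered
# application. All names here are illustrative.

class MediationBroker:
    def __init__(self):
        self.subscribers = {}  # event type -> list of handler callables

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Propagate the change to every application interested in it.
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

broker = MediationBroker()
received = []
broker.subscribe("customer.created", lambda p: received.append(("crm", p)))
broker.subscribe("customer.created", lambda p: received.append(("billing", p)))
broker.publish("customer.created", {"id": 42, "name": "Acme Ltd"})
print(received)  # both the CRM and billing handlers saw the event
```

In the federation pattern, the same broker object would instead sit in front of the applications and answer external requests on their behalf.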

Access patterns
EAI supports both asynchronous and synchronous access patterns, the former being typical in the
mediation case and the latter in the federation case.

Lifetime patterns
An integration operation could be short-lived (e.g. keeping data in sync across two applications could
be completed within a second) or long-lived (e.g. one of the steps could involve the EAI system
interacting with a human work flow application for approval of a loan that takes hours or days to
complete).

Technologies
Multiple technologies are used in implementing each of the components of the EAI system:

a. Bus/hub
This is usually implemented by enhancing standard middleware products (application server,
message bus) or implemented as a stand-alone program (i.e., does not use any middleware), acting as
its own middleware.

b. Application connectivity
The bus/hub connects to applications through a set of adapters (also referred to as connectors). These
are programs that know how to interact with an underlying business application. The adapter performs
two-way communication, performing requests from the hub against the application, and notifying the
hub when an event of interest occurs in the application (a new record inserted, a transaction
completed, etc.). Adapters can be specific to an application (e.g., built against the application


vendor's client libraries) or specific to a class of applications (e.g., can interact with any application
through a standard communication protocol, such as SOAP, SMTP or Action Message Format
(AMF)). The adapter could reside in the same process space as the bus/hub or execute in a remote
location and interact with the hub/bus through industry standard protocols such as message queues,
web services, or even use a proprietary protocol. In the Java world, standards such as JCA allow
adapters to be created in a vendor-neutral manner.

c. Data format and transformation


To avoid every adapter having to convert data to/from every other applications' formats, EAI systems
usually stipulate an application-independent (or common) data format. The EAI system usually
provides a data transformation service as well to help convert between application-specific and
common formats. This is done in two steps: the adapter converts information from the application's
format to the bus's common format. Then, semantic transformations are applied on this (converting
zip codes to city names, splitting/merging objects from one application into objects in the other
applications, and so on).
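The two steps described above can be illustrated as follows (a hedged sketch; the field names, the common format, and the zip-to-city lookup table are all invented for illustration):

```python
# Step 1: the adapter maps the application's field names onto the bus's
# common format. Step 2: a semantic transformation enriches the record,
# here converting a zip code to a city name.

ZIP_TO_CITY = {"10001": "New York", "94105": "San Francisco"}

def adapter_to_common(app_record):
    """Step 1: application-specific format -> common bus format."""
    return {"customer_name": app_record["CUST_NM"],
            "zip_code": app_record["ZIP"]}

def semantic_transform(common_record):
    """Step 2: semantic enrichment applied to the common format."""
    enriched = dict(common_record)
    enriched["city"] = ZIP_TO_CITY.get(common_record["zip_code"], "Unknown")
    return enriched

record = semantic_transform(adapter_to_common({"CUST_NM": "Acme", "ZIP": "10001"}))
print(record["city"])  # → New York
```

Because every adapter only converts to and from the common format, adding an nth application requires one new adapter rather than n-1 new pairwise converters.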

d. Integration modules
An EAI system could be participating in multiple concurrent integration operations at any given time,
each type of integration being processed by a different integration module. Integration modules
subscribe to events of specific types and process notifications that they receive when these events
occur. These modules could be implemented in different ways: on Java-based EAI systems, these
could be web applications or EJBs or even POJOs that conform to the EAI system's specifications.

e. Support for transactions


When used for process integration, the EAI system also provides transactional consistency across
applications by executing all integration operations across all applications in a single overarching
distributed transaction (using two-phase commit protocols or compensating transactions).

Communication Architectures
Currently, there are many variations of thought on what constitutes the best infrastructure, component
model, and standards structure for Enterprise Application Integration. There seems to be consensus
that four components are essential for modern enterprise application integration architecture:

i. A centralized broker that handles security, access, and communication. This can be
accomplished through integration servers (like the School Interoperability Framework (SIF)
Zone Integration Servers) or through similar software like the enterprise service bus (ESB)
model that acts as a SOAP-oriented services manager.
ii. An independent data model based on a standard data structure, also known as a canonical data
model. It appears that XML and the use of XML style sheets have become the de facto, and in
some cases de jure, standard for this uniform business language.
iii. A connector, or agent model where each vendor, application, or interface can build a single
component that can speak natively to that application and communicate with the centralized
broker.
iv. A system model that defines the APIs, data flow and rules of engagement to the system such
that components can be built to interface with it in a standardized way.


Although other approaches like connecting at the database or user-interface level have been explored,
they have not been found to scale or be able to adjust. Individual applications can publish messages to
the centralized broker and subscribe to receive certain messages from that broker. Each application
only requires one connection to the broker. This central control approach can be extremely scalable
and highly evolvable.

Enterprise Application Integration is related to middleware technologies such as message-oriented


middleware (MOM), and data representation technologies such as XML. Other EAI technologies
involve using web services as part of service-oriented architecture as a means of integration.
Enterprise Application Integration tends to be data centric. In the near future, it will come to include
content integration and business processes.

Elements of Information System Model


The proliferation of information technology has increased in the last decade, and today's organizations
acknowledge the importance of information systems. It has been accepted worldwide that
information systems provide a competitive edge and are the bedrock of innovation. The six basic
functions of information systems are to capture data, transmit data, store data, retrieve data, manipulate
data and display information.

The elements of an information system are:

a) customers,
b) products and services,
c) business processes, and
d) communication technology.

The design of an information system is based on these elements of the model.

a) Customers
Every information system has end users or customers. An information system can have internal as
well as external customers. Customers are beneficiaries of products and services provided by an
information system. External customers could be people visiting a website for shopping or an
e-commerce transaction, people searching for a cooking recipe, people searching for tax-saving tools, etc.

An internal customer of an information system could be an employee receiving a salary from the
payroll system, an employee checking inventory and stock, etc. Sometimes these employees could be
customers for the product and services; for example, an employee working with a computer
manufacturer could be a customer of the manufactured product.

For a manufacturing organization, the production department would be a customer of the supply
department. Therefore, the information system requirements of each department would be different.
Information systems are designed to serve what is best for external customers. However, information
systems should be flexible enough to support internal requirements also.

b) Products and Services


The result of data transformation is products and services. An information system can generate
products as well as services depending upon the industry it is developed for. In the clothing industry,
designer clothes are produced based on the customer's requirements. Here the completed garments are the product and the custom

design is a service. In internet banking, the customer can accomplish the entire banking task without
visiting the bank. Internet banking, therefore, is a service.

An information system can generate various types of services and products based on its design. An
effective information system needs to satisfy customer expectations. An information system should
provide products and services based on the customer's needs and requirements.

c) Business Processes
Business activity consists of various processes. These processes include talking to the customer,
understanding her requirements, manufacturing the product as per the requirements, providing
post-sales service, etc. A business process may not be structured all the time and may not be formal.
An improvement in a business process directly impacts business performance. An information system
can improve a business process by providing relevant information, by improving a step in the
business process, or by eliminating a step in the business process.

d) Communication Technology
Communication technology and computers are the central pieces of an information system model.
Their presence is required to deliver efficient business processes and customer-delighting products and
services. Infusion of technology within a business creates win-win situations. Technology improves
internal communication via email, chat, etc., and improves external communication through websites,
webinars, etc. Access to valuable information is quicker through an information system, and this can
provide a competitive edge in the digital age.

The information system model highlights the pivotal role an information system plays in bringing
efficiency to any work system.

REVISION EXERCISES
1. Discuss the changing phase of the business environment.
2. Discuss the various characteristics of business environments.
3. Discuss the various types of information systems.
4. Compare and contrast information systems and information technology.
5. Discuss information systems from a functional perspective.
6. Discuss the elements of an information system model.


CHAPTER 4
FILE ORGANIZATION AND APPLICATION

SYNOPSIS
Introduction……………………………………………………… 107
Files and File Structure…………………………………………. 108
File Organisation Methods……………………………………… 109
Processing of Computer Files………………………………….. 113
Database Systems……………………………………………….. 116
Characteristics, Importance and
Limitations of Database Systems……………………………… 122
Data Warehousing………………………………………………. 126

INTRODUCTION
Files stored on magnetic media can be organised in a number of ways, just as in a manual system.
There are advantages and disadvantages to each type of file organisation, and the method chosen will
depend on several factors such as:

• how the file is to be used;


• how many records are processed each time the file is updated;
• whether individual records need to be quickly accessible.

A file is a collection of data, usually stored on disk. As a logical entity, a file enables you to divide
your data into meaningful groups, for example, you can use one file to hold all of a company's product
information and another to hold all of its personnel information. As a physical entity, a file should be
considered in terms of its organization.

File organization refers to the logical relationships among the various records that constitute the file,
particularly with respect to the means of identification and access to any specific record.

File structure refers to the format of the label and data blocks and of any logical record control
information. The organization of a given file may be sequential, relative, or indexed.

File organization is the methodology which is applied to structured computer files. Files contain
computer records which can be documents or information which is stored in a certain way for later
retrieval. File organization refers primarily to the logical arrangement of data (which can itself be
organized in a system of records with correlation between the fields/columns) in a file system. It
should not be confused with the physical storage of the file in some types of storage media. There are
certain basic types of computer file, which can include files stored as blocks of data and streams of
data, where the information streams out of the file while it is being read until the end of the file is


encountered. A program that uses a file needs to know the structure of the file and needs to interpret
its contents.

Methods and Design Paradigm


It is a high-level design decision to specify a system of file organization for a computer software
program or a computer system designed for a particular purpose. Performance is high on the list of
priorities for this design process, depending on how the file is being used. The design of the file
organization usually depends mainly on the system environment, for instance on factors such as whether
the file is going to be used for transaction-oriented processes like OLTP or Data Warehousing, or
whether the file is shared among various processes like those found in a typical distributed system or is
standalone. It must also be asked whether the file is on a network and used by a number of users,
whether it may be accessed internally or remotely, and how often it is accessed.

However, all things considered the most important considerations might be:

 Rapid access to a record or a number of records which are related to each other.
 Adding, modifying, or deleting records.
 Efficiency of storage and retrieval of records.
 Redundancy, as a method of ensuring data integrity.
A file should be organized in such a way that the records are always available for processing with no
delay. This should be done in line with the activity and volatility of the information.

FILES AND FILE STRUCTURE


It is important to understand the difference between a file system and a directory. A file system is a
section of hard disk that has been allocated to contain files. This section of hard disk is accessed by
mounting the file system over a directory. After the file system is mounted, it looks just like any other
directory to the end user.

However, because of the structural differences between the file systems and directories, the data
within these entities can be managed separately.

When the operating system is installed for the first time, it is loaded into a directory structure, as
shown in the following illustration.

File Types
The UNIX filesystem contains several different types of files:

1. Ordinary Files
 Used to store your information, such as some text you have written or an image you have
drawn. This is the type of file that you usually work with.
 Always located within/under a directory file
 Do not contain other files

2. Directories
 Branching points in the hierarchical tree


 Used to organize groups of files


 May contain ordinary files, special files or other directories
 Never contain "real" information which you would work with (such as text). Basically, just
used for organizing files.
 All files are descendants of the root directory, ( named / ) located at the top of the tree.

3. Special Files
 Used to represent a real physical device such as a printer, tape drive or terminal, used for
Input/Output (I/O) operations
 Unix considers any device attached to the system to be a file - including your terminal:
 By default, a command treats your terminal as the standard input file (stdin) from which to
read its input
 Your terminal is also treated as the standard output file (stdout) to which a command's output
is sent
 Usually only found under directories named /dev

4. Pipes
 UNIX allows you to link commands together using a pipe. The pipe acts as a temporary file
which only exists to hold data from one command until it is read by another.
 For example, to pipe the output from one command into another command:
who | wc -l

This command will tell you how many users are currently logged into the system. The standard output
from the who command is a list of all the users currently logged into the system. This output is piped
into the wc command as its standard input. Used with the -l option, this command counts the number
of lines in the standard input and displays the result on its standard output: your terminal.

FILE ORGANIZATION METHOD


Organizing a file depends on what kind of file it happens to be: a file in the simplest form can be a
text file (in other words, a file which is composed of ASCII (American Standard Code for Information
Interchange) text). Files can also be created as binary or executable types (containing elements other
than plain text). Also, files are keyed with attributes which help determine their use by the host
operating system.

Types of File Organisation


The available methods include:

1. serial;
2. sequential;
3. indexed sequential;
4. random access


1. Serial file organisation


The records on a serial file are not in any particular sequence, and so this type of organisation would
not be used for a master file as there would be no way to find a particular record except by reading
through the whole file, starting at the beginning, until the right record was located. Serial files are
used as temporary files to store transaction data.

2. Sequential file organisation


A sequential file contains records organized in the order they were entered. The order of the records is
fixed. The records are stored and sorted in physical, contiguous blocks; within each block the records
are in sequence. Records in these files can only be read or written sequentially.

Once stored in the file, the record cannot be made shorter, or longer, or deleted. However, the record
can be updated if the length does not change. (This is done by replacing the records by creating a new
file.) New records will always appear at the end of the file.

If the order of the records in a file is not important, sequential organization will suffice, no matter how
many records you may have. Sequential output is also useful for report printing or sequential reads
which some programs prefer to do. As with serial organisation, records are stored one after the other,
but in a sequential file the records are sorted into key sequence. Files that are stored on tape are
always either serial or sequential, as it is impossible to write records to a tape in any way except one
after the other. From the computer’s point of view there is essentially no difference between a serial
and a sequential file. In both cases, in order to find a particular record, each record must be read,
starting from the beginning of the file, until the required record is located. However, when the whole
file has to be processed (for example a payroll file prior to payday) sequential processing is fast and
efficient.

3. Index-sequential files
Key searches are improved by this system too. The single-level indexing structure is the simplest one,
where a file whose records are key-pointer pairs serves as the index: each pointer gives the position in
the data file of the record with the given key. A subset of the records, which are evenly spaced along
the data file, is indexed, in order to mark intervals of data records.

This is how a key search is performed: the search key is compared with the index keys to find the
highest index key coming in front of the search key, while a linear search is performed from the
record that the index key points to, until the search key is matched or until the record pointed to by the
next index entry is reached. Regardless of double file access (index + data) required by this sort of
search, the access time reduction is significant compared with sequential file searches.

Let's examine, for the sake of example, a simple linear search on a 1,000-record sequentially organized
file. An average of 500 key comparisons is needed (and this assumes the search keys are uniformly
distributed among the data keys). However, using an index evenly spaced with 100 entries, the total
number of comparisons is reduced to 50 in the index file plus 50 in the data file: a five-to-one
reduction in the operations count!
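The arithmetic in the example above can be checked with a small counting sketch (illustrative only; a real index would be stored in its own file, and exact comparison counts depend on the search procedure used):

```python
# A plain linear search versus a single-level index of 100 evenly
# spaced entries over a 1,000-record sorted data file.

RECORDS = list(range(0, 10000, 10))   # 1,000 sorted keys: 0, 10, ..., 9990
INDEX = RECORDS[::10]                 # every 10th key -> 100 index entries

def linear_search(key):
    """Return the number of key comparisons a linear scan makes."""
    comparisons = 0
    for k in RECORDS:
        comparisons += 1
        if k == key:
            break
    return comparisons

def indexed_search(key):
    """Scan the index first, then a short linear scan of the data file."""
    comparisons = 0
    start = 0
    for i, k in enumerate(INDEX):
        comparisons += 1
        if k > key:
            break
        start = i * 10                # data position the index entry points to
    for k in RECORDS[start:]:
        comparisons += 1
        if k == key:
            break
    return comparisons

# Averaged over every key in the file, the linear search needs about
# 500 comparisons, the indexed search only a small fraction of that.
print(sum(linear_search(k) for k in RECORDS) / len(RECORDS))
print(sum(indexed_search(k) for k in RECORDS) / len(RECORDS))
```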

Hierarchical extension of this scheme is possible, since an index is a sequential file in itself, capable of
being indexed in turn by a second-level index, and so forth. Exploiting the hierarchical
decomposition of the searches further and further to decrease the access time pays
increasing dividends in the reduction of processing time. There is, however, a point when this


advantage starts to be reduced by the increased cost of storage and this in turn will increase the index
access time.

Hardware for index-sequential organization is usually disk-based, rather than tape. Records are
physically ordered by primary key. And the index gives the physical location of each record. Records
can be accessed sequentially or directly, via the index. The index is stored in a file and read into
memory at the point when the file is opened. Also, indexes must be maintained.

Like sequential organization, the data is stored in physically contiguous blocks. However, the
difference is in the use of indexes. There are three areas in the disc storage:

 Primary Area
It contains file records stored by key or ID numbers.

 Overflow Area
It contains records area that cannot be placed in primary area.

 Index Area
It contains the keys of records and their locations on the disc. When there is a need to access records
sequentially by some key value and also to access records directly by the same key value, the
collection of records may be organized in an effective manner called Indexed Sequential
Organization.

You must be familiar with the search process for a word in a language dictionary. The data in the
dictionary is stored in a sequential manner. However, an index is provided in terms of thumb tabs. To
search for a word we do not search sequentially. We access the index, that is, the appropriate thumb
tab, locate an approximate location for the word and then proceed to find the word sequentially.

To implement the concept of indexed sequential file organization, we consider an approach in which
the index part and data part reside in separate files. The index file has a tree structure and the data file
has a sequential structure. Since the data file is sequenced, it is not necessary for the index to have an
entry for each record. The following figure shows a sequential file with a two-level index.

Level 1 of the index holds an entry for each three-record section of the main file. The level 2 indexes
level 1 in the same way.

When new records are inserted in the data file, the sequence of records needs to be preserved and
the index is updated accordingly.

Two approaches used to implement indexes are static indexes and dynamic indexes.

As the main data file changes due to insertions and deletions, the contents of a static index may change
but its structure does not change. In the case of the dynamic indexing approach, insertions and
deletions in the main data file may lead to changes in the index structure. Recall the change in height
of a B-Tree as records are inserted and deleted.

Both dynamic and static indexing techniques are useful depending on the type of application.


4. Random Access
It refers to the ability to access data at random. The opposite of random access is sequential access. To
go from point A to point Z in a sequential-access system, you must pass through all intervening
points. In a random-access system, you can jump directly to point Z. Disks are random access media,
whereas tapes are sequential access media.

The terms random access and sequential access are often used to describe data files. A random-access
data file enables you to read or write information anywhere in the file. In a sequential-access file, you
can only read and write information sequentially, starting from the beginning of the file.

Both types of files have advantages and disadvantages. If you are always accessing information in the
same order, a sequential-access file is faster. If you tend to access information randomly, random
access is better.

Random access is sometimes called direct access.
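Direct access is easy to demonstrate with fixed-length records: knowing the record size, a program can seek straight to record i at byte offset i × record size (a sketch using Python's standard file I/O; the file name and record layout are invented for illustration):

```python
# With fixed-length records, record i starts at byte offset
# i * RECORD_SIZE, so we can seek directly to it instead of
# reading every intervening record from the beginning of the file.
import os
import tempfile

RECORD_SIZE = 16  # bytes per record (fixed length, space-padded)

path = os.path.join(tempfile.mkdtemp(), "accounts.dat")
with open(path, "wb") as f:
    for i in range(100):
        f.write(f"ACCT{i:04d}".ljust(RECORD_SIZE).encode("ascii"))

with open(path, "rb") as f:
    f.seek(57 * RECORD_SIZE)                 # jump straight to record 57
    record = f.read(RECORD_SIZE).decode("ascii").rstrip()

print(record)  # → ACCT0057
```

A sequential read of the same record would have had to pass over the 57 records before it, which is exactly the point-A-to-point-Z contrast described above.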

Problems of Traditional File Based Approach


Each function in an organisation develops specific applications in isolation from other divisions, each
application using its own data files. This leads to the following problems:

 Data redundancy
It causes data to be duplicated in multiple data files.
Redundancy leads to inconsistencies in data representation, e.g. referring to the same person as client
or customer, and in the values of data items across multiple files.

 Program-data dependence
There is a tight relationship between data files and specific programs used to maintain files.

 Lack of flexibility
There is a need to write a new program to carry out each new task.

 Integrity problems
Integrity constraints (e.g. account balance > 0) become part of program code. It is hard to add new
constraints or change existing ones.

 Concurrent access by multiple users is difficult


Concurrent access is needed for performance. Uncontrolled concurrent accesses can lead to
inconsistencies, e.g. two people reading a balance and updating it at the same time.
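The lost-update problem mentioned above can be shown in miniature (plain variables stand in for records in a shared file):

```python
# Two users read the same balance, each writes back an updated value,
# and the second write silently overwrites the first.

balance = 100

read_a = balance        # user A reads the balance
read_b = balance        # user B reads the same balance

balance = read_a + 50   # user A deposits 50 and writes back
balance = read_b + 30   # user B deposits 30: A's deposit is lost

print(balance)  # → 130, not the correct 180
```

Database systems solve this with locking or transactions, which is one motivation for moving beyond the traditional file-based approach.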

External File Structure and File Extensions


Microsoft Windows and MS-DOS File Systems

The external structure of a file depends on whether it is being created on a FAT or NTFS partition.
The maximum filename length on an NTFS partition is 256 characters, and 11 characters on FAT (8-
character name + "." + 3-character extension). NTFS filenames keep their case, whereas FAT filenames


have no concept of case (but case is ignored when performing a search under NTFS). Also, there is
the newer VFAT, which permits 256-character filenames.

UNIX and Apple Macintosh File Systems

The concept of directories and files is fundamental to the UNIX operating system. On Microsoft
Windows-based operating systems, directories are depicted as folders and moving about is
accomplished by clicking on the different icons. In UNIX, the directories are arranged as a hierarchy
with the root directory being at the top of the tree. The root directory is always depicted as /. Within
the / directory, there are subdirectories (e.g.: etc and sys). Files can be written to any directory
depending on the permissions. Files can be readable, writable and/or executable.

Organizing files using Libraries

With the advent of Microsoft Windows 7, the concept of file organization and management improved
drastically through a powerful tool called libraries. A library is a file organization system that brings
together related files and folders stored in different locations on the local computer as well as
networked computers, such that these can be accessed centrally through a single access point. For
instance, various images stored in different folders on the local computer and/or across a computer
network can be accumulated in an image library. An aggregation of similar files can be manipulated,
sorted or accessed conveniently as and when required through a single access point on the computer
desktop by use of a library. This feature is particularly useful for accessing similar or related content,
and also for managing projects using related and common data.

PROCESSING OF COMPUTER FILES


A piece of information used in an application is primarily represented as a group of bits. So far, if we
requested information from the user, when the application exited, we lost all information that the user
had entered. This is because such information was only temporarily stored in the random access
memory (RAM). In some cases, you will want to "keep" information that the user has entered so you
can make the information available the next time the user opens the application. In some other cases,
whether you request information from the user or inherently provide it to the user, you may want
different people working from different computers to use or share the same data. In these and other
scenarios, you must store the information somewhere and retrieve it when necessary. This is the basis
of file processing.

Computer Files
A file is a collection of related data or information that is normally maintained on a secondary storage
device. The purpose of a file is to keep data in a convenient location where they can be located and
retrieved as needed. The term computer file suggests organized retention on the computer that
facilitates rapid, convenient storage and retrieval. As defined by their functions, two general types of
files are used in computer information systems: master files and transaction files.

Streams
File processing consists of creating, storing, and/or retrieving the contents of a file from a
recognizable medium. For example, it is used to save word-processed files to a hard drive, to store a


presentation on floppy disk, or to open a file from a CD-ROM. A stream is the technique or means of
performing file processing. In order to manage files stored in a computer, each file must be able to
provide basic pieces of information about itself. This basic information is specified when the file is
created but can change during the lifetime of a file.

To create a file, a user must first decide where it would be located: this is a requirement. A file can be
located on the root drive. Alternatively, a file can be positioned inside of an existing folder. Based on
security settings, a user may not be able to create a file just anywhere in the (file system of the)
computer. Once the user has decided where the file would reside, there are various means of creating
files that the users are trained to use. When creating a file, the user must give it a name following the
rules of the operating system combined with those of the file system. The most fundamental piece of
information a file must have is a name.

Once the user has created a file, whether the file is empty or not, the operating system assigns basic
pieces of information to it. Once a file is created, it can be opened, updated, modified, renamed, etc.

The Name of a File


Before performing file processing, one of your early decisions will consist of specifying the type of
operation you want the user to perform. For example, the user may want to create a brand new file,
open an existing file, or perform a routine operation on a file. In all or most cases, whether you are
creating a new file or manipulating an existing one, you must specify the name of the file. You can do
this by declaring a string variable but, as we will learn later on, most classes used to create a stream
can take a string that represents the file.

If you are creating a new file, there are certainly some rules you must observe. The name of a file
follows the directives of the operating system. On MS DOS and Windows 3.X (that is, prior to
Microsoft Windows 9X), the file had to use the 8.3 format. The actual name had to have a maximum
of 8 characters with restrictions on the characters that could be used. The user also had to specify
three characters after a period. The three characters, known as the file extension, were used by the
operating system to classify the file. That was all necessary for those 8-bit and 16-bit operating
systems. Various rules have changed. For example, the names of folders and files on Microsoft
Windows >= 95 can have up to 255 characters. The extension of the file is mostly left to the judgment
of the programmer but the files are still using extensions. Applications can also be configured to save
different types of files; that is, files with different extensions.
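As a rough illustration (the folder and file names here are hypothetical, not from the text), creating a named file with an extension might look like this in Python:

```python
# Minimal sketch of creating a file: pick a location, then a name
# plus extension. A temp directory is used only so the example is
# self-contained; "sales2024.txt" is an invented name.
import tempfile
from pathlib import Path

folder = Path(tempfile.mkdtemp())        # where the file will live
f = folder / "sales2024.txt"             # name + ".txt" extension
f.write_text("quarterly sales data")

print(f.name)     # the name given by the user
print(f.suffix)   # the extension the OS uses to classify the file
```

The extension (`.txt` here) is the part left to the programmer's judgment, as the paragraph above notes.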

Master files
Master files contain information to be retained over a relatively long time period. Information in
master files is updated continuously to represent the current status of the business.

An example is an accounts receivable file. This file is maintained by companies that sell to customers
on credit. Each account record will contain such information as account number, customer name and
address, credit limit amount, the current balance owed, and fields indicating the dates and amounts of
purchases during the current reporting period. This file is updated each time the customer makes a
purchase. When a new purchase is made, a new account balance is computed and compared with the
credit limit. If the new balance exceeds the credit limit, an exception report may be issued and the order
may be held up pending management approval.

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 115

Transaction Files
Transaction files contain records reflecting current business activities. Records in transaction files are
used to update master files. To continue with the illustration, records containing data on customer
orders are entered into transaction files. These transaction files are then processed to update the
master files. This is known as posting transaction data to master file. For each customer transaction
record, the corresponding master record is accessed and updated to reflect the last transaction and the
new balance. At this point, the master file is said to be current.
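The posting step described above can be sketched in outline; the account fields, names and amounts are illustrative, not taken from any real file layout:

```python
# Hedged sketch of "posting": transaction records are applied to an
# accounts-receivable master file, held here as a dict keyed on
# account number for simplicity.
master = {
    "A100": {"name": "Mwangi", "credit_limit": 5000.0, "balance": 1200.0},
}
transactions = [
    {"account": "A100", "amount": 800.0},
    {"account": "A100", "amount": 3500.0},
]

for txn in transactions:
    rec = master[txn["account"]]           # access the master record
    new_balance = rec["balance"] + txn["amount"]
    if new_balance > rec["credit_limit"]:  # exception report case
        print(f"EXCEPTION: {txn['account']} would exceed credit limit")
    else:
        rec["balance"] = new_balance       # master file is now current

print(master["A100"]["balance"])
```

The second transaction trips the credit-limit check, so only the first is posted and the balance ends at 2000.0.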

Accessing Files
Files can be accessed
 Sequentially - start at first record and read one record after another until end of file or desired
record is found
o known as “sequential access”
o only possible access for serial storage devices
 Directly - read desired record directly
o known as “random access” or “direct access”
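The two access methods can be contrasted on a file of fixed-length records (the 16-byte record size is an assumption made for illustration):

```python
# Sequential access reads record after record from the start;
# direct (random) access seeks straight to the desired record.
import os
import tempfile

REC = 16  # fixed record length (illustrative)
path = os.path.join(tempfile.mkdtemp(), "accounts.dat")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"ACCT{i:02d}".ljust(REC).encode())  # write 5 records

# Sequential access: start at the first record.
with open(path, "rb") as f:
    first = f.read(REC).decode().strip()

# Direct access: jump straight to record number 3 (0-based).
with open(path, "rb") as f:
    f.seek(3 * REC)
    third = f.read(REC).decode().strip()

print(first, third)   # ACCT00 ACCT03
```

On a serial storage device only the first pattern is possible; `seek()` needs a direct-access device such as a disk.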

Disadvantage of Computer File-based Processing System

Although a computer file-based processing system has many advantages over a manual record-keeping
system, it also has some limitations. The basic disadvantages (or limitations) of a computer file-based
processing system are described below.

a. Data Redundancy
Redundancy means having multiple copies of the same data. In computer file-based processing
system, each application program has its own data files. The same data may be duplicated in more
than one file. The duplication of data may create many problems such as:

To update a specific record, the same data must be updated in all files; otherwise different files
may hold different information about the same item. Valuable storage space is also wasted.

b. Data Inconsistency
Data inconsistency means that different files may contain different information of a particular object
or person. Actually redundancy leads to inconsistency. When the same data is stored in multiple
locations, the inconsistency may occur.

c. Data Isolation
In computer file-based system, data is isolated in separate files. It is difficult to update and to access
particular information from data files.

d. Data Atomicity
Data atomicity means that a record is either entered as a whole or not entered at all.

e. Data Dependence
In computer file-based processing systems, the data stored in file depends upon the application
program through which the file was created. It means that the structure of data files is coupled with
application program.

The physical structure of data files and records are defined in the application program code. It is
difficult to change the structure of data files or records. If you want to change the structure of data file
(or format of file), then you have to modify the application program.

f. Program Maintenance
In computer file-based processing system, the structure of data file is coupled with the individual
application programs. Therefore, any modification to a data file such as size of a data field, its type
etc. requires the modification of the application program also. This process of modifying the program
is referred to as program maintenance.

g. Data Sharing
In computer file-based processing systems, each application program uses its own private data files.
The computer file-based processing systems do not provide the facility to share data of a data file
among multiple users on the network.

h. Data Security
A computer file-based processing system does not provide a proper security system against illegal
access to data. Anyone can easily change or delete valuable data stored in a data file. This is one of
the most serious problems of file-based processing.

i. Incompatible File Format


In computer file-based processing systems, the structure of a data file is coupled with the application
program and depends on the programming language in which the application program was
developed. Files created by different programs may therefore use incompatible formats.

DATABASE SYSTEM
DBMSs are system software that aid in organizing, controlling and using the data needed by
application programs. A DBMS provides the facility to create and maintain a well-organized database.
It also provides functions such as normalization to reduce data redundancy, decrease access time and
establish basic security measures over sensitive data.

DBMS can control user access at the following levels:


 User and the database
 Program and the database
 Transaction and the database
 Program and data field
 User and transaction
 User and data field

The following are some of the advantages of DBMS:


 Data independence for application systems
 Ease of support and flexibility in meeting changing data requirements
 Transaction processing efficiency
 Reduction of data redundancy (similar data being held at more than one point – utilizes more
resources) – have one copy of the data and avail it to all users and applications
 Maximizes data consistency – users have same view of data even after an update


 Minimizes maintenance cost through data sharing


 Opportunity to enforce data/programming standards
 Opportunity to enforce data security
 Availability of stored data integrity checks
 Facilitates ad hoc access to data by terminal users, especially through specially designed query
languages/application generators

Most DBMS have internal security features that interface with the operating system access control
mechanism/package, unless it was implemented in a raw device. A combination of the DBMS security
features and security package functions is often used to cover all required security functions. This dual
security approach however introduces complexity and opportunity for security lapses.

DBMS Architecture
Data elements required to define a database are called metadata. There are three types of metadata:

 conceptual schema metadata,
 external schema metadata and
 internal schema metadata.
If any one of these elements is missing from the data definition maintained within the DBMS, the
DBMS may not be adequate to meet users’ needs. A data definition language (DDL) is a component
used for creating the schema representation necessary for interpreting and responding to the users’
requests.
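As a hedged illustration of a DDL at work, the sketch below uses SQLite as a stand-in DBMS; the customer table and its columns are invented for the example:

```python
# DDL statements (CREATE, ALTER) define the schema; DML (INSERT)
# then works against that schema. SQLite is used only as a
# convenient embedded DBMS.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customer (
    account  TEXT PRIMARY KEY,
    name     TEXT NOT NULL,
    balance  REAL DEFAULT 0.0
)""")                                                             # DDL
con.execute("ALTER TABLE customer ADD COLUMN credit_limit REAL")  # DDL
con.execute("INSERT INTO customer VALUES ('A100', 'Otieno', 250.0, 1000.0)")

# The schema the DDL created, as the DBMS now records it:
cols = [row[1] for row in con.execute("PRAGMA table_info(customer)")]
print(cols)
```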

Data dictionary and directory systems (DD/DS) have been developed to define and store in source and
object forms all data definitions for external schemas, conceptual schemas, the internal schema and all
associated mappings. The data dictionary contains an index and description of all the items stored in
the database. The directory describes the location of the data and access method. Some of the benefits
of using DD/DS include:

 Enhancing documentation
 Providing common validation criteria
 Facilitating programming by reducing the needs for data definition
 Standardizing programming methods

Database Structure
The common database models are:
 Hierarchical database model
 Network database model
 Relational database model
 Object–oriented model

Hierarchical Database Model


This model allows the data to be structured in a parent/child relationship (each parent may have many
children, but each child would be restricted to having only one parent). Under this model, it is difficult
to express relationships when children need to relate to more than one parent. When the data
relationships are hierarchical, the database is easy to implement, modify and search.

A hierarchical structure has only one root. Each parent can have numerous children, but a child can
have only one parent. Subordinate segments are retrieved through the parent segment. Reverse pointers
are not allowed. Pointers can be set only for nodes on a lower level; they cannot be set to a node on a
predetermined access path.

Network Database Model


The model allows children to relate to more than one parent. A disadvantage to the network model is
that such structure can be extremely complex and difficult to comprehend, modify or reconstruct in
case of failure. The network structure is effective in stable environments where the complex
interdependencies of the requirements have been clearly defined.

The network structure is more flexible, yet more complex, than the hierarchical structure. Data
records are related through logical entities called sets. Within a network, any data element can be
connected to any item. Because networks allow reverse pointers, an item can be an owner and a
member of the same set of data. Members are grouped together to form records, and records are
linked together to form a set. A set can have only one owner record but several member records.

Relational Database Model


The model is independent from the physical implementation of the data structure. The relational
database organization has many advantages over the hierarchical and network database models. They
are:
 Easier for users to understand and implement in a physical database system
 Easier to convert from other database structures
 Projection and joint operations (referencing groups of related data elements not stored
together) are easier to implement and creation of new relations for applications is easier to do.
 Access control over sensitive data is easy to implement
 Faster in data search
 Easier to modify than hierarchical or network structures

Relational database technology separates data from the application and uses a simplified data model.
Based on set theory and relational calculations, a relational database models information in a table
structure with columns and rows. Columns, called domains or attributes, correspond to fields. Rows
or tuples are equal to records in a conventional file structure. Relational databases use normalization
rules to minimize the amount of information needed in tables to satisfy users’ structured and
unstructured queries to the database.
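A minimal sketch of these ideas, again with SQLite as a stand-in and invented tables: columns are the attributes, rows are the tuples, and projection and join reference related data held in separate relations:

```python
# Projection keeps selected columns; a join relates tuples across
# two tables on a shared attribute. Data is illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (account TEXT PRIMARY KEY, name TEXT);
CREATE TABLE orders   (id INTEGER PRIMARY KEY, account TEXT, amount REAL);
INSERT INTO customer VALUES ('A100','Wanjiku'), ('A200','Kip');
INSERT INTO orders   VALUES (1,'A100',40.0), (2,'A100',60.0), (3,'A200',5.0);
""")

# Projection: keep only the name attribute.
names = [r[0] for r in con.execute("SELECT name FROM customer ORDER BY name")]

# Join: combine the two relations on the shared account attribute.
total = con.execute("""
    SELECT SUM(o.amount) FROM customer c
    JOIN orders o ON o.account = c.account
    WHERE c.name = 'Wanjiku'
""").fetchone()[0]

print(names, total)
```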

Database Administrator
He/she coordinates the activities of the database system. Duties include:
 Schema definition
 Storage structure and access method definition
 Schema and physical organisation modification


 Granting user authority to access the database


 Specifying integrity constraints
 Acting as liaison with users
 Monitoring performance and responding to changes in requirements
 Security definitions

Database Security, Integrity and Control


Security is the protection of data from accidental or deliberate threats, which might cause unauthorized
modification, disclosure or destruction of data and the protection of the information system from the
degradation or non-availability of service. Data integrity in the context of security is when data are the
same as in source documents and have not been accidentally or intentionally altered, destroyed or
disclosed. Security in database systems is important because:

 Large volumes of data are concentrated into files that are physically very small
 The processing capabilities of a computer are extensive, and enormous quantities of data are
processed without human intervention.
 It is easy to lose data in a database through equipment malfunction, corrupt files or loss during
copying of files, and data files are susceptible to theft, floods, etc.
 Unauthorized people can gain access to data files and read classified data on files
 Information on a computer file can be changed without leaving any physical trace of change
 Database systems are critical in competitive advantage to an organization

Some of the controls that can be put in place include:


1) Administrative controls
It controls by non-computer based measures. They include:
a. Personnel controls e.g. selection of personnel and division of responsibilities
b. Secure positioning of equipment
c. Physical access controls
d. Building controls
e. Contingency plans

2) PC controls
a. Keyboard lock
b. Password
c. Locking disks
d. Training
e. Virus scanning
f. Policies and procedures on software copying

3) Database controls
A number of controls have been embedded into DBMS, these include:
a. Authorization – granting of privileges and ownership, authentication
b. Provision of different views for different categories of users
c. Backup and recovery procedures


d. Checkpoints – the point of synchronization between database and transaction log files. All
buffers are force written to storage.
e. Integrity checks e.g. relationships, lookup tables, validations
f. Encryption – coding of data by special algorithm that renders them unreadable without
decryption
g. Journaling – maintaining log files of all changes made
h. Database repair

4) Development controls
When a database is being developed, there should be controls over the design, development and testing
e.g.
a. Testing
b. Formal technical review
c. Control over changes
d. Controls over file conversion

5) Document standards
They are standards that are required for documentation such as:
a. Requirement specification
b. Program specification
c. Operations manual
d. User manual

6) Legal issues
a. Escrow agreements – legal contracts concerning software
b. Maintenance agreements
c. Copyrights
d. Licenses
e. Privacy

7) Other controls including


a. Hardware controls such as device interlocks which prevent input or output of data from
being interrupted or terminated, once begun
b. Data communication controls e.g. error detection and correction.

Database recovery is the process of restoring the database to a correct state in the event of a failure.

Some of the techniques include:


1) Backups
2) Mirroring – two complete copies of the database are maintained online on different stable storage
devices.
3) Restart procedures – no transactions are accepted until the database has been repaired
4) Undo/redo – undoing and redoing a transaction after failure.
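The journaling and redo ideas above can be caricatured with a toy redo log; this is purely illustrative and is not how any production DBMS implements recovery:

```python
# Toy redo-log sketch: each committed change is appended to a
# journal before being applied; after a "crash", replaying the
# journal rebuilds the database state.
import json

journal = []                       # in a real system: stable storage

def commit(db, key, value):
    journal.append(json.dumps({"key": key, "value": value}))  # log first
    db[key] = value                # then apply to the database

def recover():
    db = {}                        # state lost in the failure
    for entry in journal:          # redo every logged change
        rec = json.loads(entry)
        db[rec["key"]] = rec["value"]
    return db

db = {}
commit(db, "A100", 2000.0)
commit(db, "A200", 75.0)
db = None                          # simulate a failure
db = recover()
print(db)
```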

A distributed database system exists where logically related data is physically distributed between a
number of separate processors linked by a communication network.


A multi-database system is a distributed system designed to integrate data and provide access to a
collection of pre-existing local databases managed by heterogeneous database systems such as Oracle.

Terminology and Overview


Formally, the term "database" refers to the data itself and supporting data structures. A "database
management system" (DBMS) is a suite of computer software providing the interface between users
and a database or databases. Because they are so closely related, the term "database" when used
casually often refers to both a DBMS and the data it manipulates.

Outside the world of professional information technology, the term database is sometimes used
casually to refer to any collection of data (perhaps a spreadsheet, maybe even a card index). This
article is concerned only with databases where the size and usage requirements necessitate use of a
database management system.

The interactions catered for by most existing DBMS fall into four main groups:

i. Data definition. Defining new data structures for a database, removing data structures from
the database, modifying the structure of existing data.
ii. Update. Inserting, modifying, and deleting data.
iii. Retrieval. Obtaining information either for end-user queries and reports or for processing by
applications.
iv. Administration. Registering and monitoring users, enforcing data security, monitoring
performance, maintaining data integrity, dealing with concurrency control, and recovering
information if the system fails.

A DBMS is responsible for maintaining the integrity and security of stored data, and for recovering
information if the system fails.

Both a database and its DBMS conform to the principles of a particular database model. "Database
system" refers collectively to the database model, database management system, and database.

Physically, database servers are dedicated computers that hold the actual databases and run only the
DBMS and related software. Database servers are usually multiprocessor computers, with generous
memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to
one or more servers via a high-speed channel, are also used in large volume transaction processing
environments. DBMSs are found at the heart of most database applications. DBMSs may be built
around a custom multitasking kernel with built-in networking support, but modern DBMSs typically
rely on a standard operating system to provide these functions. Since DBMSs comprise a significant
economical market, computer and storage vendors often take into account DBMS requirements in
their own development plans.

Databases and DBMSs can be categorized according to the database model(s) that they support (such
as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone),
the query language(s) used to access the database (such as SQL or XQuery), and their internal
engineering, which affects performance, scalability, resilience, and security.

CHARACTERISTICS, IMPORTANCE AND LIMITATION OF DBMS

CHARACTERISTICS OF DBMS
A database management system (DBMS) consists of several components. Each component plays very
important role in the database management system environment. The major components of database
management system are:

a) Software
b) Hardware
c) Data
d) Procedures
e) Database Access Language

a) Software
The main component of a DBMS is the software. It is the set of programs used to handle the database
and to control and manage the overall computerized database

i. The DBMS software itself is the most important software component in the overall system.
ii. Operating system including network software being used in network, to share the data of
database among multiple users.
iii. Application programs developed in programming languages such as C++ or Visual Basic that
are used to access the database in a database management system. Each program contains
statements that request the DBMS to perform operations on the database. The operations may
include retrieving, updating and deleting data. The application programs may be conventional
batch programs or may run online from workstations or terminals.

b) Hardware
Hardware consists of a set of physical electronic devices such as computers (together with associated
I/O devices like disk drives), storage devices, I/O channels, and electromechanical devices that
interface between computers and real-world systems. It is impossible to implement the DBMS
without hardware devices. In a network, a powerful computer with a high data processing speed and
a storage device with a large storage capacity is required as the database server.

c) Data
Data is the most important component of the DBMS. The main purpose of DBMS is to process the
data. In DBMS, databases are defined, constructed and then data is stored, updated and retrieved to
and from the databases. The database contains both the actual (or operational) data and the metadata
(data about data or description about data).

d) Procedures
Procedures refer to the instructions and rules that help to design the database and to use the DBMS.
The users who operate and manage the DBMS require documented procedures on how to use or run
the database management system. These may include:


i. Procedure to install the new DBMS.


ii. To log on to the DBMS.
iii. To use the DBMS or application program.
iv. To make backup copies of database.
v. To change the structure of database.
vi. To generate the reports of data retrieved from database.

e) Database Access Language


The database access language is used to access the data to and from the database. The users use the
database access language to enter new data, change the existing data in database and to retrieve
required data from databases. The user writes a set of appropriate commands in a database access
language and submits these to the DBMS. The DBMS translates the commands and passes them to
the database engine, the part of the DBMS that executes them. The database engine generates a set of
results according to the submitted commands, converts these into a user-readable form called an
inquiry report and then displays them on the screen. The administrators may also use the database
access language to create and maintain the databases.

The most popular database access language is SQL (Structured Query Language). Relational
databases are required to have a database query language.

Users
The users are the people who manage the databases and perform different operations on the databases
in the database system. There are three kinds of people who play different roles in database system

• Application Programmers
• Database Administrators
• End-Users

Application Programmers

The people who write application programs in programming languages (such as Visual Basic, Java, or
C++) to interact with databases are called Application Programmer.

Database Administrators

A person who is responsible for managing the overall database management system is called database
administrator or simply DBA.

End-Users

The end-users are the people who interact with database management system to perform different
operations on database such as retrieving, updating, inserting, deleting data etc.


IMPORTANCE OF DBMS
The database management system has a number of advantages as compared to traditional computer
file-based processing approach. The database administrator must keep these benefits or capabilities
in mind when designing databases and monitoring the DBMS.

The main advantages of DBMS are described below.

a. Controlling Data Redundancy


In non-database systems, each application program has its own private files. In this case, duplicated
copies of the same data are created in many places. In a DBMS, all data of an organization is
integrated into a single database. The data is recorded in only one place in the database and is
not duplicated.

b. Sharing of Data
In DBMS, data can be shared by authorized users of the organization. The database administrator
manages the data and gives rights to users to access the data. Many users can be authorized to access
the same piece of information simultaneously. The remote users can also share same data. Similarly,
the data of same database can be shared between different application programs.

c. Data Consistency
By controlling the data redundancy, the data consistency is obtained. If a data item appears only once,
any update to its value has to be performed only once and the updated value is immediately available
to all users. If the DBMS has controlled redundancy, the database system enforces consistency.

d. Integration of Data
In a database management system, data is stored in tables. A single database contains
multiple tables, and relationships can be created between tables (or associated data entities). This
makes it easy to retrieve and update data.

e. Integrity Constraints
Integrity constraints or consistency rules can be applied to the database so that only correct data can
be entered. The constraints may be applied to a data item within a single record or they may be
applied to relationships between records.

f. Forms
A form is a very important object of a DBMS. You can create forms very easily and quickly in a
DBMS. Once a form is created, it can be used many times and it can be modified very easily. The
created forms are also saved along with the database and behave like a software component. A form
provides a very easy (user-friendly) way to enter data into the database, edit data and display data
from the database. Non-technical users can also perform various operations on the database through
forms without going into the technical details of the database.

g. Report Writers
Most DBMSs provide report writer tools used to create reports. Users can create reports very easily
and quickly. Once a report is created, it can be used many times and it can be modified very easily.
The created reports are also saved along with the database and behave like a software component.


h. Control Over Concurrency


In a computer file-based system, if two users are allowed to access data simultaneously, it is possible
that they will interfere with each other. For example, if both users attempt to perform update operation
on the same record, then one may overwrite the values recorded by the other. Most database
management systems have sub-systems to control the concurrency so that transactions are always
recorded with accuracy.
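The lost-update problem and its remedy can be sketched as follows (SQLite as a stand-in DBMS; a single connection simulates the interleaving for brevity):

```python
# Two "users" each read a balance, add to it, and write back; the
# second write silently overwrites the first (+10 is lost). An
# atomic in-place UPDATE lets the DBMS serialize the changes.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE acct (id TEXT PRIMARY KEY, bal REAL)")
con.execute("INSERT INTO acct VALUES ('A100', 100.0)")

# Unsafe read-modify-write: both users read 100 before either writes.
read1 = con.execute("SELECT bal FROM acct WHERE id='A100'").fetchone()[0]
read2 = con.execute("SELECT bal FROM acct WHERE id='A100'").fetchone()[0]
con.execute("UPDATE acct SET bal=? WHERE id='A100'", (read1 + 10,))
con.execute("UPDATE acct SET bal=? WHERE id='A100'", (read2 + 20,))  # lost +10
print(con.execute("SELECT bal FROM acct WHERE id='A100'").fetchone()[0])  # 120.0

# Atomic updates: each change is applied to the current value.
con.execute("UPDATE acct SET bal=100.0 WHERE id='A100'")   # reset
con.execute("UPDATE acct SET bal=bal+10 WHERE id='A100'")
con.execute("UPDATE acct SET bal=bal+20 WHERE id='A100'")
print(con.execute("SELECT bal FROM acct WHERE id='A100'").fetchone()[0])  # 130.0
```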

i. Backup and Recovery Procedures


In a computer file-based system, the user creates backups of data regularly to protect the valuable
data from damage due to failures of the computer system or application program. This is a very
time-consuming method if the amount of data is large. Most DBMSs provide 'backup and recovery'
sub-systems that automatically create backups of data and restore the data if required.

j. Data Independence
The separation of data structure of database from the application program that uses the data is called
data independence. In DBMS, you can easily change the structure of database without modifying the
application program.

LIMITATION OF DBMS
Although there are many advantages of DBMS, the DBMS may also have some minor disadvantages.
These are:

a. Cost of Hardware and Software


A processor with a high data processing speed and a memory of large size is required to run the
DBMS software. This means that you may have to upgrade the hardware used for the file-based
system. The DBMS software itself is also very costly.

b. Cost of Data Conversion


When a computer file-based system is replaced with a database system, the data stored in the data
files must be converted to the database. This conversion is a difficult and costly exercise. You have
to hire database system designers along with application programmers, or alternatively take the
services of a software house, so a lot of money has to be paid for developing the software.

c. Cost of Staff Training


Most database management systems are complex, so training users to use the DBMS is required.
Training is required at all levels, including programming, application development and database
administration. The organization has to pay a large amount for the training of staff to run the DBMS.

d. Appointing Technical Staff


Trained technical persons such as a database administrator, application programmers and data entry
operators are required to handle the DBMS. You have to pay handsome salaries to these persons, so
the system cost increases.


e. Database Damage
In most organizations, all data is integrated into a single database. If the database is damaged due to
an electrical failure, or is corrupted on the storage media, your valuable data may be lost forever.

DATA WAREHOUSING
Different people have different definitions for a data warehouse. The most popular definition came
from Bill Inmon, who provided the following:

A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in


support of management's decision making process.

Subject-Oriented: A data warehouse can be used to analyze a particular subject area. For example,
"sales" can be a particular subject.

Integrated: A data warehouse integrates data from multiple data sources. For example, source A and
source B may have different ways of identifying a product, but in a data warehouse, there will be only
a single way of identifying a product.

Time-Variant: Historical data is kept in a data warehouse. For example, one can retrieve data from 3
months, 6 months, 12 months, or even older data from a data warehouse. This contrasts with a
transaction system, where often only the most recent data is kept. For example, a transaction system
may hold only the most recent address of a customer, whereas a data warehouse can hold all addresses
associated with a customer.

Non-volatile: Once data is in the data warehouse, it will not change. So, historical data in a data
warehouse should never be altered.

Ralph Kimball provided a more concise definition of a data warehouse:

A data warehouse is a copy of transaction data specifically structured for query and analysis.

This is a functional view of a data warehouse. Kimball did not address how the data warehouse is
built like Inmon did, rather he focused on the functionality of a data warehouse.


Contrasting OnLine Transaction Processing (OLTP) and Data Warehousing Environments

The figure below illustrates key differences between an OLTP system and a data warehouse.

One major difference between the types of system is that data warehouses are not usually in third
normal form (3NF), a type of data normalization common in OLTP environments. Data warehouses
and OLTP systems have very different requirements. Here are some examples of differences between
typical data warehouses and OLTP systems:

a. Workload
Data warehouses are designed to accommodate ad hoc queries and data analysis. You might not know
the workload of your data warehouse in advance, so a data warehouse should be optimized to perform
well for a wide variety of possible query and analytical operations. OLTP systems support only
predefined operations. Your applications might be specifically tuned or designed to support only these
operations.

b. Data modifications
A data warehouse is updated on a regular basis by the ETL process (run nightly or weekly) using bulk
data modification techniques. The end users of a data warehouse do not directly update the data
warehouse except when using analytical tools, such as data mining, to make predictions with
associated probabilities, assign customers to market segments, and develop customer profiles.

In OLTP systems, end users routinely issue individual data modification statements to the database.
The OLTP database is always up to date, and reflects the current state of each business transaction.

c. Schema design
Data warehouses often use denormalized or partially denormalized schemas (such as a star schema) to
optimize query and analytical performance.


OLTP systems often use fully normalized schemas to optimize update/insert/delete performance, and
to guarantee data consistency.

d. Typical operations
A typical data warehouse query scans thousands or millions of rows. For example, "Find the total
sales for all customers last month."

A typical OLTP operation accesses only a handful of records. For example, "Retrieve the current
order for this customer."
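The contrast between these two query shapes can be sketched with a small in-memory example. The sales table, its columns and its data below are purely illustrative, not taken from any particular warehouse:

```python
import sqlite3

# Hypothetical sales table; names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer_id INTEGER, amount REAL, sale_month TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, 100.0, "2023-01"), (2, 250.0, "2023-01"), (1, 75.0, "2023-02")],
)

# Data-warehouse-style query: scans many rows to compute an aggregate.
total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE sale_month = '2023-01'"
).fetchone()[0]

# OLTP-style operation: touches only a handful of rows for one customer.
current = conn.execute(
    "SELECT amount FROM sales WHERE customer_id = 1 AND sale_month = '2023-02'"
).fetchone()[0]

print(total)    # 350.0
print(current)  # 75.0
```

In a real warehouse the aggregate would scan thousands or millions of rows; the shape of the two queries is the point, not the data volume.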

e. Historical data
Data warehouses usually store many months or years of data. This is to support historical analysis and
reporting.

OLTP systems usually store data from only a few weeks or months. The OLTP system stores only
historical data as needed to successfully meet the requirements of the current transaction.

Extracting Information from a Data Warehouse


You can extract information from the masses of data stored in a data warehouse by analyzing the data.
The Oracle database provides several ways to analyze data:

A wide array of statistical functions, including descriptive statistics, hypothesis testing, correlation
analysis, tests for distribution fit, cross tabs with Chi-square statistics, and analysis of variance
(ANOVA); these functions are described in the Oracle Database SQL Language Reference.

Data Mining
Data mining uses large quantities of data to create models. These models can provide insights that are
revealing, significant, and valuable. For example, data mining can be used to:

• Predict those customers likely to change service providers.


• Discover the factors involved with a disease.
• Identify fraudulent behavior.

Data mining is not restricted to solving business problems. For example, data mining can be used in
the life sciences to discover gene and protein targets and to identify leads for new drugs.

Oracle Data Mining performs data mining inside the Oracle database. Oracle Data Mining does not
require data movement between the database and an external mining server, thereby eliminating
redundancy, improving the efficiency of data storage and processing, ensuring that up-to-date data is
used, and maintaining data security.

Oracle Data Mining Functionality


Oracle Data Mining supports the major data mining functions. There is at least one algorithm for each
data mining function.

Oracle Data Mining supports the following data mining functions:



i. Classification
Grouping items into discrete classes and predicting which class an item belongs to; classification
algorithms are Decision Tree, Naive Bayes, Generalized Linear Models (Binary Logistic Regression),
and Support Vector Machines.

ii. Regression
Approximating and predicting continuous numerical values; the algorithms for regression are Support
Vector Machines and Generalized Linear Models (Multivariate Linear Regression).

iii. Anomaly Detection


Detecting anomalous cases, such as fraud and intrusions; the algorithm for anomaly detection is one-
class Support Vector Machines.
iv. Attribute Importance
Identifying the attributes that have the strongest relationships with the target attribute (for example,
customers likely to churn); the algorithm for attribute importance is Minimum Description Length.

v. Clustering
Finding natural groupings in the data that are often used for identifying customer segments; the
algorithms for clustering are k-Means and O-Cluster.

vi. Associations
Analyzing "market baskets", items that are likely to be purchased together; the algorithm for
associations is Apriori.

vii. Feature Extraction


Creating new attributes (features) as a combination of the original attributes; the algorithm for feature
extraction is Non-Negative Matrix Factorization.

In addition to mining structured data, ODM permits mining of text data (such as police reports,
customer comments, or physician's notes) or spatial data.

Oracle Data Mining Interfaces


Oracle Data Mining APIs provide extensive support for building applications that automate the
extraction and dissemination of data mining insights.

Data mining activities such as model building, testing, and scoring are accomplished through a
PL/SQL API, a Java API, and SQL Data Mining functions. The Java API is compliant with the data
mining standard JSR 73. The Java API and the PL/SQL API are fully interoperable.

Oracle Data Mining allows the creation of a supermodel, that is, a model that contains the instructions
for its own data preparation. The embedded data preparation can be implemented automatically and/or
manually. Embedded Data Preparation supports user-specified data transformations; Automatic Data
Preparation supports algorithm-required data preparation, such as binning, normalization, and outlier
treatment.


SQL Data Mining functions support the scoring of classification, regression, clustering, and feature
extraction models. Within the context of standard SQL statements, pre-created models can be applied
to new data and the results returned for further processing, just like any other SQL query.

Predictive Analytics automates the process of data mining. Without user intervention, Predictive
Analytics routines manage data preparation, algorithm selection, model building, and model scoring
so that the user can benefit from data mining without having to be a data mining expert.

ODM programmatic interfaces include

• Data mining functions in Oracle SQL for high performance scoring of data
• DBMS_DATA_MINING PL/SQL packages for model creation, description, analysis, and
deployment
• DBMS_DATA_MINING_TRANSFORM PL/SQL package for transformations required for
data mining
• Java interface based on the Java Data Mining standard for model creation, description,
analysis, and deployment
• DBMS_PREDICTIVE_ANALYTICS PL/SQL package supports the following procedures:
a) EXPLAIN - Ranks attributes in order of influence in explaining a target column
b) PREDICT - Predicts the value of a target column
c) PROFILE - Creates segments and rules that identify the records that have the same target
value.

REVISION EXERCISES
1. Discuss the various types of files.
2. What are some of the methods of file organization?
3. Discuss the basis of processing of computer files.
4. Discuss the disadvantages of a computer file processing system.
5. What is a database system?
6. What are some of the characteristics of a database system?
7. What is the importance of a database system?
8. What are the limitations of a database system?
9. What is data warehousing?
10. How is information extracted from a data warehouse?


CHAPTER 5
DATA COMMUNICATION AND COMPUTER
NETWORKS
SYNOPSIS
Introduction………………………………………………………. 131
Principles of Data Communication
and Networks……………………………………………………… 136
Data Transmission Characteristics…………………………......... 139
Types of Networks………………………………………………. 142
Network Topologies……………………………………………… 143
Benefits and Challenges of Networks
in an Organisation…………………………………………………. 164
Limitations of Networks in an Organisation………….................. 167
Cloud Computing………………………………………………….. 168

INTRODUCTION
Communication is defined as the transfer of information, such as thoughts and messages, between two
entities. The invention of the telegraph, radio, telephone and television made instantaneous
communication over long distances possible.

In the context of computers and information technology (IT), data are represented by binary digits
(bits), each of which has only two values, 0 and 1. In fact, everything the computer deals with is 0s
and 1s.

Because of this, it is called discrete or digital. In the digital world, messages, thoughts, numbers and
so on can be represented as different streams of 0s and 1s. Data communications concerns itself with
the transmission (sending and receiving) of information between two locations by means of electrical
signals. The two types of electrical signals are analog and digital. Data communication is the name
given to communication where the exchange of information takes place in the form of 0s and 1s over
some kind of medium, such as wire or wireless. The subject of data communications deals with the
technology, tools, products and equipment that make this happen.

Data
Data, the raw material for information, is defined as groups of non-random symbols that represent
quantities, actions, objects, etc. In information systems, data items are formed from characters that may
be alphabetic, numeric, or special symbols. Data items are organized for processing purposes into data
structures, file structures and databases. Data relevant to information processing and decision-making
may also be in the form of text, images or voice.

Information
Information is data that has been processed into a form that is meaningful to the recipient and is of real
or perceived value in current or prospective actions or decisions. It is important to note that data for one


level of an information system may be information for another. For example, data input to the
management level is information output of a lower level of the system such as operations level.
Information resources are reusable. When retrieved and used, information does not lose value; it may
indeed gain value through the credibility added by use.

The value of information is described most meaningfully in the context of a decision. If there were no
current or future choices or decisions, information would be unnecessary. The value of information in
decision-making is the value of change in decision behaviour caused by the information less the cost
of obtaining the information. Decisions however are usually made without the “right” information. The
reasons are:

 The needed information is unavailable


 The effort to acquire the information is too great or too costly.
 There is no knowledge of the availability of the information.
 The information is not available in the form needed.

Much of the information that organizations or individuals prepare has value other than in decision-
making. The information may also be prepared for motivation and background building.

Desirable Qualities of Information


Some of the qualities of information include:
 Availability – Information should be available and accessible to those who need it.
 Comprehensible – Information should be understandable to those who use it.
 Relevance – Information should be applicable to the situations and performance of
organizational functions. Relevant information is important to the decision maker.
 Secure – Information should be secure from access by unauthorized users.
 Usefulness – Information should be available in a form that is usable.
 Timeliness - Information should be available when it is needed.
 Reliability – Reliable information can be depended on. In many cases, reliability of
information depends on the reliability of the data collection method. In other instances,
reliability depends on the source of information.
 Accuracy – Information should be correct, precise and without error. In some cases inaccurate
information is generated because inaccurate data is fed into the transformation process (this is
commonly called garbage in garbage out, GIGO).
 Consistency– Information should not be self-contradictory.
 Completeness – Complete information contains all the important facts. For example an
investment report that does not contain all the costs is not complete.
 Economical – Information should always be relatively economical to produce. Decision
makers must always balance the value of information and the cost of producing it.
 Flexibility – Flexible information can be used for a variety of purposes.


Data Processing
Data processing may be defined as those activities, which are concerned with the systematic recording,
arranging, filing, processing and dissemination of facts relating to the physical events occurring in the
business. Data processing can also be described as the activity of manipulating the raw facts to generate
a set or an assembly of meaningful data, what is described as information. Data processing activities
include:
i. data collection,
ii. classification,
iii. sorting, adding,
iv. merging,
v. summarizing,
vi. storing,
vii. retrieval and
viii. dissemination.

The black box model is an extremely simple principle of a machine: irrespective of how a machine
operates internally, it takes an input, operates on it and then produces an output.
In dealing with digital computers this data consists of: numerical data, character data and special
(control) characters.

Use of computers for data processing involves four stages:


 Data input – This is the process of data capture into the computer system for processing. Input
devices are used.
 Storage – This is an intermediary stage where input data is stored within the computer system
or on secondary storage awaiting processing or output after processing. Program instructions
to operate on the data are also stored in the computer.
 Processing – The central processing unit of the computer manipulates data using arithmetic
and logical operations.
 Data output – The results of the processing function are output by the computer using a variety
of output devices.

Data Processing Activities


The basic processing activities include:
 Record – bring facts into a processing system in usable form
 Classify – data with similar characteristics are placed in the same category, or group.
 Sort – arrangement of data items in a desired sequence
 Calculate – apply arithmetic functions to data
 Summarize – to condense data or to put it in a briefer form
 Compare – perform an evaluation in relation to some known measures
 Communicate – the process of sharing information
 Store – to hold processed data for continuing or later use.
 Retrieve – to recover data previously stored
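Several of these activities, namely classify, sort and summarize, can be sketched on a few toy records. The record layout and the values below are purely illustrative:

```python
# Toy transaction records; field names and values are illustrative only.
records = [
    {"item": "pen", "category": "stationery", "amount": 20},
    {"item": "disk", "category": "media", "amount": 50},
    {"item": "pencil", "category": "stationery", "amount": 10},
]

# Classify: place data with similar characteristics in the same group.
by_category = {}
for r in records:
    by_category.setdefault(r["category"], []).append(r)

# Sort: arrange data items in a desired sequence (here, by amount).
ordered = sorted(records, key=lambda r: r["amount"])

# Summarize: condense the data into a briefer form (total per category).
totals = {cat: sum(r["amount"] for r in rs) for cat, rs in by_category.items()}
print(totals)  # {'stationery': 30, 'media': 50}
```

Record, store and retrieve would correspond to reading the records in and writing the results out; they are omitted here to keep the sketch short.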


Data Communications and Computer


Data communication systems are the electronic systems that transmit data over communication lines
from one location to another. End users need to know the essential parts of communication technology,
including connections, channels, transmission, network architectures and network types.
Communication allows microcomputer users to transmit and receive data and gain access to electronic
resources.

 Source – creates the data, could be a computer or a telephone


 Transmitter – encodes the information e.g. modem, network card
 Transmission system – transfers the information e.g. wire or complex network
 Receiver – decodes the information for the destination e.g. modem, network card
 Destination – accepts and uses the incoming information, could be a computer or telephone

Communication Channels
The transmission media used in communication are called communication channels. Two ways of
connecting microcomputers for communication with each other and with other equipment is through
cable and air. There are five kinds of communication channels used for cable or air connections:
- Telephone lines
- Coaxial cable
- Fiber-optic cable
- Microwave
- Satellite

Telephone lines (Twisted Pair)


Telephone line cables are made up of copper wires called twisted pairs. A single twisted pair culminates
in a wall jack where you plug in your phone. Telephone lines have been the standard communication
channel for both voice and data. More technically advanced and reliable transmission media are now
replacing them.

Coaxial cable
Coaxial cable is a high-frequency transmission cable that replaces the multiple wires of telephone lines
with a single solid copper core. It has over 80 times the transmission capacity of twisted pair. It is often
used to link parts of a computer system in one building.

Fibre-optic cable
Fibre-optic cable transmits data as pulses of light through tubes of glass. It has over 26,000 times the
transmission capacity of twisted pair. A fibre-optic tube can be half the diameter of a human hair. Fibre-
optic cables are immune to electronic interference and are more secure and reliable. Fibre-optic cable is
rapidly replacing twisted-pair telephone lines.

Microwave
Microwaves transmit data as high-frequency radio waves that travel in straight lines through air.
Microwaves cannot bend with the curvature of the earth. They can only be transmitted over short
distances. Microwaves are a good medium for sending data between buildings in a city or on a large
college campus. Microwave transmission over longer distances is relayed by means of ‘dishes’ or
antennas installed on towers, high buildings or mountaintops.

Satellite
Satellites are used to amplify and relay microwave signals from one transmitter on the ground to
another. They orbit about 22,000 miles above the earth. They rotate at a precise point and speed and
can be used to send large volumes of data. Bad weather can sometimes interrupt the flow of data from
a satellite transmission. INTELSAT (INternational TELecommunication SATellite consortium),
owned by 114 governments forming a worldwide communications system, offers many satellites that
can be used as microwave relay stations.

Data Transmission: Analog versus Digital


Information is available in an analogue or in a digital form. Computer-generated data can easily be
stored in a digital format, but analogue signals, such as speech and video, must first be sampled at
regular intervals and then converted into a digital form. This process is known as digitisation and has
the following advantages:

 Digital data is less affected by noise


 Extra information can be added to digital signals so that errors can either be detected or corrected.
 Digital data tends not to degrade over time
 Processing of digital information is relatively easy, either in real-time or non real-time
 A single type of media can be used to store many different types of information (such as video,
speech, audio and computer data can be stored on tape, hard-disk or CD-ROM).
 A digital system has a more dependable response, whereas an analogue system’s accuracy depends
on parameters such as component tolerance, temperature, power supply variations, and so on.
Analogue systems thus produce a variable response and no two analogue systems are identical.
 Digital systems are more adaptable and can be reprogrammed with software. Analogue systems
normally require a change of hardware for any functional changes (although programmable
analogue devices are now available).

The main disadvantage with digital conversion is:


 Digital samples must be quantized to given levels: this adds an error called quantization error. The
larger the number of bits used to represent each sample, the smaller the quantization error.
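A rough sketch of that trade-off, assuming uniform quantization over a fixed range (the sample value and the bit depths below are arbitrary choices for illustration):

```python
def quantize(sample, bits, lo=-1.0, hi=1.0):
    """Quantize a sample in [lo, hi] onto 2**bits uniformly spaced levels."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    index = round((sample - lo) / step)
    return lo + index * step

# The quantization error shrinks as more bits are used per sample.
x = 0.3333
for bits in (3, 8, 16):
    error = abs(x - quantize(x, bits))
    print(bits, error)
```

Running this shows the error falling by orders of magnitude from 3-bit to 16-bit samples, which is exactly why higher bit depths give more faithful digitisation.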

Modem
A modem is a hardware device that converts computer signals (digital signals) to telephone signals
(analog signals) and telephone signals (analog signals) back to computer signals (digital signals).
The process of converting digital signals to analog is called modulation while the process of converting
analog signals to digital is called demodulation.

Computer (digital signal) → Modem (modulate) → telephone line (analog signal) → Modem (demodulate) → Computer (digital signal)

Modem Transmission Speed

The speed with which modems transmit data varies. Communications speed is typically measured in
bits per second (bps). The most popular speeds for conventional modems are 33.6 kbps (33,600 bps)
and 56 kbps (56,000 bps). The higher the speed, the faster you can send and receive data.

Types of Modems
There are 3 types of modems as discussed below.
a) External modem
An external modem stands apart from the computer. It is connected by a cable to the computer’s serial
port. Another cable is used to connect the modem to the telephone wall jack.

b) Internal modem
An internal modem is a plug-in circuit board inside the system unit. A telephone cable connects this
type of modem to the telephone wall jack.

c) Wireless modem
A wireless modem is similar to an external modem. It connects to the computer’s serial port, but does
not connect to telephone lines. It uses new technology that receives data through the air.

PRINCIPLES OF DATA COMMUNICATION AND NETWORKS


The entire data communication system revolves around three fundamental concepts:

a. Delivery

The system should transmit the message to the correct intended destination. The destination can be
another user or another computer.

b. Reliability

The system should deliver the data to the destination faithfully. Any unwanted signals (noise) added
along with the original data may corrupt it.

c. Timeliness

The system should transmit the data as fast as possible within the technological constraints. In the case
of audio and video, data must be received in the same order as they are produced, without any
significant delay.

Data Transmission
Technical matters that affect data transmission include:


 Bandwidth
 Type of transmission
 Direction of data flow
 Mode of transmitting data
 Protocols

Bandwidth
Bandwidth is the bits-per-second (bps) transmission capability of a communication channel. There are
three types of bandwidth:
 Voice band – bandwidth of standard telephone lines (9.6 to 56 kbps)
 Medium band – bandwidth of special leased lines used (56 to 264,000 kbps)
 Broadband – bandwidth of microwave, satellite, coaxial cable and fiber optic (56 to 30,000,000
kbps)
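To see what these bandwidths mean in practice, the time to transfer a file is its size in bits divided by the channel's bits-per-second rating. A minimal sketch (the 1 MB file size is an arbitrary assumption):

```python
def transfer_time(size_bytes, bps):
    """Seconds needed to send size_bytes over a channel rated at bps bits per second."""
    return size_bytes * 8 / bps  # 8 bits per byte

file_bytes = 1_000_000  # a 1 MB file (illustrative)
print(transfer_time(file_bytes, 56_000))       # voice band at 56 kbps: ~142.9 s
print(transfer_time(file_bytes, 10_000_000))   # broadband at 10 Mbps: 0.8 s
```

These are theoretical minimums; protocol overhead and channel noise make real transfers slower.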

Types of transmission
There are 2 types of transmission
a) serial data transmission
b) parallel data transmission

a. Serial data transmission


In serial transmission, bits flow in a continuous stream. It is the way most data is sent over telephone
lines. It is used by external modems typically connected to a microcomputer through a serial port. The
technical names for such serial ports are RS-232C connector or asynchronous communications port.

b. Parallel data transmission


In parallel transmission, bits flow through separate lines simultaneously (at the same time). Parallel
transmission is typically limited to communications over short distances (not telephone lines). It is the
standard method of sending data from a computer’s CPU to a printer.

Direction of data transmission


There are three directions or modes of data flow in a data communication system.
 Simplex communication – data travels in one-direction only e.g. point-of-sale terminals.
 Half-duplex communication – data flows in both directions, but not simultaneously. E.g.
electronic bulletin board
 Full-duplex communication – data is transmitted back and forth at the same time e.g. mainframe
communications.

Mode of data transmission


Data may be sent over communication channels in either be:
a) asynchronous transmission
b) synchronous transmission

a) Asynchronous transmission
Data is sent and received one byte at a time. Used with microcomputers and terminals with slow speeds.


b) Synchronous transmission
Data is sent and received several bytes (blocks) at a time. It requires a synchronized clock to enable
transmission at timed intervals.
Protocols
Protocols are sets of communication rules for exchange of information. Protocols define speeds and
modes for connecting one computer with another computer. Network protocols can become very
complex and therefore must adhere to certain standards. The first set of protocol standards was IBM
Systems Network Architecture (SNA), which only works for IBM’s own equipment.

The Open Systems Interconnection (OSI) is a set of communication protocols defined by International
Standards Organization. The OSI is used to identify functions provided by any network and separates
each network’s functions into seven ‘layers’ of communication rules.

Error Detection and Control

Data has to arrive intact in order to be used. Two techniques are used to detect and correct errors.

a) Forward error control


Additional redundant information is transmitted with each character or frame so that the receiver can
not only detect when errors are present, but can also determine where the error has occurred and thus
correct it.

b) Feedback (backward) error control


Only enough additional information is transmitted so that the receiver can identify that an error has
occurred. An associated retransmission control scheme is then used to request that another copy of the
information be sent.

Error detection methods include:


 Parity check
The transmitter adds an additional bit to each character prior to transmission. The parity bit used is a
function of the bits making up the character. The recipient performs the same function on the received
character and compares it to the parity bit. If it is different an error is assumed.

 Block sum check


An extension of the parity check in that an additional set of parity bits is computed for a block of
characters (or frame). The set of parity bits is known as the block (sum) check character.

 Cyclic Redundancy Check (CRC) – the CRC or frame check sequence (FCS) is used for
situations where bursts of errors may be present (parity and block sum checks are not effective
at detecting bursts of errors). A single set of check digits is generated for each frame
transmitted, based on the contents of the frame and appended to the tail of the frame.
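A minimal sketch of the parity-check idea, assuming even parity over 7-bit characters (the example values are arbitrary):

```python
def even_parity_bit(byte):
    """Parity bit that makes the total number of 1s (data plus parity) even."""
    return bin(byte).count("1") % 2

def check(byte, parity):
    """Receiver recomputes the parity; a mismatch signals a transmission error."""
    return even_parity_bit(byte) == parity

data = 0b1011001                 # four 1s, so the even-parity bit is 0
p = even_parity_bit(data)
assert check(data, p)            # character arrives intact
corrupted = data ^ 0b0000100     # a single bit flipped in transit
assert not check(corrupted, p)   # the single-bit error is detected
```

Note that parity detects any odd number of flipped bits but misses an even number, which is why block sum checks and CRCs are used when bursts of errors are possible.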

Recovery
When errors are so bad that they cannot be ignored, the receiver needs a plan to recover the data,
typically by requesting retransmission.


Security
What are you concerned about if you want to send an important message?
 Did the receiver get it?
o Denial of service
 Is it the right receiver?
o Receiver spoofing
 Is it the right message?
o Message corruption
 Did it come from the right sender?
o Sender spoofing

Network management
This involves configuration, provisioning, monitoring and problem-solving.

DATA TRANSMISSION CHARACTERISTIC


Data transmission, digital transmission, or digital communications is the physical transfer of data (a
digital bit stream) over a point-to-point or point-to-multipoint communication channel. Examples of
such channels are copper wires, optical fibers, wireless communication channels, and storage media.
The data are represented as an electromagnetic signal, such as an electrical voltage, radio wave,
microwave, or infrared signal.

While analog transmission is the transfer of a continuously varying analog signal, digital
communications is the transfer of discrete messages. The messages are either represented by a
sequence of pulses by means of a line code (baseband transmission), or by a limited set of
continuously varying wave forms (passband transmission), using a digital modulation method. The
passband modulation and corresponding demodulation (also known as detection) is carried out by
modem equipment. According to the most common definition of digital signal, both baseband and
passband signals representing bit-streams are considered as digital transmission, while an alternative
definition only considers the baseband signal as digital, and passband transmission of digital data as a
form of digital-to-analog conversion.

Data transmitted may be digital messages originating from a data source, for example a computer or a
keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-
stream for example using pulse-code modulation (PCM) or more advanced source coding (analog-to-
digital conversion and data compression) schemes. This source coding and decoding is carried out by
codec equipment.

The effectiveness of a data communications system depends on four fundamental characteristics:
delivery, accuracy, timeliness, and jitter.

1. Delivery
The system must deliver data to the correct destination. Data must be received by the intended device
or user and only by that device or user.


2. Accuracy:
The system must deliver the data accurately. Data that have been altered in transmission and left
uncorrected are unusable.

3. Timeliness:
The system must deliver data in a timely manner. Data delivered late are useless. In the case of video
and audio, timely delivery means delivering data as they are produced, in the same order that they are
produced, and without significant delay. This kind of delivery is called real-time transmission.

4. Jitter
Jitter refers to the variation in the packet arrival time. It is the uneven delay in the delivery of audio or
video packets. For example, let us assume that video packets are sent every 30ms. If some of the
packets arrive with 30ms delay and others with 40ms delay, an uneven quality in the video is the
result.
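Using the 30 ms example above, jitter can be sketched as the deviation of each inter-arrival gap from the expected gap (the arrival times here are illustrative):

```python
arrivals_ms = [0, 30, 70, 100, 140]   # packet arrival times in ms (illustrative)
expected_ms = 30                       # the sender emits one packet every 30 ms

# Gaps between consecutive arrivals, and each gap's deviation from expected.
gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
jitter = [g - expected_ms for g in gaps]

print(gaps)    # [30, 40, 30, 40]
print(jitter)  # [0, 10, 0, 10]
```

The non-zero entries are the uneven delays that show up as stutter in audio or video playback; receivers typically smooth them out with a playback buffer.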

NETWORKS
A network is a set of devices (often referred to as nodes) connected by communication links. A node
can be a computer, printer, or any other device capable of sending and/or receiving data generated by
other nodes on the network

Computer Networks

A computer network is a communications system connecting two or more computers that work to
exchange information and share resources (hardware, software and data). A network may consist of
microcomputers, or it may integrate microcomputers or other devices with larger computers. Networks
may be controlled by all nodes working together equally or by specialized nodes coordinating and
supplying all resources. Networks may be simple or complex, self-contained or dispersed over a large
geographical area.

Network architecture is a description of how a computer is set-up (configured) and what strategies are
used in the design. The interconnection of PCs over a network is becoming more important especially
as more hardware is accessed remotely and PCs intercommunicate with each other.

Distributed Processing
Most networks use distributed processing, in which a task is divided among multiple computers.
Instead of one single large machine being responsible for all aspects of a process, separate computers
(usually personal computers or workstations) each handle a subset.

Network Criteria
A network must be able to meet a certain number of criteria. The most important of these are
performance, reliability, and security.


Performance

Performance can be measured in many ways, including transmit time and response time.

Transmit time is the amount of time required for a message to travel from one device to another.
Response time is the elapsed time between an inquiry and a response.

The performance of a network depends on a number of factors, including the number of users, the
type of transmission medium, the capabilities of the connected hardware, and the efficiency of the
software.

Performance is often evaluated by two networking metrics: throughput and delay. We often need
more throughput and less delay.

Reliability
In addition to accuracy of delivery, network reliability is measured by the frequency of failure and
the time it takes a link to recover from a failure.

Security
Network security issues include protecting data from unauthorized access, protecting data from
damage and development, and implementing policies and procedures for recovery from breaches and
data losses

Terms used to describe computer networks

 Node – any device connected to a network such as a computer, printer, or data storage device.
 Client – a node that requests and uses resources available from other nodes. Typically a
microcomputer.
 Server – a node that shares resources with other nodes. May be called a file server, printer server,
communication server, web server, or database server.
 Network Operating System (NOS) – the operating system of the network that controls and
coordinates the activities between computers on a network, such as electronic communication and
sharing of information and resources.
 Distributed processing – computing power is located and shared at different locations. Common
in decentralized organizations (each office has its own computer system but is networked to the
main computer).
 Host computer – a large centralized computer, usually a minicomputer or mainframe.

TYPES OF NETWORKS
Different communication channels allow different types of networks to be formed. Telephone lines
may connect communications equipment within the same building. Coaxial cable or fibre-optic cable
can be installed on building walls to form communication networks. You can also create your own
network in your home or apartment. Communication networks also differ in geographical size.
Three important networks according to geographical size are
a) LANs,
b) MANs
c) WANs.

a) Local Area Network (LAN)

A LAN is a computer network in which computers and peripheral devices are in close physical
proximity. It is a collection of computers within a single office or building that connect to a common
electronic connection – commonly known as a network backbone. This type of network typically uses
microcomputers in a bus organization linked with telephone, coaxial, or fibre-optic cable. A LAN
allows all users to share hardware, software and data on the network. Minicomputers, mainframes or
optical disk storage devices can be added to the network. A network bridge device may be used to link
a LAN to other networks with the same configuration. A network gateway device may be used to link
a LAN to other networks, even if their configurations are different.

b) Metropolitan Area Network (MAN)

A MAN is a computer network that may be citywide. This type of network may be used as a link
between office buildings in a city. The use of cellular phone systems expands the flexibility of a MAN
network by linking car phones and portable phones to the network.
c) Wide Area Networks (WAN)
A WAN is a computer network that may be countrywide or worldwide. It normally connects networks
over a large physical area, such as in different buildings, towns or even countries. A modem connects
a LAN to a WAN when the WAN connection is an analogue line.
For a digital connection a gateway connects one type of LAN to another LAN, or WAN, and a bridge
connects a LAN to similar types of LAN. This type of network typically uses microwave relays and
satellites to reach users over long distances. The widest of all WANs is the Internet, which spans the
entire globe.

WAN Technologies
These technologies determine how data gets from one computer to another across a wide area network.

(i) Circuit switching


 A dedicated path between machines is established
 All resources are guaranteed
 Has limitation of set-up delay but has fast transmission
(ii) Packet switching
 Nodes in the network (‘routers’) decide where to send data next
 No resources are guaranteed “best effort”
 Little set-up, transmission delay at each router
 Computer-computer communication
(iii) Frame relay
 Like packet switching
 Low level error correction removed to yield higher data rates
(iv) Cell relay – ATM (Asynchronous Transmission Mode)
 Frame relay with uniformly sized packets (cells)
 Dedicated circuit paths
(v) ISDN (Integrated Services Digital Network)
 Transmits voice and data traffic
 Specialized circuit switching
 Uses frame relay (narrowband) and ATM (broadband)

NETWORK TOPOLOGIES
Topology refers to the way in which the network of computers is connected. Each topology is suited
to specific tasks and has its own advantages and disadvantages.

The choice of topology is dependent upon

• type and number of equipment being used


• planned applications and rate of data transfers
• required response times
• cost
There are five principal network topologies:
a) Star
b) Bus
c) Ring
d) Hierarchical (hybrid)
e) Completely connected (mesh)

Star network
In a star network there are a number of small computers or peripheral devices linked to a central unit
called a main hub. The central unit may be a host computer or a file server. All communications pass
through the central unit and control is maintained by polling. This type of network can be used to
provide a time-sharing system and is common for linking microcomputers to a mainframe.

Advantages:
 It is easy to add new and remove nodes
 A node failure does not bring down the entire network
 It is easier to diagnose network problems through a central hub

Disadvantages:
 If the central hub fails the whole network ceases to function
 It costs more to cable a star configuration than other topologies (more cable is required than
for a bus or ring configuration).

Bus network
In a bus network each device handles its own communications control. There is no host computer; however
there may be a file server. All communications travel along a common connecting cable called a bus.
It is a common arrangement for sharing data stored on different microcomputers. It is not as efficient
as a star network for sharing common resources, but it is less expensive. The distinguishing feature is that
all devices (nodes) are linked along one communication line - with endpoints - called the bus or
backbone.

Advantages:
 Reliable in very small networks as well as easy to use and understand
 Requires the least amount of cable to connect the computers together and therefore is less
expensive than other cabling arrangements.
 Is easy to extend. Two cables can be easily joined with a connector, making a longer cable for
more computers to join the network
 A repeater can also be used to extend a bus configuration

Disadvantages:
 Heavy network traffic can also slow a bus considerably. Because any computer can transmit
at any time, bus networks do not coordinate when information is sent. Computers interrupting
each other can use a lot of bandwidth
 Each connection between two cables weakens the electrical signal
 The bus configuration can be difficult to troubleshoot. A cable break or malfunctioning
computer can be difficult to find and can cause the whole network to stop functioning.

Ring network

In a ring network each device is connected to two other devices, forming a ring. There is no central file
server or computer. Messages are passed around the ring until they reach their destination. Often used
to link mainframes, especially over wide geographical areas. It is useful in a decentralized organization
called a distributed data processing system.

Advantages:
 Ring networks offer high performance for a small number of workstations or for larger
networks where each station has a similar work load
 Ring networks can span longer distances than other types of networks
 Ring networks are easily extendable

Disadvantages
 Relatively expensive and difficult to install
 Failure of one component on the network can affect the whole network
 It is difficult to troubleshoot a ring network
 Adding or removing computers can disrupt the network

Hierarchical (hybrid) network


A hierarchical network consists of several computers linked to a central host computer. It is similar to
a star network, except that some of the linked computers are themselves hosts to other, smaller
computers or to peripheral devices. It allows various computers to share databases, processing power, and different output
devices. It is useful in centralized organizations.

Advantages:
 Improves sharing of data and programs across the network
 Offers reliable communication between nodes

Disadvantages:
 Difficult and costly to install and maintain
 Difficult to troubleshoot network problems


Completely connected (mesh) configuration


Is a network topology in which devices are connected with many redundant interconnections between
network nodes.

Advantages:
 Yields the greatest amount of redundancy (multiple connections between the same nodes): if
one node fails, network traffic can be redirected to another node.
 Network problems are easier to diagnose

Disadvantages
 The cost of installation and maintenance is high (more cable is required than any other
configuration)

Client/Server Environment
Use of client/server technology is one of the most popular trends in application development. More
and more business applications have embraced the advantages of the client/server architecture by
distributing the work among servers and by performing as much computational work as possible on
the client workstation. This allows users to manipulate and change the data that they need to change
without controlling resources on the main processing unit.

In client/server systems, applications no longer are limited to running on one machine. The applications
are split so that processing may take place on different machines. The processing of data takes place
on the server and the desktop computer (client). The application is divided into pieces or tasks so
processing can be done more efficiently.

A client/server network environment is one in which one computer acts as the server and provides data
distribution and security functions to other computers that are independently running various
applications. An example of the simplest client/server model is a LAN whereby a set of computers is
linked to allow individuals to share data. LANs (like other client/server environments) allow users to
maintain individual control over how information is processed.

Client/server computing differs from mainframe or distributed system processing in that each
processing component is mutually dependent. The ‘client’ is a single PC or workstation associated with
software that provides computer presentation services as an interface to server computing resources.
Presentation is usually provided by visually enhanced processing software known as a Graphical User
Interface (GUI). The ‘server’ is one or more multi-user computer(s) (these may be mainframes,
minicomputers or PCs). Server functions include any centrally supported role, such as file sharing,
printer sharing, database access and management, communication services, facsimile services,
application development and others. Multiple functions may be supported by a single server.
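The division of work between client and server can be sketched with TCP sockets on a single machine. Everything here is illustrative: the "service" (upper-casing a request) stands in for any server function such as file sharing or database access, and the loopback address keeps the example self-contained.

```python
# Minimal client/server sketch on one machine. The upper-casing "service"
# stands in for any server-side function; this is an illustration, not a
# production design.
import socket
import threading

# Server side: bind to a port (0 lets the OS choose a free one) and wait
# for a client request.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

def serve_one_client():
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)        # receive the client's request
        conn.sendall(request.upper())    # do the work on the server

server = threading.Thread(target=serve_one_client)
server.start()

# Client side: connect, send a request, read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", port))
    client.sendall(b"process this data")
    reply = client.recv(1024)

server.join()
server_sock.close()
print(reply)   # b'PROCESS THIS DATA'
```

The client asks; the server performs the computation and returns the result. In a real deployment the server would typically run on a separate machine and serve many clients concurrently.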

Network Protocols
Protocols are the set of conventions or rules for interaction at all levels of data transfer. They have
three main components:
 Syntax – data format and signal types
 Semantics – control information and error handling
 Timing – data flow rate and sequencing

Numerous protocols are involved in transferring a single file even when two computers are directly
connected. The large task of transferring a piece of data is broken down into distinct sub tasks. There
are multiple ways to accomplish each task (individual protocols). The tasks are well described so that
they can be used interchangeably without affecting the overall system.
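The three components above can be made concrete with a toy frame format. The layout (2-byte length, 1-byte sequence number, payload, 1-byte checksum) is invented for illustration and is not a real standard.

```python
# Toy protocol illustrating the three protocol components.
import struct

def encode(seq: int, payload: bytes) -> bytes:
    checksum = sum(payload) % 256                    # semantics: error handling
    header = struct.pack("!HB", len(payload), seq)   # syntax: fixed data format
    return header + payload + bytes([checksum])

def decode(frame: bytes):
    length, seq = struct.unpack("!HB", frame[:3])    # timing: seq preserves order
    payload = frame[3:3 + length]
    if sum(payload) % 256 != frame[3 + length]:
        raise ValueError("corrupted frame")
    return seq, payload

print(decode(encode(7, b"hello")))   # (7, b'hello')
```

The fixed header layout is the syntax, the checksum test is the semantics, and the sequence number lets the receiver restore the original order: each component maps to one line of the sketch.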

Benefits derived from using network protocols include:


 Smaller user applications – the browser runs HTTP (Hyper Text Transfer Protocol). It isn’t
aware of how the connection to the network is made.
 Can take advantage of new technologies – one can browse on a wireless palm or cell phone
 Don’t have to reinvent the wheel – fewer programming errors, less effort during development
of network-oriented application systems as previous components are reused.
 Enhanced uniformity in communication

Common network protocols include:


(i) 3 layer logical model
(ii) TCP/IP (Transmission Control Protocol/Internet Protocol)
(iii) ISO/OSI model (International Organization for Standardization/Open Systems Interconnection)

The three-layer logical model includes the following:

 Application Layer
o Takes care of the needs of the specific application
o HTTP: send request, get a batch of responses from a bunch of different servers
o Telnet: dedicated interaction with another machine

 Transport Layer
o Makes sure data is exchanged reliably between the two end systems
o Needs to know how to identify the remote system and package the data properly

 Network Access Layer


o Makes sure data is exchanged reliably into and out of the computer.
o Concerns the physical connection to the network and transfer of information across this
connection
o Software here depends on physical medium used

TCP/IP (Transmission Control Protocol/Internet Protocol)

 Application Layer
o User application protocols
 Transport Layer
o Transmission control protocol
o Data reliability and sequencing
 Internet Layer
o Internet Protocol
o Addressing, routing data across Internet
 Network Access Layer
o Data exchange between host and local network
o Packets, flow control
o Network dependent (circuit switching, Ethernet etc)
 Physical Layer
o Physical interface, signal type, data rate

ISO/OSI Model (International Organization for Standardization/Open Systems Interconnection)


An important concept in understanding data communications is the Open Systems Interconnection
(OSI) model. It allows manufacturers of different systems to interconnect their equipment through
standard interfaces. It also allows software and hardware to integrate well and be portable on differing
systems. The International Organization for Standardization (ISO) developed the model.

Data is passed from top layer of the transmitter to the bottom, then up from the bottom layer to the top
on the recipient. However, each layer on the transmitter communicates directly with the recipient’s
corresponding layer. This creates a virtual data flow between layers. The data sent can be termed as a
data packet or data frame.

Transmitter (Data)                 Recipient (Data)

Application   <-------------->  Application
Presentation  <-------------->  Presentation
Session       <-------------->  Session
Transport     <-------------->  Transport
Network       <-------------->  Network
Data Link     <-------------->  Data Link
Physical      <-------------->  Physical

(The horizontal arrows show the virtual data flow between peer layers; the
actual data flow passes down the transmitter's stack and up the recipient's.)

1. Application Layer
This layer provides network services to application programs such as file transfer and electronic mail.
It offers user level interaction with network programs and provides user application, process and
management functions.

2. Presentation Layer
The presentation layer uses a set of translations that allow the data to be interpreted properly. It may
have to carry out translations between two systems if they use different presentation standards such as
different character sets or different character codes. It can also add data encryption for security
purposes. It basically performs data interpretation, format and control transformation. It separates what
is communicated from data representation.

3. Session Layer
The session layer provides an open communications path to the other system. It involves setting up,
maintaining and closing down a session (a communication time span). The communications channel
and the internetworking should be transparent to the session layer. It manages (administration and
control) sessions between cooperating applications.

4. Transport Layer
If data packets need to go outside a network, the transport layer routes them through the
interconnected networks. Its task may involve splitting up data for transmission and reassembling it
after arrival. It performs the tasks of end-to-end packetization, error control, flow control, and
synchronization. It offers network transparent data transfer and transmission control.


5. Network Layer
The network layer routes data frames through a network. It performs the tasks of connection
management, routing, switching and flow control over a network.

6. Data Link Layer


The data link layer ensures that the transmitted bits are received in a reliable way. This includes adding
bits to define the start and end of a data frame, adding extra error detection/correction bits and ensuring
that multiple nodes do not try to access a common communications channel at the same time. It has
the tasks of maintaining and releasing the data link, synchronization, error and flow control.
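One data-link task described above, marking the start and end of a frame, can be sketched as byte stuffing. The FLAG/ESC values follow the common HDLC-style convention; the function below is an illustration, not a complete data-link implementation.

```python
# Framing sketch: data bytes that look like the frame marker are escaped,
# so the receiver is never confused about where a frame begins and ends.
FLAG, ESC = 0x7E, 0x7D

def frame(data: bytes) -> bytes:
    out = bytearray([FLAG])                 # start-of-frame marker
    for b in data:
        if b in (FLAG, ESC):                # escape marker-like data bytes
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    out.append(FLAG)                        # end-of-frame marker
    return bytes(out)

print(frame(b"\x01\x7e\x02").hex())  # 7e017d5e027e
```

The 0x7E byte inside the payload is rewritten as the pair 0x7D 0x5E, so the only unescaped 0x7E bytes on the wire are the true frame boundaries.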

7. Physical Layer
The physical link layer defines the electrical characteristics of the communications channel and the
transmitted signals. This includes voltage levels, connector types, cabling, data rate etc. It provides the
physical interface.
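The virtual-versus-actual data flow behind the seven layers can be sketched as encapsulation: each layer adds a header on the way down the transmitter's stack, and its peer removes it on the way up at the recipient. The bracketed labels below stand in for real protocol headers.

```python
# Simplified encapsulation sketch for the OSI layer stack.
LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link", "Physical"]

def transmit(data: str) -> str:
    for layer in LAYERS:                 # top to bottom at the transmitter
        data = f"[{layer}]{data}"        # each layer prepends its header
    return data                          # what actually crosses the medium

def receive(frame: str) -> str:
    for layer in reversed(LAYERS):       # bottom to top at the recipient
        prefix = f"[{layer}]"
        assert frame.startswith(prefix)  # peer layers must agree
        frame = frame[len(prefix):]      # each layer strips its peer's header
    return frame

wire = transmit("Data")
print(receive(wire))   # Data
```

Each layer only ever reads the header written by its peer, which is why the diagram shows a "virtual" horizontal flow even though the bits physically travel down one stack and up the other.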

Network Cable Types

The cable type used on a network depends on several parameters including:


 The data bit rate
 The reliability of the cable
 The maximum length between nodes
 The possibility of electrical hazards
 Power loss in cables
 Tolerance of harsh conditions
 Expense and general availability of the cable
 Ease of connection and maintenance
 Ease of running cables

The main types of cables used in networks are


a) twisted-pair,
b) coaxial
c) fibre-optic.

a) Twisted-pair
Twisted-pair and coaxial cables transmit electric signals, whereas fibre-optic cables transmit light
pulses. Twisted-pair cables are not shielded and thus interfere with nearby cables. Public telephone
lines generally use twisted-pair cables. In LANs they are generally used up to bit rates of 10 Mbps and
with maximum lengths of 100m.

b) Coaxial cable
Coaxial cable has a grounded metal sheath around the signal conductor. This limits the amount of
interference between cables and thus allows higher data rates. Typically they are used at bit rates of
100 Mbps for maximum lengths of 1 km.


c) Fibre-optic
The highest specification of the three cables is fibre-optic. This type of cable allows extremely high bit
rates over long distances. Fibre-optic cables do not interfere with nearby cables and give greater
security, more protection from electrical damage by external equipment and greater resistance to harsh
environments; they are also safer in hazardous environments.

Internetworking Connections

Most modern networks have a backbone, which is a common link to all the networks within an
organization. This backbone allows users on different network segments to communicate and also
allows data into and out of the local network.

Networks are partitioned from other networks using a bridge, a gateway or a router. A bridge links
two networks of the same type. A gateway connects two networks of dissimilar type. Routers operate
rather like gateways and can either connect two similar networks or two dissimilar networks. The key
operation of a gateway, bridge or router is that it only allows data traffic through itself when the data
is intended for another network which is outside the connected network. This filters traffic and stops
traffic not intended for the network from clogging up the backbone. Modern bridges, gateways and
routers are intelligent and can determine the network topology. A spanning-tree bridge allows
multiple network segments to be interconnected. If more than one path exists between individual
segments then the bridge finds alternative routes. This is useful in routing frames away from heavy
traffic routes or around a faulty route.

A repeater is used to increase the maximum interconnection length since, for a given cable
specification and bit rate, each cable run has a maximum length.

Network Standards
Standards are good because they allow many different implementations of interoperable technology.
However, they are slow to develop, and multiple standards organizations may produce different
standards for the same functions.

Application of Computer Networks within an Organization


Connectivity is the ability and means to connect a microcomputer by telephone or other
telecommunication links to other computers and information sources around the world.
The connectivity options that make communication available to end-users include:
 Fax machines (Facsimile transmission machines).
 E-mail (Electronic mail)
 Voice messaging systems
 Video conferencing systems
 Shared resources
 Online services


Fax machines
Fax machines convert images to signals that can be sent over a telephone line to a receiving machine.
They are extremely popular in offices. They can scan the image of a document and print the image on
paper. Microcomputers use fax/modem circuit boards to send and receive fax messages.

E-mail (electronic mail)


E-mail is a method of sending an electronic message between individuals or computers. One can
receive e-mail messages even when one is not on the computer. E-mail messages can contain text,
graphics, images as well as sound.

Voice messaging systems


Voice messaging systems are computer systems linked to telephones that convert human voice into
digital bits. They resemble conventional answering machines and electronic mail systems. They can
receive large numbers of incoming calls and route them to appropriate ‘voice mailboxes’ which are
recorded voice messages. They can forward calls and deliver the same message to many people.

Video conferencing systems
Video conferencing systems are computer systems that allow people located at various geographic
locations to have in-person meetings. They can use specially equipped videoconferencing rooms to
hold meetings. Desktop videoconferencing systems use microcomputers equipped with inexpensive
video cameras and microphones that sit atop a computer monitor.

Shared resources
Shared resources are communication networks that permit microcomputers to share expensive
hardware such as laser printers, chain printers, disk packs and magnetic tape storage. Several
microcomputers linked in a network make shared resources possible. The connectivity capabilities of
shared resources provide the ability to share data located on a computer.

Online services
Online services are business services offered specifically for microcomputer users. Well-known online
service providers are America Online (AOL), AT&T WorldNet, CompuServe, Africa Online,
Kenyaweb, UUNET, Wananchi Online and Microsoft Network. Typical online services offered by
these providers are:
 Teleshopping- a database which lists prices and description of products. You place an order,
charge the purchase to a credit card and merchandise is delivered by a delivery service.
 Home banking – banks offer this service so you can use your microcomputer to pay bills,
make loan payments, or transfer money between accounts.
 Investing – investment firms offer this service so you can access current prices of stocks and
bonds. You can also buy and sell orders.
 Travel reservations – travel organizations offer this service so you can get information on
airline schedules and fare, order tickets, and charge to a credit card.

 Internet access – you can get access to the World Wide Web.

Internet

The Internet is a giant worldwide network. The Internet started in 1969 when the United States
government funded a major research project on computer networking called ARPANET (Advanced
Research Project Agency NETwork). When on the Internet you move through cyberspace.

Cyberspace is the space of electronic movement of ideas and information.


The web provides a multimedia interface to resources available on the Internet. It is also known as
WWW or World Wide Web. The web was first introduced in 1992 at CERN (Centre for European
Nuclear Research) in Switzerland. Prior to the web, the Internet was all text with no graphics,
animations, sound or video.

Common Internet applications


Some of the applications of the Internet include:
 Communicating
- Communicating on the Internet includes e-mail, discussion groups (newsgroups), and chat
groups
- You can use e-mail to send or receive messages to people around the world
- You can join discussion groups or chat groups on various topics

 Shopping
- Shopping on the Internet is called e-commerce
- You can window shop at cyber malls called web storefronts
- You can purchase goods using checks, credit cards or electronic cash called electronic
payment
 Researching
- You can do research on the Internet by visiting virtual libraries and browsing through
stacks of books
- You can read selected items at the virtual libraries and even check out books
 Entertainment
- There are many entertainment sites on the Internet such as live concerts, movie
previews and book clubs
- You can also participate in interactive live games on the Internet

How Do You Get Connected To The Internet?

You get connected to the Internet through a computer. Connection to the Internet is referred to as access
to the Internet. Using a provider is one of the most common ways users can access the Internet. A
provider is also called a host computer and is already connected to the Internet. A provider provides a
path or connection for individuals to access the Internet.

There are three widely used providers:

(i) Colleges and universities – colleges and universities provide free access to the Internet
through their Local Area Networks,
(ii) Internet Service Providers (ISP) – ISPs offer access to the Internet for a fee. They are more
expensive than online service providers.
(iii) Online Service Providers – provide access to the Internet and a variety of other services for
a fee. They are the most widely used source for Internet access and less expensive than ISP.

Connections
There are three types of connections to the Internet through a provider:
o Direct or dedicated
o SLIP and PPP
o Terminal connection

Direct or dedicated
This is the most efficient access method to all functions on the Internet. However it is expensive and
rarely used by individuals. It is used by many organizations such as colleges, universities, service
providers and corporations.

SLIP and PPP
This type of connection is widely used by end users to connect to the Internet. It is slower and less
convenient than direct connection. However it provides a high level of service at a lower cost than
direct connection. It uses a high-speed modem and standard telephone line to connect to a provider that
has a direct connection to the Internet. It requires special software protocol: SLIP (Serial Line Internet
Protocol) or PPP (Point-to-Point Protocol). With this type of connection your computer becomes part
of a client/server network. It requires special client software to communicate with server software
running on the provider’s computer and other Internet computers.

Terminal connection
This type of connection also uses a high-speed modem and standard telephone line. Your computer
becomes part of a terminal network with a terminal connection. With this connection, your computer’s
operations are very limited because it only displays communication that occurs between provider and
other computers on the Internet. It is less expensive than SLIP or PPP but not as fast or convenient.

Internet protocols
The standard protocol for the Internet is TCP/IP. TCP/IP (Transmission Control Protocol/Internet
Protocol) is the set of rules for communicating over the Internet. Protocols control how messages are
broken down, sent and reassembled. With TCP/IP, a message is broken down into small parts called
packets before it is sent over the Internet. Each packet is sent separately, possibly travelling through
different routes to a common destination. The packets are reassembled into correct order at the
receiving computer.
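The packetizing idea just described can be sketched as follows: the message is split into numbered packets, which may travel (and arrive) in any order, and the receiving side reassembles them by sequence number. The 4-byte packet size is an arbitrary choice for the example.

```python
# Sketch of TCP/IP-style packetization and reassembly.
def packetize(message: bytes, size: int = 4):
    """Split a message into (sequence number, chunk) pairs."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Restore the original message by sorting on sequence numbers."""
    return b"".join(data for _seq, data in sorted(packets))

packets = packetize(b"hello internet")
packets.reverse()              # simulate out-of-order arrival
print(reassemble(packets))     # b'hello internet'
```

Because each packet carries its own sequence number, the routes the packets take do not matter; the receiver can always put them back in order.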

Internet services
The four commonly used services on the Internet are:
 Telnet
 FTP
 Gopher

 The Web

Telnet
 Telnet allows you to connect to another computer (host) on the Internet
 With Telnet you can log on to the computer as if you were a terminal connected to it
 There are hundreds of computers on the Internet you can connect to
 Some computers allow free access; some charge a fee for their use

FTP (File Transfer Protocol)


 FTP allows you to copy files on the Internet
 If you copy a file from an Internet computer to your computer, it is called downloading.
 If you copy a file from your computer to an Internet computer, it is called uploading.
Gopher
 Gopher allows you to search and retrieve information at a particular computer site called a
gopher site
 Gopher is a software application that provides menu-based functions for the site.
 It was originally developed at the University of Minnesota in 1991
 Gopher sites are computers that provide direct links to available resources, which may be on
other computers
 Gopher sites can also handle FTP and Telnet to complete their retrieval functions
The Web
 The web is a multimedia interface to resources available on the Internet
 It connects computers and resources throughout the world
 It should not be confused with the term Internet
Browser
 A browser is a special software used on a computer to access the web
 The software provides an uncomplicated interface to the Internet and web documents
 It can be used to connect you to remote computers using Telnet
 It can be used to open and transfer files using FTP
 It can be used to display text and images using the web
 Two well-known browsers are:
o Netscape communicator
o Microsoft Internet Explorer

Uniform Resource Locators (URLs)


 URLs are addresses used by browsers to connect to other resources
 URLs have at least two basic parts
o Protocol – used to connect to the resource, HTTP (Hyper Text Transfer Protocol) is
the most common.
o Domain Name – the name of the server where the resource is located
 Many URLs have additional parts specifying directory paths, file names and pointers
 Connecting to a URL means that you are connecting to another location called a web site
 Moving from one web site to another is called surfing
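The two basic parts of a URL named above can be extracted with the standard library's urllib.parse. The URL itself is a made-up example.

```python
# Splitting a URL into protocol, domain name and path.
from urllib.parse import urlparse

parts = urlparse("http://www.example.com/docs/page.html")
print(parts.scheme)   # http              (the protocol)
print(parts.netloc)   # www.example.com   (the domain name)
print(parts.path)     # /docs/page.html   (directory path and file name)
```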

Web portals
Web portals are sites that offer a variety of services typically including e-mail, sports updates, financial
data, news and links to selected websites. They are designed to encourage you to visit them each time
you access the web. They act as your home base and as a gateway to their resources.

Web pages
A web page is a document file sent to your computer when the browser has connected to a website.
The document file may be located on a local computer or halfway around the world. The document
file is formatted and displayed on your screen as a web page through the interpretation of special
command codes embedded in the document called HTML (Hyper Text Mark-up Language).
Typically the first web page on a website is referred to as the home page. The home page presents
information about the site and may contain references and connections to other documents or sites
called hyperlinks. Hyperlink connections may contain text files, graphic images, audio and video clips.
Hyperlink connections can be accessed by clicking on the hyperlink.

Applets and Java
 Web pages contain links to special programs called applets written in a programming language
called Java.
 Java applets are widely used to add interest and activity to a website.
 Applets can provide animation, graphics, interactive games and more.
 Applets can be downloaded and run by most browsers.

Search tools
Search tools developed for the Internet help users locate precise information. To access a search tool,
you must visit a web site that has a search tool available. There are two basic types of search tools
available:
- Indexes

- Search engines
Indexes
 Indexes are also known as web directories
 They are organized by major categories, e.g. health, entertainment, education, etc.
 Each category is further organized into sub categories
 Users can continue to search through subcategories until a list of relevant documents appears
 The best known search index is Yahoo

Search engines
 Search engines are also known as web crawlers or web spiders
 They are organized like a database
 Key words and phrases can be used to search through a database
 Databases are maintained by special programs called agents, spiders or bots
 Widely used search engines are Google, HotBot and AltaVista.
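The database organization behind a search engine can be sketched as an inverted index: each key word maps to the set of documents that contain it. A minimal illustration in Python (the documents are invented):

```python
# Sketch of a search engine's database: an inverted index mapping each
# key word to the set of documents that contain it (documents invented).
documents = {
    "doc1": "health tips for better sleep",
    "doc2": "entertainment news and celebrity gossip",
    "doc3": "health and entertainment combined",
}

# Build the index; in a real engine, spiders/bots would gather the pages.
index = {}
for doc_id, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(*keywords):
    """Return the documents containing every given key word."""
    hits = [index.get(word, set()) for word in keywords]
    return set.intersection(*hits) if hits else set()

print(sorted(search("health")))                   # ['doc1', 'doc3']
print(sorted(search("health", "entertainment")))  # ['doc3']
```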
Web utilities
Web utilities are programs that work with a browser to increase your speed, productivity and capabilities. These utilities can be included in a browser. Some utilities are free on the Internet while others are available for a nominal charge. There are two categories of web utilities:
 Plug-ins
 Helper applications
Plug-ins
 A plug-in is a program that automatically loads and operates as part of your browser.
 Many websites require plug-ins for users to fully experience web page contents
 Some widely used plug-ins are:
a) Shockwave from macromedia – used for web-based games, live concerts and
dynamic animations
b) QuickTime from Apple – used to display video and play audio
c) Live-3D from Netscape – used to display three-dimensional graphics and virtual
reality
Helper Applications


Helper applications are also known as add-ons. They are independent programs that can be executed or launched from your browser. The four most common types of helper applications are:
 Off-line browsers – also known as web-downloading utilities and pull products.
An off-line browser is a program that automatically connects to selected websites. It downloads HTML documents and saves them to your hard disk. The documents can be read later without being connected to the Internet.
 Information pushers – also known as web broadcasters or push products.
They automatically gather information on topic areas called channels. The topics are then sent to your
hard disk. The information can be read later without being connected to the Internet.
 Metasearch utilities – also known as metasearch programs.
They automatically submit search requests to several indices and search engines. They receive the
results, sort them, eliminate duplicates and create an index.
 Filters
Filters are programs that allow parents or organizations to block out selected sites e.g. adult sites. They
can monitor the usage and generate reports detailing time spent on activities.
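The core of a metasearch utility described above, submitting a query to several engines and then merging the results while eliminating duplicates, can be sketched in a few lines of Python (the result lists and URLs are invented):

```python
# Hypothetical result lists returned by two search engines (URLs invented).
engine_a = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]
engine_b = ["http://example.com/b", "http://example.com/d"]

def metasearch(*result_lists):
    """Merge several engines' results, eliminating duplicates while
    preserving the order in which each result was first seen."""
    seen, merged = set(), []
    for results in result_lists:
        for url in results:
            if url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

print(metasearch(engine_a, engine_b))
```

A real metasearch program would also sort the merged list by relevance before building its index; the deduplication step is the essential part shown here.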

Discussion Groups
There are several types of discussion groups on the Internet:
 Mailing lists
 Newsgroups
 Chat groups
Mailing lists
In this type of discussion groups, members communicate by sending messages to a list address. To
join, you send your e-mail request to the mailing list subscription address. To cancel, send your email
request to unsubscribe to the subscription address.

Newsgroups
Newsgroups are the most popular type of discussion group. They use a special network of computers called UseNet. Each UseNet computer maintains the newsgroup listing. There are over 10,000 different
newsgroups organized into major topic areas. Newsgroup organization hierarchy system is similar to
the domain name system. Contributions to a particular newsgroup are sent to one of the UseNet
computers. UseNet computers save messages and periodically share them with other UseNet
computers. Interested individuals can read contributions to a newsgroup.

Chat groups
Chat groups are becoming a very popular type of discussion group. They allow direct ‘live’
communication (real time communication). To participate in a chat group, you need to join by selecting
a channel or a topic. You communicate live with others by typing words on your computer. Other
members of your channel immediately see the words on their computers and they can respond. The


most popular chat service is called Internet Relay Chat (IRC), which requires special chat client
software.

Instant messaging
Instant messaging is a tool to communicate and collaborate with others. It allows one or more people
to communicate with direct ‘live’ communication. It is similar to chat groups, but it provides greater
control and flexibility. To use instant messaging, you specify a list of friends (buddies) and register
with an instant messaging server e.g. Yahoo Messenger. Whenever you connect to the Internet, special
software will notify your messaging server that you are online. It will notify you if any of your friends
are online and will also notify your buddies that you are online.

E-mail (Electronic Mail)


E-mail is the most common Internet activity. It allows you to send messages to anyone in the world
who has an Internet e-mail account. You need access to the Internet and e-mail program to use this
type of communication. Two widely used e-mail programs are Microsoft’s Outlook Express and
Netscape’s Communicator.

E-mail has three basic elements:


(i) Header – appears first in an e-mail message and contains the following information
a. Address – the address of the person(s) that is to receive the e-mail
b. Subject – a one line description of the message displayed when a person checks their
mail
c. Attachment – files that can be sent by the e-mail program
(ii) Message – the text of the e-mail communication
(iii) Signature – may include sender’s name, address and telephone number (optional)

E-mail addresses
The most important element of an e-mail message is the address of the person who is to receive the
letter. The Internet uses an addressing method known as the Domain Name System (DNS). The system
divides an address into three parts:

(i) User name – identifies a unique person or computer at the listed domain
(ii) Domain name – refers to a particular organization
(iii) Domain code – identifies the geographical or organizational area

Almost all ISPs and online service providers offer e-mail service to their customers.
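The three parts of an e-mail address can be pulled apart with a short Python function (the address used is fictitious, and the split is a simplification that treats the last dot-separated label as the domain code):

```python
def split_address(address):
    """Split an e-mail address into the three DNS parts described above.
    Simplification: the domain code is taken to be the last
    dot-separated label of the domain (e.g. 'com', 'edu' or a country code)."""
    user, _, domain = address.partition("@")
    labels = domain.split(".")
    return {
        "user name": user,
        "domain name": ".".join(labels[:-1]),
        "domain code": labels[-1],
    }

# A fictitious address used only for illustration.
print(split_address("jkamau@strathmore.edu"))
```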

The main advantages of email are:


 It is normally much cheaper than using the telephone (although, since time equates to money for most companies, any saving depends on the user's typing speed).
 Many different types of data can be transmitted, such as images, documents, speech etc.
 It is much faster than the postal service.
 Users can filter incoming email easier than incoming telephone calls.
 It normally cuts out the need for work to be typed, edited and printed by a secretary.
 It reduces the burden on the mailroom


 It is normally more secure than traditional methods


 It is relatively easy to send to groups of people (traditionally, either a circulation list was
required or a copy to everyone in the group was required).
 It is usually possible to determine whether the recipient has actually read the message (the
electronic mail system sends back an acknowledgement).

The main disadvantages are:


 It stops people using the telephone
 It cannot be used as a legal document
 Electronic mail messages can be sent on the spur of the moment and may be regretted later on (sending by traditional methods normally allows for a rethink). In extreme cases messages can be sent to the wrong person (typically when replying to an email message, where the reply is sent to the mailing list rather than the originator).
 It may be difficult to send to some remote sites. Many organizations have either no electronic
mail or merely an intranet. Large companies are particularly wary of Internet connections and
limit the amount of external traffic.
 Not everyone reads his or her electronic mail on a regular basis (although this is changing as
more organizations adopt email as the standard communication medium).

The main standards that relate to the protocols of email transmission and reception are:
 Simple Mail Transfer Protocol (SMTP) – which is used with the TCP/IP suite. It has
traditionally been limited to the text-based electronic messages.
 Multipurpose Internet Mail Extensions (MIME) – which allows the transmission and reception of mail that contains various types of data, such as speech, images and motion video. It is a newer standard than SMTP and uses much of its basic protocol.
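Python's standard email library can illustrate the MIME idea: a message with header fields, a text body with a signature, and a non-text attachment. The addresses, names and file content below are made up for illustration:

```python
from email.message import EmailMessage

# Build a message with the three basic elements: header, message and signature.
msg = EmailMessage()
msg["To"] = "recipient@example.com"   # address (made up)
msg["From"] = "sender@example.com"
msg["Subject"] = "Meeting agenda"     # one-line description of the message

# The message text, with an optional signature at the end.
msg.set_content("Please find the agenda attached.\n\n--\nJ. Kamau, Nairobi")

# MIME is what lets the same message carry non-text data as an attachment.
msg.add_attachment(b"fake file bytes", maintype="application",
                   subtype="octet-stream", filename="agenda.pdf")

print(msg["Subject"])           # Meeting agenda
print(msg.get_content_type())   # multipart/mixed once an attachment is added
```

Note how adding the attachment turns a plain-text message into a multipart MIME message; an SMTP server would then carry the whole structure as ordinary mail.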

Organizational Internets: Intranets and Extranets


An organization may experience two disadvantages in having a connection to the WWW and the
Internet:

 The possible use of the Internet for non-useful applications (by employees).
 The possible connection of non-friendly users from the global connection into the organization’s
local network.

For these reasons, many organizations have shied away from connection to the global network and
have set-up intranets and extranets.

An organizational Internet is the application of Internet technologies within a business network. It is


used to connect employees to each other and to other organizations. There are two types of technologies
used in organizational Internets:

o Intranets – a private network within an organization


o Extranets – a private network that connects more than one organization

Firewalls are often used to protect organizational Internets from external threats.


Intranets
Intranets are in-house, tailor-made networks for use within the organization and provide limited access
(if any) to outside services and also limit the external traffic (if any) into the intranet. An intranet might
have access to the Internet but there will be no access from the Internet to the organization’s intranet.

Organizations which have a requirement for sharing and distributing electronic information normally
have three choices:
- Use a proprietary groupware package such as Lotus Notes
- Set up an Intranet
- Set up a connection to the Internet

Groupware packages normally replicate data locally on a computer whereas Intranets centralize their
information on central servers which are then accessed by a single browser package. The stored data
is normally open and can be viewed by any compatible WWW browser. Intranet browsers have the
great advantage over groupware packages in that they are available for a variety of clients, such as
PCs, Macs, UNIX workstations and so on. A client browser also provides a single GUI interface, which
offers easy integration with other applications such as electronic mail, images, audio, video, animation
and so on.

The main elements of an Intranet are:


 Intranet server hardware
 Intranet server software
 TCP/IP stack software on the clients and servers
 WWW browsers
 A firewall

Other properties defining an Intranet are:


 Intranets use browsers, websites, and web pages to resemble the Internet within the business.
 They typically provide internal e-mail, mailing lists, newsgroups and FTP services
 These services are accessible only to those within the organization

Extranets
Extranets (external Intranets) allow two or more companies to share parts of their Intranets related to
joint projects. For example two companies may be working on a common project, an Extranet would
allow them to share files related with the project.
 Extranets allow other organizations, such as suppliers, limited access to the organization’s
network.
 The purpose of the extranet is to increase efficiency within the business and to reduce costs

Firewalls
 A firewall (or security gateway) is a security system designed to protect organizational
networks. It protects a network against intrusion from outside sources. They may be
categorized as those that block traffic or those that permit traffic.


 It consists of hardware and software that control access to a company’s intranet, extranet and
other internal networks.
 It includes a special computer called a proxy server, which acts as a gatekeeper.
 All communications between the company’s internal networks and outside world must pass
through this special computer.
 The proxy server decides whether to allow a particular message or file to pass through.
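The gatekeeper decision a proxy server makes can be reduced to a rule check. A minimal sketch in Python, where the block list and port rules are invented for illustration:

```python
# Minimal sketch of a proxy server's gatekeeper decision.
# The block list and allowed ports below are invented for illustration.
BLOCKED_HOSTS = {"badsite.example", "adult.example"}
ALLOWED_PORTS = {80, 443}   # plain and secure web traffic only

def allow_request(host, port):
    """Decide whether a message may pass between the internal
    network and the outside world."""
    if host in BLOCKED_HOSTS:
        return False
    if port not in ALLOWED_PORTS:
        return False
    return True

print(allow_request("www.example.com", 443))  # True
print(allow_request("badsite.example", 80))   # False
```

Real firewalls apply far richer rules (addresses, protocols, traffic direction, content inspection), but every rule ultimately yields the same allow-or-deny decision shown here.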

Information Superhighway

Information superhighway is a name first used by former US Vice President Al Gore for the vision of
a global, high-speed communications network that will carry voice, data, video and other forms of
information all over the world, and that will make it possible for people to send e-mail, get up-to-the-
minute news, and access business, government and educational information. The Internet is already
providing many of these features, via telephone networks, cable TV services, online service providers
and satellites.

It is commonly used as a synonym for National Information Infrastructure (NII). NII is a proposed,
advanced, seamless web of public and private communications networks, interactive services,
interoperable hardware and software, computers, databases, and consumer electronics to put vast
amounts of information at user’s fingertips.

Terminology

 Multiplexors/concentrators
They are the devices that use several communication channels at the same time. A multiplexor allows
a physical circuit to carry more than one signal at one time when the circuit has more capacity
(bandwidth) than individual signals required. It transmits and receives messages and controls the
communication lines to allow multiple users access to the system. It can also link several low-speed
lines to one high-speed line to enhance transmission capabilities.

 Front end communication processors


They are the hardware devices that connect all network communication lines to a central computer to
relieve the central computer from performing network control, format conversion and message
handling tasks. Other functions that a front-end communication processor performs are:

o Polling and addressing of remote units


o Dialling and answering stations on a switched network
o Determining which remote station a block is to be sent
o Character code translation
o Control character recognition and error checking
o Error recovery and diagnostics
o Activating and deactivating communication lines

 Cluster controllers


They are the communications terminal control units that control a number of devices such as terminals,
printers and auxiliary storage devices. In such a configuration devices share a common control unit,
which manages input/output operations with a central computer. All messages are buffered by the
terminal control unit and then transmitted to the receivers.

 Protocol converters
They are devices used to convert from one protocol to another such as between asynchronous and
synchronous transmission. Asynchronous terminals are attached to host computers or host
communication controllers using protocol converters. Asynchronous communication techniques do not
allow easy identification of transmission errors; therefore, slow transmission speeds are used to
minimize the potential for errors. It is desirable to communicate with the host computer using
synchronous transmission if high transmission speeds or rapid response is needed.

 Multiplexing
Multiplexing is sending multiple signals or streams of information on a carrier at the same time in the
form of a single, complex signal and then recovering the separate signals at the receiving end. Analog
signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the carrier
bandwidth is divided into sub-channels of different frequency widths, each carrying a signal at the
same time in parallel. Digital signals are commonly multiplexed using time-division multiplexing
(TDM), in which the multiple signals are carried over the same channel in alternating time slots. In
some optical fibre networks, multiple signals are carried together as separate wavelengths of light in a
multiplexed signal using dense wavelength division multiplexing (DWDM).
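Time-division multiplexing can be demonstrated in a few lines: samples from several signals are interleaved into alternating time slots on one channel, and the receiving end recovers each signal by taking every n-th slot (the signal values are illustrative):

```python
# Time-division multiplexing (TDM) sketch: samples from several signals
# are interleaved into alternating time slots on one shared channel.
def tdm_multiplex(signals):
    """Round-robin the signals' samples onto a single stream."""
    return [sample for slot in zip(*signals) for sample in slot]

def tdm_demultiplex(stream, n_signals):
    """Recover each signal by taking every n-th time slot."""
    return [stream[i::n_signals] for i in range(n_signals)]

a = ["a1", "a2", "a3"]          # illustrative samples from signal A
b = ["b1", "b2", "b3"]          # illustrative samples from signal B
line = tdm_multiplex([a, b])
print(line)                     # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
print(tdm_demultiplex(line, 2)) # [['a1', 'a2', 'a3'], ['b1', 'b2', 'b3']]
```

FDM works differently, dividing bandwidth by frequency rather than by time, but the goal is the same: several signals sharing one carrier.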

 Circuit-switched
Circuit-switched is a type of network in which a physical path is obtained for and dedicated to a single
connection between two end-points in the network for the duration of the connection. Ordinary voice
phone service is circuit-switched. The telephone company reserves a specific physical path to the
number you are calling for the duration of your call. During that time, no one else can use the physical
lines involved.

Circuit-switched is often contrasted with packet-switched. Some packet-switched networks such as the
X.25 network are able to have virtual circuit-switching. A virtual circuit-switched connection is a
dedicated logical connection that allows sharing of the physical path among multiple virtual circuit
connections.

 Packet-switched
Packet-switched describes the type of network in which relatively small units of data called packets
are routed through a network based on the destination address contained within each packet. Breaking
communication down into packets allows the same data path to be shared among many users in the
network. This type of communication between sender and receiver is known as connectionless (rather
than dedicated). Most traffic over the Internet uses packet switching and the Internet is basically a
connectionless network.
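Packet switching can be sketched by breaking messages into packets that each carry a destination address, mixing traffic from several users on one shared path, and letting each receiver reassemble only its own packets (the hosts and messages are invented):

```python
# Packet-switching sketch: each packet carries the destination address,
# so traffic from many users can share the same path (connectionless).
def packetize(dest, message, size=4):
    """Break a message into small packets addressed to dest."""
    return [{"dest": dest, "seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets, dest):
    """A receiver keeps only its own packets and restores their order."""
    mine = sorted((p for p in packets if p["dest"] == dest),
                  key=lambda p: p["seq"])
    return "".join(p["data"] for p in mine)

# Two invented users' packets interleaved on the shared network.
traffic = packetize("hostA", "HELLO WORLD") + packetize("hostB", "GOODBYE")
print(reassemble(traffic, "hostA"))  # HELLO WORLD
print(reassemble(traffic, "hostB"))  # GOODBYE
```

The sequence numbers matter because, unlike a circuit-switched call, packets may arrive out of order after taking different routes through the network.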

 Virtual circuit


A virtual circuit is a circuit or path between points in a network that appears to be a discrete, physical
path but is actually a managed pool of circuit resources from which specific circuits are allocated as
needed to meet traffic requirements.
A permanent virtual circuit (PVC) is a virtual circuit that is permanently available to the user just as
though it were a dedicated or leased line continuously reserved for that user. A switched virtual circuit
(SVC) is a virtual circuit in which a connection session is set up for a user only for the duration of a
connection. PVCs are an important feature of frame relay networks and SVCs are proposed for later
inclusion.

 Closed circuit television


Closed circuit television (CCTV) is a television system in which signals are not publicly distributed;
cameras are connected to television monitors in a limited area such as a store, an office building, or on
a college campus. CCTV is commonly used in surveillance systems.

 VSAT
VSAT (Very Small Aperture Terminal) is a satellite communications system that serves home and
business users. A VSAT end user needs a box that interfaces between the user's computer and an
outside antenna with a transceiver. The transceiver receives or sends a signal to a satellite transponder
in the sky. The satellite sends and receives signals from an earth station computer that acts as a hub for
the system. Each end user is interconnected with the hub station via the satellite in a star topology. For
one end user to communicate with another, each transmission has to first go to the hub station which
retransmits it via the satellite to the other end user's VSAT. VSAT handles data, voice, and video
signals.

VSAT offers a number of advantages over terrestrial alternatives. For private applications, companies
can have total control of their own communication system without dependence on other companies.
Business and home users also get higher speed reception than if using ordinary telephone service or
ISDN.

BENEFITS AND CHALLENGES OF NETWORKS IN AN


ORGANIZATION

Benefits of Networks in an Organization


Computer networks have greatly benefited education, business and many other sectors; they can be seen everywhere, connecting people all over the world. Some of the major advantages that computer networks provide, making life easier and more relaxed, are listed below.

a. Communication
Communication is one of the biggest advantages provided by computer networks. Computer networking technology has improved the way people communicate: people from the same or different organizations can communicate in a matter of minutes to collaborate on work activities. In offices and organizations, computer networks serve as the backbone of daily communication from the top to the bottom level of the organization. Different types of software can be installed for transmitting messages and e-mails at high speed.


b. Data sharing
Another wonderful advantage of computer networks is data sharing. Data such as documents, files, accounts information, reports, multimedia etc. can be shared with the help of computer networks. Hardware sharing and application sharing are also allowed in many organizations such as banks and small firms.

c. Instant and multiple accesses


Computer networks support multiple access: many users can access the same information at the same time. Immediate commands, such as printing commands, can be issued with the help of computer networks.

d. Video conferencing
Before the arrival of computer networks there was no concept of video conferencing. LANs and WANs have made it possible for organizations and business sectors to hold live video conferences for important discussions and meetings.

e. Internet Service
Computer networks provide internet service over the entire network. Every single computer attached to the network can experience high-speed internet, with fast processing and workload distribution.

f. Broadcasting
With the help of computer networks, news and important messages can be broadcast in a matter of seconds, which saves a lot of time and effort. People can exchange messages immediately over the network at any time, 24 hours a day.

g. Photographs and large files


Computer networks can also be used for sending large data files, such as high-resolution photographs, over the network.

h. Saves Cost
Computer networks save organizations a lot of cost in different ways. Building links through computer networks allows files and messages to be transferred immediately, which reduces transportation and communication expenses. Networking also raises the standard of the organization because of the advanced technologies used.

i. Remote access and login


Employees of different organizations, or of the same organization, connected by networks can access the network by simply entering the remote network IP or web remote IP. The communication gap that existed before computer networks no longer exists.

j. Flexible
Computer networks are quite flexible: all of their topologies and networking strategies support the addition of extra components and terminals. They are equally suited to large and small organizations.

k. Reliable
Computer networks are reliable where safety of data is concerned: if one of the attached systems collapses, the same data can be retrieved from another system attached to the same network.


l. Data transmission
Data is transferred at high speed even when one or two terminal machines fail to work properly. Data transmission is seldom affected in computer networks; almost complete communication can be achieved even in critical scenarios.

m. Provides broader view


For the ordinary person, computer networks are a means of sharing individual views with the rest of the world.

Challenges of Networks in an Organization


There are many aspects to network design, all of which present unique challenges. From a business
perspective, planning and executing an effective network strategy requires a substantial ongoing
organizational commitment. The organization must be committed to developing an infrastructure that
facilitates communication of the business objectives to the network planning team. The organization
must also develop internal standards, methods, and procedures to promote effective planning. A
commitment to do things the “right” way means adhering to the standardized processes and
procedures even when there are substantial pressures to take risky shortcuts.

The organization should strive to hire, train, and retain skilled managers and staff who understand
technology and how it can be used to satisfy organizational objectives. This is not easy, given the
highly competitive job market for network specialists, and the rapid proliferation of new networking
technologies. During the planning process, potentially serious political and organizational issues
should be identified. For instance, people may feel threatened if they believe that the proposed
network will compromise their power or influence. Consequently, they may attempt to hinder the
project’s progress. The organization must confront these fears and develop strategies for dealing with
them.

In addition to organizational challenges, numerous technical challenges must be faced when designing
a network. Perhaps the foremost challenge is the sheer multiplicity of options that must be considered.
Added to this is the fact that current networks continue to grow in size, scope, and complexity. On top
of this, the networking options available are in a constant state of flux. Keeping abreast of new
developments and relating them to organizational requirements is a formidable task, and it is rare that
an organization will have all the in-house expertise that it needs to do this well.

Often consultants and outside vendors are needed to help plan and implement the network. It is much
easier to manage the activities of the consultants if the organization has a firm grip on the business
objectives and requirements. However, sometimes consultants are needed to help develop and specify
the business objectives and requirements. Although outside consultants offer benefits such as
expertise and objectivity, they also present their own set of challenges. For instance, it is important to
develop a “technology transfer” plan when working with outside consultants, to make sure that in-
house staff can carry on as needed after the consultant leaves.

Through the 1970s and 1980s, if you wanted a network, you could call IBM and they would design
your network. It was a common adage that “the only risk is not buying IBM.” However, for the
foreseeable future, there will be increasing numbers of network vendors in the marketplace and a
decreasing likelihood that any one vendor will satisfy all of the organization’s network requirements.
While often unavoidable, using multiple vendors can pose problems, particularly when there are

problems with the network implementation and each vendor is pointing a finger at the other. Since it
is increasingly likely that a particular network vendor will provide only a part of the network solution,
it is incumbent on the network design team to make sure that the global network requirements are
addressed.

In short, the sheer volume, complexity, and pace of change in technology complicate the already
formidable task of network design. Strategies for meeting these challenges are dictated by common
sense and good management principles. We briefly summarize some of these strategies below:

a) Develop methods for hiring and retaining good staff.


b) Where necessary, augment existing staff with consultants and vendor support.
c) Use training and internal communication to reduce the fears of those affected by the network.
d) Encourage and offer ongoing education to help staff remain current with new trends in
technology.
e) A voluminous amount of technical information is available from a variety of sources, such as vendor/telco/consultant presentations, conferences, technical books and magazines, and the Internet. Turn to these sources on a regular basis to help keep up with new developments in the industry.

LIMITATIONS OF NETWORKS IN AN ORGANIZATION


The main disadvantage of networks is that users become dependent upon them. For example, if a
network file server develops a fault, then many users may not be able to run application programs
and get access to shared data. To overcome this, a back-up server can be switched into action when
the main server fails. A fault on a network may also stop users from being able to access peripherals
such as printers and plotters. To minimize this, a network is normally segmented so that a failure in
one part of it does not affect other parts.

Another major problem with networks is that their efficiency is very dependent on the skill of the
systems manager. A badly managed network may operate less efficiently than non-networked
computers. Also, a badly run network may allow external users into it with little protection against
them causing damage. Damage could also be caused by novices causing problems, such as deleting
important files.

All these could be summarized as below:

1. If a network file server develops a fault, then users may not be able to run application
programs
2. A fault on the network can cause users to lose data (if the files being worked upon are not saved)
3. If the network stops operating, then it may not be possible to access various resources
4. Users' work-throughput becomes dependent upon the network and the skill of the systems manager
5. It is difficult to make the system secure from hackers, novices or industrial espionage
6. Decisions on resource planning tend to become centralized, for example, what word processor is used, what printers are bought, etc.


7. Networks that have grown with little thought can be inefficient in the long term.
8. As traffic increases on a network, the performance degrades unless it is designed properly
9. Resources may be located too far away from some users
10. The larger the network becomes, the more difficult it is to manage

CLOUD COMPUTING
Cloud computing is the use of computing resources (hardware and software) that are delivered as a
service over a network (typically the Internet). The name comes from the common use of a cloud-
shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud
computing entrusts remote services with a user's data, software and computation.

End users access cloud-based applications through a web browser or a light-weight desktop or mobile
application while the business software and user's data are stored on servers at a remote location.
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and
focus on projects that differentiate their businesses instead of infrastructure. Proponents also claim
that cloud computing allows enterprises to get their applications up and running faster, with improved
manageability and less maintenance, and enables IT to more rapidly adjust resources to meet
fluctuating and unpredictable business demand.

In the business model using software as a service (SaaS), users are provided access to application
software and databases. Cloud providers manage the infrastructure and platforms that run the
applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-
per-use basis. SaaS providers generally price applications using a subscription fee.

Proponents claim that the SaaS allows a business the potential to reduce IT operational costs by
outsourcing hardware and software maintenance and support to the cloud provider. This enables the
business to reallocate IT operations costs away from hardware/software spending and personnel
expenses, towards meeting other IT goals. In addition, with applications hosted centrally, updates can
be released without the need for users to install new software. One drawback of SaaS is that the users'
data are stored on the cloud provider's server. As a result, there could be unauthorized access to the
data.

Cloud computing relies on sharing of resources to achieve coherence and economies of scale similar
to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the
broader concept of converged infrastructure and shared services.

History
The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframes became available in academia and corporations, accessible via thin clients / terminal computers, often referred to as "dumb terminals" because they were used for communications but had no internal
computational capacities. To make more efficient use of costly mainframes, a practice evolved that
allowed multiple users to share both the physical access to the computer from multiple terminals as

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 169

well as to share the CPU time. This eliminated periods of inactivity on the mainframe and allowed for
a greater return on the investment. The practice of sharing CPU time on a mainframe became known
in the industry as time-sharing.

In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-
point data circuits, began offering virtual private network (VPN) services with comparable quality of
service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use
overall network bandwidth more effectively. They began to use the cloud symbol to denote the
demarcation point between what the provider was responsible for and what users were responsible for.
Cloud computing extends this boundary to cover servers as well as the network infrastructure.

As computers became more prevalent, scientists and technologists explored ways to make large-scale
computing power available to more users through time sharing, experimenting with algorithms to
provide the optimal use of the infrastructure, platform and applications with prioritized access to the
CPU and efficiency for the end users.

John McCarthy opined in the 1960s that "computation may someday be organized as a public utility."
Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility,
online, illusion of infinite supply), the comparison to the electricity industry and the use of public,
private, government, and community forms, were thoroughly explored in Douglas Parkhill's 1966
book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. Because these powerful computers were so expensive, many corporations and other entities could avail themselves of computing capability only through time-sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN), marketed time-sharing as a commercial venture.

The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web". This contribution focused in particular on the need for better metadata able to describe not only implementation details but also conceptual details of model-based applications.

The ubiquitous availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, has led to tremendous growth in cloud computing.

After the dot-com bubble, Amazon played a key role in the development of cloud computing by
modernizing their data centers, which, like most computer networks, were using as little as 10% of
their capacity at any one time, just to leave room for occasional spikes. Having found that the new
cloud architecture resulted in significant internal efficiency improvements whereby small, fast-
moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster
and more easily, Amazon initiated a new product development effort to provide cloud computing to
external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.


In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying
private clouds. In early 2008, OpenNebula, enhanced in the Reservoir European Commission-funded
project, became the first open-source software for deploying private and hybrid clouds, and for the
federation of clouds. In the same year, efforts were focused on providing quality of service guarantees
(as required by real-time interactive applications) to cloud-based infrastructures, in the framework of
the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By
mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among
consumers of IT services, those who use IT services and those who sell them" and observed that
"organizations are switching from company-owned hardware and software assets to per-use service-
based models" so that the "projected shift to computing ... will result in dramatic growth in IT
products in some areas and significant reductions in other areas."

Similar Systems and Concepts


Cloud Computing is the result of evolution and adoption of existing technologies and paradigms. The
goal of Cloud Computing is to allow users to take benefit from all of these technologies, without the
need for deep knowledge about or expertise with each one of them. The Cloud aims to cut costs and help users focus on their core business instead of being impeded by IT obstacles.

The main enabling technologies for Cloud Computing are virtualization and autonomic computing.
Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it
available as a soft component that is easy to use and manage. By doing so, virtualization provides the
agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On
the other hand, autonomic computing automates the process through which the user can provision
resources on-demand. By minimizing user involvement, automation speeds up the process and
reduces the possibility of human errors.

Users face difficult business problems every day. Cloud Computing adopts concepts from Service-
oriented Architecture (SOA) that can help the user break these problems into services that can be
integrated to provide a solution. Cloud Computing provides all of its resources as services, and makes
use of the well-established standards and best practices gained in the domain of SOA to allow global
and easy access to cloud services in a standardized way.

Cloud Computing also leverages concepts from utility computing in order to provide metrics for the
services used. Such metrics are at the core of the public cloud pay-per-use models. In addition,
measured services are an essential part of the feedback loop in autonomic computing, allowing
services to scale on-demand and to perform automatic failure recovery.

Cloud Computing is a kind of Grid Computing; it has evolved from Grid by addressing the QoS and
reliability problems. Cloud Computing provides the tools and technologies to build data/compute
intensive parallel applications with much more affordable prices compared to traditional parallel
computing techniques.

Cloud computing shares characteristics with:

i. Autonomic computing
They are computer systems capable of self-management.


ii. Client–server model


Client–server computing refers broadly to any distributed application that distinguishes between
service providers (servers) and service requesters (clients).

iii. Grid computing


It is a form of distributed and parallel computing, whereby a 'super and virtual computer' is composed
of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.

iv. Mainframe computer


Powerful computers used mainly by large organizations for critical applications, typically bulk data
processing such as census, industry and consumer statistics, police and secret intelligence services,
enterprise resource planning, and financial transaction processing.

v. Utility computing
The "packaging of computing resources, such as computation and storage, as a metered service
similar to a traditional public utility, such as electricity."

vi. Peer-to-peer
A distributed architecture without the need for central coordination, in which participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
vii. Cloud gaming
Also known as on-demand gaming, it is a way of delivering games to computers. Gaming data is stored on the provider's server, so that gaming is independent of the client computers used to play the game.
Characteristics
Cloud computing exhibits the following key characteristics:

i. Agility improves with users' ability to re-provision technological infrastructure resources.


ii. Application programming interface (API) accessibility to software that enables machines to
interact with cloud software in the same way that a traditional user interface (e.g., a computer
desktop) facilitates interaction between humans and computers. Cloud computing systems
typically use Representational State Transfer (REST)-based APIs.
iii. Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for in-house implementation. The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
iv. Device and location independence enable users to access systems using a web browser
regardless of their location or what device they are using (e.g., PC, mobile phone). As
infrastructure is off-site (typically provided by a third-party) and accessed via the Internet,
users can connect from anywhere.


v. Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.
vi. Multitenancy enables sharing of resources and costs across a large pool of users thus allowing
for:
• Centralization of infrastructure in locations with lower costs (such as real estate,
electricity, etc.)
• Peak-load capacity increases (users need not engineer for highest possible load-
levels)
• Utilisation and efficiency improvements for systems that are often only 10–20%
utilised.
vii. Reliability is improved if multiple redundant sites are used, which makes well-designed cloud
computing suitable for business continuity and disaster recovery.
viii. Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads.
ix. Performance is monitored, and consistent and loosely coupled architectures are constructed
using web services as the system interface.
x. Security could improve due to centralization of data, increased security-focused resources,
etc., but concerns can persist about loss of control over certain sensitive data, and the lack of
security for stored kernels. Security is often as good as or better than other traditional
systems, in part because providers are able to devote resources to solving security issues that
many customers cannot afford. However, the complexity of security is greatly increased when
data is distributed over a wider area or greater number of devices and in multi-tenant systems
that are being shared by unrelated users. In addition, user access to security audit logs may be
difficult or impossible. Private cloud installations are in part motivated by users' desire to
retain control over the infrastructure and avoid losing control of information security.
xi. Maintenance of cloud computing applications is easier, because they do not need to be
installed on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies "five
essential characteristics":

a) On-demand self-service
A consumer can unilaterally provision computing capabilities, such as server time and network
storage, as needed automatically without requiring human interaction with each service provider.

b) Broad network access


Capabilities are available over the network and accessed through standard mechanisms that promote
use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and
workstations).
c) Resource pooling
The provider's computing resources are pooled to serve multiple consumers using a multi-tenant
model, with different physical and virtual resources dynamically assigned and reassigned according to
consumer demand.

d) Rapid elasticity.


Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly
outward and inward commensurate with demand. To the consumer, the capabilities available for
provisioning often appear unlimited and can be appropriated in any quantity at any time.

e) Measured service.
Cloud systems automatically control and optimize resource use by leveraging a metering capability at
some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and
active user accounts). Resource usage can be monitored, controlled, and reported, providing
transparency for both the provider and consumer of the utilized service.
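A metering capability of this kind can be sketched as follows (a hedged illustration; the resource names and rates below are hypothetical, not drawn from any real provider's tariff):

```python
def metered_bill(usage, rates):
    """Charge each metered resource at its rate and total the bill."""
    return sum(quantity * rates[resource]
               for resource, quantity in usage.items())

# Hypothetical monthly usage and tariff (illustrative figures only).
usage = {"storage_gb": 50, "cpu_hours": 120, "gb_transferred": 30}
rates = {"storage_gb": 0.02, "cpu_hours": 0.05, "gb_transferred": 0.09}
bill = metered_bill(usage, rates)
```

The same per-resource measurements that drive the bill can also feed an autoscaler's feedback loop, which is why metering and elasticity are closely linked.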

On-demand self-service
On-demand self-service allows users to obtain, configure and deploy cloud services themselves using
cloud service catalogues, without requiring the assistance of IT. This feature is listed by the National
Institute of Standards and Technology (NIST) as a characteristic of cloud computing.

The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Vendors of such templates or blueprints include BMC Software, with Service Blueprints as part of its cloud management platform; Hewlett-Packard (HP), which names its templates HP Cloud Maps; RightScale; and Red Hat, which names its templates CloudForms.

The templates contain predefined configurations used by consumers to set up cloud services. The
templates or blueprints provide the technical information necessary to build ready-to-use clouds. Each
template includes specific configuration details for different cloud infrastructures, with information
about servers for specific tasks such as hosting applications, databases, websites and so on. The
templates also include predefined Web service, the operating system, the database, security
configurations and load balancing.

Cloud consumers use cloud templates to move applications between clouds through a self-service
portal. The predefined blueprints define all that an application requires to run in different
environments. For example, a template could define how the same application could be deployed in
cloud platforms based on Amazon Web Services, VMware or Red Hat. The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with the push of a button. Cloud templates can also be used by developers to create a catalog of cloud services.
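As a rough illustration of the idea, a template can be thought of as a provider-independent, machine-readable description of a service that a self-service portal instantiates on a chosen cloud (all keys, values and function names below are hypothetical, not any vendor's real blueprint format):

```python
# A toy cloud template: a reusable blueprint that a self-service portal
# could instantiate on different clouds. All names here are invented.
WEB_APP_TEMPLATE = {
    "name": "three-tier-web-app",
    "web_server": {"os": "linux", "instances": 2},
    "database": {"engine": "postgresql", "storage_gb": 100},
    "security": {"open_ports": [80, 443]},
}

def instantiate(template, provider):
    """Pair the provider-independent blueprint with a target cloud."""
    deployment = dict(template)        # shallow copy of the blueprint
    deployment["provider"] = provider
    return deployment

# The same blueprint deployed unchanged to two hypothetical providers.
on_provider_a = instantiate(WEB_APP_TEMPLATE, "provider-a")
on_provider_b = instantiate(WEB_APP_TEMPLATE, "provider-b")
```

Because the technical details live in the template, only the target provider changes between deployments.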

Service models

Cloud computing providers offer their services according to several fundamental models:
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) where
IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key
components in XaaS are described in a comprehensive taxonomy model published in 2009, such as
Strategy-as-a-Service, Collaboration-as-a-Service, Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by the ITU (International Telecommunication Union) as part of the basic cloud computing models, recognized as service categories of a telecommunication-centric cloud ecosystem.

Infrastructure as a service (IaaS)

In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often)
virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines
as guests. Pools of hypervisors within the cloud operational support-system can support large numbers
of virtual machines and the ability to scale services up and down according to customers' varying
requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image
library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area
networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-demand
from their large pools installed in data centers. For wide-area connectivity, customers can use either
the Internet or carrier clouds (dedicated virtual private networks).

To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.

Examples of IaaS providers include:

 Amazon EC2,
 Azure Services Platform,
 DynDNS,
 Google Compute Engine,
 HP Cloud,
 iland,
 Joyent,
 LeaseWeb,
 Linode,
 NaviSite,
 Oracle Infrastructure as a Service,
 Rackspace Cloud,
 ReadySpace Cloud Services,
 ReliaCloud,
 SAVVIS,
 SingleHop, and
 Terremark
Platform as a service (PaaS)

In the PaaS model, cloud providers deliver a computing platform typically including operating
system, programming language execution environment, database, and web server. Application
developers can develop and run their software solutions on a cloud platform without the cost and
complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.
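The automatic scaling behaviour described above can be sketched as a simple threshold rule (an illustrative sketch only; the thresholds and the function itself are assumptions, not any platform's actual policy):

```python
def scale_decision(instances, avg_utilisation,
                   scale_up_at=0.8, scale_down_at=0.3, minimum=1):
    """Return the new instance count for a toy PaaS autoscaler.

    avg_utilisation is the average load (0.0-1.0) across instances.
    """
    if avg_utilisation > scale_up_at:
        return instances + 1              # add capacity under heavy load
    if avg_utilisation < scale_down_at and instances > minimum:
        return instances - 1              # release idle capacity
    return instances                      # demand is matched: no change
```

Real autoscalers typically also smooth the utilisation signal over time to avoid oscillating between these two branches.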

Examples of PaaS include:

 AWS Elastic Beanstalk,


 Cloud Foundry,
 Heroku,
 Force.com,
 EngineYard,
 Mendix,
 OpenShift,
 Google App Engine,
 Windows Azure Cloud Services and
 OrangeScape.
Software as a service (SaaS)

In the SaaS model, cloud providers install and operate application software in the cloud and cloud
users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and
platform where the application runs. This eliminates the need to install and run the application on the
cloud user's own computers, which simplifies maintenance and support. Cloud applications are
different from other applications in their scalability—which can be achieved by cloning tasks onto
multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the
work over the set of virtual machines. This process is transparent to the cloud user, who sees only a
single access point. To accommodate a large number of cloud users, cloud applications can be
multitenant, that is, any machine serves more than one cloud-user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service.
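The load-balancing step can be illustrated with a minimal round-robin dispatcher (a sketch only; real balancers also weigh health checks, load and session affinity):

```python
import itertools

class RoundRobinBalancer:
    """Dispatch requests over a pool of virtual machines in rotation."""

    def __init__(self, vms):
        self._next_vm = itertools.cycle(vms)

    def route(self, request):
        """Return the VM chosen to handle this request."""
        return next(self._next_vm)

# Six requests spread evenly over three VMs behind one access point.
balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
assignments = [balancer.route(f"req-{i}") for i in range(6)]
```

From the cloud user's perspective only the single access point is visible; the rotation over VMs stays hidden behind it.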

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so price is
scalable and adjustable if users are added or removed at any point.

Examples of SaaS include: Google Apps, Microsoft Office 365, Onlive, GT Nexus, Marketo, Casengo
and TradeCard.

Network as a service (NaaS)

A category of cloud services where the capability provided to the cloud service user is to use
network/transport connectivity services and/or inter-cloud network connectivity services. NaaS
involves the optimization of resource allocations by considering network and computing resources as
a unified whole.

Traditional NaaS services include flexible and extended VPN, and bandwidth on demand. NaaS concept materialization also includes the provision of a virtual network service by the owners of the network infrastructure to a third party (VNP – VNO).

Cloud clients


Users access cloud computing using networked client devices, such as desktop computers, laptops,
tablets and smartphones. Some of these devices - cloud clients - rely on cloud computing for all or a
majority of their applications so as to be essentially useless without it. Examples are thin clients and
the browser-based Chromebook. Many cloud applications do not require specific software on the
client and instead use a web browser to interact with the cloud application. With Ajax and HTML5
these Web user interfaces can achieve a similar or even better look and feel as native applications.
Some cloud applications, however, support specific client software dedicated to these applications
(e.g., virtual desktop clients and most email clients). Some legacy applications (line of business
applications that until now have been prevalent in thin client Windows computing) are delivered via a
screen-sharing technology.

Deployment models
Deployment models include:

a) Public cloud
b) Community cloud
c) Hybrid cloud
d) Private cloud
a. Public cloud
Public cloud applications, storage, and other resources are made available to the general public by a
service provider. These services are free or offered on a pay-per-use model. Generally, public cloud
service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and
offer access only via Internet (direct connectivity is not offered).

b. Community cloud
Community cloud shares infrastructure between several organizations from a specific community with
common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-
party and hosted internally or externally. The costs are spread over fewer users than a public cloud
(but more than a private cloud), so only some of the cost savings potential of cloud computing are
realized.

c. Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain
unique entities but are bound together, offering the benefits of multiple deployment models. Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.

Cloud bursting is an application deployment model in which an application runs in a private cloud or
data center and "bursts" to a public cloud when the demand for computing capacity increases. A
primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for
extra compute resources when they are needed.

Cloud bursting enables data centers to create an in-house IT infrastructure that supports average
workloads, and use cloud resources from public or private clouds, during spikes in processing
demands.
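The bursting decision can be sketched as a simple placement rule (the capacity figures and the function itself are illustrative assumptions, not a real scheduler):

```python
def place_workload(demand, private_capacity):
    """Split demand between the private cloud and a public 'burst' cloud.

    Returns (private_units, public_units); the public share is non-zero
    only while demand exceeds the in-house capacity.
    """
    private = min(demand, private_capacity)
    public = max(0, demand - private_capacity)
    return private, public

# Normal operation stays in-house; a spike bursts to the public cloud,
# so the organization pays for extra resources only during the spike.
```

This is the economic point of bursting: the private cloud is sized for average load, and public capacity is rented only for the excess.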


By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault
tolerance combined with locally immediate usability without dependency on internet connectivity.
Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based
cloud infrastructure.

While purely cloud-hosted applications lack the flexibility, security and certainty of in-house applications, hybrid cloud provides the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.

d. Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed
internally or by a third-party and hosted internally or externally. Undertaking a private cloud project
requires a significant level and degree of engagement to virtualize the business environment, and
requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.

They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

Comparison for SaaS

                   Public cloud                        Private cloud
Initial cost       Typically zero                      Typically high
Running cost       Predictable                         Unpredictable
Customization      Impossible                          Possible
Privacy            No (host has access to the data)    Yes
Single sign-on     Impossible                          Possible
Scaling up         Easy while within defined limits    Laborious but no limits

Architecture
[Figure: cloud computing sample architecture]

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud
computing, typically involves multiple cloud components communicating with each other over a loose
coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of
tight or loose coupling as applied to mechanisms such as these and others.

The Intercloud


The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network
of networks" on which it is based.

Cloud engineering

Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialisation, standardisation, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.

Issues in Cloud computing


Cloud computing offers the enterprise enormous opportunities: 56% of European decision-makers estimate that the Cloud is a priority for 2013 and 2014. Even better, the Cloud budget should reach 30% of the overall IT budget. But several deterrents to the Cloud remain: reliability, availability
of services and data, security, complexity, costs, regulations and legal issues, performance, migration,
reversion, the lack of standards, limited customization, etc. The Cloud also offers several benefits,
however: infrastructure flexibility, faster deployment of applications and data, cost control, adaptation
of Cloud resources to real needs, improved productivity, etc. Today's Cloud market is dominated by
software and services in SaaS mode and IaaS (infrastructure), especially the private Cloud. PaaS and
the public Cloud are further back.

a. Privacy
Privacy advocates have criticized the cloud model for giving hosting companies greater ease to control, and thus to monitor at will, communication between the host company and the end user, and to access user data (with or without permission). Instances such as the secret NSA program, which worked with AT&T and Verizon and recorded over 10 million telephone calls between American citizens, cause uncertainty among privacy advocates about the greater powers such arrangements give telecommunication companies to monitor user activity. A cloud service provider (CSP) can complicate data privacy because of the extent of virtualization (virtual machines) and cloud storage used to implement cloud services. In CSP operations, customer or tenant data may not remain on the same system, in the same data center, or even within the same provider's cloud; this can lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU Safe Harbor) to "harmonise" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones". Cloud computing also poses privacy concerns because the service provider may access the data on the cloud at any point in time, and could accidentally or deliberately alter or even delete information.

b. Compliance
To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data
Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt
community or hybrid deployment modes that are typically more expensive and may offer restricted
benefits. This is how Google is able to "manage and meet additional government policy requirements
beyond FISMA"and Rackspace Cloud or QubeSpace are able to claim PCI compliance.

Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely. Providers typically make this information available on request, under a non-disclosure agreement.

Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the EU
regulations on export of personal data.

c. Legal

As with other changes in the landscape of computing, certain legal issues arise with cloud computing,
including trademark infringement, security concerns and sharing of proprietary data resources.

The Electronic Frontier Foundation has criticized the United States government for considering
during the Megaupload seizure process that people lose property rights by storing data on a cloud
computing service.

One important but not often mentioned problem with cloud computing is the problem of who is in
"possession" of the data. If a cloud company is the possessor of the data, the possessor has certain
legal rights. If the cloud company is the "custodian" of the data, then a different set of rights would
apply. The next problem in the legalities of cloud computing is the problem of legal ownership of the
data. Many Terms of Service agreements are silent on the question of ownership.

d. Vendor lock-in

Because cloud computing is still relatively new, standards are still being developed. Many cloud
platforms and services are proprietary, meaning that they are built on the specific standards, tools and
protocols developed by a particular vendor for its particular cloud offering. This can make migrating
off a proprietary cloud platform prohibitively complicated and expensive.

Three types of vendor lock-in can occur with cloud computing:

i. Platform lock-in: cloud services tend to be built on one of several possible virtualization
platforms, for example VMWare or Xen. Migrating from a cloud provider using one platform
to a cloud provider using a different platform could be very complicated.
ii. Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the
data once it lives on a cloud platform, are not yet developed, which could make it complicated
if cloud computing users ever decide to move data off of a cloud vendor's platform.
iii. Tools lock-in: if tools built to manage a cloud environment are not compatible with different
kinds of both virtual and physical infrastructure, those tools will only be able to manage data
or apps that live in the vendor's particular cloud environment.

Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor
lock-in, and aligns with enterprise data centers that are operating hybrid cloud models. The absence of
vendor lock-in lets cloud administrators select their preferred hypervisor for specific tasks, or
deploy virtualized infrastructures to other enterprises without needing to consider the flavor of
hypervisor in the other enterprise.

A heterogeneous cloud is considered one that includes on-premise private clouds, public clouds and
software-as-a-service clouds. Heterogeneous clouds can work with environments that are not

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 180

virtualized, such as traditional data centers. Heterogeneous clouds also allow for the use of piece
parts, such as hypervisors, servers, and storage, from multiple vendors.

Cloud piece parts, such as cloud storage systems, offer APIs but they are often incompatible with each
other. The result is complicated migration between backends, and makes it difficult to integrate data
spread across various locations. This has been described as a problem of vendor lock-in. The solution
to this is for clouds to adopt common standards.
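Until such common standards mature, one widely used mitigation is to keep vendor-specific calls behind a thin application-level interface, so that switching backends means rewriting one adapter rather than the whole application. The sketch below (Python; the class names, key names and in-memory backend are hypothetical, not any vendor's actual SDK) illustrates the idea:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Application-level storage interface; each vendor SDK gets an adapter."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...
    @abstractmethod
    def keys(self): ...

class InMemoryStore(BlobStore):
    # Stand-in for a real adapter (one might instead wrap S3, Swift, etc.)
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]
    def keys(self):
        return list(self._objects)

def migrate(source: BlobStore, target: BlobStore) -> int:
    # Because both backends implement the same interface, moving data off
    # one vendor's platform is a single loop rather than a rewrite.
    count = 0
    for k in source.keys():
        target.put(k, source.get(k))
        count += 1
    return count

old, new = InMemoryStore(), InMemoryStore()
old.put("invoices/2024.csv", b"id,total\n1,100")
migrate(old, new)
assert new.get("invoices/2024.csv") == b"id,total\n1,100"
```

This does not remove lock-in (the adapters still have to be written), but it confines it to one layer of the codebase.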

e. Open source
Open-source software has provided the foundation for many cloud computing implementations,
prominent examples being the Hadoop framework and VMware's Cloud Foundry. In November 2007,
the Free Software Foundation released the Affero General Public License, a version of GPLv3
intended to close a perceived legal loophole associated with free software designed to run over a
network.

f. Open standards
Most cloud providers expose APIs that are typically well-documented (often under a Creative
Commons license) but also unique to their implementation and thus not interoperable. Some vendors
have adopted others' APIs and there are a number of open standards under development, with a view
to delivering interoperability and portability. As of November 2012, the Open Standard with broadest
industry support is probably OpenStack, founded in 2010 by NASA and Rackspace, and now
governed by the OpenStack Foundation. OpenStack supporters include AMD, Intel, Canonical, SUSE
Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo and now VMware.

g. Security
As cloud computing is achieving increased popularity, concerns are being voiced about the security
issues introduced through adoption of this new model. The effectiveness and efficiency of traditional
protection mechanisms are being reconsidered as the characteristics of this innovative deployment
model can differ widely from those of traditional architectures. An alternative perspective on the topic
of cloud security is that this is but another, although quite broad, case of "applied security" and that
similar security principles that apply in shared multi-user mainframe security models apply with cloud
security.

The relative security of cloud computing services is a contentious issue that may be delaying its
adoption. Physical control of the Private Cloud equipment is more secure than having the equipment
off site and under someone else's control. Physical control and the ability to visually inspect data links
and access ports is required in order to ensure data links are not compromised. Issues barring the
adoption of cloud computing are due in large part to the private and public sectors' unease
surrounding the external management of security-based services. It is the very nature of cloud
computing-based services, private or public, that promote external management of provided services.
This delivers great incentive to cloud computing service providers to prioritize building and
maintaining strong management of secure services. Security issues have been categorised into
sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious
insiders, management console security, account control, and multi-tenancy issues. Solutions to various
cloud security issues vary, from cryptography, particularly public key infrastructure (PKI), to use of
multiple cloud providers, standardisation of APIs, and improving virtual machine support and legal
support.
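As a minimal illustration of the integrity side of those cryptographic mitigations, the sketch below (Python standard library only; the passphrase and payload are made up) derives a key from a passphrase and attaches an HMAC tag to data before it is handed to a cloud store, so tampering by, say, a malicious insider can be detected on retrieval. It does not cover confidentiality (encryption of the payload) or PKI, which need fuller machinery.

```python
import hashlib
import hmac
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 key derivation from the Python standard library
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def seal(key: bytes, payload: bytes) -> bytes:
    # Prepend a 32-byte HMAC tag so tampering is detectable on retrieval
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def unseal(key: bytes, blob: bytes) -> bytes:
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("cloud object failed integrity check")
    return payload

salt = os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)
blob = seal(key, b"customer ledger 2024")
assert unseal(key, blob) == b"customer ledger 2024"
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.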


Cloud computing offers many benefits, but is vulnerable to threats. As cloud computing use increases,
it is likely that more criminals will find new ways to exploit system vulnerabilities. Many underlying
challenges and risks in cloud computing increase the threat of data compromise. To mitigate the
threat, cloud computing stakeholders should invest heavily in risk assessment to ensure that the
system encrypts data, establishes a trusted foundation to secure the platform and
infrastructure, and builds higher assurance into auditing to strengthen compliance. Security concerns
must be addressed to maintain trust in cloud computing technology.

h. Sustainability
Although cloud computing is often assumed to be a form of green computing, no published study
substantiates this assumption. The environmental impact of cloud data centers depends largely on
where the servers are sited: in areas where the climate favors natural cooling and renewable
electricity is readily available, the environmental effects will be more moderate. (The same holds true
for "traditional" data centers.) Thus countries with favorable conditions, such as Finland, Sweden and
Switzerland, are trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from
trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from
energy-aware scheduling and server consolidation. However, in the case of distributed clouds over
data centers with different source of energies including renewable source of energies, a small
compromise on energy consumption reduction could result in high carbon footprint reduction.
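Server consolidation of the kind mentioned above is often approximated with simple bin-packing heuristics. The sketch below (Python; the VM loads are made up) uses first-fit decreasing to pack workloads onto as few unit-capacity servers as possible so the remainder can be powered down; real energy-aware schedulers also weigh migration cost and the energy mix at each site.

```python
def consolidate(vm_loads, server_capacity):
    servers = []     # remaining capacity of each powered-on server
    placement = {}   # vm name -> server index
    # First-fit decreasing: place the largest workloads first
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load
                placement[vm] = i
                break
        else:
            # No existing server fits; power on a new one
            servers.append(server_capacity - load)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

loads = {"web": 0.5, "db": 0.7, "batch": 0.2, "cache": 0.3}
placement, servers_used = consolidate(loads, server_capacity=1.0)
print(servers_used)   # 2 servers suffice instead of one per VM
```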

i. Abuse
As with privately purchased hardware, customers can purchase the services of cloud computing for
nefarious purposes. This includes password cracking and launching attacks using the purchased
services. In 2009, a banking trojan illegally used the popular Amazon service as a command and
control channel that issued software updates and malicious instructions to PCs that were infected by
the malware.

j. IT governance
The introduction of cloud computing requires an appropriate IT governance model to ensure a secured
computing environment and to comply with all relevant organizational information technology
policies. As such, organizations need a set of capabilities that are essential when effectively
implementing and managing cloud services, including demand management, relationship
management, data security management, application lifecycle management, risk and compliance
management. A danger lies with the explosion of companies joining the growth in cloud computing
by becoming providers. However, many of the infrastructural and logistical concerns regarding the
operation of cloud computing businesses are still unknown. This over-saturation may have
ramifications for the industry as a whole.

k. Consumer end storage


The increased use of cloud computing could lead to a reduction in demand for high storage capacity
consumer end devices, due to cheaper low storage devices that stream all content via the cloud
becoming more popular. While unregulated usage is beneficial for IT and tech moguls like Amazon,
the opaque nature of cloud usage costs makes it difficult for businesses to
evaluate and incorporate them into their business plans.

l. Ambiguity of terminology
Outside of the information technology and software industry, the term "cloud" can be found to
reference a wide range of services, some of which fall under the category of cloud computing, while


others do not. The cloud is often used to refer to a product or service that is discovered, accessed and
paid for over the Internet, but is not necessarily a computing resource. Examples of services that are
sometimes referred to as "the cloud" include, but are not limited to, crowdsourcing, cloud printing,
crowdfunding and cloud manufacturing.

REVISION EXERCISES
1. Discuss the principles of data communication and networks.
2. What are some of the characteristics of data transmission?
3. There is a global trend towards adopting digital communication as opposed to analogue
systems. Analogue data has therefore to be converted to digital data in a process known as
digitisation. Why is it advantageous to digitise data?
4. Briefly describe the main components of a protocol.
5. What is a network topology?
6. Discuss the four types of network topology
7. Why has the use of fiber optic systems become popular in the recent past?
8. List three examples of network protocols.
9. What are some of the benefits of networks in an organization?
10. Discuss the challenges and limitations of networks in an organization
11. Identify the main components of a Local Area Network (LAN)
12. Describe how fiber optic systems are used in communications systems.
13. Define the following terms:
(i) Attenuation
(ii) Delay distortion
(iii) Noise
14. What is cloud computing and what are some of the characteristics of cloud computing
15. There are three main types of network topologies, namely star, ring and bus. As a network
administrator, you have been asked to produce a briefing document that discusses each
topology in terms of cabling cost, fault tolerance, data redundancy and performance as the
number of nodes increases.

CHAPTER 6

E-COMMERCE
SYNOPSIS
Introduction……………………………………………………. 183
Impact of The Internet on Business……………………………. 183
Models of E-Commerce……………………………………….. 190
Business Opportunities in E-Commerce……………………… 198
Challenges of E-Commerce………………………………….. 200
Mobile Computing……………………………………………. 203
Internet Labs…………………………………………………… 204

INTRODUCTION
Electronic commerce, commonly known as ecommerce, is a type of industry where buying and selling
of product or service is conducted over electronic systems such as the Internet and other computer
networks. Electronic commerce draws on technologies such as mobile commerce, electronic funds
transfer, supply chain management, Internet marketing, online transaction processing, electronic data
interchange (EDI), inventory management systems, and automated data collection systems. Modern
electronic commerce typically uses the World Wide Web at least at one point in the transaction's life-cycle, although it may encompass a wider range of technologies such as e-mail, mobile devices, social
media, and telephones as well.

Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of
the exchange of data to facilitate the financing and payment aspects of business transactions.

E-commerce can be divided into:

• E-tailing or "virtual storefronts" on websites with online catalogs, sometimes gathered into a
"virtual mall"
• The gathering and use of demographic data through Web contacts and social media
• Electronic Data Interchange (EDI), the business-to-business exchange of data
• E-mail and fax and their use as media for reaching prospective and established customers (for
example, with newsletters)
• Business-to-business buying and selling
• The security of business transactions

IMPACT OF THE INTERNET ON BUSINESS


The Internet has a wide variety of uses. It provides an excellent means for disseminating information
and communicating with other people in all regions of the world. While the greatest use of the
Internet has been sharing information, other sources of use are rapidly developing. For instance, chat
rooms, a space where people can go to discuss an assortment of issues, and Internet Commerce, which
connects buyers and sellers online. The following are other examples of current Internet uses:

1. Technical Papers


Originally, the Internet was only used by the government and universities. Research scientists used
the Internet to communicate with other scientists at different labs and to access powerful computer
systems at distant computing facilities. Scientists also shared the results of their work in technical
papers stored locally on their computer system in ftp sites. Researchers from other facilities used the
Internet to access the ftp directory and obtain these technical papers. Examples of research sites are
NASA and NASA AMES.

2. Share Company Information

Commercial companies are now using the Web for many purposes. One of the first ways that
commercial companies used the Web was to share information with their employees. Sterling
Software's Web page informs employees about such things as training schedules and C++ Guidelines.
There is also some information which is company private and access is restricted to company
employees only. Another example is Sun Microsystems, whose site similarly contains general
information about the company.

3. Product Information

One of the ways businesses share information is to present their product information on a Web page.
Some examples are: Cray Research, Sun Microsystems, Hewlett-Packard, and GM's Pontiac Site. The
Web provides an easy and efficient way for companies to distribute product information to their
current and potential customers.

4. Advertising

Along these lines, companies are beginning to actually advertise online. Some examples of different
ways to advertise online are Netscape's Ad Page. Netscape has a list of advertising companies. They
also use a banner for advertisements on their Yahoo Web Page. Starware similarly uses banner
advertisement. These advertisements are created in the established advertising model where the
advertising is positioned between rather than within editorial items. Another type of advertising
focuses on entertaining the customers and keeping them at the companies' site for a longer time
period.

5. Business & Commerce on the Net

Commercial use restrictions of the Internet were lifted in 1991. This has caused an explosion of
commercial use. More information about business on the Internet can be found at the Commerce Net.
This site has information such as the projected growth of advertising on the Internet and online
services. Commercial Services on the Net has a list of various businesses on the Internet. There are
many unusual businesses listed, such that you begin to wonder whether they are legitimate.
This topic is discussed in more detail in the section on risks and consumer confidence. Business and
Commerce provides consumer product information. The Federal Trade Commission is also quite
concerned about legal business on the Internet.

WWW users are clearly upscale, professional, and well educated compared with the population as a
whole. For example, from CommerceNet's survey (CommerceNet is a not-for-profit 501(c)(6) mutual
benefit corporation which is conducting the first large-scale market trial of technologies and business
processes to support electronic commerce via the Internet) as of 10/30/95:


• 25% of WWW users earn household income of more than $80,000 whereas only 10% of the
total US and Canadian population has that level of income.
• 50% of WWW users consider themselves to be in professional or managerial occupations. In
contrast, 27% of the total US and Canadian population categorize themselves to have such
positions.
• 64% of WWW users have at least college degrees while the US and Canadian national level is
29%.

CommerceNet's study also found that there is a sizable base of Internet Users in the US and Canada.
With 24 million Internet users (16 years of age or older) and 18 million WWW users (16 years of age
or older), WWW users are a key target for business applications. Approximately 2.5 million people
have made purchases using the WWW. The Internet is, however, heavily skewed to males in terms of
both usage and users. Access through work is also an important factor for both the Internet and online
services such as America Online and CompuServe. For an example of the size of the market, the total
Internet usage exceeds online services and is approximately equivalent to playback of rented
videotapes.

6. Magazines

Magazines are starting to realize that they can attract customers online. Examples of magazines now
published online are Outside, Economist, and Business Week. These magazines are still published in
hard copy, but they are now also available online. Many of these publications are available free,
sometimes because of a time delay (i.e. the online issues are past issues) and sometimes to draw in
subscribers with a free initial trial period. Some of these publications may remain free online if
advertisers pay for them with advertisement banners.

7. Newspapers

Some newspapers are beginning to publish online. The San Jose Mercury News is a full newspaper
online, while the Seattle Times offers just classified ads and educational information. The Dow Jones
Wall Street Journal publishes its front page online with highlighted links from the front page to
complete stories. The Journal also provides links to briefing books, which provide financial
information on the company, stock performance, and recent articles and press releases. For an
example of a briefing book, see the Netscape Briefing Book. The Wall Street Journal provides all of
this free during the trial period, which should last until mid-1996.

8. Employment Ads

Companies are also beginning to list their employment ads online to attract talented people who they
might not have been able to reach by the more traditional method of advertising in local papers. Sun
Microsystems provides a list of job openings on the Internet. Interested parties can submit a resume or
call to schedule an interview, which saves time for everyone involved. Universities can also help their
students find jobs more easily by using job listings on the Internet. The University of Washington has
a job listing site. Local papers can also make it easier for job searchers by creating a database search
feature. The job searchers can select the type of jobs that they are interested in and the search will
return a list of all the matching job openings. San Jose Mercury News is a good example of this
approach.


9. Stock Quotes

There are several time-delayed (15-minute) ways to track stock performance, and they are all free.
The first to provide this service was PAWWS Financial Network, and now CNN also lets you track
stocks. These are commercial companies which provide stock quotes for free but charge for other
services. A non-commercial site, MIT's Stock & Mutual Fund Charts, updates information daily and
provides a history file for a select number of stocks and mutual funds. Information in these history
files can be graphically displayed so that it is easier to see a stock's performance over time.
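History files like these lend themselves to simple trend summaries. The sketch below (Python; the closing prices are made up) computes a simple moving average, the kind of smoothing used when graphing a stock's performance over time:

```python
def moving_average(prices, window):
    # Simple moving average: mean of each sliding window of closing prices
    if window <= 0 or window > len(prices):
        raise ValueError("window must be between 1 and len(prices)")
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

closes = [10.0, 11.0, 12.0, 11.0, 13.0]   # hypothetical daily closes
print(moving_average(closes, 3))          # [11.0, 11.333333333333334, 12.0]
```

A longer window smooths out more day-to-day noise at the cost of reacting more slowly to genuine trend changes.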

10. Country Investment Information

Thinking about investing in a particular country? Information on countries can be found online. For
example, check out the graphical information (GDP, inflation, direct foreign investment, etc.) on
Indonesia.

11. Order Food

You can order a pizza online. This Web site is actually a joke, but you can easily imagine people
working late at their offices and ordering out for food online.

12. Software Distribution

A very effective and efficient use of the Web is to order software online. This reduces the
packaging and shipping costs. Also documentation can now be provided online. A good example is
Netscape Navigator. Another example is Macromedia's Shockwave. What is Shockwave for Director?
The online description reads as follows:

"Shockwave for Director is the product name for the Macromedia Director-on-the-Internet project.
Shockwave for Director includes two distinct pieces of functionality:

(1) Shockwave Plug-In for Web browsers like Netscape Navigator 2.0 which allows movies to be
played seamlessly within the same window as the browser page.

(2) Afterburner is a post-processor for Director movie source files. Multimedia developers use it to
prepare content for Internet distribution. Afterburner compresses movies and makes them ready for
uploading to an HTTP server, from which they'll be accessed by Internet users."

So by reading about the product online, you can decide if it sounds interesting. You can then
immediately get the software by downloading it from Macromedia's computer to yours. Next, you
install it on your system and you're all set. You didn't even have to leave your terminal, and there was
no shipping cost to you or the company.
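When software is distributed this way, vendors commonly publish a checksum alongside the download so users can confirm the file arrived intact. A minimal verification step (Python standard library; the installer filename is hypothetical) might look like:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    # Hash the file in chunks so large installers need not fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_sha256: str) -> bool:
    # Compare against the checksum the vendor publishes with the download
    return sha256_of(path) == published_sha256.lower()

# Hypothetical usage after downloading an installer:
# ok = verify_download("navigator-installer.exe", "<published checksum>")
```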

13. Traffic Information

Ever wonder what the rush-hour traffic is like before you head home and get stuck in it? Many
different cities are putting traffic information online. In Seattle, a graphical traffic report is available.

14. Tourism


Plan a trip to Australia or New Zealand with information gathered off the Internet. These and other
countries are on the Internet. So you can plan your vacation from your computer.

15. Movie Previews

Who needs Siskel and Ebert, when you can be your own movie critic? Buena Vista Movie Clips
provides movie clips from many of their new releases.

16. Chat Rooms on AOL

Chat rooms are a more interactive technology. America Online provides areas where people can "log
on" and converse with others with similar interests in real time. This is the first popular use of
interactivity by the general public. The other uses up until recently have been more static, one-way
distribution of information. Interactivity is the future of the Internet.

Forecast of How the Internet & WWW Might Be Used in the Future
There are many ways that the Internet could be used in the next 3 to 5 years. The main aspect that
they all have in common is the increased use of interactivity on the Internet. This means that the
Internet will shift from being a one-way distribution of information to a two-way information stream.
Scientists will continue to lead the way in this area by watching the results from scientific
experiments and exchanging ideas through live audio and video feeds. Due to budget cuts, this
collaboration can be expected to increase even further as scientists stretch what budget they do have.

1. Interactive Computer Games

One of the first areas where interactivity will increase on the Internet is computer games. People will
no longer have to play alone or crowd around one machine. Instead they will join a
computer network game and compete against players located at distant sites. An example of this is
Starwave's Fantasy Sports Game. This game is still a more traditional approach of updating statistics
on the computer and players looking at their status. A more active game is Marathon Man, which
portrays players on the screen reacting to various situations. In the future, many of these games will
also include virtual reality.

2. Real Estate

Buying a home online will become possible. While very few people would want to buy a home
without seeing it in person, having house listings online will help reduce the time it takes to purchase
a home. People can narrow down which houses they are actually interested in viewing by seeing
their description and picture online. An example is a list of house descriptions by region of the
country. This will be improved when database search capabilities are added. People can select the
features that they are interested in and then search the database. In response, they will receive a list of
houses that meet their criteria. Also, having several different images of the house, as well as a short
video clip of a walk-through, will help buyers make their selection more quickly. This area is
growing quickly.

3. Process Mortgages online


After a house is chosen, potential buyers can apply for a mortgage online. No longer will buyers be
restricted to local lending institutions, since many lenders will be able to compete online for business.
Visit an example of an online mortgage computation. In the future, each lender will have a Web page
which will process the mortgage application. One of the main reasons this has not been implemented
is security, which is discussed further under the strategic risks and security section.
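The computation such a mortgage page performs is the standard fixed-rate amortization formula. A sketch (Python; the loan figures are purely illustrative):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization formula:
    M = P * r * (1 + r)^n / ((1 + r)^n - 1),
    where r is the monthly interest rate and n the number of payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n      # interest-free edge case
    factor = (1 + r) ** n
    return principal * r * factor / (factor - 1)

# e.g. a $150,000 loan at 8% over 30 years:
print(round(monthly_payment(150_000, 0.08, 30), 2))   # 1100.65
```

Putting the formula behind a Web form lets each lender quote payments interactively; the security work is in protecting the applicant's financial details, not the arithmetic.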

4. Buying stocks

It will soon be possible to purchase stocks over the Internet without the assistance of a broker. Charles
Schwab has a prototype that is currently being tested in Florida. Once the security issues are ironed
out, this application will also be active.

5. Ordering products.

Ordering products online is an important application. As mentioned above, the Pizza Page showed
how easy it could be done. Other companies are setting up Web pages to actually do this. An example
is TSI Soccer. Customers can actually order online if they choose to do so. They can even send their
credit card number over the network. Since this is non-secure, most people probably still call the
company to order any item.

6. Live Video

Viewing live video clips will become more common in the future. CNN has files of video clips of
news stories at video vault which can be downloaded and viewed on a home computer. Seeing actual
live video feed is dependent on network speed, and most home users do not have fast enough
connections to make this a practical application yet. Once the speed of network connection increases,
more people will be interested in live video clips.

7. "Chat" Internet Telephone

While AOL users are currently accessing "Chat Rooms" to communicate with other people on the
Internet, they are restricted to text-based communication or possibly an icon as their identity online.
CU-SeeMe, from Cornell University, provides a means for people to actually see other people online.
However, network speed is once again a limiting factor. If a user is not directly connected to the
Internet (most connections are via modem), then the image is extremely slow. This application will
become more popular with increased network connections.

8. Video Conferencing

On the other hand, businesses will begin using video to communicate with others. There should also
be some applications that businesses can choose to help set up video conferencing. IBM bought
Lotus Notes for this reason last summer. IBM needs to make it a more flexible solution by
integrating Lotus Notes with the Internet, and it is currently in the process of doing so. Netscape
also offers a solution based on Collabra, a software company it purchased. These possible
solutions should encourage businesses to use video conferencing and online training. Additional
information on Video Conferencing is also available.

Strategic Risks Associated with Business Uses of the Internet


Some of the risks associated with conducting business through the internet include:

1. Targeting right market segments.

It is important for advertisers to spend their advertisement money wisely. They can achieve this by
using appropriate methods of advertising and targeting the right market segments. Two different types
of advertising are entertainment ads and traditional advertising. Entertainment ads focus on
entertaining a customer whereas traditional advertising is more direct and usually positioned between
rather than within editorial items. When the entertainment ads work well, they can be quite successful
in drawing customers to their site; however, it is very easy for this type of ad to flop resulting in no
one returning to visit the advertisement site after they see it once. Traditional advertising has better
readership. It can also be used well in targeting the right market segments. For instance, the ESPN
Sports page would be a good site to place ads by Gatorade and Nike. Sports minded people that might
be interested in these products would be likely to access these pages. A good reference for researching
this topic further is at Advertising Age.

2. Security

One of the main factors holding back businesses' progress on the Internet, is the issue of security.
Customers do not feel confident sending their credit card numbers over the Internet. Computer
hackers can grab this information off the Internet if it is not encrypted. Netscape and several other
companies are working on encryption methods. Strong encryption algorithms and public education in
the use of the Internet should increase the number of online transactions. After all, getting your credit
card number stolen in everyday transactions is easier.
information and enforcing copyright issues still need to be resolved before the business community
really takes advantage of Internet transactions. There are, however, currently some methods within
Netscape for placing the information online yet restricting it to only certain people such as company
employees.
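The encryption work mentioned above ultimately standardized on SSL/TLS. As an illustration of what a verified, encrypted channel involves, the sketch below (Python standard library; no particular merchant's server is implied) opens a TLS connection that both encrypts traffic in transit and checks the server's certificate against the system's trusted authorities:

```python
import socket
import ssl

def open_secure_channel(host: str, port: int = 443) -> dict:
    """Connect over TLS with certificate verification and return the
    server's certificate. Anything later sent on `tls` (for example a
    credit card number in a form submission) is encrypted in transit,
    and the handshake proves the server's identity, addressing the
    impersonation worry raised above."""
    context = ssl.create_default_context()  # verify against system CA store
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.getpeercert()
```

The key point for consumer confidence is that `create_default_context` enables both certificate checking and hostname checking by default, so a hacker cannot silently substitute their own server.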

3. Consumer confidence

Consumer confidence is essential for conducting business online. Although related to security,
consumer confidence also deals with feeling confident about doing business online. For instance, can
consumers believe that a company is legitimate if it is on the Internet, or could it be some kind of
boiler room operation? Also, companies must be able to substantiate their advertising claims if they
are published online.

4. Speed of network access

The speed of network access is a risk for businesses. If businesses spend a lot of money for fast
network connections and design their sites with this in mind, yet customers have lower-speed
connections, fewer consumers may access their site. Fewer visitors most likely means lower
profits, in addition to the extra cost of the faster network
connection. On the other hand, if the company designed for slower access yet customers have faster
access, they could still lose out in profits. Currently, some of the options that home users have to
choose from are traditional modems, ISDN, and Cable Modems. Traditional modems are cheaper but
the current speed is a maximum of 28.8 Kbps. ISDN is faster at 56 Kbps, but more expensive. Cable


modems are faster yet with a speed of 4 Mbps. However, two-way interaction with a cable modem
needs some more testing to be sure that it works as well as ISDN.
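These line speeds translate directly into user-visible wait times, which is why the design decision matters. A back-of-envelope comparison (Python; the 1 MB page size is an arbitrary example):

```python
def download_seconds(size_megabytes: float, speed_kbps: float) -> float:
    # speed_kbps is kilobits per second; 1 megabyte = 8,000 kilobits
    return size_megabytes * 8_000 / speed_kbps

# A 1 MB page full of images, at the speeds quoted above:
for name, kbps in [("28.8 modem", 28.8), ("ISDN", 56), ("cable modem", 4_000)]:
    print(f"{name}: {download_seconds(1, kbps):.0f} s")
```

The same page that loads in about two seconds over a cable modem keeps a 28.8 Kbps modem user waiting several minutes, so a site designed around rich graphics effectively excludes the slower audience.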

5. Picking Wrong Industry Standards

Along these lines of picking industry standards, companies must also be sure that the Web Browser
that they develop for is the standard. Otherwise, some of the features that they are using to highlight
their site may not work. Currently the de facto standard is Netscape. There also needs to be a standard
language that adds high quality features such as animation, so that software applications written for
the Internet will run on all the different types of architectures customers may have. Major computer
industry players have backed Java, from Sun Microsystems. So while some areas are becoming
standardized, companies must be alert to industry changes to avoid becoming obsolete in hardware,
software, and data communications.

6. Internet Community & Philosophy

The Internet was originally developed with a philosophy of sharing information and assisting others
in their research. The original intent emphasized concern for others, technological advances, and
not-for-profit organizations.

With the lifting of commercial restrictions in 1991, businesses are now joining the Internet
community. As with any small town that has a sudden increase in population, fast growth can cause
problems. Old residents may develop animosity if they feel that the new residents are taking over their
community and causing congestion and prices to increase. Businesses need to be conscious of this
phenomenon.

While businesses can expect help from Internet users, businesses will lose this help if they only use it
to make a quick profit. As in a large city, people will start to feel less like helping others in need.
Businesses will be more successful on the Internet if they can emphasize how they can help add value
to the Internet rather than focusing on how to make a quick profit. For example, businesses can take
advantage of the opportunity to provide additional Internet services (e.g., services discussed in the
sections on current uses of the Internet and future uses) now that funding from the government is
being reduced.

An example of a city that has grown rapidly yet is still considered very livable is Seattle. One of the
reasons attributed to Seattle's successful growth is that, despite being a large city, it contains
numerous small communities. These small communities retain benefits such as concern for others
within the framework of services that a large city can provide. If businesses along with the Internet
community follow this model, the Internet will have a chance to keep its successful small-town
atmosphere while adding increased services for more people.

MODELS OF E-COMMERCE
E-commerce is one of the popular ways of spreading business on a large scale. Online media is
used as a platform to carry out business transactions, and an e-commerce site requires proper
e-commerce web development. E-commerce involves the buying and selling of products or services
using an electronic payment processor. It can be either a business-to-business (B2B) or business-to-consumer (B2C)
transaction. Business activities take place on the Internet, or more specifically, the Web. It is the
newest form of business transaction and has grown exponentially since the start of the 21st century.

Features
E-commerce has made it possible for customers to contact a business at any time of the day, at the
customer's convenience. It makes use of the Internet's communication capabilities through product
displays, sales presentations, order processing and delivery. Using a website as the storefront, a
business carries out the same interactions and transactions as occur within a physical storefront, minus
the face-to-face interaction. Products are selected and placed in shopping carts, then customers
purchase their selected items through an order form or payment page. These pages are typically set up
through a merchant account provider and provide security encryption that protects the customer's
payment information.

Function
The actual e-commerce transaction is where the sale is made. This is where the customer provides her
financial information (credit card details, e-check data and, if applicable, shipping information)
in exchange for delivery of the product. In the case of electronic products, such as e-books or software
applications, product delivery is immediate: the seller sets up some form of automated delivery by
which customers are redirected to a download page or sent an email with the download link. With
physical products, retail sites typically go through a third-party distributor that handles the shipping
and delivery process.
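
The order-and-delivery flow just described can be sketched as a small routine that branches on product type. This is a hypothetical illustration (the `Order` and `deliver` names are invented for the example), not a real payment-processor API:

```python
# Hypothetical sketch of the post-payment delivery step described above.
# The Order/deliver names are illustrative, not a real e-commerce API.
from dataclasses import dataclass

@dataclass
class Order:
    product: str
    is_digital: bool
    download_url: str = ""
    shipping_address: str = ""

def deliver(order: Order) -> str:
    """Route a paid order to the appropriate fulfilment path."""
    if order.is_digital:
        # Electronic products: delivery is immediate via a download link.
        return f"Emailed download link {order.download_url} for {order.product}"
    # Physical products: hand off to a third-party distributor for shipping.
    return f"Forwarded {order.product} to a distributor for shipment to {order.shipping_address}"

print(deliver(Order("e-book", True, download_url="https://example.com/dl/123")))
print(deliver(Order("printed book", False, shipping_address="P.O. Box 1, Nairobi")))
```

The branch mirrors the two fulfilment paths in the text: immediate digital delivery versus hand-off to a shipping intermediary.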

A company can carry out e-commerce projects based on five different models:

1. Business-to-Business (B2B) is one of the major forms of e-commerce. Here both the seller and
the buyer participate as business entities, and business is carried out much the way a
manufacturer supplies goods to a wholesaler.
2. Business-to-Consumer (B2C). In this case transactions take place between consumers and
business houses, so individuals are also involved in the online business transactions.
3. The Consumer-to-Consumer (C2C) model applies when a business transaction is carried out
between two individuals. For this type of e-commerce, the individuals require a platform
or an intermediary for business transactions.
4. Peer-to-Peer (P2P) is another model of e-commerce, technologically more advanced than
the other e-commerce models. In this type of transaction, people share computer
resources directly; rather than a common server, a common platform is used for the
transactions.
5. With technological advancements, business transactions can also be done through mobile
devices. The latest model of e-commerce is M-Commerce: e-commerce sites can be
specially optimized and programmed so that they can be viewed and used on mobiles, and
two mobile users can contact each other to carry out business transactions.
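
Since the five models above simply classify who transacts with whom, they can be illustrated with a toy classifier. The participant labels used here are assumptions made for this sketch, not part of any standard:

```python
# Toy classification of e-commerce transactions by participant type.
# The labels "business", "consumer", "peer" and "mobile" are assumptions
# made for this sketch.

def classify(seller: str, buyer: str) -> str:
    """Map a seller/buyer pair to one of the five e-commerce models."""
    if seller == "mobile" and buyer == "mobile":
        return "M-Commerce"   # both parties transact via mobile devices
    if seller == "peer" and buyer == "peer":
        return "P2P"          # resources shared over a common platform
    if seller == "business":
        return "B2B" if buyer == "business" else "B2C"
    if seller == "consumer" and buyer == "consumer":
        return "C2C"          # two individuals via an intermediary platform
    return "Unclassified"

print(classify("business", "business"))  # B2B
print(classify("business", "consumer"))  # B2C
print(classify("consumer", "consumer"))  # C2C
```
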
Considerations
Not unlike the "brick-and-mortar" storefronts, e-commerce sites have to let customers know where
they are and what they have to offer. As new as the e-commerce model is, new methods for marketing
are popping up every day. Some of the most popular methods include email marketing, banner
exchanges, classified ads, pay-per-click advertising and article marketing.

Online businesses are increasingly becoming aware of the importance of building a relationship of
trust with potential and current customers. A common practice is to offer free products, services or
information as a way to get customers interested in a business's products. The built-in speed and
convenience of the Internet has become a new business world in which marketing, product display,
customer relations and product purchase can all happen at one virtual site.

The Marketing Function - Market Environment, Marketing Cycle and


Components of Marketing Information System
The role of information technology and systems is to improve the productivity of an organization.
Information systems are deployed across the functional departments of an organization.

The Marketing Function


In broad terms, marketing is defined as the process through which organizations deliver
products and services according to the needs of their customers. Organizations conduct market
research to identify the needs and requirements of customers.

The marketing process ensures the following:

 It ensures that customers are able to buy the products they want.
 It ensures that producers are able to sell products in a free market.
 It ensures that stable financing is available to conduct production.
 It ensures that perishable goods are stored in an appropriate manner for consumption.
 It ensures that products are transported to all customer markets.
 It ensures that quality standards are always maintained.

Market Environment
The market environment directly impacts the functioning of an organization. The three
categories of market environment are the internal environment, the micro-environment and the
macro-environment. Organizations develop strategies so as to be successful in all three.

The culture and environment of an organization play an important role in delivering value to customers.
Internal customers of an organization are those who contribute to delivering the final products.
The organization needs strategies to motivate these internal customers so that they in turn satisfy
external customers.

Internal customers, suppliers, etc. combine to make up the micro-environment of an organization.


To deliver a good final product, an organization needs to develop and maintain strong relationships
with vendors and external agencies. It is therefore important for the organization to maintain
continuous analysis of the ever-changing micro-environment.

The macro-environment of an organization consists of government policies, global economic
conditions and political stability. An organization does not have direct control or say over the
macro-environment.

The Marketing Cycle


The marketing cycle is closely associated with the product life cycle. The marketing life cycle is
divided into the development, introduction, growth, maturity and decline stages.

Companies deploy different marketing strategies during each stage of marketing life cycle. These
strategies are closely associated with revenue generation from product sales.

Marketing Information System


An information system which captures, stores, analyzes and distributes marketing information to
facilitate the decision-making process is called marketing information system.

Marketing information comes from internal and external records. The internal records include
day-to-day production data as well as product sales data. Internal data helps managers track the
marketing impact on different product mixes.

External data, such as the market performance of competitors, also plays an important role in the
decision-making process. The company's sales force is a huge data source; therefore, it is essential
for the system to capture its market intelligence input.

The data collected through external or internal market research agencies also plays an important role
in providing a holistic market view to managers.

An information system captures information from all the different sources. The information is
analyzed and then distributed to managers for decision-making process.

The advantages of a marketing information system are as follows:

 It helps organize data from different sources in one location.
 It helps in the development and tracking of marketing plans.
 It helps in manipulating data as per management requirements.
 It facilitates historical analysis of marketing data.
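
The capture-analyze-distribute flow described above can be sketched in a few lines. This is a minimal in-memory illustration (all class and field names are invented for the example); a real marketing information system would sit on top of databases and reporting tools:

```python
# Minimal sketch of a marketing information system: capture records from
# internal and external sources in one location, then summarize for managers.
from collections import defaultdict

class MarketingInfoSystem:
    def __init__(self):
        self.records = []  # data from different sources, held in one location

    def capture(self, source: str, product: str, units_sold: int) -> None:
        self.records.append({"source": source, "product": product, "units": units_sold})

    def sales_by_product(self) -> dict:
        """Analyze: aggregate captured sales figures across all sources."""
        totals = defaultdict(int)
        for record in self.records:
            totals[record["product"]] += record["units"]
        return dict(totals)

mis = MarketingInfoSystem()
mis.capture("internal: daily sales", "Product A", 120)      # internal record
mis.capture("external: research agency", "Product A", 80)   # external record
mis.capture("sales force intelligence", "Product B", 50)    # field input
print(mis.sales_by_product())  # {'Product A': 200, 'Product B': 50}
```

The single `records` list stands in for "organizing data from different sources at one location"; the aggregation step stands in for the analysis distributed to managers.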
Marketing Channel Systems
The last two decades have changed the way business is conducted. Some businesses still use
traditional channel systems, but the advent of the Internet has revolutionized distribution channels.
Companies are changing business models to leverage the Internet's advantages.

With the open proliferation of information, customer expectations are reaching new heights.
Companies need to figure out the right channel mix with multi-channel strategies. From a manager's
standpoint, a marketing channel is defined as any external agency that facilitates the distribution of
products and services.

The marketing channel is one of the key drivers for strategies around the marketing mix, i.e. product,
price, place and promotion.

Channel Flow and Structure


Channel flow describes how the different agencies involved in the distribution of goods and
products relate to one another.

Channel structure refers to the combination of different channel members used to achieve the
organization's marketing mix strategy.

Channel Participants
The marketing channel consists of various players like manufacturers, producers, wholesalers and
retailers. Manufacturers and producers develop their own marketing channels to reach the end user.
However, not all manufacturers have the expertise to manage channel participants; therefore, they
need wholesalers and retailers for the distribution of goods.

There are three types of wholesalers: merchant wholesalers, agents and producers' branch offices.
Merchant wholesalers usually have good capacity for storing and managing goods. In contrast, agents
work as middlemen between producers and end users. Retailers are responsible for selling goods and
products to end users.

Importance of Channel Participants


The major role of channel participants is to make the distribution and selling of goods and products
efficient. Intermediaries provide manufacturers with opportunities that would otherwise not be
financially feasible, as well as greater market exposure, market intelligence, economies of scale and
operational knowledge.

Managing Channel Conflict


Conflict among channel partners adversely affects the distribution of goods and products. It is
important for channel managers to understand the nature of a conflict and come up with solutions
that strengthen the distribution network.

However, not every issue in the channel can be considered a conflict. The channel manager needs to
assess the frequency of disagreement, the level of disagreement and the importance of the issue.

The main reasons for the emergence of conflict among channel partners are as follows. The first
is the differing business objectives of the channel partners (producers, wholesalers and retailers).
Another is the narrow vision of each channel partner: each views the channel not as a whole but
only at its own level.

Conflict between channel partners can be resolved by improving communication among themselves
and with the producer. Another way of resolving conflict is by directing all channels towards the
single objective of creating customer delight.

Multi-Channel Marketing System


Multi-channel marketing system has become a prominent way through which goods and products are
delivered to end users. The multi-channel system enables the companies to deliver goods and products
to end users as per their preference. The delivery of goods can be through store, website, mail order,
etc.

Franchise
Another innovation in the marketing channel system is the franchise. Franchising enables brand
recognition, standardization of the operating structure, access to an established learning curve and
lower financial investment.

Sales Support System


Sales support systems were developed to assist the sales force in improving productivity, through
continuous communication with the field offices about customer activities and requirements. The
sales force's focus was on cultivating long and fruitful customer relations.

Before the advent of information systems, customer-related information was recorded in individual
sales representatives' personal books rather than in a centralized data center. This meant that
customer information would be lost with the movement of a sales representative.

Therefore, to preserve and utilize customer relationships and improve the performance of the sales
force, the sales support system was developed.

A sales support system can be divided into:

 Sales Activity Management
 Sales and Territory Management
 Contact Management
 Lead Management
 Configuration Management
 Knowledge Management

Sales Activity Management

This module of the sales support system offers calendar-based activities to plan and coordinate
meetings with customer relationship accounts. It consolidates team activities for a given period of
time and provides in-depth analysis of the historical and current sales cycle.

Sales and Territory Management

The sales force rolls up into teams, and teams report to a sales manager. Therefore, a sales
manager has to monitor the activities of more than one sales team. This module helps the sales
manager generate reports that provide data points on the current sales activities performed by the
different sales teams. It also helps the sales force connect with product specialists based in various
sales office locations, and territory-wise pipeline management becomes easy for the sales manager.

Contact Management

One of the important needs of the sales force and sales teams is the management of contact points
across different organizations. The contact management module should be able to organize contacts
across current and potential client organizations.

Lead Management

The sales force works relentlessly to generate sales leads. The lead management module provides
management of leads that come through marketing campaigns and referral management. This module
also tracks the characteristics of each lead so as to highlight other possible leads.

Configuration Support

Every organization has distinctive and varied product requirements. It is important for the sales force
to have ready access to the different product configurations and their associated prices. This module
facilitates configuration support.

Knowledge Management

The modern information system can hold large volumes of data, which can be effectively converted
into information.

The advent of the Internet has simplified sales force access to centralized databases. It helps the
sales force stay in touch with one another as well as with the sales manager. The availability of the
Internet has reduced the cost of managing communication, and mobile devices have further
contributed to the proliferation of sales support systems.

Post Sales Service Support

Another aspect of sales is after-sales product support. Organizations have on-site or field service
staff, and the Internet has made field service management possible on a real-time basis.

Sales Support System & Customer Relationship Management

A sales support system enhances the productivity and efficiency of the sales force. The sales force
remains aware of developments around potential clients on a real-time basis, which increases the
probability of closing a sales deal. A productive sales force not only increases market share but also
improves the profitability of the organization.

Information System in Retail Sector


An important element of the supply chain is retail. Retail is where products and goods are sold to
end users: the retailer purchases goods and products from producers in large quantities and in turn
sells them to consumers in smaller quantities.

Information Flow
It is very important for the retailer to communicate with the supplier as well as the consumer. From
the producer, the retailer should know the following:

 Retailers should know when a new product is being launched or when the producer is
introducing a new variant of an existing product.
 Retailers should receive regular training from the manufacturer about brand-new products and
fresh technology.
 Retailers should have information well in advance about any impending pricing change.
 Retailers should also know the producer's sales forecast for a given line of product.

The consumer is as important to the retailer as the producer. From the consumer, the retailer should
know the following:

 What attracts the consumer to a particular retailer?
 What are the good and bad points about a particular retailer?
 How did they hear about a particular retailer?

Retail Management Information System


If the retailer is on top of the above information, then he will be able to manage his business
efficiently. In the current scenario, large retailers have shops across physical geographies, so it
becomes very important for them to manage all shops centrally. A retail management information
system does precisely this with the help of hardware, software, a database and various modules.

Objective

The objectives of a retail information system are as follows:

a) An information system should provide relevant information to the retail manager regularly.
b) An information system should anticipate the needs and requirements of the retail manager.
c) An information system should be flexible enough to incorporate the constantly evolving needs
of the consumer market.
d) An information system should be able to capture, store and organize all the relevant data on a
regular and continuous basis.
e) The retail information system should be aligned with the strategic and business plans of the
organization; it should therefore be able to provide information which supports and drives
these plans.

Characteristics of Retail Information System

The retail information system should have the following characteristics:

i. A retail information system should connect all the stores under the company's management.
ii. A retail information system should allow instant information exchange between stores and
management.
iii. A retail information system should handle the various aspects of product management.
iv. A retail information system should handle customer analysis.
v. A retail information system should allow the store manager flexible pricing over a financial
year.

Role of Retail Information System

A retail information system should support basic retail functions such as material procurement,
storage and dispatch. It should allow the manager to monitor sales of the product mix and daily
sales volume, and it should help in inventory management.

Variety of Retail Information System

Retail information systems are applicable to different types of industry within retail management.
An information system can be developed to manage a fashion store, a pharmacy, a grocery store or
a toy store.

BUSINESS OPPORTUNITIES IN E-COMMERCE


We all know that the Internet has become the lifeline of any business. The simplest definition of
business is: any activity or transaction which involves the exchange of goods and services with the
objective of earning an income by making a profit. If this very transaction is executed over the
Internet, it is called e-commerce.

Current trends show that the use of the Internet and smartphones, and people's confidence in using
their credit cards online, are growing exponentially. Hence, e-commerce is here to stay, and we
have to adapt ourselves to become smarter online buyers, sellers and web entrepreneurs, because
all the basic principles of real-world business apply to e-commerce as well.

E-commerce spending, the number of online buyers and the penetration of e-commerce will surely
grow, but growth will vary from country to country and affect online markets at different times.
Eventually, as the continental markets mature, the global market will shrink geographic boundaries
further, giving rise to further impetus for a favorable online scenario.

The development of the Internet in the 20th century led to the birth of the electronic marketplace, or
e-marketplace, which is now a kernel of electronic commerce (e-commerce). An e-marketplace
provides a virtual space where sellers and buyers trade with each other as in a traditional
marketplace.

Various kinds of economic transactions and buying and selling of goods and services, as well as
exchanges of information, take place in e-marketplaces. E-marketplaces have become an alternative
place for trading. Finally, an e-marketplace can serve as an information agent that provides buyers and
sellers with information on products and other participants in the market. These features have been
reshaping the economy by affecting the behavior of buyers and sellers.

a. E-business
E-business affects the whole business and the value chains in which it operates. It enables a much
more integrated level of collaboration between the different components of a value chain than ever
before. Adopting e-business also allows companies to reduce costs and improve customer response
times. Organizations that transform their business practices stand to benefit immensely from the
innumerable new possibilities brought about by technology.

E-commerce can be defined as anything that involves an online transaction. This can range from
ordering online, through online delivery of paid content, to financial transactions such as the
movement of money between bank accounts. One area with positive indications for e-commerce is
financial services: online stock trading saw sustained growth throughout the period of broadband
diffusion. E-shopping is available to all those who use a computer.

b. E-commerce integration
The rationale for infusion of e-commerce education into all business courses is that technological
developments are significantly affecting all aspects of today's business. An e-commerce dimension
can be added to the business curriculum by integrating e-commerce topics into existing upper-level
business courses. Students would be introduced to e-commerce education and topics covered in a
variety of business courses in different disciplines e.g. accounting, economics, finance, marketing,
management, management information systems. To help assure that all related business courses in all
disciplines such as e.g., accounting, finance, economics, marketing, management, information
systems pay proper attention to the critical aspects of e-commerce, certain e-commerce topics should
be integrated into existing business courses.

c. Open and distance learning


Diana Oblinger (2001) reported that education and continuous learning have become so vital in all
societies that the demand for distance and open learning will increase. As the availability of
the Internet expands, as computing devices become more affordable, and as energy requirements and
form factors shrink, e-learning will become more popular. In addition to the importance of lifelong
learning, distance education and e-learning will grow in popularity because convenience and
flexibility are more important decision criteria than ever before. E-learning will become widely
accepted because exposure to the Internet and e-learning often begins in the primary grades, thus
making more students familiar and comfortable with online learning. In fact, for many countries,
distance education has been the most viable solution for providing education to hundreds of thousands
of students.

d. E-commerce and E-insurance


Prithviraj Dasgupta and Kasturi Sengupta (2002) reported that the recent growth of Internet
infrastructure and introduction of economic reforms in the insurance sector have opened up the
monopolistic Indian insurance market to competition from foreign alliances. Although the focus of e-
commerce has been mainly on business to consumer (B2C) applications, the emphasis is now shifting
towards business to business (B2B) applications. The insurance industry provides an appropriate
model that combines both B2C and B2B applications.

Traditional insurance requires a certificate for every policy issued by the insurance company.
However, paper certificates bring problems, including loss, duplication and forging of the
certificate. The conventional certificate can now be replaced with an electronic certificate that is
digitally signed by both the insured and the insurance company and verified by a certifying authority.

Online policy purchase is faster, more user-friendly and definitely more secure than the traditional
process, and is therefore more attractive to the insured. At the same time it incurs lower costs and
requires fewer resources than traditional insurance, and is therefore more profitable for the
insurance company.

E-insurance also makes the insurance procedure more secure since the policy details are stored
digitally and all transactions are made over secure channels. These channels provide additional market
penetration that is absent in traditional channels and help in earning more revenue than traditional
insurance processes.

e. Future media of e-commerce


Some 99% of e-commerce today is done using PCs, either desktops or laptops. For B2B e-commerce
this is unlikely to change. For B2C e-commerce, however, things will be more complex: there will
be a wider range of relevant media, including interactive digital TV and a range of mobile and
wireless services. There will also be huge differences in consumers' ownership of equipment and
access technology; some will have broadband access and others no digital communication at all.

f. Current and future B2C digital media


Digital media able to support consumer e-commerce can be grouped under five main headings:
within the home, PCs, iDTV and, within the next five years, a range of other online devices such as
games consoles and utility meters. In summary, the online PC is well established while the other
B2C digital media are still emerging.

CHALLENGES OF E-COMMERCE
Internet-based e-commerce has, besides great advantages, posed many threats because it is, as
popularly described, faceless and borderless.

The following are examples of ethical issues that have emerged as a result of electronic commerce;
all are both ethical issues and issues uniquely related to electronic commerce.

A. Ethical issues
The following ethical issues relate to e-commerce.

1) Privacy

Privacy has been, and continues to be, a significant concern for both current and prospective
electronic commerce customers. With regard to web interactions and e-commerce, the following
dimensions are most salient:

a. Privacy consists of not being interfered with, having the power to exclude; individual privacy
is a moral right.

b. Privacy is "a desirable condition with respect to possession of information by other persons
about him/herself on the observation/perceiving of him/herself by other persons"

2) Security concerns

In addition to privacy concerns, other ethical issues are involved in electronic commerce. The
Internet offers unprecedented ease of access to a vast array of goods and services. The rapidly
expanding arena of "click and mortar" and the largely unregulated cyberspace medium have,
however, prompted concerns about both privacy and data security.

B. Perceptions of risk in e-service encounters


Mauricio S. Featherman, Joseph S. Valacich and John D. Wells (2006) reported that as companies
race to digitize physical-based service processes, repackaging them as online e-services, it becomes
increasingly important to understand how consumers perceive the digitized e-service alternative.
E-service replacements may seem unfamiliar, artificial and non-authentic in comparison to
traditional service processing methods. Consumers may believe that new Internet-based processing
methods expose them to new potential risks: the dangers of online fraud, identity theft and phishing
swindles (schemes to steal confidential information using spoofed websites) have become
commonplace and are likely to cause alarm and fear among consumers.

C. E-commerce Integration
Besides its many advantages, e-commerce education has also posed a number of challenges to the
education system.

Zabihollah Rezaee, Kenneth R. Lambert and W. Ken Harmon (2006) reported that e-commerce
integration assures coverage of all critical aspects of e-commerce, but it also faces several obstacles.
First, adding e-commerce materials to existing business courses can overburden faculty and students
alike, who must cope with additional subject matter in courses already saturated with required
information. Second, many business faculty members may not wish to add e-commerce topics to
their courses, primarily because of their own lack of comfort with technology-related subjects.
Third and finally, this approach requires a great deal of coordination among faculty and disciplines
in business schools to ensure proper coverage of e-commerce education.

D. Legal system
Besides the many advantages offered by IT, a number of challenges have been posed to the legal
system. Information transferred by electronic means which culminates in a contract raises many
legal issues which cannot be answered within the existing provisions of the contract act. The IT act
does not form a complete code for electronic contracts.

Farooq Ahmed (2001) summarized some of the multifaceted issues raised as follows.

1. Formation of e-contracts

a) Contracts by e-data interchange

b) Cyber contracts

2. Validity of e-transactions.

3. Dichotomy of offer and invitation to treat.

4. Communication of offer and acceptance

5. Mistake in e-commerce

a) Mutual mistake

b) Unilateral mistake

6. Jurisdiction: cyberspace transactions know no national or international boundaries and are not
analogous to the three-dimensional world in which common law principles evolved.

7. Identity of parties

The issues of jurisdiction, applicable law and enforcement of judgments are not confined to
national boundaries. The problems raised are global in nature and need global resolution.

E. Human skills required for E-Commerce


It's not just about E-commerce; it's about redefining business models, reinventing business processes,
changing corporate cultures, and raising relationships with customers and suppliers to unprecedented
levels of intimacy.

Internet-enabled Electronic Commerce:

• Web site development


• Web Server technologies
• Security
• Integration with existing applications and processes
Developing Electronic Commerce solutions successfully across the Organization means building
reliable, scalable systems for

• security,
• E- commerce payments
• Supply- chain management
• Sales force, data warehousing, customer relations
• Integrating all of this with existing back-end operations.

For more than two decades, organizations have conducted business electronically by employing a
variety of electronic commerce solutions. In the traditional scenario, an organization enters the
electronic market by establishing trading partner agreements with retailers or wholesalers of their
choosing. These agreements may include any items that cannot be reconciled electronically, such as
terms of transfer, payment mechanisms, or implementation conventions. After establishing the proper
business relationships, an organization must choose the components of their electronic commerce
system. Although these systems differ substantially in terms of features and complexity, the core
components typically include:


a. Workflow application - a forms interface that aids the user in creating outgoing requests or
viewing incoming requests. Information that appears in these forms may also be stored in a
local database.
b. Electronic Data Interchange (EDI) translator - a mapping between the local format and a
globally understood format.
c. Communications - a mechanism for transmitting the data; typically asynchronous or
bisynchronous
d. Value-Added Network (VAN) - a store and forward mechanism for exchanging business
messages
Using an electronic commerce system, a retailer may maintain an electronic merchandise inventory
and update the inventory database when items are received from suppliers or sold to customers. When
the inventory of a particular item is low, the retailer may create a purchase order to replenish his
inventory. As the purchase order passes through the system, it will be translated into its EDI
equivalent, transmitted to a VAN, and forwarded to the supplier’s mailbox. The supplier will check
his mailbox, obtain the EDI purchase order, translate it into his own local form, process the request,
and ship the item.
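The replenishment cycle just described (a local purchase order translated into an EDI-style format, dropped into a VAN mailbox, and polled by the supplier) can be sketched in outline. The segment layout and the mailbox interface below are illustrative assumptions, not a real EDI standard such as ANSI X12 or EDIFACT.

```python
# Minimal sketch of the retailer-to-supplier purchase-order flow described
# above. The "EDI" segment layout and the VAN mailbox are simplified
# illustrations, not a real standard such as ANSI X12 or EDIFACT.

def translate_to_edi(purchase_order: dict) -> str:
    """Map a local purchase-order dict into a flat, globally readable format."""
    lines = [f"PO*{purchase_order['po_number']}*{purchase_order['buyer']}"]
    for item in purchase_order["items"]:
        lines.append(f"ITEM*{item['sku']}*{item['qty']}")
    return "~".join(lines)

class ValueAddedNetwork:
    """Store-and-forward mailboxes: messages wait until the recipient polls."""
    def __init__(self):
        self.mailboxes = {}
    def send(self, recipient: str, message: str) -> None:
        self.mailboxes.setdefault(recipient, []).append(message)
    def poll(self, recipient: str) -> list:
        return self.mailboxes.pop(recipient, [])

# Retailer notices low inventory and creates a replenishment purchase order.
van = ValueAddedNetwork()
po = {"po_number": "1001", "buyer": "RETAILER-A",
      "items": [{"sku": "WIDGET-9", "qty": 50}]}
van.send("SUPPLIER-B", translate_to_edi(po))

# Supplier checks its mailbox and translates back into its own local form.
for msg in van.poll("SUPPLIER-B"):
    segments = [s.split("*") for s in msg.split("~")]
    print(segments)  # [['PO', '1001', 'RETAILER-A'], ['ITEM', 'WIDGET-9', '50']]
```

In a production deployment the translation step would be performed by a commercial EDI translator and the mailbox by a subscription VAN service; the sketch only mirrors the division of responsibilities among the four core components listed above.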

These technologies have primarily been used to support business transactions between organizations
that have established relationships (i.e. retailer and the wholesaler). More recently, due largely to the
popularity of the Internet and the World Wide Web, vendors are bringing the product directly to the
consumer via electronic shopping malls. These electronic malls provide the consumer with powerful
browsing and searching capabilities, somewhat duplicating the traditional shopping experience. In this
emerging business-to-consumer model, where consumers and businesses meet electronically, business
relationships will have to be negotiated automatically.

MOBILE COMPUTING
Mobile computing is human–computer interaction by which a computer is expected to be transported
during normal usage. Mobile computing involves mobile communication, mobile hardware, and
mobile software. Communication issues include ad-hoc and infrastructure networks as well as
communication properties, protocols, data formats and concrete technologies. Hardware includes
mobile devices or device components. Mobile software deals with the characteristics and
requirements of mobile applications.

Devices
Many types of mobile computers have been introduced since the 1990s including the:

a. Personal digital assistant/enterprise digital assistant


b. Smartphone
c. Tablet computer
d. Ultra-Mobile PC
e. Wearable computer

Some of the limitations of mobile computing include:


1. Range & Bandwidth


Mobile Internet access is generally slower than direct cable connections, using technologies such as
GPRS and EDGE, and more recently HSDPA and HSUPA 3G and 4G networks. These networks are
usually available within range of commercial cell phone towers. Higher speed wireless LANs are
inexpensive but have very limited range.

2. Security standards
When working mobile, one is dependent on public networks, requiring careful use of a VPN. Security
is a major concern for mobile computing because a VPN can be attacked through the large number of
interconnected networks along the line.

3. Power consumption
When a power outlet or portable generator is not available, mobile computers must rely entirely on
battery power. Combined with the compact size of many mobile devices, this often means unusually
expensive batteries must be used to obtain the necessary battery life.

4. Transmission interferences
Weather, terrain, and the range from the nearest signal point can all interfere with signal reception.
Reception in tunnels, some buildings, and rural areas is often poor.

5. Potential health hazards


People who use mobile devices while driving are often distracted from driving and are thus assumed
more likely to be involved in traffic accidents.(While this may seem obvious, there is considerable
discussion about whether banning mobile device use while driving reduces accidents or not.) Cell
phones may interfere with sensitive medical devices. Questions concerning mobile phone radiation
and health have been raised.

6. Human interface with device


Screens and keyboards tend to be small, which may make them hard to use. Alternate input methods
such as speech or handwriting recognition require training.

INTERNET LAB
A Laboratory Information Management System (LIMS), sometimes referred to as a Laboratory
Information System (LIS) or Laboratory Management System (LMS), is a software-based laboratory
and information management system that offers a set of key features that support a modern
laboratory's operations. Those key features include — but are not limited to — workflow and data
tracking support, flexible architecture, and smart data exchange interfaces, which fully "support its
use in regulated environments." The features and uses of a LIMS have evolved over the years from
simple sample tracking to an enterprise resource planning tool that manages multiple aspects of
laboratory informatics.

Due to the rapid pace at which laboratories and their data management needs shift, the definition of
LIMS has become somewhat controversial. As the needs of the modern laboratory vary widely from
lab to lab, what is needed from a laboratory information management system also shifts. The end
result: the definition of a LIMS will shift based on who you ask and what their vision of the modern
lab is. Dr. Alan McLelland of the Institute of Biochemistry, Royal Infirmary, Glasgow highlighted this
problem in the late 1990s by explaining how a LIMS is perceived by an analyst, a laboratory manager,


an information systems manager, and an accountant, "all of them correct, but each of them limited by
the users' own perceptions."

Historically, the LIMS, LIS, and Process Development Execution System (PDES) have all performed
similar functions. The term "LIMS" has tended to be used to reference informatics systems
targeted for environmental, research, or commercial analysis such as pharmaceutical or petrochemical
work. "LIS" has tended to be used to reference laboratory informatics systems in the forensics and
clinical markets, which often required special case management tools. The term "PDES" has generally
applied to a wider scope, including, for example, virtual manufacturing techniques, while not
necessarily integrating with laboratory equipment.

In recent times LIMS functionality has spread even farther beyond its original purpose of sample
management. Assay data management, data mining, data analysis, and electronic laboratory notebook
(ELN) integration are all features that have been added to many LIMS, enabling the realization of
translational medicine completely within a single software solution. Additionally, the distinction
between a LIMS and a LIS has blurred, as many LIMS now also fully support comprehensive case-
centric clinical data.

Technology
The LIMS is an evolving concept, with new features and functionality being added often. As
laboratory demands change and technological progress continues, the functions of a LIMS will likely
also change. Despite these changes, a LIMS tends to have a base set of functionality that defines it.
That functionality can roughly be divided into five laboratory processing phases, with numerous
software functions falling under each:

i. the reception and log in of a sample and its associated customer data
ii. the assignment, scheduling, and tracking of the sample and the associated analytical workload
iii. the processing and quality control associated with the sample and the utilized equipment and
inventory
iv. the storage of data associated with the sample analysis
v. the inspection, approval, and compilation of the sample data for reporting and/or further
analysis
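The five phases above can be read as a simple sample lifecycle that a LIMS walks each sample through. A minimal sketch follows; the state names and the strictly linear progression are illustrative assumptions rather than features of any particular LIMS product.

```python
# Minimal sketch of the five LIMS processing phases as a sample lifecycle.
# The state names and the linear progression are illustrative assumptions,
# not taken from any particular LIMS product.

PHASES = ["received", "scheduled", "in_process", "data_stored", "reported"]

class Sample:
    def __init__(self, sample_id: str, customer: str):
        self.sample_id = sample_id
        self.customer = customer
        self.phase = PHASES[0]  # phase i: reception and log-in

    def advance(self) -> str:
        """Move the sample to the next phase; reporting is the final phase."""
        idx = PHASES.index(self.phase)
        if idx == len(PHASES) - 1:
            raise ValueError(f"{self.sample_id} is already reported")
        self.phase = PHASES[idx + 1]
        return self.phase

s = Sample("S-0001", "ACME Water Co.")
while s.phase != "reported":
    print(s.sample_id, "->", s.advance())
```

A real LIMS would of course allow branching (re-tests, rejected samples, partial approvals), but the linear skeleton captures the reception-to-reporting flow enumerated above.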
There are several pieces of core functionality associated with these laboratory processing phases that
tend to appear in most LIMS:

Sample Management
A lab worker matches blood samples to documents. With a LIMS, this sort of sample management is
made more efficient.

The core function of LIMS has traditionally been the management of samples. This typically is
initiated when a sample is received in the laboratory, at which point the sample will be registered in
the LIMS. Some LIMS will allow the customer to place an "order" for a sample directly to the LIMS
at which point the sample is generated in an "unreceived" state. The processing could then include a
step where the sample container is registered and sent to the customer for the sample to be taken and
then returned to the lab. The registration process may involve accessioning the sample and producing
barcodes to affix to the sample container. Various other parameters such as clinical or phenotypic
information corresponding with the sample are also often recorded. The LIMS then tracks chain of
custody as well as sample location. Location tracking usually involves assigning the sample to a
particular freezer location, often down to the granular level of shelf, rack, box, row, and column.
Other event tracking such as freeze and thaw cycles that a sample undergoes in the laboratory may be
required.

Modern LIMS have implemented extensive configurability, as each laboratory's needs for tracking
additional data points can vary widely. LIMS vendors cannot typically make assumptions about what
these data tracking needs are, and therefore vendors must create LIMS that are adaptable to individual
environments. LIMS users may also have regulatory concerns to comply with such as CLIA, HIPAA,
GLP, and FDA specifications, affecting certain aspects of sample management in a LIMS solution.
One key to compliance with many of these standards is audit logging of all changes to LIMS data, and
in some cases a full electronic signature system is required for rigorous tracking of field-level changes
to LIMS data.
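The audit logging requirement mentioned above (every change to LIMS data recorded for standards such as CLIA, HIPAA, GLP, and FDA rules) amounts to a field-level change trail. A minimal sketch, with an illustrative record structure of our own invention:

```python
# Sketch of field-level audit logging, the kind of change trail that
# regulated environments require. The record structure is an illustrative
# assumption, not a real LIMS API.

import datetime

class AuditedRecord:
    def __init__(self, record_id: str, **fields):
        self.record_id = record_id
        self._fields = dict(fields)
        self.audit_log = []

    def set_field(self, name: str, value, user: str) -> None:
        """Log every change with old value, new value, user, and timestamp."""
        old = self._fields.get(name)
        self._fields[name] = value
        self.audit_log.append({
            "field": name, "old": old, "new": value,
            "user": user, "at": datetime.datetime.utcnow().isoformat(),
        })

rec = AuditedRecord("S-0042", status="received")
rec.set_field("status", "in_process", user="tech.jane")
print(rec.audit_log[0]["old"], "->", rec.audit_log[0]["new"])
```

A full electronic signature system would additionally require the user to re-authenticate and state a reason for each change; the sketch records only the who, what, and when.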

Instrument and application integration


Modern LIMS offer an increasing amount of integration with laboratory instruments and applications.
A LIMS may create control files that are "fed" into the instrument and direct its operation on some
physical item such as a sample tube or sample plate. The LIMS may then import instrument results
files to extract data for quality control assessment of the operation on the sample. Access to the
instrument data can sometimes be regulated based on chain of custody assignments or other security
features if need be.

Modern LIMS products now also allow for the import and management of raw assay data results.
Modern targeted assays such as QPCR and deep sequencing can produce tens of thousands of data
points per sample. Furthermore, in the case of drug and diagnostic development as many as 12 or
more assays may be run for each sample. In order to track this data, a LIMS solution needs to be
adaptable to many different assay formats at both the data layer and import creation layer, while
maintaining a high level of overall performance. Some LIMS products address this by simply
attaching assay data as BLOBs to samples, but this limits the utility of that data in data mining and
downstream analysis.

Electronic data exchange


The exponentially growing volume of data created in laboratories, coupled with increased business
demands and focus on profitability, have pushed LIMS vendors to increase attention to how their
LIMS handles electronic data exchanges. Attention must be paid to how an instrument's input and
output data is managed, how remote sample collection data is imported and exported, and how mobile
technology integrates with the LIMS. The successful transfer of data files in Microsoft Excel and
other formats, as well as the import and export of data to Oracle, SQL, and Microsoft Access
databases is a pivotal aspect of the modern LIMS. In fact, the transition "from proprietary databases to
standardized database management systems such as Oracle ... and SQL" has arguably had one of the
biggest impacts on how data is managed and exchanged in laboratories.
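The two exchange routes described above (delimited files for Excel, and standardized relational databases) can be sketched side by side. Here sqlite3 stands in for the Oracle or SQL Server back ends mentioned in the text, and the table layout is an illustrative assumption.

```python
# Sketch of exporting LIMS sample results via the two routes described
# above: a delimited file (the Excel/CSV route) and a standardized
# relational database (sqlite3 standing in for Oracle / SQL Server).
# The table layout is an illustrative assumption.

import csv
import io
import sqlite3

results = [("S-0001", "pH", 7.2), ("S-0001", "lead_ppb", 3.1)]

# Route 1: export to a delimited file that spreadsheets can open.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["sample_id", "assay", "value"])
writer.writerows(results)

# Route 2: export to a standard SQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (sample_id TEXT, assay TEXT, value REAL)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", results)
count = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(count)  # 2
```

The point of the standardized-database route is that downstream tools (reporting, data mining, ELN integration) can query the results with plain SQL instead of parsing a proprietary file format.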


Client-side options
A LIMS has utilized many architectures and distribution models over the years. As technology has
changed, how a LIMS is installed, managed, and utilized has also changed with it.

The following represents architectures which have been utilized at one point or another:

Thick-client
A thick-client LIMS is a more traditional client/server architecture, with some of the system residing
on the computer or workstation of the user (the client) and the rest on the server. The LIMS software
is installed on the client computer, which does all of the data processing. Later it passes information to
the server, which has the primary purpose of data storage. Most changes, upgrades, and other
modifications will happen on the client side.

This was one of the first architectures implemented into a LIMS, having the advantage of providing
higher processing speeds (because processing is done on the client and not the server) and slightly
more security (as access to the server data is limited only to those with client software). Additionally,
thick-client systems have also provided more interactivity and customization, though often at a greater
learning curve. The disadvantages of client-side LIMS include the need for more robust client
computers and more time-consuming upgrades, as well as a lack of base functionality through a web
browser. The thick-client LIMS can become web-enabled through an add-on component.

Thin-client
A thin-client LIMS is a more modern architecture which offers full application functionality accessed
through a device's web browser. The actual LIMS software resides on a server (host) which feeds and
processes information without saving it to the user's hard disk. Any necessary changes, upgrades, and
other modifications are handled by the entity hosting the server-side LIMS software, meaning all end-
users see all changes made. To this end, a true thin-client LIMS will leave no "footprint" on the
client's computer, and only the integrity of the web browser need be maintained by the user. The
advantages of this system include significantly lower cost of ownership and fewer network and client-
side maintenance expenses. However, this architecture has the disadvantage of requiring real-time
server access, a need for increased network throughput, and slightly less functionality. A sort of
hybrid architecture that incorporates the features of thin-client browser usage with a thick client
installation exists in the form of a web-based LIMS.

Some LIMS vendors are beginning to rent hosted, thin-client solutions as "software as a service"
(SaaS). These solutions tend to be less configurable than on-premises solutions and are therefore
considered for less demanding implementations such as laboratories with few users and limited
sample processing volumes.

Web-enabled
A web-enabled LIMS architecture is essentially a thick-client architecture with an added web browser
component. In this setup, the client-side software has additional functionality that allows users to
interface with the software through their device's browser. This functionality is typically limited only
to certain functions of the web client. The primary advantage of a web-enabled LIMS is the end-user


can access data both on the client side and the server side of the configuration. As in a thick-client
architecture, updates in the software must be propagated to every client machine. However, the added
disadvantages of requiring always-on access to the host server and the need for cross-platform
functionality mean that additional overhead costs may arise.

Web-based
Arguably one of the most confusing architectures, web-based LIMS architecture is a hybrid of the
thick- and thin-client architectures. While much of the client-side work is done through a web
browser, the LIMS also requires the additional support of Microsoft's .NET Framework technology
installed on the client device. The end result is a process that is apparent to the end-user through the
Microsoft-compatible web browser, but perhaps not so apparent as it runs thick-client-like
processing in the background. In this case, web-based architecture has the advantage of providing
more functionality through a more friendly web interface. The disadvantages of this setup are more
sunk costs in system administration and support for Internet Explorer and .NET technologies, and
reduced functionality on mobile platforms.

Configurability
LIMS implementations are notorious for often being lengthy and costly. This is due in part to the
diversity of requirements within each lab, but also to the inflexible nature of LIMS products for
adapting to these widely varying requirements. Newer LIMS solutions are beginning to emerge that
take advantage of modern techniques in software design that are inherently more configurable and
adaptable — particularly at the data layer — than prior solutions. This means not only that
implementations are much faster, but also that the costs are lower and the risk of obsolescence is
minimized.

Distinction between a LIMS and a LIS


Up until recently, the LIMS and laboratory information system (LIS) have exhibited a few key
differences, making them noticeably separate entities:

i. A LIMS traditionally has been designed to process and report data related to batches of
samples from biology labs, water treatment facilities, drug trials, and other entities that handle
complex batches of data. A LIS has been designed primarily for processing and reporting data
related to individual patients in a clinical setting.
ii. A LIMS needs to satisfy good manufacturing practice (GMP) and meet the reporting and
audit needs of the U.S. Food and Drug Administration and research scientists in many
different industries. A LIS, however, must satisfy the reporting and auditing needs of hospital
accreditation agencies, HIPAA, and other clinical medical practitioners.
iii. A LIMS is most competitive in group-centric settings (dealing with "batches" and "samples")
that often deal with mostly anonymous research-specific laboratory data, whereas a LIS is
usually most competitive in patient-centric settings (dealing with "subjects" and "specimens")
and clinical labs.


REVISION EXERCISES
1. What are some of the impacts of the internet on business?
2. Discuss the future of the internet in business.
3. What are the risks associated with the use of the internet in business?
4. Discuss the models of e-commerce in business.
5. What is the importance of channel participants in marketing?
6. What is a sales support system?
7. What are the characteristics of a retail information system?
8. Discuss the challenges of e-commerce.
9. What is mobile computing and what are some of its limitations?
10. Discuss the business opportunities in e-commerce.


CHAPTER 7
INFORMATION SYSTEMS STRATEGY
SYNOPSIS
Introduction…………………………………………………………… 210
Overview of Business Strategy Hierarchy……………………………. 212
The Strategic Process and Information
Systems Planning……………………………………………………… 224
Development of an Information Systems
Strategy………………………………………………………………... 235
Aligning Information Systems to the
Organisation's Corporate Strategy…………………………………….. 238
Managing Information Systems Strategy……………………………… 241
Information Systems for Competitive Advantage……………………... 265

INTRODUCTION
Through in-depth analyses of the business environment and the strategy of the business as well as an
examination of the role that information and systems can and could fulfill in the business, a set of
known requirements and potential opportunities can be identified. These needs and options will result
from business pressures, the strategy of the business and the organization of the various activities,
resources and people in the organization. Information needs and relationships can then be converted
into systems requirements and an appropriate organization of data and information resources.

To enable these 'ideal' applications to be developed and managed successfully, resources and
technologies will have to be acquired and deployed effectively. In all cases, systems and information
will already exist, and, normally, IS resources and technology will already be deployed.

Any strategy, therefore, must not only identify what is eventually required but must also establish
accurately how much has already been achieved.

The IS/IT strategic plan must therefore define a migration path that overcomes existing weaknesses,
exploits strengths and enables the new requirements to be achieved in such a way that it can be
resourced and managed appropriately.

A strategy has been defined as 'an integrated set of actions aimed at increasing the long-term well-
being and strength of the enterprise.'


The IS/IT strategy must be integrated not only in terms of information, systems and technology via a
coherent set of actions but also in terms of a process of adaptation to meet the changing needs of the
business as they evolve. 'Long term' suggests uncertainty, both in terms of the business requirements
and the potential benefits that the various applications and technologies will offer. Change is the only
thing that is certain. These changing circumstances will mean that the organization will have to be
capable of effective responses to unexpected opportunities and problems.

Prior research on IS strategy has been heavily influenced by the treatment of strategy in the field of
strategic management.

Strategy in Management Studies


Strategy researchers have spent significant effort discussing the strategy construct from various
angles. Several streams of strategy research receive considerable attention, including research
dedicated to defining strategy, distinguishing the characteristics of strategic decisions, and understanding the
central issues of strategy at different levels. We describe each of these research streams briefly here.

The first of these streams focuses on the central question of what is strategy, or what constitutes a
strategy. Although, to date, there is no model that has received consensus, there are several strategy
models, including Porter’s five-forces and the value chain model, core competency theory, the
resource based view of the firm, and other tools that aid in the analysis, development, and execution
of strategy. While each of these tools reflects a useful perspective of strategy, they do not provide
direct help in providing a clear definition of strategy.

The second major stream emphasizes characteristics for distinguishing strategic decisions from non-
strategic decisions. Frequently cited characteristics of strategic decisions include their irreversible
nature, the expected impact on long-term firm performance, and the directional nature, that give
guidance to non-strategic decisions. Similar to the first stream of research, this line of strategy
research does not offer a tight definition of strategy per se.

The third stream has focused on the central questions that emerge from the existence of strategy at
different organizational levels. For example, at a corporate level, strategy that involves answering
what businesses the corporation should be in is viewed as a major area of interest.

In contrast, business unit strategy deals primarily with addressing how to gain competitive advantage
in a given business and hence is also referred to as competitive strategy. Finally, functional strategy is
primarily concerned with resource allocations to achieve the maximization of resource productivity.
While strategy may include various decisions at different organizational levels, strategy is
nevertheless recognized to be more than the sum of the strategic decisions it includes. In this sense,
Lorange and Vancil (1977) consider strategy as a “conceptual glue” that ensures coherence between
individual strategic decisions. However, whether this form of integration is achieved ex ante (i.e.,
through planning) or ex post (i.e., emergent) has remained a point of debate.

Definition of Information Systems Strategy


Whereas strategy in management studies has drawn a long tradition of scholarly debate, IS strategy
research, by way of comparison, has tended to eschew explicit discussion of what


IS strategy is and, instead, has focused more on how to conduct strategic planning, how to align IS
strategy with a given business strategy, or who should be involved in forming the strategy. On one
hand, it is quite clear that, applying Whittington’s (1993) framework, most IS strategies described in
the extant literature fall into the “classical” quadrant of strategy (i.e., IS strategic planning is a product
of calculated deliberation with profit maximization as the goal). On the other hand, there remains a
large degree of obscurity about IS strategy due to the absence of established typologies such as those
found within business strategy literature. Moreover, a variety of terms have been employed to
represent similar constructs such as IT strategy, IS strategy, IS/IT strategy or information strategy,
among others. This plethora of terms creates confusion among researchers trying to interpret existing
works. As stated earlier, information systems is a broad concept (covering the technology components
and human activities related to the management and employment process of technology within the
organization); therefore, we find it most meaningful to use the term IS strategy throughout this paper.
More specifically, following Mintzberg’s (1987) fifth definition of strategy as a perspective, we
define IS strategy as the organizational perspective on the investment in, deployment, use, and
management of information systems. We note that the term of IS strategy is chosen to embrace rather
than to exclude the meanings of the other terms. With this definition, we do not regard the notion of
IS strategy as an ex post only or “realized IS strategy” as defined in the IS strategic alignment
literature. Nor do we suggest that an IS strategy must be intentional as implied in the strategic
information systems planning literature. This is because organizations, without an (formal or
intentional) IS strategy, do use IS and hence make decisions regarding IS. For example, recent
research has examined the pattern of IS deployment as an indication of IS strategy. However, we
cannot infer an intentional IS strategy from the mere existence of IS within a company. Therefore, we
contend that examining IS strategy as a perspective may resolve this dilemma. Furthermore, our
definition of IS strategy suggests that while IS strategy is part of a corporate strategy, conceptually it
should not be examined as part of a business strategy. Rather, it is a separate perspective from the
business strategy that addresses the scope of the entire organization (i.e., IS investment, deployment,
and management) to improve firm performance. This view is consistent with Earl’s (1989) work,
which argues that IS strategy should both support and question business strategy. Therefore, this
definition also implies that IS strategy should be examined at the organizational level, rather than at a
functional level. Hence, while each individual business and IS executive can have his/her own view of
IS, organizational IS strategy should reflect the collective view shared across the upper echelon of the
organization. Meanwhile, this notion has implications for advancements in the stream of research that
seeks to “align” the two separate strategies—business and IS.

OVERVIEW OF BUSINESS STRATEGY HIERARCHY


Two of the classics in the field of strategic management, the first by Ansoff (1965) and the other by
Andrews (1971), both had corporate strategy in their titles. Strategy making, at the time, was
considered the sole preserve of the firm’s corporate officers; hence the term corporate strategy. Only
with the eventual democratization of strategy making did a hierarchy of strategies begin to emerge.

The origin of the hierarchical view of strategies dates back to the 1920s when some of the largest US
firms started pursuing a strategy of diversification. At that time, these firms were typically organized
functionally. But diversified growth using these organization structures soon led to severe
coordination and resource allocation problems. Top management, in firms such as Dupont and


General Motors, responded to this problem with the creation of the multidivisional organization
structure, or the M-Form.

Following Chandler’s (1962) pioneering work showing how a strategy of diversification led to the use
of a multidivisional structure, other researchers sought theoretical reasons for the emergence and
adoption of the M-form organization structure. Using transaction cost economics reasoning,
Williamson (1975) argued that the M-form was adopted because it did a better job than capital
markets in allocating scarce capital between competing investment proposals. He suggested that both
the monitoring and policing costs were also lower in the multidivisional structure when compared to
capital markets.

However, the multidivisional structure was itself becoming unwieldy. Leading firms like General
Electric (GE) invited McKinsey & Company, one of the founders of the now flourishing management
consulting industry, to examine its corporate structure. GE had at that time nearly 200 profit centers
and 145 departments. The McKinsey consultants advised GE’s top management to organize their
firm’s businesses along strategic lines, influenced more by external industry conditions than internal
organizational considerations. GE’s profit centers and departments were consolidated into a smaller
number of Strategic Business Units (SBU).

Each SBU became a stand-alone entity deserving of its own strategy and dedicated functional support.
While corporate strategy was concerned with domain selection (the portfolio of businesses that the
firm should have in order to deliver value to its shareholders); business unit strategy was concerned
with domain navigation (competitive positioning of each of the firm’s business within its industry
environment). Finally, functional strategies specified the contributions that were expected from each
function and their relative salience to the success of the firm’s business strategy.

Corporations also turned to consultants for answers regarding resource allocation. Starting with
BCG’s growth share matrix, numerous other consulting firms introduced portfolio planning matrix as
an answer to the resource allocation problem. The two axes of the matrix were typically the industry’s
attractiveness and the company’s position within the industry. Each of the corporation’s strategic
business units could be mapped onto this matrix. SBUs with strong market positions in growing
industries, the “star” businesses, were lavished with additional resources, while SBUs with weak
positions in stagnating or declining industries, the so-called “dog” businesses, were slated for
divestment. By the mid-1970s, portfolio planning had become very popular; indeed, by the early 1980s
over half of the Fortune 500 had introduced portfolio planning techniques.
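The mapping described above can be illustrated with a short sketch. The cut-off values and the sample SBUs below are illustrative assumptions, not figures from the text; BCG's classic matrix used roughly 10% market growth and a relative market share of 1.0 as the dividing lines:

```python
def classify_sbu(market_growth, relative_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    """Place a strategic business unit (SBU) on the growth-share matrix.

    market_growth  -- annual growth rate of the SBU's industry (0.12 = 12%)
    relative_share -- SBU's market share divided by its largest rival's share
    The cut-offs are illustrative defaults, not prescriptions.
    """
    if market_growth >= growth_cutoff:
        return "star" if relative_share >= share_cutoff else "question mark"
    return "cash cow" if relative_share >= share_cutoff else "dog"

# Hypothetical portfolio: (SBU name, industry growth, relative share)
portfolio = [
    ("Plastics", 0.15, 1.8),    # strong position, growing industry
    ("Appliances", 0.02, 0.4),  # weak position, declining industry
]
for name, growth, share in portfolio:
    print(f"{name}: {classify_sbu(growth, share)}")
```

Per the text, "star" SBUs would receive additional resources, while "dog" SBUs would be slated for divestment.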

Further, in order to bridge the multiple levels of decision making within the firm, top management
needed a process. Formal planning and control systems began filling this void. A study by Stanford
Research Institute showed that a majority of US companies used formal planning systems by 1963.
Vancil and Lorange (1977) and Lorange (1980) describe three distinct phases in a typical strategic
planning process: agenda setting, strategic programming and budgeting. Aspirations of top
management when cycled through these three phases and three layers of management (corporate,
divisional and functional) resulted in concrete budgets for business units and functions within the
firm. When the three phases were followed in a rigid sequential fashion, the intent was frozen when
strategic programs began to be developed. In turn, the programs were non-negotiable once budgets
were decided.

By the early 1980s, with the diffusion of M-form structure, the creation of SBUs, the adoption of
formal planning systems and portfolio planning techniques, the separation of business unit and
corporate strategies was complete in the US and Europe. Functional strategies had to be subservient to
the business strategies that they supported, and business strategies in turn had to be aligned with
corporate strategy.

Furthermore, this hierarchical view of strategy was also mapped on to levels of management within
the firm. The locus of decision making for each strategy was thus clearly specified. The corporate
office was the primary architect of strategy.

Divisional managers helped in a more restricted fashion by detailing their business strategy within
strict corporate guidelines. Functional managers supported their divisional heads with well aligned
functional strategies.

It was assumed then that this unidirectional causality and hierarchically determined locus of decision
making was the sine qua non for superior firm performance. No theoretical basis was provided for this
assertion. Nor were there systematic empirical studies conducted to verify this claim. The assumption
was that since the framework emerged from the practices of high performing companies like General
Motors, Dupont, ITT and GE, it had to have universal appeal. It appeared to be a useful framework in
practice and that seemed to have sufficed.

However, the hierarchical view of strategies has since unraveled because of both empirical and
theoretical developments on corporate, business and functional strategies. It has also lost its relevance
today mostly because strategic management has changed dramatically due to an increasingly turbulent
business context. Strategy making in a transnational corporation cannot afford to be hierarchical.

The Information Systems Strategy Triangle

[Figure: the Information Systems Strategy Triangle, a triangle linking Business Strategy, Organizational Strategy and Information Strategy.]

In the business world, managers must take part in decisions about information systems, even though
they need not understand every technical detail. If managers leave it up to other people to make their
IT decisions, it could cause problems for their company. A company must manage and align its
information systems (IS) infrastructure just as deliberately as it manages its employees. A framework
for understanding IS's impact on companies is the Information Systems Strategy Triangle, which
relates business, organizational and IS strategies. Companies try to balance and complement these
three strategies: a change in one strategy must be reflected in the other two, and all three must be
adjusted constantly to keep up with a changing world.

These strategies, in order to work, must be aligned. Alignment in this sense means that the company's
current business strategy is enabled, supported and unconstrained by technology. Two similar concepts
are synchronization and convergence. Synchronization means technology supports current business
strategies and helps create new ones for the future. Convergence means that business and IS strategies
are intertwined and that the leaders of each understand both domains. Alignment is the most important
of these concepts and is essential for achieving harmony among organizational, business and IS
strategies.

A strategy is a coordinated set of actions to fulfill goals, objectives and purposes. You must set
certain limits on what you want to achieve. To formulate a strategy you must have a mission, a clear
and compelling statement that unifies your effort and describes what your organization is about. A
mission statement describes what your company can do and why it exists. A business strategy is a
strategy stating where the business is going and how it expects to achieve its results. It also shows
how a company can communicate its goals. A business strategy is formulated in response to market
forces, customer demands and organizational capabilities.

There are two well-accepted business strategy models:

i. the generic strategies framework


ii. the hypercompetition model.

i. the generic strategies framework

Michael Porter created the generic strategies framework. This framework helps managers learn new
strategies to enhance their competitive advantage. All businesses must sell their products against
other competitors. There are three primary strategies in Porter’s framework. The first is cost
leadership, which results when a company's goal is to have the lowest costs without diminishing the
quality of its products. Only one cost leader can emerge, and if every competitor starts cutting costs
a price war can break out, which can eventually lead to loss of profit.

The second Porter strategy is differentiation, where the company’s products or services are unique
relative to others in the marketplace. For this to work, the unique product or service must be
important enough to the consumer to justify its price. The third Porter strategy is focus, in which a
company limits its scope to a smaller segment of the market. Focus has two variants: cost focus, where
the goal is to seek a cost advantage within that segment, and differentiation focus, where the company
distinguishes its products or services within that segment. The goal of a focus strategy is a local
competitive advantage rather than an advantage over the entire market.

There are variations on Porter’s differentiation strategy. The shareholder value model states that the
timing of the use of specialized knowledge can create a differentiation advantage as long as that
knowledge remains unique; customers buy products from the company to gain access to this unique
knowledge. This is a one-time event and the advantage is static. Another variant is the unlimited
resources model, in which a larger base of resources allows one company to outlast others while
pursuing a differentiation strategy; with deeper pockets, such a company can sustain losses more
easily than its rivals. The Porter models and their variants are useful for understanding how a
company seeks its profits and builds new advantages. They balance the competitive forces exerted by
buyers, suppliers, competitors in the market, and new products and services within the industry. The
Porter models, however, were created at a time when the rate of change was much slower than it is
today.

ii. the hypercompetition model.

The Hypercompetition Framework, created by Richard D’Aveni, offers, in contrast to Porter’s
framework, new tools for devising competitive strategies in fast-paced environments. The
hypercompetition model states that the speed and aggressiveness of moves and countermoves in any
given market create environments in which advantages are rapidly created and eroded. Four arenas for
creating competitive advantage under hypercompetition are cost/quality, timing/know-how, strongholds
and deep pockets. Three assumptions of hypercompetition are that every advantage is eventually eroded,
that sustaining an advantage can be a dangerous distraction, and that the goal of advantage should be
disruption, not sustainability; initiatives are achieved through a series of small steps.

The D’Aveni framework has seven approaches through which an organization can create its business
strategy:

1) Superior stakeholder satisfaction,


2) Strategic soothsaying,
3) Positioning for speed,
4) Positioning for surprise,
5) Shifting the rules of competition,
6) Signaling strategic intent,
7) Simultaneous and sequential strategic thrusts.

These seven approaches form a useful framework for identifying different aspects of business strategy
and can help make the company more competitive. Managers can identify new answers to their competition
and new opportunities to strengthen their current abilities. One application of hypercompetition is to
"destroy your business": deliberately take apart your current business model and create new ones that
will actually help the business grow.

When a manager hands IS decisions off to another player, the business strategy suffers. The business
strategy needs to drive IS strategy, not the other way around, and changes in either should be
reflected in both. To understand the business strategy you must know what the business goal is, what
the plan for achieving it is, and who the crucial competitors in the field are. The Porter and D’Aveni
frameworks help answer these questions.

Organizational strategy includes the organization's design and the choices it makes to define, set up,
coordinate and control its work processes. It is a plan that answers how the company will organize to
pursue its goals and implement its business strategy. One simple framework for understanding
organizational design is the business diamond, which includes the organizational plan and the
following four concepts: business processes, values and beliefs, tasks and structures, and
management/measurement
systems. The business diamond states that the execution of the organization strategy is composed of
the best combination of control, cultural and organizational variables. Organizational variables are
decision rights, business processes, formal reporting relationships and informal networks. Control
variables are the availability of data, nature and quality of planning and effectiveness of performance
measurement/evaluation systems. Cultural variables are the values of the company. These three
variables are managerial levers used by decision makers to enforce necessary changes in their
company. To understand organizational strategy you must answer these questions:

 What are the important structures and reporting relationships within the company?
 Who holds the decision rights to critical decisions?
 What are the characteristics, experiences and skill levels of people within the company?
 What are the key business processes?
 What control systems are in place?
 What is the culture of the company?

Answers to these questions inform any assessment of the company's use of IS.

IS strategy is the plan a company uses to provide its information services, enabling it to execute its
business strategy. Business strategy is a function of competition, positioning and capabilities. There
are four IS infrastructure components: hardware (physical components like computers), software (the
programs that run on the computers), network (how information is exchanged with others) and data (how
information is stored).

The Halo Effect is an error rooted in the basic human tendency to make specific inferences on the
basis of a general impression. Three misconceptions created by the halo effect are:

1) There exists a formula that companies can apply to succeed.
2) Firm performance is driven completely by internal factors.
3) A decision that turns out badly must have been poorly made.

Managers should avoid formulas and understand that success is relative, think of decisions in terms of
probabilities, and evaluate the decision-making process, not just the outcomes.

Strategy can be formulated on three different levels:

a) corporate level
b) business unit level
c) functional or departmental level.
While strategy may be about competing and surviving as a firm, one can argue that products, not
corporations, compete, and that products are developed by business units. The role of the corporation then
is to manage its business units and products so that each is competitive and so that each contributes to
corporate purposes.

a) Corporate Level Strategy


Corporate level strategy fundamentally is concerned with the selection of businesses in which the
company should compete and with the development and coordination of that portfolio of businesses.

Corporate level strategy is concerned with:

i. Reach

Defining the issues that are corporate responsibilities; these might include identifying the overall
goals of the corporation, the types of businesses in which the corporation should be involved, and the
way in which businesses will be integrated and managed.

ii. Competitive Contact


Defining where in the corporation competition is to be localized. Take the case of insurance: in the
mid-1990s, Aetna as a corporation was clearly identified with its commercial and property casualty
insurance products. The conglomerate Textron was not. For Textron, competition in the insurance
markets took place specifically at the business unit level, through its subsidiary, Paul Revere.

iii. Managing Activities and Business Interrelationships


Corporate strategy seeks to develop synergies by sharing and coordinating staff and other resources
across business units, investing financial resources across business units, and using business units to
complement other corporate business activities. Igor Ansoff introduced the concept of synergy to
corporate strategy.

iv. Management Practices


Corporations decide how business units are to be governed: through direct corporate intervention
(centralization) or through more or less autonomous government (decentralization) that relies on
persuasion and rewards.

v. Corporations are responsible for creating value through their businesses


They do so by managing their portfolio of businesses, ensuring that the businesses are successful over
the long-term, developing business units, and sometimes ensuring that each business is compatible
with others in the portfolio.

b) Business Unit Level Strategy


A strategic business unit may be a division, product line, or other profit center that can be planned
independently from the other business units of the firm.

At the business unit level, the strategic issues are less about the coordination of operating units and
more about developing and sustaining a competitive advantage for the goods and services that are
produced. At the business level, the strategy formulation phase deals with:

i. positioning the business against rivals


ii. anticipating changes in demand and technologies and adjusting the strategy to accommodate
them
iii. influencing the nature of competition through strategic actions such as vertical integration and
through political actions such as lobbying.
Michael Porter identified three generic strategies (cost leadership, differentiation, and focus) that can
be implemented at the business unit level to create a competitive advantage and defend against the
adverse effects of the five forces.

c) Functional Level Strategy

The functional level of the organization is the level of the operating divisions and departments. The
strategic issues at the functional level are related to business processes and the value chain. Functional
level strategies in marketing, finance, operations, human resources, and R&D involve the
development and coordination of resources through which business unit level strategies can be
executed efficiently and effectively.

Functional units of an organization are involved in higher level strategies by providing input into the
business unit level and corporate level strategy, such as providing information on resources and
capabilities on which the higher level strategies can be based. Once the higher-level strategy is
developed, the functional units translate it into discrete action-plans that each department or division
must accomplish for the strategy to succeed.

Information and Strategy - The Virtual Value Chain


In today’s digital age, information technology and information systems play an important role in the
success of organizations. Information technology has changed the way business is conducted. Companies
with superior product and service content become market leaders, and there is constant pressure on
companies to provide better, more competitive content.

Organizations invest in research and development for superior content production, or they acquire or
merge with other companies. The purpose of acquisition is either to expand the current product
offering or to add content so as to provide end-to-end solutions.

Organization strategy can be devised using Porter’s Five Forces model. The organization's strategy
should be to increase the customer base and provide customized solutions. Service also plays an
important role in organization strategy; it is the key factor in maintaining good customer
relationships. The organization needs to devise a strategy which is a convergence of technology, brand
marketing, product innovation and world-class service.

Virtual Value Chain

A physical value chain consists of procurement of raw materials, operations, delivery, sales and
marketing, and service. Information technology has changed the way we look at the value chain by
introducing the concept of a virtual value chain.

The components of a virtual value chain are as follows:

a) Gather
The information age has accelerated the digitization of information; information now proliferates more
than ever before. The internet provides data and information about markets, economies, government
policies, etc. Companies gather the information relevant to them as the first stage in the virtual
value chain.

b) Organizing
The information gathered in the first stage of the virtual value chain comes in the form of text, data
tables, video, etc. The challenge in the second stage is to organize this information so that it can
be retrieved easily for further analysis.
c) Selection

In the third stage of virtual value chain, organizations analyze captured information to add value to
customers. Organizations develop better ways of dealing with customers, product delivery, etc. using
information.

d) Synthesization
In the fourth stage of the virtual value chain, organizations synthesize the available data so that it
reaches the end user in the desired format.

e) Distribution
The last stage of the virtual value chain is delivery of the information to the end user. Where the
physical value chain delivers physical products to customers, the virtual value chain delivers a
digital product: for example, streaming a movie digitally instead of mailing a DVD. This is why
today's businesses are also described as information businesses.
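The five stages can be sketched as a simple information pipeline. Only the stage names come from the text; the function bodies and sample records below are illustrative assumptions:

```python
# A minimal sketch of the five virtual value chain stages applied to
# raw market information. The processing logic is purely illustrative.

def gather(sources):
    """Stage 1: collect raw information from many sources."""
    return [item for src in sources for item in src]

def organize(items):
    """Stage 2: structure the information for easy retrieval."""
    return sorted(items, key=lambda r: r["topic"])

def select(items, topic):
    """Stage 3: analyze/filter the information that adds value."""
    return [r for r in items if r["topic"] == topic]

def synthesize(items):
    """Stage 4: combine the data into the format the user wants."""
    return {"topic": items[0]["topic"], "facts": [r["fact"] for r in items]}

def distribute(report):
    """Stage 5: deliver the digital product to the end user."""
    return f"Report on {report['topic']}: " + "; ".join(report["facts"])

# Hypothetical gathered sources
web = [{"topic": "markets", "fact": "demand rising"}]
gov = [{"topic": "policy", "fact": "tariff cut"},
       {"topic": "markets", "fact": "new entrant"}]

report = distribute(synthesize(select(organize(gather([web, gov])), "markets")))
print(report)
```

Each function hands its result to the next stage, mirroring how value is added to information at every step before a digital product reaches the end user.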
Importance of Virtual Value Chain

The concept of the virtual value chain was devised in light of current internet penetration. It adds
to, rather than replaces, the existing physical value chain: information technology provides a
holistic view of the physical value chain and helps make it efficient and effective.

Today's information systems are capable of capturing information from every part of the value chain.
This information is used to optimize performance at each stage. It can also be used to improve the
customer experience at each stage, and this enhanced experience can take the form of new products and
services, generating more revenue for the company.

Value Chain and E-Strategy - Components of Commercial Value Chain


All companies undertake a series of activities in order to deliver a product to customers. These
activities (procurement of raw materials, storage, production, distribution, etc.) are referred to as
value chain activities. Their function is to add value to the product at every stage before it is
delivered to the customers. Two components make up the value chain:

 Primary Activities
The primary activities are directly associated with the manufacturing of products like supply
management, plant operations, etc.

 Secondary Activities.
The secondary activities are referred to as support functions such as finance, HR, information
technology, etc.

In the era of advanced information and communication technology, many businesses have started
operations using the internet as their medium. Through the internet, commercial activities like
buying, selling and auctioning take place; this online commercial activity is known as e-commerce. The
e-commerce value chain comprises a series of activities such as electronic funds transfer, internet
marketing, distribution channels, supply chain management, etc.

Value Chain and E-Strategy

Every activity within a physical value chain has an inherent information component. The amount of
information present in its activities determines a company's orientation towards e-commerce. It has
been observed that companies with a high information presence adopt e-commerce faster than companies
with a lower information presence.

For example, a computer manufacturer has a high information presence: it can provide a great deal of
product information through its website, and consumers have the flexibility to configure the product
using the website. Such computer manufacturers, and companies with comparable business models, are
likely to adopt e-commerce.

The activities which comprise the value chain are undertaken by companies to produce and sell products
and services. They include understanding customer needs, designing products, procuring materials for
production, production itself, storage of products, distribution of products, after-sale service and
customer care.

Understanding Information Presence

There are two ways to assess information presence: the first is by looking at the industry, and the
second by looking at the product. In an industry with a high information presence, it has been
observed that:

 The industry has a large customer base.
 The production process is complex.
 The order turnaround cycle is long.

For a product with a high information presence, the following is observed:

 The product is simple to manufacture.
 The product has multiple functionalities.
 The product requires in-depth end-user training.

Industries and products which satisfy the above conditions are likely to adopt e-commerce.

E-Strategy

Companies with a high information presence were the first to look at e-commerce as an alternative way
of conducting business. Software companies are an example: much of their business is done through the
internet. Their websites provide in-depth product information through e-brochures, video, client
opinions, etc. Sales leads are generated online, purchases and fund transfers are completed online,
and after-sales service is also delivered online.

These high information companies have made substantial investments in human resources and
information/communication technology.

Challenges

Companies which are moving towards e-commerce need a business model developed to support online
activities. The dotcom bust of 2000 served as a hard lesson for companies doing e-commerce.

Components of Commercial Value Chain



The concept of the value chain was introduced by Michael Porter. It helps categorize the activities
undertaken by an enterprise to deliver a successful product to a customer. Since its introduction in
the 1980s, the concept has been at the forefront of developing strategies around customer delight and
commercial success. The value chain is the series of activities undertaken by an organization to
deliver a product to end users. The concept applies not just to a single manufacturing organization
but to all the players in the value chain. One of the purposes of the value chain is to understand
which activities add value during creation of the end product.

Value Chain

An enterprise undertakes several primary activities as well as secondary activities to deliver the
final product to customers. Primary activities are defined as activities which directly support the
production of the product or service. Secondary activities, or support activities, are activities
which support the primary activities.

Primary Activities

Primary activities in the value chain are directly related to the production and delivery of the final
product. The objective of these activities is to add value to the product that exceeds the cost of
providing it, ensuring that the company can generate a healthy margin and stay in business. Primary
activities mainly consist of the inbound supply chain, operations, dispatch, sales and marketing, and
service.

 Inbound supply chain is made up of activities like receiving raw materials, storing raw
materials and inventory management.
 Operations consist of activities which convert different raw material into final product.
 Dispatch activities consist of sending final product to distributors, retailers etc.
 Sales and marketing activities include promotion of products to potential as well as existing
customers, networking with channel partners, etc.
 Service consists of activities like solving customer issues before as well as after the sale of the
product, i.e. customer care or customer support.

Commercial Value Chain

A commercial value chain is any value chain used to achieve an organizational goal. Every company in a
given industry will have its own value chain; however, the objective of all these different value
chains is the same: to add value at every stage until the product is delivered. For an online
business, the value chain includes the following activities:

a) Attracting Potential Customers and Retaining Existing Customers


For an online business, it is very important to be able to generate visitors to the website; this
ensures customers are aware of available products and pricing. Companies also want to ensure that the
website attracts repeat customers.

b) Customer Interaction

Website design and navigation should ensure that potential buyers are able to reach the required web
page. Another option is for customers to enter their requirements and have the website display
potential products.

c) Order Processing and Payment


Once a potential buyer has selected a product, the website should be equipped to display other
products similar to the purchase, or to ask whether the customer would be interested in making another
purchase. The purchase order should also show the expected shipping date and the number of days before
the product will arrive. After the purchase transaction, the next important step is payment through a
secure funds transfer.

d) Order Delivery and Customer Care


The website should provide online tracking of the product, along with details about possible delays,
and should be equipped to resolve queries online through frequently asked questions, email support,
etc.
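The four website activities above can be sketched as a toy order-handling flow. The class, method names and catalogue data are hypothetical, for illustration only:

```python
# A toy model of the online commercial value chain stages:
# attract -> interact -> order & pay -> deliver & support.
# All names and data here are illustrative assumptions.

class OnlineStore:
    def __init__(self, catalogue):
        self.catalogue = catalogue      # product name -> price
        self.visitors = []
        self.orders = []

    def attract(self, visitor):
        """Stage a: bring a potential or repeat customer to the site."""
        self.visitors.append(visitor)

    def interact(self, keyword):
        """Stage b: help the buyer find products matching a requirement."""
        return [p for p in self.catalogue if keyword in p]

    def order(self, visitor, product, paid):
        """Stage c: process the order; require secure payment first."""
        if product not in self.catalogue or not paid:
            return None
        order_id = len(self.orders)
        self.orders.append({"id": order_id, "buyer": visitor,
                            "product": product, "status": "shipped"})
        return order_id

    def track(self, order_id):
        """Stage d: online tracking and customer care."""
        return self.orders[order_id]["status"]

store = OnlineStore({"laptop-basic": 500, "laptop-pro": 900})
store.attract("alice")
matches = store.interact("laptop")   # both laptops match the keyword
oid = store.order("alice", matches[0], paid=True)
print(store.track(oid))
```

Each method corresponds to one stage of the commercial value chain described above; an unpaid or unknown order is rejected, mirroring the text's emphasis on secure payment.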

The Quantitative Approach for e-Strategy - Seven Dimensions of e-Commerce

The way business and commerce are conducted has changed a great deal with the advent of the
information and communication revolution, and in the last two decades there has been phenomenal growth
in e-commerce. Electronic commerce, or e-commerce, consists of buying, selling and auctioning various
products and services through an online medium such as the internet, with payment made through a
secure online payment system. All, or the majority, of today's companies either have websites or
conduct e-commerce. In such a scenario, it becomes very important to have a well-defined business
model and a formulated e-commerce strategy.

E-strategy Formulation

The two most important factors determining a successful strategy are customer requirements and
commercial scalability; without either, the business will fail in its venture. Customers expect
superior quality in the products and services they purchase. For e-commerce, quality means an easily
navigable website, secure transactions and good website management.

To develop and manage e-commerce sites, companies have to invest in manpower and technology.
E-commerce sites consist of complex software and hardware structures, and companies choose the
technology to run their sites based on cost-benefit analysis and projected scalability.

Therefore, it is important for companies to undertake a quantitative approach towards e-commerce.

Seven Dimensions of e-Commerce

A successful e-commerce strategy model consists of two sets of dimensions: organizational structure
and organizational positioning.

Organization Structure

Organizational structure is the building block of a successful strategy. It consists of leadership,
infrastructure and the organizational learning curve.

A successful strategy starts with vision and mission statements, and this vision comes from corporate
leadership. Corporate leadership should keep an open mind about emerging technology and should be
flexible in changing strategy to stay in tune with an ever-changing world.

Another building block of a successful strategy is technology infrastructure, which has to be adaptive
to constant innovation and to requirements throughout the organization. The technology infrastructure
needs to be cost-effective, secure and manageable.

The last important part of organizational structure is organizational learning. The organization needs
to maintain and encourage a culture of organizational learning, which prepares the company for the
adoption of new strategies and the introduction of new technologies.

Organizational Positioning

The second important factor of e-strategy is the organizational positioning in technology, brand,
service and market.

Technology leadership provides companies with competitive advantage; therefore, it is important to
identify emerging trends and invest in those technology solutions.

The internet has provided an alternative medium through which an organization can benefit in brand
development. People are logging onto the internet more than ever, providing a golden opportunity for
organizations to reinforce their brand leadership or create brand awareness.

Another dimension of successful organizational positioning is service leadership. Service means
providing the customer with a delightful experience in both pre- and post-sales scenarios. Delightful
service does not translate into revenue immediately, but it helps build relationships, create brand
awareness and create brand ambassadors.

Organizations have managed to achieve phenomenal growth using the internet. They have assessed market conditions preemptively and responded by providing the correct market offerings.

Clearly, in the current business environment it is important to acknowledge the importance of e-commerce and to prepare a strategy which provides an organization with a competitive advantage.

THE STRATEGIC PROCESS AND INFORMATION SYSTEM PLANNING

The concept of Strategic Information Systems or "SIS" was first introduced into the field of information systems in 1982-83 by Dr. Charles Wiseman, president of a newly formed consultancy called "Competitive Applications."

Strategic information systems planning, or SISP, is based on two core arguments. The first is that, at
a minimum, a firm’s information systems investments should be aligned with the overall business
strategy and in some cases may even become an emerging source of competitive advantage. While no
one disagrees with this, operations management researchers are just starting to study how this


alignment takes place and what the measurable benefits are. An issue under examination is how a manufacturer’s business strategy, characterized as either “market focused” or “operations focused,” affects its ability to garner efficiency versus customer service benefits from its Enterprise Resource Planning (ERP) investments.

The second core argument behind SISP is that companies can best achieve IS-based alignment or
competitive advantage by following a proactive, formal and comprehensive process that includes the
development of broad organizational information requirements. This is in contrast to a “reactive”
strategy, in which the IS group sits back and responds to other areas of the business only when a need
arises. Such a process is especially relevant to ERP investments, given their costs and long-term
impact. Segars, Grover and Teng have identified six dimensions that define an excellent SISP
process (notice that many of these would apply to the strategic planning process in other areas as
well):

1. Comprehensiveness

Comprehensiveness is “the extent to which an organization attempts to be exhaustive or inclusive in


making and integrating strategic decisions”.

2. Formalization

Formalization is “the existence of structures, techniques, written procedures, and policies that guide
the planning process”.

3. Focus

Focus is “the balance between creativity and control orientations inherent within the strategic
planning system”. An innovative orientation emphasizes innovative solutions to deal with
opportunities and threats. An integrative orientation emphasizes control, as implemented through
budgets, resource allocation, and asset management.

4. Top-down flow

SISP should be initiated by top managers, with the aid of support staff.

5. Broad participation

Even though the planning flow is top-down, participation must involve multiple functional areas and,
as necessary, key stakeholders at lower levels of the organization.

6. High consistency

SISP should be characterized by frequent meetings and reassessments of the overall strategy.
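As a rough illustration (not part of the SISP literature itself), the six dimensions above can be treated as a simple self-assessment checklist. The 1-5 rating scale, the scoring function, and the sample scores below are all hypothetical:

```python
# Illustrative sketch: scoring an organization's SISP process against the
# six dimensions identified by Segars, Grover and Teng. The dimension names
# come from the text; the 1-5 scale and sample ratings are hypothetical.

DIMENSIONS = [
    "comprehensiveness",
    "formalization",
    "focus",
    "top_down_flow",
    "broad_participation",
    "high_consistency",
]

def assess_sisp(scores: dict) -> dict:
    """Average the 1-5 ratings and flag dimensions scoring below 3."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Unrated dimensions: {missing}")
    average = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    weak = [d for d in DIMENSIONS if scores[d] < 3]
    return {"average": round(average, 2), "weak_dimensions": weak}

# Hypothetical self-assessment for one firm:
result = assess_sisp({
    "comprehensiveness": 4,
    "formalization": 2,       # few written procedures guide planning
    "focus": 3,
    "top_down_flow": 5,       # initiated by top managers
    "broad_participation": 2, # planning confined to the IS group
    "high_consistency": 4,
})
```

The weak dimensions identify where the planning process itself, rather than any individual system, needs management attention.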

The recommendations found in the SISP literature have been echoed in the operations management literature. It has been suggested that firms should institutionalize a formal top-down planning process for linking information systems strategy to business needs as they evolve the management orientation, planning, organization and control aspects of the IT function.


Background
For a long time, the relationship between the information systems function and corporate strategy was not of much interest to the top management of firms. Information systems were thought to be synonymous with corporate data processing and treated as a back-room operation in support of day-to-day mundane tasks. In the 80s and 90s, however, there was a growing realization of the need to make information systems of strategic importance to an organization. Consequently, strategic information systems planning (SISP) is a critical issue. In many industry surveys, improved SISP is often mentioned as the most serious challenge facing IS managers.

Planning for information systems, as for any other system, begins with the identification of needs. In
order to be effective, development of any type of computer-based system should be a response to
need--whether at the transaction processing level or at the more complex information and support
systems levels. Such planning for information systems is much like strategic planning in management.
Objectives, priorities, and authorization for information systems projects need to be formalized. The
systems development plan should identify specific projects slated for the future, priorities for each
project and for resources, general procedures, and constraints for each application area. The plan must
be specific enough to enable understanding of each application and to know where it stands in the
order of development. Also the plan should be flexible so that priorities can be adjusted if necessary.
A strategic capability architecture - a flexible and continuously improving infrastructure of organizational capabilities - is the primary basis for a company's sustainable competitive advantage, and this architecture needs to be continuously updated and improved.

SISP is the analysis of a corporation’s information and processes using business information models
together with the evaluation of risk, current needs and requirements. The result is an action plan
showing the desired course of events necessary to align information use and needs with the strategic
direction of the company (Battaglia, 1991). The same article emphasizes the need to note that SISP is
a management function and not a technical one. This is consistent with the earlier distinction between
the older data processing views and the modern strategic importance view of Information Systems.
SISP thus is used to identify the best targets for purchasing and installing new management
information systems and help an organization maximize the return on its information technology
investment. A portfolio of computer-based applications is identified that will assist an organization in
executing its business plans and realize its business goals. There is a growing realization that the
application of information technology (IT) to a firm’s strategic activities has been one of the most
common and effective ways to improve business performance.

Overview
Strategic systems are information systems that are developed in response to corporate business
initiative. They are intended to give competitive advantage to the organization. They may deliver a
product or service that is at a lower cost, that is differentiated, that focuses on a particular market
segment, or is innovative.

Strategic information management is a salient feature in the world of information technology (IT). In
a nutshell, strategic information management helps businesses and organizations categorize, store,
process and transfer the information they create and receive. It also offers tools for helping companies


apply metrics and analytical tools to their information repositories, allowing them to recognize
opportunities for growth and pinpoint ways to improve operational efficiency.

General Definition
Strategic information systems are those computer systems that implement business strategies; they are those systems where information services resources are applied to strategic business opportunities in such a way that the computer systems have an impact on the organization’s products and business operations. Strategic information systems are always systems that are developed in response to a corporate business initiative. In several well-known cases the ideas came from Information Services people, but they were directed at specific corporate business thrusts. In other cases, the ideas came from business operational people, and Information Services supplied the technological capabilities to realize profitable results.

Most information systems are looked on as support activities to the business. They mechanize
operations for better efficiency, control, and effectiveness, but they do not, in themselves, increase
corporate profitability. They are simply used to provide management with sufficient dependable
information to keep the business running smoothly, and they are used for analysis to plan new
directions. Strategic information systems, on the other hand, become an integral and necessary part of
the business, and they affect the profitability and growth of a company. They open up new markets
and new businesses. They directly affect the competitive stance of the organization, giving it an
advantage against the competitors.

Most literature on strategic information systems emphasizes the dramatic breakthroughs in computer
systems, such as American Airlines' Sabre System and American Hospital Supply’s terminals in
customer offices. These, and many other highly successful approaches are most attractive to think
about, and it is always possible that an equivalent success may be attained in your organization. There
are many possibilities for strategic information systems, however, which may not be dramatic breakthroughs, but which will certainly become a part of corporate decision making and will increase corporate profitability. The development of any strategic information system always enhances the image of Information Services in the organization, and leads to information management having a more participatory role in the operation of the organization.

The three general types of information systems that are developed and in general use are financial
systems, operational systems, and strategic systems. These categories are not mutually exclusive and,
in fact, they always overlap to some extent. Well-directed financial systems and operational systems may
well become the strategic systems for a particular organization.

Financial systems are the basic computerization of the accounting, budgeting, and finance operations
of an organization. These are similar and ubiquitous in all organizations because the computer has
proven to be ideal for the mechanization and control of financial systems; these include the personnel
systems because the headcount control and payroll of a company is of prime financial concern.
Financial systems should be one of the bases of all other systems because they give a common,
controlled measurement of all operations and projects, and can supply trusted numbers for indicating
departmental or project success. Organizational planning must be tied to financial analysis. There is
always a greater opportunity to develop strategic systems when the financial systems are in place, and
required figures can be readily retrieved from them.


Operational systems, or services systems, help control the details of the business. Such systems will
vary with each type of enterprise. They are the computer systems that operational managers need to
help run the business on a routine basis. They may be useful but mundane systems that simply keep
track of inventory, for example, and print out reorder points and cost allocations. On the other hand,
they may have a strategic perspective built into them, and may handle inventory in a way that
dramatically impacts profitability. A prime example of this is the American Hospital Supply inventory
control system installed on customer premises. Where the great majority of inventory control systems
simply smooth the operations and give adequate cost control, this well-known hospital system broke
through with a new version of the use of an operational system for competitive advantage. The great
majority of operational systems for which many large and small computer systems have been
purchased, however, simply help to manage and automate the business. They are important and
necessary, but can only be put into the "strategic" category if they have a pronounced impact on the
profitability of the business.

All businesses should have both long-range and short-range planning of operational systems to ensure
that the possibilities of computer usefulness will be seized in a reasonable time. Such planning will include project analysis and costing, system development life cycle considerations, and specific technology
planning, such as for computers, databases, and communications. There must be computer capacity
planning, technology forecasting, and personnel performance planning. It is more likely that those in
the organization with entrepreneurial vision will conceive of strategic plans when such basic
operational capabilities are in place and are well managed.

Operational systems, then, are those that keep the organization operating under control and most cost
effectively. Any of them may be changed to strategic systems if they are viewed with strategic vision.
They are fertile grounds for new business opportunities.

Strategic systems are those that link business and computer strategies. They are the systems where new business strategies have been developed and can be realized using information technology.
They may be systems where new computer technology has been made available on the market, and
planners with an entrepreneurial spirit perceive how the new capabilities can quickly gain competitive
advantage. They may be systems where operational management people and Information Services
people have brainstormed together over business problems, and have realized that a new competitive
thrust is possible when computer methods are applied in a new way.

There is a tendency to think that strategic systems are only those that have been conceived at what popular scientific writing sometimes calls the "achtpunkt." This is simply synthetic German for "the point where you say 'acht!' or 'that's it!'" The classical story of Archimedes discovering the principle of the density of matter by getting into a full bathtub, seeing it overflow, then shouting "Eureka!" or "I have found it!" is a perfect example of an achtpunkt. It is most pleasant and profitable if someone is
brilliant enough, or lucky enough, to have such an experience. The great majority of people must be
content, however, to work step-by-step at the process of trying to get strategic vision, trying to
integrate information services thinking with corporate operational thinking, and trying to conceive of
new directions to take in systems development. This is not an impossible task, but it is a slow task that
requires a great deal of communication and cooperation. If the possibilities of strategic systems are
clearly understood by all managers in an enterprise, and they approach the development of ideas and
the planning systematically, the chances are good that strategic systems will result. These may not
be as dramatic as American Airline’s Sabre, but they can certainly be highly profitable.


There is general agreement that strategic systems are those information systems that may be used for gaining competitive advantage. How is competitive advantage gained? At this point, different writers list different possibilities, but none of them claim that there may not be other openings to move through.

Some of the more common ways of thinking about gaining competitive advantage are:

a) Deliver a product or a service at a lower cost


This does not necessarily mean the lowest cost, but simply a cost related to the quality of the product
or service that will be both attractive in the marketplace and will yield sufficient return on investment.
The cost considered is not simply the data processing cost, but is the overall cost of all corporate
activities for the delivery of that product or service. There are many operational computer systems
that have given internal cost saving and other internal advantages, but they cannot be thought of as
strategic until those savings can be translated to a better competitive position in the market.

b) Deliver a product or service that is differentiated


Differentiation means the addition of unique features to a product or service that are competitively attractive in the market. Generally such features will cost something to produce, and so they will be the selling point, rather than the cost itself. Seldom does a lowest cost product also have the best differentiation. A strategic system helps customers to perceive that they are getting some extras for which they will willingly pay.

c) Focus on a specific market segment


The idea is to identify and create market niches that have not been adequately filled. Information
technology is frequently able to provide the capabilities of defining, expanding, and filling a particular
niche or segment. The application would be quite specific to the industry.

d) Innovation

Develop products or services through the use of computers that are new and appreciably different from other available offerings. Examples of this are automatic credit card handling at service stations, and automatic teller machines at banks. Such innovative approaches not only give new opportunities to attract customers, but also open up entirely new fields of business, so that their use has very elastic demand.

Almost any data processing system may be called "strategic" if it aligns the computer strategies with
the business strategies of the organization, and there is close cooperation in its development between
the information Services people and operational business managers. There should be an explicit
connection between the organization’s business plan and its systems plan to provide better support of
the organization’s goals and objectives, and closer management control of the critical information
systems.

Many organizations that have done substantial work with computers since the 1950s have long used
the term "strategic planning" for any computer developments that are going to directly affect the
conduct of their business. Not included are budget or annual planning and the planning of developing
Information Services facilities and the many "housekeeping" tasks that are required in any
corporation. Definitely included in strategic planning are any information systems that will be used by
operational management to conduct the business more profitably. A simple test would be to ask
whether the president of the corporation, or some senior vice presidents, would be interested in the


immediate outcome of the systems development because they felt it would affect their profitability. If
the answer is affirmative, then the system is strategic.

Strategic systems, thus, attempt to match Information Services resources to strategic business
opportunities where the computer systems will have an impact on the products and the business
operations. Planning for strategic systems is not defined by calendar cycles or routine reporting. It is
defined by the effort required to impact the competitive environment and the strategy of a firm at the
point in time that management wants to move on the idea.

Effective strategic systems can only be accomplished, of course, if the capabilities are in place for the
routine basic work of gathering data, evaluating possible equipment and software, and managing the
routine reporting of project status. The calendarized planning and operational work is absolutely
necessary as a base from which a strategic system can be planned and developed when a priority
situation arises. When a new strategic need becomes apparent, Information Services should have laid
the groundwork to be able to accept the task of meeting that need.

Strategic systems that are dramatic innovations will always be the ones that are written about in the
literature. Consultants in strategic systems must have clearly innovative and successful examples to
attract the attention of senior management. It should be clear, however, that most Information
Services personnel will have to leverage the advertised successes to gain funding for their own
systems. These systems may not have an Olympic effect on an organization, but they will have a good
chance of being clearly profitable. That will be sufficient for most operational management, and will
draw out the necessary funding and support. It helps to talk about the possibilities of great
breakthroughs, if it is always kept in mind that there are many strategic systems developed and
installed that are successful enough to be highly praised within the organization and offer a
competitive advantage, but will not be written up in the Harvard Business Review.

Characteristics of Strategic IS Planning


Some characteristics of strategic IS planning are:

• Main task: strategic/competitive advantage, linkage to business strategy.

• Key objective: pursuing opportunities, integrating IS and business strategies.

• Direction from: executives/senior management and users; a coalition of users/management and information systems.

• Main approach: entrepreneurial (user innovation) and multiple approaches (bottom-up development, top-down analysis, etc.) at the same time.

Strategic Information Systems Planning in the present SIS era is not an easy task because such a
process is deeply embedded in business processes. These systems need to cater to the strategic
demands of organizations, i.e., serving the business goals and creating competitive advantage as well
as meeting their data processing and MIS needs. The key point here is that organizations have to plan
for information systems not merely as tools for cutting costs but as means to adding value. The
magnitude of this change in perspective of IS/IT’s role in organizations is highlighted in a Business
Week article, ‘The Technology Payoff’ (Business Week, June 14, 1993).


Throughout the 1980s, US businesses invested a staggering $1 trillion in information technology.
This huge investment did not result in a commensurate productivity gain - overall national
productivity rose at a 1% annual rate compared with nearly 5% in Japan. Using the information
technology merely to automate routine tasks without altering the business processes is identified as
the cause of the above productivity paradox. As IT is used to support breakthrough ideas in business
processes, essentially supporting direct value adding activities instead of merely cost saving, it has
resulted in major productivity gains. In 1992, productivity rose nearly 3% and the corporate profits
went up sharply. According to an MIT study quoted in the above article, the return on investment in
information systems averaged 54% for manufacturing and 68% for all businesses surveyed. This
impact of information technology on re-defining, re-engineering businesses is likely to continue and it
is expected that information technology will play increasingly important roles in future. For example,
Pant, et al. (1994) point out that the emerging vision of virtual corporations will become a reality only
if it is rooted in new visionary information technology. It is information technology alone which will
carve multiple ‘virtual corporations’ simultaneously out of the same physical resources and adapt
them without having to change the actual organizations. Thus, it is obvious that information
technology has indeed come a long way in the SIS era, offering unprecedented possibilities, which, if
not cashed in on, would turn into unprecedented risks. As Keen (1993) has morbidly but realistically pointed out, organizations not planning for strategic information systems may fail to spot the
business implications of competitors’ use of information technology until it is too late for them to
react. In situations like this, when information technology changes the basics of competition in an
industry, 50% of the companies in that industry disappear within ten years.

Strategic Information Systems Planning Methodologies


The task of strategic information systems planning is difficult, and often organizations do not
know how to do it. Strategic information systems planning is a major change for organizations, from
planning for information systems based on users’ demands to those based on business strategy. Also
strategic information systems planning changes the planning characteristics in major ways. For
example, the time horizon for planning changes from 1 year to 3 years or more and development plans
are driven by current and future business needs rather than incremental user needs. Increase in the
time horizon is a factor which results in poor response from the top management to the strategic
information systems planning process as it is difficult to hold their attention for such a long period.
Other questions associated with strategic information systems planning are related to the scope of the
planning study, the focus of the planning exercise – corporate organization vs. strategic business unit,
number of studies and their sequence, choosing a strategic information systems planning methodology
or developing one if none is suitable, targets of planning process and deliverables. Because of the
complexity of the strategic information systems planning process and uniqueness of each
organization, there is no one best way to tackle it. Vitale, et al. (1986) classify SISP methodologies
into two categories:

a) Impact and
b) Alignment

a) Impact Methodologies
Impact methodologies help create and justify new uses of IT, while the methodologies in the
“alignment” category align IS objectives with organizational goals.


Some of the impact methodologies are discussed below.

1. Value Chain Analysis


The concept of the value chain is considered at length by Michael Porter (1985). According to him,
‘every firm is a collection of activities that are performed to design, produce, market, deliver, and
support its product. All these activities can be represented using a value chain.’ Porter goes on to
explain that information technology is one of the major support activities for the value chain.
“Information systems technology is particularly pervasive in the value chain, since every value
activity creates and uses information. The recent, rapid technological change in information systems is
having a profound impact on competition and competitive advantage because of the pervasive role of
information in the value chain. Change in the way office functions can be performed is one of the
most important types of technological trends occurring today for many firms, though few are devoting
substantial resources to it. ... A firm that can discover a better technology for performing an activity than its competitors thus gains competitive advantage.”

Once the value chain is charted, executives can rank order the steps in importance to determine which
departments are central to the strategic objectives of the organization. Also, executives can then
consider the interfaces between primary functions along the chain of production, and between support
activities and all of the primary functions. This helps in identifying critical points of inter-
departmental collaboration. Thus, value chain analysis:

(a) is a form of business activity analysis which decomposes an enterprise into its parts. Information
systems are derived from this analysis.

(b) helps in devising information systems which increase the overall profit available to a firm.

(c) helps in identifying the potential for mutual business advantages of component businesses, in the
same or related industries, available from information interchange.

(d) concentrates on value-adding business activities and is independent of organizational structure.
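The decomposition-and-ranking step described above can be sketched in a few lines. The activity names, the single value-added number per activity, and the ranking rule are all simplified, hypothetical illustrations; Porter's full method also examines the linkages between activities:

```python
# Minimal, illustrative sketch (not Porter's formal method): represent a
# firm as a collection of value activities and rank-order them by their
# contribution to strategic objectives. All figures are hypothetical.

from dataclasses import dataclass

@dataclass
class ValueActivity:
    name: str
    kind: str            # "primary" or "support"
    value_added: float   # contribution to margin, in arbitrary units

def rank_activities(chain):
    """Return activities sorted by value added, highest first."""
    return sorted(chain, key=lambda a: a.value_added, reverse=True)

chain = [
    ValueActivity("inbound logistics", "primary", 2.0),
    ValueActivity("operations", "primary", 5.5),
    ValueActivity("marketing and sales", "primary", 4.0),
    ValueActivity("information systems", "support", 3.0),
]

ranked = rank_activities(chain)
# The top-ranked activities are the first candidates for IS investment.
```

Ranking the activities this way makes the "which departments are central to the strategic objectives" question concrete for executives.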

Strengths

The main strength of value chain analysis is that it concentrates on direct value adding activities of a
firm and thus pitches information systems right into the realm of value adding rather than cost cutting.

Weaknesses

Although very useful and intuitively appealing, value chain analysis suffers from a few weaknesses, namely:

a) it only provides a higher level information model for a firm and fails to address the
developmental and implementation issues.
b) it fails to define a data structure for the firm because of its focus on internal operations instead of data.
c) the basic concept of a value chain is difficult to apply to non-manufacturing organizations
where the product is not tangible and there are no obvious raw materials.
d) it does not provide automated support for carrying out the analysis.


Value chain analysis, therefore, needs to be used in conjunction with some other methodology which
addresses the development and implementation issues and defines a data structure.

2. Critical Success Factor Analysis


Critical success factor analysis can be considered to be both an impact and an alignment
methodology. Critical Success Factors (CSF) in the context of SISP are used for interpreting more
clearly the objectives, tactics, and operational activities in terms of the key information needs of an organization and its managers, and the strengths and weaknesses of the organization’s existing systems.
Rockart (1979) defines critical success factors as being ‘for any business the limited number of areas
in which results, if they are satisfactory, will ensure successful competitive performance for the
organization.’

Consequently, critical success factors are areas of activity that should receive constant and careful
attention from management.

Rockart originally developed the CSF approach as a means to understanding the information needs of
CEOs. The approach has subsequently been applied to the enterprise as a whole and has been
extended into a broader planning methodology. It has been made the basis of many consulting
practices and has achieved major results where it has been used well.

CSFs can exist at a number of levels, i.e., industry, organizational, business unit, or manager’s. CSFs
at a lower level are derived from those at the preceding higher level. The CSF approach introduces
information technology into the initial stages of the planning process and helps provide a realistic
assessment of IT’s contribution to the organization.
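The level-by-level derivation described above can be sketched as a simple hierarchy, where CSFs at each level are derived from those at the preceding, higher level. All the factor names below are hypothetical examples, not drawn from Rockart:

```python
# Illustrative sketch of the CSF hierarchy: industry -> organization ->
# business unit -> manager. Every factor name here is a hypothetical
# example used only to show the derivation structure.

csf_hierarchy = {
    "industry": ["regulatory compliance", "supply reliability"],
    "organization": ["cost leadership", "customer retention"],
    "business_unit": ["on-time delivery", "order accuracy"],
    "manager": ["daily backlog visibility"],
}

LEVELS = ["industry", "organization", "business_unit", "manager"]

def parent_level(level: str):
    """Return the higher level from which a level's CSFs are derived."""
    i = LEVELS.index(level)
    return LEVELS[i - 1] if i > 0 else None

def information_needs(hierarchy):
    """Flatten the hierarchy into one list of key information
    requirements, top level first, as input to systems identification."""
    return [csf for level in LEVELS for csf in hierarchy[level]]
```

The flattened list is what the planners would carry forward: each key information requirement becomes a candidate around which an information system is developed.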

Strengths

CSF analysis provides a very powerful method for concentrating on key information requirements of
an organization, a business unit, or of a manager. This allows the management to concentrate
resources on developing information systems around these requirements. Also, CSF analysis is easy to
perform and can be carried out with few resources.

Weaknesses

(a) although a useful and widely used technique, CSF analysis by itself is not enough to perform comprehensive SISP - it does not define a data architecture or provide automated support for analysis.

(b) to be of value, the CSF analysis should be easily and directly related back to the objectives of the
business unit under review. It has been the experience of the people using this technique that generally
it loses its value when used below the third level in an organizational hierarchy (Ward, 1990, p.164).

(c) CSFs focus primarily on management control and thus tend to be internally focused and analytical rather than creative.

(d) CSFs partly reflect a particular executive’s management style. Use of CSFs as an aid in identifying
systems, with the associated long lead-times for developing these systems, may lead to giving an
executive information that s/he does not regard as important (Ibid.).


(e) CSFs do not draw attention to the value-added aspect of information systems. While CSF analysis
facilitates identification of information systems which meet the key information needs of an
organization/business unit, the value derived from these systems is not assessed.

b) Alignment Methodologies

Some of the alignment methodologies include:

1. Business Systems Planning (BSP)


This methodology, developed by IBM, combines top down planning with bottom up implementation.
The methodology focuses on business processes which in turn are derived from an organization’s
business mission, objectives and goals. Business processes are analyzed to determine data needs and,
then, data classes. Similar data classes are combined to develop databases. The final BSP plan
describes an overall information systems architecture as well as installation schedule of individual
systems.
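
BSP's bottom-up step of combining similar data classes into candidate databases can be roughly illustrated as follows. The worksheet, which maps each data class to the business process that creates it, and the grouping rule are assumptions for the sketch, not part of the BSP methodology itself.

```python
# Hypothetical BSP worksheet: the business process that creates each data class.
creates = {
    "Customer": "Sales",
    "Order": "Sales",
    "Invoice": "Accounting",
    "Payment": "Accounting",
}

def subject_databases(creates):
    """Group data classes created by the same process into one
    candidate subject database."""
    groups = {}
    for data_class, process in creates.items():
        groups.setdefault(process, []).append(data_class)
    return groups

print(subject_databases(creates))
```

In practice BSP analysts use a process/data-class matrix and cluster on usage patterns, but the principle is the same: data classes that belong together form the seed of a shared database.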

Strengths

Because BSP combines a top down business analysis approach with a bottom up implementation
strategy, it represents an integrated methodology. In its top down strategy, BSP is similar to CSF
method in that it develops an overall understanding of business plans and supporting IS needs through
joint discussions. Since IBM is the vendor of this methodology, it has the advantage of being better
known to top management than other methodologies.

Weaknesses

Some of the weaknesses of this type of methodology include:

(a) BSP requires a firm commitment from the top management and their substantial involvement.

(b) It requires a high degree of IT experience within the BSP planning team.

(c) There is a problem of bridging the gap between top-down planning and bottom-up implementation.

(d) It does not incorporate a software design methodology.

(e) A major weakness of BSP is the considerable time and effort required for its successful
implementation.

2. Strategic Systems Planning (SSP)

Also known as PRO planner and developed by Robert Holland, this methodology is similar to BSP. A
business functional model is defined by analyzing major functional areas of a business. A data
architecture is derived from the business function model by combining information requirements into
generic data entities and subject databases. New systems and their implementation schedules are then
derived from this architecture. Although steps in the SSP procedure are similar to those in the BSP, a
major difference between SSP and BSP is SSP's automated handling of the data collected during the
SISP process. Software produces reports in a wide range of formats and with various levels of detail.
Affinity reports show the frequencies of accesses to data and clustering reports give guidance for
database design. Users are guided through menus for on-line data collection and maintenance. The
software also provides a data dictionary interface for sharing SSP data with an existing data dictionary
or other automated design tools.
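
An affinity report of the kind mentioned above can be approximated with a simple access count. The access log below is invented for illustration; real SSP software works from the data collected during the SISP study.

```python
from collections import Counter

# Hypothetical access log: (process, data class) pairs observed in the study.
access_log = [
    ("Order Entry", "Customer"), ("Order Entry", "Order"),
    ("Billing", "Customer"), ("Billing", "Invoice"),
    ("Billing", "Order"), ("Shipping", "Order"),
]

def affinity_report(log):
    """Frequency of accesses to each data class, to guide database design."""
    return Counter(data_class for _, data_class in log)

print(affinity_report(access_log).most_common())
```

A frequently accessed data class (here "Order") is a natural anchor for a subject database, which is the guidance such reports are meant to give.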

In addition to SSP, Holland System’s Corporation also offers two other methodologies - one for
guiding the information system architecture and another for developing data structures for modules
from the SISP study. The strengths and weaknesses of BSP apply to SSP as well.

3. Information Engineering (IE)

This methodology was developed by James Martin (1982) and provides techniques for building
enterprise, data and process models. These models combine to form a comprehensive knowledge base
which is used to create and maintain information systems.

The basic philosophy underlying this technique is the use of structured techniques in all the tasks relating
to planning, analysis, design and construction of enterprise wide information systems. Such structured
techniques are expected to result in well integrated information systems. IE relies on an information
systems pyramid for an enterprise. The pyramid has three sides which represent the organization’s
data, the activities the organization carries out using the data and the technology that is employed in
implementing information systems. IE views all three aspects of information systems from a high-
level, management oriented perspective at the top to a fully detailed implementation at the bottom.
The pyramid describes four levels of activity, namely strategy, analysis, systems design and
construction, each of which involves data, activities and technology.

In addition to information engineering, Martin advocates the use of critical success factors. A major
difference between IE and other methodologies is the automated tools provided by IE to link its
output to subsequent systems development efforts, and this is the major strength of this methodology.
Major weaknesses of IE have been identified as difficulty in securing top management commitment,
difficulty in finding a team leader who meets the criteria, too much user involvement, and the long
time the planning exercise takes.

DEVELOPMENT OF AN INFORMATION SYSTEM STRATEGY


Two types of knowledge are essential in method engineering: knowledge of IS development and
knowledge of method development.

Information System Development Methods


We define ISD as “a change process taken with respect to object systems in a set of environments by a
development group using tools and an organized collection of techniques collectively referred to as a
method to achieve or maintain some objectives”. ISD is understood to include development of both
manual and computerized parts of an object system; an IS can therefore include both manual and
computer-supported parts. Although the definition emphasizes essential components of ISD, such as
its social nature and varying objectives, this text focuses mainly on the role of methods and
techniques, and their supporting tools.


By a technique we mean a set of steps and a set of rules which define how a representation of an IS is
derived and handled using some conceptual structure and related notation. This definition is illustrated
in the figure below. By using a technique, system developers perceive, define and communicate
certain aspects of the current or desired object system. These aspects are defined by the conceptual
structure of the technique and represented by the notation. By a tool we generally mean a computer-
based application which supports the use of a modeling technique. Tool-supported modeling
functionality includes abstraction of the object system into models, checking that models are
consistent, converting results from one form of model and representation to another, and providing
specifications for review.

Examples of modeling techniques are data flow diagrams and activity models. As a technique, a data
flow diagram identifies and names the objects (e.g. process, store) and relationships (e.g. data flow,
control flow) which it considers important in developing an IS. Other techniques include other sets of
objects and relationships. Modeling techniques also have a notation and a representation form. In a
data flow diagram the notation for a process is a circle, and for a data flow a solid line with an arrow-
head. The representation form of a data flow diagram is a graphical diagram. Furthermore, a
technique defines some principles on how the models should be derived (e.g. decomposition of
processes while modeling with data flow diagrams). In other words, a modeling technique specifies
which kind of aspects of an object system need to be perceived, in what notation each aspect is
represented, and how such representations should be produced.
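
The idea that a technique defines objects, relationships and derivation rules can be made concrete. The sketch below represents a tiny data flow decomposition as Python data and checks one common derivation rule, namely that a decomposed process's sub-diagram is balanced with its parent. The flow names are hypothetical.

```python
# A parent process and its sub-diagram, each described by the data
# flows entering and leaving it (names are invented for illustration).
parent = {"in": {"order details"}, "out": {"invoice"}}
sub_diagram = {"in": {"order details"}, "out": {"invoice"}}

def balanced(parent, child):
    """A decomposition is balanced when the parent's inputs and outputs
    equal the external inputs and outputs of its sub-diagram."""
    return parent["in"] == child["in"] and parent["out"] == child["out"]

print(balanced(parent, sub_diagram))
```

This is the kind of consistency check that tool-supported modeling, mentioned earlier, automates across an entire model.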

A method can be considered as a predefined and organized collection of techniques and a set of rules
which state by whom, in what order, and in what way the techniques are used to achieve or maintain
some objectives. In short, we call this method knowledge. Thus, our definition of method includes
both the product and process aspects, although dictionaries define the term “method” as meaning “the
procedure of obtaining an object” and therefore emphasize the process rather than the representation
(i.e. the product of the method’s use). In contrast, Wijers (1991) notes that most ISD method text-books
focus on feasible specifications rather than on the process of how to develop such specifications. In
addition, a method also includes knowledge about method users, development objectives and values.
We will analyze the types of method knowledge in more detail in the next section.

Examples of methods include Structured Analysis and Design, and the object-oriented methods of
Booch (1991) and Rumbaugh et al. (1991). A short example of method knowledge is in order. The
method knowledge of SA/SD can be discussed in terms of the techniques (e.g. data flow diagram,
entity-relationship diagram) and their interrelations. In SA/SD the overall view of the object system is
perceived through a hierarchical structure of the processes that the system includes. This overall
topology is completed by data transformations; how data is used and produced by different processes,
how it is transformed between processes, and where it is stored. Moreover, the data used in the system
needs to be defined in a data-dictionary and interrelations between data need to be specified with
entity-relationship diagrams. Thus, methods describe not only how models are developed but also
how they are organized and structured. Furthermore, since ISD methods aim to carry out the change
process from a current to a desired state they should also include knowledge for creating alternative
design solutions and provide guidelines to select among them (Tolvanen and Lyytinen 1994).

SA/SD and other methods put forward a defined and a limited number of techniques including their
conceptual structures and notations. In the same way as there is variety in techniques, there is also
diversity among methods (Welke and Konsynski 1980). Different methods include different types and
sets of techniques. Interrelations between techniques can be defined differently even between methods
which use the same techniques, and the procedures for building and analyzing models can be
different. Although there is diversity among ISD methods they include similarities, e.g. they apply the
same concepts and notations. To understand these differences and similarities we shall analyze several
methods in more detail by describing types of method knowledge.

Types of Method Knowledge


There are many approaches to analyzing and characterizing different facets of methods including their
structure, content and use. These different categorizations are almost as numerous as the methods
available. For the purposes of ME, we combine some of them which have been applied in ME
research to analyze what type of knowledge ISD methods contain.

The categorization applied here is illustrated in the figure below whose shape leads us to call it a shell
model. According to the model, methods are based on a number of concepts and their interrelations.
These concepts are applied in modeling techniques to represent models of ISs according to a notation.
Processes must be based on the concepts and they describe how models are created, manipulated, and
used with the notation. The concepts and their representations are derived, analyzed, corrected etc. by
various stakeholders. In addition, methods include specific development objectives about a ‘good’ IS,
and have some underlying values, “weltanschauung” and other philosophical assumptions.

The Shell Model

The shape of a shell emphasizes that different types of method knowledge are neither exclusive, nor
orthogonal. Each type of knowledge complements the others and all are required to yield a
“complete” method, although many methods focus only on the concepts and notations included in
modeling techniques. Consider, for example, the concept of decomposition. In the procedural
guidelines of Structured Analysis (DeMarco 1979) this concept is described as a top-down refinement
of the system starting from the high-level diagram. In the modeling technique it is implemented as the
possibility for every process to have a sub-diagram, and in the balancing of the data flows between the
decomposed process and its sub-diagram. The concept of decomposition also affects other method
knowledge in several ways: the method should explain who identifies, specifies and reviews
decompositions; the partitioning of the system into a hierarchical structure dominates the design
decisions; and it reveals the underlying assumption of the method, i.e. that an IS can be effectively
designed by partitioning the system based on its processes.

ALIGNING INFORMATION SYSTEM STRATEGY TO THE ORGANIZATION’S CORPORATE STRATEGY

In the digital age, information technology plays an important role in the success of an organization.
Technology provides an edge in this globalized world. Companies face competition not only from
local companies but from international companies as well.

In such a scenario, it is important that companies invest in technology that is aligned with the overall
strategy of the company. This calls for technology strategy formulation.


Technology Strategy Formulation


Technology strategy formulation concerns the alignment between technology strategy and the overall
strategy of the organization. Here the role of the Chief Information Officer (CIO) comes into
prominence. The CIO should have a short-term as well as a long-term vision of technology
advancement, and should bridge the implications of technological advancement and organizational
strategy. A clear communication of technology's impact on the organization needs to reach the
executive leadership.

This alignment between CIO and CEO revolves around issues like:

 CIO roles and involvement in overall strategy formulation of organization.


 Financial resources available to make investment in technology.
 Earlier results of alignment between technology and organization strategy.
 External business conditions.
The CIO faces the challenge of providing technology value-adds for the organization in achieving its objectives.

Planning

Corporate planning plays an important role in aligning technology with organization strategy. In a
perfect scenario, the CIO and CEO would have the same planning horizon. However, it is often
observed that the CEO and CIO do not share the same vision, from planning to execution.

This introduces the concept of planning lead time. In some organizations, strategy execution does not
match the technology planning horizon. By the time the technology strategy is executed, further
advancement has been made in that system, and the competitive edge is lost.

In the above scenario, companies become reactive rather than proactive. Companies need to adjust to
challenges posed by market leaders and trend setters. A strong CIO-CEO relationship ensures the
organization develops an understanding of technological challenges and their impact on the overall
organization.

Organizational Structure

Organizations need to ensure that their structure is agile and flexible enough to accommodate changes
in technology. They should be efficient and effective enough to deal with the demands of market change.

Organizations need to develop and maintain technology systems which are flexible and adaptive.
Three types of technology infrastructure are available to companies: ERP, data warehousing and
knowledge management.

All three dimensions (ERP, data warehousing and knowledge management) provide a cutting edge to
the organization.

Organizational Systems

Organizations invest in technology looking at present needs, future requirements and its capability to
provide a competitive edge. Systems can be classified into three categories depending on the
technology timeline: new systems, mature systems and declining systems.


New systems use the latest technology and provide a competitive edge. As time progresses, the system
and technology are adopted by more companies, and the competitive edge is lost. Finally, systems and
technology reach the obsolete stage, where usage has declined and they are to be phased out.

The executive leadership of an organization is responsible for managing the range of new systems so
as to enjoy a competitive edge. However, this requires substantial investment and a clear vision of the
future technology state. Therefore, an organization has to walk a tightrope between investing in new
technology and phasing out the obsolete.

Information System for Business Effectiveness


In this digital age, with fierce competition, it is essential that managers within an organization are
completely aware of and receptive to evolving changes. One of the quickest evolving changes is
within information systems, driven by advances in computing and information technology.

Treating information systems as strictly under the purview of the IT department can lead to an adverse
situation for the company. Therefore, it is essential for an organization to recognize the contribution
of information systems to business effectiveness.

Systems and Innovation Opportunities

Development in information systems has brought opportunities but also threats. The onus is on the
organization to identify opportunities and implement them. Organizations need to develop strategies
which can best utilize information systems to increase overall productivity.

The most common practice with regard to information systems is automation. Though automation is
helpful, innovation using information systems gives the organization a competitive edge.

Systems and Customer Delight

Organizations are fully aware that the proliferation of information systems has reduced product life
cycles, reduced margins and brought in new products. In such a scenario customer satisfaction alone
will not suffice; organizations need to strive for customer delight. Information systems with data
warehousing and analytics capability can help an organization collect customer feedback and develop
products which exceed customer expectations. This customer delight will lead to a loyal customer
base and brand ambassadors.

Systems and Organizational Productivity

Organizations require different types of information systems to manage distinct processes and
requirements. Efficient business transaction systems make an organization productive. Business
transaction systems ensure that routine processes are captured and acted upon effectively, for example
sales transactions, cash transactions and payroll.

Further, information systems are required for executive decisions. Top leadership requires precise
internal as well as external information to devise a strategy for the organization. Decision support
systems are designed to perform exactly this function.


Business transaction systems and executive decision support systems contribute to overall
organizational productivity.

System and Workers Productivity

Information systems have facilitated an increase in workers' productivity. With the introduction of
email, video conferencing and shared whiteboards, collaboration across organizations and
departments has increased. This increased collaboration ensures smooth execution and
implementation of various projects across geographies and locations.

Information Systems as a Value Add for the Organization

Organizations use information systems to achieve their various strategies as well as short-term and
long-term goals. Information systems were developed to improve the productivity and business
effectiveness of organizations. The success of information systems is highly dependent on the
prevalent organization structure, management style and overall organization environment.

With correct development, deployment and usage of information systems, an organization can achieve
lower costs, improved productivity, growth in the top line as well as the bottom line, and competitive
advantage in the market.

The readiness of workers to accept information systems is key to realizing their full potential.

Development and deployment of information systems have revolutionized the way business is
conducted, contributing to business effectiveness and increased productivity.

MANAGING INFORMATION SYSTEM STRATEGY


All businesses share one common asset, regardless of the type of business. It does not matter if they
manufacture goods or provide services. It is a vital part of any business entity, whether a sole
proprietorship or a multinational corporation. That common asset is information.

Information enables us to determine the need to create new products and services. Information tells us
to move into new markets or to withdraw from other markets. Without information, the goods do not
get made, the orders are not placed, the materials are not procured, the shipments are not delivered,
the customers are not billed, and the business cannot survive.

But information has far less impact when presented as raw data. In order to maximize the value of
information, it must be captured, analyzed, quantified, compiled, manipulated, made accessible, and
shared. In order to accomplish those tasks, an information system (IS) must be designed, developed,
administered, and maintained.

Improving information management practices is a key focus for many organisations, across both the
public and private sectors. This is being driven by a range of factors, including a need to improve the
efficiency of business processes, the demands of compliance regulations and the desire to deliver new
services.


In many cases, ‘information management’ has meant deploying new technology solutions, such as
content or document management systems, data warehousing or portal applications. These projects
have a poor track record of success, and most organisations are still struggling to deliver an integrated
information management environment.

Effective information management is not easy. There are many systems to integrate, a huge range of
business needs to meet, and complex organisational (and cultural) issues to address. This topic draws
together a number of ‘critical success factors’ for information management projects. These do not
provide an exhaustive list, but do offer a series of principles that can be used to guide the planning
and implementation of information management activities.

Information is a vital ingredient for the operations and management of any organization. A computer-
based management information system is designed to both reduce the costs and increase the
capabilities of organizational information processing. Information systems support the operations and
effective management of major functions in an organization. Online operations facilitate user-machine
dialogue, interactive analysis, planning and decision making. Information systems may be viewed as
a substantial extension of the concepts of managerial accounting, operations research, and
organizational theories related to management and decision making. Information systems call for
analysis of a business, management views and policies, organizational cultures and management
styles. An open information system offers the ability of continuous change, adjustment and correction
in line with changes in the environment in which it works. An understanding of the effective and
responsible use and management of information systems and technologies is important for managers,
business professionals and other knowledge workers in today's internetworked enterprises.

Information systems play a vital role in e-business and e-commerce operations, enterprise
collaboration and the strategic success of a business. An information system, like any other system,
receives inputs of data and instructions, processes the data according to these instructions, and
produces outputs. This information-processing model can be used to depict an information system.
The major purpose of an information system is to convert data into valuable information; information
is data with meaning. In a business context, for an organization like IBM, an information system is a
subsystem of the business system of the organization. Each business system has goals such as
increasing profits, expanding market share and providing service to customers. Any organization
deals with three main levels: the operational level, the tactical level and the strategic level.
Illustratively, the operational information systems of an organization provide information on the day-
to-day activities of the business, such as processing a sales order, checking credit or ordering new
stock. These activities are decided and judged by junior managers, and are done almost instantly.
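
The input-process-output model of an information system can be sketched as a minimal example: raw transaction data in, summarized information out. The product names and figures below are invented for illustration.

```python
# Raw data: individual sales transactions (hypothetical values).
sales_transactions = [
    {"product": "A", "qty": 3, "price": 10.0},
    {"product": "B", "qty": 1, "price": 25.0},
    {"product": "A", "qty": 2, "price": 10.0},
]

def process(transactions):
    """Convert data into information: total revenue per product."""
    revenue = {}
    for t in transactions:
        revenue[t["product"]] = revenue.get(t["product"], 0.0) + t["qty"] * t["price"]
    return revenue

print(process(sales_transactions))  # {'A': 50.0, 'B': 25.0}
```

The output is information in the sense used above: data with meaning, on which a manager can act.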

Information systems that provide information that lets management allocate resources effectively to
achieve business objectives are known as tactical systems; this may include the promotion of a
particular product. Tactical information is used by middle-level managers. Finally, information
systems that support the strategic plans of the business are known as strategic planning systems.
Strategic decisions are made by senior managers. These decisions need time and care, particularly if
they require major investment, like setting up a new plant. Furthermore, information provides
managers with the feedback they need about a system and its operations, which they can use for
decision making. Using this information, a manager can reallocate resources, redesign jobs or
reorganize procedures to successfully accomplish the objectives set for business growth.

Conceptually, information systems are classified as:

1. Operations Support Systems

The role of a business firm's OSS is to efficiently process business transactions, control industrial
processes, support enterprise communications and collaboration, and update corporate databases.
Transaction processing systems record and process data that result from business transactions.
Transactions can be processed in two ways:
 Batch processing, where transaction data are accumulated over a period of time and processed
periodically.
 Real-time processing, where data are processed immediately after a transaction occurs.
Process control systems monitor and control physical processes. Enterprise collaboration systems
enhance team and workgroup communications and productivity.
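
The contrast between batch and real-time processing can be sketched as follows. An in-memory list stands in for the batch transaction file and an account balance for the corporate database; all names and figures are invented.

```python
balance = 0
batch_queue = []

def record_batch(amount):
    """Batch processing: accumulate the transaction for a periodic run."""
    batch_queue.append(amount)

def run_batch():
    """Periodic run: apply everything accumulated since the last run."""
    global balance
    while batch_queue:
        balance += batch_queue.pop(0)

def record_real_time(amount):
    """Real-time processing: apply the transaction immediately."""
    global balance
    balance += amount

record_batch(100)
record_batch(-30)      # balance is still 0: nothing processed yet
run_batch()            # balance becomes 70
record_real_time(50)   # applied at once: balance becomes 120
```

The trade-off shown is the usual one: batch processing defers work and keeps the database stale between runs, while real-time processing keeps it current at the cost of processing each transaction as it arrives.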

2. Management Support Systems


When information systems focus on providing information and support for effective decision making
by managers, they are called management support systems. There are three major types of information
systems that support a variety of decision-making responsibilities:
A. Management information systems
B. Decision support systems
C. Executive information systems

Management information systems provide information in the form of reports and displays to managers
and many business professionals. Decision support systems give direct computer support to managers
during the decision-making process. Executive information systems provide critical information from
a wide variety of internal and external sources in easy-to-use displays to executives and managers.
However, several other categories of information systems can support either operations or
management applications; for example, expert systems can provide expert advice for operational
chores like equipment diagnostics, or for managerial decisions. Knowledge management systems
support the creation, organization and dissemination of business knowledge to employees and
managers throughout a company. Finally, strategic information systems apply information technology
to products, services or business processes to help a firm gain a strategic advantage over its competitors.

In literal terms, implementation is doing what you have planned to do; thus implementation is a key
responsibility of a manager. Implementation can be viewed as a process that carries out the plans for
changes in business/IT strategies and applications. The figure below illustrates the business/IT
planning process of a large-scale organization, here IBM, which focuses on discovering innovative
approaches to satisfying a company's customer value and business value goals. This planning process
leads to the development of strategies and business models for new e-business and e-commerce
platforms, processes, products and services; a company can then develop IT strategies and an IT
architecture that support building and implementing the newly planned business applications. Both
the CEO and the Chief Information Officer (CIO) of a company must efficiently manage the
development of complementary business and IT strategies to meet its customer value and business
value vision. This co-adaptation process is necessary because information technologies, though fast
changing, are a vital component in many strategic business initiatives. With the introduction of
information technology in business, organizations like IBM have undergone major changes by
implementing new e-business strategies and applications, as shown in the figure below, which
illustrates the levels and scope of business changes that applications of information technology
introduce into an organization.

[Figure: levels of change (improve efficiency → model best practices → process reengineering →
redefine core businesses / new business initiatives) plotted against the scope of change (single
function → core processes → extended supply chain).]

For instance, IBM exhaustively uses and implements in its day-to-day operations applications
like online transaction processing that bring efficiency to single-function or core business processes.
However, implementing e-business applications such as enterprise resource management or customer
relationship management (CRM) requires a reengineering of core business processes, internally and
with supply chain partners, thus forcing a company to model and implement the business practices of
leading firms in its industry. Of course, any major new business initiative can enable a company to
redefine its core lines of business and precipitate dramatic changes within the entire inter-enterprise
value chain of the business. Implementing new business/IT strategies requires managing the effects of
major changes in key organizational dimensions such as business processes, organizational structures,
managerial roles, employee work assignments, and stakeholder relationships that arise from the
deployment of new business information systems (Chou). The induction of EDI and e-commerce as
part of an organization's infrastructure, while providing many benefits, can also result in resistance to
the change brought about by new ways of working. IBM is a real-world example that demonstrates
the challenges of implementing major business/IT strategies and applications, and the change
management challenges that confront management.


IBM embraces customer relationship management (CRM) as a key e-business application. It is
designed to implement a business strategy of using IT to support a total customer care focus for all
areas of the company. Business challenges also include aggregating business functions and
information to drive greater efficiency and responsiveness; automating processes for managing data to
improve quality and efficiency and reduce costs; utilizing actionable information to enable better
business decision making; adhering to regulatory requirements; improving data storage and
distribution processes to increase efficiency and reduce overall costs; and, lastly, enabling brand new
business functions and processes through better access to data and diverse applications. IBM's high-
level industry expertise and global investment in diverse application platforms and application skills
provide a strong foundation for leadership in application design, development, implementation, and
management.

Even more important is end user involvement in organizational changes and in the development of
new information systems. Organizations have a variety of strategies to help manage business change,
so planning for change is carried out well in advance of introduction of EDI/EC so that the result is a
win-win situation across the organization. Direct end user participation in business planning and
application development projects before a new system is implemented is especially important in
reducing the potential for end user resistance. Such involvement helps ensure that the system design
meets the end user needs. The following section illustrates some of the key dimensions of
organizational change management, and the level of difficulty and business impact involved. Note
the people, process, and technology factors involved in the implementation of e-business
strategies and applications, and other changes caused by introducing new information technologies.
Thus people are a major focus of organizational change management. This includes activities such as
developing innovative ways to measure, motivate and reward performance, as well as designing programs
to recruit and train employees in the core competencies required in a changing workplace. Change
management also involves analyzing and defining all changes facing the organization, and developing
programs to reduce the risks and costs and to maximize the benefits of the change. For example,
implementing a new e-business process like customer relationship management, might involve
developing a change action plan, assigning selected managers as change sponsor, developing
employee change teams and encouraging open communications and feedbacks about organizational
changes. Some key tactics change experts recommend include: involve as many people as possible in
e-business planning and application development; make constant change an expected part of the
culture; tell everyone as much as possible about everything as often as possible, preferably in person;
make liberal use of financial incentives and recognition; and lastly, work within the company culture
and not around it. The e-business vision created in the strategy planning phase should be communicated in
a compelling change story to the people in the organization. Evaluating the readiness for the e-business
changes within an organization, developing change strategies, choosing and training change leaders
and champions based on that assessment could be the next steps in managing organizational changes.

[Figure: the information systems framework — business challenges are addressed through management, organization, and technology decisions that shape the information system and deliver business solutions.]

An enterprise resource planning (ERP) system provides a holistic view of the enterprise and is devised to draw
benefits from IT. It works around the core activities of the organization, and facilitates seamless flow
of information across departmental barriers. ERP systems optimally plan and manage all the resources
of the organization, and hence cover the techniques and concepts employed for the integrated
management of businesses as a whole, from the viewpoint of the effective usage of management
resources to improve the efficiency of an enterprise. Direct benefits of ERP include:
 improved efficiency,
 information integration for better-decision making, and
 faster response time to customer queries.

However, the indirect advantages of ERP include better corporate image, improved customer
goodwill, and customer satisfaction. Thus, ERP’s best hope for demonstrating value is as a sort of
battering ram for improving business performance.

Strategic Management

Strategic management is the set of decisions and actions used to formulate and implement strategies
that will provide a competitively superior fit between the organization and its environment so as to
achieve organizational goals. Managers ask questions such as “What changes and trends are occurring
in the competitive environment? Who are our customers? What products or services should we offer?
How can we offer those products and services most efficiently?” Answers to these questions help
managers make choices about how to position their organization in the environment with respect to
rival companies. Superior organizational performance is not a matter of luck. It is determined by the
choices managers make. Top executives use strategic management to define an overall direction for
the organization, which is the firm’s grand strategy. Grand strategy is the general plan of major action
by which a firm intends to achieve its long term goals. Within the overall grand strategy of an
organization executives define an explicit strategy, which is the plan of action that describes resource
allocation and activities for dealing with the environment and attaining the organization’s goals. The
essence of strategy is choosing to perform different activities or to execute activities differently than
competitors do. Strategy necessarily changes over time to fit environmental conditions, but to remain
competitive, companies develop strategies that focus on core competencies, develop synergy, and
create value for customers (Sethi 2009).

a) Core competence: a business activity that an organization does particularly well in
comparison to competitors.
b) Synergy: The condition that exists when the organization’s parts interact to produce a joint
effect that is greater than the sum of the parts acting alone.
c) Value creation: exploiting core competencies and attaining synergy help companies create
value for their customers. Value can be defined as the combination of benefits received and
cost paid by the customer. A product that is low in cost but does not provide benefits is not a
good value.

The final aspect of strategic management involves the stages of formulation and implementation.
Strategy formulation includes the planning and decision making that lead to the establishment of the
firm’s goals and development of a specific strategic plan. It may include assessing the external
environment and internal problems and integrating the results into goals and strategy. This is in contrast to
strategy implementation, which is the use of managerial and organizational tools to direct resources
and information toward accomplishing strategic result. Strategy implementation is the administration
and execution of the strategic plan. Managers may use persuasion, new equipment, changes in
organization structure, or reward system to ensure that employees and resources are used to make
formulated strategy a reality.

Long range strategic planning


Like other business activity, planning also has a process and methodology. In the very beginning of
the planning process it is necessary to decide the purpose of the organization for which it works.
Many organizations call it mission. The mission or aim of the organization is a broad statement of the
organization’s existence, which sets the direction of the organization and decides the scope and the
boundaries of the business. The task after deciding the mission or the aim is to set the goal(s) for the
organization. The goal is more specific and has a time scale of three to five years. It is described in
quantitative terms in the form of a ratio or a norm for a level of certain business aspects, such as being the
largest market-share leader in the industry or dominant in a certain product, quality, reach and distribution. The
goals become a reference for the top management in planning the business activities. After determining
the mission and the goals, the next task is to set various objectives for the organization. The objectives
are described in terms of business results to be achieved in a short duration of a year or two. They are
measurable and can be monitored with the help of the business tools and technologies. Objectives
may be the profitability, the sales, the quality standards, the capacity utilization, etc. When achieved,
the objectives will contribute to the accomplishment of the goals and, subsequently, the
mission. The next step in the planning process is to set targets for more detailed working and reference.
The objective of the business is to be translated in terms of functional and operational units for easy
communication and decision making. The targets may be monthly for the sales, production, inventory,
and so on. The targets will be the direct descendants of the objective(s). The success in achieving the
goals and objectives is directly dependent on the management’s business strategy.
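The mission–goal–objective–target hierarchy described above can be sketched as a simple data model. This is a hypothetical illustration — the class names, fields, and sample figures are assumptions for the sketch, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Target:
    name: str      # operational-level figure, e.g. monthly sales
    value: float
    unit: str

@dataclass
class Objective:
    description: str                 # measurable result, one- to two-year horizon
    targets: list = field(default_factory=list)

@dataclass
class Goal:
    description: str                 # quantitative direction, three- to five-year horizon
    objectives: list = field(default_factory=list)

@dataclass
class Plan:
    mission: str                     # broad statement of the organization's purpose
    goals: list = field(default_factory=list)

# Build one branch of the hierarchy: mission -> goal -> objective -> target
plan = Plan(
    mission="Be the leading regional distributor of quality products",
    goals=[Goal("Largest market share in the industry within five years",
                [Objective("Grow annual sales by 10%",
                           [Target("Monthly sales", 500_000.0, "units")])])],
)
print(plan.goals[0].objectives[0].targets[0].name)  # Monthly sales
```

The nesting makes the descent explicit: each target is a direct descendant of an objective, each objective contributes to a goal, and each goal serves the mission.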

The development of the strategy also considers the environmental factors such as the technology, the
markets, the lifestyle, the work culture, the attitudes, the policies of the government and so on. A
strategy helps to meet the external forces affecting the business development effectively and further
ensures that the goals and objectives are achieved. The development of the strategy considers the
strength of the organization in deploying the resources and at the same time it compensates for the
weaknesses. The strategy formulation therefore is an unstructured exercise of a complex nature
riddled with uncertainties. It sets the guidelines for the use of resources, in kind and manner, during
the planning period. Myburgh has defined strategic information management that “focuses on
corporate strategy and direction. It emphasizes the quality of decision making and information use
needed to improve overall business performance.”

[Figure: information management framework — knowledge management and information governance direct the cycle of information acquisition, processing, distribution, and use, supported by human resources and the IT infrastructure.]

Information management is a set of activities that travels along the logical succession of
interdependent stages of organization development. Information management strategies involve
harnessing information resources and information capabilities, to enable the organization to learn and
adapt to its changing environment. In other words, information management centers on effectively
managing and controlling the use of information with respect to coordination and control, strategic
decision making and tactical problem solving. Information system strategy is a classic model of
representing decision making processes in information systems. Information systems strategy is the
plan and steps of execution taken by the organization in providing information systems and services.
Improving information management is a key focus for many firms and organizations. This is driven
by an array of factors, including the need to improve the efficiency of business processes,
the desire to deliver new services, and the demands of compliance regulations. In most cases,
information management involves deploying new technology solutions, like portal applications,
content or document management systems or data warehousing.

Portals and content management services offered by IBM:


Portals: IBM effectively brings together and combines a broad range of Web technologies and
practices like content management, workflow, personalization, SOA integration, application
integration, Web 2.0, and portal design and development (IBM).

Content Management: Creation, processing, management, and delivery of content and information is
supported by design, consulting, and solution implementation services. IBM also provides the full
project life cycle services required to implement and recommend change across a diverse and broad
range of key content management
technologies, such as e-discovery and search, workflow, document imaging, record management,
document management, electronic forms, digital asset management, report and output management
(IBM).

With these specialized software solutions, IBM delivers an integrated information management
environment for deployment of applications. Information management strategy is the collection and
management of valuable data and information extracted from one or more resources and the distribution
of that information to a potential audience. Information management increases the efficiency of all the business
functions like marketing, finance, administration, production, personnel, purchase and inventory.
Knowledge base is created for people in organization. Forecasting and long term perspective planning
is effectively executed. Information management also impacts the enterprise in the following ways:

 exceptional situations are brought to notice well in time;
 information is kept about achievements and shortfalls in the implementation of the set goals;
 probable trends in various aspects of the business are traced;
 the business is understood with clarity by defining data entities and their attributes;
 decision-making ability is improved considerably;
 business operations are systemised;
 an information-based work culture is created in the organization; and
 a data dictionary is made and used, providing a common understanding of terms and terminologies
in the organization.

Only those companies that create new knowledge and disseminate it widely throughout the
organization and quickly embody it in the new technologies and products will survive in today’s
competitive world. Additionally, knowledge management strategies are developed to effectively
implement a range of policies and practices that the organization uses to create, develop, identify,
represent, distribute, and enable adoption of experiences and valuable insights. Such experiences and
insights comprise knowledge that is either embedded in organizational practices or processes, or is
embodied in individuals. Knowledge management strategies are derived from information
management as a discipline.

The value of information-as-knowledge and knowledge management essentially lies in the conversion
of tacit information resources to manageable information products, and the resulting expansion of the
organization’s information resource base (Schlögl 2005). Furthermore, KM strategies are aimed at
facilitating individual as well as organizational learning, and focus on efficiency gains of the
organization. Information ecology and organizational culture are most important with respect to
knowledge management. Strategies for knowledge demand that successful knowledge management is
achieved as an outcome of willingness among organizational members and staff to share their insights
and expertise, in enhancing the organizational activities thereby increasing the chances of
achievement of desired goals and targets. Knowledge management has thus become one of the major
strategic uses of information technology. Another factor on which information management,
knowledge management and information system strategies depend is information acquisition.

This key factor is essential in identifying market trends, environmental risks, opportunities, customer
preferences, internal process inefficiencies, demand patterns and an array of other information
resources that are leveraged to create challenging outcomes and competitive advantages. Enterprise
content management (ECM) comprises the strategies, tools and methods used, in the context of knowledge
management, to capture, store, manage, preserve, distribute and deliver documents and content
related to organizational processes.
Information governance sets out and encompasses the internal guidelines and policies for effectively
handling enormous information resources, namely information acquisition, storage, processing,
security, distribution, maintenance, and disposal. The value of information governance relies on
the development of common organization-wide policies and standards for obtaining information
resources based on the organization’s information requirements.

The overall company strategy considers a very long term business perspective, deals with the overall
strength of the entire organization, and evolves those policies of the business which will dominate the
course of the business’s movement. It is the most productive strategy if chosen correctly, and fatal if
chosen wrongly. These strategies are broad-based having a far reaching effect on the different facets
of the business and forming the basis, generating strategies in the other potential areas of business.

Information System Design and Administration


The design of an information system is based on various factors. Cost is a major consideration, but
there certainly are others to be taken into account, such as the number of users; the modularity of the
system, or the ease with which new components can be integrated into the system, and the ease with
which outdated or failed components can be replaced; the amount of information to be processed; the
type of information to be processed; the computing power required to meet the varied needs of the
organization; the anticipated functional life of the system and/or components; the ease of use for the
people who will be using the system; and the requirements and compatibility of the applications that
are to be run on the system.

There are different ways to construct an information system, based upon organizational requirements,
both in the function aspect and the financial sense. Of course, the company needs to take into
consideration that hardware that is purchased and assembled into a network will become outdated
rather quickly. It is almost axiomatic that the technologies used in information systems steadily
increase in power and versatility on a rapid time scale. Perhaps the trickiest part of designing an
information system from a hardware standpoint is straddling the fine line between too much and not
enough, while keeping an eye on the requirements that the future may impose.

Applying foresight when designing a system can bring substantial rewards in the future, when system
components are easy to repair, replace, remove, or update without having to bring the whole
information system to its knees. When an information system is rendered inaccessible or inoperative,
the system is considered to be "down."

A primary function of maintaining an information system is to minimize downtime or, hopefully,


to eradicate downtime altogether. The costs created by a department, facility, organization, or
workforce being idled by an inoperative system can become staggering in a short amount of time. The
inconvenience to customers can cost the firm even more if sales are lost as a result, in addition to any
added costs the customers might incur.

Another vital consideration regarding the design and creation of an information system is to determine
which users have access to which information. The system should be configured to grant access to the
different partitions of data and information by granting user-level permissions for access. A common
method of administering system access rights is to create unique profiles for each user, with the
appropriate user-level permissions that provide proper clearances.

Individual passwords can be used to delineate each user and their level of access rights, as well as
identify the tasks performed by each user. Data regarding the performance of any user unit, whether
individual, departmental, or organizational can also be collected, measured, and assessed through the
user identification process.
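As a minimal sketch, profile-based access rights like those described can be modelled as a mapping from user profiles to permission sets. The profile and permission names below are purely illustrative assumptions:

```python
# Each profile carries the set of user-level permissions it grants.
PROFILES = {
    "accounting_clerk": {"ledger:read", "ledger:write"},
    "sales_rep":        {"orders:read", "orders:write", "ledger:read"},
    "auditor":          {"ledger:read", "orders:read"},
}

def has_access(profile: str, permission: str) -> bool:
    """Check whether a user profile grants the requested permission.
    Unknown profiles receive no clearances at all."""
    return permission in PROFILES.get(profile, set())

# An auditor may read the ledger but not modify it.
print(has_access("auditor", "ledger:read"))   # True
print(has_access("auditor", "ledger:write"))  # False
```

Because every request passes through one check, the same mechanism can also log which user performed which task, supporting the performance measurement described above.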

The OSI seven-layer model attempts to provide a way of partitioning any computer network into
independent modules from the lowest (physical/hardware) layer to the highest (application/program)
layer. Many different specifications can exist at each of these layers.
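For reference, the seven OSI layers, from lowest to highest, can be enumerated in a few lines:

```python
# The seven OSI layers, lowest (1, physical/hardware) to highest (7, application).
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

def layer_of(number: int) -> str:
    """Return the OSI layer name for a layer number (1-7)."""
    return OSI_LAYERS[number]

print(layer_of(1))  # Physical
print(layer_of(7))  # Application
```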

A crucial aspect of administering information systems is maintaining communication between the IS


staff, who have a technical perspective on situations, and the system users, who usually communicate
their concerns or needs in more prosaic terminology. Getting the two sides to negotiate the language
barriers can be difficult, but the burden of translation should fall upon the IS staff. A little patience
and understanding can go a long way toward avoiding frustration on the part of both parties.

There is more to maintaining an information system than applying technical knowledge to hardware
or software. IS professionals have to bridge the gap between technical issues and practicality for the
users. The information system should also have a centralized body that functions to provide
information, assistance, and services to the users of the system. These services will typically include
telephone and electronic mail "help desk" type services for users, as well as direct contact between the
users and IS personnel.

Information System Functions


Information systems perform the following functions:

1. Document and Record Management


Document and record management may well be the most crucial aspect of any information system.
Some examples of types of information maintained in these systems would be accounting, financial,
manufacturing, marketing, and human resources. An information system can serve as a library. When
properly collected, organized, and indexed in accordance with the requirements of the organization,
its stored data becomes accessible to those who need the information.

The location and retrieval of archived information can be a direct and logical process, if careful
planning is employed during the design of the system. Creating an outline of how the information
should be organized and indexed can be a very valuable tool during the design phase of a system. A
critical feature of any information system should be the ability to not only access and retrieve data,
but also to keep the archived information as current as possible.
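A minimal sketch of such an indexed archive follows: documents are keyword-indexed as they are stored, and re-adding a document replaces the old version and its index entries, keeping the archive current. The class, identifiers, and sample documents are illustrative assumptions:

```python
from collections import defaultdict

class DocumentStore:
    """Minimal sketch of an indexed document archive (illustrative only)."""
    def __init__(self):
        self.docs = {}                 # doc_id -> document text
        self.index = defaultdict(set)  # keyword -> ids of documents containing it

    def add(self, doc_id: str, text: str) -> None:
        # Re-adding an existing id removes the old version's index entries
        # first, so retrieval always reflects the current document.
        if doc_id in self.docs:
            for word in self.docs[doc_id].lower().split():
                self.index[word].discard(doc_id)
        self.docs[doc_id] = text
        for word in text.lower().split():
            self.index[word].add(doc_id)

    def search(self, keyword: str) -> set:
        """Locate all archived documents containing the keyword."""
        return self.index.get(keyword.lower(), set())

store = DocumentStore()
store.add("INV-001", "Invoice for marketing services")
store.add("HR-042", "Human resources onboarding checklist")
print(store.search("marketing"))  # {'INV-001'}
```

Planning the indexing scheme up front, as the text advises, corresponds here to deciding what counts as a keyword before any documents are loaded.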

2. Collaborative Tools
Collaborative tools can consist of software or hardware, and serve as a base for the sharing of data and
information, both internally and externally. These tools allow the exchange of information between
users, as well as the sharing of resources. As previously mentioned, real-time communication is also a
possible function that can be enabled through the use of collaborative tools.

3. Data Mining
Data mining, or the process of analyzing empirical data, allows for the extrapolation of information.
The extrapolated results are then used in forecasting and defining trends.
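As a concrete example, fitting a least-squares trend line to historical figures and extrapolating it forward is one of the simplest forms of this kind of forecasting (the sales data are invented for illustration):

```python
def linear_forecast(values: list, periods_ahead: int) -> float:
    """Fit a least-squares trend line to historical values and
    extrapolate it `periods_ahead` periods beyond the last observation."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

monthly_sales = [100.0, 110.0, 120.0, 130.0]  # perfectly linear history
print(linear_forecast(monthly_sales, 1))       # 140.0
```

Real data mining uses far richer models, but the principle is the same: empirical data in, a defined trend and forecast out.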

4. Query Tools
Query tools allow the users to find the information needed to perform any specific function. The
inability to easily create and execute functional queries is a common weak link in many information
systems. A significant cause of that inability, as noted earlier, can be the communication difficulties
between a management information systems department and the system users.

Another critical issue toward ensuring successful navigation of the varied information levels and
partitions is the compatibility factor between knowledge bases. For maximum effectiveness, the
system administrator should ascertain that the varied collection, retrieval, and analysis levels of the
system either operate on a common platform, or can export the data to a common platform. Although
much the same as query tools in principle, intelligent agents allow the customization of the
information flow through sorting and filtering to suit the individual needs of the users. The primary
difference between query tools and intelligent agents is that query tools allow the sorting and filtering
processes to be employed to the specifications of management and the system administrators, and
intelligent agents allow the information flow to be defined in accord with the needs of the user.
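The distinction can be made concrete in a few lines: a query tool applies criteria specified centrally by management or administrators, while an intelligent agent filters the same information flow with a rule supplied by the individual user. The data set and field names are invented for illustration:

```python
records = [
    {"region": "East", "product": "A", "sales": 1200},
    {"region": "West", "product": "B", "sales": 800},
    {"region": "East", "product": "B", "sales": 950},
]

def query(data, **criteria):
    """Query tool: return records matching centrally specified
    field=value criteria."""
    return [r for r in data if all(r.get(k) == v for k, v in criteria.items())]

def agent(data, user_rule):
    """Intelligent agent: return records matching a predicate the
    individual user defines for their own needs."""
    return [r for r in data if user_rule(r)]

print(query(records, region="East"))               # the two East records
print(agent(records, lambda r: r["sales"] > 900))  # records with sales above 900
```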

Key Points
Managers should keep in mind the following advice in order to get the most out of an information
system:

 Use the available hardware and software technologies to support the business. If the
information system does not support quality and productivity, then it is misused.
 Use the available technologies to create and facilitate the flow of communication within your
organization and, if feasible, outside of it as well. Collaboration and flexibility are the key
advantages offered for all involved parties. Make the most of those advantages.
 Determine if any strategic advantages are to be gained by use of your information system,
such as in the areas of order placement, shipment tracking, order fulfillment, market
forecasting, just-in-time supply, or regular inventory. If you can gain any sort of advantage by
virtue of the use of your information system, use it.
 Use the quantification opportunities presented by your information system to measure,
analyze, and benchmark the performances of an individual, department, division, plant, or
entire organization.

An information system is more than hardware or software. The most integral and important
components of the system are the people who design it, maintain it, and use it. While the overall
system must meet various needs in terms of power and performance, it must also be usable for the
organization's personnel. If the operation of day-to-day tasks is too daunting for the workforce, then
even the most humble of aspirations for the system will go unrealized.

A company will likely have a staff entrusted with the overall operation and maintenance of the system
and that staff will be able to make the system perform in the manner expected of it. Pairing the
information systems department with a training department can create a synergistic solution to the
quandary of how to get non-technical staff to perform technical tasks. Often, the individuals
staffing an information systems department will be as technical in their orientation as the operative
staff is non-technical in theirs. This creates a language barrier between the two factions, but the
communication level between them may be the most important exchange of information within the
organization. Nomenclature out of context becomes little more than insular buzzwords.

If a company does not have a formal training department, the presence of staff members with a natural
inclination to demonstrate and teach could mitigate a potentially disastrous situation. Management
should find those employees who are most likely to adapt to the system and its operation. They should
be taught how the system works and what it is supposed to do. Then they can share their knowledge
with their fellow workers. There may not be a better way to bridge the natural chasm between the IS
department and non-technical personnel. When the process of communicating information flows
smoothly and can be used for enhancing and refining business operations, the organization and its
customers will all profit.

Information Management Challenges


Organisations are confronted with many information management problems and issues. In many
ways, the growth of electronic information (rather than paper) has only worsened these issues over the
last decade or two.

Common information management problems include:

1) Large number of disparate information management systems.


2) Little integration or coordination between information systems.
3) Range of legacy systems requiring upgrading or replacement.
4) Direct competition between information management systems.
5) No clear strategic direction for the overall technology environment.
6) Limited and patchy adoption of existing information systems by staff.
7) Poor quality of information, including lack of consistency, duplication, and out-of-date
information.
8) Little recognition and support of information management by senior management.
9) Limited resources for deploying, managing or improving information systems.
10) Lack of enterprise-wide definitions for information types and values (no corporate-wide
taxonomy).
11) Large number of diverse business needs and issues to be addressed.
12) Lack of clarity around broader organisational strategies and directions.
13) Difficulties in changing working practices and processes of staff.
14) Internal politics impacting on the ability to coordinate activities enterprise-wide.

10 Principles of Effective Information Management


There are ten key principles to ensure that information management activities are effective and
successful:

1) recognise (and manage) complexity


2) focus on adoption
3) deliver tangible & visible benefits
4) prioritise according to business needs
5) take a journey of a thousand steps
6) provide strong leadership
7) mitigate risks
8) communicate extensively
9) aim to deliver a seamless user experience
10) choose the first project very carefully

Principle 1: recognize (and manage) complexity

Organisations are very complex environments in which to deliver concrete solutions. As outlined
above, there are many challenges that need to be overcome when planning and implementing
information management projects.

When confronted with this complexity, project teams often fall back upon approaches such as:

 Focusing on deploying just one technology in isolation.


 Purchasing a very large suite of applications from a single vendor, in the hope that this can be
used to solve all information management problems at once.
 Rolling out rigid, standardised solutions across a whole organisation, even though individual
business areas may have different needs.
 Forcing the use of a single technology system in all cases, regardless of whether it is an
appropriate solution.
 Purchasing a product ‘for life’, even though business requirements will change over time.
 Fully centralising information management activities, to ensure that every activity is tightly
controlled.

All of these approaches will fail, as they are attempting to convert a complex set of needs and
problems into simple (even simplistic) solutions. The hope is that the complexity can be limited or
avoided when planning and deploying solutions.

In practice, however, there is no way of avoiding the inherent complexities within organisations. New
approaches to information management must therefore be found that recognise (and manage) this
complexity.

Organisations must stop looking for simple approaches, and must stop believing vendors when they
offer ‘silver bullet’ technology solutions.

Instead, successful information management is underpinned by strong leadership that defines a clear
direction (principle 6). Many small activities should then be planned to address in parallel the many
needs and issues (principle 5).

Risks must then be identified and mitigated throughout the project (principle 7), to ensure that
organisational complexities do not prevent the delivery of effective solutions.

Principle 2: focus on adoption

Information management systems are only successful if they are actually used by staff, and it is not
sufficient to simply focus on installing the software centrally.

In practice, most information management systems need the active participation of staff throughout
the organisation.

For example:

 Staff must save all key files into the document/records management system.
 Decentralised authors must use the content management system to regularly update the
intranet.
 Lecturers must use the learning content management system to deliver e-learning packages to
their students.
 Front-line staff must capture call details in the customer relationship management system.

In all these cases, the challenge is to gain sufficient adoption to ensure that required information is
captured in the system. Without a critical mass of usage, corporate repositories will not contain
enough information to be useful.

This presents a considerable change management challenge for information management projects. In
practice, it means that projects must be carefully designed from the outset to ensure that sufficient
adoption is gained.

This may include:

 Identifying the ‘what’s in it for me’ factors for end users of the system.
 Communicating clearly to all staff the purpose and benefits of the project.
 Carefully targeting initial projects to build momentum for the project (see principle 10).
 Conducting extensive change management and cultural change activities throughout the
project.
 Ensuring that the systems that are deployed are useful and usable for staff.
These are just a few of the possible approaches, and they demonstrate the wide implications of
needing to gain adoption by staff.

It is not enough to deliver ‘behind the scenes’ fixes.

Principle 3: deliver tangible & visible benefits

It is not enough to simply improve the management of information ‘behind the scenes’. While this
will deliver real benefits, it will not drive the required cultural changes, or assist with gaining
adoption by staff (principle 2).


In many cases, information management projects initially focus on improving the productivity of
publishers or information managers.

While these are valuable projects, they are invisible to the rest of the organisation. When challenged,
it can be hard to demonstrate the return on investment of these projects, and they do little to assist
project teams to gain further funding.

Instead, information management projects must always be designed so that they deliver tangible and
visible benefits.

Delivering tangible benefits involves identifying concrete business needs that must be met (principle
4). This allows meaningful measurement of the impact of the projects on the operation of the
organisation.

The projects should also target issues or needs that are very visible within the organisation. When
solutions are delivered, the improvement should be obvious, and widely promoted throughout the
organisation.

For example, improving the information available to call centre staff can have a very visible and
tangible impact on customer service.

In contrast, creating a standard taxonomy for classifying information across systems is hard to
quantify and rarely visible to general staff.

This is not to say that ‘behind the scenes’ improvements are not required, but rather that they should
always be partnered with changes that deliver more visible benefits.

This also has a major impact on the choice of the initial activities conducted (principle 10).

Tackle the most urgent business needs first.

Principle 4: prioritise according to business needs

It can be difficult to know where to start when planning information management projects.

While some organisations attempt to prioritise projects according to the ‘simplicity’ of the technology
to be deployed, this is not a meaningful approach. In particular, this often doesn’t deliver short-term
benefits that are tangible and visible (principle 3).

Instead of this technology-driven approach, the planning process should be turned around entirely, to
drive projects based on their ability to address business needs.

In this way, information management projects are targeted at the most urgent business needs or issues.
These in turn are derived from the overall business strategy and direction for the organisation as a
whole.

For example, the rate of errors in home loan applications might be identified as a strategic issue for
the organisation. A new system might therefore be put in place (along with other activities) to better
manage the information that supports the processing of these applications.


Alternatively, a new call centre might be in the process of being planned. Information management
activities can be put in place to support the establishment of the new call centre, and the training of
new staff.

Avoid ‘silver bullet’ solutions that promise to fix everything.

Principle 5: take a journey of a thousand steps

There is no single application or project that will address and resolve all the information management
problems of an organisation.

Where organisations look for such solutions, large and costly strategic plans are developed. Assuming
the results of this strategic planning are actually delivered (which they often aren’t), they usually
describe a long-term vision but give few clear directions for immediate actions.

In practice, anyone looking to design the complete information management solution will be trapped
by ‘analysis paralysis’: the inability to escape the planning process.

Organisations are simply too complex to consider all the factors when developing strategies or
planning activities.

The answer is to let go of the desire for a perfectly planned approach. Instead, project teams should
take a ‘journey of a thousand steps’.

This approach recognises that there are hundreds (or thousands) of often small changes that are
needed to improve the information management practices across an organisation. These changes will
often be implemented in parallel.

While some of these changes are organisation-wide, most are actually implemented at business unit
(or even team) level. When added up over time, these numerous small changes have a major impact
on the organisation.

This is a very different approach to that typically taken in organisations, and it replaces a single large
(centralised) project with many individual initiatives conducted by multiple teams.

While this can be challenging to coordinate and manage, this ‘thousand steps’ approach recognises the
inherent complexity of organisations (principle 1) and is a very effective way of mitigating risks
(principle 7). It also ensures that ‘quick wins’ can be delivered early on (principle 3), and allows
solutions to be targeted to individual business needs (principle 4).

Successful projects require strong leadership.

Principle 6: provide strong leadership

Successful information management is about organisational and cultural change, and this can only be
achieved through strong leadership.

The starting point is to create a clear vision of the desired outcomes of the information management
strategy. This will describe how the organisation will operate, more than just describing how the
information systems themselves will work.

Effort must then be put into generating a sufficient sense of urgency to drive the deployment and
adoption of new systems and processes.


Stakeholders must also be engaged and involved in the project, to ensure that there is support at all
levels in the organisation.

This focus on leadership then underpins a range of communications activities (principle 8) that ensure
that the organisation has a clear understanding of the projects and the benefits they will deliver.

When projects are solely driven by the acquisition and deployment of new technology solutions, this
leadership is often lacking. Without the engagement and support of key stakeholders outside the IT
area, these projects often have little impact.

Apply good risk management to ensure success.

Principle 7: mitigate risks

Due to the inherent complexity of the environment within organisations (principle 1), there are many
risks in implementing information management solutions. These risks include:

 selecting an inappropriate technology solution
 time and budget overruns
 changing business requirements
 technical issues, particularly relating to integrating systems
 failure to gain adoption by staff
At the outset of planning an information management strategy, the risks should be clearly identified.
An approach must then be identified for each risk, either avoiding or mitigating the risk.

Risk management approaches should then be used to plan all aspects of the project, including the
activities conducted and the budget spent.

For example, a simple but effective way of mitigating risks is to spend less money. This might involve
conducting pilot projects to identify issues and potential solutions, rather than starting with
enterprise-wide deployments.
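The risk-identification approach described above can be sketched as a simple risk register. The sketch below is purely illustrative: the risk entries, the 1–5 scoring scale, and the likelihood × impact severity rating are assumptions, not part of any specific methodology.

```python
# A minimal risk register sketch. Each identified risk carries a
# likelihood and impact score (1-5, an assumed scale) and a planned
# response; risks are then reviewed in order of a simple
# likelihood x impact severity rating.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    response: str     # planned approach: avoid or mitigate

    @property
    def severity(self) -> int:
        # Simple rating used to prioritise attention
        return self.likelihood * self.impact

register = [
    Risk("Inappropriate technology selected", 3, 4, "mitigate: run a pilot project first"),
    Risk("Failure to gain adoption by staff", 4, 5, "mitigate: change management plan"),
    Risk("Time and budget overruns", 3, 3, "mitigate: stage funding by milestone"),
]

# Highest-severity risks are addressed first
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{risk.severity:>2}] {risk.description} -> {risk.response}")
```

The point is not the arithmetic but the discipline: every identified risk gets an explicit avoidance or mitigation response before the budget is spent.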

Principle 8: communicate extensively

Extensive communication from the project team (and project sponsors) is critical for a successful
information management initiative.

This communication ensures that staff have a clear understanding of the project, and the benefits it
will deliver. This is a pre-requisite for achieving the required level of adoption.

With many projects happening simultaneously (principle 5), coordination becomes paramount. All
project teams should devote time to work closely with each other, to ensure that activities and
outcomes are aligned.

In a complex environment, it is not possible to enforce a strict command-and-control approach to
management (principle 1).

Instead, a clear end point (‘vision’) must be created for the information management project, and
communicated widely. This allows each project team to align themselves to the eventual goal, and to
make informed decisions about the best approaches.


For all these reasons, the first step in an information management project should be to develop a clear
communications ‘message’. This should then be supported by a communications plan that describes
target audiences, and methods of communication.

Project teams should also consider establishing a ‘project site’ on the intranet at the outset, to provide
a location for planning documents, news releases, and other updates.

Staff do not understand the distinction between systems.

Principle 9: aim to deliver a seamless user experience

Users don’t understand systems. When presented with six different information systems, each
containing one-sixth of what they want, they generally rely on a piece of paper instead (or ask the
person next to them).

Educating staff in the purpose and use of a disparate set of information systems is difficult, and
generally fruitless. The underlying goal should therefore be to deliver a seamless user experience, one
that hides the systems that the information is coming from.

This is not to say that there should be one enterprise-wide system that contains all information.

There will always be a need to have multiple information systems, but the information contained
within them should be presented in a human-friendly way.

In practice, this means:

 Delivering a single intranet (or equivalent) that gives access to all information and tools.
 Ensuring a consistent look-and-feel across all applications, including standard navigation and
page layouts.
 Providing ‘single sign-on’ to all applications.
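The ‘single sign-on’ item above can be illustrated with a toy token scheme: the user authenticates once, and every application verifies the same token rather than presenting its own login screen. The function names and the HMAC construction below are assumptions for illustration only; production systems use established standards such as SAML or OpenID Connect.

```python
# Toy single sign-on sketch: a central authority issues one signed
# token; each application checks the signature instead of asking the
# user to log in again. Illustrative only - not a production design.
import hmac
import hashlib

SECRET = b"shared-secret"  # assumed shared key; real systems manage keys properly

def issue_token(username: str) -> str:
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_token(token: str) -> bool:
    username, _, sig = token.partition(":")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The user authenticates once...
token = issue_token("staff01")
# ...then the intranet, HR system and CMS can all accept the same token.
assert verify_token(token)
assert not verify_token("staff01:forged-signature")
```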

Ultimately, it also means breaking down the distinctions between applications, and delivering tools
and information along task and subject lines.

For example, many organisations store HR procedures on the intranet, but require staff to log in to a
separate ‘HR self-service’ application that provides a completely different menu structure and
appearance.

Improving on this, leave details should be located alongside the leave form itself. In this model, the
HR application becomes a background system, invisible to the user.

Care should also be taken, however, when looking for a silver-bullet solution for providing a seamless
user experience. Despite the promises, portal applications do not automatically deliver this.

Instead, a better approach may be to leverage the inherent benefits of the web platform. As long as the
applications all look the same, the user will be unaware that they are accessing multiple systems and
servers behind the scenes.


Of course, achieving a truly seamless user experience is not a short-term goal. Plan to incrementally
move towards this goal, delivering one improvement at a time.

The first project must build momentum for further work.

Principle 10: choose the first project very carefully

The choice of the first project conducted as part of a broader information management strategy is
critical. This project must be selected carefully, to ensure that it:

 demonstrates the value of the information management strategy
 builds momentum for future activities
 generates interest and enthusiasm from both end-users and stakeholders
 delivers tangible and visible benefits (principle 3)
 addresses an important or urgent business need (principle 4)
 can be clearly communicated to staff and stakeholders (principle 8)
 assists the project team in gaining further resources and support

Actions speak louder than words. The first project is the single best (and perhaps only) opportunity to
set the organisation on the right path towards better information management practices and
technologies.

The first project must therefore be chosen according to its ability to act as a ‘catalyst’ for further
organisational and cultural changes.

In practice, this often involves starting with one problem or one area of the business that the
organisation as a whole would be interested in, and cares about.

For example, starting by restructuring the corporate policies and procedures will generate little
interest or enthusiasm. In contrast, delivering a system that greatly assists salespeople in the field
would be something that could be widely promoted throughout the organisation.

Conclusion

Implementing information technology solutions in a complex and ever-changing organisational
environment is never easy.

The challenges inherent in information management projects mean that new approaches need to be
taken, if they are to succeed.

This topic has outlined ten key principles of effective information management. These focus on the
organisational and cultural changes required to drive forward improvements.

They also outline a pragmatic, step-by-step approach to implementing solutions that starts with
addressing key needs and building support for further initiatives. A focus on adoption then ensures
that staff actually use the solutions that are deployed.


Of course, much more can be written on how to tackle information management projects. Future
articles will further explore this topic, providing additional guidance and outlining concrete
approaches that can be taken.

Information System for Recruitment and Selection


Human resources are one of the pillars which define a strong and successful organization. The workforce is
the backbone of any organization and forms an integral part of its strategic plans and initiatives.

Recruitment and Selection

Recruitment and selection are two of the main functions carried out by the human-resource department. An
organization undertakes recruitment under the following circumstances:

 If the organization is implementing business expansion plans. This expansion may be in line
with an increase in sales. Company may be looking forward to exploring brand new markets
or coming out with new products.
 If there is attrition within the existing workforce. This attrition could be because existing
employees are moving to other employers or changing industry, or an employee has some
personal reason such as sickness, maternity, etc.
 Organizations also undertake recruitment if they require employees with a specific skill set
which they currently don’t have.
 If the business is changing its base of operations. In such cases, many employees may prefer
not to relocate, hence the need for recruitment.

Change in Employee Mix

The current workforce is constantly evolving with regard to the employee mix. Organizations are
moving more and more toward temporary employees. Furthermore, there is an increase in single
parent employees. Women as a percentage of the workforce have also increased significantly. The
human-resource manager needs to be aware of these changes and develop a recruitment process accordingly.

Recruitment Management System

Every Human resource department has a team to manage the recruitment and selection process.
Information systems have made it possible for companies to have a dedicated tool which helps in
organizing the complete recruitment and selection process.

Recruitment management system greatly enhances the performance of recruitment process and
delivers efficiency to the organization. The key characteristics of the recruitment management system
are as follows:

i. Organize the whole recruitment process in a well-defined and manageable manner.
ii. The system enhances and facilitates comprehensive, reliable, faster and precise online
application management.
iii. The system reduces the overall recruitment time cycle, thereby reducing cost for the
company.


iv. The system consolidates online application, outside recruitment agency process, interview
stage, etc.
v. The system stores all the applicant information within the database to facilitate faster
processing of future requirements.
vi. The system facilitates a user friendly interface between applicant, talent acquisition team and
online application link.
The system has various tools to improve overall productivity of the recruitment process.
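Characteristics (iv) and (v) above, consolidating applications and storing them for faster future processing, can be sketched as a small searchable applicant store. The class, field names and skill tags below are hypothetical illustrations, not features of any specific product.

```python
# Minimal applicant store sketch: applications are kept in a database
# (here, an in-memory list) so that future vacancies can be matched
# against past applicants without a fresh advertising cycle.
from dataclasses import dataclass, field

@dataclass
class Applicant:
    name: str
    skills: set = field(default_factory=set)

class ApplicantStore:
    def __init__(self):
        self._applicants = []

    def add(self, applicant: Applicant):
        self._applicants.append(applicant)

    def search(self, required_skills: set):
        # Return applicants who hold every required skill
        return [a for a in self._applicants
                if required_skills <= a.skills]

store = ApplicantStore()
store.add(Applicant("A. Wanjiru", {"accounting", "excel"}))
store.add(Applicant("B. Otieno", {"java", "sql"}))

matches = store.search({"sql"})
print([a.name for a in matches])  # ['B. Otieno']
```

A real recruitment management system adds the interview-stage tracking, agency integration and applicant-facing interface described above on top of this basic store-and-search core.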

Selection

Selection is a process through which a candidate’s qualifications and the job’s requirements are matched to
establish suitability for the open position. Selection needs to have a structured and definite process
flow.

Selection process consists of various steps like interview, aptitude test, interaction with hiring
manager, background verification, job offer and job acceptance.
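The matching of a candidate’s qualifications against the job’s requirements can be sketched as a simple coverage score. The fraction-of-requirements-met formula below is an assumed illustration; real selection weighs interviews, aptitude tests and background verification alongside any such score.

```python
# Toy suitability score: the fraction of required qualifications the
# candidate meets. Illustrative only - not a standard selection method.
def suitability(candidate_qualifications: set, job_requirements: set) -> float:
    if not job_requirements:
        return 1.0
    met = candidate_qualifications & job_requirements
    return len(met) / len(job_requirements)

# Hypothetical example data
job = {"CPA", "5yrs-experience", "audit"}
candidate = {"CPA", "audit", "excel"}
score = suitability(candidate, job)
print(f"{score:.2f}")  # 0.67
```

Such a score is only a first filter; the structured steps listed above (interview through job acceptance) determine the final decision.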

Recruitment and Selection

Recruitment is a process in which there is a search for potential applicants for various open positions,
whereas selection is a process in which candidates are shortlisted based on their potential.

Employee recruitment and selection are the building blocks of any successful organization. In recent years,
information systems have played a major role in driving efficiency in the process through standardization
and process evolution.

Information System for Training and Development


A successful organization is built on satisfied and trained employees. They are the company’s greatest
assets. Employee development is defined as formal education, on-the-job training, previous job
experience, personality mapping, and improvement in the current skill sets to prepare the employee
for the future.

A trained and developed staff will contribute to productivity increase, improved profitability and
significant increase in the market share. Therefore, it is very important for companies to design and
maintain efficient training/development systems for employees.

An employee development system consists of induction, training, development, periodic counseling,
performance appraisal and career management. This system is deployed to ensure that employees are
able to perform the task they have been hired for and are competent to make career progression along
with it.

Training and Development

Training and development are different from each other. The focus of training is short term while for
development, it is long term. The utilization of work experience is low in training and high in
development. The aim of training is preparation for current assignment while development looks at
upcoming assignments. Employee participation is typically required in training, while it is voluntary in
development.


Importance of Training and Development

An employee development system ensures alignment between an employee’s potential and
organizational expectations. There are various approaches to ensure this alignment. The first approach is
to inform employees of what is expected of them and of their progress towards the goal. The second
approach is to improve the employee’s ability through continuous training. The third approach is to
assign responsibility to each stakeholder in the employee’s development and make them accountable.

The aim of employee development is not only to make them progress in their career but also to train
them as per company’s requirement.

Training and Development System

The key features of training system are as follows:

a) The training management system is developed to ensure that all training requirements of the
organization are effectively managed.
b) Employee management modules of the system help managers design and develop a training
calendar as per the employee’s requirements.
c) Employee management module automatically prepares a list of employees as per upcoming
development sessions.
d) Employee management module also helps in preparing the progress sheet for employees.
The development system is not restricted to online tools; it also includes various policies and
procedures. A comprehensive development system helps the coaching staff continuously assess not only
the progress of employees but also the effectiveness of the development sessions. The development
system consists of software, hardware and the company’s development policies.
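Features (b) and (c) above can be sketched as a small roster builder: recorded skill gaps per employee are inverted into a per-session attendee list. The course names and gap data below are hypothetical.

```python
# Sketch: invert each employee's recorded skill gaps into a session
# roster - the list of employees due for each upcoming development
# session. Data is illustrative only.
from collections import defaultdict

skill_gaps = {
    "A. Wanjiru": ["IFRS update"],
    "B. Otieno": ["IFRS update", "presentation skills"],
}

calendar = defaultdict(list)
for employee, gaps in skill_gaps.items():
    for course in gaps:
        calendar[course].append(employee)

for course, attendees in sorted(calendar.items()):
    print(course, "->", attendees)
```

The same inversion also supports feature (d): comparing an employee’s remaining gaps over time yields their progress sheet.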

Employee Development Tools


Employee development tools are also important part of training management system. 360-degree
feedback system helps to improve employee performance by gathering feedback from various sources
like peers, managers, customers, colleagues, etc. The feedback is anonymous in nature and should be
used as a developmental tool rather than as an administrative tool.
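The anonymity of 360-degree feedback can be preserved in reporting by aggregating ratings per source group, so no individual rater is identifiable. The sketch below uses hypothetical rating data and a plain mean; real instruments use validated questionnaires and larger rater pools.

```python
# Sketch: aggregate anonymous 360-degree ratings per source group.
# Only group averages are reported, never individual responses.
from statistics import mean

# Hypothetical ratings on a 1-5 scale, grouped by source
feedback = {
    "peers":     [4, 3, 5],
    "manager":   [4],
    "customers": [5, 4],
}

report = {source: round(mean(scores), 2) for source, scores in feedback.items()}
for source, avg in report.items():
    print(f"{source}: {avg}")
```

Reporting only the group averages is what makes the feedback safe to give candidly, which is why the result should be used developmentally rather than administratively.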

Companies should identify a high-performing development system before investing in it. They should
continuously strive to improve developmental systems. There is a possibility that existing systems,
sessions and procedures may become monotonous in the long term, thereby affecting employee motivation.

One of the biggest employer fears is that, post-training, employees will look for a change of employment,
and hence some employers do not encourage training. Though this concern is valid in some cases, overall
experience has shown that trained employees display better motivation and loyalty.

Employee Relationship Management (ERM) through Information System


Employee relationship management is the management of the relationship between employees and
employers. It is made up of initiatives which improve employee morale and loyalty towards the
company. The employee relationship management approach looks to maintain an effective relationship


through a three-way approach of continuous communication, conflict resolution and employee
development.

Importance of Employee Relationship Management

Employees are crucial and critical to the overall progress of an organization. The employer-employee
relationship is a very complicated association, and at times strenuous to manage. Employee relationship
management is much more difficult than customer relationship management. For example, if a customer is
not satisfied with the association with a given company, they can move on to another company. However,
if an employee is unhappy with an employer, there is a possibility that they will continue their
association with the company. This employee-employer relationship, however, will not be fruitful or
convenient for either party.

If the employee and employer are in a cordial relationship, then the overall efficiency and competitiveness
of the company will improve. An improvement in relations can result in employees with high morale,
which will increase their loyalty towards the company. If there is an increase in loyalty, employee
turnover can be reduced and better communication can be established.

Information System and Employee Relationship Management


In the last decade or so, information systems have changed the way business is conducted.
Information systems are actively used to improve the productivity of organizations. They are also
actively used in employee relationship management.

The following are examples of information systems’ usage in employee relationship management:

 The current payroll systems are linked with an information system which ensures that
employees get timely as well as accurate salaries.
 Online learning and development tools can easily be managed by employees.
 Information systems facilitate leave, tax, and insurance management of employees.
 Performance appraisal and individual development management are done online with the help
of information systems.
 Employees are aware of the latest development within the organization through access to
Company’s blog and news board.
 Executive management of the company can communicate directly to staff through email.
 Online staff meeting brings together employees from all parts of the world.
Employee Relation Life Cycle

The employee relation life cycle starts as soon as talent is shortlisted for an interview. After the hiring
process, the employee undergoes training to become a full-time contributing team member. Over time,
with involvement in projects and various other associations, the employee comes to be considered a
member of the family. Finally, the employee reaches the stage of representing the brand.

Factors Influencing Employee-Employer Relation

There are several factors which drive the employee-employer relation. The correct management of these
factors creates a long-term and fruitful association for the employee as well as the employer. For
example: compensation, work culture and environment, rewards and recognition, etc.


Every organization has its own work culture and environment. Any job within an organization requires a
certain skill set. The human resource team, along with the hiring manager, scouts for talent and hires an
employee. Companies invest time and resources in training the employee. This training in turn enables
the employee to excel and helps the company meet its business objectives. For this whole process to
reach the desired end, it is essential that a healthy employee-employer relationship is maintained.
Information systems contribute a lot to this success.

INFORMATION SYSTEM FOR COMPETITIVE ADVANTAGE


The word “strategy” originates from the Greek word strategos, meaning “general.” In war, a strategy
is a plan to gain an advantage over the enemy. Other disciplines, especially business, have borrowed
the term. As you know from media coverage, corporate executives often discuss actions in ways that
make business competition sound like war. Businesspeople must devise decisive courses of action to
win—just as generals do. In business, a strategy is a plan designed to help an organization outperform
its competitors. Unlike battle plans, however, business strategy often takes the form of creating new
opportunities rather than beating rivals.

Although many information systems are built to solve problems, many others are built to seize
opportunities. And, as anyone in business can tell you, identifying a problem is easier than creating an
opportunity. Why? Because a problem already exists; it is an obstacle to a desired mode of operation
and, as such, calls attention to itself. An opportunity, on the other hand, is less tangible. It takes a
certain amount of imagination, creativity, and vision to identify an opportunity, or to create one and
seize it. Information systems that help seize opportunities are often called strategic information
systems (SISs). They can be developed from scratch, or they can evolve from an organization’s
existing ISs.

In a free-market economy, it is difficult for a business to do well without some strategic planning.
Although strategies vary, they tend to fall into some basic categories, such as developing a new
product, identifying an unmet consumer need, changing a service to entice more customers or retain
existing clients, or taking any other action that increases the organization’s value through improved
performance.

Many strategies do not, and cannot, involve information systems. But increasingly, corporations are
able to implement certain strategies—such as maximizing sales and lowering costs—thanks to the
innovative use of information systems. In other words, better information gives corporations a
competitive advantage in the marketplace. A company achieves strategic advantage by using strategy
to maximize its strengths, resulting in a competitive advantage. When a business uses a strategy with
the intent to create a market for new products or services, it does not aim to compete with other
organizations, because that market does not yet exist. Therefore, a strategic move is not always a
competitive move. However, in a free-enterprise society, a market rarely remains the domain of one
organization for long; thus, competition ensues almost immediately. So, we often use the terms
“competitive advantage” and “strategic advantage” interchangeably.

You might have heard statements about using the Web strategically. Business competition is no
longer limited to a particular country or even a region of the world. To increase the sale of goods and
services, companies must regard the entire world as their market. Because thousands of corporations
and hundreds of millions of consumers have access to the Web, augmenting business via the Web has


become strategic: many companies that utilized the Web early on have enjoyed greater market shares,
more experience with the Web as a business enabler, and larger revenues than latecomers. Some
companies developed information systems, or features of information systems, that are unique, such
as Amazon’s “one-click” online purchasing and Priceline’s “name your own price” auctioning.
Practically any Web-based system that gives a company competitive advantage is a strategic
information system.

Achieving a Competitive Advantage


Consider competitive advantage in terms of a for-profit company, whose major goal is to maximize
profits by lowering costs and increasing revenue. A for-profit company achieves competitive
advantage when its profits increase significantly, most commonly through increased market share. It
is important to understand that the eight listed below are the most common, but not the only, types of
business strategy an organization can pursue. It is also important to understand that strategic moves
often consist of a combination of two or more of these initiatives and other steps. The essence of
strategy is innovation, so competitive advantage is often gained when an organization tries a strategy
that no one has tried before. The eight basic ways to gain competitive advantage include:

i. Reduce costs
A company can gain advantage if it can sell more units at a lower price while providing quality and
maintaining or increasing its profit margin.

ii. Raise barriers to market entrants


A company can gain advantage if it deters potential entrants into the market, enjoying less
competition and more market potential.

iii. Establish high switching costs


A company can gain advantage if it creates high switching costs, making it economically infeasible
for customers to buy from competitors.

iv. Create new products or services


A company can gain advantage if it offers a unique product or service.

v. Differentiate products or services


A company can gain advantage if it can attract customers by convincing them its product differs from
the competition’s.

vi. Enhance products or services


A company can gain advantage if its product or service is better than anyone else’s.

vii. Establish alliances


Companies from different industries can help each other gain advantage by offering combined
packages of goods or services at special prices.

viii. Lock in suppliers or buyers


A company can gain advantage if it can lock in either suppliers or buyers, making it economically
impractical for suppliers or buyers to deal with competitors.


Creating and Maintaining Strategic Information Systems


There might be many opportunities to accomplish a competitive edge with IT, especially in industries
that are using older software, such as the insurance industry. Insurance companies were among the
early adopters of IT and have not changed much of their software. This is why some observers say the
entire industry is inefficient. Once an insurance company adopts innovative software applications, it
might gain competitive advantage. This might remind you of the airline industry. Most airlines still
use antiquated hardware and software. When JetBlue was established, it adopted the latest
technologies, and this was a major reason for its great competitive advantage.

Companies can implement some of the strategic initiatives described in the previous section by using
information systems. As we mentioned at the beginning of the chapter, a strategic information system
(SIS) is any information system that can help an organization achieve a long-term competitive
advantage. An SIS can be created from scratch, developed by modifying an existing system, or
“discovered” by realizing that a system already in place can be used to strategic advantage. While
companies continue to explore new ways of devising SISs, some successful SISs are the result of less
lofty endeavors: the intention to improve mundane operations using IT has occasionally yielded a
system with strategic qualities.

Strategic information systems combine two types of ideas:

i. ideas for making potentially winning business decisions and


ii. ideas for harnessing information technology to implement the decisions.
For an information system to be an SIS, two conditions must exist. First, the information system must
serve an organizational goal rather than simply provide information; and second, the organization’s IS
unit must work with managers of other functional units (including marketing, finance, purchasing,
human resources, and so on) to pursue the organizational goal.

Creating an SIS
To develop an SIS, top management must be involved from initial consideration through development
and implementation. In other words, the SIS must be part of the overall organizational strategic plan.
There is always the danger that a new SIS might be considered the IS unit’s exclusive property.
However, to succeed, the project must be a corporate effort, involving all managers who use the
system.

Reengineering and Organizational Change


Sometimes, to implement an SIS and achieve competitive advantage, organizations must rethink the
entire way they operate. While brainstorming about strategic plans, management should ask: “If we
established this business unit again, from scratch, what processes would we implement and how?”

The answer often leads to the decision to eliminate one set of operations and build others from the
ground up. Changes such as these are called reengineering. Reengineering often involves adoption of
new machinery and elimination of management layers. Frequently, information technology plays an
important role in this process.

Reengineering’s goal is not to gain small incremental cost savings, but to achieve great efficiency
leaps—of 100 percent and even 1000 percent. With that degree of improvement, a company often
gains competitive advantage. Interestingly, a company that undertakes reengineering along with
implementing a new SIS cannot always tell whether the SIS was successful.

The reengineering process makes it impossible to determine how much each change contributed to the
organization’s improved position.

Implementation of an SIS requires a business to revamp processes—to undergo organizational
change—to gain an advantage. For example, when General Motors Corp. (GM) decided to
manufacture a new car that would compete with Japanese cars, it chose a different production process
from that of its other cars. Management first identified goals that could make the new car successful
in terms of how to build it and also how to deliver and service it. Realizing that none of its existing
divisions could meet these goals because of their organizational structures, their cultures, and their
inadequate ISs, management established Saturn as an independent company with a completely
separate operation.

Part of GM’s initiative was to recognize the importance of Saturn dealerships in gaining competitive
advantage. Through satellite communications, the new company gave dealers access to factory
information. Clients could find out if, and exactly when, different cars with different features would
be available.

Another feature of Saturn’s SIS was improved customer service. Saturn embeds an electronic
computer chip in the chassis of each car. The chip maintains a record of the car’s technical details and
the owner’s name. When the car is serviced after the sale, new information is added to the chip.

At their first service visit, many Saturn owners were surprised to be greeted by name as they rolled
down their windows. While the quality of the car itself has been important to Saturn’s success, the
new SIS also played an important role. This technology was later copied by other automakers.

Competitive Advantage as a Moving Target


As you might have guessed, competitive advantage is not often long lasting. In time, competitors
imitate the leader, and the advantage diminishes. So, the quest for innovative strategies must be
dynamic. Corporations must continuously contemplate new ways to use information technology to
their advantage. In a way, companies’ jockeying for the latest competitive advantage is a lot like an
arms race. Side A develops an advanced weapon, then side B develops a similar weapon that
terminates the advantage of side A, and so on.

In an environment where most information technology is available to all, SISs originally developed to
create a strategic advantage quickly become an expected standard business practice.

A prime example is the banking industry, where surveys indicate that increased IS expenditures did
not yield long-range strategic advantages. The few banks that provided services such as ATMs and
online banking once had a powerful strategic advantage, but now almost every bank provides these
services.

A system can only help a company sustain competitive advantage if the company continuously
modifies and enhances it, creating a moving target for competitors. American Airlines’ Sabre—the
online reservation system for travel agents—is a classic example. The innovative IS was redesigned in
the late 1970s to expedite airline reservations and sell travel agencies a new service. But over the
years, the company spun off an office automation package for travel agencies called Agency Data
Systems. The reservation system now encompasses hotel reservations, car rentals, train schedules,
theater tickets, and limousine rentals. It later added a feature that let travelers use Sabre from their
own computers. The system has been so successful that in its early years American earned more from
it than from its airline operations. The organizational unit that developed and operated the software
became a separate IT powerhouse at AMR Corp., the parent company of American Airlines, and now
operates as Sabre Inc., an AMR subsidiary. It is the leading provider of technology for the travel
industry. Travelocity, Inc., the popular Web-based travel site, is a subsidiary of Sabre, and, naturally,
uses Sabre’s software. Chances are you are using Sabre technology when you make airline
reservations through other Web sites, as well.

Using Information Systems to Achieve Competitive Advantage


In almost every industry you examine, you will find that some firms do better than most others.
There’s almost always a stand-out firm. In the automotive industry, Toyota is considered a superior
performer. In pure online retail, Amazon.com is the leader. In off-line retail Wal-Mart, the largest
retailer on earth, is the leader. In online music, Apple’s iTunes is considered the leader with more than
75 percent of the downloaded music market, and in the related industry of digital music players, the
iPod is the leader. In Web search, Google is considered the leader.

Firms that “do better” than others are said to have a competitive advantage over others: They either
have access to special resources that others do not, or they are able to use commonly available
resources more efficiently—usually because of superior knowledge and information assets. In any
event, they do better in terms of revenue growth, profitability, or productivity growth (efficiency), all
of which ultimately in the long run translate into higher stock market valuations than their
competitors.

But why do some firms do better than others and how do they achieve competitive advantage? How
can you analyze a business and identify its strategic advantages? How can you develop a strategic
advantage for your own business? And how do information systems contribute to strategic
advantages?

One answer to these questions is Michael Porter’s competitive forces model.

Porter’s Competitive Forces Model


Arguably, the most widely used model for understanding competitive advantage is Michael Porter’s
competitive forces model.

This model provides a general view of the firm, its competitors, and the firm’s environment. Porter’s
model is all about the firm’s general business environment. In this model, five competitive forces
shape the fate of the firm.

Traditional Competitors

All firms share market space with other competitors who are continuously devising new, more
efficient ways to produce by introducing new products and services, and attempting to attract
customers by developing their brands and imposing switching costs on their customers.

New Market Entrants

In a free economy with mobile labour and financial resources, new companies are always entering the
marketplace. In some industries, there are very low barriers to entry, whereas in other industries, entry
is very difficult.

For instance, it is fairly easy to start a pizza business or just about any small retail business, but it is
much more expensive and difficult to enter the computer chip business, which has very high capital
costs and requires significant expertise and knowledge that is hard to obtain. New companies have
several possible advantages: They are not locked into old plants and equipment, they often hire
younger workers who are less expensive and perhaps more innovative, they are not encumbered by
old, worn-out brand names, and they are “more hungry” (more highly motivated) than traditional
occupants of an industry. These advantages are also their weakness: They depend on outside financing
for new plants and equipment, which can be expensive; they have a less experienced workforce; and
they have little brand recognition.

Substitute Products and Services

In just about every industry, there are substitutes that your customers might use if your prices become
too high. New technologies create new substitutes all the time. Even oil has substitutes: Ethanol can
substitute for gasoline in cars; vegetable oil for diesel fuel in trucks; and wind, solar, coal, and hydro
power for industrial electricity generation. Likewise, Internet telephone service can substitute for
traditional telephone service, and fiber-optic telephone lines to the home can substitute for cable TV
lines. And, of course, an Internet music service that allows you to download music tracks to an iPod is
a substitute for CD-based music stores. The more substitute products and services in your industry, the
less you can control pricing and the lower your profit margins.

Customers

A profitable company depends in large measure on its ability to attract and retain customers (while
denying them to competitors), and charge high prices.

The power of customers grows if they can easily switch to a competitor’s products and services, or if
they can force a business and its competitors to compete on price alone in a transparent marketplace
where there is little product differentiation, and all prices are known instantly (such as on the
Internet). For instance, in the used college textbook market on the Internet, students (customers) can
find multiple suppliers of just about any current college textbook. In this case, online customers have
extraordinary power over used-book firms.

Suppliers

The market power of suppliers can have a significant impact on firm profits, especially when the firm
cannot raise prices as fast as can suppliers. The more different suppliers a firm has, the greater control
it can exercise over suppliers in terms of price, quality, and delivery schedules. For instance,
manufacturers of laptop PCs almost always have multiple competing suppliers of key components,
such as keyboards, hard drives, and display screens.
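
One way to make the model concrete is a toy scoring sketch: rate the strength of each force for a firm and average the result. The five force names follow Porter's model, but the 1–5 scale, the scores, and the sample firm below are invented purely for illustration.

```python
# Toy illustration of Porter's five competitive forces. The force names
# come from the model; the scores for the hypothetical firm are invented.

FORCES = [
    "traditional competitors",
    "new market entrants",
    "substitute products and services",
    "customers",
    "suppliers",
]

def industry_pressure(scores):
    """Average the per-force pressure scores (1 = weak, 5 = strong).

    A higher average suggests a harsher competitive environment and,
    per the model's qualitative logic, thinner profit margins.
    """
    missing = [f for f in FORCES if f not in scores]
    if missing:
        raise ValueError(f"missing force scores: {missing}")
    return sum(scores[f] for f in FORCES) / len(FORCES)

# Hypothetical assessment of a small online retailer.
retailer = {
    "traditional competitors": 5,          # many rivals compete on price
    "new market entrants": 4,              # low barriers to entry
    "substitute products and services": 3,
    "customers": 5,                        # buyers can switch with one click
    "suppliers": 2,                        # many interchangeable suppliers
}

print(industry_pressure(retailer))  # → 3.8
```

The averaging rule is deliberately naive; a real analysis would weigh the forces qualitatively, as the following sections do.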

Information System Strategies for Dealing with Competitive Forces


What is a firm to do when faced with all these competitive forces? And how can the firm use
information systems to counteract some of them?

How do you prevent substitutes and inhibit new market entrants? There are four generic strategies,
each of which often is enabled by using information technology and systems:

a) low-cost leadership,
b) product differentiation,
c) focus on market niche, and
d) strengthening customer and supplier intimacy.

a) Low-Cost Leadership
Use information systems to achieve the lowest operational costs and the lowest prices. The classic
example is Wal-Mart. By keeping prices low and shelves well stocked using a legendary inventory
replenishment system, Wal-Mart became the leading retail business in the United States. Wal-Mart’s
continuous replenishment system sends orders for new merchandise directly to suppliers as soon as
consumers pay for their purchases at the cash register. Point-of-sale terminals record the bar code of
each item passing the checkout counter and send a purchase transaction directly to a central computer
at Wal-Mart headquarters. The computer collects the orders from all Wal-Mart stores and transmits
them to suppliers. Suppliers can also access Wal-Mart’s sales and inventory data using Web
technology.
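
The replenishment loop just described can be sketched in a few lines: each checkout scan decrements shelf stock, and when stock falls to a reorder point a purchase order goes straight to the supplier. The item codes, reorder point, and order quantity below are hypothetical, not Wal-Mart's actual parameters.

```python
# Minimal sketch of a continuous-replenishment loop. All codes and
# thresholds here are invented for illustration.

REORDER_POINT = 5   # reorder when shelf stock falls to this level
ORDER_QTY = 20      # units per purchase order

inventory = {"item-001": 7}   # bar code -> units on the shelf
orders_to_supplier = []       # orders the "central computer" emits
pending = set()               # items already on order, to avoid duplicates

def checkout_scan(bar_code):
    """Record one sale at the point-of-sale terminal and reorder if low."""
    inventory[bar_code] -= 1
    if inventory[bar_code] <= REORDER_POINT and bar_code not in pending:
        orders_to_supplier.append((bar_code, ORDER_QTY))
        pending.add(bar_code)

for _ in range(3):            # three customers buy item-001
    checkout_scan("item-001")

print(inventory["item-001"])  # 4 units remain
print(orders_to_supplier)     # one reorder fired when stock hit 5
```

The `pending` set mirrors the point in the text about avoiding large warehouse inventories: an item is reordered once, the moment demand reveals the need, rather than stockpiled in advance.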

Because the system replenishes inventory with lightning speed, Wal-Mart does not need to spend
much money on maintaining large inventories of goods in its own warehouses. The system also
enables Wal-Mart to adjust purchases of store items to meet customer demands. Competitors, such as
Sears, have been spending 24.9 percent of sales on overhead. But by using systems to keep operating
costs low, Wal-Mart pays only 16.6 percent of sales revenue for overhead. (Operating costs average
20.7 percent of sales in the retail industry.)

Wal-Mart’s continuous replenishment system is also an example of an efficient customer response
system. An efficient customer response system directly links consumer behavior to distribution and
production and supply chains. Wal-Mart’s continuous replenishment system provides such an
efficient customer response. Dell Computer Corporation’s assemble-to-order system is another
example of an efficient customer response system.

b) Product Differentiation
Use information systems to enable new products and services, or greatly change the customer
convenience in using your existing products and services.

For instance, Google continuously introduces new and unique search services on its Web site, such as
Google Maps. By purchasing PayPal, an electronic payment system, in 2003, eBay made it much
easier for customers to pay sellers and expanded use of its auction marketplace. Apple created iPod, a
unique portable digital music player, plus a unique online Web music service where songs can be
purchased for 99 cents. Continuing to innovate, Apple recently introduced a portable iPod video
player.

Manufacturers and retailers are using information systems to create products and services that are
customized and personalized to fit the precise specifications of individual customers. Dell Computer
Corporation sells directly to customers using assemble-to-order manufacturing. Individuals,
businesses, and government agencies can buy computers directly from Dell, customized with the
exact features and components they need. They can place their orders directly using a toll-free
telephone number or by accessing Dell’s Web site.

Once Dell’s production control receives an order, it directs an assembly plant to assemble the
computer using components from an on-site warehouse based on the configuration specified by the
customer.

Lands’ End customers can use its Web site to order jeans, dress pants, chino pants, and shirts custom-
tailored to their own specifications. Customers enter their measurements into a form on the Web site,
which then transmits each customer’s specifications over a network to a computer that develops an
electronic made-to-measure pattern for that customer. The individual patterns are then transmitted
electronically to a manufacturing plant, where they are used to drive fabric-cutting equipment. There
are almost no extra production costs because the process does not require additional warehousing,
production overruns, and inventories, and the cost to the customer is only slightly higher than that of a
mass-produced garment. This ability to offer individually tailored products or services using the same
production resources as mass production is called mass customization.
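
An assemble-to-order flow like Dell's can be sketched as a simple configurator that prices a build from customer-chosen components and hands the specification to the plant. The component names and prices below are hypothetical, not Dell's actual catalog.

```python
# Sketch of an assemble-to-order configurator in the spirit of Dell's
# direct-sales model. Component names and prices are invented.

CATALOG = {
    "cpu": {"base": 250, "fast": 420},
    "ram_gb": {8: 60, 16: 110},
    "storage": {"hdd": 50, "ssd": 140},
}

def configure_order(cpu, ram_gb, storage):
    """Price a customer's chosen configuration and return the build spec.

    Raises KeyError if a chosen component is not in the catalog, which
    stands in for the validation a real order system would perform.
    """
    price = (CATALOG["cpu"][cpu]
             + CATALOG["ram_gb"][ram_gb]
             + CATALOG["storage"][storage])
    return {"cpu": cpu, "ram_gb": ram_gb, "storage": storage, "price": price}

# A customer orders directly, exactly as described in the text.
order = configure_order("fast", 16, "ssd")
print(order["price"])  # 420 + 110 + 140 = 670
```

The point of mass customization shows up in the data: every order carries its own specification, yet all orders flow through the same catalog and the same pricing function, i.e. the same production resources.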

c) Focus on Market Niche


Use information systems to enable a specific market focus, and serve this narrow target market better
than competitors. Information systems support this strategy by producing and analyzing data for
finely tuned sales and marketing techniques. Information systems enable companies to analyze
customer buying patterns, tastes, and preferences closely so that they efficiently pitch advertising and
marketing campaigns to smaller and smaller target markets.

The data come from a range of sources—credit card transactions, demographic data, purchase data
from checkout counter scanners at supermarkets and retail stores, and data collected when people
access and interact with Web sites.

Sophisticated software tools find patterns in these large pools of data and infer rules from them to
guide decision making. Analysis of such data drives one-to-one marketing that creates personal
messages based on individualized preferences. Contemporary customer relationship management
(CRM) systems feature analytical capabilities for this type of intensive data analysis.

Hilton Hotels uses a customer information system called OnQ, which contains detailed data about
active guests in every property across the eight hotel brands owned by Hilton. Employees at the front
desk tapping into the system instantly search through 180 million records to find out the preferences
of customers checking in and their past experiences with Hilton so they can give these guests exactly
what they want. OnQ establishes the value of each customer to Hilton, based on personal history and
on predictions about the value of that person’s future business with Hilton. OnQ can also identify
customers who are clearly not profitable. Profitable customers receive extra privileges and attention,
such as the ability to check out late without paying additional fees. After Hilton started using the
system, the rate of staying at Hilton Hotels rather than at competing hotels soared from 41 percent to
61 percent (Kontzer, 2004).
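
The kind of value calculation the OnQ description implies can be sketched as a toy model: past revenue plus a naive projection of future business determines whether a guest earns extra privileges. The formula, figures, and threshold below are invented for illustration and are not Hilton's actual method.

```python
# Toy customer-value score: past revenue plus projected future revenue
# over a planning horizon. All numbers and thresholds are hypothetical.

def customer_value(past_revenue, stays_per_year, avg_rate, years_ahead=3):
    """Past revenue plus a naive projection of future stays."""
    return past_revenue + stays_per_year * avg_rate * years_ahead

def gets_late_checkout(value, threshold=2000):
    """Profitable customers receive extra privileges, per the text."""
    return value >= threshold

frequent_guest = customer_value(past_revenue=1500, stays_per_year=4, avg_rate=120)
rare_guest = customer_value(past_revenue=200, stays_per_year=1, avg_rate=90)

print(frequent_guest, gets_late_checkout(frequent_guest))  # 2940 True
print(rare_guest, gets_late_checkout(rare_guest))          # 470 False
```

A production CRM would replace the linear projection with statistical prediction over the kinds of data sources listed earlier, but the decision logic—score each customer, then differentiate service—is the same.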

The Interactive Session on Technology shows how 7-Eleven improved its competitive position by
wringing more value out of its customer data. This company’s early growth and strategy had been
based on face-to-face relationships with its customers and intimate knowledge of exactly what they
wanted to purchase. As the company grew over time, it was no longer able to discern customer
preferences through personal face-to-face relationships.

A new information system helped it obtain intimate knowledge of its customers once again by
gathering and analyzing customer purchase transactions.

d) Strengthen Customer and Supplier Intimacy


Use information systems to tighten linkages with suppliers and develop intimacy with customers.
Chrysler Corporation uses information systems to facilitate direct access from suppliers to production
schedules, and even permits suppliers to decide how and when to ship supplies to Chrysler factories.
This allows suppliers more lead time in producing goods. On the customer side, Amazon.com keeps
track of user preferences for book and CD purchases, and can recommend titles purchased by others
to its customers. Strong linkages to customers and suppliers increase switching costs (the cost of
switching from one product to a competing product), and loyalty to your firm.
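
Amazon-style "customers who bought this also bought" recommendations can be illustrated with a tiny co-occurrence count over tracked purchase histories. The order data and the ranking rule below are fabricated for the example.

```python
# Minimal "also bought" sketch: rank items that co-occur most often with
# a given item across past orders. The order data is invented.

from collections import Counter

orders = [
    {"book-a", "book-b"},
    {"book-a", "book-c"},
    {"book-a", "book-b", "cd-1"},
    {"book-d"},
]

def recommend(item, history=orders, top=2):
    """Return up to `top` items that most often co-occur with `item`."""
    co = Counter()
    for basket in history:
        if item in basket:
            co.update(basket - {item})   # count every co-purchased item
    return [other for other, _ in co.most_common(top)]

print(recommend("book-a"))  # book-b co-occurs most often (twice)
```

Each recommendation a customer acts on deepens the purchase history behind it, which is one concrete way such systems raise switching costs: the accumulated preference data does not follow the customer to a competitor.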

Some companies focus on one of these strategies, but you will often see companies pursuing several
of them simultaneously. For example, Dell Computer tries to emphasize low cost as well as the ability
to customize its personal computers.

The Internet’s Impact on Competitive Advantage


The Internet has nearly destroyed some industries and has severely threatened more. The Internet has
also created entirely new markets and formed the basis for thousands of new businesses. The first
wave of e-commerce transformed the business world of books, music, and air travel.

In the second wave, eight new industries are facing a similar transformation scenario: telephone
services, movies, television, jewelry, real estate, hotels, bill payments, and software. The breadth of e-
commerce offerings grows especially in travel, information clearinghouses, entertainment, retail
apparel, appliances, and home furnishings.

For instance, the printed encyclopedia industry and the travel agency industry have been nearly
decimated by the availability of substitutes over the Internet. Likewise, the Internet has had a
significant impact on the retail, music, book, brokerage, and newspaper industries. At the same time,
the Internet has enabled new products and services, new business models, and new industries to spring up
every day, from eBay and Amazon.com to iTunes and Google. In this sense, the Internet is
“transforming” entire industries, forcing firms to change how they do business.

Because of the Internet, the traditional competitive forces are still at work, but competitive rivalry has
become much more intense (Porter, 2001). Internet technology is based on universal standards that
any company can use, making it easy for rivals to compete on price alone and for new competitors to
enter the market. Because information is available to everyone, the Internet raises the bargaining
power of customers, who can quickly find the lowest-cost provider on the Web. Profits have been
dampened. Some industries, such as the travel industry and the financial services industry, have been
more impacted than others.

However, contrary to Porter’s somewhat negative assessment, the Internet also creates new
opportunities for building brands and building very large and loyal customer bases that are willing to
pay a premium for the brand, for example, Yahoo, eBay, BlueNile, RedEnvelope, Overstock.com,
Amazon.com, Google, and many others. In addition, as with all IT-enabled business initiatives, some
firms are far better at using the Internet than other firms are, which creates new strategic opportunities
for the successful firms.

REVISION EXERCISES
1. Define information system strategy.
2. What is a business strategy hierarchy?
3. What are the components of a virtual value system?
4. Discuss the six dimensions of an excellent strategic process and information system process.
5. What are the characteristics of a strategic information system plan?
6. How can information systems be applied so that a business is effective?
7. What are the functions of an information system?
8. What is the meaning of competitive advantage in business?
9. What are some ways in which a business can gain competitive advantage?
10. Discuss Porter’s competitive forces model.
11. How can the Internet be used to gain competitive advantage?

CHAPTER 8
MANAGING INFORMATION SYSTEMS SECURITY
SYNOPSIS
Introduction……………………………………………….. 275
Information Systems Threats………………………………. 278
Threats Control………………………………………….. 279
Systems Integrity…………………………………………. 288
Information Systems Risk Management………………… 315
Disaster Recovery And Business
Continuity Planning………………………………………. 317

INTRODUCTION
Information Systems Security Management (ISSM) from the perspective of the emergent organization,
such as the e-commerce firm, is under-studied and requires attention from academics. Although an
emergent organization may be smaller in size and resources, the threats to its information systems are
much the same, and just as disastrous, as those facing hierarchical organizations. Even so, the steps
towards managing information systems security differ considerably between the emergent organization
and the hierarchical organization in terms of technology, people and procedure.

The evolution of the business model from the hierarchical organization to the emergent organization
poses one of the crucial challenges to Information Systems Security Management. Current policies and
procedures are not ready to support the emergent organization, which is very dynamic and highly
volatile. Evaluations of selected standard approaches bear this out: current standards are geared to
supporting stable environments rather than emergent ones. Larger organizations face larger threats, and
different types of organization face different types of threat. Because there are many types of
organization and business model, each company's IT environment is unique; for example, it has its own
set of software products, which may have been evaluated, in part or in full, under different IS evaluation
schemes. For these reasons, evaluators, particularly in emergent organizations, are advised to take more
liberty in modifying the evaluation process for their own purposes.

Studies of the critical success factors of e-commerce show that trust and security are part and parcel of
e-commerce success. These factors indicate that any business wishing to adopt e-commerce as its
business model, or as an alternative profit generator, must implement the security measures vital for
competitive advantage. E-commerce companies therefore have to understand and implement security
measures appropriate to the business, based on current security standards. An e-commerce company
must choose the security measures most appropriate to the business and best suited to supporting its
objectives; if the wrong measures are adopted, the company may face serious problems such as wasted
resources. Widely used standards such as BS ISO/IEC 17799:2000 focus on processes rather than on
the content of security management, and the processes themselves are often abstract and oversimplified.

Little advice is given to assist companies in the practice of IS security management. Hierarchical
companies with abundant resources may have little trouble practising the standards, but e-commerce
companies will. Limited resources and time constraints make the task of IS security management
tiresome and unattractive, so this most important task is often left aside. Most e-commerce retailers
have a business model unique to their own entity and require a dynamic procedure to safeguard their
information systems. An Information Systems Security Management approach that supports a dynamic
business model is therefore badly needed: a method appropriate to this business context lets such firms
enter the market ahead of their competitors.

Strategic management has the ambition to be the field that informs the decisions and actions of
general managers. In pursuit of this high goal the field has from time to time worshipped at various
theoretical altars, both in economics and sociology. For example, it has looked to industrial-
organization and transaction-cost economics, agency, network, contingency and, more recently,
resource-based (or its cousin the dynamic-capabilities-based) theories of the firm, for inspiration.
While some of these theories have helped guide general management decisions and actions, many
have been hard to operationalize. The field still lacks an actionable theory.

While theoretical anchors are seen as giving the field academic respectability, strategic management
has helped practitioners more by its frameworks and typologies.

The traditional hierarchical view of strategies: corporate, business unit and functional, must be viewed
in this light. While the hierarchical view of strategies has never had the pretensions of being a theory,
it did capture the essence of what was seen as best practice in the 1960s and 1970s. It was a useful
framework.

While the hierarchy of strategy is still often taught in business schools today, its theoretical relevance
and empirical support have been severely questioned. It does not mirror the actual locus of decision
making or the causality of strategy making in a global firm today. In a transnational firm, the
corporate office continues to drive corporate strategy for optimal portfolio balance. But this portfolio
is defined not just along business lines but also along geography and resource dimensions, traditionally
the prerogatives of business units and functions. Business units and functions are run globally, and the
heads of these business units and functions are also corporate officers.

Strategic initiatives at a business or functional level may indeed drive the development of corporate
strategy, which, in the hierarchy of strategy, is viewed from the top down.

Corporate, business and functional strategies are not hierarchical anymore; they are contemporaneous
and interactive. Instead of a hierarchy of strategies, we should think more in terms of a heterarchy of
strategies. In a hierarchy, every strategic decision-making node is connected to at most one parent
node. In a heterarchy, however, a node can be connected to any of its surrounding nodes without
needing to go through or get permission from some other node.

Definition of Computer Security – Threats, Hazards and Controls

Information is a strategic resource, and a significant portion of the organizational budget is spent on
managing it. A security system is a set of mechanisms and techniques that protect a computer system,
specifically its assets, against loss or harm, including unauthorized access, unauthorized disclosure
and interference with information.

Assets can be categorized into:


• Resources – all instances of hardware, software, communication channels, operating environment,
documentation and people
• Data – files, databases, messages in transit, etc.

A security attack is the act or attempt to exploit a vulnerability in a system. Security controls are the
mechanisms used to counter an attack. Attacks can be classified into active and passive attacks.

• Passive attacks – the attacker observes information without interfering with it or with its flow, and
does not interfere with operation. What is observed is message content and message traffic.

• Active attacks – involve more than observation. There is interference with traffic or message flow,
which may involve modification, deletion or destruction of messages. The attacker may masquerade
as, or impersonate, another user. There is also denial or repudiation, where someone does something
and denies it later; this is a threat against authentication and, to some extent, integrity.

Security Goals
To retain a competitive advantage and to meet basic business requirements organizations must
endeavour to achieve the following security goals.

a. Confidentiality
Protect information value and preserve the confidentiality of sensitive data. Information should not be
disclosed without authorization. Information whose release is restricted to a certain section of the
public should be identified and protected against unauthorized disclosure.

b. Integrity
Ensure the accuracy and reliability of the information stored on the computer systems. Information has
integrity if it is consistent with the real-world situation it represents. Information should not be altered
without authorization. Hardware designed to perform certain functions has lost integrity if it does not
perform those functions correctly. Software has lost integrity if it does not perform according to its
specifications. Communication channels should relay messages securely so that their integrity is
preserved, and people should ensure the system functions according to its specifications.
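The requirement that communication channels relay messages without undetected alteration can be illustrated with a keyed message digest. The sketch below uses Python's standard-library `hmac` module; the shared key and messages are hypothetical, and a real deployment would also manage key distribution.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key agreed between sender and receiver

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag that the receiver can later verify."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Return True only if the message has not been altered in transit."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"pay 100 to account 42")
ok = verify(b"pay 100 to account 42", tag)        # unaltered message passes
tampered = verify(b"pay 900 to account 42", tag)  # altered message fails
```

Because the tag depends on a secret key, an attacker who modifies the message in transit cannot forge a matching tag, so the receiver detects the loss of integrity.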


c. Availability
Ensure the continued availability of the information system and all its assets to legitimate users at an
acceptable level of service or quality of service. Any event that degrades the performance or quality of
a system affects availability.

d. Compliance
Ensure conformity to laws, regulations and standards.

Hazards (exposures) to information security

An exposure is a form of possible loss or harm. Examples of exposures include:


 Unauthorized access resulting in a loss of computing time
 Unauthorized disclosure – information revealed without authorization
 Destruction especially with respect to hardware and software
 Theft
 Interference with system operation.

INFORMATION SYSTEM THREATS


Information systems security remains high on the list of key issues facing information systems
executives. Traditional concerns range from forced entry into computer and storage rooms to
destruction by fire, earthquake, flood, and hurricane. Recent attention focuses on protecting
information systems and data from accidental or intentional unauthorized access, disclosure,
modification, or destruction. The consequences of these events can range from degraded or disrupted
service to customers to corporate failure. This topic reports on a study investigating MIS executives'
concern about a variety of threats. A relatively new threat, computer viruses, was found to be a
particular concern. The results highlight a gap between the use of modern technology and the
understanding of the security implications inherent in its use. Many of the responding information
systems managers have migrated their organizations into the highly interconnected environment of
modern technology but continue to view threats from the perspective of a pre-connectivity era. They
expose their firms to unfamiliar risks that they are unaware of, refuse to acknowledge, or are often
poorly equipped to manage.

Threats to Information Security

These are circumstances with the potential to cause loss or harm, i.e. to bring about exposures. They
include:
 Human error
 Disgruntled employees
 Dishonest employees
 Greedy employees who sell information for financial gain
 Outsider access – hackers, crackers, criminals, terrorists, consultants, ex-consultants, ex-
employees, competitors, government agencies, spies (industrial, military etc), disgruntled
customers
 Acts of God/natural disasters – earthquakes, floods, hurricanes


 Foreign intelligence
 Accidents, fires, explosion
 Equipment failure
 Utility outage
 Water leaks, toxic spills
 Viruses – these are programmed threats

Vulnerability
Vulnerability is a weakness within a system that can potentially lead to loss or harm. Threats cause
harm by exploiting vulnerabilities: a system sited in a disaster-prone location is vulnerable to natural
disasters, and a system running erroneous programs is vulnerable to the failures those errors can cause.

THREATS CONTROL
The 2005 CSI/FBI Computer Crime and Security Survey of 700 computer security practitioners
revealed that the frequency of system security breaches had been steadily decreasing since 1999 for
almost all threats except the abuse of wireless networks. Each of these threats has nevertheless caused
financial losses. Note, however, that the survey report pointed out that implicit losses (e.g., lost sales)
are difficult to measure and might not have been included by survey participants. Some of the system
security threats are discussed below.

a. Viruses
A computer virus is a software code that can multiply and propagate itself. A virus can spread into
another computer via e-mail, downloading files from the Internet, or opening a contaminated file. It is
almost impossible to completely protect a network computer from virus attacks; the CSI/FBI survey
indicated that virus attacks were the most widespread attack for six straight years since 2000.

Viruses are just one of several programmed threats or malicious codes (malware) in today’s
interconnected system environment. Programmed threats are computer programs that can create a
nuisance, alter or damage data, steal information, or cripple system functions. Programmed threats
include computer viruses, Trojan horses, logic bombs, worms, spam, spyware, and adware.

According to a recent study by the University of Maryland, more than 75% of participants received
e-mail spam every day. There are two problems with spam: employees waste time reading and deleting
it, and it increases the system overhead needed to deliver and store junk data.

Spyware is a computer program that secretly gathers users’ personal information and relays it to third
parties, such as advertisers. Common functionalities of spyware include monitoring keystrokes,
scanning files, snooping on other applications such as chat programs or word processors, installing
other spyware programs, reading cookies, changing the default homepage on the Web browser, and
consistently relaying information to the spyware home base. Unknowing users often install spyware
as the result of visiting a website, clicking on a disguised pop-up window, or downloading a file from
the Internet.

Adware is a program that can display advertisements such as pop-up windows or advertising banners
on webpages. A growing number of software developers offer free trials for their software until users

pay to register. Free-trial users view sponsored advertisements while the software is being used. Some
adware does more than just present advertisements, however; it can report users’ habits, preferences,
or even personal information to advertisers or other third parties, similar to spyware.

To protect computer systems against viruses and other programmed threats, companies must have
effective access controls and install and regularly update quarantine software. With effective
protection against unauthorized access and by encouraging staff to become defensive computer users,
virus threats can be reduced. Some viruses can infect a computer through operating system
vulnerabilities. It is critical to install system security patches as soon as they are available.
Furthermore, effective security policies can be implemented with server operating systems such as
Microsoft Windows XP and Windows Server 2003. Other kinds of software (e.g., Deep Freeze) can
protect and preserve original computer configurations. Each system restart eradicates all changes,
including virus infections, and resets the computer to its original state. The software eliminates the
need for IT professionals to perform time-consuming and counterproductive rebuilding, re-imaging,
or troubleshooting when a computer becomes infected.

Fighting against programmed threats is an ongoing and ever-changing battle. Many organizations,
especially small ones, are understaffed and underfunded for system security. Organizations can use
one of a number of effective security suites (e.g., Norton Internet Security 2005, ZoneAlarm Security
Suite 5.5, McAfee Virus Scan) that offer firewall, anti-virus, anti-spam, anti-spyware, and parental
controls (for home offices) at the desktop level. Firewalls and routers should also be installed at the
network level to eliminate threats before they reach the desktop. Anti-adware and anti-spyware
software are signature-based, and companies are advised to install more than one to ensure effective
protection. Installing anti-spam software on the server is important because increasing spam results in
productivity loss and a waste of computing resources. Important considerations for selecting anti-
spam software include a system’s effectiveness, impact on mail delivery, ease of use, maintenance,
and cost. Many Internet service providers conveniently reduce spam on their servers before it reaches
subscribers. Additionally, companies must maintain in-house and off-site backup copies of corporate
data and software so that data and software can be quickly restored in the case of a system failure.

b. Insider Abuse of Internet Access


Annual U.S. productivity growth was 2.5% during the second half of the 1990s, as compared to 1.5%
from 1973 to 1995, a jump that has been attributed to the use of IT. Unfortunately, IT tools can be
abused. For example, e-mail and Internet connections are available in almost all offices to improve
productivity, but employees may use them for personal reasons, such as online shopping, playing
games, and sending instant messages to friends during work hours.

The 2005 Electronic Monitoring and Surveillance Survey conducted by the American Management
Association (AMA) and the ePolicy Institute revealed that 76% of employers monitor employees' web
connections, while 50% of employers monitor and store employee computer files. The survey also
revealed that 26% of participating employers have fired workers for workplace offenses related to the
Internet; 25% have fired employees for misuse of e-mail; and 65% of those surveyed used software to
block employee access to inappropriate websites. Most U.S. companies allow reasonable use of
computers for personal reasons, but many never define “reasonable.” As a preventive control, every
company should have a written policy regarding the use of corporate computing facilities. In addition,
companies should update their monitoring policies periodically, because IT evolves rapidly.


If an Internet monitoring policy is clearly stated, companies need not worry about employee privacy
concerns; the Electronic Communications Privacy Act does give companies the right to monitor
electronic communications in the ordinary course of business.

c. Laptop or Mobile Theft


Because they are relatively expensive, laptops and PDAs have become the targets of thieves.
Although the percentage has declined steadily since 1999, about half of network executives indicated
that their corporate laptops or PDAs were stolen in 2005. Besides being expensive, they often contain
proprietary corporate data, access codes to company networks, and sensitive information.

The following suggestions can help minimize the chance of theft when outside the office:

 Never leave a notebook or PDA unattended, including in a car or hotel room.
 Install a physical protection device such as a lock and cable or an alarm.
 Put the notebook in a nondescript bag or case.
 Install stealth-tracking software.
 Disable automatic logins: if a notebook is stolen, they make it easy for a thief to access sensitive
information. Password protection does not deter theft, but it does make it more difficult for
thieves to use the stored information. Biometric security, such as the fingerprint readers
included in some newer ThinkPad models, is even better.
 Back up data regularly, or install a desktop/notebook/PDA sync program.

d. Denial of Service
A denial of service (DoS) attack is specifically designed to interrupt normal system functions and
affect legitimate users’ access to the system. Hostile users send a flood of fake requests to a server,
overwhelming it and making a connection between the server and legitimate clients difficult or
impossible to establish. The distributed denial of service (DDoS) allows the hacker to launch a
massive, coordinated attack from thousands of hijacked (zombie) computers remotely controlled by
the hacker. A massive DDoS attack can paralyze a network system and bring down giant websites.
For example, the 2000 DDoS attacks brought down websites such as Yahoo! and eBay for hours.
Unfortunately, any computer system can be a hacker’s target as long as it is connected to the Internet.

DoS attacks can result in significant server downtime and financial loss for many companies, but the
controls to mitigate the risk are very technical. Companies should evaluate their potential exposure to
DoS attacks and determine the extent of control or protection they can afford.
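One inexpensive application-level control against request floods is to throttle clients that exceed a request-rate threshold. The sliding-window limiter below is a simplified Python sketch, not a substitute for network-level DDoS protection; the window size and per-client limit are illustrative values.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0  # illustrative time window
MAX_REQUESTS = 5      # illustrative per-client limit within the window

_history = defaultdict(deque)  # client id -> timestamps of recent requests

def allow_request(client_id, now=None):
    """Return False when a client exceeds MAX_REQUESTS per WINDOW_SECONDS."""
    now = time.monotonic() if now is None else now
    window = _history[client_id]
    # Discard timestamps that have fallen outside the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # throttle: request rate looks like a flood
    window.append(now)
    return True

# Simulate six rapid requests from one client; the sixth is refused.
results = [allow_request("10.0.0.9", now=t)
           for t in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]]
```

In practice such throttling is deployed at the load balancer or firewall, where fake requests can be dropped before they consume server resources.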

e. Unauthorized Access to Information


To control unauthorized access to information, access controls, including passwords and a controlled
environment, are necessary. Computers installed in a public area, such as a conference room or
reception area, can create serious threats and should be avoided if possible. Any computer in a public
area must be equipped with a physical protection device to control access when there is no business
need. The LAN should be in a controlled environment accessed by authorized employees only.
Employees should be allowed to access only the data necessary for them to perform their jobs.
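Password-based access control is only as strong as the way passwords are stored. The following is a minimal sketch, assuming the application keeps only salted password hashes rather than plaintext; it uses only the Python standard library, and the iteration count is an illustrative choice.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative PBKDF2 work factor

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password itself."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password, salt, digest):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
good = check_password("correct horse", salt, digest)
bad = check_password("wrong guess", salt, digest)
```

Because only the salt and digest are stored, an attacker who copies the credential file still has to mount an expensive brute-force attack per password rather than reading them directly.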

f. Abuse of Wireless Networks


Wireless networks offer the advantage of convenience and flexibility, but system security can be a big
issue. Attackers do not need to have physical access to the network. Attackers can take their time


cracking the passwords and reading the network data without leaving a trace. One option to prevent an
attack is to use one of several encryption standards that can be built into wireless network devices.
One example, wired equivalent privacy (WEP) encryption, can be effective at stopping amateur
snoopers, but it is not sophisticated enough to foil determined hackers. Consequently, any sensitive
information transmitted over wireless networks should be encrypted at the data level as if it were
being sent over a public network.

g. System Penetration
Hackers penetrate systems illegally to steal information, modify data, or harm the system. The
following factors are related to system penetration:

i. System holes
Design deficiencies in operating systems or application systems that allow hijacking, security
bypass, data manipulation, privilege escalation, and unauthorized system access.

ii. Port scanning


A hacking technique used to check TCP/IP ports to reveal the services that are available and to
identify the weaknesses of a computer or network system in order to exploit them.

iii. Network sniffing


A hardware or software tool that collects network traffic data in order to decipher passwords with
password-cracking software, which may result in unauthorized access to a network system.

iv. IP spoofing
A technique used to gain unauthorized access to computers, whereby hackers send messages to a
computer with a forged IP address so that they appear to come from a trusted host.

v. Back door/trap door


A hole in the security of a computer system deliberately left in place by designers or maintainers.

vi. Tunneling
A method for circumventing a firewall by hiding a message that would be rejected by the firewall
inside another, acceptable message.
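The port-scanning technique in (ii) above can be sketched in a few lines; security auditors use similar probes defensively to verify that only intended services are exposed. This is a minimal Python example using the standard `socket` module; the host and port list are placeholders, and scanning systems without permission may be illegal.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe the local machine for a few well-known service ports.
found = scan_ports("127.0.0.1", [22, 80, 443])
```

An attacker uses the same probe to enumerate running services and then targets known weaknesses in them, which is why unused services and ports should be disabled or firewalled.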

According to Symantec, unpatched operating system (OS) holes are one of the most common ways to
break into a system network; using a worm is also becoming more common. Therefore, the first step
to guard against hackers is to download free patches to fix security holes when OS vendors release
them. Routinely following this step can dramatically improve network security for many companies.
Companies can use patch-management software to automate the distribution of authentic patches from
multiple software vendors throughout the entire organization. Not all patches can work flawlessly
with existing applications, however, and sometimes the patches may conflict with a few applications,
especially the older ones. If possible, patches should first be tested in a simulated environment, and
existing systems should be backed up before the patch is installed.

Companies can use software tools or system-penetration testing to scan the system and assess
systems’ susceptibility and the effectiveness of any countermeasures in place. The testing techniques
must be updated regularly to detect ever-changing threats and vulnerabilities. Other controls to
mitigate system penetration are as follows:


i. Install anti-sniffer software to scan the networks; use encryption to mitigate data-sniffing
threats.
ii. Install all the server patches released by vendors. Servers have incorporated numerous
security measures to prevent IP spoofing attacks.
iii. Install a network firewall so that internal addresses are not revealed externally.
iv. Establish a good system-development policy to guard against a back door/trap door; remove
the back door as soon as the new system development is completed.
v. Design security and audit capabilities to cover all user levels.

h. Telecom Fraud
In the past, telecom fraud involved the fraudulent use of telecommunication (telephone) facilities.
Intruders often hacked into a company's private branch exchange (PBX) and its administration or
maintenance port for personal gain, including free long-distance calls, stealing or changing
information in voicemail boxes, diverting calls illegally, wiretapping, and eavesdropping.

As analog and digital data communications have converged, some companies have utilized the Voice
over Internet Protocol (VOIP) to lower phone bills. The originating and receiving phone numbers are
converted to IP addresses and the PBX is linked to a company’s networked computers, and hackers
can get into systems through PBX or computerized branch exchange (CBX). In addition, every
PBX/CBX system is equipped with a software program that makes it vulnerable to remote-access
fraud, and intruders use sophisticated software to find an easy target. Once a PBX is hacked, hackers
have the same access to a company’s phone system and computer network as do the employees.

Companies should install software to monitor service usage at various points on the network,
including the VOIP gatekeeper, VOIP media controller, and broadcast server. The software can
monitor the system packet performance and the router applications on the converged network. The
software can also automatically alert the responsible person if any abnormal activities have been
detected.

i. Theft of Proprietary Information


Information is a commodity in the e-commerce era, and there are always buyers for sensitive
information, including customer data, credit card information, and trade secrets. Data theft by an
insider is common when access controls are not implemented. Outside hackers can also use “Trojan”
viruses to steal information from unprotected systems. Beyond installing firewall and anti-virus
software to secure systems, a company should encrypt all of its important data.

Access privileges and data encryption are good preventive controls against data theft by unauthorized
employees who steal for personal gain. Access controls include traditional passwords, smart-card
security, and more sophisticated biometric security devices. Companies can implement some
appropriate controls, including limiting access to proprietary information to authorized employees,
controlling access where proprietary information is available, and conducting background checks on
employees who will have access to proprietary information. There will, however, always be some risk
that authorized employees will misuse data they have access to in the course of their work. Companies
can also work with an experienced intellectual property attorney, and require employees to sign
noncompete and nondisclosure agreements.


j. Financial Fraud
The nature of financial fraud has changed over the years with information technology. System-based
financial fraud includes scam e-mails, identity theft, and fraudulent transactions. With spam, con
artists can send scam e-mails to thousands of people in hours. Victims of the so-called 419 scam are
often promised a lottery winning or a large sum of unclaimed money sitting in an offshore bank
account, but they must pay a “fee” first to get their shares. Anyone who gets this kind of e-mail is
recommended to forward a copy to the U.S. Secret Service.

Companies should review bank statements as soon as they arrive and report any suspicious or
unauthorized electronic transactions. Under the Electronic Fund Transfer Act, if victims notify the
bank of an unauthorized transaction within 60 days of the date the statement is delivered, they are not
liable for any loss. Otherwise, victims could lose all the money in their account, and the unused
portion of the maximum line of credit established for overdrafts.

Phishing is a form of identity theft. Spam is sent claiming to be from an individual’s bank or credit
union or a reputable e-commerce organization. The e-mail urges the recipient to click on a link to
update their personal data. The link takes the victim to a fake website designed to elicit personal or
financial information and transmit it to the criminals.

Users should never give out credit card numbers, PINs, or any personal information in response to
unsolicited e-mail. Instead of clicking a link in a suspicious e-mail, call the office or type a URL that is
known to be legitimate to verify an e-mail that claims to be from a bank or financial institution. When
submitting sensitive financial and personal information over the Internet, make sure the server uses the
Secure Sockets Layer protocol (the URL should begin with https:// instead of the typical http://).
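The advice above, that sensitive data should be submitted only over HTTPS, can also be enforced programmatically before a form is posted. A minimal sketch (the URLs are hypothetical):

```python
from urllib.parse import urlparse

def is_secure_submission(url):
    """True only if the URL uses the https scheme (SSL/TLS-protected)."""
    return urlparse(url).scheme == "https"

safe = is_secure_submission("https://bank.example.com/login")
unsafe = is_secure_submission("http://bank.example.com/login")
```

Note that this check only confirms the scheme; it does not prove the site is genuine, which is why users should also verify the domain itself rather than trusting links in unsolicited e-mail.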

k. Misuse of Public Web Applications


The nature of e-commerce—convenience and flexibility—makes Web applications vulnerable and
easily abused. Hackers can circumvent traditional network firewalls and intrusion-prevention systems
and attack web applications directly. They can inject commands into databases via the web
application user interfaces and surreptitiously steal data, such as customer and credit card information.

User authentication is the foundation of Web application security, and inadequate authentication may
make applications vulnerable. Companies must install a Web application firewall to ensure that all
security policies are closely followed. The following additional controls can mitigate Web application
abuses:

i. Installing security patches promptly.


ii. Using a Web application scanner to discover any vulnerability.
iii. Monitoring the server and applications to identify any potential problems and terminate
malicious requests.
iv. Hiding information that end users do not need to know, including the server machine type and
the operating system.

l. Website Defacement
Website defacement is the sabotage of webpages by hackers inserting or altering information. The
altered webpages may mislead unknowing users and represent negative publicity that could affect a
company’s image and credibility. Web defacement is in essence a system attack, and the attackers
often take advantage of undisclosed system vulnerabilities or unpatched systems.

Network firewalls cannot guard against all web vulnerabilities. Companies should install additional
Web application security to mitigate the defacement risk. All known vulnerabilities must be patched
to prevent unauthorized remote command execution and privilege escalation. It is also important that
only a few authorized users are allowed root access to a website’s contents. Access to different Web
server resources, such as executables, processes, data files, and configuration files, should be
monitored. Commercial website monitoring services are also available.
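Monitoring Web server resources for unauthorized changes can be as simple as comparing current file digests against a trusted baseline. The following is a minimal Python sketch; the page contents are hypothetical, and real monitoring services add scheduling, alerting, and tamper-proof baseline storage.

```python
import hashlib

def digest(content):
    """SHA-256 fingerprint of a resource's byte content."""
    return hashlib.sha256(content).hexdigest()

# Baseline recorded while the site is known to be clean (contents hypothetical).
baseline = {"index.html": digest(b"<h1>Welcome</h1>")}

def is_defaced(path, current_content):
    """True if the page no longer matches its recorded baseline digest."""
    return digest(current_content) != baseline[path]

clean = is_defaced("index.html", b"<h1>Welcome</h1>")
defaced = is_defaced("index.html", b"<h1>Hacked!</h1>")
```

Because any insertion or alteration changes the digest, even a one-character defacement is detected, provided the baseline itself is stored where attackers cannot rewrite it.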

m. Sabotage
System security incidents are committed by insiders about as often as by outsiders. Some of the
controls discussed above can provide protection against the sabotages committed by outsiders, but no
organization is immune from an employee abusing its trust. For example, Omega Engineering was a
thriving defense manufacturing firm in the 1990s; it used more than 1,000 programs to produce
various products with 500,000 different designs for its customers, including NASA and the U.S.
Navy. On July 31, 1996, Omega Engineering’s server crashed and all of the software programs were
lost. To make matters worse, on the same day the backup tape also disappeared. The investigation
quickly revealed that it was a deliberate sabotage by the former system administrator, Tim Lloyd, who
had been terminated 30 days before the catastrophe. Lloyd designed and planted a time bomb to erase
all the programs on the server. The crash resulted in $10 million in lost revenues and led to 80 layoffs.

When it comes to security, companies often pay attention only to the perimeter of the organization,
not the inside. Sabotage by insiders is often orchestrated when employees know their termination is
coming. In some cases, disgruntled employees are still able to gain access after being terminated. The
2005 insider-threat case study results by CERT/SEI (www.cert.org/archive/pdf/inside
cross051105.pdf) help identify, assess, and manage sabotage threats from insiders. Their key findings
were as follows:

 The majority of insiders planned their activities in advance.
 A negative work-related event (e.g., firing, downsizing, or being passed over for promotion)
triggered most insiders' actions.
 Most of the insiders had acted out in the workplace.
 Less than half of all of the insiders had authorized access at the time of the incident.
 Insiders used unsophisticated methods for exploiting systemic vulnerabilities in applications,
processes, or procedures, but relatively sophisticated attack tools were also employed.
 The majority of insiders compromised computer accounts, created unauthorized back-door
accounts, or used shared accounts.
 Remote access was used to carry out the majority of the attacks.
 The majority of the insider attacks were detected only after the damage was already done.
 System logs were the most prevalent means by which the insiders were identified.

As indicated by the CERT/SEI study, the convenience of remote access facilitates the majority of
sabotage attacks. Another potential threat of unauthorized use is when employees quit or are
terminated but there is no coordination between the personnel department and the computer center. In
some cases, employees still have system access and an e-mail account after they have left an
organization. It is also not unusual that employees know the user IDs and passwords of their
colleagues. Companies can adopt some of the following steps to protect against such threats:


 Disable an employee’s system access promptly.
 Enforce a company-wide password change on a regular basis, including the day an employee
resigns or is terminated. (This control is not feasible with huge organizations, because people
leave every day.)
 Use biometric access control if possible.
 Obtain the password and encryption code to an employee’s laptop or encrypted files on the
server.
 Maintain a system activity log as a detective control. (The creation of an activity log, however,
can increase system overhead, especially for larger organizations.)
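A system activity log of the kind listed above can be sketched with Python's standard `logging` module. The event names, user IDs, and in-memory destination are illustrative; a production audit log would write to an append-only, access-controlled store so that attackers cannot erase their tracks.

```python
import logging
from io import StringIO

# Route audit events to an in-memory stream for demonstration only.
stream = StringIO()
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler(stream))

def log_event(user, action, resource):
    """Record who did what to which resource, for later review."""
    audit.info("user=%s action=%s resource=%s", user, action, resource)

log_event("tlloyd", "DELETE", "/programs/master")
log_event("jdoe", "LOGIN", "server-01")

log_text = stream.getvalue()
```

As the CERT/SEI findings noted, system logs were the most prevalent means by which insiders were identified, which is why the log itself must be protected from the very users it records.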

n. Company Awareness
Business operations can be disrupted by many factors, including system security breaches. System
downtime, system penetrations, theft of computing resources, and lost productivity have quickly
become critical system security issues. The financial loss of these security breaches can be significant.
In addition, system security breaches often taint a company’s image and may compromise a
company’s compliance with applicable laws and regulations. The key to protecting a company’s
accounting information system against security breaches is to be well prepared for all possible major
threats. A combination of preventive and detective controls can mitigate security threats.

Security controls
These include:
1. Administrative controls – they include
a. Policies – a policy can be seen as a mechanism for controlling security
b. Administrative procedures – may be put in place by an organization to ensure that users
only do what they have been authorized to do
c. Legal provisions – serve as security controls and discourage some forms of physical
threats
d. Ethics
2. Logical security controls – measures incorporated within the system to provide protection
from adversaries who have already gained physical access
3. Physical controls – any mechanism that has a physical form e.g. lockups
4. Environmental controls

Administering security
It includes:
 Risk analysis
 Security planning – a security plan identifies and organizes the security activities of an
organization.
 Security policy

Risk Analysis
The process involves:
- Identification of the assets
- Determination of the vulnerabilities
- Estimate the likelihood of exploitation


- Computation of expected annual loss
- Survey of applicable controls and their costs
- Projection of annual savings
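The expected-annual-loss and annual-savings steps above are commonly computed as the annualized loss expectancy (ALE): single loss expectancy (asset value × exposure factor) multiplied by the annualized rate of occurrence. A minimal Python sketch with hypothetical figures:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE x ARO, where SLE = asset value x exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

# Hypothetical: a $200,000 server, 25% damaged per incident, 2 incidents/year.
ale = annualized_loss_expectancy(200_000, 0.25, 2)

# A control costing $40,000/year is expected to cut incidents to 0.5/year.
control_cost = 40_000
reduced_ale = annualized_loss_expectancy(200_000, 0.25, 0.5)
annual_savings = ale - reduced_ale - control_cost
```

A control is economically justified when the projected annual savings are positive, i.e. when the reduction in ALE it buys exceeds its annual cost.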

Security Policy
Security failures can be costly to business. Losses may be suffered as a result of the failure itself or
costs can be incurred when recovering from the incident, followed by more costs to secure systems and
prevent further failure. A well-defined set of security policies and procedures can prevent losses and
save money.

The information systems security policy is the responsibility of top management of an organization
who delegate its implementation to the appropriate level of management with permanent control. The
policy contributes to the protection of information assets. Its objective is to protect the information
capital against all types of risks, accidental or intentional. An existing and enforced security policy
should ensure systems conformity with laws and regulations, integrity of data, confidentiality and
availability.

Key components of such a policy include the following:


 Management support and commitment – management should approve and support formal
security awareness and training.
 Access philosophy – access to computerized information should be based on a documented
‘need-to-know, need-to-do’ basis.
 Compliance with relevant legislation and regulations
 Access authorization – the data owner or manager responsible for the accurate use and
reporting of the information should provide written authorization for users to gain access to
computerized information.
 Reviews of access authorization – like any other control, access controls should be evaluated
regularly to ensure they are still effective.
 Security awareness – all employees, including management, need to be made aware on a
regular basis of the importance of security. A number of different mechanisms are available
for raising security awareness including:
 Distribution of a written security policy
 Training on a regular basis of new employees, users and support staff
 Non-disclosure statements signed by employees
 Use of different media in promulgating security e.g. company newsletter, web
page, videos etc.
 Visible enforcement of security rules
 Simulate security incidents for improving security procedures
 Reward employees who report suspicious events
 Periodic audits

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 288

SYSTEM INTEGRITY
System integrity begins with selecting and deploying the right hardware and software components to
authenticate a user’s identity—and help prevent others from assuming it. In doing so, it needs to offer
efficient administrative functions to restrict access to administrator-level functions, and give
administrators processes and controls to manage changes to the system. There are many individual
components to system integrity, such as vulnerability assessment, antivirus, and anti-malware
solutions. However, the ultimate goal from an access control standpoint is to prevent the installation
and execution of malicious code—while protecting valuable data—from the outset.

Data Integrity Testing


Data integrity testing is a series of substantive tests that examines accuracy, completeness, consistency
and authorization of data holdings. It employs testing similar to that used for input control. Data
integrity tests will indicate failures in input or processing controls. Controls for ensuring the integrity
of accumulated data on a file can be exercised by checking data on the file regularly. When this
checking is done against authorized source documentation, it is usual to check only a portion of the
file at a time. Since the whole file is regularly checked in cycles, the control technique is often referred
to as cyclical checking. Data integrity can be tested against the following definitions:

(i) Domain integrity
This testing verifies that the data conform to their definitions; that is, that the data items are all in the correct domains. The major objective of this exercise is to verify that edit and validation routines are working satisfactorily. These tests are field-level based and ensure that each data item has a legitimate value in the correct range or set.
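As a sketch (not from the study text), a domain integrity test can be expressed as a set of field-level rules applied to each record; the field names and rules below are illustrative assumptions:

```python
# Illustrative field-level domain integrity check: each rule defines the
# legal set or range for one field, and violations are reported by name.
DOMAIN_RULES = {
    "gender": lambda v: v in {"M", "F"},
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "grade": lambda v: v in {"A", "B", "C", "D", "E"},
}

def domain_violations(record):
    """Return the names of fields whose values fall outside their domain."""
    return [field for field, rule in DOMAIN_RULES.items()
            if field in record and not rule(record[field])]
```

Running such a check over a sample of the file on each cycle is one way to implement the cyclical checking described above.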

(ii) Relational integrity
These tests are performed at the record based level and usually involve calculating and verifying
various calculated fields such as control totals. Examples of their use would be in checking aspects
such as payroll calculations or interest payments. Computerized data frequently have control totals
built into various fields and by the nature of these fields, they are computed and would be subject to
the same type of tests. These tests will also detect direct modification of sensitive data i.e. if someone
has bypassed application programs, as these types of data are often protected with control totals.
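One way to sketch a record-level (relational) integrity test is to recompute a stored, calculated field and flag any mismatch; the payroll field names below are hypothetical:

```python
# Recompute a stored control field (gross pay) from its source fields and
# flag records whose stored value does not match the recalculated one --
# a mismatch may indicate direct modification that bypassed the application.
def relational_violations(payroll_records):
    bad = []
    for rec in payroll_records:
        expected = round(rec["hours"] * rec["rate"], 2)
        if rec["gross_pay"] != expected:
            bad.append(rec["employee_id"])
    return bad
```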

(iii) Referential integrity
Database software will sometimes offer various procedures for checking or ensuring referential
integrity (mainly offered with hierarchical and network-based databases). Referential integrity checks
involve ensuring that all references to a primary key from another file (called foreign key) actually
exist in their original file. In non-pointer databases e.g. relational databases, referential integrity checks
involve making sure that all foreign keys exist in their original table.
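A referential integrity check for a relational layout can be sketched as follows; the table and field names are illustrative:

```python
# Every foreign key value in the child table must exist as a primary key
# in the parent table; values with no matching parent row are "dangling".
def dangling_foreign_keys(child_rows, fk_field, parent_keys):
    """Return foreign key values that have no matching parent row."""
    parent = set(parent_keys)
    return sorted({row[fk_field] for row in child_rows
                   if row[fk_field] not in parent})
```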

Security in Operating System: Access Control Security Function

This is a function implemented at the operating system level and usually also availed at the application
level by the operating system. It controls access to the system and system resources so that only
authorized accesses are allowed, e.g.

 Protect the system from access by intruders
 Protect system resources from unauthorized access by otherwise legitimate system users
 Protect each user from inadvertent or malicious interference from another

It is a form of logical access control, which involves protection of resources from users who have
physical access to the computer system.

The access control reference monitor model has a reference monitor, which intercepts all access
attempts. It is always invoked when the target object is referenced and decides whether to deny or grant
requests as per the rules incorporated within the monitor.

The components of an access control system can be categorized into identification, authentication and
authorization components. Typical operating system based access control mechanisms are:

 User identification and authentication
 Access control to the system’s general objects, e.g. files and devices
 Memory protection – prevent one program from interfering with another, i.e. any form of unauthorized access to another program’s memory space.

Identification
This involves establishing the identity of the subject (Who are you?). Identification can use:
- ID, full name
- Workstation ID, IP address
- Magnetic card (requires a reader)
- Smart card (inbuilt intelligence and computation capability)

Biometrics is the identification based on unique physical or behavioural patterns of people and may
be:

 Physiological systems – something you are, e.g. fingerprints
 Behavioural systems – how you work

Biometric systems are quite effective when thresholds are sensible (there is a substantial difference between any two people) and the physical condition of the person is normal (comparable to when the reference measurement was first taken). However, they require expensive equipment and are still rare. Buyers are also deterred by the risk of impersonation or the belief that the devices will be difficult to use, and users often dislike being measured.

Authentication
This involves verifying the identity of the subject (Are you who you say you are? Prove it!). Personal
authentication may involve:
- Something you know: password, PIN, code phrase
- Something you have: keys, tokens, cards, smart cards
- Something you are: fingerprints, retina patterns, voice patterns


- The way you work: handwriting (signature), keystroke patterns
- Something you know: questions about your background, favourite colour, pet’s name etc.

Authorization
Involves determining the access rights to various system objects/resources. The security requirement
to be addressed is the protection against unauthorized access to system resources. There is need to
define an authorization policy as well as implementation mechanisms. An authorization policy defines
activities permitted or prohibited within the system. Authorization mechanisms implement the
authorization policy and includes directory of access rights, access control lists (ACL) and access
tickets or capabilities.
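One of the mechanisms named above, the access control list, can be sketched in a few lines; the objects, subjects and rights shown are hypothetical, and real systems add groups, inheritance and administrative controls:

```python
# An ACL kept per object: each entry maps a subject to the set of rights
# it is granted on that object. Anything not explicitly granted is denied.
ACL = {
    "payroll.dat": {"alice": {"read", "write"}, "bob": {"read"}},
    "audit.log":   {"auditor": {"read"}},
}

def authorized(subject, obj, right):
    """Grant only when the ACL explicitly allows the right (default deny)."""
    return right in ACL.get(obj, {}).get(subject, set())
```

The default-deny behaviour mirrors the ‘need-to-know, need-to-do’ access philosophy described earlier.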

Logical Security
Logical access into the computer can be gained through several avenues. Each avenue is subject to
appropriate levels of access security. Methods of access include the following:

1. Operator console
These are privileged computer terminals which control most computer operations and functions. To
provide security, these terminals should be located in a suitably controlled location so that physical
access can only be gained by authorized personnel. Most operator consoles do not have strong logical
access controls and provide a high level of computer system access; therefore, the terminal must be
located in a physically secured area.

2. Online terminals
Online access to computer systems through terminals typically requires entry of at least a logon-identifier (logon-ID) and a password to gain access to the host computer system and may also require
further entry of authentication data for access to application specific systems. Separate security and
access control software may be employed on larger systems to improve the security provided by the
operating system or application system.

3. Batch job processing


This mode of access is indirect since access is achieved via processing of transactions. It generally
involves accumulating input transactions and processing them as a batch after a given interval of time
or after a certain number of transactions have been accumulated. Security is achieved by restricting
who can accumulate transactions (data entry clerks) and who can initiate batch processing (computer
operators or the automatic job scheduling system).

4. Dial-up ports
Use of dial-up ports involves hooking a remote terminal or PC to a telephone line and gaining access
to the computer by dialling a telephone number that is directly or indirectly connected to the computer.
Often a modem must interface between the remote terminal and the telephone line to encode and
decode transmissions. Security is achieved by providing a means of identifying the remote user to
determine authorization to access. This may be a dial-back line, use of logon-ID and access control
software or may require a computer operator to verify the identity of the caller and then provide the
connection to the computer.

5. Telecommunications network
Telecommunications networks link a number of computer terminals or PCs to the host computer
through a network of telecommunications lines. The lines can be private (i.e. dedicated to one user) or
public such as a nation’s telephone system. Security should be provided in the same manner as that
applied to online terminals.

Logical Access Issues and Exposures

Inadequate logical access controls increase an organization’s potential for losses resulting from
exposures. These exposures can result in minor inconveniences or total shutdown of computer
functions. Logical access controls reduce exposure to unauthorized alteration and manipulation of data
and programs. Exposures that exist from accidental or intentional exploitation of logical access control
weaknesses include technical exposures and computer crime.

Technical Exposures

This is the unauthorized (intentional or unintentional) implementation or modification of data and software. Some of the technical exposures include:

1. Data diddling involves changing data before or as it is being entered into the computer. This is
one of the most common abuses because it requires limited technical knowledge and occurs before
computer security can protect data.
2. Trojan horses involve hiding malicious, fraudulent code in an authorized computer program. This
hidden code will be executed whenever the authorized program is executed. A classic example is
the Trojan horse in the payroll-calculating program that shaves a barely noticeable amount off each
paycheck and credits it to the perpetrator’s payroll account.
3. Rounding down involves drawing off small amounts of money from a computerized transaction
or account and rerouting this amount to the perpetrator’s account. The term ‘rounding down’ refers
to rounding small fractions of a denomination down and transferring these small fractions into the
unauthorized account. Since the amounts are so small, they are rarely noticed.
4. Salami techniques involve the slicing of small amounts of money from a computerized transaction
or account and are similar to the rounding down technique. The difference between them is that in
rounding down the program rounds off by the cent. For example, if a transaction amount was
234.39 the rounding down technique may round the transaction to 234.35. The salami technique
truncates the last few digits from the transaction amount so 234.39 become 234.30 or 234.00
depending on the calculation built into the program.
5. Viruses are malicious program code inserted into other executable code that can self-replicate and
spread from computer to computer, via sharing of computer diskettes, transfer of logic over
telecommunication lines or direct contact with an infected machine or code. A virus can harmlessly
display cute messages on computer terminals, dangerously erase or alter computer files or simply
fill computer memory with junk to a point where the computer can no longer function. An added
danger is that a virus may lie dormant for some time until triggered by a certain event or occurrence,
such as a date (1 January – Happy New Year!) or being copied a pre-specified number of times.
During this time the virus has silently been spreading.


6. Worms are destructive programs that may destroy data or utilize tremendous computer and
communication resources but do not replicate like viruses. Such programs do not change other
programs, but can run independently and travel from machine to machine across network
connections. Worms may also have portions of themselves running on many different machines.
7. Logic bombs are similar to computer viruses, but they do not self-replicate. The creation of logic
bombs requires some specialized knowledge, as it involves programming the destruction or
modification of data at a specific time in the future. However, unlike viruses or worms, logic bombs
are very difficult to detect before they blow up; thus, of all the computer crime schemes, they have
the greatest potential for damage. Detonation can be timed to cause maximum damage and to take
place long after the departure of the perpetrator. The logic bomb may also be used as a tool of
extortion, with a ransom being demanded in exchange for disclosure of the location of the bomb.
8. Trap doors are exits out of an authorized program that allow insertion of specific logic, such as
program interrupts, to permit a review of data during processing. These holes also permit insertion
of unauthorized logic.
9. Asynchronous attacks occur in multiprocessing environments where data move asynchronously
(one character at a time with a start and stop signal) across telecommunication lines. As a result,
numerous data transmissions must wait for the line to be free (and flowing in the proper direction)
before being transmitted. Data that is waiting is susceptible to unauthorized accesses called
asynchronous attacks. These attacks, which are usually very small pinlike insertions into cable,
may be committed via hardware and are extremely hard to detect.
10. Data leakage involves siphoning or leaking information out of the computer. This can involve
dumping files to paper or can be as simple as stealing computer reports and tapes.
11. Wire-tapping involves eavesdropping on information being transmitted over telecommunications
lines.
12. Piggybacking is the act of following an authorized person through a secured door or electronically
attaching to an authorized telecommunication link to intercept and possibly alter transmissions.
13. Shut down of the computer can be initiated through terminals or microcomputers connected
directly (online) or indirectly (dial-up lines) to the computer. Only individuals knowing a high-
level systems logon-ID can usually initiate the shut down process. This security measure is
effective only if proper security access controls are in place for the high-level logon-ID and the
telecommunications connections into the computer. Some systems have proven to be vulnerable
to shutting themselves down under certain conditions of overload.
14. Denial of service is an attack that disrupts or completely denies service to legitimate users,
networks, systems or other resources. The intent of any such attack is usually malicious in nature
and often takes little skill because the requisite tools are readily available.
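The rounding down and salami techniques (items 3 and 4 above) can be made concrete with exact decimal arithmetic; this sketch illustrates only the fraud mechanics, not any real program:

```python
from decimal import Decimal, ROUND_FLOOR

def round_down_to_5_cents(amount):
    """Rounding down: e.g. 234.39 -> 234.35 (down to the nearest 5 cents)."""
    step = Decimal("0.05")
    return (amount / step).to_integral_value(rounding=ROUND_FLOOR) * step

def salami_truncate(amount, keep_digits=1):
    """Salami: truncate trailing digits, e.g. 234.39 -> 234.30."""
    q = Decimal(10) ** -keep_digits
    return amount.quantize(q, rounding=ROUND_FLOOR)

def shaved(amount, technique):
    """The fraction a fraudulent program would divert to its own account."""
    return amount - technique(amount)
```

Because the shaved amounts are a few cents at most, they escape casual review, which is exactly why these schemes succeed.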

Viruses
Viruses are a significant and a very real logical access issue. The term virus is a generic term applied
to a variety of malicious computer programs. Traditional viruses attach themselves to other executable
code, infect the user’s computer, replicate themselves on the user’s hard disk and then damage data,
hard disk or files. Viruses usually attack four parts of the computer:

 Executable program files
 File-directory system that tracks the location of all the computer’s files


 Boot and system areas that are needed to start the computer
 Data files

Control over viruses

Computer viruses are a threat to computers of any type. Their effects can range from the annoying but
harmless prank to damaged files and crashed networks. In today’s environment, networks are the ideal
way to propagate viruses through a system. The greatest risk is from electronic mail (e-mail)
attachments from friends and/or anonymous people through the Internet. There are two major ways to
prevent and detect viruses that infect computers and network systems.

 Having sound policies and procedures in place
 Technical means, including anti-virus software

Policies and procedures

Some of the policy and procedure controls that should be in place are:
 Build any system from original, clean master copies. Boot only from original diskettes whose
write protection has always been in place.
 Allow no disk to be used until it has been scanned on a stand-alone machine that is used for
no other purpose and is not connected to the network.
 Update virus software scanning definitions frequently
 Write-protect all diskettes with .EXE or .COM extensions
 Have vendors run demonstrations on their machines, not yours
 Enforce a rule of not using shareware without first scanning the shareware thoroughly for a
virus
 Commercial software is occasionally supplied with a Trojan horse (viruses or worms). Scan
before any new software is installed.
 Insist that field technicians scan their disks on a test machine before they use any of their disks
on the system
 Ensure that the network administrator uses workstation and server anti-virus software
 Ensure that all servers are equipped with an activated current release of the virus detection
software
 Create a special master boot record that makes the hard disk inaccessible when booting from
a diskette or CD-ROM. This ensures that the hard disk cannot be contaminated by the diskette
or optical media
 Consider encrypting files and then decrypt them before execution
 Ensure that bridge, route and gateway updates are authentic. This is a very easy way to place
and hide a Trojan horse.
 Backups are a vital element of anti-virus strategy. Be sure to have a sound and effective backup
plan in place. This plan should account for scanning selected backup files for virus infection
once a virus has been detected.
 Educate users so they will heed these policies and procedures
 Review anti-virus policies and procedures at least once a year


 Prepare a virus eradication procedure and identify a contact person.

Technical means
Technical methods of preventing viruses can be implemented through hardware and software means.

The following are hardware tactics that can reduce the risk of infection:
 Use workstations without floppy disks
 Use boot virus protection (i.e. built-in firmware based virus protection)
 Use remote booting
 Use a hardware based password
 Use write protected tabs on floppy disks

Software is by far the most common anti-virus tool. Anti-virus software should primarily be used as a
preventative control. Unless updated periodically, anti-virus software will not be an effective tool
against viruses.

The best way to protect the computer against viruses is to use anti-viral software. There are several
kinds. Two types of scanners are available:

 One checks to see if your computer has any files that have been infected with known viruses
 The other checks for atypical instructions (such as instructions to modify operating system files)
and prevents completion of the instruction until the user has verified that it is legitimate.

Once a virus has been detected, an eradication program can be used to wipe the virus from the hard
disk. Sometimes eradication programs can kill the virus without having to delete the infected program
or data file, while other times those infected files must be deleted. Still other programs, sometimes
called inoculators, will not allow a program to be run if it contains a virus.

There are three different types of anti-virus software:


a) Scanners look for sequence of bits called signatures that are typical of virus programs.
Scanners examine memory, disk boot sectors, executables and command files for bit
patterns that match a known virus. Scanners therefore need to be updated periodically to
remain effective.
b) Active monitors interpret DOS and ROM basic input-output (BIOS) calls, looking for
virus like actions. Active monitors can be annoying because they cannot distinguish
between a user request and a program or virus request. As a result, users are asked to
confirm actions like formatting a disk or deleting a file or set of files.
c) Integrity checkers compute a binary number on a known virus-free program that is then
stored in a database file. The number is called a cyclic redundancy check or CRC. When
that program is called to execute, the checker computes the CRC on the program about to
be executed and compares it to the number in the database. A match means no infection;
a mismatch means that a change in the program has occurred. A change in the program
could mean a virus within it. Integrity checkers take advantage of the fact that executable
programs and boot sectors do not change very often, if at all.
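The integrity checker idea can be sketched as follows, assuming a SHA-256 digest in place of the CRC (the baseline-versus-recomputed principle is the same; the names are illustrative, not from any real product):

```python
import hashlib

def fingerprint(program_bytes):
    """Compute a digest of a program's bytes (stands in for the CRC)."""
    return hashlib.sha256(program_bytes).hexdigest()

def build_baseline(programs):
    """Record a digest for every known virus-free program."""
    return {name: fingerprint(code) for name, code in programs.items()}

def check(baseline, name, current_bytes):
    """True means no change detected; False may indicate infection."""
    return baseline.get(name) == fingerprint(current_bytes)
```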

Computer Crime Exposures


Computer systems can be used to steal money, goods, software or corporate information. Crimes also
can be committed when the computer application process or data are manipulated to accept false or
unauthorized transactions. There also is the simple, non-technical method of computer crime by
stealing computer equipment.

Computer crime can be performed with absolutely nothing physically being taken or stolen. Simply
viewing computerized data can provide an offender with enough intelligence to steal ideas or
confidential information (intellectual property).
Committing crimes that exploit the computer and the information it contains can be damaging to the
reputation, morale and very existence of an organization. Loss of customers, embarrassment to
management and legal actions against the organization can be a result.

Threats to business include the following:


 Financial loss – these losses can be direct, through loss of electronic funds or indirect, through the
costs of correcting the exposure.

 Legal repercussions – there are numerous privacy and human rights laws an organization should
consider when developing security policies and procedures. These laws can protect the
organization but can also protect the perpetrator from prosecution. In addition, not having proper
security measures could expose the organization to lawsuits from investors and insurers if a
significant loss occurs from a security violation. Most companies also must comply with industry-
specific regulatory agencies.

 Loss of credibility or competitive edge – many organizations, especially service firms such as
banks, savings and loans and investment firms, need credibility and public trust to maintain a
competitive edge. A security violation can severely damage this credibility, resulting in loss of
business and prestige.

 Blackmail/Industrial espionage – by gaining access to confidential information or the means to adversely impact computer operations, a perpetrator can extort payments or services from an organization by threatening to exploit the security breach.

 Disclosure of confidential, sensitive or embarrassing information – such events can damage an organization’s credibility and its means of conducting business. Legal or regulatory actions against the company may also be the result of disclosure.

 Sabotage – some perpetrators are not looking for financial gain. They merely want to cause
damage due to dislike of the organization or for self-gratification.

Logical access violators are often the same people who exploit physical exposures, although the skills
needed to exploit logical exposures are more technical and complex. Such people include:


a) Hackers – hackers are typically attempting to test the limits of access restrictions to prove their
ability to overcome the obstacles. They usually do not access a computer with the intent of
destruction; however, this is quite often the result.
b) Employees – both authorized and unauthorized employees
c) Information system personnel – these individuals have the easiest access to computerized
information since they are the custodians of this information. In addition to logical access
controls, good segregation of duties and supervision help reduce logical access violations by
these individuals.
d) End users
e) Former employees
f) Interested or educated outsiders
 Competitors
 Foreigners
 Organized criminals
 Crackers (hackers paid by a third party)
 Phreakers (hackers attempting access into the telephone/communication system)
 Part-time and temporary personnel – remember that office cleaners often have a great
deal of physical access and may well be competent in computing
 Vendors and consultants
 Accidental ignorant – someone who unknowingly perpetrates a violation

Access Control Software


Access control software is designed to prevent unauthorized access to data, use of system functions
and programs, unauthorized updates/changes to data and to detect or prevent an unauthorized attempt
to access computer resources. Access control software interfaces with the operating system and acts as
a central control for all security decisions. The access control software functions under the operating
system software and provides the capability of restricting access to data processing resources either
online or in batch processing.

Access control software generally performs the following tasks:


 Verification of the user
 Authorization of access to defined resources
 Restriction of users to specific terminals
 Reports on unauthorized attempts to access computer resources, data or programs

Access control software generally processes access requests in the following way:
 Identification of users – users must identify themselves to the access control software such as
name and account number
 Authentication – users must prove that they are who they claim to be. Authentication is a two-way process where the software must first verify the validity of the user and then proceed to
verify prior knowledge information. For example, users may provide the following
information:


a) Remembered information such as name, account number and password
b) Possessed objects such as a badge, plastic card or key
c) Personal characteristics such as fingerprint, voice and signature

Logical Security Features, Tools and Procedures


Some of the logical security features, tools and procedures include:
1) Logon-IDs and passwords
This two-phase user identification/authentication process based on something you know can be used
to restrict access to computerized information, transactions, programs and system software. The
computer can maintain an internal list of valid logon-IDs and a corresponding set of access rules for
each logon-ID. These access rules identify the computer resources the user of the logon-ID can access
and constitute the user’s authorization.
The logon-ID provides individual identification, and each user gets a unique logon-ID that can be identified by the system. The format of logon-IDs is typically standardized. The password provides individual authentication. Identification/authentication is a two-step process by which the computer
system first verifies that the user has a valid logon-ID (user identification) and then requires the user
to substantiate his/her validity via a password.
Passwords have the following features:
 A password should be easy to remember but difficult for a perpetrator to guess.
 Initial password assignment should be done discreetly by the security administrator. When the
user logs on for the first time, the system should force a password change to improve
confidentiality. Initial password assignments should be randomly generated and assigned
where possible on an individual and not a group basis. Accounts never used with or without
an initial password should be removed from the system.
 If the wrong password is entered a predefined number of times, typically three, the logon-ID
should be automatically and permanently deactivated (or at least for a significant period of
time).
 If a logon-ID has been deactivated because of a forgotten password, the user should notify the
security administrator. The administrator should then reactivate the logon-ID only after
verifying the user’s identification.
 Passwords should be internally one-way encrypted. Encryption is a means of encoding data
stored in a computer. This reduces the risk of a perpetrator gaining access to other users’
passwords (if the perpetrator cannot read and understand it, he cannot use it).
 Passwords should not be displayed in any form either on a computer screen when entered, on
computer reports, in index or card files or written on pieces of paper taped inside a person’s
desk. These are the first places a potential perpetrator will look.
 Passwords should be changed periodically. The best method is for the computer system to
force the change by notifying the user prior to the password expiration date.
 Password must be unique to an individual. If a password is known to more than one person,
the responsibility of the user for all activity within their account cannot be enforced.
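Two of the features above, one-way encrypted password storage and deactivation after a predefined number of wrong attempts, can be sketched as follows; the account structure and the salted SHA-256 scheme are illustrative assumptions, not a production design:

```python
import hashlib
import secrets

MAX_ATTEMPTS = 3
accounts = {}  # logon-ID -> {"salt", "digest", "failures", "active"}

def register(logon_id, password):
    """Store only a salted one-way hash, never the password itself."""
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    accounts[logon_id] = {"salt": salt, "digest": digest,
                          "failures": 0, "active": True}

def logon(logon_id, password):
    """Verify the password; deactivate the ID after repeated failures."""
    acct = accounts.get(logon_id)
    if acct is None or not acct["active"]:
        return False
    digest = hashlib.sha256((acct["salt"] + password).encode()).hexdigest()
    if digest == acct["digest"]:
        acct["failures"] = 0
        return True
    acct["failures"] += 1
    if acct["failures"] >= MAX_ATTEMPTS:
        acct["active"] = False  # reactivation requires the administrator
    return False
```

Because only the hash is stored, a perpetrator who reads the account file cannot recover the passwords, which is the point of one-way encryption.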

Password syntax (format) rules


 Ideally, passwords should be five to eight characters in length. Anything shorter is too easy to
guess, anything longer is too hard to remember.
 Passwords should allow for a combination of alpha, numeric, upper and lower case and special
characters.
 Passwords should not be particularly identifiable with the user (such as first name, last name,
spouse name, pet’s name etc). Some organizations prohibit the use of vowels, making word
association/guessing of passwords more difficult.
 The system should not permit previous password(s) to be used after being changed.
 Logon-IDs not used after a number of days should be deactivated to prevent possible misuse.
 The system should automatically disconnect a logon session if no activity has occurred for a
period of time (one hour). This reduces the risk of misuse of an active logon session left
unattended because the user went to lunch, left home, went to a meeting or otherwise forgot to
logoff. This is often referred to as ‘time out’.
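A minimal sketch of a checker for some of the syntax rules above (length, a mix of character classes, no reuse of previous passwords); which rules are enforced is a policy decision, and this combination is illustrative:

```python
def acceptable(password, previous=()):
    """Apply a sample of the password syntax rules described above."""
    return (5 <= len(password) <= 8                 # length rule
            and any(c.isalpha() for c in password)  # contains letters
            and any(c.isdigit() for c in password)  # contains digits
            and password not in previous)           # no reuse after change
```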

2) Logging computer access


With most security packages today, computer access and attempted access violations can be
automatically logged by the computer and reported. The frequency of the security administrator’s
review of computer access reports should be commensurate with the sensitivity of the computerized
information being protected.
The review should identify patterns or trends that indicate abuse of access privileges, such as
concentration on a sensitive application. It should also identify violations such as attempting computer
file access that is not authorized and/or use of incorrect passwords. The violations should be reported
and appropriate action taken.
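Part of this review can be automated; the sketch below flags logon-IDs with repeated failed attempts, assuming a simple (logon-ID, outcome) log format that is purely illustrative:

```python
from collections import Counter

def suspicious_ids(log_entries, threshold=3):
    """log_entries: (logon_id, outcome) pairs; outcome 'FAIL' or 'OK'.
    Return the logon-IDs whose failure count reaches the threshold."""
    fails = Counter(lid for lid, outcome in log_entries if outcome == "FAIL")
    return sorted(lid for lid, n in fails.items() if n >= threshold)
```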

3) Token devices, one-time passwords


A two-factor authentication technique, such as a microprocessor-controlled smart card, generates one-time passwords that are good for only one logon session. Users enter this password along with a
password they have memorized to gain access to the system. This technique involves something you
have (a device subject to theft) and something you know (a personal identification number). Such
devices gain their one time password status because of a unique session characteristic (e.g. ID or time)
appended to the password.
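A simplified sketch of how such a device might derive a one-time password from a shared secret and a unique session characteristic (here a counter, in an HOTP-style construction; the parameters are illustrative):

```python
import hashlib
import hmac

def one_time_password(shared_secret: bytes, counter: int, digits=6):
    """Derive a short numeric password valid for one session only."""
    mac = hmac.new(shared_secret, str(counter).encode(), hashlib.sha1).digest()
    return int.from_bytes(mac[:4], "big") % (10 ** digits)
```

Both the card and the host compute the same value for the current counter, so the password proves possession of the device without ever transmitting the shared secret.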
4) Biometric security access control
This control restricts computer access based on a physical feature of the user, such as a fingerprint or
eye retina pattern. A reader is utilized to interpret the individual’s biometric features before permitting
computer access. This is a very effective access control because it is difficult to circumvent, and
traditionally has been used very little as an access control technique. However due to advances in
hardware efficiencies and storage, this approach is becoming a more viable option as an access control
mechanism. Biometric access controls are also the best means of authenticating a user’s identity based
on something you are.
5) Terminal usage restraints

MANAGEMENT INFORMATION SYSTEMS


STUDY TEXT 299

 Terminal security – this security feature restricts the number of terminals that can
access certain transactions based on the physical/logical address of the terminal.
 Terminal locks – this security feature prevents turning on a computer terminal until a
key lock is unlocked by a turnkey or card key.
6) Dial-back procedures
When a dial-up line is used, access should be restricted by a dial-back mechanism. Dial-back interrupts
the telecommunications dial-up connection to the computer by dialling back the caller to validate user
authority.
7) Restrict and monitor access to computer features that bypass security
Generally, only system software programmers should have access to these features:
 Bypass Label Processing (BLP) – BLP bypasses computer reading of the file label. Since
most access control rules are based on file names (labels), this can bypass access security.
 System exits – this system software feature permits the user to perform complex system
maintenance, which may be tailored to a specific environment or company. They often exist
outside of the computer security system and thus are not restricted or reported in their use.
 Special system logon-IDs – these logon-IDs are often provided with the computer by the
vendor. The names can be easily determined because they are the same for all similar computer
systems. Passwords should be changed immediately upon installation to secure them.
8) Logging of online activity
Many computer systems can automatically log computer activity initiated through a logon-ID or
computer terminal. This is known as a transaction log. The information can be used to provide a
management/audit trail.
9) Data classification
Computer files, like documents have varying degrees of sensitivity. By assigning classes or levels of
sensitivity to computer files, management can establish guidelines for the level of access control that
should be assigned. Classifications should be simple, such as high, medium and low. End user
managers and the security administrator can then use these classifications to assist with determining
who should be able to access what.
A typical classification has four data classifications:
 Sensitive – applies to information that requires special precautions to assure the integrity of
the information, by protecting it from unauthorized modification or deletion. It is information
that requires a higher than normal assurance of accuracy and completeness e.g. passwords,
encryption parameters.
 Confidential – applies to the most sensitive business information that is intended strictly for
use within an organization. Its unauthorized disclosure could seriously and adversely impact
the organization’s image in the eyes of the public e.g. application program source code, project
documentation etc.

 Private – applies to personal information that is intended for use within the organization. Its
unauthorized disclosure could seriously and adversely impact the organization and/or its
customers e.g. customer account data, e-mail messages etc.
 Public – applies to data that can be accessed by the public but can be updated/deleted by
authorized people only e.g. company web pages, monetary transaction limit data etc.
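Management's use of such classifications to set access-control levels can be sketched as a simple lookup; the mapping of classes to control levels below is an assumed example, not a standard:

```python
# Assumed mapping from the four classifications to a control level.
CLASSIFICATION_CONTROLS = {
    "sensitive": "high",
    "confidential": "high",
    "private": "medium",
    "public": "low",
}

def required_control(classification):
    """Return the access-control level assigned to a classification."""
    return CLASSIFICATION_CONTROLS[classification.lower()]

print(required_control("Confidential"))  # high
print(required_control("public"))        # low
```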

10) Safeguards for confidential data on a PC


In today’s environment, it is not unusual to keep sensitive data on PCs and diskettes where it is more
difficult to implement logical and physical access controls.
Where possible, sensitive data should not be stored on a microcomputer. The simplest and most effective way to secure
data and software in a microcomputer is to remove the storage medium (such as the disk or tape) from
the machine when it is not in use and lock it in a safe. Microcomputers with fixed disk systems may
require additional security procedures for theft protection. Vendors offer lockable enclosures, clamping
devices and cable fastening devices that help prevent equipment theft. The computer can also be
connected to a security system that sounds an alarm if equipment is moved.
Passwords can also be allocated to individual files to prevent them being opened by an unauthorized
person, one not in possession of the password. All sensitive data should be recorded on removable hard
drives, which are more easily secured than fixed or floppy disks. Software can also be used to control
access to microcomputer data. The basic software approach restricts access to program and data files
with a password system. Preventative controls such as encryption become more important for
protecting sensitive data in the event that a PC or laptop is lost, stolen or sold.
11) Naming conventions for access controls
On larger mainframe and midrange systems, access control naming conventions are structures used to
govern user access to the system and user authority to access or use computer resources such as files,
programs and terminals. These general naming conventions and associated files are required in a
computer environment to establish and maintain personal accountability and segregation of duties in
the access of data. The need for sophisticated naming conventions over access controls depends on the
importance and level of security that is needed to ensure that unauthorized access has not been granted.

Physical Access Exposures


Exposures that exist from accidental or intentional violation of these access paths include:
 Unauthorized entry
 Damage, vandalism or theft to equipment or documents
 Copying or viewing of sensitive or copyrighted information
 Alteration of sensitive equipment and information
 Public disclosure of sensitive information
 Abuse of data processing resources
 Blackmail
 Embezzlement

Possible Perpetrators
 Employees with authorized or unauthorized access who are:

o Disgruntled (upset by or concerned about some action by the organization or its management)
o On strike
o Threatened by disciplinary action or dismissal
o Addicted to a substance or gambling
o Experiencing financial or emotional problems
o Notified of their termination
 Former employees
 Interested or informed outsiders such as competitors, thieves, organized crime and hackers
 The accidentally ignorant – someone who unknowingly perpetrates a violation (could be an
employee or outsider)

The most likely source of exposure is from the uninformed, accidental or unknowing person, although
the greatest impact may be from those with malicious or fraudulent intent.

From an information system perspective, facilities to be protected include the following:


 Programming area
 Computer room
 Operator consoles and terminals
 Tape library, tapes, disks and all magnetic media
 Storage room and supplies
 Offsite backup file storage facility
 Input/output control room
 Communication closet
 Telecommunication equipment (including radios, satellites, wiring, modems and external
network connections)
 Microcomputers and personal computers (PCs)
 Power sources
 Disposal sites
 Minicomputer establishments
 Dedicated telephones/Telephone lines
 Control units and front end processors
 Portable equipment (hand-held scanners and coding devices, bar code readers, laptop
computers and notebooks, printers, pocket LAN adapters and others)
 Onsite and remote printers
 Local area networks

Physical Access Controls


Physical access controls are designed to protect the organization from unauthorized access. They
reduce exposure to theft or destruction of data and hardware. These controls should limit access to only
those individuals authorized by management. This authorization may be explicit, as in a door lock for
which management has authorized you to have a key; or implicit, as in a job description that implies a
need to access sensitive reports and documents. Examples of some of the more common access controls
are:

 Bolting door locks – these locks require the traditional metal key to gain entry. The key should be
stamped ‘Do not duplicate’.

 Combination door locks (cipher locks) – this system uses a numeric keypad or dial to gain entry.
The combination should be changed at regular intervals or whenever an employee with access is
transferred, fired or subject to disciplinary action. This reduces the risk of the combination being
known by unauthorized people.

 Electronic door locks – this system uses a magnetic or embedded chip-based plastic card key or
token entered into a sensor reader to gain access. A special code internally stored in the card or
token is read by the sensor device that then activates the door locking mechanism. Electronic door
locks have the following advantages over bolting and combination locks:

o Through the special internal code, cards can be assigned to an identifiable individual.
o Through the special internal code and sensor devices, access can be restricted based on the
individual’s unique access needs. Restriction can be assigned to particular doors or to
particular hours of the day.
o They are difficult to duplicate.
o Card entry can be easily deactivated in the event an employee is terminated or a card is
lost or stolen. Silent or audible alarms can be automatically activated if unauthorized entry
is attempted. Issuing, accounting for and retrieving the card keys is an administrative
process that should be carefully controlled. The card key is an important item to retrieve
when an employee leaves the firm.

 Biometric door locks – an individual’s unique body features, such as voice, retina, fingerprint or
signature, activate these locks. This system is used in instances when extremely sensitive facilities
must be protected, such as in the military.

 Manual logging – all visitors should be required to sign a visitor’s log indicating their name,
company represented, reason for visiting and person to see. Logging typically is at the front
reception desk and entrance to the computer room. Before gaining access, visitors should also be
required to provide verification of identification, such as a driver’s license, business card or vendor
identification tag.

 Electronic logging – this is a feature of electronic and biometric security systems. All access can
be logged, with unsuccessful attempts being highlighted.

 Identification badges (photo IDs) – badges should be worn and displayed by all personnel.
Visitor badges should be a different colour from employee badges for easy identification.
Sophisticated photo IDs can also be utilized as electronic card keys. Issuing, accounting for and
retrieving the badges is an administrative process that must be carefully controlled.
 Video cameras – cameras should be located at strategic points and monitored by security guards.
Sophisticated video cameras can be activated by motion. The video surveillance recording should
be retained for possible future playbacks.

 Security guards – guards are very useful if supplemented by video cameras and locked doors.
Guards supplied by an external agency should be bonded to protect the organization from loss.

 Controlled visitor access – all visitors should be escorted by a responsible employee. Visitors
include friends, maintenance personnel, computer vendors, consultants (unless long-term, in which
case special guest access may be provided) and external auditors.

 Bonded personnel – all service contract personnel, such as cleaning people and off-site storage
services, should be bonded. This does not improve physical security but limits the financial
exposure of the organization.

 Deadman doors – this system uses a pair of (two) doors, typically found in entries to facilities
such as computer rooms and document stations. For the second door to operate, the first entry door
must close and lock, with only one person permitted in the holding area. This reduces risk of
piggybacking, when an unauthorized person follows an authorized person through a secured entry.

 Not advertising the location of sensitive facilities – facilities such as computer rooms should not
be visible or identifiable from the outside, that is, no windows or directional signs. The building
or department directory should discreetly identify only the general location of the information
processing facility.

 Computer terminal locks – these locks secure the device to the desk, prevent the computer from
being turned on or disengage keyboard recognition, preventing use.

 Controlled single entry point – a controlled entry point monitored by a receptionist should be
used by all incoming personnel. Multiple entry points increase the risk of unauthorized entry.
Unnecessary or unused entry points should be eliminated or deadlocked.

 Alarm system – an alarm system should be linked to inactive entry points, motion detectors and
the reverse flow of enter-only or exit-only doors. Security personnel should be able to hear the alarm
when activated.

 Secured report/document distribution cart – secured carts, such as mail carts, should be covered
and locked and should not be left unattended.

Personnel Issues
Employee responsibilities for security policy are:

 Reading the security policy and adhering to it


 Keeping logon-IDs and passwords secret
 Reporting suspected violations of security

 Maintaining good physical security by keeping doors locked, safeguarding access keys, not
disclosing access door lock combinations and questioning unfamiliar people
 Conforming to local laws and regulations
 Adhering to privacy regulations with regard to confidential information e.g. health, legal etc.

Non-employees with access to company systems should be held accountable for security policies and
responsibilities. This includes contract employees, vendors, programmers, analysts, maintenance
personnel and clients.

Segregation of Responsibilities
A traditional security control is to ensure that there are no instances where one individual is solely
responsible for setting, implementing and policing controls and, at the same time, responsible for the
use of the systems. The use of a number of people, all responsible for some part of information system
controls or operations, allows each to act as a check upon another. Since no employee is performing
all the steps in a single transaction, the others involved in the transaction can monitor for accidents and
crime.

The logical grouping of information systems activities might be:

 Systems development
 Management of input media
 Operating the system
 Management of documentation and file archives
 Distribution of output

Where possible, to segregate responsibilities fully, no one person should cross these task boundaries.
Associated with this type of security control is the use of rotation of duties and unannounced audits.

Other human resources policies and practices include:

 Hiring practices – to ensure that the most effective and efficient staff is chosen and that
the company is in compliance with legal requirements. Practices include:
a) Background checks
b) Confidentiality agreements
c) Employee bonding to protect against losses due to theft
d) Conflict of interest agreements
e) Non-compete agreements
 Employee handbook – distributed to all employees upon being hired, should explain items
such as
a) Security policies and procedures
b) Company expectations
c) Employee benefits
d) Disciplinary actions
e) Performance evaluations etc.

 Promotion policies – should be fair and understood by employees. Based on objective
criteria considering performance, education, experience and level of responsibility.
 Training – should be provided on a fair and regular basis
 Scheduling and time reporting – proper scheduling provides for a more efficient operation
and use of computing resources
 Employee performance evaluations – employee assessment must be a standard and regular
feature for all IS staff
 Required vacations – ensures that once a year, at a minimum, someone other than the
regular employee will perform a job function. This reduces the opportunity to commit
improper or illegal acts.
 Job rotation – provides an additional control (to reduce the risk of fraudulent or malicious
acts), since the same individual does not perform the same tasks all the time.
 Termination policies – policies should be structured to provide adequate protection for the
organization’s computer assets and data. Should address:
a) Voluntary termination
b) Immediate termination
c) Return of all access keys, ID cards and badges to prevent easy physical access
d) Deletion of assigned logon-ID and passwords to prohibit system access
e) Notification to other staff and facilities security to increase awareness of the
terminated employee’s status.
f) Arrangement of the final pay routines to remove the employee from active
payroll files
g) Performance of a termination interview to gather insight on the employee’s
perception of management
h) Return of all company property
i) Escort from the premises.

Network Security
Communication networks (wide area or local area networks) generally include devices connected to
the network, and programs and files supporting the network operations. Control is accomplished
through a network control terminal and specialized communications software.

The following are controls over the communication network:

 Network control functions should be performed by technically qualified operators.


 Network control functions should be separated and duties rotated on a regular basis where
possible.
 Network control software must restrict operator access from performing certain functions
such as ability to amend or delete operator activity logs.
 Network control software should maintain an audit trail of all operator activities.
 Audit trails should be reviewed periodically by operations management to detect any
unauthorized network operation activities.
 Network operation standards and protocols should be documented and made available to
the operators and should be reviewed periodically to ensure compliance.

 Network access by system engineers should be closely monitored and reviewed to detect
unauthorized access to the network.
 Analysis should be performed to ensure workload balance, fast response time and system
efficiency.
 A terminal identification file should be maintained by the communication software to
check the authentication of a terminal when it tries to send or receive messages.
 Data encryption should be used where appropriate to protect messages from disclosure
during transmission.

Some common network management and control software include Novell NetWare, Windows NT,
UNIX, NetView, NetPass etc.

LAN security
Local area networks (LANs) facilitate the storage and retrieval of programs and data used by a group
of people. LAN software and practices also need to provide for the security of these programs and data.
Risks associated with use of LANs include:

 Loss of data and program integrity through unauthorized changes


 Lack of current data protection through inability to maintain version control
 Exposure to external activity through limited user verification and potential public network
access from dial-up connections
 Virus infection
 Improper disclosure of data because of general access rather than need-to-know access
provisions
 Violating software licenses by using unlicensed or excessive number of software copies
 Illegal access by impersonating or masquerading as a legitimate LAN user
 Internal user’s sniffing (obtaining seemingly unimportant information from the network
that can be used to launch an attack, such as network address information)
 Internal user’s spoofing (reconfiguring a network address to pretend to be a different
address)
 Destruction of the logging and auditing data

The LAN security provisions available depend on the software product, product version and
implementation. Commonly available network security administrative capabilities include:

 Declaring ownership of programs, files and storage


 Limiting access to read only
 Implementing record and file locking to prevent simultaneous update to the same record
 Enforcing user ID/password sign-on procedures, including the rules relating to password
length, format and change frequency
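The password rules mentioned in the last point (length and format) can be sketched as a validation routine; the specific rules chosen here are assumptions for illustration, not a standard:

```python
def password_acceptable(password, min_length=8):
    """Enforce assumed length and format rules: a minimum length,
    plus at least one letter and one digit."""
    return (len(password) >= min_length
            and any(ch.isalpha() for ch in password)
            and any(ch.isdigit() for ch in password))

print(password_acceptable("s3curePwd99"))  # True
print(password_acceptable("short1"))       # False (too short)
print(password_acceptable("lettersonly"))  # False (no digit)
```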

Dial-Up Access Controls


It is possible to break LAN security through the dial-in route. Without dial-up access controls, a caller
can dial in and try passwords until they gain access. Once in, they can hide pieces of software
anywhere, pass through Wide Area Network (WAN) links to other systems and generally cause as
much or as little havoc as they like.

 To minimize the risk of unauthorized dial-in access, remote users should never store their
passwords in plain text login scripts on notebooks and laptops. Furthermore, portable PCs
should be protected by physical keys and/or basic input output system (BIOS) based
passwords to limit access to data if stolen.

 In order to prevent access by the guessing of passwords, a dial-back modem should be
used. When a call is answered by the modem, the caller must enter a code. The modem
then hangs up the connection and looks up a corresponding phone number that has been
authorized for dial-in access and calls the number back if it is authenticated.
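The dial-back lookup step can be sketched as follows; the directory, codes and phone numbers are hypothetical:

```python
# Hypothetical directory of caller codes and their pre-authorized
# dial-in numbers, maintained by the security administrator.
AUTHORIZED_NUMBERS = {"ops-01": "020-555-0101"}

def dial_back_number(caller_code):
    """After hanging up, return the number authorized for this code,
    or None to refuse the connection."""
    return AUTHORIZED_NUMBERS.get(caller_code)

print(dial_back_number("ops-01"))    # 020-555-0101
print(dial_back_number("intruder"))  # None
```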

Client/Server Security
A client/server system typically contains numerous access points. Client/server systems utilize
distributed techniques, creating increased risk of access to data and processing. To effectively secure
the client/server environment, all access points should be identified. In mainframe-based applications,
centralized processing techniques require the user to go through one pre-defined route to access all
resources. In a client/server environment, several access points exist, as application data may exist on
the client or the server. Each of these routes must therefore be examined individually and in relation to
each other to determine that no exposures are left unchecked.

In order to increase the security in a client/server environment, the following control techniques should
be in place:

 Securing access to the data or application on the client/server may be performed by
disabling the floppy disk drive, much like a keyless workstation that has access to a
mainframe. Diskless workstations prevent access control software from being bypassed,
which would render the workstation vulnerable to unauthorized access. By securing the
automatic boot or start-up batch files, unauthorized users may be prevented from
overriding login scripts and access.
 Network monitoring devices may be used to inspect activity from known or unknown
users.
 Data encryption techniques can help protect sensitive or proprietary data from
unauthorized access.
 Authentication systems may provide environment-wide, logical facilities that can
differentiate among users. Another method, system smart cards, uses intelligent hand-held
devices and encryption techniques to decipher random codes provided by client/server
systems. A smart card displays a temporary password that is provided by an algorithm
(step-by-step calculation instructions) on the system and must be re-entered by the user
during the login session for access into the client/server system.
 The use of application level access control programs and the organization of users into
functional groups is a management control that restricts access by limiting users to only
those functions needed to perform their duties.

Client/Server Risks and Issues


Since the early 1990s, client/server technology has become one of the predominant ways many
organizations have processed production data and developed and delivered mission critical products
and services.

The areas of risk and concern in a client/server environment are:


 Access controls may be inherently weak in a client/server environment if network
administration does not properly set up password change controls or access rules.
 Change control and change management procedures, whether automated or manual, may
be inherently weak. The primary reason for this weakness is the relatively high level
of sophistication of client/server change control tools, together with inexperienced staff
who are reluctant to introduce such tools for fear of introducing limitations on their
capability.
 The loss of network availability may have a serious impact on the business or service
 Obsolescence of the network components, including hardware, software and
communications.
 Unauthorized and indiscriminate use of synchronous and asynchronous modems to
connect the network to other networks.
 Connection of the network to public switched telephone networks.
 Inaccurate, unauthorized and unapproved changes to systems or data.
 Unauthorized access to confidential data, the unauthorized modification of data, business
interruption and incomplete and inaccurate data.
 Application code and data may not be located on a single machine enclosed in a secure
computer room as with mainframe computing.

Internet Threats
The very nature of the Internet makes it vulnerable to attack. It was originally designed to allow for
the freest possible exchange of information, data and files. However, today that freedom carries a price.
Hackers and virus-writers try to attack the Internet and computers connected to the Internet and those
who want to invade other’s privacy attempt to crack into databases of sensitive information or snoop
on information as it travels across Internet routes.
It is therefore important in this situation to understand the risks and security factors that are needed to
ensure proper controls are in place when a company connects to the Internet. There are several areas
of control risks that must be evaluated to determine the adequacy of Internet security controls:
 Corporate Internet policies and procedures
 Firewall standards
 Firewall security
 Data security controls

Internet threats include:

a) Disclosure
It is relatively simple for someone to eavesdrop on a ‘conversation’ taking place over the Internet.
Messages and data traversing the Internet can be seen by other machines including e-mail files,
passwords and in some cases key-strokes as they are being entered in real time.

b) Masquerade
A common attack is a user pretending to be someone else to gain additional privileges or access to
otherwise forbidden data or systems. This can involve a machine being reprogrammed to masquerade
as another machine (such as changing its Internet Protocol (IP) address). This is referred to as spoofing.

c) Unauthorized access
Many Internet software packages contain vulnerabilities that render systems subject to attack.
Additionally, many of these systems are large and difficult to configure, resulting in a large percentage
of unauthorized access incidents.

d) Loss of integrity
Just as it is relatively simple to eavesdrop on a conversation, so it is also relatively easy to intercept the
conversation and change some of the contents or to repeat a message. This could have disastrous effects
if, for example, the message was an instruction to a bank to pay money.

e) Denial of service
Denial of service attacks occur when a computer connected to the Internet is inundated (flooded) with
data and/or requests that must be serviced. The machine becomes so tied up with dealing with these
messages that it becomes useless for any other purpose.

f) Theft of service and resources


Where the Internet is being used as a channel for delivery of a service, unauthorized access to the
service is effectively theft. For example, hacking into a subscription based news service is effectively
theft.

It is difficult to assess the impact of the threats described above, but in generic terms the following
types of impact could occur:

 Loss of income
 Increased cost of recovery (correcting information and re-establishing services)
 Increased cost of retrospectively securing systems
 Loss of information (critical data, proprietary information, contracts)
 Loss of trade secrets
 Damage to reputation
 Legal and regulatory non-compliance
 Failure to meet contractual commitments

Encryption
Encryption is the process of converting a plaintext message into a secure coded form of text called
cipher text that cannot be understood without converting back via decryption (the reverse process) to
plaintext again. This is done via a mathematical function and a special encryption/decryption password
called the key.

Encryption is generally used to:


 Protect data in transit over networks from unauthorized interception and manipulation
 Protect information stored on computers from unauthorized viewing and manipulation
 Deter and detect accidental or intentional alterations of data
 Verify authenticity of a transaction or document

The limitations of encryption are that it can’t prevent loss of data and encryption programs can be
compromised. Therefore encryption should be regarded as an essential but incomplete form of access
control that should be incorporated into an organization’s overall computer security program.

Key elements of encryption systems are:


(i) Encryption algorithm – a mathematically based function or calculation which encrypts/decrypts
data
(ii) Encryption keys – a piece of information that is used within an encryption algorithm (calculation)
to make the encryption or decryption process unique. Similar to passwords, a user needs to use the
correct key to access or decipher a message; the wrong key will produce only unreadable output.
(iii) Key length – a predetermined length for the key. The longer the key, the more difficult it is to
compromise in a brute-force attack where all possible key combinations are tried.
The effectiveness of an encryption system depends upon the secrecy of the key, the difficulty of
compromising the key, the absence of back doors by which an encrypted file could be decrypted
without knowing the key, resistance to known-text attacks (in which knowing how a portion of a
cipher text message decrypts allows an attacker to decrypt the rest), and the properties of the
plaintext known by a perpetrator.

There are two common encryption or cryptographic systems:

a) Symmetric or private key system


Symmetric cryptosystems use a secret key to encrypt the plaintext to the cipher text. The same key is
also used to decrypt the cipher text to the corresponding plaintext. In this case the key is symmetric
because the encryption key is the same as the decryption key. The most common private key
cryptography system is data encryption standard (DES).
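As a toy illustration of the symmetric principle (the same key both encrypts and decrypts), consider a repeating-key XOR. This is NOT a secure cipher and is no substitute for DES; it only demonstrates the shared-key property:

```python
def xor_cipher(data, key):
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key recovers the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-key"
cipher_text = xor_cipher(b"pay Bob 100", key)
plain_again = xor_cipher(cipher_text, key)  # same key decrypts
print(plain_again)  # b'pay Bob 100'
```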

b) Asymmetric or public key system


Asymmetric encryption systems use two keys, which work together as a pair. One key is used to
encrypt data, the other is used to decrypt data. Either key can be used to encrypt or decrypt, but once
one key has been used to encrypt data, only its partner can be used to decrypt the data (even the key
that was used to encrypt the data cannot be used to decrypt it). Generally, with asymmetric encryption,
one key is known only to one person – the secret or private key – the other key is known by many
people – the public key. A common form of asymmetric encryption is RSA (named after its inventors
Rivest, Shamir and Adleman).
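The two-key property can be demonstrated with textbook RSA arithmetic (the primes here are absurdly small for illustration; real keys use primes hundreds of digits long):

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

def encrypt(m):
    """Anyone holding the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c):
    """Only the private-key holder (d) can decrypt."""
    return pow(c, d, n)

message = 65
print(encrypt(message))           # 2790
print(decrypt(encrypt(message)))  # 65
```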

Firewall security
A firewall is a set of hardware and software equipment placed between an organization’s internal
network and an external network to prevent outsiders from invading private networks.

Companies should build firewalls to protect their networks from attacks. In order to be effective,
firewalls should allow individuals on the corporate network to access the Internet and at the same time
stop hackers or others on the Internet from gaining access to the corporate network to cause damage.

Firewalls are hardware and software combinations that are built using routers, servers and a variety of
software. They should sit in the most vulnerable point between a corporate network and the Internet
and they can be as simple or complex as system administrators want to build them.

There are many different types of firewalls, but many enable organizations to:
 Block access to particular sites on the Internet
 Prevent certain users from accessing certain servers or services
 Monitor communications between internal and external networks
 Eavesdrop and record all communications between an internal network and the outside world
to investigate network penetrations or detect internal subversions.
 Encrypt packets that are sent between different physical locations within an organization by
creating a virtual private network over the Internet.
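The blocking and access-control capabilities above can be sketched as a minimal packet filter, the simplest kind of firewall logic. The rule fields, addresses and rule set are illustrative assumptions, not a real firewall configuration:

```python
# Minimal packet-filter sketch: rules are checked in order, the first
# match wins, and the final rule denies everything else (default deny).
# The rule format and addresses here are illustrative assumptions.

import ipaddress

RULES = [
    {"action": "deny",  "src": "any",        "dst_port": 23},     # block telnet
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 80},     # internal web
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 443},
    {"action": "deny",  "src": "any",        "dst_port": "any"},  # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet; first matching rule wins."""
    for rule in RULES:
        src_ok = (rule["src"] == "any" or
                  ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"]))
        port_ok = rule["dst_port"] in ("any", dst_port)
        if src_ok and port_ok:
            return rule["action"]
    return "deny"  # unreachable given the default-deny rule; kept for safety

assert filter_packet("10.1.2.3", 80) == "allow"    # internal user reaches the web
assert filter_packet("203.0.113.5", 80) == "deny"  # outsider is blocked
assert filter_packet("10.1.2.3", 23) == "deny"     # telnet blocked for everyone
```

The default-deny final rule reflects standard firewall practice: anything not explicitly permitted is refused.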

Problems faced by organizations that have implemented firewalls are:

 A false sense of security exists where management feels that no further security checks and
controls are needed on the internal network.
 Firewalls are circumvented through the use of modems connecting users to Internet Service
Providers.
 Mis-configured firewalls, allowing unknown and dangerous services to pass through freely.
 Misunderstanding of what constitutes a firewall e.g. companies claiming to have a firewall
merely having a screening router.
 Monitoring activities do not occur on a regular basis i.e. log settings not appropriately applied
and reviewed.

Intrusion detection systems (IDS)


Intrusion or intruder detection is the identification of, and response to, malicious activity. An IDS is
a tool that aids in the detection of such attacks: it detects attack patterns and issues an alert. There are two
types of IDSs, network-based and host-based.


Network-based IDSs identify attacks within the network that they are monitoring and issue a warning
to the operator. If a network-based IDS is placed between the Internet and the firewall it will detect all
the attack attempts, whether or not they penetrate the firewall. If the IDS is placed between a firewall
and the corporate network it will detect those attacks that get past the firewall, i.e. it will detect
actual intruders. The IDS is not a substitute for a firewall, but complements the function of a firewall.

Host-based IDSs are configured for a specific environment and will monitor various internal resources
of the operating system to warn of a possible attack. They can detect the modification of executable
programs, the deletion of files and issue a warning when an attempt is made to use a privileged
command.
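One host-based technique mentioned above, detecting the modification of executable programs, can be sketched as a file-integrity check against a trusted baseline of hashes. This is a simplified illustration of the idea, not a production IDS:

```python
# Sketch of one host-based IDS technique: detect modified files by
# comparing their current hashes against a trusted baseline.

import hashlib
import os
import tempfile

def file_hash(path: str) -> str:
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_integrity(baseline: dict) -> list:
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in baseline.items() if file_hash(p) != digest]

# Demo: baseline a file, tamper with it, and detect the change.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "program.bin")
    with open(target, "wb") as f:
        f.write(b"original executable bytes")

    baseline = {target: file_hash(target)}        # recorded while trusted
    assert check_integrity(baseline) == []        # no alert yet

    with open(target, "wb") as f:                 # attacker modifies the file
        f.write(b"trojaned executable bytes")

    assert check_integrity(baseline) == [target]  # IDS would raise an alert
```

A real host-based IDS would run such checks continuously, protect the baseline itself from tampering, and also watch privileged commands and file deletions, as the text notes.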

Environmental exposures and controls


Environmental exposures are primarily due to naturally occurring events; however, with proper
controls exposure to these elements can be reduced. Common exposures are:

 Fire
 Natural disasters – earthquake, volcano, hurricane, tornado
 Power failure
 Power spike
 Air conditioning failure
 Electrical shock
 Equipment failure
 Water damage/flooding – even with facilities located on upper floors of high-rise buildings,
water damage is a risk, typically occurring from broken water pipes
 Bomb threat/attack

Other environmental issues and exposures include the following:


 Is the power supply to the computer equipment properly controlled to ensure that it remains
within the manufacturer’s specifications?
 Are the air conditioning, humidity and ventilation control systems for the computer equipment
adequate to maintain temperatures within manufacturers’ specifications?
 Is the computer equipment protected from the effects of static electricity, using an anti-static
rug or anti-static spray?
 Is the computer equipment kept free of dust, smoke and other particulate matter, such as food?
 Is consumption of food, beverage and tobacco products prohibited, by policy, around computer
equipment?
 Are backup media protected from damage due to temperature extremes, the effects of magnetic
fields and water damage?

Some of the controls for environmental exposures include


a) Water detectors
In the computer room, water detectors should be placed under the raised floor and near drain holes,
even if the computer room is on a high floor (remember water leaks). When activated, the detectors
should produce an audible alarm that can be heard by security and control personnel.


b) Hand-held fire extinguishers


Fire extinguishers should be in strategic locations throughout the information system facility. They
should be tagged for inspection and inspected at least annually.

c) Manual fire alarms


Hand-pull fire alarms should be strategically placed throughout the facility. The resulting audible alarm
should be linked to a monitored guard station.

d) Smoke detectors
They supplement, but do not replace, fire suppression systems. Smoke detectors should be above and below
the ceiling tiles throughout the facility and below the raised computer room floor. They should produce
an audible alarm when activated and be linked to a monitored station (preferably one monitored by the fire
department).

e) Fire suppression system


These systems are designed to activate immediately after detection of high heat typically generated by
fire. It should produce an audible alarm when activated. Ideally, the system should automatically
trigger other mechanisms to localize the fire. This includes closing fire doors, notifying the fire
department, closing off ventilation ducts and shutting down nonessential electrical equipment.
Therefore fire suppression varies but is usually one of the following:

 Water based systems (sprinkler systems) – effective but unpopular because they damage
equipment
 Dry-pipe sprinkling – sprinkler systems that do not have water in the pipes until an
electronic fire alarm activates the water pumps to send water to the dry pipe system.
 Halon systems – release pressurized halon gases that remove oxygen from the air, thus
starving the fire. Halon is popular because it is an inert gas and does not damage
equipment.
 Carbon dioxide systems – release pressurized carbon dioxide gas into the protected area
to replace the oxygen required for combustion. Unlike halon, however, carbon dioxide
does not sustain human life, so these systems should not be set to automatic release.

f) Strategically locating the computer room


To reduce the risk of flooding, the computer room should not be located in the basement. If located in
a multi-storey building, studies show that the best location for the computer room, to reduce the risk of
fire, smoke and water damage, is on the 3rd, 4th, 5th or 6th floor.

g) Regular inspection by fire department


To ensure that all fire detection systems comply with building codes, the fire department should inspect
the system and facilities annually.

h) Fireproof walls, floors and ceilings surrounding the computer room


Walls surrounding the information processing facility should contain or block fire from spreading.
The surrounding walls should have at least a two-hour fire resistance rating.


i) Electrical surge protectors


These electrical devices reduce the risk of damage to equipment due to power spikes. Voltage
regulators measure the incoming electrical current and either increase or decrease the charge to ensure
a consistent current. Such protectors are typically built into the uninterruptible power supply (UPS)
system.

j) Uninterruptible power supply system (UPS)/generator


A UPS system consists of a battery or petrol powered generator that interfaces between the electrical
power entering the facility and the electrical power entering the computer. The system typically
cleanses the power to ensure that the voltage reaching the computer is consistent. Should a power failure occur, the
UPS continues providing electrical power from the generator to the computer for a certain length of
time. A UPS system can be built into a computer or can be an external piece of equipment.

k) Emergency power-off switch


There may be a need to shut off power to the computer and peripheral devices, such as during a
computer room fire or emergency evacuation. Two emergency power-off switches should serve this
purpose, one in the computer room, the other near, but outside, the computer room. They should be
clearly labelled, easily accessible for this purpose and yet still secured from unauthorized people. The
switches should be shielded to prevent accidental activation.

l) Power leads from two substations


Electrical power lines that feed into the facility are exposed to many environmental hazards – water,
fire, lightning, cutting due to careless digging, etc. To reduce the risk of a power failure due to these
events that, for the most part, are beyond the control of the organization, redundant power lines should
feed into the facility. In this way, interruption of one power line does not adversely affect electrical
supply.

m) Wiring placed in electrical panels and conduit


Electrical fires are always a risk. To reduce the risk of such a fire occurring and spreading, wiring
should be placed in fire-resistant panels and conduit. This conduit generally lies under the fire-resistant
raised computer room floor.

n) Prohibitions against eating, drinking and smoking within the information processing
facility
Food, drink and tobacco use can cause fires, build-up of contaminants or damage to sensitive
equipment, especially in the case of liquids. They should be prohibited from the information processing
facility. This prohibition should be overt, for example, a sign on the entry door.

o) Fire resistant office materials


Wastebaskets, curtains, desks, cabinets and other general office materials in the information processing
facility should be fire resistant. Cleaning fluids for desktops, console screens and other office
furniture/fixtures should not be flammable.

p) Documented and tested emergency evacuation plans


Evacuation plans should emphasize human safety, but should not leave information processing
facilities physically unsecured. Procedures should exist for a controlled shutdown of the computer in
an emergency situation, if time permits.

INFORMATION SYSTEM RISK MANAGEMENT


The fundamental precept of information security is to support the mission of the organization.

All organizations are exposed to uncertainties, some of which impact the organization in a negative
manner. In order to support the organization, IT security professionals must be able to help their
organizations’ management understand and manage these uncertainties.

Managing uncertainties is not an easy task. Limited resources and an ever-changing landscape of
threats and vulnerabilities make completely mitigating all risks impossible. Therefore, IT security
professionals must have a toolset to assist them in sharing a commonly understood view with IT and
business managers concerning the potential impact of various IT security related threats to the
mission. This toolset needs to be consistent, repeatable, cost-effective and reduce risks to a reasonable
level.

Risk management is nothing new. There are many tools and techniques available for managing
organizational risks. There are even a number of tools and techniques that focus on managing risks to
information systems. This section explores the issue of risk management with respect to information
systems and seeks to answer the following questions:

 What is risk with respect to information systems?


 Why is it important to understand risk?
 How is risk assessed?
 How is risk managed?
 What are some common risk assessment/management methodologies and tools?

What Is Risk With Respect To Information Systems?


Risk is the potential harm that may arise from some current process or from some future event. Risk is
present in every aspect of our lives and many different disciplines focus on risk as it applies to them.
From the IT security perspective, risk management is the process of understanding and responding to
factors that may lead to a failure in the confidentiality, integrity or availability of an information
system. IT security risk is the harm to a process or the related information resulting from some
purposeful or accidental event that negatively impacts the process or the related information.

Risk is a function of the likelihood of a given threat-source's exercising a particular potential
vulnerability, and the resulting impact of that adverse event on the organization.

Threats
A threat can be defined as the potential for a threat source to exercise (accidentally trigger or
intentionally exploit) a specific vulnerability.


Threat-Source:
Either intent and method targeted at the intentional exploitation of a vulnerability, or a situation
and method that may accidentally trigger a vulnerability.

The threat is merely the potential for the exercise of a particular vulnerability. Threats in themselves
are not actions. Threats must be coupled with threat-sources to become dangerous.

This is an important distinction when assessing and managing risks, since each threat-source may be
associated with a different likelihood, which, as will be demonstrated, affects risk assessment and risk
management. It is often expedient to incorporate threat-sources into threats.

Vulnerabilities
Vulnerability is a flaw or weakness in system security procedures, design, implementation, or internal
controls that could be exercised (accidentally triggered or intentionally exploited) and result in a
security breach or a violation of the system’s security policy.

Notice that the vulnerability can be a flaw or weakness in any aspect of the system.

Vulnerabilities are not merely flaws in the technical protections provided by the system.

Significant vulnerabilities are often contained in the standard operating procedures that systems
administrators perform, the process that the help desk uses to reset passwords or inadequate log
review. Another area where vulnerabilities may be identified is at the policy level. For instance, a lack
of a clearly defined security testing policy may be directly responsible for the lack of vulnerability
scanning.

Here are a few examples of vulnerabilities related to contingency planning/ disaster recovery:

• Not having clearly defined contingency directives and procedures
• Lack of a clearly defined, tested contingency plan
• The absence of adequate formal contingency training
• Lack of information (data and operating system) backups
• Inadequate information system recovery procedures, for all processing areas (including networks)
• Not having alternate processing or storage sites
• Not having alternate communication services

Why Is It Important to Manage Risk?


The principal reason for managing risk in an organization is to protect the mission and assets of the
organization. Therefore, risk management must be a management function rather than a technical
function.


It is vital to manage risks to systems. Understanding risk, and in particular, understanding the specific
risks to a system allow the system owner to protect the information system commensurate with its
value to the organization. The fact is that all organizations have limited resources and risk can never
be reduced to zero. So, understanding risk, especially the magnitude of the risk, allows organizations
to prioritize scarce resources.

How Is Risk Assessed?


Risk is assessed by identifying threats and vulnerabilities, then determining the likelihood and impact
for each risk. It’s easy, right? Unfortunately, risk assessment is a complex undertaking, usually based
on imperfect information. There are many methodologies aimed at allowing risk assessment to be
repeatable and give consistent results.
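The likelihood-and-impact view of risk can be sketched as a simple qualitative scoring exercise. The 1-3 scales and the example risk register below are illustrative assumptions, not a prescribed methodology:

```python
# Sketch of qualitative risk assessment: score = likelihood x impact.
# The 1-3 scales and the example register are illustrative assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine a qualitative likelihood and impact into a numeric score."""
    return LEVELS[likelihood] * LEVELS[impact]

risks = [
    {"threat": "untested contingency plan", "likelihood": "high",   "impact": "high"},
    {"threat": "missing data backups",      "likelihood": "medium", "impact": "high"},
    {"threat": "laptop theft",              "likelihood": "medium", "impact": "low"},
]

for r in risks:
    r["score"] = risk_score(r["likelihood"], r["impact"])

# Prioritise scarce resources: address the highest-scoring risks first.
risks.sort(key=lambda r: r["score"], reverse=True)
assert risks[0]["threat"] == "untested contingency plan"
```

The point of such scoring is not precision but consistency: it lets IT security staff and business managers rank risks on a commonly understood scale so that limited resources go to the largest exposures first.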

DISASTER RECOVERY AND BUSINESS CONTINUITY PLANNING


Because of business interruptions ranging from catastrophic natural disasters to acts of terrorism to
technical glitches, organizations need business continuity and recovery resources, plans, and
management. A variety of products, consultants, and services are available. Which kind of help best
fits each company’s needs? Should the company build and maintain the solution or contract for
services?

Answering these questions requires understanding the services that support making business
continuity decisions.

Technology Basics
The September 2001 attack on the World Trade Center in New York City tested the contingency plans
of American businesses to an unanticipated degree. Companies that had business continuity plans and
contracts in place with vendors of recovery services were able to continue business at alternate sites
with minimum downtime and minimum loss of data, and the alternate facilities provided by the
vendors were not overcrowded even in this largest of disasters. Unfortunately, the massive loss of life
and its dramatic impact on co-workers, business processes, and communities was not anticipated. As
organizations throughout the world attempt to return to business as usual, they must not neglect the
very necessary review and updating of their business continuity plans and contracts. Only then will
the lessons of the World Trade Center disaster have value going forward.

The Need for Business Continuity/Disaster Recovery Planning and Management
In the aftermath of recent natural disasters, terrorism, and equipment breakdown, businesses have
recognized more than ever the need for an organization to be prepared. Companies are striving to
meet the demand for continuous service. With the growth of e-commerce and other factors driving
system availability expectations toward 24x365, the average organization’s requirement for recovery
time from a major system outage now ranges between two and 24 hours. This requirement is pushed
by the expectations an organization faces on all sides:

• Customers expect supplies and services to continue— or resume rapidly— in all situations.


• Shareholders expect management control to remain operational through any crisis.

• Employees expect both their lives and livelihoods to be protected.

• Suppliers expect their revenue streams to continue.

• Regulatory agencies expect their requirements to be met, regardless of circumstances.

• Insurance companies expect due care to be exercised.

Business Survival in an Uncertain World


Business survival necessitates planning for every type of business disruption including— but by no
means limited to— the categories of natural disasters; hardware and communications failures; internal
or external sabotage or acts of terrorism; and the failures of supply chain and sales affiliate
organizations. While such disruptions cannot be predicted, they can wreak havoc upon the business,
with results ranging from insured losses of replaceable tangibles to uninsurable capital losses to
customer dissatisfaction and possible desertion to complete insolvency. Some business disruptions,
such as a hurricane, may give advance warning; others, such as terrorism, flash floods, fire, etc., can
strike without notice. A business continuity strategy, then, is a high-value— but high-maintenance—
proposition. Business continuity embraces a broad spectrum of technologies: old and new, paper-
based and electronic, manual and automated, individual and integrated.

The Challenge of Expecting the Unexpected


The key challenge of business continuity preparation is not technology, however, but the internal
marketing “business” aspects that begin at the foundation level of any project and continue throughout
its life cycle: justification, executive buy-in, broad organizational support, and governance and
politics. Perhaps the most important point to make about business continuity support technologies is
that their effectiveness depends entirely upon the organization’s top-down commitment to the entire
project, including the updating and testing necessary for maintenance.

Even among corporations with business continuity plans, a KPMG study shows that less than one half
meet an acceptable portion of their recovery objectives. The business infrastructure seems to be less
protected than its stewards think it is, and such surprises usually lie in failure to tend the corporate
domain. Two curable causes of disappointing continuity plan performance may be viewed as “spotty
plans” and “plan rust.” Spotty plans suffer from gaps in the initial continuity plan or the current
plan, while plan rust develops from lack of exercise (testing).

Basics of the Business Continuity Plan


What Does the Business Need?

A business continuity plan, adequately supported throughout the organization, embodies the strategic
framework for a corporate culture that embraces a variety of tactics to mitigate risks that might cause:

• Business process failure


• Asset loss
• Regulatory liability
• Customer service failure
• Damage to reputation or brand

The Phases of Business Continuity Planning, Implementation, and Management
The significance of each major phase of continuity planning merits attention because each phase
contributes to building all four areas of business continuity: disaster recovery, business recovery,
business resumption, and contingency planning:

• Phase 1— Establish the foundation.

These alignment and analysis steps are necessary to obtain executive sponsorship and the commitment
of resources from all stakeholders. Without a basis of business impact analysis and risk assessment,
the plan cannot succeed and may not even be developed.

• Phase 2— Develop and implement the plan.

Here, attention to detail and active participation by all stakeholders ensure the development of a plan
worth implementing. The plan itself must include the recovery strategy with all of its detailed
components and the test plan.

• Phase 3— Maintain the plan.

The best plan is only as effective as it is current. Every tactic of business resumption and recovery
must be kept up to date and tested regularly.

Types of Plans
The separate plans that make up a business continuity plan include:

a. Disaster recovery plan
 to recover mission-critical technology and applications at an alternate site.

b. Business resumption plan
 to continue mission-critical functions at the production site through work-arounds until
the application is restored.

c. Business recovery plan
 to recover mission-critical business processes at an alternate site (sometimes called “workspace
recovery”).

d. Contingency plan
 to manage an external event that has far-reaching impact on the business.

Service Options

One significant trend among business continuity service vendors is to focus on business continuity as
a whole. Recovery itself must be speedy (under 24 hours) for high-availability systems— and the
facilities must provide continuity not only of the data center (the “glass house”), but also of all critical
aspects of its clients’ businesses. This focus provides clients a more integrated service while allowing
the vendor to maintain better account control.

Consulting and Planning Assistance


Consulting and planning assistance can be divided into the following groups:

• Software and Consulting.

Many service providers offer combinations of tactical consulting with business continuity planning
and management software, sometimes including full continuity management services and hot-site
facilities.

• Hardware and Consulting.

Hardware vendors may combine continuity planning consultancy with rapid hardware replacement
shipment, mobile-site delivery, or hot-site facilities.

• Internet E-Commerce Continuity and Consulting.

Communications and networking vendors may offer high-availability networking and rapid recovery
solutions with tactical consulting.

• Product-Independent Consulting.

Consultants who provide analyses, audits, and tactical recommendations based upon such studies
offer objectivity in the development of the specifications a company should use to select business
continuity products and services.

• PC-Based Planning Tools.

Virtually all hot-site vendors offer some form of PC-based disaster recovery plan development tool. In
many cases (like consulting services), these packages are provided to a client organization as an
enticement to acquire full hot-site services.

Recovery Assistance

Stand-alone considerations for offsite recovery remain a significant part of the continuity management
strategy. Specific types of service may be combined to provide the exact package any company
specifies:

• OEM Insurance.


Hardware companies may offer a form of insurance guaranteeing that they will replace damaged
computer equipment with a system of equal or greater processing capacity within a specified period of
time. The insurance cost is usually six to eight percent of the monthly maintenance bill.

• Quick Ship.

Most third-party leasing vendors provide guaranteed rapid shipment of replacement hardware as a
recovery option. Customers pay a priority equipment search fee and the normal leasing charges plus a
premium when they request shipment.

Why is Business Continuity Planning Important


Every organization is at risk from potential disasters that include:

 Natural disasters such as tornadoes, floods, blizzards, earthquakes and fire


 Accidents
 Sabotage
 Power and energy disruptions
 Communications, transportation, safety and service sector failure
 Environmental disasters such as pollution and hazardous materials spills
 Cyber attacks and hacker activity.

Creating and maintaining a BCP helps ensure that an institution has the resources and information
needed to deal with these emergencies.

Creating A Business Continuity Plan


A BCP typically includes five sections:

a) BCP Governance
b) Business Impact Analysis (BIA)
c) Plans, measures, and arrangements for business continuity
d) Readiness procedures
e) Quality assurance techniques (exercises, maintenance and auditing)

Establish control

A BCP contains a governance structure, often in the form of a committee, that will secure senior
management commitment and define senior management roles and responsibilities.

The BCP senior management committee is responsible for the oversight, initiation, planning,
approval, testing and audit of the BCP. It also implements the BCP, coordinates activities, approves
the BIA survey, oversees the creation of continuity plans and reviews the results of quality assurance
activities.

Senior managers or a BCP Committee would normally:


i. approve the governance structure;


ii. clarify their roles, and those of participants in the program;
iii. oversee the creation of a list of appropriate committees, working groups and teams to develop
and execute the plan;
iv. provide strategic direction and communicate essential messages;
v. approve the results of the BIA;
vi. review the critical services and products that have been identified;
vii. approve the continuity plans and arrangement;
viii. monitor quality assurance activities; and
ix. resolve conflicting interests and priorities.
This BCP committee is normally composed of the following members:

 Executive sponsor has overall responsibility for the BCP committee; elicits senior
management's support and direction; and ensures that adequate funding is available for the
BCP program.
 BCP Coordinator secures senior management's support; estimates funding requirements;
develops BCP policy; coordinates and oversees the BIA process; ensures effective participant
input; coordinates and oversees the development of plans and arrangements for business
continuity; establishes working groups and teams and defines their responsibilities;
coordinates appropriate training; and provides for regular review, testing and audit of the
BCP.
 Security Officer works with the coordinator to ensure that all aspects of the BCP meet the
security requirements of the organization.
 Chief Information Officer (CIO) cooperates closely with the BCP coordinator and IT
specialists to plan for effective and harmonized continuity.
 Business unit representatives provide input, and assist in performing and analyzing the results
of the business impact analysis.
The BCP committee is commonly co-chaired by the executive sponsor and the coordinator.

Business Impact Analysis


The purpose of the BIA is to identify the organization's mandate and critical services or products; rank
the order of priority of services or products for continuous delivery or rapid recovery; and identify
internal and external impacts of disruptions.

a. Identify the mandate and critical aspects of an organization


This step determines which goods or services must be delivered. Information can be obtained from
the mission statement of the organization, and from legal requirements for delivering specific services
and products.

b. Prioritize critical services or products


Once the critical services or products are identified, they must be prioritized based on minimum
acceptable delivery levels and the maximum period of time the service can be down before severe
damage to the organization results. To determine the ranking of critical services, information is
required to determine impact of a disruption to service delivery, loss of revenue, additional expenses
and intangible losses.


c. Identify impacts of disruptions


The impact of a disruption to a critical service or business product determines how long the
organization could function without the service or product, and how long clients would accept its
unavailability. It will be necessary to determine the time period that a service or product could be
unavailable before severe impact is felt.

d. Identify areas of potential revenue loss


To determine the loss of revenue, it is necessary to determine which processes and functions that
support service or product delivery are involved with the creation of revenue. If these processes and
functions are not performed, is revenue lost? How much? If services or goods cannot be provided,
would the organization lose revenue? If so, how much revenue, and for what length of time? If clients
cannot access certain services or products, would they then go to another provider, resulting in
further loss of revenue?

e. Identify additional expenses


If a business function or process is inoperable, how long would it take before additional expenses
would start to add up? How long could the function be unavailable before extra personnel would have
to be hired? Would fines or penalties from breaches of legal responsibilities, agreements, or
governmental regulations be an issue, and if so, what are the penalties?

f. Identify intangible losses


Estimates are required to determine the approximate cost of the loss of consumer and investor
confidence, damage to reputation, loss of competitiveness, reduced market share, and violation of
laws and regulations. Loss of image or reputation is especially important for public institutions as they
are often perceived as having higher standards.

g. Insurance requirements
Since few organizations can afford to pay the full costs of a recovery, having insurance ensures that
recovery is fully or partially financed.

When considering insurance options, decide what threats to cover. It is important to use the BIA to
help decide both what needs insurance coverage, and the corresponding level of coverage. Some
aspects of an operation may be overinsured or underinsured. Minimize the possibility of overlooking
a scenario, and ensure coverage for all eventualities.

Document the level of coverage of your institutional policy, and examine the policy for uninsured
areas and non-specified levels of coverage. Property insurance may not cover all perils (steam
explosion, water damage, and damage from excessive ice and snow not removed by the owner).
Coverage for such eventualities is available as an extension in the policy.

When submitting a claim, or talking to an adjustor, clear communication and understanding are
important. Ensure that the adjustor understands the expected full recovery time when documenting
losses. The burden of proof when making claims lies with the policyholder and requires valid and
accurate documentation.

Include an expert or an insurance team when developing the response plan.

h. Ranking


Once all relevant information has been collected and assembled, rankings for the critical business
services or products can be produced. Ranking is based on the potential loss of revenue, time of
recovery and severity of impact a disruption would cause. Minimum service levels and maximum
allowable downtimes are then determined.
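One way to picture the ranking step is a weighted score per service. The weights, the 1-5 severity scale and the normalisation below are illustrative assumptions for this sketch; in practice the ranking would come directly from the BIA data:

```python
# Illustrative criticality ranking: combine potential revenue loss,
# speed-of-recovery pressure and impact severity into one score.
# Weights and scales are assumptions, not a standard method.

def criticality_score(revenue_loss, recovery_hours, severity):
    """Higher score = more critical; severity is on a 1-5 scale."""
    revenue_factor = min(revenue_loss / 1_000_000, 1.0)     # cap at 1.0
    urgency_factor = min(24 / max(recovery_hours, 1), 1.0)  # shorter tolerable downtime -> higher
    return 0.5 * revenue_factor + 0.3 * urgency_factor + 0.2 * severity / 5

# (potential revenue loss, maximum allowable downtime in hours, severity 1-5)
services = {
    "online payments": (900_000, 2, 5),
    "payroll": (100_000, 48, 3),
    "intranet": (10_000, 72, 1),
}

ranked = sorted(services, key=lambda s: criticality_score(*services[s]), reverse=True)
print(ranked)  # most critical first: ['online payments', 'payroll', 'intranet']
```

The point of the sketch is that ranking is multi-factor: a service with modest revenue impact but a very short allowable downtime can still outrank a higher-revenue one.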

i. Identify dependencies
It is important to identify the internal and external dependencies of critical services or products, since
service delivery relies on those dependencies.

Internal dependencies include employee availability, corporate assets such as equipment, facilities,
computer applications, data, tools, vehicles, and support services such as finance, human resources,
security and information technology support.

External dependencies include suppliers, any external corporate assets such as equipment, facilities,
computer applications, data, tools, vehicles, and any external support services such as facility
management, utilities, communications, transportation, finance institutions, insurance providers,
government services, legal services, and health and safety services.

Plans for business continuity


This step consists of the preparation of detailed response/recovery plans and arrangements to ensure
continuity. These plans and arrangements detail the ways and means to ensure critical services and
products are delivered at minimum service levels within tolerable downtimes. Continuity plans
should be made for each critical service or product.

1. Mitigating threats and risks


Threats and risks are identified in the BIA or in a full-threat-and-risk assessment. Moderating risk is
an ongoing process, and should be performed even when the BCP is not activated. For example, if an
organization requires electricity for production, the risk of a short term power outage can be mitigated
by installing stand-by generators.

Another example would be an organization that relies on internal and external telecommunications to
function effectively. Communications failures can be minimized by using alternate communications
networks, or installing redundant systems.

2. Analyze current recovery capabilities


Consider recovery arrangements the organization already has in place, and their continued
applicability. Include them in the BCP if they are relevant.

3. Create continuity plans


Plans for the continuity of services and products are based on the results of the BIA. Ensure that plans
are made for increasing levels of severity of impact from a disruption. For example, if limited
flooding occurs beside an organization's building, sand bagging may be used in response. If water
rises to the first floor, work could be moved to another company building or higher in the same
building. If the flooding is severe, the relocation of critical parts of the business to another area until
flooding subsides may be the best option.


Another example would be a company that uses paper forms to keep track of inventory until
computers or servers are repaired, or electrical service is restored. For other institutions, such as large
financial firms, any computer disruptions may be unacceptable, and an alternate site and data
replication technology must be used.

The risks and benefits of each possible option for the plan should be considered, keeping cost,
flexibility and probable disruption scenarios in mind. For each critical service or product, choose the
most realistic and effective options when creating the overall plan.

4. Response preparation
Proper response to a crisis for the organization requires teams to lead and support recovery and
response operations. Team members should be selected from trained and experienced personnel who
are knowledgeable about their responsibilities.

The number and scope of teams will vary depending on the organization's size, function and structure,
and can include:

 Command and control teams that include


i. a crisis management team, and
ii. a response, continuation or recovery management team.
 Task oriented teams that include
i. an alternate site coordination team,
ii. contracting and procurement team,
iii. damage assessment and salvage team,
iv. finance and accounting team,
v. hazardous materials team,
vi. insurance team,
vii. legal issues team,
viii. telecommunications/ alternate communications team,
ix. mechanical equipment team,
x. mainframe/ midrange team,
xi. notification team,
xii. personal computer/ local area network team,
xiii. public and media relations team,
xiv. transport coordination team
xv. vital records management team
The duties and responsibilities for each team must be defined, and include identifying the team
members and authority structure, identifying the specific team tasks, members' roles and
responsibilities, creation of contact lists and identifying possible alternate members.

For the teams to function in spite of personnel loss or unavailability, it may be necessary to multitask
teams and provide cross-team training.

5. Alternate facilities
If an organization's main facility or Information Technology assets, networks and applications are
lost, an alternate facility should be available. There are three types of alternate facility:


Cold site is an alternate facility that is not furnished and equipped for operation. Proper equipment
and furnishings must be installed before operations can begin, and substantial time and effort are
required to make a cold site fully operational. Cold sites are the least expensive option.

Warm site is an alternate facility that is electronically prepared and almost completely equipped and
furnished for operation. It can be fully operational within several hours. Warm sites are more
expensive than cold sites.

Hot site is fully equipped, furnished, and often even fully staffed. Hot sites can be activated within
minutes or seconds. Hot sites are the most expensive option.

When considering the type of alternate facility, consider all factors, including threats and risks,
maximum allowable downtime and cost.
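The trade-off between the three site types can be sketched as a simple decision rule driven by the maximum allowable downtime from the BIA. The thresholds below are illustrative assumptions drawn from the descriptions above, not fixed industry figures:

```python
# Illustrative mapping from maximum allowable downtime (from the BIA)
# to an alternate-facility type. Thresholds are assumptions.

def choose_alternate_site(max_downtime_hours):
    if max_downtime_hours < 1:
        return "hot site"    # activation in minutes or seconds, most expensive
    if max_downtime_hours <= 24:
        return "warm site"   # fully operational within several hours
    return "cold site"       # days of setup, least expensive

print(choose_alternate_site(0.1))  # hot site
print(choose_alternate_site(12))   # warm site
print(choose_alternate_site(120))  # cold site
```

In a real plan, cost, threat profile and staffing would feed into the decision alongside downtime, but the rule captures the core trade-off: the less downtime the business can tolerate, the more expensive the facility option.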

For security reasons, some organizations employ hardened alternate sites. Hardened sites contain
security features that minimize disruptions. Hardened sites may have alternate power supplies;
back-up generation capability; high levels of physical security; and protection from electronic
surveillance or intrusion.

Readiness Procedures
Readiness procedures include the following:

1. Training
Business continuity plans can be smoothly and effectively implemented by:

 Having all employees and staff briefed on the contents of the BCP and aware of their
individual responsibilities
 Having employees with direct responsibilities trained for the tasks they will be required to
perform, and made aware of other teams' functions.

2. Exercises
After training, exercises should be developed and scheduled in order to achieve and maintain high
levels of competence and readiness. While exercises are time and resource consuming, they are the
best method for validating a plan. The following items should be incorporated when planning an
exercise:

a. Goal
The part of the BCP to be tested.

b. Objectives
The anticipated results. Objectives should be challenging, specific, measurable, achievable, realistic
and timely.

c. Scope


Identifies the departments or organizations involved, the geographical area, and the test conditions
and presentation.

d. Artificial aspects and assumptions


Defines which exercise aspects are artificial or assumed, such as background information, procedures
to be followed, and equipment availability.

e. Participant Instructions
Explains that the exercise provides an opportunity to test procedures before an actual disaster.

f. Exercise Narrative
Gives participants the necessary background information, sets the environment and prepares
participants for action. It is important to include factors such as time, location, method of discovery
and sequence of events, whether events are finished or still in progress, initial damage reports and any
external conditions.
g. Communications for Participants
Enhanced realism can be achieved by giving participants access to emergency contact personnel who
share in the exercise. Messages can also be passed to participants during an exercise to alter or create
new conditions.

h. Testing and Post-Exercise Evaluation


The exercise should be monitored impartially to determine whether objectives were achieved.
Participants' performance, including attitude, decisiveness, command, coordination, communication,
and control should be assessed. Debriefing should be short, yet comprehensive, explaining what did
and did not work, emphasizing successes and opportunities for improvement. Participant feedback
should also be incorporated in the exercise evaluation.

The complexity of an exercise can also be adjusted by focusing the exercise on one part of the BCP
instead of involving the entire organization.

i. Quality assurance techniques


Review of the BCP should assess the plan's accuracy, relevance and effectiveness. It should also
uncover which aspects of a BCP need improvement. Continuous appraisal of the BCP is essential to
maintaining its effectiveness. The appraisal can be performed by an internal review, or by an external
audit.

j. Internal review
It is recommended that organizations review their BCP:

 On a scheduled basis (annually or bi-annually)


 when changes to the threat environment occur;
 when substantive changes to the organization take place; and
 after an exercise to incorporate findings.

k. External audit


When auditing the BCP, consultants normally verify:

 Procedures used to determine critical services and processes


 Methodology, accuracy, and comprehensiveness of continuity plans

What to do when a disruption occurs


Disruptions are handled in three steps:

a) Response
b) Continuation of critical services
c) Recovery and restoration

a) Response
Incident response involves the deployment of teams, plans, measures and arrangements. The
following tasks are accomplished during the response phase:

a) Incident management
b) Communications management
c) Operations management

a) Incident management
Incident management includes the following measures:

 notifying management, employees, and other stakeholders;


 assuming control of the situation;
 identifying the range and scope of damage;
 implementing plans;
 identifying infrastructure outages; and
 coordinating support from internal and external sources.

b) Communications management
Communications management is essential to control rumors, maintain contact with the media,
emergency services and vendors, and assure employees, the public and other affected stakeholders.
Communications management requirements may necessitate building redundancies into
communications systems and creating a communications plan to adequately address all requirements.

c) Operations management
An Emergency Operations Center (EOC) can be used to manage operations in the event of a
disruption. Having a centralized EOC where information and resources can be coordinated, managed
and documented helps ensure effective and efficient response.

b) Continuation


Ensure that all time-sensitive critical services or products are continuously delivered or not disrupted
for longer than is permissible.

c) Recovery and restoration


The goal of recovery and restoration operations is to recover the facility or operation and maintain
critical service or product delivery. Recovery and restoration includes:

 Re-deploying personnel
 Deciding whether to repair the facility, relocate to an alternate site or build a new facility
 Acquiring the additional resources necessary for restoring business operations
 Re-establishing normal operations
 Resuming operations at pre-disruption levels
Conclusion
When critical services and products cannot be delivered, consequences can be severe. All
organizations are at risk and face potential disaster if unprepared. A business continuity plan is a tool
that allows institutions not only to moderate risk, but also to continuously deliver products and
services despite disruption.

REVISION EXERCISES
1. Discuss some of the information system threats
2. How can an organization control some of the information threats it faces?
3. A trap door is a secret and undocumented entry point within a program which typically
bypasses normal methods of authentication, and usually included for debugging purposes but
may be forgotten or left deliberately. Trap doors can also be inserted by intruders who have
gained access. Suggest four counter measures of controlling trap doors.
4. Define system integrity
5. Define intrusion detection system.
6. What is a firewall and what functions does it perform in relation to organizational network
security?
7. What are some of the vulnerabilities in a contingency plan?
8. Briefly describe three advantages of implementing an online banking system
9. Identify six types of operational information systems in a bank.
10. Define the following terms:
(i) Virus
(ii) Worm
(iii) Logic bomb
(iv) Denial of service
11. Identify four hardware tactics of controlling viruses in an organization.
12. Briefly describe the following systems:
(i) CAD/CAM
(ii) Image Management Software
(iii) Automated Materials Handling Software
(iv) CIM
13. List ten controls over environmental exposures.


14. Define intrusion detection system.


15. What is a firewall and what functions does it perform in relation to organizational network
security? Distinguish an active attack from a passive attack on security.
16. Discuss the plans a business would put in place for a business contingency
17. The rapidly increasing connectivity and use of the Internet has introduced security threats and
exposures to many organizations, and therefore the need to have security measures to
safeguard against such exposures. One of the major Internet threat to an organization is the
presence of hackers.
Required:
(a) Define the terms exposures, threats and vulnerability giving an example of each.

(b) What is meant by the term hacking? Identify four exposures that can be caused by
hackers.
(c) Describe three major factors that vulnerability of a system to hacking will depend on.

CHAPTER 9


LEGAL, ETHICAL AND SOCIAL ISSUES IN MANAGEMENT INFORMATION SYSTEMS
SYNOPSIS
Introduction………………………………………….. 331
Management Information Systems Ethical and Social Concerns………………………….. 331
The Moral Dimension of Management Information Systems………………………………….. 340
The Legal Issues in Management Information Systems…………………………………. 341

INTRODUCTION
There is a frequently used expression that emphasizes that information has no ethics. The ethical
aspect of organizations and the manner in which information is managed resides with the values that
are inherent in the people that comprise the organization. The manner in which information is used is
dependent on the ethics and beliefs of the people that make up the organization, especially the
organization’s leadership. It has become increasingly clear that information is a valuable
organizational resource that must be carefully safeguarded and effectively managed just as other
organizational resources are managed. Information cannot secure itself or protect itself from phishers,
spyware, or identity thieves.

In general, people have become much more technologically savvy. Largely due to the dramatically
increased scope of information available via the Internet, the ease of access to information, and the
broadened scope of computer literacy, the security of information and the privacy of individuals have
become areas of significant concern. Concerns about security and privacy as well as ethical dilemmas
dominate our daily lives. As a result of personal concerns and fears, and the rapid increase of theft of
personal information, organizations have developed and / or revised codes of ethical conduct.
Simultaneously, our government agencies have enacted laws and legislation that are specifically
related to ensuring the privacy and security of information and individuals.

MANAGEMENT INFORMATION SYSTEMS ETHICAL AND SOCIAL CONCERNS
Information technology is a powerful tool that can be used to further organizational goals, pursue
national interest, or support environmentally sustainable development. The same technology has also
made it easier to engage in ethical or unethical business practices electronically anywhere in the
world. The way the technology is deployed in organizations depends on our decisions as managers,
computing professionals, and users of information systems. All of us therefore, should make these
decisions guided not only by the organizational and technological aspects of information systems, but
also in consideration of their effects on individuals.

Ethics refers to the principles of right and wrong that individuals use to make choices that guide their
behaviour. IT can be used to achieve social progress, but it can also be used to commit crimes and
threaten cherished social values. Ethical issues are governed by the general norms of behaviour and
by specific codes of ethics, and ethical considerations go beyond legal liability.

Knowledge of ethics as it applies to the issues arising from the development and use of information
systems helps us make decisions in our professional life. Professional knowledge is generally
assumed to confer a special responsibility within its domain. This is why the professions have evolved
codes of ethics, that is, sets of principles intended to guide the conduct of the members of the
profession.

End users and IS professionals live up to their ethical responsibilities by voluntarily following the
guidelines set out in a code of conduct. For example, you can be a responsible end user by:

1. Acting with integrity
2. Increasing your professional competence
3. Setting high standards of personal performance
4. Accepting responsibility for your work
5. Advancing the health, privacy, and general welfare of the public

Computer Ethics
Although ethical decision-making is a thoughtful process based on one's own personal fundamental
principles, we need codes of ethics and professional conduct for the following reasons:

 To document acceptable professional conduct to:


o Establish status of the profession
o Educate professionals of their responsibilities to the public
o Inform the public of expectations of professionals
o Judge inappropriate professional behaviour and punish violators
 To aid the professional in ethical decision-making.

The following issues distinguish computing professionals’ ethics from other professionals’ ethics.
 Computing (automation) affects such a large segment of the society (personal,
professional, business, government, medical, industry, research, education, entertainment,
law, agriculture, science, art, etc); it changes the very fabric of society.
 Information technology is a very public business
 Computing is a young discipline
 It changes relationships between: people, businesses, industries, governments, etc
o Communication is faster
o Data can be fragile: it may be insecure, invalid, outdated, leaked, lost,
unrecoverable, misdirected, copied, stolen, misrepresented etc.
o The well-being of people, businesses, governments, and social agencies may be
jeopardized through faulty computing systems and/or unethical behaviour by
computing professionals
o Computing systems can change the way people work: they can make people more
productive but can also isolate them from one another
o Conceivably could create a lower and upper class society
o People can lose their identity in cyberspace


o Computing systems can change humankind’s quality of life


o Computing systems can take control of parts of our lives: for good or bad.

Some of the issues addressed in computer ethics include:

 General moral imperatives


o Contribute to society and human well-being:
Minimize negative consequences of computing systems including threats to health and safety, ensure
that products will be used in socially responsible ways and be alert and make others aware of potential
damage to the environment.

o Avoid harm to others


This principle prohibits use of computing technology in ways that result in harm to the users, general
public, employees and employers. Harmful actions include intentional destruction or modification of
files and programs leading to serious loss of resources or unnecessary expenditure of human resources
such as the time and effort required to purge systems of computer viruses.

o Be honest and trustworthy


The honest computing professional will not make deliberately false or deceptive claims about a system
or system design, but will instead provide full disclosure of all pertinent system limitations and
problems. He has a duty to be honest about his qualifications and about any circumstance that may
lead to a conflict of interest.

o Be fair and take action not to discriminate


The values of equality, tolerance and respect for others and the principles of equal justice govern this
imperative.
o Honour property rights including copyrights and patents
Violation of copyrights, patents, trade secrets and the terms of license agreement is prohibited by the
law in most circumstances. Even when software is not so protected, such violations are contrary to
professional behaviour. Copies of software should be made only with proper authorization.
Unauthorized duplication of materials must not be condoned.

o Give proper credit for intellectual property


Computing professionals are obligated to protect the integrity of intellectual property. Specifically, one
must not take credit for others' ideas or work, even in cases where the work has not been explicitly
protected by copyright, patent etc.

o Respect the privacy of others


Computing and communication technology enables the collection and exchange of personal
information on a scale unprecedented in the history of civilization. Thus there is increased potential
for violating the privacy of individuals and groups. It is the responsibility of professionals to maintain
the privacy and integrity of data describing individuals. This includes taking precautions to ensure the
accuracy of data, as well as protecting it from unauthorized access or accidental disclosure to
inappropriate individuals. Furthermore, procedures must be established to allow individuals to review
their records and correct inaccuracies.


o Honour confidentiality
The principle of honesty extends to issues of confidentiality of information whenever one has made an
explicit promise to honour confidentiality or, implicitly, when private information not directly related
to the performance of one's duties becomes available. The ethical concern is to respect all obligations
of confidentiality to employers, clients, and users unless discharged from such obligations by
requirements of the law or other principles of this code.

More specific professional responsibilities include:

o Strive to achieve the highest quality, effectiveness and dignity in both the process and
product of professional work.
o Acquire and maintain professional competence
o Know and respect existing laws pertaining to professional work
o Accept and provide appropriate professional review
o Give comprehensive and thorough evaluations of computer systems and their impacts,
including analysis of possible risks.
o Honour contracts, agreements and assigned responsibilities
o Improve public understanding of computing and its consequences
o Access computing and communication resources only when authorized to do so

Organizational leadership imperatives include:

o Articulate social responsibilities of members of an organizational unit and encourage full


acceptance of those responsibilities
o Manage personnel and resources to design and build information systems that enhance the
quality of working life
o Acknowledge and support proper and authorized uses of an organization’s computing and
communication resources
o Ensure that users and those who will be affected by a system have their needs clearly
articulated during the assessment and design of requirements; later the system must be
validated to meet requirements.
o Articulate and support policies that protect the dignity of users and others affected by a
computing system
o Create opportunities for members of the organization to learn the principles and limitations
of computer systems

Software Engineering Code of Ethics and Professional Practice

Software engineers shall commit themselves to making the analysis, specification, design,
development, testing and maintenance of software a beneficial and respected profession. In accordance
with their commitment to the health, safety and welfare of the public, software engineers shall adhere
to the following eight principles.

a) Public – software engineers shall act consistently with public interest.


b) Client and employer - software engineers shall act in a manner that is in the best interest
of their client and employer consistent with public interest.
c) Product – software engineers shall ensure that their products and related modifications
meet the highest professional standards possible.
d) Judgment – software engineers shall maintain integrity and independence in their
professional judgment.
e) Management – software engineering managers and leaders shall subscribe to and promote
an ethical approach to the management of software development and maintenance.
f) Profession – software engineers shall advance the integrity and reputation of the
profession consistent with the public interest.
g) Colleagues – software engineers shall be fair to and supportive of their colleagues.
h) Self – software engineers shall participate in lifelong learning regarding the practice of
their profession and shall promote an ethical approach to the practice of the profession.

Ethical Theories
Ethical theories give us the foundation from which we can determine what course of action to take
when an ethical issue is involved. At the source of ethics lies the idea of reciprocity. There are two
fundamental approaches to ethical reasoning:

1. Consequentialist theories

Consequentialist theories tell us to choose the action with the best possible consequences. Thus, the
utilitarian theory that represents this approach holds that our chosen action should produce the
greatest overall good for the greatest number of people affected by our decision. This approach is
often difficult to apply, since it is not easy to define what is good, or how to measure and compare the
resulting good.

2. Obligational (deontological) theories

Deontological theories argue that it is our duty to do what is right. Your actions should be such that
they could serve as a model of behaviour for others - and, in particular, you should act as you would
want others to act toward you. Our fundamental duty is to treat others with respect, and thus not to
treat them solely as a means to our own purposes.

Treating others with respect means not violating their rights. The principal individual rights are:

1. The right to life and safety
2. The right of free consent
3. The right to privacy
4. The right to private property
5. The right of free speech
6. The right of fair treatment
7. The right to due process


Ethical Issue in the Development and Use of Information Systems


The welfare of individuals and their specific rights need to be safeguarded in the environment of an
information society. The principal ethical issues of concern with regard to information systems have
been identified as the issues of:

1. Privacy
2. Accuracy
3. Property
4. Access

Tracing an ethical issue to its source, and understanding which individual rights could be violated,
helps in understanding the issue.

1. Privacy
Privacy is the right of individuals to retain certain information about themselves without disclosure
and to have any information collected about them with their consent protected against unauthorized
access.

Invasion of privacy is a potent threat in an information society. Individuals can be deprived of
opportunities to form desired professional and personal relationships, or can even be politically
neutralized through surveillance and gathering of data from the myriad databases that provide
information about them.

The Privacy Act serves as a guideline for a number of ethics codes adopted by various organizations.
The Act specifies the limitations on the data records that can be kept about individuals. The following
are the principal privacy safeguards specified:

1. No secret records should be maintained about individuals

2. No use can be made of the records for other than the original purposes without the individual's
consent.

3. The individual has the right of inspection and correction of records pertaining to him or her.

4. The collecting agency is responsible for the integrity of the record-keeping system

The power of information technology to store and retrieve information can have a negative effect on
the right to privacy of every individual. Computers and related technologies enable the creation of
massive databases containing minute details of our lives which can be assembled at a reasonable cost
and can be made accessible anywhere and at any time over telecommunications network throughout
the world.

Two database phenomena create specific dangers.

i. Database matching
Database matching makes it possible to merge separate facts collected about an individual in several
databases. If minute facts about a person are put together in this fashion in a context unrelated to the


purpose of the data collection and without the individual's consent or ability to rectify inaccuracies,
serious damage to the rights of the individual may result.
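A minimal sketch of database matching, assuming three hypothetical databases that all key their records on the same identifier (the databases, keys and field names are invented for illustration):

```python
# Illustrative database matching: facts collected separately about one
# person are merged on a shared identifier into a single profile.
# Databases, identifiers and fields are hypothetical.

medical = {"id-42": {"condition": "diabetes"}}
financial = {"id-42": {"credit_score": 480}}
retail = {"id-42": {"recent_purchase": "glucose monitor"}}

profile = {}
for database in (medical, financial, retail):
    profile.update(database.get("id-42", {}))

# None of the source databases held this combined picture on its own.
print(profile)
```

The privacy harm comes precisely from the merge step: each database alone is relatively innocuous, but the combined profile is used in a context none of the original collections anticipated.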

ii. Statistical databases


Statistical databases are databases that contain large numbers of personal records, but are intended to
supply only statistical information. A snooper, however, may deduce personal information by
constructing and asking a series of statistical queries that would gradually narrow the field down to a
specific individual.
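The snooper's technique can be shown with a classic "tracker" example. The records and the SUM-only query interface below are invented for illustration:

```python
# Illustrative inference attack on a statistical database that answers
# only aggregate (SUM) queries. Records are invented.

records = [
    {"name": "A", "dept": "IT", "salary": 50_000},
    {"name": "B", "dept": "IT", "salary": 55_000},
    {"name": "C", "dept": "Finance", "salary": 60_000},
]

def sum_salaries(predicate):
    # The database never reveals an individual record, only totals.
    return sum(r["salary"] for r in records if predicate(r))

# The snooper knows C is the only employee outside IT, so two legitimate
# statistical queries isolate C's salary exactly:
leaked = sum_salaries(lambda r: True) - sum_salaries(lambda r: r["dept"] == "IT")
print(leaked)  # 60000 -- C's individual salary, deduced from aggregates alone
```

Neither query violates the "statistics only" rule, yet their difference discloses one person's record, which is why statistical databases need query-set-size restrictions or noise-adding controls.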

Legislation and enforcement in the area of privacy in the United States are behind those in a number
of other countries. The countries of the European Union offer particularly extensive legal safeguards
of privacy. In the environment of business globalization, this creates difficulties in the area of
transborder data flow, or transfer of data across national boundaries. Countries with more stringent
measures for privacy protection object to the transfer of personal data into states where this
protection is more lax. The United Nations has stated the minimum privacy guarantees recommended
for incorporation into national legislation.

Privacy protection relies on the technical security measures and other controls that limit access to
databases and other information stored in computer memories or transmitted over the
telecommunication networks.

2. Accuracy
Pervasive use of information in our societal affairs means that we have become more vulnerable to
misinformation. Accurate information is error-free, complete, and relevant to the decisions that are to
be based on it.

Professional integrity is one of the guarantors of information accuracy. An ethical approach to information accuracy calls for the following:

1. A professional should not misrepresent his or her qualifications to perform a task.

2. A professional should indicate to his or her employer the consequences to be expected if his or her judgment is overruled.

3. System safeguards, such as control audits are necessary to maintain information accuracy. Regular
audits of data quality should be performed and acted upon.

4. Individuals should be given an opportunity to correct inaccurate information held about them in
databases.

5. Contents of databases containing data about individuals should be reviewed at frequent intervals,
with obsolete data discarded.

3. Property


The right to property is largely secured in the legal domain. However, intangibility of information is
at the source of dilemmas which take clarity away from the laws, moving many problems into the
ethical domain. At issue primarily are the rights to intellectual property: the intangible property that
results from an individual's or a corporation's creative activity.

Intellectual property is protected by three mechanisms:

a. Copyright
A method of protecting intellectual property that protects the form of expression (for example, a given
program) rather than the idea itself (for example, an algorithm).

b. Patent
It is a method of protecting intellectual property that protects a non-obvious discovery falling within
the subject matter of the Patent Act.

c. Trade secret
Intellectual property protected by a license or a non-disclosure agreement

Computer programs are valuable property and thus are the subject of theft from computer systems.
Unauthorized copying of software (software piracy) is a major form of software theft because
software is intellectual property which is protected by copyright law and user licensing agreements.

4. Access
It is the hallmark of an information society that most of its workforce is employed in the handling of
information and most of the goods and services available for consumption are information-related.
Three necessities for access to the benefits of an information society include:

1. The intellective skills to deal with information

2. Access to information technology

3. Access to information

One should strive to broaden the access of individuals to the benefits of information society. This
implies broadening access to skills needed to deal with information by further enabling literacy,
access to information technology, and the appropriate access to information itself.

Intensive work is being done on developing assistive technologies - specialized technologies that enhance access of the handicapped to information technology and, in many cases, to the world at large.

Impacts of Information Technology on the Workplace


Due to the pervasive use of information technology and its dual potential to be used for good or bad,
we need to consider the specific issues that arise when people work with information systems.

It has been established that people experience job satisfaction when:

1. They have a sense that their work is meaningful


2. They feel a sense of responsibility for the results of their work and have a sense of autonomy and
control

3. They receive feedback about their accomplishments

Sociotechnical design of information systems is performed in recognition of these crucial factors of employee motivation. Information systems with very similar functionalities can have positive or negative consequences in the workplace. The same information technology can have different impacts, depending on the way it is used in an organization.

Some of the negative effects of information technology include:

1. Use of computers has displaced workers in middle management (whose primary purpose
was to gather and transfer information) and in clerical jobs.
2. Some categories of work have virtually disappeared, which has created unemployment for a
number of workers
3. May create a permanent underclass that will not be able to compete in the job market
4. Computer crime is a growing threat (money theft, service theft, software theft, data
alteration or theft, computer viruses, malicious access, crime on the internet).
5. Health issues
6. Societal issues (privacy, accuracy, property, and access)
Some of the positive effects of information technology include:

1. The ability to work from remote locations.
2. Access for individuals with disabilities
3. Medical diagnosis
4. Computer-assisted instruction (learning aids)
5. Environmental quality control
6. Law enforcement
Emerging Technologies: Opportunities and Threats in the Workplace
The emerging new technologies keep offering opportunities to improve the effectiveness and
efficiency of people's work - and present new threats to their rights.

Health issues - the use of technology in the workplace raises a variety of health issues. Heavy use of
computers is reportedly causing health problems such as:

1. Job stress
2. Damaged arm and neck muscles
3. Eye strain
4. Radiation exposure
5. Death by computer-caused accidents

Ergonomics - solutions to some health problems are based on the science of ergonomics, sometimes called human factors engineering. The goal of ergonomics is to design healthy work environments that are safe, comfortable, and pleasant for people to work in, thus increasing employee morale and productivity.


Ergonomics stresses the healthy design of the workplace, workstations, computers and other
machines, and even software packages. Other health issues may require ergonomic solutions
emphasizing job design, rather than workplace design.

Ethical behaviour of employees is highly dependent on the corporate values and norms - on the
corporate culture as a whole. Open debate of ethical issues in the workplace and continuing self-
analysis help keep ethical issues in focus. Many corporations have codes of ethics and enforce them as
part of a general posture of social responsibility.

THE MORAL DIMENSION OF MANAGEMENT INFORMATION SYSTEMS

A moral dimension is a social dimension that lies beyond a person's consciousness and between him and others.

Morality is a social attribute. Whatever I count as 'moral' on my own is irrelevant, because I am the 'doer': it comes from my ego. Only a person whom I 'acted upon' can say whether he felt my action to be 'moral' or not; he can teach me, and through that I will go through a change.

Meaning, morality cannot exist in a singular dimension (between me and myself), but is sustained in duality, in relativity, when two forces gravitate towards each other (when my friend teaches me of morality). Hence the 'moral dimension' can be revealed only within a group of people.

Laudon proposes five moral dimensions of the information age:

1) Rights and obligations of information
What rights do individuals and corporations have over information about themselves? What legal means exist to protect those rights? And what obligations accompany that information? These rights include:

Privacy is the right of individuals to be left in peace. Technology and information systems threaten the privacy of individuals by making invasion cheap, efficient and effective.

Due process requires the existence of a set of rules or laws that clearly define how we treat information about individuals, and that appeal mechanisms are available.

2) Property rights
How should the classical concepts of patent and intellectual property be carried over into digital technology? What are these rights and how can they be protected? Information technology has hindered the protection of property because it is very easy to copy or distribute computer information over networks. Intellectual property is subject to protection under three mechanisms:

Trade secrets: Any intellectual work product used for business purposes may be classified as a trade secret.

Copyright: A concession granted by law to protect creators of intellectual property against copying by others for any purpose for a period of 28 years.

Patents: A patent gives the holder, for 17 years, an exclusive monopoly on the ideas behind an invention.

3) Responsibility and control
Who is responsible for, and who controls, the use and abuse of information about people? The new information technologies are challenging existing laws and social practices regarding liability, in order to hold individuals and institutions accountable for their actions.

4) Quality systems
What standards of data and information processing should be required to ensure the protection of individual rights and society? Individuals and organizations can be held responsible for avoidable and foreseeable consequences that it is their obligation to foresee and correct.

5) Quality of life
What values should be preserved and protected in a society based on information and knowledge?
What institutions should protect and which should be protected? The negative social costs of
introducing information technologies and systems are growing along with the power of technology.
Computers and information technologies can destroy valuable elements of culture and society, while
providing benefits.

These five dimensions represent useful guidelines for the ethical questions a company should consider and answer when introducing a new technology.

THE LEGAL ISSUES IN MANAGEMENT INFORMATION SYSTEMS


This topic discusses the legal protection of information and the security issues of computer data and
electronic information systems and is organised into four parts: First, it focuses briefly on the basic
conceptual distinction between information and data, providing a basis of understanding of the
primary object of legal and technical means of protection. Second, access to Government information
will be discussed. Third, protection of personal data in the administration of criminal justice will be
presented. Finally, security of data and network communications will be explored.

Information and Data: Legal Protection of Information and Data


Information and Data

Data is a formal representation of concepts, facts or instructions. Information is the meaning that data
has for human beings. Data has, therefore, two different aspects: as potential information for human
beings or as instructions meant for a computer.

Information is not material, but a process or relationship that occurs between a person's mind and some sort of stimulus. Information, therefore, is a subjective notion that can be drawn from its objective representation, which we call data.

Different information may be received from the same data. As in the various natural languages the
same word may have different meanings, so in computer programming the same byte or set of digits
(e.g. 01100010) may serve as a carrier of different content.
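The point above can be shown in a few lines (in Python, purely for illustration): the same bit pattern yields different information under different interpretations.

```python
# The same byte 01100010 carries different content depending on how it is read.
pattern = 0b01100010

as_number    = pattern           # interpreted as an integer
as_character = chr(pattern)      # interpreted as a text character
as_raw_data  = bytes([pattern])  # interpreted as raw data

print(as_number, as_character, as_raw_data)
```

The data (the bit pattern) is fixed; the information drawn from it depends entirely on the interpretation applied.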

Legal Protection of Information and Data


The new legal doctrine of information law and law on information technology recognises information
as a third fundamental factor besides matter and energy. This concept realises that modern
information technology alters the characteristics of information, especially by strengthening its
importance and by treating it as an active factor that works without human intervention in automatic
processing systems. In this new approach, it is obvious that the legal evaluation of corporeal and
incorporeal (information) objects differs considerably.

Information, being intangible and an entity that can be possessed, shared and reproduced by many, is not capable of being owned as property in the way most corporeal objects are. Unlike corporeal objects, which are more exclusively attributed to certain persons, information is rather a public good. As such it must principally flow freely in a free society. This basic principle of free flow of information is essential for the economic and political system, and indispensable for the government's accountability and the maintenance of a democratic order.

A second difference between the legal regime of tangibles and intangibles is that the protection of information has not only to consider the economic interests of its proprietor or holder, but at the same time must preserve the interests of those who are concerned with the contents of the information - an aspect resulting in new issues of privacy protection.

A third difference originates from the vulnerability of data to manipulation, interception and erasure - properties that constitute a major concern of computer security and of the criminal law provisions on computer crime.

Access to Government Information


From Secrecy to Openness

In most countries, the disclosure of government documents is largely discretionary. Government agencies, at both the central and the local level, are rarely forthcoming with information unless it is in their interest. There are no general laws that provide a mechanism for public access.

Generally, access to government information can be defined as the availability for inspection or copying of both records and recordings possessed or controlled by a public authority. This mechanism came, for the first time in history, in eighteenth-century Sweden with the passage of the Act on Freedom of the Press (1766). After 1945 this regulatory approach was followed in other Scandinavian countries, in the United States (since 1966, when the Freedom of Information Act was enacted), and in several other countries. Among these are Australia, Canada, France, the Netherlands, and New Zealand. Some other countries have constitutional clauses relating to a right of access, but not always implementing legislation.

The route by which the promotion of the rights of access to official information has become a strong political issue is varied. Initially, the public's right to government information was found to be closely related to the concept of human rights. Because of its importance for democratic society, the public's right to information was even acknowledged to constitute a third generation of human rights, after the civil and political rights of the eighteenth century and the economic and social rights of the first half of the twentieth century. As it was stressed in the Council of Europe Recommendation on "Access by the Public to Government Records and Freedom of Information": "A parliamentary democracy can function adequately only if people in general and their elected representatives are fully informed"2.

The most recent emphasis, however, is on the commercial rather than human rights aspect of public
sector information. There is now a widespread recognition by the private sector of the commercial
value of much government information. Large data sets, such as land registers, company registers,
demographic statistics, and topographic information (maps) are routinely produced as a by-product of
the day-to-day functioning of public administration. Information is not an end in itself. Sound and
comprehensive information is needed if government is to frame workable public policies, plan
effective services and distribute resources fairly and equitably. Government information, therefore,
constitutes a resource of considerable importance. The potential of such data for exploitation via the
digital network was noted and encouraged.

Impact of Computerisation

Over the 1970s and 1980s, when computerisation of public sector information systems in the most
developed countries was in its infancy, there were fears that government agencies would use
computerisation as a technology of secrecy rather than a technology of freedom.

In fact, in some countries computerisation of government information had a strong impact on the way
the right of public access has been interpreted by the authorities. For example, when new
programming was necessary to extract information from computer systems, agencies and courts have
sometimes held that such programming is analogous to record creation, and is therefore not required
under the freedom of information laws, which only oblige agencies to search for available records. It is a common feature of these laws to grant access only to information which is available or can be made available through reasonable effort.

As electronic records became more common, the freedom of information laws proved to be less useful in the new environment. Because the wording of these laws usually provided access to paper records, an authority was not obliged to accommodate a requester's preference for access in an electronic form, for example a copy on computer tape or disk. There are well-known cases, especially in the United States, of government agencies refusing to make computerised records available to the parties concerned4.

Today, in the United States these definitional problems have successfully been solved. With the adoption of the Electronic Freedom of Information Act Amendments of 1996, Government information maintained in electronic format has become accessible to the public on an equal footing with paper-based documents. Although some national legislation still does not allow requesters to obtain data in machine-readable format, the commercialisation of public sector information is an ongoing development both in the United States and in most countries of Western Europe. Moreover, due to the traditional concept of the right of access as a right to request the handing out of identified documents, the right to search for documents has so far not been a recognised part of the principle of public domain.

The fast-growing information networks, the powerful search engines and, generally speaking, the retrieval possibilities of electronic information increase the significance of search rights as an integrated element of the traditional right of access.


New developments in hardware and software technology, such as relational databases and hypertext, not only enhance computer flexibility and responsiveness to unanticipated forms of requests, but also make it easy to compile and format information for network access. The cost in money and effort to share information is much lower. As a result, public access to government information can be enhanced.

The most recent event illustrating the tendency of making legal text databases freely available to
citizens is a decision of the Swedish parliament to make its on-line legal information service (Rixlex)
available to the public on a free of charge basis via the Internet.

Openness vs. Secrecy

Public access to official information does not prevent the Government from protecting information from disclosure for legitimate aims, as stipulated by legal provisions.

In the United States, nine exemptions permit the withholding of records to protect legitimate
government or private interests. Thus, national security information, trade secrets, law enforcement
investigative files, personal data, pre-decisional documents, and other categories of government
records can lawfully be denied to a FOIA requester. The early experience under the Act on Freedom
of Information shows some negative consequences of this legislation for effective law enforcement. It
was estimated that only 7 percent of the 30,000 FOIA requests received annually by the Department
of Justice came from media and other researchers. Many requests came from persons who were
obviously seeking improper personal advantage, including convicted offenders, organised crime
people, drug traffickers, and persons in litigation with the United States who are attempting to use the
FOIA to circumvent the rules of discovery contained in the rules of criminal or civil procedure.
Consequently, the ability of the federal, state, and local governments to combat crime was thought to
be affected, mainly by a decline in the number of informants. A highly detailed Swedish Secrecy Act
contains 16 chapters and more than a hundred articles.

These provide specific requirements of damage to the interest concerned, as well as a maximum period of time during which secrecy applies. For example, where the protection of personal circumstances of individuals is concerned, usually a term of 50 or 70 years is applicable. With regard to secret information on matters of national defence or foreign relations, a maximum period of 40 years has been established. In principle the restrictions laid down in the Secrecy Act are mandatory in nature, i.e. if a restriction applies the authority involved must refuse access. Under the United States Copyright Act, §105 (1994), the prohibition on copyright protection for United States Government works is not intended to limit protection abroad. Thus, under the Copyright Act, the Federal Government can seek copyright protection for its information in other countries.

In Germany and Switzerland, for instance, legislation and jurisprudence are not copyrighted. The Italian law explicitly bars statutes, regulations, rulings and the like from being copyrighted by the Italian Government, local authorities or foreign ones. In Turkey, legislation and jurisprudence are not copyrighted as far as they are published officially. Speeches are not copyrighted within the scope of mass communications; otherwise they are copyrighted. All other governmental works, such as reports, plans, maps, drawings etc., are copyrighted.

The legal nature of the restrictions based on secrecy interests differs among the various jurisdictions. In the United States of America, Denmark and France, for example, the limitations are not mandatory, as is the case in Sweden and the Netherlands, but are discretionary in nature. This means that if a restriction is applicable, the public authority concerned is under no obligation to give access to the information, but is nevertheless entitled to do so.

Data Protection in Computerisation in Criminal Justice


Computerisation of criminal justice has far-reaching implications for the human values that are involved in the automatic processing of personal data. The fears that computerisation of criminal justice can induce are mainly related to the potential for over-control of individuals, including possible breaches of their privacy through misuse of sensitive data about them recorded in computer files:

1. An application of increasingly sophisticated information-gathering devices for surveillance activities may reduce the individual's sense of security and liberty;

2. Accumulation of personal data in various databases connected through computer networks would make possible the creation of personality profiles, or so-called computer shadows, of the data subject;

3. The susceptibility of computerised information systems to unauthorised access to stored data, and their possible abuses, has constituted another cause of concern;

4. Use of information provided by centralised computer systems on large sectors of the population, who have no opportunity to inspect the accuracy of the information held, may also affect the legal position of data subjects in a way that is harmful to their civil liberties.

Firewall Technology
A firewall is one of several methods of protecting one's network from another, mistrusted network. It is deemed absolutely indispensable for Internet users who run their own World Wide Web site. The hardware and software that make up the firewall screen all traffic. The firewall can be thought of as a pair of mechanisms: one which blocks traffic, and one which permits traffic. Some firewalls permit only e-mail traffic through them, thereby protecting the network against attacks other than attacks against the e-mail service. Other firewalls provide less strict protection, and block services that are known to be problems.

Generally, firewalls are configured to protect against unauthenticated interactive log-ins from the
outside world.

This, more than anything, helps prevent vandals from logging into computers on the network. More
elaborate firewalls block traffic from the outside to the inside, but permit users on the inside to
communicate freely with the outside.
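The block/permit pair of mechanisms described above can be sketched as an ordered rule list with a default-deny fallback. The rules and service names below are hypothetical, chosen only to mirror the examples in the text:

```python
# Hypothetical firewall rule set: the first matching rule wins.
RULES = [
    ("inbound",  "email", "permit"),  # only e-mail traffic is let through
    ("inbound",  "login", "block"),   # unauthenticated log-ins are blocked
    ("outbound", "any",   "permit"),  # insiders may communicate freely outward
]

def filter_packet(direction, service):
    """Return the action the firewall takes for a piece of traffic."""
    for rule_direction, rule_service, action in RULES:
        if rule_direction == direction and rule_service in (service, "any"):
            return action
    return "block"  # default deny: traffic not explicitly permitted is blocked

print(filter_packet("inbound", "email"))    # permitted by the first rule
print(filter_packet("inbound", "ftp"))      # blocked by the default-deny rule
print(filter_packet("outbound", "web"))     # permitted: insiders go out freely
```

The default-deny fallback captures the stricter posture the text describes: anything not explicitly permitted from the outside is blocked.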

The most straightforward use of a firewall is to create a so-called internal site, one that is accessible only to computers within one's own local network. Then, all that needs to be done is to place the server inside the firewall.

As for web servers connected to the Internet, they need to be placed somewhere outside the firewall. From the point of view of the security of the organisation as a whole, the safest place to put such a server is outside the local network.


This is called a sacrificial lamb configuration. The server is at risk of being broken into, but at least when it is broken into it does not breach the security of the inner network. On the other hand, web pages at the server are vulnerable to unauthorised alteration and other forms of vandalism. An intermediate configuration gives the world access to public information while giving the internal network access to private documents.

However, the system with the really secret data should be isolated from the rest of the corporate
network, and should not be hooked up to the Internet at all.

Encryption
Encryption is the transformation of data into a form unreadable by anyone without a secret decryption key. Its purpose is to ensure privacy by keeping the information hidden from anyone for whom it is not intended, even those who can see the encrypted data. For example, one may wish to encrypt files on a hard disk to prevent an intruder from reading them. Encryption can also be used to protect e-mail messages and to verify the identity of the sending party.

The combination of advanced mathematical techniques with the enormous growth of the possibilities for automatic data processing has resulted in very strong cryptographic systems, which are almost impossible to break. In open and unsecured networks like the Internet, strong encryption has become one of the main tools for the protection of privacy, trust, access control and corporate security, to name only the basic possible applications of so-called public-private key encryption systems.

Under a more traditional single-key system, the same key is used both for encrypting and decrypting the message. Although this is reasonably secure, there is a risk that this key will be intercepted when the parties involved exchange keys. A public key system, however, does not necessitate the exchange of a secret key in the transmission of messages. The sender encrypts the message with the recipient's freely disclosed, unique public key. The recipient, in turn, uses his unique private key to decrypt the message. It is also possible to encrypt messages with the sender's private key, allowing anyone who knows the sender's public key to decrypt the message. This process is crucial to creating a digital signature that provides verification of the identity of the message sender.
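The single-key (symmetric) system described above can be illustrated with a deliberately toy cipher. The XOR scheme below is for illustration only and is not secure; real symmetric systems use algorithms such as AES, and public-key systems require mathematics beyond this sketch:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR against the repeated key is its own inverse, so the one function
    # both encrypts and decrypts -- the defining trait of a single-key system.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key        = b"shared-secret"            # the same key held at both ends
plaintext  = b"meet at noon"
ciphertext = xor_crypt(plaintext, key)   # unreadable without the key
recovered  = xor_crypt(ciphertext, key)  # the same key restores the message

print(recovered == plaintext)
```

Notice that the key must somehow reach both parties, which is exactly the interception risk the text attributes to single-key systems and which public-key systems avoid.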

Currently, the two main cryptographic systems providing for secure e-mail are Pretty Good Privacy (PGP) and Privacy Enhanced Mail (PEM). Despite export restrictions, PGP is widely available outside the United States in different versions, becoming a de facto international standard24. It is available for most computers and can be easily configured to work in several different languages, including Spanish, French and German.

Today, an acute and mostly unresolved conflict exists between the private interests in protecting the secrecy of information by means of encryption, and the interests of investigating authorities in obtaining timely access to the content of seized or intercepted data. To minimise the negative effects of the use of cryptography on the investigation of criminal offences, two different approaches have been developed at the national level. The legislation of France and the Russian Federation prohibits the use, distribution, development and export of any cryptographic tool without a license granted by a special government agency. An alternative approach, supported by a number of the most developed countries and some international organisations such as the Organisation for Economic Cooperation and Development, the Council of Europe, the European Commission and the International Chamber of Commerce, is the key-escrow scheme, based on the cooperation of one or more trusted third parties who will hold keys and be required to hand them over to law enforcement authorities under certain conditions.

Encryption is often recommended as the solution to all security problems. Unfortunately, this is not
the case. Encryption does nothing to protect against many common methods of attack including those
that exploit bad default settings or vulnerabilities in network protocols or software. Information
security requires much more than just encryption. Authentication, configuration management, good
design, access controls, firewalls, auditing, security practices, and security awareness training are a
few of the other techniques needed.

REVISION EXERCISES
1. What are some of the responsibilities of end users of information systems?
2. What are some of the ethical theories related to information systems?
3. Discuss some of the ethics in the development and use of information systems.
4. What is the impact of information technology in the workplace?
5. What are the emerging trends and opportunities in the workplace?
6. What is the moral dimension of information systems?
7. What are the legal issues associated with the management of information systems?


CHAPTER 10
EMERGING ISSUES IN MANAGEMENT
INFORMATION SYSTEMS
INTRODUCTION
The demand for MIS skills has seen a tremendous resurgence in the past few years. Forecasts are
extremely strong with MIS skill sets dominating the top job roles expected to grow in the future. While
the MIS careers are expected to expand at an accelerated rate, the mix of skill requirements has changed
considerably. With the explosive growth of technology accompanying the usage of the Internet in the
late 1990s, the role of application development (programming) dominated the MIS field. Since then,
outsourcing has moved many of the low level programming jobs overseas. However, the increased
need for higher level technology jobs has become prevalent. Now, the web, communication and
database technologies are maturing and their usage has begun to extend throughout every area of
business practices. These new information technologies are being employed in expansive and creative
ways. The result is that the need for MIS professionals has increased -- but in a different way than in
decades past. MIS is now a "people skill" rather than a purely "technical skill".

ELECTRONIC COMMERCE
Electronic commerce (e-commerce) is the buying and selling of goods and services over the Internet.
Businesses on the Internet that offer goods and services are referred to as web storefronts. Electronic
payment to a web storefront can include check, credit card or electronic cash.

Web Storefronts

Web storefronts are also known as virtual stores. This is where shoppers can go to inspect merchandise
and make purchases on the Internet. Web storefront creation package is a new type of program to help
businesses create virtual stores. Web storefront creation packages (also known as commerce servers)
do the following:
 Allow visitors to register, browse, place products into virtual shopping carts and purchase
goods and services.
 Calculate taxes and shipping costs and handle payment options
 Update and replenish inventory
 Ensure reliable and safe communications
 Collect data on visitors
 Generate reports to evaluate the site’s profitability
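The order-processing tasks listed above can be illustrated with a short sketch that totals a virtual shopping cart. The tax rate, flat shipping charge and free-shipping threshold below are invented for the example and are not drawn from any particular storefront package.

```python
# Illustrative sketch: totalling a shopping cart the way a commerce server
# might. The tax rate and shipping rules are assumptions for this example.

def cart_total(items, tax_rate=0.16, flat_shipping=5.00, free_shipping_over=100.00):
    """items is a list of (unit_price, quantity) pairs."""
    subtotal = sum(price * qty for price, qty in items)
    tax = subtotal * tax_rate
    shipping = 0.0 if subtotal >= free_shipping_over else flat_shipping
    return round(subtotal + tax + shipping, 2)

cart = [(19.99, 2), (4.50, 1)]   # two products in the virtual shopping cart
total = cart_total(cart)          # subtotal + tax + flat shipping
```

A real commerce server would also record the visitor, replenish inventory and log the transaction for later profitability reports, as the list above notes.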

Web Auctions
Web auctions are a recent trend in e-commerce. They are similar to traditional auctions but buyers and
sellers do not meet face to face. Sellers post descriptions of products at a web site and buyers submit
bids electronically. There are two basic types of web auction sites:

a) Auction house sites
b) Person-to-person sites

a) Auction house sites
Auction house owners present merchandise, typically from companies’ surplus stocks. Auction house
sites operate in a similar way to a traditional auction. Bargain prices are not uncommon on this type of
site, and such sites are generally considered safe places to shop.

b) Person-to-person sites
The owner of the site provides a forum for buyers and sellers to gather. The owner typically
facilitates transactions rather than being involved in them. Buyers and sellers on this type of site must
be cautious.

Electronic Payment
The greatest challenge for e-commerce is how to pay for the purchases. Payment methods must be fast,
secure and reliable. Three basic payment methods now in use are:
(i) Checks
 After an item is purchased on the Internet, a check for payment is sent in the mail
 It requires the longest time to complete a purchase
 It is the most traditional and safest method of payment

(ii) Credit card


 Credit card number can be sent over the Internet at the time of purchase
 It is a faster and a more convenient method of paying for Internet purchases
 However, credit card fraud is a major concern for buyers and sellers
 Criminals known as carders specialize in stealing, trading and using credit card numbers
obtained over the Internet.

(iii) Electronic cash


 Electronic cash is also known as e-cash, cyber cash or digital cash
 It is the Internet’s equivalent of traditional cash
 Buyers purchase e-cash from a third party such as a bank that specializes in electronic currency
 Sellers convert e-cash to traditional currency through a third party
 It is more secure than using a credit card for purchases

ELECTRONIC DATA INTERCHANGE (EDI)


EDI is an electronic means for transmitting business transactions between organizations. The
transmissions use standard formats such as specific record types and field definitions. EDI has been in
use for 20 years, but has received significant attention within recent years as organizations seek ways
to reduce costs and be more responsive.
The EDI process is a hybrid process of systems software and application systems. EDI system software
can provide utility services used by all application systems. These services include transmission,
translation and storage of transactions initialised by or destined for application processing. EDI is an
application system in that the functions it performs are based on business needs and activities. The
applications, transactions and trading partners supported will change over time and the co-mingling of
transactions, purchase orders, shipping notices, invoices and payments in the EDI process makes it
necessary to include application processing procedures and controls in the EDI process.
EDI promotes a more efficient paperless environment. EDI transmissions may replace the use of
standard documents including invoices or purchase orders. Since EDI replaces the traditional paper
document exchange such as purchase orders, invoices or material release schedules, the proper controls
and edits need to be built within each company’s application system to allow this communication to
take place.
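The record-type and field-definition structure described above can be illustrated with a toy, delimiter-separated message. The segment names (HDR, PO1) and separators below are invented for the example; real standards such as ANSI X12 or UN/EDIFACT define far more elaborate formats.

```python
# Toy EDI-style message: segments separated by "~", fields by "*".
# This layout is invented for illustration and is not real X12/EDIFACT.

RAW = "HDR*ACME*2024-01-15~PO1*WIDGET-7*25*3.99~PO1*BOLT-2*100*0.15~"

def parse(message, seg_sep="~", field_sep="*"):
    """Split a message into segments, and each segment into its fields."""
    segments = [s for s in message.split(seg_sep) if s]
    return [seg.split(field_sep) for seg in segments]

order = parse(RAW)
line_items = [s for s in order if s[0] == "PO1"]          # purchase-order lines
order_value = sum(int(qty) * float(price) for _, _, qty, price in line_items)
```

As the text notes, application-level edits and controls would validate each field before such a transaction is accepted for processing.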

OUTSOURCING PRACTICES
Outsourcing is a contractual agreement whereby an organization hands over control of part or all of
the functions of the information systems department to an external party. The organization pays a fee
and the contractor delivers a level of service that is defined in a contractually binding service level
agreement. The contractor provides the resources and expertise required to perform the agreed service.
Outsourcing is becoming increasingly important in many organizations.
The specific objectives for IT outsourcing vary from organization to organization. Typically, though,
the goal is to achieve lasting, meaningful improvement in information systems through corporate
restructuring to take advantage of a vendor’s competencies.
Reasons for embarking on outsourcing include:
 A desire to focus on a business’ core activities
 Pressure on profit margins
 Increasing competition that demands cost savings
 Flexibility with respect to both organization and structure

The services provided by a third party can include:


 Data entry (mainly airlines follow this route)
 Design and development of new systems when the in-house staff do not have the requisite
skills or are otherwise occupied with higher-priority tasks
 Maintenance of existing applications to free in-house staff to develop new applications
 Conversion of legacy applications to new platforms. For example, a specialist company may
port an old application to run on a modern platform.
 Operating the help desk or the call centre

Possible disadvantages of outsourcing include:


 Costs exceeding customer expectations
 Loss of internal information system experience
 Loss of control over information system
 Vendor failure
 Limited product access
 Difficulty in reversing or changing outsourced arrangements

Business risks associated with outsourcing are hidden costs, contract terms not being met, service costs
not being competitive over the period of the entire contract, obsolescence of vendor IT systems and
the balance of power residing with the vendor. Some of the ways that these risks can be reduced are:
 Establishing measurable, jointly agreed goals and rewards for the partnership
 Utilizing multiple suppliers, or withholding a piece of business as an incentive
 Formalizing a cross-functional contract-management team
 Defining contract performance metrics
 Conducting periodic competitive reviews and benchmarking/benchtrending
 Implementing short-term contracts

Outsourcing is the term used to encompass three quite different levels of external provision of
information systems services. These levels relate to the extent to which the management of IS, rather
than the technology component of it, have been transferred to an external body. These are time-share
vendors, service bureaus and facilities management.

TIME-SHARE VENDORS
These provide online access to an external processing capability that is usually charged for on a
time-used basis. Such arrangements may merely provide the host processing capability onto which the
purchaser must load software. Alternatively, the client may be purchasing access to an application. The
storage space required may be shared or private. This style of provision of the ‘pure’ technology gives
a degree of flexibility, allowing ad hoc but processor-intensive jobs to be economically feasible.

SERVICE BUREAUS
These provide an entirely external service that is charged by time or by the application task. Rather
than merely accessing some processing capability, as with time-share arrangements, a complete task is
contracted out. What is contracted for is usually only a discrete, finite and often small, element of
overall IS.
The specialist and focused nature of this type of service allows a bureau to be cost-effective at the
tasks it performs, since the mass coverage allows up-to-date, efficiency-oriented facilities ideal for
routine processing work. The specific nature of the tasks done by service bureaus tends to make them
slow to respond to change, so this style of contracting out is a poor choice where fast-changing data is
involved.

FACILITIES MANAGEMENT (FM)


This is the semi-external management of IS provision. In the physical sense all the IS elements
may remain (or be created from scratch) within the client’s premises, but their management and
operation become the responsibility of the contracted body. FM contracts provide for management
expertise as well as technical skills. An FM deal is the legally binding equivalent of an internal service
level agreement. Both specify what service will be received, but they differ significantly in that, unlike
when internal IS fails to deliver, with an FM contract legal redress is possible. For most organizations
it is this certainty of delivery that makes FM attractive.

FM deals are increasingly appropriate for stable IS activities in those areas that have long been
automated so that accurate internal versus external cost comparisons can be made. FM can also be
appealing for those areas of high technology uncertainty since it offers a form of risk transfer. The
service provider must accommodate unforeseen changes or difficulties in maintaining service levels.

SOFTWARE HOUSES
A software house is a company that creates custom software for specific clients. They concentrate on
the provision of software services. These services include feasibility studies, systems analysis and
design, development of operating systems software, provision of application programming packages,
‘tailor-made’ application programming, specialist system advice etc. A software house may offer a
wide range of services or may specialize in a particular area.

INFORMATION RESOURCE CENTRES


Information Resource Centres co-ordinate all information activities within their areas of interest and
expertise. Information within that area is analysed, abstracted and indexed for effective storage,
retrieval and dissemination.

DATA WAREHOUSING
A data warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data in
support of management’s decision-making process.

Data warehouses organize around subjects, as opposed to traditional application systems which
organize around processes. Subjects in a warehouse include items such as customers, employees,
financial management and products. The data within the warehouse is integrated in that the final
product is a fusion of various other systems’ information into a cohesive set of information. Data in
the warehouse is accurate to some date and time (time-variant). An indication of time is generally
included in each row of the database to give the warehouse time variant characteristics. The warehouse
data is non-volatile in that the data which enters the database is rarely, if ever, changed. Change is
restricted to situations where accuracy problems are identified. Information is simply appended to or
removed from the database, but never updated. A query run by a decision-support analyst today will
therefore return exactly the same results when rerun a week from now.
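The time-variant, non-volatile behaviour described above can be sketched in a few lines. The table layout, subject and values below are illustrative only, not from any warehouse product.

```python
# Sketch of an append-only, time-variant store: every row carries a load
# timestamp and is never updated in place, so a query "as of" a fixed
# moment always returns the same rows.

from datetime import datetime, timezone

warehouse = []   # each row: (loaded_at, subject, value)

def append_row(subject, value, loaded_at):
    warehouse.append((loaded_at, subject, value))

def query_as_of(as_of):
    """Rerunning with the same as_of always yields identical results."""
    return [row for row in warehouse if row[0] <= as_of]

t1 = datetime(2024, 1, 1, tzinfo=timezone.utc)
t2 = datetime(2024, 6, 1, tzinfo=timezone.utc)
append_row("customer_count", 1200, t1)
append_row("customer_count", 1250, t2)   # appended, never overwritten
view = query_as_of(datetime(2024, 3, 1, tzinfo=timezone.utc))
```

The later load does not disturb the earlier "as of" view, which is what gives the warehouse its time-variant character.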

The business value of data warehousing includes:


 More cost-effective decision-making – allows the reallocation of staff and computing resources
otherwise required to support ad hoc inquiry and reporting.
 Better enterprise intelligence – increased quality and flexibility of analysis based on multi-tiered
data structures ranging from detailed transactions to high-level summary information
 Enhanced customer service – information can be correlated via the warehouse, thus resulting
in a view of the complete customer profile
 Enhanced asset/liability management – purchasing agents and financial managers often
discover cost savings in redundant inventory, as well as previously unknown volume discount
opportunities.
 Business processing reengineering – provides enterprise users access to information yielding
insights into business processes. This information can provide an impetus for fact-based
reengineering opportunities
 Alignment with enterprise right-sizing objectives – as the enterprise becomes flatter, greater
emphasis and reliance on distributed decision support will increase.

DATA MINING
This is the process of discovering meaningful new correlations, patterns, and trends by digging into
(mining) large amounts of data stored in warehouses, using artificial intelligence and statistical and
mathematical techniques.

Industries that are already taking advantage of data mining include retail, financial, medical,
manufacturing, environmental, utilities, security, transportation, chemical, insurance and aerospace
industries. Most organizations engage in data mining to:
 Discover knowledge – the goal of knowledge discovery is to determine previously hidden
relationships, patterns, or correlations from data stored in an enterprise’s database.
Specifically, data mining can be used to perform:

o Segmentation – e.g. grouping customer records for custom-tailored marketing
o Classification – assignment of input data to a predefined class; discovery and
understanding of trends; text-document classification
o Association – discovery of cross-sales opportunities
o Preferencing – determining the preferences of the majority of customers
 Visualize data – make sense out of huge data volumes e.g. use of graphics
 Correct data – identify and correct errors in huge amounts of data
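Segmentation, the first discovery task above, can be illustrated with a tiny one-dimensional k-means grouping of customers by annual spend. The spend figures and two-cluster set-up are invented for the example; production data-mining tools use far richer statistical techniques over many attributes.

```python
# Minimal 1-D k-means (k fixed at 2) to segment customers by annual spend.
# Illustrative only: real segmentation considers many attributes per customer.

def kmeans_1d(values, iters=20):
    centers = [min(values), max(values)]            # crude initialisation, k = 2
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:                            # assign to nearest center
            nearest = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]   # recompute centers
                   for i, c in enumerate(clusters)]
    return centers, clusters

spend = [120, 150, 130, 900, 950, 1000]             # two obvious segments
centers, segments = kmeans_1d(spend)
```

Each resulting segment could then receive custom-tailored marketing, as the segmentation bullet above describes.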

Applications of data mining include:


 Mass personalization – personalized services to large numbers of customers
 Fraud detection – using predictive models, an organization can detect existing fraudulent
behaviour and identify customers who are likely to commit fraud in the future.
 Automated buying decisions – data mining systems can uncover consumer buying patterns
and make stocking decisions for inventory control
 Credit portfolio risk evaluation – a data mining system can help perform credit risk analysis
by building predictive models of various portfolios, identifying factors and formulating rules
affecting bad risk decisions.
 Financial planning and forecasting – data mining provides a variety of promising techniques
to build predictive models forecasting financial outcomes on a macroeconomic scale.
 Discovery sales – for companies that excel in data mining, an innovative source of revenue is
the sale of some of their data mining discoveries.
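As a sketch of the fraud-detection application above, the rule weights below stand in for what a predictive model would learn from historical transactions; the attributes, weights and threshold are all invented for illustration.

```python
# Invented scoring rule of the kind a fraud-detection model might learn.
# Real systems fit these weights from historical transaction data.

def fraud_score(amount, foreign, night_time, new_merchant):
    score = 0.0
    if amount > 500:        # unusually large transaction
        score += 0.4
    if foreign:             # cross-border purchase
        score += 0.3
    if night_time:          # outside normal buying hours
        score += 0.2
    if new_merchant:        # merchant never used before
        score += 0.1
    return score

suspect = fraud_score(amount=750, foreign=True, night_time=False,
                      new_merchant=True) >= 0.5   # flag for manual review
```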

INFORMATION TECHNOLOGY AND THE LAW


This is an area that has received little attention in developing countries. However in developed
countries substantial efforts have been made to ensure that computers are not used to perpetrate
criminal activities. A number of laws have been passed to this end in these countries. In
Kenya, the law is yet to reflect clearly how computer crime is to be dealt with.

The Internet does not create new crimes but causes problems of enforcement and jurisdiction. The
following discussion shows how a country like England deals with computer crime through legislation
and may offer a point of reference for other countries.

COMPUTERS AND CRIME


Computers are associated with crime in two ways:
1. They facilitate the commission of traditional crimes. This does not usually raise new legal issues.
2. They make possible new forms of “criminal” behaviour, which have raised new legal issues.

Computer crime usually takes the form of software piracy, electronic break-ins and computer
sabotage, be it industrial, personal or political.

Fraud and Theft

Computer fraud is any fraudulent behaviour connected with computerization by which someone
intends to gain a financial advantage. The different kinds of computer fraud include:

(i) Input fraud – entry of unauthorized instructions, alteration of data prior to entry or entry
of false data. Requires few technical skills.
(ii) Data fraud – alteration of data already entered on computer, requires few technical skills.
(iii) Output fraud – fraudulent use of or suppression of output data. Less common than input
or data fraud but evidence is difficult to obtain.
(iv) Program fraud – creating or altering a program for fraudulent purposes. This is the real
computer fraud and requires technical expertise and is apparently rare.

The legal response prior to 1990 was as follows:


 Direct benefit – use of a computer to directly transfer money or property. This is traditional theft.
This criminal behaviour is tried under traditional criminal law e.g. governed by Theft Act 1968 in
England, common law in Scotland.
 Indirect benefit – obtaining by deception. E.g. Theft Act of 1968 and 1978 deals with dishonestly
obtaining property or services by deception.
 Forgery – the Forgery and Counterfeiting Act 1981 defines it as making a false instrument
intending to pass it off as genuine.
 Theft of information – unauthorized taking of “pure” information is not legally theft in England
and Scotland because information is not regarded as property and offence of theft requires that
someone is deprived of his property.

Damage to Software and Data


It is possible to corrupt or erase data without apparently causing any physical damage. In
England, the Criminal Damage Act 1971 states that a person who without lawful excuse destroys or
damages any property belonging to another, intending to destroy or damage such property, shall be
guilty of an offence.

Hacking

Hacking is gaining unauthorized access to computer programs and data. This was not criminal in
England prior to the Computer Misuse Act 1990.

Computer Misuse Act 1990

It is not a comprehensive statute for computer crime and does not generally replace the existing
criminal law. It however creates three new offences.

 The Unauthorized Access Offence


A person is guilty of an offence if he causes a computer to perform any function with intent to secure
access to any program or data held in any computer and the access he intends to secure is unauthorized,
and he knows at the time when he causes the computer to perform the function that this is the case.
Maximum penalty is 6 months imprisonment and/or maximum £5,000 fine.

 The Ulterior Intent Offence


A person is guilty of this offence if he commits the Unauthorized Access Offence with intent to commit
an offence or to facilitate the commission of such an offence (whether by himself or another person).
Maximum penalty for Ulterior Intent Offence is 5 years imprisonment and/or unlimited fine.

 The Unauthorized Modification Offence


A person is guilty of this offence if he does any act which causes an unauthorized modification of the
contents of any computer and at the time he does the act he has the requisite intent (intent to impair
operation or hinder access) and the requisite knowledge (knowledge that actions are unauthorized).

Computers and Pornography

Pornography is perceived as one of the major problems of computer and Internet use. The use of
computers and the Internet has facilitated distribution of and access to illegal pornography, but has
not created many new legal issues. Specific problems and how they are addressed include:

a. Pseudo-photographs
These are images combined and edited to make a single image. The Criminal Justice Act 1988 and the
Protection of Children Act 1978 were amended to extend certain indecency offences to
pseudo-photographs (where the image appears to be an indecent image of a child).

b. Multimedia pornography
Video Recordings Act 1984: supply of video recordings without classification certificate is an offence.

Cyberstalking
Using a public telecommunication system to harass another person may be an offence under the
Telecommunications Act 1984. Pursuing a course of harassing conduct is an offence under the
Protection from Harassment Act 1997.

INTELLECTUAL PROPERTY RIGHTS


Intellectual property (IP) refers to creations of the mind: inventions, literary and artistic works, and
symbols, names, images, and designs used in commerce. In Kenya, the Industrial Property Act 2001
governs industrial property. Section 3 of the Act establishes the Kenya Industrial Property
Institute (KIPI), a body corporate with perpetual succession and a common seal.

The functions of the Institute shall be to :


a) Consider applications for and grant industrial property rights;
b) Screen technology transfer agreements and licenses;
c) Provide the public with industrial property information for technological and
economic development; and
d) Promote inventiveness and innovativeness in Kenya.

Intellectual Property is divided into two categories:


1. Industrial property, which includes inventions (patents), trademarks, industrial designs, and
geographic indications of source.
2. Copyright, which includes literary and artistic works such as novels, poems and plays, films,
musical works, artistic works such as drawings, paintings, photographs and sculptures, and
architectural designs. Rights related to copyright include those of performing artists in their
performances, producers of phonograms in their recordings, and those of broadcasters in their
radio and television programs.

Categories of Intellectual property rights

Rights differ according to the subject matter being protected, the scope of protection and the manner
of creation. They broadly include:

 Patents – a patent is the monopoly to exploit an invention for up to twenty years (in UK). Computer
programs as such are excluded from patenting – but may be patented if applied in some technical
or practical manner. The process of making semiconductor chips falls into the patent regime.
 Copyrights – a copyright is the right to make copies of a work. Subject matter protected by
copyrights include:

o Original literary, dramatic, musical and artistic works


o Sound recordings, films, broadcasts and cable programs
o Typographical arrangement of published editions

Computer programs are protected as literary works. Literal copying is the copying of program code,
while non-literal copying is judged on objective similarity and “look and feel”. Copyright also covers
most material on the Internet, raising issues such as linking (problems caused by deep links), framing
(displaying a website within another site), caching and service-provider liability.

 Registered designs
 Trademarks – A trademark is a sign that distinguishes goods and services from each other.
Registration gives partial monopoly over right to use a certain mark. Most legal issues of
trademarks and information technology have arisen from the Internet such as:
o Meta tags – use of a trademarked name in a meta tag by someone not entitled to use it may
be infringement.
o Search engines – sale of “keywords” that are also trademarked names to advertisers may
be infringement
o Domain names – involves hijacking and “cybersquatting” of trademarked domain names
 Design rights
 Passing off
 Law of confidence
 Rights in performances

Conflicts of Intellectual Property


Some of the conflicts of intellectual property include:
a) Plagiarism
b) Piracy
c) Repacking data and database
d) Reverse Engineering
e) Copying in transmission

a) Plagiarism
The Internet has increased plagiarism. Plagiarism is academically dishonest because copying does not
develop writing and synthesis skills. One must give credit to the original author.

b) Piracy
In 1994 an MIT student was indicted for placing commercial software on a website for copying purposes.
The student was accused of wire fraud and the interstate transportation of stolen property. The case was
thrown out on a technicality, since the student did not benefit from the arrangement, did not
download the software himself, and his offence did not come under any existing law.
Software publishers estimate that more than 50% of the software in the US is pirated, and 90% in some
foreign countries. In the US, software companies can copyright software and thus control its
distribution. It is illegal to make copies without authorization.
c) Repackaging data and databases

A company produced a CD-ROM containing a large compilation of phone numbers. A university
student put this CD-ROM on his website. The company sued, saying the student had violated the
shrink-wrap license agreement that came with the CD-ROM. Governments have been asked for more
laws to copyright databases.
d) Reverse Engineering
Interfaces are often incomplete, obscure and inaccurate, so developers must look at what the code really
does. Reverse engineering is often a necessity for reliable software design. Companies doing reverse
engineering must not create competing products. Courts have allowed reverse engineering under
certain restrictions.
e) Copying in transmission
In “store and forward” networks, a network node receives data in transmission, stores it and forwards
it to the next node until the data reaches its destination. Every intermediate node gets a copy: who
archives them? Are the intermediate copies a violation of copyright? If users email pictures or
documents which contain trademarks or copyrighted materials, do email copies on servers put the
server’s company in jeopardy?

LIABILITY FOR INFORMATION TECHNOLOGY


Liability may arise out of sale/supply of defective software or liability for online information.
Liability for defective software may arise out of contractual or non-contractual terms. A contract is a
voluntary legally binding agreement between two or more parties. Parties may agree as they may wish
subject to legislation such as the Sale of Goods Act. The legislation limits contractual freedom and
imposes terms and conditions in certain kinds of contracts. The question that usually arises is whether
software is ‘goods’ or ‘services’. Mass-produced software packages are generally goods, but
custom-written or modified software is a service. Non-contractual liability is based on negligence. The
law of negligence is based on the principle that a person should be liable for his careless actions where
this causes loss or damage to another. To bring a successful action for negligence, the pursuer needs
to prove that the defender owed him a duty of care.
Liability for online information involves defective information and defamation. Where a person acts
on information given over the Internet and suffers a loss because the information was inaccurate, will
anyone be liable? Two problems arise. First, a person who puts information on the Internet will only
be liable if he owes a duty of care to the person who suffers the loss. Second, damage caused in this
way will normally be pure economic loss, which cannot usually be claimed for in delict (tort).
However, there is a limited exception to this general principle in respect of negligent misstatement.
This is where according to Hedley Byrne & Co v Heller & Partners:
 the person giving the advice/information represented himself as an expert.
 The person giving the advice/information knew (or should have known) that the recipient was
likely to act on it, and
 The person giving the advice/information knew (or should have known) that the recipient of
information was likely to suffer a loss if the information was given without sufficient care.
Can an Internet Service Provider (ISP) be liable for defective information placed by someone else? An
ISP may be regarded as a publisher. Traditional print publishers have been held not to be liable for
inaccurate information contained in the books they publish. But an ISP may be liable if it is shown that
it had been warned that the information was inaccurate and did nothing to remove it.
Defamatory statements may be published on the WWW, in newsgroups and by email. The author of
the statements will be liable for defamation, but may be difficult to trace or not worth suing. However,
employers and Internet service providers may also be liable. Defamation is a delict (tort) and employers
are vicariously
liable for delicts committed by their employees in the course of their employment. Many employers
try to avoid the possibility of actionable statements being published by their staff by monitoring email
and other messages. Print publishers are liable for defamatory statements published by them, whether
they were aware of them or not. ISPs could be liable in the same way.

TERMINOLOGY
Data Mart
A data mart is a repository of data gathered from operational data and other sources that is designed to
serve a particular community of knowledge workers. In scope, the data may derive from an
enterprise-wide database or data warehouse or be more specialized. The emphasis of a data mart is on
meeting
the specific demands of a particular group of knowledge users in terms of analysis, content,
presentation, and ease-of-use. Users of a data mart can expect to have data presented in terms that are
familiar.

In practice, the terms data mart and data warehouse each tend to imply the presence of the other in
some form. However, most writers using the term seem to agree that the design of a data mart tends to
start from an analysis of user needs and that a data warehouse tends to start from an analysis of what
data already exists and how it can be collected in such a way that the data can later be used.

A data warehouse is a central aggregation of data (which can be distributed physically); a data mart is
a data repository that may derive from a data warehouse or not and that emphasizes ease of access and
usability for a particular designed purpose. In general, a data warehouse tends to be a strategic but
somewhat unfinished concept; a data mart tends to be tactical and aimed at meeting an immediate need.
In practice, many products and companies offering data warehouse services also tend to offer data mart
capabilities or services.

REVISION EXERCISES
1. Name the goals that are achieved through the implementation of a computer network.
2. List three advantages of adopting network protocols.
3. E-mail communication has become a popular mode of communication. What advantages do
users of e-mail gain from using this mode of communication?

4. Briefly define e-commerce.


5. Define the following terms:
(i) Multiplexors
(ii) Front end processors

(iii) Cluster controllers


(iv) Protocol converters
(v) Spools
(vi) Buffers
6. The common models of e-commerce are B2C (Business-to-Customer) and B2B (Business to
Business). Describe three emerging areas of e-commerce.
7. What does ISO/OSI reference model stand for and what is its significance in computer
networks?
8. Expand the following acronyms:
a) TCP/IP
b) FTP
c) HTTP
d) HTML
e) WWW
9. Describe the unique features of e-commerce as opposed to traditional commerce.
10. Name the risks associated with outsourcing and suggest possible ways of eliminating or
reducing such risks.
11. List four factors that have led to the surge and popularity of the Internet.
12. Define information superhighway.
13. Briefly describe four types of computer crime
14. List the various layers of the ISO/OSI reference model and identify one functionality of each
layer.
15. Describe the Client/Server Model.
16. Define Information Resource Centre (IRC).
17. What is a software house?
18. Define the following terms:
a. Circuit switching networks
b. Packet switching networks
c. Message switching networks
d. Non switching networks
19. List the reasons as to why businesses engage in outsourcing.
20. What is software piracy? Suggest three ways of reducing software piracy.
21. Differentiate between a distributed system and a computer network.
22. List four services offered by online service providers
23. Internet addresses are classified by domains. Most domain names are general categories of
the type of organization. What do the following Internet address extensions mean:
a. .edu
b. .com
c. .gov
d. .net
e. .org
24. Clearly define privacy and confidentiality. What different aspects of privacy do different
government legal instruments and legislation handle?
25. List two advantages of managing computer communication through layered protocols
