Management Information System CPA
CHAPTER 1
INTRODUCTION TO INFORMATION COMMUNICATION
TECHNOLOGY (ICT)
SYNOPSIS
Introduction
Overview of Computer Systems
Overview of Components of Information Communication Technology
Information Communication Technology Personnel and Information Communication Structure
Role of ICT in Business Environments
Information Centers
INTRODUCTION
Information and Communications Technology (ICT) is often used as an extended synonym for
information technology (IT), but is a more specific term that stresses the role of unified
communications and the integration of telecommunications (telephone lines and wireless signals),
computers as well as necessary enterprise software, middleware, storage, and audio-visual systems,
which enable users to access, store, transmit, and manipulate information.
The phrase ICT had been used by academic researchers since the 1980s, but it became popular after it
was used in a report to the UK government by Dennis Stevenson in 1997 and in the revised National
Curriculum for England, Wales and Northern Ireland in 2000.
The term ICT is now also used to refer to the convergence of audio-visual and telephone networks
with computer networks through a single cabling or link system. There are large economic incentives
(huge cost savings due to elimination of the telephone network) to merge the audio-visual, building
management and telephone network with the computer network system using a single unified system
of cabling, signal distribution and management.
The term Info-communications is used in some cases as a shorter form of information and
communication(s) technology. In fact info-communications is the expansion of telecommunications
with information processing and content handling functions on a common digital technology base.
What is a computer?
A computer is an information-processing machine. It may also be defined as a device that works
under the control of stored programs automatically accepting, storing and processing data to produce
information that is the result of that processing.
The information processed by a computer may take several forms:
Data – e.g. invoices, sales ledger and purchase ledger, payroll, stock controls etc.
Text – widely available in many offices with microcomputers
Graphics – e.g. business graphs, symbols
Images – e.g. pictures
Voice – e.g. telephone
Processing includes creating, manipulating, storing, accessing and transmitting.
Advantages of Using Computers
a) Speed
Computers have higher processing speeds than other means of processing, measured as number of
instructions executed per second.
b) Accuracy
Computers are not prone to errors. So long as the programs are correct, they will always give correct
output. A computer is designed in such a way that many of the inaccuracies, which could arise due to
the malfunctioning of the equipment, are detected and their consequences avoided in a way, which is
completely transparent to the user.
c) Consistency
Given the same data and the same instructions computers will produce exactly the same answer every
time that particular process is repeated.
d) Reliability
Computer systems are built with fault tolerance features, meaning that failure of one of the
components does not necessarily lead to failure of the whole system.
e) Memory capability
A computer has the ability to store and access large volumes of data.
f) Processing capability
A computer has the ability to execute millions of instructions per second.
Areas Where Computers Are Used
a. Communication
Digital communication, which uses computers, is popular and is being adopted worldwide as opposed
to analogue communication, which uses the telephony system. Computers have also enhanced
communication through email communication, electronic data interchange, electronic funds transfer,
Internet etc.
b. Banking
The banking sector has incorporated computer systems in such areas as credit analysis, fund transfers,
customer relations, automated teller machines, home banking, and online banking.
c. Organizational management
The proliferation of management information systems has greatly aided the processes of managerial
planning, controlling, directing as well as decision-making. Computers are used in organizations for
transaction processing, managerial control as well as decision-support. Other specific areas where
computer systems have been incorporated include sales and marketing, accounting, customer service
etc.
d. Education
Computers incorporate databases of information that are useful in organizing and disseminating
educational resources. E-learning and virtual or distributed classrooms have enabled the teaching
industry to reach students globally. Computers are also used for scoring uniform tests done in schools,
for school administration and for computer-aided instruction.
e. Entertainment
Use of computers in the entertainment industry has increased tremendously over the years. Computers
enable high-quality storage of motion pictures and music files using high-speed and efficient digital
storage devices such as CDs, VCDs and DVDs. The Internet is also a great source of entertainment
resources. Computer games have also become a major source of entertainment.
f. Retailing
Computers are used in point of sale systems and credit card payment systems as well as stock
inventories.
g. Home appliances
Computers (especially embedded computers or microprocessors) are included in household items for
reasons of economy and efficiency of such items. Major appliances such as microwave ovens, clothes
washers, refrigerators and sewing machines are making regular use of microprocessors.
h. Reservation systems
Guest booking, accommodation and bills accounting using computers in hotels have made these
processes more efficient and faster. Airline computer reservation systems have also enhanced and
streamlined air travel across major airlines. Major players in the industry have also adopted online
reservation systems.
Characteristics of Computers
Some of the characteristics of computers include:
1. Speed – a computer is a very fast machine. It can perform in a very few seconds the amount of
work that a human being can do in a year if he/she worked day and night doing nothing else.
2. Accuracy – the computer accuracy is consistently high.
3. Diligence – a computer is free from monotony, tiredness and lack of concentration. It can
therefore work for hours without making errors. For example, if 10 million calculations are
to be done, a computer will do the tenth million calculations with exactly the same speed and
accuracy as the first one.
4. Versatility – a computer performs various tasks with ease. I.e. it can search for a letter, the next
moment prepare an electricity bill, and write a report next then do an arithmetic calculation all
with ease.
5. Power of remembering – a computer can store and recall any information due to its secondary
storage capability.
6. No intelligence Quotient (IQ) – a computer cannot make its own decisions and has to be
instructed on what to do.
7. No feelings – computers are devoid of emotions. They have no feelings or instincts and none
possesses the equivalent of a human heart and soul.
History of Computers
Earliest Forms of Computing Devices:
Computer technology in its original form developed as an attempt to have a device that could handle
complex mathematical calculations faster and with ease. The two devices that were used in the early
ages of civilization were:
a) The abacus
It has several vertical threads (or poles) each with a number of beads. The position of each thread
represents a value. For instance, if the values are the decimal system, then thread values (from the
right) are ones, tens, hundreds, and so on. If using the binary system, then the values are ones, twos,
fours, and so on.
b) The slide rule
The slide rule is like two rulers placed side by side. As one ruler slides over the other a required
mathematical calculation is given from the values marked on the rulers. The position of the sliding
device provides the answer to the calculation that is being done.
1. Pascal's Cogs and Wheels - Devised by Blaise Pascal in 1642 to assist his father in his business.
2. Leibniz's "Stepped Reckoner" - Was an improvement of Pascal's work, though Leibniz hadn't
seen the actual machine made by Pascal. It could do more functions including multiplication, division
and others.
3. The Analytical Engine - The invention was made by an Englishman, Charles Babbage, in the 1830s.
Although the man died before the completion of his work, the Analytical Engine provided the basic
components that make up the modern computer.
4. The ENIAC (Electronic Numerical Integrator and Computer) - This was the first truly modern
computer. It was put together in 1946 by a team of American scientists (J. Presper Eckert and John
W. Mauchly). The stored-program design described by John von Neumann at around the same time
was adopted in ENIAC's successors; under this design the computer works through programs stored
in its own memory.
1870s: Development of the typewriter allows speedier communication and less copying.
1920s: Spread of the telephone enables both Wide Area Networks (WAN) and Local Area
Networks (LAN) communication in real time. This marks the beginning of telecommunication.
1930s: Use of scientific management becomes available to analyse and rationalise office work.
1940s: Mathematical techniques developed in World War II (operations research) are applied to
the decision making process.
1950s: Introduction of copying facilitates cheap and faster document production, and the (limited)
introduction of Electronic Data Processing (EDP) speeds up large scale transaction processing.
1960s: Emergence of Management Information Systems (MIS) provides background within which
office automation can develop.
1970s: Setting up of telecommunication networks to allow for distant communication between
computer systems. There is widespread use of word processors in text editing and formatting,
advancement in personal computing- emergence of PCs. Use of spread sheets.
1980s: Development of office automation technologies that combine data, text, graphics and voice.
Development of DSS, EIS and widespread use of personal productivity software.
1990s: Advanced groupware; integrated packages, combining most of the office work- clerical,
operational as well as management.
2000s: Wide spread use of Internet and related technology in many spheres of organisations
including electronic commerce (e-commerce), e-learning, e-health
Landmark Inventions
500 B.C. - counting table with beads
1150 in China - ABACUS - beads on wires
1642 Adding machine - Pascal
1822 Difference Engine/Analytical Engine - designs by Babbage
1890 Hollerith punched card machine - for U.S. census
1944 Mark I (Harvard) - large-scale electromechanical computer
1946 ENIAC (Penn) - first general-purpose electronic computer
1951 UNIVAC - first commercial computer; 1954 first installation
1964 IBM - first all-purpose computer (business + scientific)
1973 HP-65, hand-held, programmable ‘calculator’
~1975 Altair, Intel - first Micro-computer; CPU on a “chip”
Generation of Computers
The division of computers into generations is based on the fundamental technology employed. Each new
generation is characterized by greater speed, larger memory capacity and smaller overall size than the
previous one.
The following table summarises the effect of technology on the main components of a computer
system (Baer 1984). The size values present an order of magnitude figure (followed by typical values
in bytes of storage)
Processor technology: First – vacuum tube; Second – transistor; Third – SSI, LSI; Fourth – LSI, VLSI, ULSI
Processor structure: First – uniprocessor; Second – multifunction units; Third – minicomputers; Fourth – microcomputers, workstations on LANs
Mainframe speed: First – 1; Second – 100; Third – 2000; Fourth – 1000
Microprocessor speed: First – none; Second – none; Third – 1; Fourth – 10
Control: First – hardwired; Second – hardwired, drum; Third – hardwired & microprogram; Fourth – microprogram
Primary memory: First – vacuum tube; Second – core; Third – semiconductor; Fourth – semiconductor, 64K to 256K
Primary memory speed: First – 1; Second – 10; Third – 200; Fourth – 2000
Primary memory size (bytes): First – 200; Second – 4000; Third – 64K-1M; Fourth – 1M-40M
Secondary memory & I/O paths: First – drum, tape; Second – channels & asynchronous I/O; Third – fixed-head & movable-arm disks; Fourth – extended I/O, optical disk
Secondary memory speed: First – 1; Second – 10; Third – 500; Fourth – 5000
Secondary memory size (bytes): First – 1K-5K; Second – 100K-64K; Third – 10M-500M; Fourth – 500M-5000M
Memory hierarchy: First – none; Second – experimental segmentation & paging systems; Third – segmentation & paging, caches; Fourth – segmentation & paging, caches
Classification of Computers
Computers can be classified in different ways as shown below:
i. Classification by processing
This is by how the computer represents and processes the data.
a) Digital computers process data that is represented in the form of discrete values (such as 0, 1, 2)
by operating on it in steps. They are used for both business data processing and scientific
purposes since digital computation results in greater accuracy.
b) Analog computers are used for scientific, engineering, and process-controlled purposes.
Outputs are represented in the form of graphs. Analogue computers process data represented
by physical variables and output physical magnitudes in the form of smooth graphs.
c) Hybrid computers are computers that have the combined features of digital and analog
computers. They offer an efficient and economical method of working out special problems in
science and various areas of engineering.
ii. Classification by purpose
This is a classification by the use to which the computer is put.
a) Special purpose computers are used for a certain specific function e.g. in medicine, engineering,
manufacturing.
b) General-purpose computers can be used for a wide variety of tasks e.g. accounting, word
processing
iii. Classification by generation
a) First generation. Computers of the early 1940s. Used circuitry of wires and vacuum tubes.
Produced a lot of heat, took a lot of space, were very slow and expensive. Examples are LEO 1
and UNIVAC 1.
b) Second generation. Computers of the early 1950s. Made use of transistors and thus were
smaller and faster. (200KHz). Examples include the IBM system 1000.
c) Third generation. Computers of the 1960s. Made use of Integrated Circuits. Speeds of up to
1MHz. Examples include the IBM system 360.
d) Fourth generation. Computers of the 1970s and 1980s. Used Large Scale Integration (LSI)
technology. Speeds of up to 10MHz. Examples include the IBM 4000 series.
e) Fifth generation. Computers of the 1990s. Use Very Large Scale Integration (VLSI)
technology and have speeds up to 400MHz and above.
iv. Classification by size
Super computers
They are very large in size and use multiple processors and superior technology. Super computers are
the biggest and most expensive computers of all the classes. A super computer can process trillions of
instructions in a second. This kind of computer is not used as a PC in the home or by a student in a
college; governments use it for heavy calculations and other demanding jobs, and large industries use
it for designing their products. In many Hollywood movies it is used for animation purposes, and it is
also helpful for forecasting weather worldwide. Super computers use multiple processors with parallel
processing: a task is broken down and shared among the processors for faster execution. They are
used for complex tasks requiring a lot of computational power.
Mainframe computers
A mainframe is another giant computer, after the super computer, that can also process millions of
instructions per second and is capable of accessing billions of data items. Mainframes are physically
very large with a very high capacity of main memory. They are commonly used in big hospitals and
airline reservation companies, and many other huge companies prefer mainframes because of their
capability of retrieving data on a huge scale. They can be linked to smaller computers and can handle
hundreds of users; they are also used in space exploration. The term mainframe was originally used
for the earliest computers because they were big in size, though today the term refers to large
computers generally. A large number of peripherals can be attached to them. They are expensive to
install.
Minicomputers
They are smaller than mainframes but bigger than microcomputers. They support concurrent users
and can be used as servers in companies. They are slower and less costly than mainframe computers
but more powerful, reliable and expensive than microcomputers.
Micro computers
They belong to the micro era of technology, based on large-scale integration, which packs several
physical components into a small, thumb-sized integrated circuit, hence the reduced size. The
microcomputer is the smallest of these classes of computer. They are usually called personal
computers since they are designed to be used by individuals. Microchip technology has enabled the
reduction in size of computers. A microcomputer can be a desktop, laptop, notebook or even a
palmtop.
i. Notebook computer
It is an extremely lightweight personal computer. Notebook computers typically weigh less than 6
pounds and are small enough to fit easily in a briefcase. Aside from size and portability, the main
difference from a desktop machine is the display: notebook computers use a variety of techniques,
known as flat-panel technologies, to produce a lightweight and non-bulky display screen.
ii. Laptop
A small portable computer light enough to carry comfortably, with a flat screen and keyboard that fold
together. Laptops are battery-operated, often have a thin, backlit or sidelit LCD display screen, and
some models can even mate with a docking station to perform as a full-sized desktop system back at
the office. Advances in battery technology allow laptop computers to run for many hours between
charges, and some models have a set of business applications built into ROM. Today's high-end
(Advanced) laptops provide all the capabilities of most desktop computers.
iii. Palmtop
It is a small computer that literally fits in your palm. Compared to full-size computers, palmtops are
severely limited, but they are practical for certain functions such as phone books and calendars.
Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs.
Because of their small size, most palmtop computers do not include disk drives. However, many contain
PCMCIA slots in which you can insert disk drives, modems, memory, and other devices. Nowadays
palmtops are being integrated into the mobile phones as multipurpose devices.
Data Representation in Computers
A bit is either a 1 or a 0. These correspond to two electronic/magnetic states of ON (1) and OFF (0) in
digital circuits which are the basic building blocks of computers. All data operated by a computer and
the instructions that manipulate that data must be represented in these units. Other units are a
combination of these basic units. Such units include:
1 byte (B) = 2^3 bits = 8 bits – usually used to represent one character e.g. ‘A’
1 kilobyte (KB) = 2^10 bytes = 1024 bytes (usually considered as 1000 bytes)
1 megabyte (MB) = 2^20 bytes = 1048576 bytes (usually considered as 1000000 bytes/1000 KB)
1 gigabyte (GB) = 2^30 bytes = 1073741824 bytes (usually considered as 1,000,000,000 bytes/1000 MB)
1 terabyte (TB) = 2^40 bytes = 1099511627776 bytes (usually considered as one trillion bytes/1000 GB)
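These relationships can be checked with a short calculation. The following is an illustrative Python sketch (not part of the study text) that prints each unit as a power of two and converts an example file size:

    # Verify the storage units as powers of two (illustrative sketch).
    units = {"KB": 10, "MB": 20, "GB": 30, "TB": 40}
    for name, power in units.items():
        print("1", name, "=", 2 ** power, "bytes")   # 1 KB = 1024 bytes, etc.
    # A file of 456,347 bytes is therefore about 445.65 KB:
    print(456347 / 2 ** 10)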
Bit patterns (the pattern of 1s or 0s found in the bytes) represent various kinds of data:
Computer data is represented using number systems and either one of the character coding schemes.
(i) ASCII
ASCII (American Standard Code for Information Interchange) is the most common format for text
files in computers and on the Internet. In an ASCII file, each alphabetic, numeric, or special character
is represented with a 7-bit binary number (a string of seven 0s or 1s). 128 possible characters are
defined.
Unix and DOS-based operating systems use ASCII for text files. Windows NT and 2000 use a newer
code, Unicode. IBM's S/390 systems use a proprietary 8-bit code called EBCDIC. Conversion
programs allow different operating systems to change a file from one code to another. ASCII was
developed by the American National Standards Institute (ANSI).
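As a small illustration (a Python sketch, assumed here purely for demonstration), the ASCII code of a character can be displayed as a decimal value and as the 7-bit pattern described above:

    # Show the ASCII code of a character in decimal and as 7 bits.
    for ch in ("A", "a", "7"):
        code = ord(ch)                        # numeric code of the character
        print(ch, code, format(code, "07b"))  # e.g. A 65 1000001
    # All 128 ASCII characters fit into 7 bits (codes 0 to 127).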
(ii) EBCDIC
EBCDIC is a binary code for alphabetic and numeric characters that IBM developed for its larger
operating systems. It is the code for text files that is used in IBM's OS/390 operating system for its
S/390 servers and that thousands of corporations use for their legacy applications and databases. In an
EBCDIC file, each alphabetic or numeric character is represented with an 8-bit binary number (a string
of eight 0's or 1's). 256 possible characters (letters of the alphabet, numerals, and special characters)
are defined.
(iii) Unicode
Unicode is an entirely new idea in setting up binary codes for text or script characters. Officially called
the Unicode Worldwide Character Standard, it is a system for "the interchange, processing, and display
of the written texts of the diverse languages of the modern world." It also supports many classical and
historical texts in a number of languages.
Number Systems
(i) Decimal system (base 10)
This is the normal human numbering system where all numbers are represented using base 10.The
decimal system consists of 10 digits namely 0 to 9. This system is not used by the computer for internal
data representation. The position of a digit represents its relation to the power of ten.
E.g. 45780 = {(0×10^0) + (8×10^1) + (7×10^2) + (5×10^3) + (4×10^4)}
(ii) Binary system (base 2)
This is the number system used internally by the computer. It has only two digits, 0 and 1, and the
position of a digit represents its relation to the power of two.
E.g. 1111 (binary) = {(1×2^0) + (1×2^1) + (1×2^2) + (1×2^3)} = 1 + 2 + 4 + 8 = 15
The information supplied by a computer as a result of processing must be decoded in a form
understandable to the user.
(iii) Octal system (base 8)
Each octal digit corresponds to a group of three binary digits (bits), so a binary number is converted to
octal by grouping its bits in threes from the right.
For example: the binary number 10 001 110 011 can be handled as the octal number 2163.
(iv) Hexadecimal system (base 16)
Each hexadecimal digit (0-9 and A-F) corresponds to a group of four bits. For example, the binary
number 1 0010 1010 0000 can be handled as the hexadecimal number 12A0.
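The same conversions can be demonstrated with a short Python sketch (illustrative only; it uses the binary number from the octal example above):

    # Convert the binary number 10001110011 to octal, hexadecimal and decimal.
    n = 0b10001110011
    print(oct(n), hex(n), n)                  # 0o2163 0x473 1139
    # Parsing digits written in a given base back into decimal:
    print(int("2163", 8), int("12A0", 16))    # 1139 and 4768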
Storage Capacity
All of the data and programs that are used by a computer are represented as bits within the main
memory. The storage of these bits is made more manageable by grouping them together in multiples
of eight. In fact, the term byte is widely used when referring to memory size and file size rather than
bit.
When file sizes become particularly large it becomes cumbersome to describe them in terms of bytes
because the file may be in the order of, say, 2578 bytes or 456,347 bytes. As the computer is a two-
state machine it is convenient to express the capacity of memory and backing store in powers of 2.
Consequently, the following table represents the hierarchy of memory capacity:
1 bit – can be 1 or 0
1 byte – 8 bits
1 kilobyte (KB) – 2^10 bytes
1 megabyte (MB) – 2^20 bytes
1 gigabyte (GB) – 2^30 bytes
1 terabyte (TB) – 2^40 bytes
The units above are used to measure the capacity of both the main memory and the backing store.
However, the capacity of backing store devices is much larger than that of main memory.
At the time of writing this unit memory is measured in terms of megabytes and gigabytes (currently
up to 3 Gb of RAM), whereas a typical hard disk has a capacity in the order of 80 Gb. No doubt these
figures will seem low in future years.
Computer Structure
The diagram below shows the components used in a typical computer system. It is a simple
representation of how a computer works and is often referred to as the ‘four box diagram’
[Four box diagram: INPUT and OUTPUT devices are connected to the PROCESSOR, which exchanges data with MEMORY and the BACKING STORE.]
Storage devices – Permanent storage of data and programs before and after it is processed by
the computer system.
Communication devices – Enable communication with other computers.
When your computer is switched off all programs and data are held on backing store media such as
hard drives, floppy disks, zip disks and CD-R/W. Once the computer is switched on, the operating
system is loaded from the backing store into main memory (RAM). The computer is now ready to run
programs.
When the user opens a word processor file both the application program and the file itself are loaded
into the main memory. The user may then edit the document by typing on the keyboard. It is the
processor that controls the timing of operations and runs the word- processing program, allowing the
user to add new text.
Once the editing is complete, the user saves the file to the backing store and this overwrites the
original file (unless a new file name is used). If there is a power failure or the user does not save the
document to the backing store then the file will be lost forever.
Throughout this process the document is outputted to the monitor so that the user can see what is
happening. The user may wish to obtain a hardcopy of the document by using the mouse (input) to
instruct the processor (process) to make a printout (output).
Hardware
Refers to the physical, tangible computer equipment and devices, which provide support for major
functions such as input, processing (internal storage, computation and control), output, secondary
storage (for data and programs), and communication.
Hardware categories
A computer system is a set of integrated devices that input, output, process, and store data and
information. Computer systems are currently built around at least one digital processing device. There
are five main hardware components in a computer system: the central processing unit (CPU); primary
storage (main memory); secondary storage; and input and output devices.
Input Devices
Most computers cannot accept data in forms customary to human communication such as speech or
hand-written documents. It is necessary, therefore, to present data to the computer in a way that provides
easy conversion into its own electronic pulse-based forms. This is commonly achieved by typing data
using the keyboard or using an electronic mouse or any other input device.
Keyboard
It can be connected to a computer system through a terminal. A terminal is a form of input and output
device. A terminal can be connected to a mainframe or other types of computers called a host computer
or server. There are four types of terminals namely dumb, intelligent, network and Internet.
Dumb Terminal
- Used to input and receive data only.
- It cannot process data independently.
- A terminal used by an airline reservation clerk to access a mainframe computer
for flight information is an example of a dumb terminal
Intelligent Terminal
- Includes a processing unit, memory, and secondary storage.
- It uses communications software and a telephone hookup or other
communications link.
- A microcomputer connected to a larger computer by a modem or network link is
an example of an intelligent terminal.
Network Terminal
- Also known as a thin client or network computer.
- It is a low cost alternative to an intelligent terminal.
- Most network terminals do not have a hard drive.
- This type of terminal relies on a host computer or server for application or system
software.
Internet Terminal
- Is also known as a web terminal.
- It provides access to the Internet and displays web pages on a standard television
set.
- It is used almost exclusively in the home.
Direct data entry devices – Direct entry creates machine-readable data that can go directly to the
CPU. It reduces human error that may occur during keyboard entry. Direct entry devices include
pointing, scanning and voice-input devices.
Scanning Devices
Scanning devices, or scanners, can be used to input images and character data directly into a computer.
The scanner digitises the data into machine-readable form. The scanning devices used in direct-entry
include the following:
Image Scanner – converts images on a page to electronic signals.
Fax Machine – converts light and dark areas of an image into a format that can be sent
over telephone lines.
Bar-Code Readers – photoelectric scanner that reads vertical striped marks printed
on items.
Character and Mark Recognition Devices – scanning devices used to read marks
on documents.
Voice–input devices
Voice-Input Devices can also be used for direct input into a computer. Speech recognition can be used
for data input when it is necessary to keep your hands free. For example, a doctor may use voice
recognition software to dictate medical notes while examining a patient. Voice recognition can also
be used for security purposes to allow only authorized people into certain areas or to use certain
devices.
Note:
Point-of-sale (POS) terminals (electronic cash registers) use both keyboard and direct entry.
Keyboard Entry can be used to type in information.
Direct Entry can be used to read special characters on price tags.
Point-of-sale terminals can use wand readers or platform scanners as direct entry devices.
Wand readers or scanners reflect light on the characters.
Reflection is changed by photoelectric cells to machine-readable code.
Encoded information on the product’s barcode, e.g. the price, appears on the terminal’s digital display.
Output Devices
The output devices covered in this sub-section include:
a. CRT monitors
CRT monitors comprise a sealed glass tube that has no air inside it. An electron gun at one end fires a
stream of tiny electrons at the screen located at the other end. The image is made by illuminating
particles on the screen.
Accuracy
The main factors are the refresh rate, the number of pixels and also the physical size of the monitor.
What is really important is what the refresh rate will be at the maximum desired resolution. To keep it
simple, every pixel or dot on the screen is refreshed or redrawn many times every second. If this
flicker can be detected it can cause eyestrain and image quality is simply not the same as if it were
flicker-free. The industry standard for flicker-free images is 75 Hz as very few people can detect
flicker at or above 75 Hz. Most flicker-free monitors offer a refresh rate of 85 Hz. Those that use
higher rates do not offer any additional advantage and could even be considered counter-productive.
Resolution
A monitor image is made up of pixels, or picture elements. Pixels are either illuminated or not; the
pattern they show is what makes up the image.
A 17" monitor may have a maximum resolution of 1280 ×1024. Not only does this ratio (5:4) cause
image distortion but text is simply too small to read at this high a resolution on this size of monitor. A
17" monitor should use either an 800 × 600 or 1024 ×768 resolution, which are thedesired (4:3) ratio.
A 15" monitor should use 640 × 480 (4:3) or 800 ×600 resolution
b. LCD panels
Applying a voltage across an LCD material changes the alignment and light-polarising properties of
its molecules so that they can be used in conjunction with polarising filters to create an electronic
shutter that will either let light pass or stop it passing. Thus, the LCD display works by allowing
different amounts of white backlight through an active filter.
The red, green and blue of each pixel are achieved by filtering the white light that is allowed through.
LCD stands for Liquid Crystal Display. LCD is also known as TFT (Thin Film Transistor).
Accuracy
The main factors are the refresh rate, the number of pixels and the physical size of the LCD monitor.
The refresh rate is set at an industry standard of 75 Hz.
Resolution
Like the CRT monitor this is based on the pixel array. Different screen modes can be selected but the
maximum resolution is often 1280×1024.
The number of bits allocated to represent each pixel is called the colour depth. The colour depth can
be as high as 24 bits, which allows more than 16 million different colours to be represented. It is
difficult to imagine any more than 16 million colours so 24-bit colour depth is often referred to as true
colour.
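The number of colours follows directly from the colour depth: with n bits per pixel, 2^n colours can be represented. A one-line check (illustrative Python sketch, not part of the study text):

    # Colours available at common colour depths (2 to the power of the bit depth).
    for depth in (8, 16, 24):
        print(depth, "bits per pixel ->", 2 ** depth, "colours")   # 24 bits -> 16,777,216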
Typical uses
LCD monitors are lightweight, compact and can require little power to run compared to CRT
monitors. They are ideal for use in laptops, tablets and palmtops. Full size LCD monitors for desktop
systems are becoming very popular.
c. Inkjet printers
These work by spraying a fine jet of ink, which is both heated and under pressure, onto paper. Most
have a black cartridge and either a single colour cartridge or separate red, yellow and blue cartridges
Accuracy
The quality of the printed image is measured by the number and spacing of the dots of ink on the
page. The image resolution is generally measured in dpi. The higher the dpi, the better the quality or
sharpness of the printed image. The vertical and horizontal resolutions may, therefore, be different
depending on the number of nozzles on the print head and the distance moved. Typical resolution is
2880×1440.
Speed
The major factor here tends to be the mode of communication with the computer. Often this figure is
given in terms of pages per minute for black and white or colour, e.g. black and white 10 ppm and
colour 6 ppm.
Typical uses
Home, office and business. These printers are ideal for the occasional presentation and for livening up
mostly-text documents with some colour.
They are also good for creative home projects such as invitations, birth announcements and personal
greeting cards.
d. Laser printers
These operate by using a laser beam to trace the image of the page layout onto a photosensitive drum.
This image then attracts toner by means of an electrostatic charge. The toner is fused to the paper by
heat and pressure.
Accuracy
Determined by the dpi. A typical laser printer can print from 600 to 2400 dpi, which produces very
high quality images.
Speed
A laser printer needs to have all the information about a page in its memory before it can start
printing. If the page has a lot of detail then it will take longer to print. One way to speed up a printer is
to add more internal RAM. Once the first page has printed the rest normally follow directly. Like
inkjet printers, speeds are given in terms of pages per minute, e.g. 14 ppm for black and white, 8 Mb
RAM.
e. Loudspeakers
There are two types of speaker systems used on computers: those that are inbuilt and those that are
external. Most computers will have a speaker (or two) incorporated in the case or perhaps the monitor.
The purpose of inbuilt speaker systems is limited to producing a sound from the computer and nothing
more; the quality is poor.
Multimedia computers are intended to produce good sound quality that is comparable to hi-fi systems.
They include ‘active speakers’, which have their own power supply and usually have an amplifier. A
good quality system will include a sub-woofer and five speakers to produce surround sound.
Accuracy/Quality
This can be measured as the comparison between the original sound and that produced by the
computer. Speakers are only one component of sound quality; the formats of the sound tracks and
type of soundcard also have a significant effect.
If we consider the sound produced from a pre-recorded CD or DVD movie then active systems can be
as good as a professional hi-fi system.
Processing Devices
(i) The CPU (Central Processing Unit)
The CPU (Central Processing Unit) controls the processing of instructions. The CPU produces
electronic pulses at a predetermined and constant rate. This is called the clock speed. Clock speed is
generally measured in megahertz, that is, millions of cycles per second
It consists of:
a. Control Unit (CU) – The electronic circuitry of the control unit accesses program
instructions, decodes them and coordinates instruction execution in the CPU.
The main functions of the control unit are:
1. To send out signals that fetch instructions from the main memory
2. To carry out the instructions that are fetched from the main memory
In general the control unit is responsible for the running of programs that are loaded into the main
memory.
b. Arithmetic and Logic Unit (ALU) – Performs mathematical calculations and logical
comparisons.
The main functions of the ALU are to carry out arithmetic operations (such as addition and
subtraction) and to perform the logical comparisons used in decisions such as IF...THEN.
c. Registers – These are high-speed storage circuitry that holds the instruction and the
data while the processor is executing the instruction.
d. Bus – This is a highway connecting internal components to each other.
i. Output
Results are taken from main storage and fed to an output device. This may be a printer, in which case
the information is automatically converted to a printed form called hard copy or to a monitor screen for
a soft copy of data or information.
Backing Storage Devices
When a computer is switched off the data has to be stored on a secondary storage device so that it can be
loaded back in at a later date. Current backing store devices fall into two categories: magnetic and
optical. We will examine the following devices in turn.
Magnetic storage devices/media:
a. floppy drive
b. hard drive
c. zip drive
d. magnetic tape
Optical storage devices/media:
e. CD-ROM
f. CD-R
g. CD-RW
h. DVD-ROM
i. rewritable DVD
– DVD-R
– DVD-RW
Random access is where the system can go straight to the data it requires. A disk is a random-access
medium. To read data stored on the disk, the system simply has to have the address on the disk where
the data is stored, and the read head can go directly to that location and begin the transfer. This makes
a disk drive a faster method of data storage and data access than a tape drive, which uses serial access.
An alternative to direct access is sequential access (serial access), in which a data location is found by
starting at one place and seeking through every successive location until the data is found.
Historically, tape storage is associated with sequential access, and disk storage is associated with
direct access.
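The difference between the two access methods can be pictured with a small sketch (illustrative Python; the record names are invented for the example):

    # Direct (random) access: jump straight to the required position.
    records = ["rec%d" % i for i in range(100000)]
    print(records[65432])                 # one step: index straight to the record

    # Sequential (serial) access: examine every record until the target is found.
    def sequential_find(seq, target):
        for position, record in enumerate(seq):
            if record == target:
                return position
        return -1

    print(sequential_find(records, "rec65432"))   # must pass 65,432 records first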
Data is stored by magnetising the surface of flat, circular plates that constantly rotate at high speed
(typically 60 to 120 revolutions per second). A read/write head floats on a cushion of air a fraction of
a millimetre above the surface of the disk. The drive is inside a sealed unit because even a speck of
dust could cause the heads to crash.
Optical storage is any storage method in which data is written and read with a laser for archival or
backup purposes. Typically, data is written to optical media, such as CDs and DVDs. For several
years, proponents have spoken of optical storage as a near-future replacement for both hard drives in
personal computers and tape backup in mass storage.
Optical media is more durable than tape and less vulnerable to environmental conditions. On the other
hand, it tends to be slower than typical hard drive speeds, and to offer lower storage capacities.
According to OSTA (Optical Storage Technology Association), current optical speeds are
approaching those of hard drives. A number of new optical formats, such as Blu-ray and UDO (ultra
density optical), use a blue laser to dramatically increase capacities.
a. Floppy drive/disk
A floppy disk is a small disk that the user can remove from the floppy disk drive. The disk is made
from circular plastic plates coated in ferric oxide. When the disk is formatted or initialised, the surface
of the disk is divided into tracks and sectors on which data is stored as magnetic patterns.
Type of Access
Direct/random
Floppy disks are relatively slow to access because they rotate far more slowly than hard disks, at only
six revolutions per second, and only start spinning when requested. The access speed is about 36 Kb
per second.
Capacity
High-density disks hold 1.44 Mb of data (enough to store about 350 pages of A4 text). A floppy disk
needs to be formatted before it can be used but most disks are now sold already formatted.
Functions
Floppy disks used to be a convenient means of storing small files and of transferring files from one
computer to another. Many single files are now larger than 1.44 Mb, mainly due to graphics and video
(jpeg and mpeg), making the floppy disk an unsuitable medium for anything but small files.
New USB flash drives (32 Mb to 2 Gb), which can be inserted into a USB port, are making the floppy
disk drive redundant to the extent that some computers are now sold without a floppy disk drive.
b. Hard Disk
A hard disk is a rigid disk with a magnetised surface. The surface is divided into tracks and sectors on
which data is stored magnetically. The data is read by a read/write head fixed to an arm that moves
across the surface of the disk. Hard disks are usually sealed in a protective container to prevent dust
corrupting the data.
Type of access
Random/direct
Hard disks rotate at much higher speeds than floppy disks, reaching speeds of up to 7200 rotations per
minute. This means that the fastest hard disk can transfer data from disk to computer at the rate of 22
Mb per second. Some can even manage higher transfer rates in short bursts of up to 33 Mb per
second.
Capacity
Measured in gigabytes, the standard amount for a desktop computer is currently 80 Gb but it is
possible to purchase hard disks with a capacity of 250 Gb.
Functions
The hard drive is used in all computer systems: stand-alone, network and mainframe. It has become
an essential component of the modern computer, particularly with the increase in video editing, which
demands a great deal of storage space. A typical hard disk will store the operating system, the
application programs and the user's data files.
c. Zip drive
Type of access
Direct/random
This depends on the connection type. The USB 1.0 transfer rate is 0.9 Mb/s, the USB 2.0 transfer
rate is 7.3 Mb/s and the Firewire rate is 7.3 Mb/s.
Capacity
Older zip drives take 100 Mb disks, but 250 Mb has become the standard and the latest devices hold a
massive 750 Mb. The newer drives can also read all previous zip media.
Functions
Good for storing large files on a portable medium, particularly photo images, which tend to be large,
desktop publishing files and video. Often used to back up data.
As with floppy disks, USB flash drives are likely to make zip drives (especially the smaller capacity
ones) obsolete.
d. Magnetic tape
For almost as long as computers have existed, magnetic tape has been the back-up medium of choice.
Tape is inexpensive, well understood and easy to remove and replace. But as hard drives grew larger
and databases became massive data warehouses, tape had to change to store more data and do it faster.
From large reel-to-reel mainframe tape, focus shifted to the speed and convenience of digital audio
tapes (DATs). Tape is a sequential medium so data has to be read from it in order.
Modern systems use cassettes. Some of these are even smaller than an audio cassette but hold more
data than the huge reels.
Type of access
Serial
Access speeds have been traditionally slow due to the serial access to the data; however, a data
transfer rate of between 0.92 Mb/s and 30 Mb/s is possible.
Capacity
Magnetic tape comes in a wide range of sizes, from 10 Gb to 500 Gb. Compressed data tapes can hold
up to a massive 1300 Gb of data on a single tape.
Functions
Magnetic tape can be used for permanent storage. Tapes are often used to make a copy of hard disks
for back-up (security) reasons. This is automatically done overnight and is suitable for network or
mainframe backups.
e. CD-ROM drive
The term CD-ROM is short for compact disk read-only memory. CD-ROM disks can only be used to
read information stored on them – the user cannot save data to a CD-ROM disk. CD-ROM writers use
a high-powered laser to store data by making tiny pits in the surface of the CD-ROM disk.
The pattern of these pits is read by a sensor in the CD-ROM drive that detects light reflected off the
surface of the disk. The patterns are then turned into binary numbers.
Type of access
Direct/random
The speed varies from drive to drive. The original CD drives read data at a rate of 150 Kb per second.
Rather than quoting speed in Kb/s the norm has been to state the speed as a multiple of 150 Kb/s.
The latest 56-speed drives read data at a rate of 56 × 150 Kb/s, i.e. 8.4 Mb/s.
Manufacturers quote the highest speeds achieved by their drives during tests in ideal conditions but
these speeds are often not achieved during typical use.
Capacity
The capacity of CD-ROM disks ranges from 650 Mb to 700 Mb of data. With compression the
capacity can be up to 1.3 Gb.
f. CD-RW drives
CD-Rewritable (CD-RW) drives let you burn, or write, CD-R and CD-RW media with your favourite
music or photos or just to back up data. The most important feature to look for is the drive’s record
speed, which tells you how long you’ll spend waiting for it to finish burning a CD.
Type of access
Direct/random
Three numbers are usually used to rate drive speed: record speed, rewrite speed and read speed
(usually in that order). The highest number listed is often for reading; the lowest is rewriting.
Recording frequently is the same as or less than reading. Note that a drive with a 48× record speed
theoretically could burn a CD in half the time a 24× drive requires, but in reality the speed difference
is less pronounced.
g. CD-R (media)
Compact disk recordable (CD-R) is also known as write once read many. This is a bit of a misnomer
as it is in fact possible to write to a CD-R in different sessions until the disk is finalised. Once
finalised the disk cannot be written to again. There are several different formats of CD-R and some
formats will not work in standard CD-ROM drives. The write process is irreversible.
Type of access
Direct/random
These disks are burned for the CD-ROM drive so access speeds are measured in multiples of 150
Kb/s. The latest speed is 56× read.
Capacity
Normally 700 Mb but up to 1.3 Gb with compression. The capacity can also be given as the time to
record music onto the CD until full, e.g. 80 minutes.
Speed
This is really the read speed, which is the same as for a CD-R. However, we also have to consider the
initial write speed to a blank CD (e.g. 52×) and the re-write speed to a used CD (e.g. 32×). Generally
the re-write speed is the slower of the quoted speeds.
h. DVD-ROM drive
Type of access
Direct/random
The data transfer rate from a DVD-ROM disk at 1× speed is roughly equivalent to a 9× CD-ROM drive
(the 1× CD-ROM data transfer rate is 150 Kb/s, or 0.146 Mb/s). The DVD physical spin rate is about
three times faster than that of a CD (that is, 1× DVD spin = 3× CD spin). A drive listed as ‘16×/40×’
reads a DVD at 16 times normal speed, or a CD at 40 times normal speed.
Capacity
Normally 4.7 Gb
Typical uses
• Encyclopedias
• Games
• Movies
DVD-RW combination drive
There are currently two main versions of rewritable DVDs: DVD-RW and DVD+RW. There is little
difference between the two other than speed of access to data. Modern DVD-RW drives allow access
to both types of disks. DVD-RW drives write DVD-R, DVD-RW, CD-R, and CD-RW disks.
The time it takes to burn a DVD depends on the speed of the recorder and the amount of data. The
playing time of the video may have little to do with the recording time, since half an hour at high data
rates can take more space than an hour at low data rates. A 2× recorder, running at 22 Mb/s, can write
a full 4.7 Gb DVD in about 30 minutes. A 4× recorder can do it in about 15 minutes.
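Those figures can be checked roughly with the quoted transfer rate. The following illustrative Python sketch assumes the 22 figure is in megabits per second and the 4.7 in gigabytes:

    # Rough check of the DVD burn time quoted above.
    capacity_mb = 4.7 * 1000          # 4.7 Gb (gigabytes) expressed in megabytes
    rate_mb_per_s = 22 / 8            # 22 megabits per second = 2.75 megabytes per second
    minutes = capacity_mb / rate_mb_per_s / 60
    print(round(minutes))             # about 28 minutes, close to the 30 quoted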
DVD-R (media)
There are six different formats of DVD and this one allows the user to record in a single session or in
multiple sessions until the disk is complete. DVD-R is compatible with most DVD drives and players.
This depends on the drive being used but a typical speed is 40× (CD), i.e. 6 Mb/s.
Capacity
Normally 4.7 Gb
A major problem with DVD is the format of data. There are several different data formats that are not
compatible with each other. In other words, a DVD+R/RW drive cannot write a DVD-R or DVD-RW
disk, and vice versa (unless it is a combo drive that writes both formats). Very roughly, DVD-R and
DVD+R disks work in about 85% of existing drives and players, while DVD-RW and DVD+RW
disks work in around 70%. The situation is steadily improving.
Interface
An interface is a hardware device that is needed to allow the processor to communicate with an
external or internal device such as a printer, modem or hard drive. Sometimes the interface is a board
in the computer and sometimes it is a connection to a port.
The reason that an interface is required is that there are differences in characteristics between the
peripheral device and the processor. Those characteristics include:
• Data conversion
• Speed of operation
• Temporary storage of data.
Data Conversion
The commonest example of data conversion is when the peripheral accepts an analogue signal that
must be converted into digital for the processor to comprehend it.
Speed of operation
The speed of operation of peripheral devices tends to be in terms of pages per minutes, frames per
second or megabytes per second; however, the processor works at a rate in line with its internal clock,
which is much faster. The speed of the internal operations is measured in gigahertz and a processor
will typically work at 2.8 GHz, i.e. 2,800,000,000 cycles per second. This difference in the speed of
operation between the processor and devices requires an interface between the two devices as the
processor can deliver data much faster than the peripheral device can handle.
Data storage
In older computer systems the processor would stand idle while the printer was finishing a print job.
One way around this problem is to have the data held temporarily in transit between the processor and
the printer. Interfaces are used to hold this data, thus releasing the processor; the data is held in a
‘buffer’. Keyboard characters entered by the user are stored in the keyboard buffer while they are
being processed.
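The buffering idea can be sketched as a simple queue. The following Python fragment is only an analogy (the names are invented): the fast side deposits data and the slow side removes it at its own pace.

    from collections import deque

    buffer = deque()                          # the temporary store held by the interface

    # Fast side (the processor) deposits the whole document at once.
    for page in ("page 1", "page 2", "page 3"):
        buffer.append(page)

    # Slow side (the printer) removes items one at a time, when it is ready.
    while buffer:
        print("printing:", buffer.popleft())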
One of the important considerations when purchasing a portable CD-RW drive is the type of interface
it uses. There are four interface options for portable drives: parallel port, PC card, USB 2.0 and IEEE
1394 Firewire.
Most users favour USB 2.0 and Firewire because of their high connection speeds and flexibility.
Types of interfaces include IDE, SCSI, serial, parallel, PCI, USB and Firewire
Storage capacity abbreviations
Kb – kilobyte; Mb – megabyte; Gb – gigabyte; Tb – terabyte (see the units of storage capacity described earlier).
Communication devices
There are two types of communication devices
a) Modem
b) Fax/modem
a. Modem
Modems allow computers (digital devices) to communicate via the phone system (based on analog
technology). It turns the computer's digital data into analog, sends it over the phone line, and then
another modem at the other end of the line turns the analog signal back into digital data.
b. Fax/modem
It is a basic digital/analog modem enhanced with fax transmission hardware that enables faxing of
information from a computer to another fax/modem or a fax machine (NOTE: a separate scanner must
be connected to the computer in order to use the fax/modem to transfer external documents)
Computer Memory
Memory capability is one of the features that distinguish a computer from other electronic devices.
Like the CPU, memory is made of silicon chips containing circuits holding data represented by on or
off electrical states, or bits. Eight bits together form a byte. Memory is usually measured in megabytes
or gigabytes.
A kilobyte is roughly 1,000 bytes. Specialized memories, such as cache memories, are typically
measured in kilobytes. Both primary memory and secondary storage capacities today typically run to
megabytes (millions of bytes) of space, and often far more.
Types of Memory
1. RAM (Random Access Memory) /RWM (Read Write Memory) – Also referred to as main
memory, primary storage or internal memory. Its content can be read and can be changed and is the
working area for the user. It is used to hold programs and data during processing. RAM chips are
volatile, that is, they lose their contents if power is disrupted. Typical sizes of RAM include 32MB,
64MB, 128MB, 256MB and 512MB.
a. EDO (Extended Data Out) –It is a type of random access memory (RAM) chip that
improves the time to read from memory on faster microprocessors such as the Intel Pentium.
EDO RAM was initially optimized for the 66 MHz Pentium. For faster computers, different
types of synchronous dynamic RAM (SDRAM) are recommended
b. DRAM (Dynamic RAM) – It is a type of random-access memory that stores each bit
of data in a separate capacitor within an integrated circuit. The capacitor can be either charged
or discharged; these two states are taken to represent the two values of a bit, conventionally
called 0 and 1. Since capacitors leak charge, the information eventually fades unless the
capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic
memory as opposed to SRAM and other static memory.
2. ROM (Read Only Memory) – Its contents can only be read and cannot be changed. ROM
chips are non-volatile, so the contents aren’t lost if the power is disrupted. ROM provides permanent
storage for unchanging data & instructions, such as data from the computer maker. It is used to hold
instructions for starting the computer called the bootstrap program.
a) PROM (Programmable Read Only Memory) – It is written onto only once using special
devices. The settings must be programmed into the chip; after they are programmed, PROM behaves
like ROM – the circuit states can’t be changed. PROM is used when instructions will be permanent,
but they aren’t produced in large enough quantities to make custom chip production (as in ROM) cost
effective. PROM chips are, for example, used to store video game instructions and are used mostly in
electronic devices such as alarm systems.
b) EPROM (Erasable Programmable Read Only Memory) – Instructions are also programmed into
erasable programmable read-only memory, but the contents of the chip can be erased and the chip can
be reprogrammed, so it can be written onto more than once. EPROM chips are used where data and
instructions don’t change often, but non-volatility and quickness are needed. The controller for a robot
arm on an assembly line is an example of EPROM use.
3. Cache Memory - Cache memory is high-speed memory that a processor can access more quickly
than RAM. Frequently used instructions are stored in cache since they can be retrieved more quickly,
improving the overall performance of the computer. Level 1 (L1) cache is located on the processor;
Level 2 (L2) cache is located between the processor and RAM.
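The principle of keeping frequently used items where they can be fetched fastest can be illustrated in software with a small memoisation sketch. This is only an analogy to the hardware cache described above, and the function names are invented for the example:

    # Software analogy of a cache: remember results that were slow to obtain.
    cache = {}

    def slow_fetch(key):
        # Stands in for a slow operation such as reading from main memory or disk.
        return key * 2

    def lookup(key):
        if key in cache:              # "cache hit": answer comes from the fast store
            return cache[key]
        value = slow_fetch(key)       # "cache miss": go to the slower source
        cache[key] = value            # keep the value for next time
        return value

    print(lookup(21), lookup(21))     # the second call is served from the cache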
Software
Software is a program commercially prepared and tested by a programmer or a group of programmers
and systems analysts to perform a specified task. Software is simply a set of instructions that cause a
computer to perform one or more tasks. The set of instructions is often called a program or, if the set
is particularly large and complex, a system. Computers cannot do any useful work without instructions
from software; thus a combination of software and hardware (the computer) is necessary to do any
computerized work. A program must tell the computer each of a set of tasks to perform, in a framework
of logic, such that the computer knows exactly what to do and when to do it. Data are raw facts and
ideas that have not been processed while Information is data that has been processed so as to be useful
to the user.
Classification of Software
[Diagram: software is classified into system software and application software.]
1) System software
It consists of programs that control operations of the computer and enable user to make efficient use
of computers. They coordinate computer activities and optimize use of computers. They are used to
control the computer and to develop and run application programs. Examples of jobs done by the system
software are management of computer resources, defragmentation etc. System software can be
divided into the following categories.
Service programs
These are programs designed for general support of the processes of a computer; "a computer system
provides utility programs to perform the tasks needed by most users". The service programs can
further be divided into:
Utilities – they perform a variety of tasks that maintain or enhance the computer’s operating
system. Utility programs are generally fairly small and each type has a specific job to do. Below
are some descriptions of utilities.
Anti-virus applications protect your computer from the damage that can be caused by
viruses and similar programs
Compression utilities make files smaller for storage (or sending over the Internet)
and then return them to normal size.
Data recovery utilities attempt to restore data and files that have been damaged or
accidentally deleted.
Disk defragmenters reorganize the data stored on disks so that it is more efficiently
arranged.
Firewalls prevent outsiders from accessing your computer over a network such as the
Internet.
Development programs are used in the creation of new software. They comprise sets of
software tools that allow programs to be written and tested. Knowledge of an appropriate
programming language is assumed. Tools used here are:
Text editor – allows one to enter and modify program statements.
Assembler – allows one to code in a machine-oriented, processor-specific language (assembly language).
Compilers – make it possible for the programmer to convert source code to object code,
which can be stored and saved on different computers.
Interpreters – used to convert and execute source program statements one by one,
without the program being compiled first.
Libraries – commonly used parts or portions of a program which can be called or
included in the programmer’s code without having to recode that portion.
Diagnostic utilities – used to detect bugs in the logic of a program during program
development.
Communication programs- refer to programs that make it possible to transmit data.
2) Application software
These are programs that enable users to do their jobs, e.g. typing, record keeping, production of financial
statements, drawing, and statistics. General-purpose application packages support tasks such as
storing, retrieving and manipulating data and various calculations on spreadsheets. General purpose
programs are discussed below.
Proprietary Software
Proprietary software is computer software licensed under exclusive legal right of the copyright
holder with the intent that the licensee is given the right to use the software only under certain
conditions, and restricted from other uses, such as modification, sharing, studying, redistribution,
or reverse engineering.
Some of the advantages of proprietary software include:
a) You can get exactly what you need in terms of reports, features etc.
b) Being involved in development offers a further level of control over results.
c) There is more flexibility in making modifications that may be required to counteract a new
initiative by a competitor or to meet new supplier or customer requirements. A merger with
another firm or an acquisition will also necessitate software changes to meet new business
needs.
Programming Languages
Programming languages are collections of commands, statements and words that are combined using
a particular syntax, or rules, to write both systems and application software. This results in meaningful
instructions to the CPU.
The only advantage of machine language is that its programs run very fast, because no translation program is required for the CPU.
The resulting programs still directly instructed the computer hardware. For example, an assembly
language instruction might move a piece of data stored at a particular location in RAM into a particular
location on the CPU. Therefore, like their first generation counterparts, second generation programs
were not easily portable.
Assembly languages were designed to run in a small amount of RAM. Furthermore, they are low-level languages; that is, their instructions directly manipulate the hardware. Therefore, programs written in assembly language execute efficiently and quickly, and as a result much systems software is still written using assembly languages.
The language has a one to one mapping with machine instructions but has macros added to it. A macro
is a group of multiple machine instructions, which are considered as one instruction in assembly
language. A macro performs a specific task, for example adding, subtracting etc. A one to one mapping
means that for every assembly instruction there is a corresponding single or multiple instructions in
machine language.
An assembler is used to translate the assembly language statements into machine language.
Advantages of assembly language include:
The symbolic programming of Assembly Language is easier to understand and saves a lot of
time and effort of the programmer.
It is easier to correct errors and modify program instructions.
Assembly Language has the same efficiency of execution as machine-level language, because there is a one-to-one translation between an assembly language program and its corresponding machine language program.
Third generation languages are sometimes referred to as "procedural" languages since program instructions must still give the computer detailed instructions on how to reach the desired result.
High-level languages incorporate greater use of symbolic code. Their statements are more English-like, for example print, get, while. They are easier to learn, but the resulting programs are slower in execution.
Examples include Basic, Cobol, C and Fortran. They have first to be compiled (translated into
corresponding machine language statements) through the use of compilers.
Many of the first fourth generation languages were connected with particular database management
systems. These languages were called query languages since they allow people to retrieve information
from databases. Structured query language, SQL, is a current fourth generation language used to access
many databases. There are also some statistical fourth generation languages, such as SAS or SPSS.
Some fourth generation languages, such as Visual C++, Visual Basic, or PowerBuilder are targeted to
more knowledgeable users, since they are more complex to use. Visual programming languages, such
as visual basic, use windows, icons, and pull down menus to make programming easier and more
intuitive.
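As an illustration of a fourth generation query language, the sketch below runs a short SQL query through Python's built-in sqlite3 module. The table, columns and figures are invented for the example; the point is that the query states what result is wanted, not how to retrieve it.

    # Illustrative sketch: SQL states WHAT data is wanted; the database engine
    # works out HOW to retrieve it. The table and data are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")            # temporary in-memory database
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("East", 1200.0), ("West", 950.0), ("East", 400.0)])

    # The query describes the desired result, not the retrieval procedure
    for row in conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(row)
    conn.close()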
Object oriented programs consist of objects, such as a time card, that include descriptions of the data
relevant to the object, as well as the operations that can be done on that data. For example, included in the time card object would be descriptions of such data as employee name, hourly rate, start time, end time, and so on. The time card object would also contain descriptions of such operations as
calculate total hours worked or calculate total pay.
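A minimal sketch of such a time card object is shown below, written in Python purely for illustration; the attribute names and the simple pay calculation are assumptions made for the example, not part of any real payroll system.

    # Hypothetical sketch of a time card object: the data (name, rate, start, end)
    # and the operations on that data are bundled together in one object.
    from datetime import datetime

    class TimeCard:
        def __init__(self, employee_name, hourly_rate, start_time, end_time):
            self.employee_name = employee_name
            self.hourly_rate = hourly_rate
            self.start_time = start_time
            self.end_time = end_time

        def total_hours_worked(self):
            # An operation that works on the object's own data
            return (self.end_time - self.start_time).total_seconds() / 3600

        def total_pay(self):
            return self.total_hours_worked() * self.hourly_rate

    card = TimeCard("J. Mwangi", 500.0,
                    datetime(2024, 1, 8, 8, 0), datetime(2024, 1, 8, 17, 0))
    print(card.total_hours_worked(), card.total_pay())   # 9.0 hours, 4500.0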
Language translators
Although machine language is the only language the CPU understands, it is rarely used anymore since
it is so difficult to use. Every program that is not written in machine language must be translated into
machine language before it can be executed. This is done by a category of system software called
language translation software. These are programs that convert the code originally written by the
programmer, called source code, into its equivalent machine language program, called object code.
There are two main types of language translators: interpreters and compilers.
Interpreters
While a program is running, interpreters read, translate, and execute one statement of the program at a
time. The interpreter displays any errors immediately on the monitor. Interpreters are very useful for
people learning how to program or debugging a program. However, the line-by-line translation adds
significant overhead to the program execution time leading to slow execution.
Compilers
A compiler is a language translation program that converts the entire source program into object
code, known as an object module, at one time. The object module is stored and it is the object module
that executes when the program runs. The program does not have to be compiled again until changes
are made in the source code.
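The compile-once, run-many-times idea can be illustrated loosely with Python's own compile() and exec() built-ins. This is only a teaching analogy, not an example taken from the study text: the source is translated once into an object (code) module, which can then be executed repeatedly without re-translation.

    # Illustrative analogy: translate the source once, then execute the
    # translated form repeatedly without translating it again.
    source_code = "total = sum(range(1, 11))\nprint('total =', total)"

    object_module = compile(source_code, "<demo>", "exec")   # translation step

    for _ in range(2):           # the translated form is reused as-is
        exec(object_module)      # execution step, no further translation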
Information technology has revolutionised the media and modes of computing, storing and communicating information. Man's infinite capacity for invention and desire for discovery, exploration and research has led to rapid growth of technologies, and thereby of information technology. The information explosion has created problems for the proper processing and dissemination of information, which can only be solved with the aid of this information technology.
ICT facilitates innovation, the free flow of information, creative expression and effective management. The use of IT in education has increased tremendously because it provides enhanced satisfaction, cost effectiveness, faster and simpler programmes, rapid responses and easier operational procedures.
Information Technology
The term “Information Technology” in English is derived from the French word ‘ Informatique’ and
“Informatika” in Russian encompasses the notation of information handling. IT is a new science of
collecting, storing, processing and transmitting information.
The word “Information Technology” is a combination of two words. One is information and other is
technology. Information means knowledge, it can be a bit or a para or a page. IT is science of
information handling, particularly using computers to support the communication of knowledge in
technical, economic and social fields.
According to ALA Glossary, Information Technology is the application of computers & other
technologies to the acquisition, organisation, storage, retrieval & dissemination.
Information Technology broadly comprises:
Computer Technology,
Communication Technology and
Reprographic, Micrographic and Printing Technologies.
Computer Technology
The widespread use of computer technology has brought dramatic developments in the information transmission process in every field of human life. Highly sophisticated information services, ranging from elaborate abstracting and indexing services to computerized databases in almost all scientific disciplines, are in wide use all over the world.
Communication Technology
Audio Technology
Due to tremendous improvements and inventions, older gramophone records are now dwindling and more sophisticated cassettes and tape recorders are emerging. The outmoded AM (Amplitude Modulation) radio receivers are being replaced by modern FM (Frequency Modulation) receivers. Thus, the new audio technology can be used in libraries and information centers for a wide variety of purposes, such as recreation.
Audio-Visual Technology
Motion pictures, Television, Video disc are the main contributions of this technology
Videodisc is a new medium containing prerecorded information, which allows the user to reproduce this information in the form of images on the screen of a television receiver at will. Videodisc technology offers high quality storage, image stability and speed of recall.
Facsimile Transmission
Facsimile transmission has been boosted by the adoption of methods of data compression made possible by compact, reliable and inexpensive electronics. During the initial stages, the average speed of facsimile transmission was found to be 3.4 minutes per page. Because this technology was slow, it was replaced by micro-facsimile. Satellite communication and fiber optics have increased the potential of facsimile transmission.
Electronic Mail
E-mail is the electronic transmission and receiving of messages, information, data files, letters or
documents by means of point-to-point systems or computer-based messages system.
Reprographic Technology
The technology of reprography has made a big impact on the document delivery system. Most research libraries have reprographic machines and provide a photocopy of any document on demand. Using reprographic and micrographic techniques, we can condense bulky archives and newspapers and solve storage problems. They also serve the purpose of preservation, help in resource sharing and save the time of users.
Micro Forms
Microform is a term for all types of micro-documents, whether transparent or opaque, and in roll or sheet form. The varieties of microforms are microfilm, microfiche, ultrafiche, micro-opaques, cards and computer output microfiche/microfilm (COM).
Roll-film (microfilm)
It is a continuous strip of film with images arranged in sequence. It is available in 100 feet roll with
35mm width.
Microfiche
It is a flat film having a large number of images arranged in rows and columns. A standard-sized microfiche of 4x6 inches accommodates 98 pages.
Printing Technology
Thousands of years ago, people recognized the necessity of keeping records of their daily activities.
Paper was invented and the art of writing and record keeping came to be defined. At present, lasers and computers have entered the field of printing. Printers commonly used with computers include:
a) line printers,
b) dot matrix printers, and
c) laser printers.
Laser printers are popular today.
Conclusion
New information technology will enable information services to carry out consolidation and synthesis of scientific information on a very large scale. Given the tremendous advantages and advancement of information technology, it can be concluded that digital learning will help learners to get a new learning experience. No doubt, ICT will supplement the traditional educational system, but it will not replace it.
According to the European Commission, the importance of ICTs lies less in the technology itself than
in its ability to create greater access to information and communication in underserved populations.
Many countries around the world have established organizations for the promotion of ICTs, because it
is feared that unless less technologically advanced areas have a chance to catch up, the increasing
technological advances in developed nations will only serve to exacerbate the already-existing
economic gap between technological "have" and "have not" areas. Internationally, the United Nations
actively promotes ICTs for Development (ICT4D) as a means of bridging the digital divide.
ICT has become an integral and accepted part of everyday life for many people. ICT is increasing in
importance in people’s lives and it is expected that this trend will continue, to the extent that ICT
literacy will become a functional requirement for people’s work, social, and personal lives.
ICT includes the range of hardware and software devices and programmes such as personal
computers, assistive technology, scanners, digital cameras, multimedia programmes, image editing
software, database and spreadsheet programmes. It also includes the communications equipment
through which people seek and access information including the Internet, email and video
conferencing.
The use of ICT in appropriate contexts in education can add value in teaching and learning, by
enhancing the effectiveness of learning, or by adding a dimension to learning that was not previously
available. ICT may also be a significant motivational factor in students’ learning, and can support
students’ engagement with collaborative learning.
ICT staff is responsible for the development, management and support of the ICT infrastructure in the
organisation, including the internal and external electronic communication networks, including:
a. wide area networks (WANs) and local area networks (LANs) that link the operational systems within healthcare organisations
b. the hardware e.g. desktop computers, printers
c. software systems e.g. email systems, applications and systems used for pathology reports and
patient administration.
The number of people working in the ICT department and what they do will depend on:
The size of the computing facility. Larger computers are operated on a shift work basis.
The nature of the work. Batch processing systems tend to require more staff.
Whether a network is involved. This requires additional staff.
How much software development and maintenance is done in-house instead of being sourced externally.
The information technology staff may be categorized into various sections whose managers are
answerable to the information technology manager. The responsibilities of the information technology
manager include:
Giving advice to managers on all issues concerning the information technology department.
Determining the long-term IT policy and plans of the organization.
Liaison with external parties such as auditors and suppliers.
Setting budgets and deadlines.
Selecting and promoting IT staff.
A typical ICT department is headed by an ICT director, supported by managers responsible for development, production control and support, together with functions such as network management, the development centre, technology management and capacity management.
The sections that make up the ICT department and their functions are discussed below:
1) Development section
System Analysis Functions include:
System investigations.
System design.
System testing.
System implementation.
System maintenance.
2) Operations section
Duties include:
Planning procedures, schedules and staff timetables.
Contingency planning.
Supervision and coordination of data collection, preparation, control and computer room
operations.
Liaison with the IT manager and system development manager.
a) Data preparation
Data preparation staff are responsible for converting data from source documents to computer sensible
form.
Duties are:
Correctly entering data from source documents and forms.
Keeping a record of data handled.
Reporting problems with data or equipment.
b) Data control
Data control staff are generally clerks. Duties include:
Receiving incoming work on time.
Checking and logging incoming work before passing it to the data preparation staff.
Dealing with errors and queries on processing.
Checking and distributing output.
c) Computer operators
Duties include:
• Starting up equipment.
• Running programs.
• Loading peripherals with appropriate media.
• Cleaning and simple maintenance.
d) Files librarian
The files librarian keeps all files organized and up to date.
3) Support section
This section is charged with responsibility for database and network management.
Database management
The database administrator is responsible for database management, including the planning, organization and control of the database.
Network management
It is important to measure how a system, organization or a department performs, mainly its efficiency
and effectiveness.
Efficiency is a ratio of what is produced to what is consumed. It ranges from 0 – 100%. Systems can
be compared by how efficient they are.
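A small worked example of the efficiency ratio is sketched below in Python; the figures are invented purely for illustration.

    # Efficiency = (useful output / input consumed) x 100, ranging from 0 to 100%
    def efficiency(output_units, input_units):
        return (output_units / input_units) * 100

    # e.g. a data-entry section producing 4,500 usable records from 5,000 submitted
    print(efficiency(4500, 5000))   # 90.0 (per cent)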
a. Independent
The independent structure has no established standards of communication. It is considered flexible
and a product of individual activity. Thus, it is the mode for professionals who own their own offices and function as their own entities, like attorneys and physicians. As a result, communication is viewed
in a more fragmented way. In the business world, this structure is almost exclusively confined to
independent professionals.
b. Matrix
The matrix structure of a business is based on group work within departments. In other words, each
department is assigned a task, and that department is responsible for completing that task. The result
is a form of business communication that also tends to be somewhat fragmented, but only if the departments fail to communicate with one another. Within the departments, communication is
more effective, because the task at hand requires keeping one another informed.
c. Entrepreneurial
The entrepreneurial business structure is most common within small businesses. Here, leadership
(whether one or more) communicates decisions to individual employees. Consequently, results are
achieved more quickly because decision makers readily convey their decisions to the employees
responsible for carrying them out.
d. Pyramid
The pyramid structure is seen most frequently in large companies with multiple departments. The
decisions of company heads are passed down through the chain of command: to department heads,
supervisors, managers, and so forth. Inversely, information about company activities often flows from
employees up through managers, supervisors, department heads and to company heads.
e. Communication Channels
Regardless of the type of business structure, all information passed from one person to another
follows some type of communication channel. Communication channels can be either lateral (horizontal) or vertical. Lateral communication is within a department or between departments among employees of the same level. Vertical communication is from one level of an organization to another. The organization's overall business
structure will play a role in the communication channels it develops.
From the implementation of mainframes and desktops, through to cloud computing and smart phones, business has adapted to changes in Information Communication Technology (ICT). Whilst what a business needs to do changes slowly (the need to be customer centric and make a profit), how a business operates (the use of ICT to better serve customers) has changed significantly and rapidly. It is the change in how a business operates, including its use of ICT, that allows a business to remain competitive.
Although ICT has significantly impacted businesses to lower costs, improve service, and standardise
processes and operations, the adoption of ICT and resulting business changes has not always been
smooth. Some businesses have failed to make changes, others have missed opportunities, and others
are reluctant to change due to risk and/or the need to overcome incumbency. The business change
around the adoption of ICT starts with an appreciation of the business impacts of changes in ICT.
ICT is Business
Irrespective of an individual technology or changes in a technology, common requirements for ICT within the business environment are discussed below.
Changes in ICT, the availability of information and the speed with which decisions need to be made are changing the command and control structure within businesses. Even if the decision makers had all of
the information needed at the right time to make a decision, decision makers struggle to find the time
to make all of the decisions. The emerging trend is to use ICT to allow for decentralised decision
making within frameworks for delivery. The changes in ICT are driving empowerment and problem
solving at source. Such changes place a premium on strategy and planning, with a culture of
empowerment to manage outcomes and behaviours. Underpinning such a structure are distributed
operations with the ability to adapt to changes, to self-heal and create an emergent behaviour.
Changes include:
a. People – Leaders with visions and strategy and the ability to implement and manage such
environments. The assurance to support empowered operations is required, together with
decision making at source. The required strategies, communication and skilling of staff to
work within such structures are necessary.
b. Process – Adoption of distributed operation business models and the use of frameworks and
tools such as enterprise risk management to ensure delivery.
c. Information – Access to information is key to success, with knowledge being a utility that
underpins business.
Transaction Processing
As more transactions are processed by ICT without intervention, the skill set required is changing.
Proactive problem solvers are required when things go wrong and to manage exceptions, and to
engage with customers to manage expectations. With routine transactions processed by ICT, more
skilled resources with excellent communication skills and increasing specialisation are required to
address complicated and high worth transactions. A veneer of generalists to work across the resulting
silos is also required. Changes include:
a. People – More skilled resources with critical thinking and proactive problem solving are
required. A premium is placed on the professional or soft skills.
b. Process – Successful processes are engineered from the customer's point of view to deliver outcomes and work across the silos of a business.
c. Information – Access to information in context integrated with work-flow is required.
Collaboration
Meeting customer needs and delivery of outcomes increasingly requires collaboration across
interacting dependencies. Permanent staff, casuals, contractors, out-sourcers, and off-shore resources
are increasingly coming together to work across the globe in collaborative teams to address issues as they arise. The freeing up of staff from routine transaction processing further reinforces the project
nature of roles. Changes include:
Changing Markets
The increasing use of ICT means that products come to market faster and spend less time in the market, with offerings being more easily copied and improved upon. Changes to the business model, such as the use of the "value of free" or of "how to" content, are being accommodated. Revision of the sales process to include webinars and podcasts, the need for sticky messages, and the role of the salesperson as the trusted adviser in an ocean of choice (solution selling) are all impacting businesses. Changes include:
a. People
Ability to respond to change and challenges is required, together with the ability to listen and problem
solve. The empowerment of an educated and skilled workforce that is trusted to deliver in such an
environment is required.
b. Process
Within dynamic markets, processes need to respond and accommodate change whilst assuring
delivery.
c. Information
The cross-silo management of knowledge is required.
a. People – Ability to work across channels where and when the opportunity presents is
required. Flexibility and professionalism of skilled resources allowing for critical thinking
and innovation ensures delivery.
b. Process – The ability to deliver across channels and devices is necessary, with a tight
integration of information to process.
c. Information – Access to information to facilitate conversation and interaction is required.
INFORMATION CENTERS
ICT has shaped the global arena and revolutionized the way we transact business at the local and international level. Competition for business has become cut-throat; customers are much more informed and their level of expectation very high; government funding has dwindled over the last few years; the environment has become very fluid and dynamic. In the current setup, only those businesses that are innovative, dynamic and technology savvy have a chance of surviving the complexity and strong turbulence. Quality has become a key concern for organizations and customers, and this cannot be gainsaid.
In realization of these global developments and related initiatives in this key area of technology, businesses have embraced ICT as a driving force for the attainment of their goals and objectives. While there is pride in enviable achievements in expanding the ICT infrastructure and providing more access, technology has enabled businesses to participate effectively in the global arena.
An information center is a "center designed specifically for storing, processing, and retrieving information for dissemination at regular intervals, on demand or selectively, according to the express needs of users".
1. Assess the current situation
Before any campaign can be developed, you need to look at the current situation. What perceptions exist? Are they fair? Are there misconceptions which need to be overcome?
Measuring the perceived value of the Information Center will help you plan your strategy. You will know what you want to achieve through the promotional activities, and how to measure their success. There are three main ways you can obtain this information.
2. Identify audiences
It is likely that the Information Center will have different target audiences, with different
expectations, information needs and perceptions. These should be segmented into identifiable groups
so you can communicate more effectively with them. These groups may be split by function, e.g. marketing, sales, finance, human resources etc., or according to familiarity, from those who know the Information Center and use its services a great deal through to late adopters who are not yet aware of what it can do for them.
3. Set objectives
Once you have identified the current perceptions, you will know what you want to achieve. Setting
objectives will lay down clear goals which can be used in the future to assess the success of your
campaign. Objectives might include:
4. Set tactics
Consideration needs to be given as to how you will actually achieve these objectives. There are a
number of tactics which can be used, and which may vary depending on the audiences you have
identified. These could include:
Advertising
This may be most effective for new recruits, or those who rarely use the Information Center.
Induction visits, electronic bulletin boards, email, brochures and flyers could all be used to advertise
the Information Center's services.
Demonstrations
Seminars, quarterly briefings, and workshops are all easy, interactive promotional tools. These offer
good opportunities to deepen the knowledge users have about the Information Center. Guest speakers
in particular topic areas will enable you to showcase an example of how you have assisted them with
a particular business project.
Newsletters
Newsletters featuring information updates, new research findings and case studies can all be used to
push out positive messages.
Alerting
Alerts on users' systems could be set up to bring new information to their attention in real time. This
could be done by function or department, so that numerous alerts don't have to be set up all around the
organisation.
5. Set timescales
Consider over what timescales you will run your first campaign. It is important not to expect results
overnight. Set a period of time for each stage of the plan, and then the time over which you will begin
and spread your activity.
6. Measure success
Decide how you will measure the success of your campaign. A follow-up survey after the first quarter, and then every six months, will help you know if you are nearing your goal. As well as the
surveys, be sure to include opportunity for brainstorming, round table discussions and debate in your
workshops and seminars. This will throw up new ideas about how the Information Center and users
can work better together, what new information needs users may have, and how they can help you. If
the internal events programme is a great success, it may be worth setting up a user committee which
meets on a regular basis.
7. Communicate success
Be positive. If the Information Center has completed any successful projects, gained some new and
exclusive information, or taken on a new recruit, this should be communicated. Your aim should be to
keep what you do, who does it and with what results at the forefront of users' minds. Take advantage
of monthly newsletters, a bulletin board on an intranet site, or speaker opportunities at company
events.
Look at new ways of researching and delivering information to help you develop a better service and
become known as a centre of excellence. Keep investigating new technologies or emerging
approaches to research and present your findings and innovations at workshops or briefings, to help
raise support for your work and create organisational buy-in to the importance of inward investment.
Time spent shadowing key personnel in other departments will help you build contacts and
relationships. It will also enable you to better understand the business priorities of your users and their
information needs. In order that this does not prove a costly drain on resources in the information
team, one person could be assigned to shadow another in a particular function, with shadowing done
in rotation.
The key to effective promotion is persistence. A successful campaign is one that develops over time.
It is not a one-off exercise. As the relationship between the Information Center and users matures, the
promotion will become easier and users will approach you with ideas, questions and feedback.
Information technology has brought forward capabilities which previously were considered the stuff of fiction. Information technology has supported the miniaturization of electronic circuits, making many products portable, for example computers, phones, etc. Information technology has helped the development of communication technology by making it affordable. The penetration rate of mobile phones is higher than ever before, with greater coverage and ever lowering cost.
The concept of big data has become reality with the development of high-capacity storage devices. Information technology is a network of devices connected with each other, which process data into useful and meaningful information. Information technology, therefore, has six broad functions around which innovation is driven. The six broad functions are as follows:
a) Capture
It is defined as a process to obtain information in a form which can be further manipulated. This input
of information may be through keyboard, mouse, picture, etc.
b) Transmit
It is defined as a process through which captured information is sent from one system to another. This
system could be within same geographical boundary or otherwise. For example, Radio, TV, email,
telephone, fax, etc.
c) Store
It is defined as a process through which captured information is kept in a safe and secure manner and can be further accessed when required, for example on a hard disk, USB drive, etc.
d) Retrieval
It is defined as a process through which stored information can be called upon when required. For
example, RAM, hard disk, USB, etc.
e) Manipulation
It is defined as a process through which captured and stored information can be transformed. This transformation could be the arrangement of data, calculation, presentation, etc. For example, computer software.
f) Display
It is defined as a process of projecting the information. For example, computer screen, printer, etc.
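These functions can be seen even in the smallest program. The sketch below, written in Python with invented data purely for illustration, captures input, stores it, retrieves it, manipulates it and displays the result; transmission is left out to keep the example local to one machine.

    # Illustrative walk through five of the six functions (transmit omitted).
    records = []                          # store: a simple in-memory store

    def capture(value):                   # capture: obtain data in a usable form
        records.append(value)             # ...and store the captured value

    def retrieve():                       # retrieval: call up stored information
        return list(records)

    def manipulate(values):               # manipulation: transform the data
        return sum(values) / len(values)

    for mark in (56, 72, 81):             # data arriving, e.g. from a keyboard
        capture(mark)

    average = manipulate(retrieve())
    print("Average mark:", average)       # display: project the information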
a) Portability
Advances in information technology have made portability of all electronic gadgets possible.
b) Speed
Computing is now done at speeds at which earlier generations of supercomputers worked.
c) Miniaturization
Another innovation is in the form of hand-held computing devices and information systems, such as GPS devices, smartphones, the iPad, etc.
d) Connectivity
Information technology has transformed communication capability.
e) Entertainment
Proliferation of multimedia and digital information has been tremendous.
f) User Interface
Advancement in information technology has changed the way users interact with computing devices. The advent of the touch screen has made computing intuitive and interactive.
From the above cases there can be no doubt that information technology and its development are the driving force behind today's innovation.
Terminology
Multiprogramming
Multiprogramming is a rudimentary form of parallel processing in which several programs are run at
the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous
execution of different programs. Instead, the operating system executes part of one program, then part
of another, and so on. To the user it appears that all programs are executing at the same time.
Multiprocessing
Multiprocessing is the coordinated (simultaneous execution) processing of programs by more than one
computer processor. Multiprocessing is a general term that can mean the dynamic assignment of a
program to one of two or more computers working in tandem or can involve multiple computers
working on the same program at the same time (in parallel).
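As a loose illustration of multiprocessing, the sketch below (not from the study text) uses Python's multiprocessing module to share independent pieces of the same job among several worker processes, which the operating system can schedule on different processors.

    # Hypothetical sketch: parts of one job run in parallel across worker processes.
    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        with Pool(processes=4) as pool:              # up to four worker processes
            results = pool.map(square, range(10))    # the work is shared among them
        print(results)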
Multitasking
In a computer operating system, multitasking is allowing a user to perform more than one computer
task (such as the operation of an application program) at a time. The operating system is able to keep
track of where you are in these tasks and go from one to the other without losing information. Microsoft
Windows 2000, IBM's OS/390, and Linux are examples of operating systems that can do multitasking
(almost all of today's operating systems can). When you open your Web browser and then open Word at the same time, you are causing the operating system to do multitasking.
Multithreading
It is easy to confuse multithreading with multitasking or multiprogramming, which are somewhat
different ideas.
Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to have multiple copies of the program running in the computer.
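A minimal Python sketch of multithreading is given below, purely for illustration: one program serves several users at once through separate threads rather than separate copies of the program. The user names are invented for the example.

    # Hypothetical sketch: one program, several concurrent threads of execution.
    import threading

    def serve(user):
        print("handling request for", user)

    threads = [threading.Thread(target=serve, args=(name,))
               for name in ("Amina", "Brian", "Chao")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()          # wait for all threads to finish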
REVISION EXERCISES
1. Explain an overview of computer systems
2. Discuss in detail computer structures
3. Discuss the function of a control unit
4. Differentiate between data and information
5. What are the features of the Random Access Memory and Read Only Memory?
22. Describe multiprogramming, virtual storage, time-sharing, and multiprocessing. Why are they
important for the operation of an information system?
23. Explain two major advantages of multiprogramming.
CHAPTER 2
INTRODUCTION TO SYSTEMS DEVELOPMENT
SYNOPSIS
Introduction……………………………………………………. 58
Role Of Management in Systems Development……………….. 59
Systems Development Approach………………………………. 60
Systems Development Life Cycle……………………………… 64
Rapid Applications Development……………………………… 75
Business Process Re-Engineering……………………………… 77
Systems Development Constraints…………………………….. 80
INTRODUCTION
Software systems are developed in order to support the activities that occur in some (class of)
business domain(s). As a direct consequence, concepts from the business domain(s) are bound to play
an important role in the deliverables that are produced in the course of system development, such as
requirements and design documents, the constructed system, as well as the manuals for using the
system. When, for instance, developing a software system to assist in the handling of claims in the
context of a health insurance company, concepts such as “claim”, “treatment”, “processing of claims”,
“policy”, etc., are bound to play a crucial role. During system development, requirements on the
software system are likely to be expressed in terms of these concepts, while the design of the system
is bound to comprise a class or entity type “claim” and “policy” and some activity/process “claim
processing". Needless to say, these concepts will even be reflected in the (user) manuals of the system.
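To make the point concrete, a design for such a claims system would be likely to contain classes named after these business concepts. The fragment below is a hypothetical Python sketch; the class names, attributes and the approval rule are assumptions made for illustration, not taken from any real insurance system.

    # Hypothetical sketch: business-domain concepts surfacing as design elements.
    class Policy:
        def __init__(self, policy_number, holder):
            self.policy_number = policy_number
            self.holder = holder

    class Claim:
        def __init__(self, policy, treatment, amount):
            self.policy = policy
            self.treatment = treatment
            self.amount = amount
            self.status = "received"

    def process_claim(claim):
        # A stand-in for the business activity "processing of claims"
        claim.status = "approved" if claim.amount <= 100000 else "referred"
        return claim.status

    policy = Policy("P-001", "A. Otieno")
    print(process_claim(Claim(policy, "outpatient treatment", 25000)))  # approved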
The concepts in the business domain are not the only concepts that play a role during system
development. The software system will be implemented using several forms of technologies and pre-
existing infrastructures. This gives rise to an additional class of concepts: the concepts from the
implementation domain. These concepts deal with the mapping of the concepts from the business
domain to the technological infrastructure underlying the software system. Examples of such concepts
would be: “claim queue handler”, “claim scheduler”, etc. Some of the concepts in the implementation
domain are likely to be application dependent while others will be of a more infrastructural/generic
nature. In this article we mainly focus on concepts that are native to the business domain.
In sum, one could state that during system development, a lot of "concept handling" occurs. At times we may engage in it without explicitly realizing we do so. Concept handling may occur while establishing the software needs, or during the design and realization of the implementation of the system and its documentation. Business
domain concepts are introduced, evolved and retired for different reasons. Initially they are introduced
with the aim of scoping and understanding the business domain for which the software system is to be
built.
During requirements engineering as well as the design and realization of the system, additional
insights may be gained into the structure and nature of the business domain. These insights are bound
to lead to the evolution of the concepts used thus far.
It is our belief that one should not just handle concepts, but rather consciously manage them. We
regard the proper management of concepts during system development as an essential cornerstone for
the development of systems that indeed fit the needs of the business domain. With the notion of
concept management we refer to: the deliberate activity of introducing, evolving and retiring
concepts, where deliberate hints at the goal-driven nature of the management of the concepts.
ROLE OF MANAGEMENT IN SYSTEMS DEVELOPMENT
Management complements the SDLC when it comes to project quality. It provides a method of managing these unique project efforts, which increases the odds of attaining cost, schedule and quality goals. In particular, it helps to:
i. Provide consistency of success with regard to time, cost, and quality objectives
ii. Ensure customer expectations are met
iii. Collect historical information/data for future use
iv. Provide a method of thought for ensuring all requirements are addressed through a
comprehensive work definition process
v. Reduce Risks associated with the project
vi. Minimize scope creep by providing a process for managing changes
Without strong management support, circumstances will affect our ability to satisfy the customer and
meet our project and product objectives. Management that is willing to intervene, when asked to, will
further increase the probability of successfully delivering a quality product.
a. Information Access
Managers need rapid access to information to make decisions about strategic, financial, marketing and
operational issues. Companies collect vast amounts of information, including customer records, sales
data, market research, financial records, manufacturing and inventory data, and human resource
records. However, much of that information is held in separate departmental databases, making it
difficult for decision makers to access data quickly. A management information system simplifies and
speeds up information retrieval by storing data in a central location that is accessible via a network.
The result is decisions that are quicker and more accurate.
b. Data Collection
Management information systems bring together data from inside and outside the organization. By
setting up a network that links a central database to retail outlets, distributors and members of a
supply chain, companies can collect sales and production data daily, or more frequently, and make
decisions based on the latest information.
c. Collaboration
In situations where decision-making involves groups, as well as individuals, management information
systems make it easy for teams to make collaborative decisions. In a project team, for example,
management information systems enable all members to access the same essential data, even if they
are working in different locations.
d. Interpretation
Management information systems help decision-makers understand the implications of their
decisions. The systems collate raw data into reports in a format that enables decision-makers to
quickly identify patterns and trends that would not have been obvious in the raw data. Decision-
makers can also use management information systems to understand the potential effect of change. A
sales manager, for example, can make predictions about the effect of a price change on sales by
running simulations within the system and asking a number of “what if the price was” questions.
e. Presentation
The reporting tools within management information systems enable decision-makers to tailor reports
to the information needs of other parties. If a decision requires approval by a senior executive, the
decision-maker can create a brief executive summary for review. If managers want to share the
detailed findings of a report with colleagues, they can create full reports and provide different levels
of supplementary data.
a. Waterfall development
The waterfall model is a sequential development approach, in which development is seen as flowing
steadily downwards (like a waterfall) through the phases of requirements analysis, design,
implementation, testing (validation), integration, and maintenance. The first formal description of the
method is often cited as an article published by Winston W. Royce in 1970 although Royce did not
use the term "waterfall" in this article.
• Project is divided into sequential phases, with some overlap and splash back
acceptable between phases.
• Emphasis is on planning, time schedules, target dates, budgets and implementation of
an entire system at one time.
• Tight control is maintained over the life of the project via extensive written
documentation, formal reviews, and approval/signoff by the user and information
technology management occurring at the end of most phases before beginning the
next phase.
The waterfall model is a traditional engineering approach applied to software engineering. It has been
widely blamed for several large-scale government projects running over budget, over time and
sometimes failing to deliver on requirements due to the Big Design Up Front approach. Except when
contractually required, the Waterfall model has been largely superseded by more flexible and versatile
methodologies developed specifically for software development.
b) Prototyping
Prototyping is the process of creating an incomplete model of the future full-featured system, which
can be used to let the users have a first idea of the completed program or allow the clients to evaluate
the program.
Types of Prototyping
There are various kinds of system prototyping. However, all the methods are in some way based on two major types of prototyping:
a) Throwaway Prototyping
Throwaway or rapid prototyping refers to the creation of a model that will eventually be discarded
rather than becoming part of the finally delivered system. After preliminary requirements gathering is
accomplished, a simple working model of the system is constructed to visually show the users what
their requirements may look like when they are implemented into a finished system. The most
obvious reason for using throwaway prototyping is that it can be done quickly.
b) Evolutionary Prototyping
Evolutionary prototyping (also known as breadboard prototyping) is quite different from throwaway
prototyping. The main goal when using evolutionary prototyping is to build a very good prototype in a
structured manner so that we can refine it or make further changes to it. The reason for this is that the
evolutionary prototype, when built, forms the heart of the new system, and the improvements and
further requirements will be built on to it. It is not discarded or removed like the throwaway
prototype. When developing a system using evolutionary prototyping, the system is continually
refined and rebuilt.
c) Incremental Prototyping
The final product is built as separate prototypes. At the end the separate prototypes are merged in an
overall design.
Advantages of Prototyping
i) Prototyping can improve the quality of requirements and specifications provided to developers.
ii) Early determination of what the user really wants can result in faster and less expensive software.
iii) The designer and implementer can obtain feedback from the users early in the project
development.
iv) The client and the contractor can check that the developing system matches the system specification according to which the system is built.
v) It also gives the engineer some idea about the accuracy of initial project estimates and whether the
deadlines can be successfully met.
Disadvantages of Prototyping
i) Insufficient Analysis
Since a working model has to be produced quickly, developers may not properly analyse the complete project. This may lead to a poor prototype and a final project that does not satisfy the users.
c) Incremental development
Various methods are acceptable for combining linear and iterative systems development
methodologies, with the primary objective of each being to reduce inherent project risk by breaking a
project into smaller segments and providing more ease-of-change during the development process.
• A series of mini-waterfalls are performed, where all phases of the waterfall are completed for
a small part of a system, before proceeding to the next increment, or
• Overall requirements are defined before proceeding to evolutionary, mini-waterfall
development of individual increments of a system, or
• The initial software concept, requirements analysis, and design of architecture and system
core are defined via waterfall, followed by iterative prototyping, which culminates in
installing the final prototype, a working system.
d) Spiral development
The spiral model is a software development process combining elements of both design and
prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. It is a
meta-model, a model that can be used by other models.
• Focus is on risk assessment and on minimizing project risk by breaking a project into smaller
segments and providing more ease-of-change during the development process, as well as
providing the opportunity to evaluate risks and weigh consideration of project continuation
throughout the life cycle.
• "Each cycle involves a progression through the same sequence of steps, for each part of the
product and for each of its levels of elaboration, from an overall concept-of-operation
document down to the coding of each individual program."
• Each trip around the spiral traverses four basic quadrants:
(1) determine objectives, alternatives, and constraints of the iteration;
(2) evaluate alternatives; Identify and resolve risks;
(3) develop and verify deliverables from the iteration; and
(4) plan the next iteration.
• Begin each cycle with an identification of stakeholders and their win conditions, and end each
cycle with review and commitment.
e) Rapid application development (RAD)
• Key objective is for fast development and delivery of a high quality system at a relatively low
investment cost.
• Attempts to reduce inherent project risk by breaking a project into smaller segments and
providing more ease-of-change during the development process.
• Aims to produce high quality systems quickly, primarily via iterative Prototyping (at any
stage of development), active user involvement, and computerized development tools. These
tools may include Graphical User Interface (GUI) builders, Computer Aided Software
Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation
programming languages, code generators, and object-oriented techniques.
• Key emphasis is on fulfilling the business need, while technological or engineering excellence
is of lesser importance.
• Project control involves prioritizing development and defining delivery deadlines or
“timeboxes”. If the project starts to slip, emphasis is on reducing requirements to fit the
timebox, not in increasing the deadline.
• Generally includes joint application design (JAD), where users are intensely involved in
system design, via consensus building in either structured workshops, or electronically
facilitated interaction.
• Active user involvement is imperative.
• Iteratively produces production software, as opposed to a throwaway prototype.
• Produces documentation necessary to facilitate future development and maintenance.
• Standard systems analysis and design methods can be fitted into this framework.
The System Development Life Cycle (SDLC) is a series of steps that a project team works through in order to conceptualize, analyze, design, construct and implement a new information technology system. Adhering to an SDLC increases efficiency and accuracy and reduces the risk of product failure.
The SDLC contains a comprehensive checklist of the rules and regulations governing IT systems, and
is one way to ensure system developers comply with all applicable government regulations, because
the consequences of not doing so are high and wide ranging. This is especially true in the post 9/11
environment where larger amounts of information are considered sensitive in nature, and are shared
among commercial, international, federal, state, and local partners.
Overview
The systems development life cycle (SDLC) is a process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. The SDLC aims to produce a high quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.
Computer systems are complex and often (especially with the recent rise of service-oriented
architecture) link multiple traditional systems potentially supplied by different software vendors. To
manage this level of complexity, a number of SDLC models or methodologies have been created,
such as "waterfall"; "spiral"; "Agile software development"; "rapid prototyping"; "incremental"; and
"synchronize and stabilize".
SDLC can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such
as XP and Scrum, focus on lightweight processes which allow for rapid changes along the
development cycle. Iterative methodologies, such as Rational Unified Process and dynamic systems
development method, focus on limited project scope and expanding or improving products by
multiple iterations. Sequential or big-design-up-front (BDUF) models, such as Waterfall, focus on
complete and correct planning to guide large projects and risks to successful and predictable results.
Other models, such as Anamorphic Development, tend to focus on a form of development that is
guided by project scope and adaptive iterations of feature development.
In project management a project can be defined both with a project life cycle (PLC) and an SDLC,
during which slightly different activities occur. According to Taylor (2004) "the project life cycle
encompasses all the activities of the project, while the systems development life cycle focuses on
realizing the product requirements". The SDLC (systems development life cycle) is used during the development of an IT project; it describes the different stages involved in the project, from the drawing board through to the completion of the project.
History
The systems life cycle (SLC) is a methodology used to describe the process for building information
systems, intended to develop information systems in a very deliberate, structured and methodical way,
reiterating each stage of the life cycle. The systems development life cycle, according to Elliott &
Strachan & Radford (2004), "originated in the 1960s, to develop large scale functional business
systems in an age of large scale business conglomerates. Information systems activities revolved
around heavy data processing and number crunching routines".
Several systems development frameworks have been partly based on SDLC, such as the structured
systems analysis and design method (SSADM) produced for the UK government Office of
Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life
cycle approaches to systems development have been increasingly replaced with alternative approaches
and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional
SDLC".
The seven-step process contains a procedural checklist and the systematic progression required to
evolve an IT system from conception to disposition. The following descriptions briefly explain each
of the seven phases of the SDLC:
1. Conceptual Planning
This phase is the first step of any system's life cycle. It is during this phase that a need to acquire or
significantly enhance a system is identified, its feasibility and costs are assessed, and the risks and
various project-planning approaches are defined. Roles and responsibilities for the Asset Manager,
Sponsor's Representative, System Development Agent (SDA), System Support Agent (SSA), and
other parties in SDLC policy are designated during this stage and updated throughout the system's life
cycle.
3. Design.
During this phase, functional, support and training requirements are translated into preliminary and
detailed designs. Decisions are made to address how the system will meet functional requirements. A
preliminary (general) system design, emphasizing the functional features of the system, is produced as
a high-level guide. Then a final (detailed) system design is produced that expands the design by
specifying all the technical detail needed to develop the system.
5. Implementation
During this phase, the new or enhanced system is installed in the production environment, users are
trained, data is converted (as needed), the system is turned over to the sponsor, and business processes
are evaluated. This phase includes efforts required to implement, resolve system problems identified
during the implementation process, and plan for sustainment.
7. Disposition
This phase represents the end of the system's life cycle. It provides for the systematic termination of a
system to ensure that vital information is preserved for potential future access and/or reactivation. The
system, when placed in the Disposition Phase, has been declared surplus and/or obsolete and has been
scheduled for shutdown. The emphasis of this phase is to ensure that the system (e.g., equipment,
parts, software, data, procedures, and documentation) is packaged and disposed of in accordance with
appropriate regulations and requirements.
Object-oriented analysis
Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem domain),
to develop a conceptual model that can then be used to complete the task. A typical OOA model
would describe computer software that could be used to satisfy a set of customer-defined
requirements. During the analysis phase of problem-solving, a programmer might consider a written
requirements statement, a formal vision document, or interviews with stakeholders or other interested
parties. The task to be addressed might be divided into several subtasks (or domains), each
representing a different business, technological, or other areas of interest. Each subtask would be
analyzed separately. Implementation constraints, (e.g., concurrency, distribution, persistence, or how
the system is to be built) are not considered during the analysis phase; rather, they are addressed
during object-oriented design (OOD).
The conceptual model that results from OOA will typically consist of a set of use cases, one or more
UML class diagrams, and a number of interaction diagrams. It may also include some kind of user
interface mock-up.
The input for object-oriented design is provided by the output of object-oriented analysis. Realize that
an output artifact does not need to be completely developed to serve as input of object-oriented
design; analysis and design may occur in parallel, and in practice the results of one activity can feed
the other in a short feedback cycle through an iterative process. Both analysis and design can be
performed incrementally, and the artifacts can be continuously grown instead of completely
developed in one shot. Some typical input artifacts for object-oriented design are:
i. Conceptual model
The conceptual model is the result of object-oriented analysis; it captures concepts in the problem domain.
The conceptual model is explicitly chosen to be independent of implementation details, such as
concurrency or data storage.
ii. Use cases
A use case describes a sequence of events that, taken together, accomplish something useful for the users or other systems. In many circumstances use cases are further elaborated into use case diagrams.
Use case diagrams are used to identify the actor (users or other systems) and the processes they
perform.
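To make the analysis artifacts more concrete, the sketch below (illustrative only; the actor, use case and steps are invented for the example) shows one way the elements of an OOA conceptual model, actors and the use cases they take part in, could be captured as simple data structures.

```python
# A minimal, hypothetical sketch of OOA artifacts: actors, the use cases they
# take part in, and the ordered steps that accomplish a meaningful unit of work.
# Implementation details (persistence, concurrency) are deliberately absent,
# as the text notes they belong to object-oriented design, not analysis.

from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str                                   # e.g. "Customer" or another system

@dataclass
class UseCase:
    name: str                                   # e.g. "Place order"
    actors: list                                # users or other systems taking part
    steps: list = field(default_factory=list)   # ordered events in the use case

customer = Actor("Customer")
place_order = UseCase("Place order", actors=[customer],
                      steps=["browse catalogue", "add items to basket", "confirm payment"])

print(place_order.name, "involves", [a.name for a in place_order.actors])
```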
The SDLC phases serve as a programmatic guide to project activity and provide a flexible but
consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC
phase objectives are described in this section with key deliverables, a description of recommended
tasks, and a summary of related control objectives for effective management. It is critical for the
project manager to establish and monitor control objectives during each SDLC phase while executing
projects. Control objectives help to provide a clear statement of the desired result or purpose and
should be used throughout the entire SDLC process. Control objectives can be grouped into major
categories (domains), and relate to the SDLC phases as shown in the figure.
To manage and control any SDLC initiative, each project will be required to establish some degree of
a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the
project. The WBS and all programmatic material should be kept in the "project description" section of
the project notebook. The WBS format is mostly left to the project manager to establish in a way that
best describes the project work.
There are some key areas that must be defined in the WBS as part of the SDLC policy. The following
diagram describes three key areas that will be addressed in the WBS in a manner established by the
project manager.
The upper section of the work breakdown structure (WBS) should identify the major phases and
milestones of the project in a summary fashion. In addition, the upper section should provide an
overview of the full scope and timeline of the project and will be part of the initial project description
effort leading to project approval. The middle section of the WBS is based on the seven systems
development life cycle (SDLC) phases as a guide for WBS task development. The WBS elements
should consist of milestones and "tasks" as opposed to "activities" and have a definitive period
(usually two weeks or more). Each task must have a measurable output (e.g. document, decision, or
analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems
engineering) and may require close coordination with other tasks, either internal or external to the
project. Any part of the project needing support from contractors should have a statement of work
(SOW) written to include the appropriate tasks from the SDLC phases. The development of an SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.
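As an illustration of these WBS rules, the following sketch (the phases, tasks, durations and outputs are hypothetical) shows a middle section organised by SDLC phase, where each entry is a task with a definite period and a measurable output rather than an open-ended activity.

```python
# A minimal sketch (hypothetical content) of a work breakdown structure (WBS)
# whose middle section follows the SDLC phases. Each entry is a task, not an
# activity, with a defined period and a measurable output, as the policy requires.

wbs = {
    "Design": [
        {"task": "Produce preliminary system design", "weeks": 3, "output": "design document"},
        {"task": "Produce detailed system design",    "weeks": 4, "output": "design document"},
    ],
    "Implementation": [
        {"task": "Convert legacy data",               "weeks": 2, "output": "conversion report"},
        {"task": "Train users",                       "weeks": 2, "output": "training sign-off"},
    ],
}

for phase, tasks in wbs.items():
    for t in tasks:
        print(f"{phase}: {t['task']} ({t['weeks']} weeks) -> {t['output']}")
```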
Baselines are an important part of the systems development life cycle (SDLC). These baselines are established at the end of several of the SDLC phases and are critical to the iterative nature of the model. Each baseline is considered a milestone in the SDLC.
SDLC Objectives
When we plan to develop, acquire or revise a system we must be absolutely clear on the objectives of
that system. The objectives must be stated in terms of the expected benefits that the business expects
from investing in that system.
The primary definition of quality in a business context is the return on investment (ROI) achieved by
the system. The business could have taken the money spent on developing and running the system and
spent it on advertising, product development, staff raises or many other things. However, someone
made a decision that if that money was spent on the system it would provide the best return or at least
a return justifying spending the money on it.
This ROI can be the result of such things as: operational cost savings or cost avoidance; improved
product flexibility resulting in a larger market share; and/or improved decision support for strategic,
tactical and operational planning. In each case the ROI should be expressed quantitatively, not
qualitatively. Qualitative objectives are almost always poorly defined reflections of incompletely
analyzed quantitative benefits.
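The point about quantitative objectives can be illustrated with a short calculation. The sketch below uses assumed figures (the savings, period and system cost are not from the text) to express a system's expected benefit as a numeric ROI rather than a qualitative claim.

```python
# A minimal sketch (hypothetical figures) of expressing a system's business
# objective quantitatively as return on investment (ROI), as the text recommends.

def roi(total_benefit, total_cost):
    """ROI expressed as a percentage of the investment."""
    return (total_benefit - total_cost) / total_cost * 100

annual_cost_savings = 250_000      # assumed operational savings per year
years = 3
system_cost = 400_000              # assumed development and running cost over the period

print(f"ROI over {years} years: {roi(annual_cost_savings * years, system_cost):.1f}%")  # 87.5%
```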
The SDLC must ensure that these objectives are well defined for each project and used as the primary
measure of success for the project and system.
The business objectives provide the contextual definition of quality. There is also an intrinsic
definition of quality. This definition of quality centers on the characteristics of the system itself: is it
zero defect, is it well-structured, is it well-documented, is it functionally robust, etc. The
characteristics are obviously directly linked to the system's ability to provide the best possible ROI.
Therefore, the SDLC must ensure that these qualities are built into the system. However, how far you
go in achieving intrinsic quality is tempered by the need to keep contextual quality (i.e., ROI) the
number one priority. At times there are trade-offs to be made between the two. Within the constraints
of the business objectives, the SDLC must ensure that the system has a high degree of intrinsic
quality.
c. Maximize Productivity
There are two basic definitions of productivity. One centers on what you are building; the other is
from the perspective of how many resources, how much time and how much money it takes to build
it. The first definition of productivity is based on the return on investment (ROI) concept. What value
is there in doing the wrong system twice as fast? It would be like taking a trip to the wrong place in a
plane that was twice as fast. You might have been able to simply walk to the correct destination.
Therefore, the best way to measure a project team's or system department's productivity is to measure
the net ROI of their efforts. The SDLC must not just ensure that the expected ROI for each project is
well defined. It must ensure that the projects being done are those with the maximum possible ROI
opportunities of all of the potential projects.
Even if every project in the queue has significant ROI benefits associated with it, there is a practical
limit to how large and how fast the systems organization can grow. We need to make the available
staff as productive as possible with regard to the time, money and resources required to deliver a
given amount of function. The first issue we face is the degree to which the development process is
labor intensive. Part of the solution lies in automation. The SDLC must be designed in such a way as
to take maximum advantage of the computer assisted software engineering (CASE) tools.
The complexity of the systems and the technology they use has required increased specialization.
These specialized skills are often scarce. The SDLC must delineate the tasks and deliverables in such
a way as to ensure that specialized resources can be brought to bear on the project in the most
effective and efficient way possible.
One of the major wastes of resources on a project is having to do things over. Scrap and rework
occurs due to such things as errors and changes in scope. The SDLC must ensure that scrap and
rework is minimized. Another activity that results in non-productive effort is the start-up time for new
resources being added to the project. The SDLC must ensure that start-up time is minimized in any
way possible. A final opportunity area for productivity improvements is the use of off-the-shelf
components. Many applications contain functions identical to those in other applications. The SDLC
should ensure that if useful components already exist, they can be re-used in many applications.
What we have identified so far are the primary business objectives of the SDLC and the areas of
opportunity we should focus on in meeting these objectives. What we must now do is translate these
objectives into a set of requirements and design points for the SDLC.
SDLC Requirements
The requirements for the SDLC fall into five major categories:
- Scope
- Technical Activities
- Management Activities
- Usability
- Installation Guidance
The scoping requirements bound what types of systems and projects are supported by the SDLC. The
technical and management activities define the types of tasks and deliverables to be considered in the
project. The usability requirements address the various ways in which the SDLC will be used by the
team members and what must be considered in making the SDLC easy to use in all cases. The
installation requirements address the needs associated with phasing the SDLC into use, possibly piece
by piece, over time.
Scope Requirements
The SDLC must be able to support various project types, project sizes and system types.
Project Types
There are five project types that the SDLC must support:
- New Development
- Rewrites of Existing Systems
- Maintenance
- Package Selection
- System Conversions
New Development
A totally new system development effort implies that there is no existing system. You have a blank
sheet of paper and total latitude in defining its requirements and design. In reality this is a rather rare
occurrence.
Rewrites
In a rewrite there is an existing system but the current design has degenerated to become so poorly
structured that it is difficult to maintain or add any significant new features. Therefore, a new system
will be created to take its place. However, there is a necessity to retain a high degree of functional
compatibility with the existing system. Thus, you might go from a batch system to an on-line system,
from a centralized system to a distributed system, etc., but the core business (i.e., logical) functions
remain the same.
Maintenance
Here we must be careful to make the distinction between a management definition of maintenance and
a technical definition. From a management perspective, some organizations call any project of under
six person months, or some similar resource limit, that affects an existing system a maintenance
project. Some even reduce this to just the effort required to fix errors and comply with regulatory
changes. This can also be called "zero-based maintenance", after zero-based budgeting, since anything
over that is discretionary. The rest is called development.
Package Selection
Package selection involves evaluating, acquiring, tailoring and installing third party software.
System Conversions
A system conversion involves translating a system to run in a new environment. This includes
conversions to a new language, a new operating system, a new computer, new disk drives, a new
DBMS, etc. In doing the translation, the system is not redesigned. It is ported over to the new
environment on a one-to-one basis to the extent possible.
In reality, projects are often a blend of these various project types. For example, a package installation
may also require maintenance changes to interfacing systems, developing some new code, converting
other code to run on a compatible configuration and rewriting portions of some systems. The SDLC
must handle each project type and any blend of them.
Project Sizes
Projects come in many sizes. Some may last as little as a day, staffed by only one person. Others may last many years, staffed by hundreds of people scattered across many development locations. The types and degree of management controls, such as project check-points and status reporting, change depending on the size of the effort. The SDLC must accommodate the full range of project sizes without burdening the small project or oversimplifying to the detriment of the large project.
System Types
The SDLC must support each of these and any combination of them. The SDLC actually needs to
support the full range of combinations and permutations of the various project types, project sizes and
system types. It must do this in a single lifecycle. Creating a unique lifecycle for each possible
combination of the above would result in literally billions of SDLC's. (We leave the computation up
to the reader.)
Technical Activities
In addressing each of these topics we will need to distinguish what tasks must be performed from how
they might be performed. This distinction is important since the "how-to" is dependent on the specific
software engineering techniques selected and the available CASE tools. However, the "what" should
be generic and stable, regardless of the techniques and tools.
System Definition
In defining the requirements for supporting analysis, design and coding we must consider three
aspects of the problem: system components, the categories of requirements and system views.
System Components
Regardless of the techniques being used, any system can be said to be composed of ten basic component types:
a) Use Cases
b) Functions
c) Triggers
d) Data Stores
e) Data Flows
f) Data Elements
g) Processors
h) Data Storage
i) Data Connections
j) Actors/External Entities
a) Use Cases are an ordered set of processes, initiated by a specific trigger (e.g., transaction, end
of day), which accomplish a meaningful unit of work from the perspective of the user.
b) Functions are context independent processes that transform data and/or determine the state of
entities.
c) Triggers are the events that initiate Use Cases. There are three types of triggers: time triggers, state triggers and transaction triggers.
d) Data stores are data at rest.
e) Data flows are data in movement between two processes, between a process and a data store, etc.
f) Data elements are the atomic units within data flows and data stores.
g) Processors are the components which execute the processes and events (i.e., computers and people).
h) Data storage is the repository in which the data stores reside (e.g., disks, tapes, filing cabinets).
i) Data connections are the pipelines through which the data flows flow (e.g., communications network, the mail).
j) Actors/External entities are people or systems outside the scope of the system under investigation but with which it must interface.
Each of these components has many properties or attributes which are needed to fully describe them. For example, in describing a process we can state its algorithm, who or what executes it, where it takes place, when it takes place, how much information it must process, etc. In a given project and for a given component, the properties which must be gathered/defined may vary. The SDLC must allow for this flexibility versus an all-or-nothing approach.
Instead of viewing the SDLC from a strength or weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed.
An alternative to the SDLC is rapid application development, which combines prototyping, joint
application development and implementation of CASE tools. The advantages of RAD are speed,
reduced development cost, and active user involvement in the development process.
RAD supports the analysis, design, development and implementation of individual application
systems. However, RAD does not support the planning or analysis required to define the information
needs of the enterprise as a whole or of a major business area of the enterprise. RAD provides a means
for developing systems faster while reducing cost and increasing quality. This is done by:
i. Automating large portions of the system development life cycle,
ii. Imposing rigid limits on development time frames and
iii. Re-using existing components.
RAD typically proceeds through four stages:
i. The concept definition stage defines the business functions and data subject areas that the
system will support and determines the system scope.
ii. The functional design stage uses workshops to model the system’s data and processes and to
build a working prototype of critical system components.
iii. The development stage completes the construction of the physical database and application
system, builds the conversion system and develops user aids and deployment work plans.
iv. The deployment stage includes final user testing and training, data conversion and the
implementation of the application system.
End-user development
End-user development refers to the development of information systems by end users with minimal or
no assistance from professional systems analysts or programmers. This is accomplished through
sophisticated "user-friendly" software tools and gives end-users direct control over their own
computing.
End-user development is suited to solving some of the backlog problem because the end-users can
develop their needed applications themselves. It is suited to developing low-transaction systems. End-
user development is valuable for creating systems that access data for such purposes as analysis
(including the use of graphics in that analysis) and reporting. It can also be used for developing simple
data-entry applications.
Computer-aided software engineering (CASE) tools typically include facilities such as:
a) data dictionaries,
b) reporting facilities,
c) code generators, and
d) documentation generators.
These tools can greatly increase the productivity of the systems analyst or designer by:
Enforcing a standard.
Improving communication between users and technical specialists.
Organizing and correlating design components and providing rapid access to them via a design
repository or library.
Automating the tedious and error-prone portions of analysis and design.
Automating testing and version control.
Information Technology plays a major role in Business Process Reengineering as it provides office
automation, it allows the business to be conducted in different locations, provides flexibility in
manufacturing, permits quicker delivery to customers and supports rapid and paperless transactions.
In general it allows an efficient and effective change in the manner in which work is performed.
The globalization of the economy and the liberalization of the trade markets have formulated new
conditions in the market place which are characterized by instability and intensive competition in the
business environment. Competition is continuously increasing with respect to price, quality and
selection, service and promptness of delivery.
Reengineering is the fundamental rethinking and radical redesign of business processes to achieve
dramatic improvements in critical contemporary measures of performance such as cost, quality,
service and speed.
Process is a structured, measured set of activities designed to produce a specified output for a
particular customer or market. It implies a strong emphasis on how work is done within an
organization.
a) the input (the data or materials entering the process),
b) the processing of the data or materials (which usually goes through several stages and may include unnecessary stops that turn out to be time and money consuming), and
c) the outcome (the delivery of the expected result). The problematic part of the process is the processing.
Business process reengineering mainly intervenes in the processing part, which is reengineered in
order to become less time and money consuming.
The term "Business Process Reengineering" has, over the past couple of year, gained increasing
circulation. As a result, many find themselves faced with the prospect of having to learn, plan,
implement and successfully conduct a real Business Process Reengineering endeavor, whatever that
might entail within their own business organization. Hammer and Champy (1993) define business
process reengineering (BPR) as the fundamental rethinking and radical redesign of the business
processes to achieve dramatic improvements in critical, contemporary measures of performance, such
as cost, quality, service and speed.
It is a new management approach reflecting the practices, experiences of managers and providing a
source of practical feedback to management science. It represents a response to: -
Failure of business processes to meet customer needs and deliver customer satisfaction.
The challenge to organizational politics.
The gap between the strategic decision made in the boardroom and the day-to-day practice of
the business.
The disappointment following the application of information technology to businesses during the 1980s. This resulted in business failures because senior managers failed to align IT strategy with corporate objectives.
BPR is not confined to manufacturing processes and has been applied to a wide range of administrative and operational activities.
There are a number of principles that have been identified for BPR including:
1. Processes should be designed to achieve desired outcomes rather than focusing on tasks. This involves the removal of job demarcation and an emphasis on multi-skilling.
2. People who use the output should perform the process themselves. For example, a company could set up a database of approved suppliers; this would allow the personnel who actually require supplies to order them themselves, using on-line technology, thereby eliminating the need for a separate purchasing department.
3. Incorporate information processing into the real work that produces the information; avoid separate data-gathering processes or operations.
4. Geographically dispersed resources should be treated as if they were centralized for example
economies of scale through central negotiation of supply contracts, without losing the benefits
of decentralization e.g. flexibility and responsiveness.
5. Link parallel activities rather than integrate the results. This would involve for example, co-
ordination between teams working on different aspects of a single process.
6. Empowerment:- ‘Doers’ should be allowed to be self managing. Put the decision point where
the work is performed.
7. Capture information only once. Ideally only at its source.
Benefits of BPR
1. BPR revolves around customer needs and helps to give appropriate focus to the business.
2. It provides cost advantages that assist the organization's competitive position.
3. It encourages a long-term strategic view of operational processes by asking radical questions about how things are done and how they could be improved.
4. It focuses on entire processes, so the exercise can streamline activities throughout the organization.
5. It can help eliminate unnecessary activities and thereby reduce organizational complexity.
Limitations of BPR
1. It requires far-reaching and long-term commitment by management and staff. Securing this is not an easy task.
2. It is sometimes incorrectly seen as a single, once-for-all cost-cutting exercise. Its primary aim is not cost cutting, and it should be an ongoing process. This misconception can create hostility if employees see it as a threat to job security.
3. It is also sometimes seen as a tool for making small changes, when in reality it should be used to make radical changes.
Objectives of BPR
When applying the BPR management technique to a business organization the implementation team
effort is focused on the following objectives:
1. Customer focus
Customer service oriented processes aiming to eliminate customer complaints.
2. Speed
Dramatic compression of the time it takes to complete a task for key business processes. For instance, if a process had an average cycle time of 5 hours before BPR, after BPR the average cycle time might be cut down to half an hour.
3. Compression
Cutting major costs and capital requirements throughout the value chain. By reorganizing its processes, a company develops transparency throughout the operational level, reducing cost. For instance, the decision to buy a large amount of raw material at a 50% discount may be connected to eleven cross-checks in the organizational structure, from cash flow and inventory to production planning and marketing. These checks become easier to implement within cross-functional teams, optimizing decision making and cutting operational cost.
4. Flexibility
Adaptive processes and structures to changing conditions and competition. Being closer to the
customer the company can develop the awareness mechanisms to rapidly spot the weak points and
adapt to new requirements of the market.
5. Quality
Obsession with delivering superior service and value to customers. The level of quality is consistently controlled and monitored by the processes and does not depend mainly on the person servicing the customer.
6. Innovation
Leadership through imaginative change, providing the organization with a competitive advantage.
7. Productivity
Drastic improvement in effectiveness and efficiency.
Project scope is the work that needs to be accomplished to deliver the specified features and functions of a project. We can also say that the project scope is the set of goals that must be fulfilled in order to complete the project. Without a proper project scope, the development team can go off track and produce a final deliverable that is not what the client intended. This results in a loss of time and resources for the company that develops the application, and the cost will increase because the team has to recode the developed system to suit the client's needs.
Time is the measure used in project scheduling to estimate the project duration. It can be expressed in days, weeks, months or years depending on the complexity and size of the project. The project duration should be carefully planned and enough time should be provided for each stage. Without allocating an adequate amount of time, the system produced may not be of optimum quality, because the development team may have to rush to finish the project within the time frame, resulting in a poorly designed and coded system that is prone to errors and bugs.
Budget (cost) provides a forecast of the revenues and expenditures of the project. By evaluating the budget, the expected profit or loss can also be estimated, which helps in deciding whether or not to undertake the project. Budget estimation is a difficult task since it requires both analytical skill and numerical judgement. Without a proper budget plan, the development can overspend, which leads to a higher production cost and a lower profit margin.
With effective project management, one can balance these constraints. Project management helps to plan every stage of a project carefully from start to end. There are tools and techniques that help us to plan and analyse the project before the actual work starts.
Firstly, project management provides a standard methodology. The methodology becomes a standard practice or framework for all the projects that the development team undertakes. Having a standardized practice improves development productivity, because team members get used to the development approach and become familiar with it after working on several projects. It also keeps the development team within the agreed scope rather than drifting off track.
Secondly, project management also offers a shorter implementation time, because effective project management takes control of each process in the project development. Proper planning of time in project scheduling ensures that the project duration is carefully crafted to avoid wasting resources. This results in a shorter implementation time for the deployment of the Information System (IS). Among the tools and techniques used to plan the duration of a project are the critical path method and project scheduling charts; a simple illustration of the critical path idea is sketched below.
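The following sketch shows the critical path idea mentioned above on a small, invented task network: the minimum project duration is the length of the longest chain of dependent tasks.

```python
# A minimal sketch (hypothetical tasks and durations) of the critical path method (CPM)
# used in project scheduling: the project cannot finish before its longest dependent chain.

from functools import lru_cache

# task -> (duration in days, list of predecessor tasks)
tasks = {
    "requirements": (5, []),
    "design":       (10, ["requirements"]),
    "coding":       (15, ["design"]),
    "testing":      (7, ["coding"]),
    "training":     (4, ["design"]),
    "deployment":   (3, ["testing", "training"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish time = duration + latest earliest-finish of all predecessors."""
    duration, preds = tasks[task]
    return duration + max((earliest_finish(p) for p in preds), default=0)

project_duration = max(earliest_finish(t) for t in tasks)
print("Minimum project duration:", project_duration, "days")  # 40 days for this example
```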
Apart from that, project management also allows cost to be planned fairly accurately. Tools and techniques such as the COCOMO model, payback period, NPV and IRR help to draft and analyse the likely cost and profit of the system that is going to be produced. With a budget estimate, the company can decide whether it is capable of taking the project under its wing, and at the same time it can avoid financial losses and mishaps; a small illustration of two of these appraisal measures follows.
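As a rough illustration, the sketch below computes NPV and the (undiscounted) payback period for a hypothetical stream of project cash flows; the figures and the 10% discount rate are assumptions, not data from the text.

```python
# A minimal sketch (assumed figures) showing how NPV and the payback period,
# mentioned above as project appraisal tools, can be estimated.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative) at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Number of periods until cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # the project never pays back

flows = [-100_000, 30_000, 40_000, 45_000, 50_000]  # hypothetical project cash flows
print("NPV at 10%:", round(npv(0.10, flows), 2))
print("Payback period:", payback_period(flows), "years")
```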
In conclusion, project management is an effective method that assists the development team in overcoming the major constraints in system development. Project management should be applied to projects of every scale, big or small.
REVISION EXERCISES
1. Discuss the role of management in system development.
2. Explain the role of management information system in decision making.
3. Discuss the various system development approaches.
4. Discuss the various types of prototypes.
5. What are some of the advantages and disadvantages of prototyping?
6. What is an application software package? What are the advantages and disadvantages of
developing information systems based on software packages?
7. What is meant by system development life cycle?
8. Discuss the various phases of system development life cycle.
9. What are the main objectives of system development life cycle?
10. What are some of the strengths and weakness of system development life cycle?
11. What are the advantages and disadvantages of building an information system using the
traditional systems life cycle?
CHAPTER 3
INFORMATION SYSTEMS IN AN ENTERPRISE
SYNOPSIS
Introduction……………………………………………………. 83
Types of Information Systems…………………………………. 89
Systems in a Functional Perspective…………………………… 96
Enterprise Applications and the Business Process Integration… 100
INTRODUCTION
An enterprise information system is generally any kind of computing system that is of "enterprise
class". This means typically offering high quality of service, dealing with large volumes of data and
capable of supporting some large organization ("an enterprise").
Enterprise information systems provide a technology platform that enables organizations to integrate
and coordinate their business processes. An enterprise information system provides a single system
that is central to the organization and that ensures information can be shared across all functional
levels and management hierarchies. Enterprise systems create a standard data structure and are
invaluable in eliminating the problem of information fragmentation caused by multiple information
systems within an organization.
A typical enterprise information system would be housed in one or more data centers, would run
enterprise software, and could include applications that typically cross organizational borders such as
content management systems.
The word enterprise can have various connotations. Frequently the term is used only to refer to very
large organizations. However, the term may be used to mean virtually anything, by virtue of it having
become the latest corporate-speak buzzword.
There are plenty of software systems, and information is a constant in all of them. Information is not only a constant element in the systems we are going to focus on, but the fundamental element. The relevance of information in software systems is related to their function: managing this intangible element that we call information.
That is why the main problems these systems have to solve are related to information representation
and persistence, data reception and transmission, and to the devices that help us to transmit and
communicate this information.
Then, what is an information system? We can define an information system as the set of components (or elements) that operate together in order to capture, process, store, and distribute information. This information is generally used for decision making, co-ordination, control, and analysis in an organisation. On many occasions, the system's basic aim is the management of that information.
An information system can further be defined as a coordinated network of components which act together towards producing, distributing and/or processing information. An important characteristic of computer-based information systems is precision, which may not apply to other types of systems.
System
In a system, a network of components works towards a single objective; if there is a lack of co-ordination among components, it leads to counterproductive results. A system may have the following features:
a. Adaptability
Some systems are adaptive to the exterior environment, while some systems are non-adaptive to the
external environment. For example, the anti-lock braking system in a car reacts depending on the road conditions, whereas the music system in the car is independent of what else is happening in the car.
b. Limitation
Every system has pre-defined limits or boundaries within which it operates. These limits or boundaries can be defined by law or by the current state of technology.
Information
Information is commonly equated with data. However, data by itself is not true information; data acquires meaning and significance only when it is interpreted as information. Information is represented with data, symbols and letters.
Representation of Information
Information is represented with the help of data, numbers, letters or symbols. Information is perceived in the way it is represented. The decimal system and the binary system are two ways of representing information. The binary circuits of computers are designed to operate in two states (0, 1).
Organization of Information
The way in which information is organized directly affects the way the information is managed and retrieved.
The simplest way of organizing information is through the linear model. In this form, data is structured one item after another, for example, on magnetic tapes, music tapes, etc.
In a binary tree model, data is arranged in an inverted tree format in which each node branches into at most two values.
The hierarchy model is derived from the binary tree model. In this model, each branch can hold multi-valued data; for example, the UNIX operating system uses this model for its file system.
The hypertext model is another way of organizing information; the World Wide Web is an example of this model.
The random access model is another way of organizing information. This model is used for optimum utilization of available computer storage space. Here data is stored in a specified location under the direction of the operating system.
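A small sketch of the hierarchy model described above (the directory names are invented) shows how each branch can hold several named children, much like a UNIX file system.

```python
# A minimal, illustrative sketch of the hierarchy model of organising information:
# each branch can hold several named children, similar to directories in a file system.

tree = {
    "/": {
        "home": {"alice": {"notes.txt": None}, "bob": {}},
        "etc":  {"hosts": None},
    }
}

def list_paths(node, prefix=""):
    """Walk the hierarchy and print every full path it contains."""
    for name, child in node.items():
        path = f"{prefix}/{name}".replace("//", "/")
        print(path)
        if isinstance(child, dict):
            list_paths(child, path)

list_paths(tree)
```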
Networking Information
Information is networked according to a network topology. The layout of all the connected devices, which gives the network its virtual shape or structure, is known as the network topology. The physical structure may not be representative of the network topology. The basic types of topology are bus, ring, star, tree and mesh.
These topologies are constructed and managed with the help of hubs, switches, bridges, routers, brouters and gateways.
Securing Information
Security of information, as well as of the information system itself, is critical. Data back-up is one way in which information can be made secure. Security management for networks and information systems differs across setups such as home, small business, medium business, large business, school and government.
For most businesses, there are a variety of requirements for information. Senior managers need
information to help with their business planning. Middle management need more detailed information
to help them monitor and control business activities. Employees with operational roles need
information to help them carry out their duties.
As a result, businesses tend to have several "information systems" operating at the same time. This
revision note highlights the main categories of information system and provides some examples to
help you distinguish between them.
Features
If we go further, we may wonder about the main features of information systems. Let's analyse them:
a) They manage huge amounts of persistent data (concretely, they manage the data they store)
b) They manage many users with concurrent access to information (these users produce and consume the data that the system manages)
c) Information system graphical interfaces are, in some respects, defined in relation to the kind of information the system manages (certainly, in many input screens and reports)
d) Information systems can integrate with many other enterprise applications.
A recent survey highlighted that the changes in the business environment can be summarized as follows:
Globalization and the opening up of markets have not only increased competition but have also allowed companies to operate in markets previously considered forbidden.
The inclusion of information technology as an integral part of the business environment has ensured that companies are able to process, store and retrieve huge amounts of data at ever-dwindling costs.
Globalization has encouraged free movement of capital, goods and services across countries.
i. Business environments are complex in nature as well as dynamic because they are dependent
upon factors like political, economic, legal, technological, social, etc. for sustenance.
ii. Business environment affects companies in different industries in its own unique way. For
example, importers may favor lower exchange rate while exporters may favor higher
exchange rate.
iii. With change in the business environment, some fundamental effects are short term in nature
while some are felt over a period of time.
Outsourcing has helped companies reduce their overhead expenses, improve productivity, shorten innovation cycles, encourage new market penetration and also improve customer experience. India has seen tremendous growth in the BPO industry within functions like customer care, finance/accounts, payroll, high-end financial services, human resources, etc.
Emerging Trends
The recent explosion of information technology has seen a few significant emerging trends, for example, mobile platforms for doing business, cloud computing, technology for handling large volumes of data, etc.
These fresh technologies and platforms offer numerous opportunities for companies to drive strategic business advantage and stay ahead of the competition. Companies need to work on new plans so as to maintain flexibility and deliver customer-satisfying products and services.
The two major processing functions of an information system are:
a) Transaction processing
b) Management reporting
a) Transaction processing
Major processing functions include:
i. Process transactions
Activities such as making a purchase or a sale or manufacturing a product. It may be internal to the
organization or involve an external entity. Performance of a transaction requires records to:
b) Management reporting
This is the function involved in producing outputs for users. These outputs are mainly as reports to
management for planning, control and monitoring purposes. Major outputs of an information system
include:
i. Transaction documents or screens
ii. Preplanned reports
iii. Preplanned inquiry responses
iv. Ad hoc reports and ad hoc inquiry responses
v. User-machine dialog results
Types of decisions
a) Structured/programmable decisions
These decisions tend to be repetitive and well defined e.g. inventory replenishment decisions. A
standardized pre-planned or pre-specified approach is used to make the decision and a specific
methodology is applied routinely. Also the type of information needed to make the decision is known
precisely. They are programmable in the sense that unambiguous rules or procedures can be specified
in advance. These may be a set of steps, flowchart, decision table or formula on how to make the
decision. The decision procedure specifies information to be obtained before the decision rules are
applied. They can be handled by low-level personnel and may be completely automated.
It is easy to provide information systems support for these types of decisions. Many structured
decisions can be made by the system itself e.g. rejecting a customer order if the customer’s credit with
the company is less than the total payment for the order. Yet managers must be able to override these
systems’ decisions because managers have information that the system doesn’t have e.g. the customer
order is not rejected because alternative payment arrangements have been made with the customer.
In other cases the system may make only part of the decision required for a particular activity e.g. it
may determine the quantities of each inventory item to be reordered, but the manager may select the
most appropriate vendor for the item on the basis of delivery lead time, quality and price.
Examples of such decisions include: inventory reorder formulas and rules for granting credit.
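A structured decision of this kind can be reduced to a short rule. The sketch below (amounts and field names are hypothetical) encodes the credit-check example above, including the manager override mentioned in the text.

```python
# A minimal sketch (assumed rules and figures) of a structured, programmable decision:
# automatically rejecting a customer order when available credit is insufficient,
# while still allowing a manager override, as described in the text.

def credit_decision(order_total, credit_limit, outstanding_balance, manager_override=False):
    """Return 'accept' or 'reject' for an order based on a pre-specified decision rule."""
    available_credit = credit_limit - outstanding_balance
    if order_total <= available_credit or manager_override:
        return "accept"
    return "reject"

print(credit_decision(order_total=12_000, credit_limit=20_000, outstanding_balance=15_000))   # reject
print(credit_decision(order_total=12_000, credit_limit=20_000, outstanding_balance=15_000,
                      manager_override=True))                                                 # accept
```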
Information systems requirements include:
o Clear and unambiguous procedures for data input
o Validation procedures to ensure correct and complete input
o Processing input using decision logic
b) Semi-structured/semi-programmable decisions
The information requirements and the methodology to be applied are often known, but some aspects
of the decision still rely on the manager: e.g. selecting the location to build a new warehouse. Here the
information requirements for the decision such as land cost, shipping costs are known, but aspects such
as local labour attitudes or natural hazards still have to be judged and evaluated by the manager.
c) Unstructured/non-programmable decisions
These decisions tend to be unique e.g. policy formulation for the allocation of resources. The
information needed for decision-making is unpredictable and no fixed methodology exists. Multiple
alternatives are involved and the decision variables as well as their relationships are too many and/or
too complex to fully specify. Therefore, the manager’s experience and intuition play a large part in
making the decision.
In addition there are no pre-established decision procedures either because:
The decision is too infrequent to justify organizational preparation cost of procedure or
The decision process is not understood well enough, or
The decision process is too dynamic to allow a stable pre-established decision procedure.
Information systems requirements for support of such decisions are:
Access to data and various analysis and decision procedures.
Data retrieval must allow for ad hoc retrieval requests
Interactive decision support systems with generalized inquiry and analysis capabilities.
Example: Selecting a CEO of a company.
A transaction is any business related exchange, such as a sale to a client or a payment to a vendor.
Transaction processing systems process and record transactions as well as update records. They
automate the handling of data about business activities and transactions. They record daily routine
transactions such as sales orders from customers, or bank deposits and withdrawals. Although they are
the oldest type of business information system around and handle routine tasks, they are critical to
the business organization. For example, what would happen if a bank's system that records deposits and withdrawals and maintains account balances disappeared?
TPS are vital for the organization, as they gather all the input necessary for other types of systems.
Think of how one could generate a monthly sales report for middle management or critical marketing
information to senior managers without TPS. TPS provide the basic input to the company’s database.
A failure in TPS often means disaster for the organization. Imagine what happens when an airline reservation system fails: all operations stop and no transaction can be carried out until the system is up and running again. Long queues form in front of ATMs and tellers when a bank's TPS crashes.
Transaction processing systems were created to maintain records and do simple calculations faster,
more accurately and more cheaply than people could do the tasks.
Management Reporting Systems (MRS) formerly called Management information systems (MIS)
provide routine information to decision makers to make structured, recurring and routine decisions,
such as restocking decisions or bonus awards. They focus on operational efficiency and provide
summaries of data. A MRS takes the relatively raw data available through a TPS and converts it into
meaningful aggregated form that managers need to conduct their responsibilities. They generate
information for monitoring performance (e.g. productivity information) and maintaining coordination
(e.g. between purchasing and accounts payable).
The main input to an MRS is data collected and stored by transaction processing systems. An MRS further processes transaction data to produce information useful for specific purposes. Generally, all MIS outputs have been pre-programmed by information systems personnel. Outputs include:
a) Scheduled Reports
These were originally the only reports provided by early management information systems. Scheduled
reports are produced periodically, such as hourly, daily, weekly or monthly. An example might be a
weekly sales report that a store manager gets each Monday showing total weekly sales for each
department compared to sales this week last year or planned sales.
b) Demand Reports
These provide specific information upon request. For instance, if the store manager wanted to know
how weekly sales were going on Friday, and not wait until the scheduled report on Monday, she could
request the same report using figures for the part of the week already elapsed.
c) Exception Reports
These are produced to describe unusual circumstances. For example, the store manager might receive
a report for the week if any department’s sales were more than 10% below planned sales.
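The exception report described above can be illustrated with a few lines of code; the departments, sales figures and plan figures below are invented for the example.

```python
# A minimal sketch (hypothetical data) of the exception report described above:
# list only the departments whose weekly sales fell more than 10% below plan.

weekly_sales = {"grocery": 48_000, "clothing": 30_000, "electronics": 65_000}   # actuals (assumed)
planned_sales = {"grocery": 50_000, "clothing": 36_000, "electronics": 60_000}  # plan (assumed)

THRESHOLD = 0.10  # report departments more than 10% below plan

for dept, plan in planned_sales.items():
    actual = weekly_sales.get(dept, 0)
    shortfall = (plan - actual) / plan
    if shortfall > THRESHOLD:
        print(f"EXCEPTION: {dept} sales {actual:,} are {shortfall:.0%} below plan {plan:,}")
```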
competitors or government regulations may need to be considered, or the facility may be needed due
to a new product line or business venture.
When the structure of a problem or decision changes, or the information required to address it is
different each time the decision is made, then the needed information cannot be supplied by an MIS,
but must be interactively modelled using a DSS. DSS provide support for analytical work in semi-
structured or unstructured situations. They enable managers to answer 'What if' questions by providing
powerful modelling tools (with simulation and optimization capabilities) and to evaluate alternatives
e.g. evaluating alternative marketing plans.
DSS have less structure and predictable use. They are user-friendly and highly interactive. Although
they use data from the TPS and MIS, they also allow the inclusion of new data, often from external
sources such as current share prices or prices of competitors.
ESS has menu-driven user-friendly interfaces, interactive graphics to help visualization of the situation
and communication capabilities that link the senior executives to the external databases they require.
Top executives need ESS because they are busy and want information quickly and in an easy to read
form. They want to have direct access to information and want their computer set-up to directly
communicate with others. They want structured forms for viewing and want summaries rather than
details.
ES may expand the capabilities of a DSS in support of the initial phase of the decision making process.
It can assist the second (design) phase of the decision making process by suggesting alternative
scenarios for "what if" evaluation. It assists a human in the selection of an appropriate model for the
decision problem. This is an avenue for an automatic model management; the user of such a system
would need less knowledge about models. ES can simplify model-building; in particular, simulation modelling lends itself to this approach. ES can provide an explanation of the result obtained with a DSS.
This would be a new and important DSS capability. ES can act as tutors. In addition ES capabilities
may be employed during DSS development; their general potential in software engineering has been
recognised.
unprotected telecommunications lines for data transmissions. Therefore the system must provide high
levels of logical and physical security for both the customer and the machinery.
Information System
An information system can be defined as a coordinated network of components which act together towards producing, distributing and/or processing information. An important characteristic of computer-based information systems is precision, which may not apply to other types of systems.
In any given organization information system can be classified based on the usage of the information.
Therefore, information systems in business can be divided into operations support system and
management support system.
Information Technology
Every day, knowingly or unknowingly, everyone is utilizing information technology. It has grown rapidly and covers many areas of our day-to-day life, such as movies, mobile phones, the internet, etc. Information technology greatly enhances the performance of the economy; it provides an edge in solving social issues as well as making information systems affordable and user friendly.
Information technology has brought big changes to our daily life, be it education, life at home, the workplace, communication and even the functioning of government.
Origin
Information systems have been in existence since the pre-mechanical era in the form of books, drawings, etc. However, the origin of information technology is mostly associated with the invention of computers.
Development
Information systems have undergone a great deal of evolution, from manual record keeping to current cloud storage systems. Similarly, information technology is seeing constant change, with ever-faster processors and constantly shrinking storage devices.
Business Application
Businesses have long used information systems, from manual books of accounts to modern accounting packages such as Tally. The mode of communication has also undergone big changes, for example, from the letter to email. Information technology has helped drive efficiency across organizations with improved productivity and precision manufacturing.
Information systems have been known to mankind in one form or another as a resource for decision making. However, with the advent of information technology, information systems have become sophisticated and their usage has proliferated across all walks of life. Information technology has helped manage large amounts of data and turn it into useful and valuable information.
The table below shows that information systems are used in sales and marketing in a number of ways.
At the strategic level, sales and marketing systems monitor trends affecting new products and sales
opportunities, support planning for new products and services, and monitor the performance of
competitors. At the management level, sales and marketing systems support market research,
advertising and promotional campaigns, and pricing decisions. They analyze sales performance and
the performance of the sales staff. At the operational level, sales and marketing systems assist in
locating and contacting prospective customers, tracking sales, processing orders, and providing
customer service support.
The table below shows some typical manufacturing and production information systems arranged by
organizational level. Strategic-level manufacturing systems deal with the firm’s long-term
manufacturing goals, such as where to locate new plants or whether to invest in new manufacturing
technology. At the management level, manufacturing and production systems analyze and monitor
manufacturing and production costs and resources. Operational manufacturing and production
systems deal with the status of production tasks.
The accounting function is responsible for maintaining and managing the firm’s financial records—
receipts, disbursements, depreciation, payroll—to account for the flow of funds in a firm. Finance and
accounting share related problems—how to keep track of a firm’s financial assets and fund flows.
They provide answers to questions such as these: What is the current inventory of financial assets?
What records exist for disbursements, receipts, payroll, and other fund flows?
The table below shows some of the typical finance and accounting information systems found in large
organizations. Senior management uses finance and accounting systems to establish long-term
investment goals for the firm and to provide long-range forecasts of the firm’s financial performance.
Middle management uses systems to oversee and control the firm's financial resources. Operational management uses finance and accounting systems to track the flow of funds in the firm through transactions, such as pay-checks, payments to vendors, securities reports, and receipts.
Human resources systems help senior management identify the manpower requirements (skills,
educational level, types of positions, number of positions, and cost) for meeting the firm’s long-term
business plans. Middle management uses human resources systems to monitor and analyze the
recruitment, allocation, and compensation of employees. Operational management uses human
resources systems to track the recruitment and placement of the firm’s employees
The 7S framework introduced by McKinsey is one of the ways in which analysis can be done to determine the efficiency of an organization in meeting its strategic objectives.
The 7S model is used to study and suggest areas within a company which need improvement, to examine the effects of a change in strategy, and to check internal alignment during a merger or acquisition.
7S Framework
The 7S framework consists of 7 factors which affect organizational effectiveness. These 7 factors are strategy, organizational structure, IT systems, shared values, employee skills, management style and staff. They can be broadly categorized into hard elements (strategy, structure, systems) and soft elements (shared values, skills, style and staff). The hard elements are the ones which are under the direct control of management. The soft elements are not under the direct control of management and are driven by the internal culture.
Strategy: It is defined as an action plan working towards the organizational defined objective.
Structure: It is defined as design of organization-employees interaction to meet defined
objective.
Systems: It is defined as information systems in which organization has invested to fulfill its
defined objective.
Staff: It is defined as workers employed by the organization.
Style: It is defined as the approach adopted by the leadership to interact with employees,
supplier and customers.
Skills: It is defined as characteristics of employees associated with the organization.
Shared Values: It is the central piece of the whole 7S framework. It is a concept based on
which organization has decided to achieve its objective.
Usage of 7S Framework
The basis of the 7S framework is that, for an organization to meet its objectives, it is essential that all seven elements are in sync and mutually balancing. The model is used to identify which of the 7 factors need to be rebalanced to align with a change in the organization.
The 7S framework is helpful in identifying the pain points which are creating hurdles to organizational growth.
In the digital age, technology and technology-driven information systems are both game changers as far as meeting organizational objectives is concerned. Companies are moving towards automation, cloud computing, etc. This has made technology the central nervous system of the organization.
The 7S framework is applicable across all industries and companies. It is one of the premier models used to measure organizational effectiveness. In this challenging environment, the strategy of an organization is constantly evolving, so it is essential for the organization to look back at its seven elements to identify what is hampering growth.
An organization can use the 7S framework to assess its position with respect to its existing strategy.
Each of these stovepipe systems held independent data; it was recognised that customer information
and the sharing of this information across departments was extremely valuable to an enterprise.
Allowing the disparate systems to interoperate became increasingly important and necessary. As
organisations grew, so too did the desire to integrate key systems with clients and vendors.
Research has shown that during software development, a third of the time is dedicated to the problem of creating interfaces and points of integration for existing applications and data stores. Clearly, the idea
and pursuit of application integration is not something new. What is new are the approach and the
ideas that Enterprise Application Integration (EAI) encompasses and the techniques it uses. In order
for it to be a success and a realistic solution, applying EAI requires involvement of the entire
enterprise: business processes, applications, data, standards and platforms.
Business Process
The focus here is on combining the tasks, procedures, required input and output information and the tools needed at each stage of a process. It is imperative that an enterprise identifies all processes that contribute to the exchange of data within the organisation. This allows organizations to streamline operations, reduce costs and improve responsiveness to customer demands.
Application
The aim here is to take one application's data and/or functionality and merge it with that of another application. This can be realised in a number of ways, for example business-to-business integration, web integration, or building websites that are capable of interacting with numerous systems within the business.
Platform
This provides a secure and reliable means for a corporation’s heterogeneous systems to communicate
and transfer data from one application to another without running into problems.
There are two types of logical integration architecture that EAI employs:
a) Point-to-point Integration
When dealing with very few applications, this method is certainly adequate. Point-to-point integration
is usually pursued because of its ease and speed of implementation. It must be stressed though, that
the efficiency of this method deteriorates as you try and integrate more systems. So, although to begin
with you only have a few systems, consideration must go into the future; scalability is a huge concern.
b) Middleware
An intermediate layer provides generic interfaces through which the integrated systems are able to communicate. The middleware performs tasks such as routing and passing data. Each of the interfaces defines a business process provided by an application, so adding or replacing an application does not affect the other applications.
Overview
Enterprise application integration is an integration framework composed of a collection of
technologies and services which form a middleware to enable integration of systems and applications
across the enterprise.
Supply chain management applications (for managing inventory and shipping), customer relationship
management applications (for managing current and potential customers), business intelligence
applications (for finding patterns from existing data from operations), and other types of applications
(for managing data such as human resources data, health care, internal communications, etc.) typically
cannot communicate with one another in order to share data or business rules. For this reason, such
applications are sometimes referred to as islands of automation or information silos. This lack of
communication leads to inefficiencies, wherein identical data are stored in multiple locations, or
straightforward processes are unable to be automated.
Enterprise Application Integration (EAI) is the process of linking such applications within a single
organization together in order to simplify and automate business processes to the greatest extent
possible, while at the same time avoiding having to make sweeping changes to the existing
applications or data structures. In the words of the Gartner Group, EAI is the “unrestricted sharing of
data and business processes among any connected application or data sources in the enterprise.”
One large challenge of EAI is that the various systems that need to be linked together often reside on
different operating systems, use different database solutions and different computer languages, and in
some cases are legacy systems that are no longer supported by the vendor who originally created
them. In some cases, such systems are dubbed "stovepipe systems" because they consist of
components that have been jammed together in a way that makes it very hard to modify them in any
way.
Improving Connectivity
If integration is applied without following a structured EAI approach, point-to-point connections grow
across an organization. Dependencies are added on an impromptu basis, resulting in a complex
structure that is difficult to maintain. This is commonly referred to as spaghetti, an allusion to the
programming equivalent of spaghetti code.
For example, the number of connections needed to have fully meshed point-to-point connections among n points is n(n - 1)/2, so with ten applications there are already 45 separate interfaces to build and maintain; a middleware approach needs only one connection (adapter) per application.
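To make the growth concrete, here is a minimal, purely illustrative Python sketch (the function names are invented for this example) comparing the link counts of fully meshed point-to-point integration with a hub-and-spoke (middleware) design:

# Illustrative only: count the interfaces needed under each approach.
def point_to_point_links(n):
    # Fully meshed point-to-point integration needs n(n - 1)/2 links.
    return n * (n - 1) // 2

def hub_and_spoke_links(n):
    # A middleware hub needs only one adapter per application.
    return n

for n in (3, 5, 10, 20):
    print(n, point_to_point_links(n), hub_and_spoke_links(n))
# 10 applications already need 45 point-to-point links, but only 10 adapters.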
However, EAI is not just about sharing data between applications; it focuses on sharing both business data and business processes. For middleware analysts, attending to EAI involves looking at a system of systems, which involves large-scale inter-disciplinary problems with multiple, heterogeneous, distributed systems that are embedded in networks at multiple levels. One of the biggest mistakes organizations make in solving this problem is to focus excessively on low-level, bottom-up IT approaches, often driven by development-oriented technical teams. In contrast, a paradigm shift is emerging that starts EAI rationalization efforts with effective top-down, business-oriented analysis found in disciplines such as Enterprise Architecture, Business Architecture and Business Process Management. The business-oriented approach can enable a cohesive business integration strategy which is supported by, instead of dictated by, the technical and data integration strategies.
Purposes
EAI can be used for different purposes:
i. Data integration
It ensures that information in multiple systems is kept consistent. This is also known as enterprise
information integration (EII).
Integration Patterns
There are two patterns that EAI systems implement:
1. Mediation (intra-communication)
Here, the EAI system acts as the go-between or broker between multiple applications. Whenever an
interesting event occurs in an application (for instance, new information is created or a new
transaction completed) an integration module in the EAI system is notified. The module then
propagates the changes to other relevant applications.
2. Federation (inter-communication)
In this case, the EAI system acts as the overarching facade across multiple applications. All event
calls from the 'outside world' to any of the applications are front-ended by the EAI system. The EAI
system is configured to expose only the relevant information and interfaces of the underlying
applications to the outside world, and performs all interactions with the underlying applications on
behalf of the requester.
Both patterns are often used concurrently. The same EAI system could be keeping multiple
applications in sync (mediation), while servicing requests from external users against these
applications (federation).
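A minimal sketch of the mediation pattern, written in Python for illustration only (the Broker class and event names are hypothetical, not part of any particular EAI product):

# Illustrative mediation sketch: a broker receives events from one
# application and propagates them to every application that subscribed
# to that event type.
class Broker:
    def __init__(self):
        self.subscribers = {}                      # event type -> callbacks

    def subscribe(self, event_type, callback):
        self.subscribers.setdefault(event_type, []).append(callback)

    def publish(self, event_type, payload):
        for callback in self.subscribers.get(event_type, []):
            callback(payload)                      # propagate the change

broker = Broker()
broker.subscribe("customer.created", lambda c: print("CRM updated:", c["name"]))
broker.subscribe("customer.created", lambda c: print("Billing updated:", c["name"]))
broker.publish("customer.created", {"name": "Acme Ltd"})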
Access patterns
EAI supports both asynchronous and synchronous access patterns, the former being typical in the
mediation case and the latter in the federation case.
Lifetime patterns
An integration operation could be short-lived (e.g. keeping data in sync across two applications could
be completed within a second) or long-lived (e.g. one of the steps could involve the EAI system
interacting with a human work flow application for approval of a loan that takes hours or days to
complete).
Technologies
Multiple technologies are used in implementing each of the components of the EAI system:
a. Bus/hub
This is usually implemented by enhancing standard middleware products (application server,
message bus) or implemented as a stand-alone program (i. e., does not use any middleware), acting as
its own middleware.
b. Application connectivity
The bus/hub connects to applications through a set of adapters (also referred to as connectors). These
are programs that know how to interact with an underlying business application. The adapter performs
two-way communication, performing requests from the hub against the application, and notifying the
hub when an event of interest occurs in the application (a new record inserted, a transaction
completed, etc.). Adapters can be specific to an application (e. g., built against the application
vendor's client libraries) or specific to a class of applications (e. g., can interact with any application
through a standard communication protocol, such as SOAP, SMTP or Action Message Format
(AMF)). The adapter could reside in the same process space as the bus/hub or execute in a remote
location and interact with the hub/bus through industry standard protocols such as message queues,
web services, or even use a proprietary protocol. In the Java world, standards such as JCA allow
adapters to be created in a vendor-neutral manner.
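The adapter idea can be sketched as follows; this is an illustrative Python outline, and the Adapter and PayrollAdapter names and methods are invented for the example rather than taken from JCA or any vendor library:

# Illustrative adapter sketch: each adapter hides one application behind a
# common two-way interface used by the hub.
from abc import ABC, abstractmethod

class Adapter(ABC):
    @abstractmethod
    def execute(self, request):
        """Perform a request from the hub against the application."""

    @abstractmethod
    def poll_events(self):
        """Return events of interest (new record, completed transaction)."""

class PayrollAdapter(Adapter):
    def execute(self, request):
        return {"status": "ok", "request": request}    # call payroll API here

    def poll_events(self):
        return [{"type": "record.inserted", "table": "employees"}]

for adapter in [PayrollAdapter()]:
    for event in adapter.poll_events():
        print("hub notified:", event)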
d. Integration modules
An EAI system could be participating in multiple concurrent integration operations at any given time,
each type of integration being processed by a different integration module. Integration modules
subscribe to events of specific types and process notifications that they receive when these events
occur. These modules could be implemented in different ways: on Java-based EAI systems, these
could be web applications or EJBs or even POJOs that conform to the EAI system's specifications.
Communication Architectures
Currently, there are many variations of thought on what constitutes the best infrastructure, component
model, and standards structure for Enterprise Application Integration. There seems to be consensus
that four components are essential for modern enterprise application integration architecture:
i. A centralized broker that handles security, access, and communication. This can be
accomplished through integration servers (like the School Interoperability Framework (SIF)
Zone Integration Servers) or through similar software like the enterprise service bus (ESB)
model that acts as a SOAP-oriented services manager.
ii. An independent data model based on a standard data structure, also known as a canonical data model. It appears that XML and the use of XML style sheets have become the de facto and in some cases de jure standard for this uniform business language.
iii. A connector, or agent model where each vendor, application, or interface can build a single
component that can speak natively to that application and communicate with the centralized
broker.
iv. A system model that defines the APIs, data flow and rules of engagement to the system such
that components can be built to interface with it in a standardized way.
Although other approaches like connecting at the database or user-interface level have been explored,
they have not been found to scale or be able to adjust. Individual applications can publish messages to
the centralized broker and subscribe to receive certain messages from that broker. Each application
only requires one connection to the broker. This central control approach can be extremely scalable
and highly evolvable.
An information system model comprises the following elements:
a) customers,
b) products and services,
c) business processes, and
d) communication technology.
The design of an information system is based on these elements of the model.
a) Customers
Every information system has end users or customers. An information system can have internal as well as external customers. Customers are the beneficiaries of the products and services provided by an information system. External customers could be people visiting a website for shopping or an e-commerce transaction, people searching for a cooking recipe, people searching for tax-saving tools, etc.
Internal customers of an information system could be an employee receiving a salary from the payroll system, an employee checking inventory and stock, etc. Sometimes these employees could also be customers for the product and services; for example, an employee working for a computer manufacturer could be a customer of the manufactured product.
For a manufacturing organization, the production department would be a customer of the supply department. Therefore, the information system requirements of each department are different. Information systems are designed to serve what is best for external customers; however, they should be flexible enough to support internal requirements as well.
b) Products and Services
What an information system delivers depends on its design; it may be a product or a service. In internet banking, for example, the customer can accomplish the entire banking task without visiting the bank; internet banking, therefore, is a service.
An information system can generate various types of services and products based on its design. An effective information system needs to satisfy customer expectations and should provide products and services based on the customer's needs and requirements.
c) Business Processes
Business activity consists of various processes. These processes include talking to the customer, understanding her requirements, manufacturing the product as per those requirements, providing post-sales service, etc. A business process may not be structured all the time and may not be formal. An improvement in a business process directly impacts business performance. An information system can improve a business process by providing relevant information, by improving a step in the process, or by eliminating a step from the process.
d) Communication Technology
Communication technology and computers are the central pieces of an information system model. Their presence is required to deliver efficient business processes and customer-delighting products and services. The infusion of technology within a business creates win-win situations: technology improves internal communication via email, chat, etc., and improves external communication through websites, webinars, etc. Access to valuable information is quicker through an information system, and this can provide a competitive edge in the digital age.
The information system model highlights the pivotal role an information system plays in bringing efficiency to any work system.
REVISION EXERCISES
1. Discuss the changing phase of the business environment.
2. Discuss the various characteristics of business environments.
3. Discuss the various types of information systems.
4. Compare and contrast information systems and information technology.
5. Discuss information systems from a functional perspective.
6. Discuss the elements of an information system model.
CHAPTER 4
FILE ORGANIZATION AND APPLICATION
SYNOPSIS
Introduction……………………………………………………… 107
Files and File Structure…………………………………………. 108
File Organisation Methods……………………………………… 109
Processing of Computer Files………………………………….. 113
Database Systems……………………………………………….. 116
Characteristics, Importance and
Limitations of Database Systems……………………………… 122
Data Warehousing………………………………………………. 126
INTRODUCTION
Files stored on magnetic media can be organised in a number of ways, just as in a manual system.
There are advantages and disadvantages to each type of file organisation, and the method chosen will depend on several factors, including how the records need to be accessed and how frequently they are updated.
A file is a collection of data, usually stored on disk. As a logical entity, a file enables you to divide
your data into meaningful groups, for example, you can use one file to hold all of a company's product
information and another to hold all of its personnel information. As a physical entity, a file should be
considered in terms of its organization.
File organization refers to the logical relationships among the various records that constitute the file,
particularly with respect to the means of identification and access to any specific record.
File structure refers to the format of the label and data blocks and of any logical record control
information. The organization of a given file may be sequential, relative, or indexed.
File organization is the methodology which is applied to structured computer files. Files contain
computer records which can be documents or information which is stored in a certain way for later
retrieval. File organization refers primarily to the logical arrangement of data (which can itself be
organized in a system of records with correlation between the fields/columns) in a file system. It
should not be confused with the physical storage of the file in some types of storage media. There are
certain basic types of computer file, which can include files stored as blocks of data and streams of
data, where the information streams out of the file while it is being read until the end of the file is
encountered. A program that uses a file needs to know the structure of the file and needs to interpret
its contents.
However, all things considered the most important considerations might be:
Rapid access to a record or a number of records which are related to each other.
The adding, modification, or deletion of records.
Efficiency of storage and retrieval of records.
Redundancy, being the method of ensuring data integrity.
A file should be organized in such a way that the records are always available for processing with no
delay. This should be done in line with the activity and volatility of the information.
However, because of the structural differences between file systems and directories, the data within these entities can be managed separately.
When the operating system is installed for the first time, it is loaded into a directory structure.
File Types
The UNIX filesystem contains several different types of files:
1. Ordinary Files
Used to store your information, such as some text you have written or an image you have
drawn. This is the type of file that you usually work with.
Always located within/under a directory file
Do not contain other files
2. Directories
Branching points in the hierarchical tree
3. Special Files
Used to represent a real physical device such as a printer, tape drive or terminal, used for Input/Output (I/O) operations
Unix considers any device attached to the system to be a file - including your terminal:
By default, a command treats your terminal as the standard input file (stdin) from which to
read its input
Your terminal is also treated as the standard output file (stdout) to which a command's output
is sent
Usually only found under directories named /dev
4. Pipes
UNIX allows you to link commands together using a pipe. The pipe acts as a temporary file which only exists to hold data from one command until it is read by another.
For example, to pipe the output from one command into another command:
who | wc -l
This command will tell you how many users are currently logged into the system. The standard output
from the who command is a list of all the users currently logged into the system. This output is piped
into the wc command as its standard input. Used with the -l option this command counts the numbers
of lines in the standard input and displays the result on its standard output - your terminal.
FILE ORGANISATION METHODS
The four main methods of organising files are:
1. serial;
2. sequential;
3. indexed sequential;
4. random access.
Once stored in the file, the record cannot be made shorter, or longer, or deleted. However, the record
can be updated if the length does not change. (This is done by replacing the records by creating a new
file.) New records will always appear at the end of the file.
If the order of the records in a file is not important, sequential organization will suffice, no matter how
many records you may have. Sequential output is also useful for report printing or sequential reads
which some programs prefer to do. As with serial organisation, records are stored one after the other,
but in a sequential file the records are sorted into key sequence. Files that are stored on tape are
always either serial or sequential, as it is impossible to write records to a tape in any way except one
after the other. From the computer’s point of view there is essentially no difference between a serial
and a sequential file. In both cases, in order to find a particular record, each record must be read,
starting from the beginning of the file, until the required record is located. However, when the whole
file has to be processed (for example a payroll file prior to payday) sequential processing is fast and
efficient.
3. Index-sequential files
Key searches are improved by this system too. The single-level indexing structure is the simplest one
where a file, whose records are pairs, contains a key pointer. This pointer is the position in the data
file of the record with the given key. A subset of the records, which are evenly spaced along the data
file, is indexed, in order to mark intervals of data records.
This is how a key search is performed: the search key is compared with the index keys to find the highest index key coming before the search key; a linear search is then performed from the record that this index key points to, until the search key is matched or until the record pointed to by the next index entry is reached. Despite the double file access (index plus data) required by this sort of search, the reduction in access time is significant compared with sequential file searches.
Consider, for the sake of example, a simple linear search on a 1,000-record sequentially organized file. An average of 500 key comparisons is needed (assuming the search keys are uniformly distributed among the data keys). Using an evenly spaced index with 100 entries, however, a search needs on average about 50 comparisons in the index file plus about 5 in the data file, since each index entry covers an interval of only 10 records: roughly a nine-to-one reduction in the operations count.
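The comparison counts above can be checked with a short illustrative calculation (assuming, as the text does, uniformly distributed keys and linear scans):

# Illustrative check of the comparison counts discussed above.
records = 1000
index_entries = 100
interval = records // index_entries              # records per index entry

sequential_avg = records / 2                     # plain linear search: ~500
indexed_avg = index_entries / 2 + interval / 2   # ~50 in the index + ~5 in data

print(sequential_avg, indexed_avg, round(sequential_avg / indexed_avg, 1))
# 500.0 vs 55.0 comparisons: roughly a nine-to-one reduction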
A hierarchical extension of this scheme is possible, since an index is a sequential file in itself and can in turn be indexed by a second-level index, and so on. Exploiting this hierarchical decomposition of the search further to decrease access time pays increasing dividends in reduced processing time. There is, however, a point at which this advantage starts to be eroded by the increased cost of storage, which in turn increases the index access time.
Hardware for index-sequential organization is usually disk-based, rather than tape. Records are
physically ordered by primary key. And the index gives the physical location of each record. Records
can be accessed sequentially or directly, via the index. The index is stored in a file and read into
memory at the point when the file is opened. Also, indexes must be maintained.
Like sequential organization, the data is stored in physically contiguous blocks. The difference, however, is in the use of indexes. There are three areas on the disc storage:
Primary Area
It contains file records stored by key or ID numbers.
Overflow Area
It contains records that cannot be placed in the primary area.
Index Area
It contains the keys of records and their locations on the disc. When there is a need to access records sequentially by some key value and also to access records directly by the same key value, the collection of records may be organized in an effective manner called indexed sequential organization.
You must be familiar with search process for a word in a language dictionary. The data in the
dictionary is stored in sequential manner. However an index is provided in terms of thumb tabs. To
search for a word we do not search sequentially. We access the index that is the appropriate thumb
tab, locate an approximate location for the word and then proceed to find the word sequentially.
To implement the concept of indexed sequential file organization, we consider an approach in which the index part and the data part reside in separate files. The index file has a tree structure and the data file has a sequential structure. Since the data file is sequenced, it is not necessary for the index to have an entry for each record. For example, with a two-level index, level 1 of the index holds an entry for each three-record section of the main file, and level 2 indexes level 1 in the same way.
When the new records are inserted in the data file, the sequence of records need to be preserved and
also the index is accordingly updated.
Two approaches used to implement indexes are static indexes and dynamic indexes.
As the main data file changes due to insertions and deletions, the contents of a static index may change but its structure does not. In the case of the dynamic indexing approach, insertions and deletions in the main data file may lead to changes in the index structure. Recall the change in height of a B-Tree as records are inserted and deleted.
Both dynamic and static indexing techniques are useful depending on the type of application.
4. Random Access
It refers to the ability to access data at random. The opposite of random access is sequential access. To
go from point A to point Z in a sequential-access system, you must pass through all intervening
points. In a random-access system, you can jump directly to point Z. Disks are random access media,
whereas tapes are sequential access media.
The terms random access and sequential access are often used to describe data files. A random-access
data file enables you to read or write information anywhere in the file. In a sequential-access file, you
can only read and write information sequentially, starting from the beginning of the file.
Both types of files have advantages and disadvantages. If you are always accessing information in the
same order, a sequential-access file is faster. If you tend to access information randomly, random
access is better.
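A minimal Python sketch of the two access styles, assuming a hypothetical file of fixed-length 20-byte records:

# Illustrative sketch: direct access to record k in a file of fixed-length
# records, versus reading sequentially from the start of the file.
RECORD_SIZE = 20                                   # assumed record length

def write_records(path, records):
    with open(path, "wb") as f:
        for r in records:
            f.write(r.encode().ljust(RECORD_SIZE))

def read_record_direct(path, k):
    with open(path, "rb") as f:
        f.seek(k * RECORD_SIZE)                    # jump straight to record k
        return f.read(RECORD_SIZE).decode().strip()

def read_record_sequential(path, k):
    with open(path, "rb") as f:
        for _ in range(k):                         # must pass all earlier records
            f.read(RECORD_SIZE)
        return f.read(RECORD_SIZE).decode().strip()

write_records("demo.dat", ["record-%d" % i for i in range(100)])
print(read_record_direct("demo.dat", 57))          # record-57
print(read_record_sequential("demo.dat", 57))      # record-57, but slower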
The traditional file-based environment suffers from several problems:
Data redundancy
It causes data to be duplicated in multiple data files.
Redundancy leads to inconsistencies in data representation (e.g. referring to the same person as "client" in one file and "customer" in another) and in the values of data items across multiple files.
Program-data dependence
There is a tight relationship between data files and specific programs used to maintain files.
Lack of flexibility
There is a need to write a new program to carry out each new task.
Integrity problems
Integrity constraints (e.g. account balance > 0) become part of program code. It’s hard to add new
constraints or change existing ones
The external structure of a file depends on whether it is being created on a FAT or an NTFS partition. The maximum filename length on an NTFS partition is 255 characters, and 11 characters on FAT (8-character name + "." + 3-character extension). NTFS filenames keep their case, whereas FAT filenames have no concept of case (and case is ignored when performing a search under NTFS). Also, the newer VFAT permits filenames of up to 255 characters.
The concept of directories and files is fundamental to the UNIX operating system. On Microsoft
Windows-based operating systems, directories are depicted as folders and moving about is
accomplished by clicking on the different icons. In UNIX, the directories are arranged as a hierarchy
with the root directory being at the top of the tree. The root directory is always depicted as /. Within
the / directory, there are subdirectories (e.g.: etc and sys). Files can be written to any directory
depending on the permissions. Files can be readable, writable and/or executable.
With the advent of Microsoft Windows 7, the concept of file organization and management improved drastically through the use of a powerful tool called libraries. A library is a file organization system that brings together related files and folders stored in different locations on the local computer as well as on the network, such that they can be accessed centrally through a single access point. For instance, various images stored in different folders on the local computer or across a computer network can be accumulated in an image library. An aggregation of similar files can be manipulated, sorted or accessed conveniently as and when required through a single access point on the computer desktop by using a library. This feature is particularly useful for accessing similar or related content, and also for managing projects that use related and common data.
Computer Files
A file is a collection of related data or information that is normally maintained on a secondary storage
device. The purpose of a file is to keep data in a convenient location where they can be located and
retrieved as needed. The term computer file suggests organized retention on the computer that
facilitates rapid, convenient storage and retrieval. As defined by their functions, two general types of
files are used in computer information systems: master files and transaction files.
Streams
File processing consists of creating, storing, and/or retrieving the contents of a file from a
recognizable medium. For example, it is used to save word-processed files to a hard drive, to store a
presentation on floppy disk, or to open a file from a CD-ROM. A stream is the technique or means of
performing file processing. In order to manage files stored in a computer, each file must be able to
provide basic pieces of information about itself. This basic information is specified when the file is
created but can change during the lifetime of a file.
To create a file, a user must first decide where it would be located: this is a requirement. A file can be
located on the root drive. Alternatively, a file can be positioned inside of an existing folder. Based on
security settings, a user may not be able to create a file just anywhere in the (file system of the)
computer. Once the user has decided where the file would reside, there are various means of creating
files that the users are trained to use. When creating a file, the user must give it a name following the
rules of the operating system combined with those of the file system. The most fundamental piece of
information a file must have is a name.
Once the user has created a file, whether the file is empty or not, the operating system assigns basic
pieces of information to it. Once a file is created, it can be opened, updated, modified, renamed, etc.
If you are creating a new file, there are certainly some rules you must observe. The name of a file
follows the directives of the operating system. On MS DOS and Windows 3.X (that is, prior to
Microsoft Windows 9X), the file had to use the 8.3 format. The actual name had to have a maximum
of 8 characters with restrictions on the characters that could be used. The user also had to specify
three characters after a period. The three characters, known as the file extension, were used by the
operating system to classify the file. That was all necessary for those 8-bit and 16-bit operating
systems. Various rules have changed. For example, the names of folders and files on Microsoft
Windows >= 95 can have up to 255 characters. The extension of the file is mostly left to the judgment
of the programmer but the files are still using extensions. Applications can also be configured to save
different types of files; that is, files with different extensions.
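As a brief, hedged illustration of stream-based file processing (the file name notes.txt is arbitrary), a file can be created, written and read back as follows:

# Illustrative stream-based file processing: create, write, then read back.
with open("notes.txt", "w", encoding="utf-8") as stream:   # create/overwrite
    stream.write("First line\n")
    stream.write("Second line\n")

with open("notes.txt", "r", encoding="utf-8") as stream:   # open for reading
    for line in stream:
        print(line.rstrip())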
Master files
Master files contain information to be retained over a relatively long time period. Information in
master files is updated continuously to represent the current status of the business.
An example is an accounts receivable file. This file is maintained by companies that sell to customers
on credit. Each account record will contain such information as account number, customer name and
address, credit limit amount, the current balance owed, and fields indicating the dates and amounts of
purchases during the current reporting period. This file is updated each time the customer makes a
purchase. When a new purchase is made, a new account balance is computed and compared with the
credit limit. If the new balance exceeds the credit limit, an exception report may be issued and the order
may be held up pending management approval.
Transaction Files
Transaction files contain records reflecting current business activities. Records in transaction files are
used to update master files. To continue with the illustration, records containing data on customer
orders are entered into transaction files. These transaction files are then processed to update the
master files. This is known as posting transaction data to master file. For each customer transaction
record, the corresponding master record is accessed and updated to reflect the last transaction and the
new balance. At this point, the master file is said to be current.
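A toy sketch of such a posting run, using an in-memory dictionary to stand in for the accounts receivable master file (the account numbers and field names are invented for the example):

# Illustrative posting run: each transaction updates its master record.
master = {                                    # accounts receivable master file
    "A001": {"name": "Mwangi", "credit_limit": 500, "balance": 120},
    "A002": {"name": "Achieng", "credit_limit": 300, "balance": 40},
}
transactions = [                              # current period's purchases
    {"account": "A001", "amount": 200},
    {"account": "A002", "amount": 290},
]

for t in transactions:
    record = master[t["account"]]
    record["balance"] += t["amount"]          # compute the new balance
    if record["balance"] > record["credit_limit"]:
        print("EXCEPTION: credit limit exceeded for", t["account"])

print(master)                                 # the master file is now current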
Accessing Files
Files can be accessed
Sequentially - start at first record and read one record after another until end of file or desired
record is found
o known as “sequential access”
o only possible access for serial storage devices
Directly - read desired record directly
o known as “random access” or “direct access”
Although a computer file-based processing system has many advantages over a manual record-keeping system, it also has some limitations. The basic disadvantages (or limitations) of a computer file-based processing system are described below.
a. Data Redundancy
Redundancy means having multiple copies of the same data. In computer file-based processing
system, each application program has its own data files. The same data may be duplicated in more
than one file. The duplication of data may create many problems such as:
To update a specific data item or record, the same data must be updated in all files; otherwise different files may hold different information about the same item. In addition, valuable storage space is wasted.
b. Data Inconsistency
Data inconsistency means that different files may contain different information of a particular object
or person. Actually redundancy leads to inconsistency. When the same data is stored in multiple
locations, the inconsistency may occur.
c. Data Isolation
In computer file-based system, data is isolated in separate files. It is difficult to update and to access
particular information from data files.
d. Data Atomicity
Data atomicity means data or record is either entered as a whole or it is not entered at all.
e. Data Dependence
In computer file-based processing systems, the data stored in file depends upon the application
program through which the file was created. It means that the structure of data files is coupled with
application program.
The physical structure of data files and records are defined in the application program code. It is
difficult to change the structure of data files or records. If you want to change the structure of data file
(or format of file), then you have to modify the application program.
f. Program Maintenance
In computer file-based processing system, the structure of data file is coupled with the individual
application programs. Therefore, any modification to a data file such as size of a data field, its type
etc. requires the modification of the application program also. This process of modifying the program
is referred to as program maintenance.
g. Data Sharing
In computer file-based processing systems, each application program uses its own private data files.
The computer file-based processing systems do not provide the facility to share data of a data file
among multiple users on the network.
h. Data Security
The computer file-based processing system do not provide the proper security system against illegal
access of data. Anyone can easily change or delete valuable data stored in the data file. It is the most
complicated problem of file-processing system.
DATABASE SYSTEM
DBMSs are system software that aid in organizing, controlling and using the data needed by
application programs. A DBMS provides the facility to create and maintain a well-organized database.
It also provides functions such as normalization to reduce data redundancy, decrease access time and
establish basic security measures over sensitive data.
Most DBMS have internal security features that interface with the operating system access control
mechanism/package, unless it was implemented in a raw device. A combination of the DBMS security
features and security package functions is often used to cover all required security functions. This dual
security approach however introduces complexity and opportunity for security lapses.
DBMS Architecture
Data elements required to define a database are called metadata. There are three types of metadata:
Data dictionary and directory systems (DD/DS) have been developed to define and store in source and
object forms all data definitions for external schemas, conceptual schemas, the internal schema and all
associated mappings. The data dictionary contains an index and description of all the items stored in
the database. The directory describes the location of the data and access method. Some of the benefits
of using DD/DS include:
Enhancing documentation
Providing common validation criteria
Facilitating programming by reducing the needs for data definition
Standardizing programming methods
Database Structure
The common database models are:
Hierarchical database model
Network database model
Relational database model
Object–oriented model
The hierarchical database model arranges data in a tree of parent and child segments; it is difficult to express relationships when children need to relate to more than one parent. When the data relationships are hierarchical, the database is easy to implement, modify and search.
A hierarchical structure has only one root. Each parent can have numerous children, but a child can have only one parent. Subordinate segments are retrieved through the parent segment. Reverse pointers are not allowed. Pointers can be set only for nodes on a lower level; they cannot be set to a node on a predetermined access path.
The network structure is more flexible, yet more complex, than the hierarchical structure. Data
records are related through logical entities called sets. Within a network, any data element can be
connected to any item. Because networks allow reverse pointers, an item can be an owner and a
member of the same set of data. Members are grouped together to form records, and records are
linked together to form a set. A set can have only one owner record but several member records.
Relational database technology separates data from the application and uses a simplified data model. Based on set theory and relational calculus, a relational database models information in a table structure with columns and rows. Columns, called domains or attributes, correspond to fields. Rows, or tuples, are equivalent to records in a conventional file structure. Relational databases use normalization rules to minimize the amount of information needed in tables to satisfy users' structured and unstructured queries to the database.
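A small illustrative example of these relational ideas using SQLite in Python (an in-memory database; the table and column names are invented): columns correspond to attributes, rows to tuples, and a foreign key relates the two normalized tables.

# Illustrative relational example: columns are attributes, rows are tuples,
# and a foreign key relates two normalized tables.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER REFERENCES customer(id), amount REAL)")
con.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
con.execute("INSERT INTO invoice VALUES (10, 1, 250.0)")

# A structured query joining the two relations:
for row in con.execute("SELECT c.name, i.amount FROM customer c "
                       "JOIN invoice i ON i.customer_id = c.id"):
    print(row)                               # ('Acme Ltd', 250.0)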
Database Administrator
He/she coordinates the activities of the database system. Duties include:
Schema definition
Storage structure and access method definition
Schema and physical organisation modification
Databases need to be controlled and secured for several reasons:
Large volumes of data are concentrated into files that are physically very small.
The processing capabilities of a computer are extensive, and enormous quantities of data are processed without human intervention.
It is easy to lose data in a database through equipment malfunction, corrupt files or loss during copying of files, and data files are susceptible to theft, floods, etc.
Unauthorized people can gain access to data files and read classified data on files.
Information on a computer file can be changed without leaving any physical trace of the change.
Database systems are critical to the competitive advantage of an organization.
2) PC controls
a. Keyboard lock
b. Password
c. Locking disks
d. Training
e. Virus scanning
f. Policies and procedures on software copying
3) Database controls
A number of controls have been embedded into DBMS, these include:
a. Authorization – granting of privileges and ownership, authentication
b. Provision of different views for different categories of users
c. Backup and recovery procedures
d. Checkpoints – the point of synchronization between database and transaction log files. All
buffers are force written to storage.
e. Integrity checks e.g. relationships, lookup tables, validations
f. Encryption – coding of data by special algorithm that renders them unreadable without
decryption
g. Journaling – maintaining log files of all changes made
h. Database repair
4) Development controls
When a database is being developed, there should be controls over the design, development and testing
e.g.
a. Testing
b. Formal technical review
c. Control over changes
d. Controls over file conversion
5) Document standards
They are standards that are required for documentation such as:
a. Requirement specification
b. Program specification
c. Operations manual
d. User manual
6) Legal issues
a. Escrow agreements – legal contracts concerning software
b. Maintenance agreements
c. Copyrights
d. Licenses
e. Privacy
Database recovery is the process of restoring the database to a correct state in the event of a failure.
A distributed database system exists where logically related data is physically distributed between a
number of separate processors linked by a communication network.
A multi-database system is a distributed system designed to integrate data and provide access to a collection of pre-existing local databases managed by heterogeneous database systems such as Oracle.
Outside the world of professional information technology, the term database is sometimes used casually to refer to any collection of data (perhaps a spreadsheet, maybe even a card index). This section is concerned only with databases where the size and usage requirements necessitate the use of a database management system.
The interactions catered for by most existing DBMS fall into four main groups:
i. Data definition. Defining new data structures for a database, removing data structures from
the database, modifying the structure of existing data.
ii. Update. Inserting, modifying, and deleting data.
iii. Retrieval. Obtaining information either for end-user queries and reports or for processing by
applications.
iv. Administration. Registering and monitoring users, enforcing data security, monitoring
performance, maintaining data integrity, dealing with concurrency control, and recovering
information if the system fails.
A DBMS is responsible for maintaining the integrity and security of stored data, and for recovering
information if the system fails.
Both a database and its DBMS conform to the principles of a particular database model. "Database
system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the
DBMS and related software. Database servers are usually multiprocessor computers, with generous
memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to
one or more servers via a high-speed channel, are also used in large volume transaction processing
environments. DBMSs are found at the heart of most database applications. DBMSs may be built
around a custom multitasking kernel with built-in networking support, but modern DBMSs typically
rely on a standard operating system to provide these functions. Since DBMSs comprise a significant
economical market, computer and storage vendors often take into account DBMS requirements in
their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such
as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone),
the query language(s) used to access the database (such as SQL or XQuery), and their internal
engineering, which affects performance, scalability, resilience, and security.
CHARACTERISTICS OF DBMS
A database management system (DBMS) consists of several components. Each component plays a very important role in the database management system environment. The major components of a database management system are:
a) Software
b) Hardware
c) Data
d) Procedures
e) Database Access Language
a) Software
The main component of a DBMS is the software. It is the set of programs used to handle the database
and to control and manage the overall computerized database
i. DBMS software itself, is the most important software component in the overall system
ii. Operating system including network software being used in network, to share the data of
database among multiple users.
iii. Application programs developed in programming languages such as C++, Visual Basic that
are used to access database in database management system. Each program contains
statements that request the DBMS to perform operation on database. The operations may
include retrieving, updating, deleting data etc. The application program may be conventional
or online workstations or terminals.
b) Hardware
Hardware consists of a set of physical electronic devices such as computers (together with associated
I/O devices like disk drives), storage devices, I/O channels, electromechanical devices that make
interface between computers and the real world systems etc, and so on. It is impossible to implement
the DBMS without the hardware devices. In a network, a powerful computer with high data
processing speed and a storage device with large storage capacity is required as database server.
c) Data
Data is the most important component of the DBMS. The main purpose of DBMS is to process the
data. In DBMS, databases are defined, constructed and then data is stored, updated and retrieved to
and from the databases. The database contains both the actual (or operational) data and the metadata
(data about data or description about data).
d) Procedures
Procedures refer to the instructions and rules that help to design the database and to use the DBMS. The users who operate and manage the DBMS require documented procedures on how to use or run the database management system.
e) Database Access Language
The most popular database access language is SQL (Structured Query Language). Relational databases are required to have a database query language.
Users
The users are the people who manage the databases and perform different operations on the databases in the database system. There are three kinds of people who play different roles in a database system:
• Application Programmers
• Database Administrators
• End-Users
Application Programmers
The people who write application programs in programming languages (such as Visual Basic, Java or C++) to interact with databases are called application programmers.
Database Administrators
A person who is responsible for managing the overall database management system is called database
administrator or simply DBA.
End-Users
The end-users are the people who interact with database management system to perform different
operations on database such as retrieving, updating, inserting, deleting data etc.
IMPORTANCE OF DBMS
The database management system has a number of advantages as compared to the traditional computer file-based processing approach. The database administrator must keep these benefits and capabilities in mind when designing databases and monitoring the DBMS.
b. Sharing of Data
In DBMS, data can be shared by authorized users of the organization. The database administrator
manages the data and gives rights to users to access the data. Many users can be authorized to access
the same piece of information simultaneously. The remote users can also share same data. Similarly,
the data of same database can be shared between different application programs.
c. Data Consistency
By controlling the data redundancy, the data consistency is obtained. If a data item appears only once,
any update to its value has to be performed only once and the updated value is immediately available
to all users. If the DBMS has controlled redundancy, the database system enforces consistency.
d. Integration of Data
In a database management system, data in the database is stored in tables. A single database contains multiple tables and relationships can be created between tables (or associated data entities). This makes it easy to retrieve and update data.
e. Integrity Constraints
Integrity constraints or consistency rules can be applied to the database so that only correct data can be entered into the database. The constraints may be applied to a data item within a single record or they may be applied to relationships between records.
f. Data Security
A DBMS provides security features that restrict access to the database to authorized users, so that sensitive data is protected against illegal access.
Forms
A form is a very important object of a DBMS. You can create forms very easily and quickly in a DBMS. Once a form is created, it can be used many times and it can be modified very easily. The created forms are also saved along with the database and behave like a software component. A form provides a very easy (user-friendly) way to enter data into the database, edit data and display data from the database. Non-technical users can also perform various operations on the database through forms without going into the technical details of the database.
g. Report Writers
Most DBMSs provide report writer tools used to create reports. Users can create reports very easily and quickly. Once a report is created, it can be used many times and it can be modified very easily. The created reports are also saved along with the database and behave like a software component.
j. Data Independence
The separation of data structure of database from the application program that uses the data is called
data independence. In DBMS, you can easily change the structure of database without modifying the
application program.
LIMITATIONS OF DBMS
Although there are many advantages of a DBMS, it may also have some minor disadvantages. These are:
e. Database Damage
In most organizations, all data is integrated into a single database. If the database is damaged due to an electrical failure, or the database is corrupted on the storage media, your valuable data may be lost forever.
DATA WAREHOUSING
Different people have different definitions for a data warehouse. The most popular definition came from Bill Inmon, who defined a data warehouse as a subject-oriented, integrated, time-variant and non-volatile collection of data in support of management's decision-making process:
Subject-Oriented: A data warehouse can be used to analyze a particular subject area. For example,
"sales" can be a particular subject.
Integrated: A data warehouse integrates data from multiple data sources. For example, source A and
source B may have different ways of identifying a product, but in a data warehouse, there will be only
a single way of identifying a product.
Time-Variant: Historical data is kept in a data warehouse. For example, one can retrieve data from 3
months, 6 months, 12 months, or even older data from a data warehouse. This contrasts with a
transactions system, where often only the most recent data is kept. For example, a transaction system
may hold the most recent address of a customer, where a data warehouse can hold all addresses
associated with a customer.
Non-volatile: Once data is in the data warehouse, it will not change. So, historical data in a data
warehouse should never be altered.
Ralph Kimball provided a more concise definition: a data warehouse is a copy of transaction data specifically structured for query and analysis. This is a functional view of a data warehouse; Kimball did not address how the data warehouse is built, as Inmon did, but rather focused on the functionality of a data warehouse.
There are several key differences between an OLTP system and a data warehouse.
One major difference between the types of system is that data warehouses are not usually in third
normal form (3NF), a type of data normalization common in OLTP environments. Data warehouses
and OLTP systems have very different requirements. Here are some examples of differences between
typical data warehouses and OLTP systems:
a. Workload
Data warehouses are designed to accommodate ad hoc queries and data analysis. You might not know
the workload of your data warehouse in advance, so a data warehouse should be optimized to perform
well for a wide variety of possible query and analytical operations. OLTP systems support only
predefined operations. Your applications might be specifically tuned or designed to support only these
operations.
b. Data modifications
A data warehouse is updated on a regular basis by the ETL process (run nightly or weekly) using bulk
data modification techniques. The end users of a data warehouse do not directly update the data
warehouse except when using analytical tools, such as data mining, to make predictions with
associated probabilities, assign customers to market segments, and develop customer profiles.
In OLTP systems, end users routinely issue individual data modification statements to the database.
The OLTP database is always up to date, and reflects the current state of each business transaction.
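A toy sketch of the extract-transform-load (ETL) idea described above; the source rows and field names are made up for illustration:

# Illustrative nightly ETL run: extract operational rows, transform them,
# then bulk-load the result into the warehouse table.
operational_rows = [                     # extract: pulled from an OLTP system
    {"cust": "a001", "sale": "199.90", "date": "2015-03-01"},
    {"cust": "a002", "sale": "45.50", "date": "2015-03-01"},
]

def transform(row):                      # transform: clean and standardize
    return {"customer_id": row["cust"].upper(),
            "amount": float(row["sale"]),
            "sale_date": row["date"]}

warehouse_fact_sales = []                # load: append-only, never updated
warehouse_fact_sales.extend(transform(r) for r in operational_rows)
print(warehouse_fact_sales)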
c. Schema design
Data warehouses often use denormalized or partially denormalized schemas (such as a star schema) to
optimize query and analytical performance.
OLTP systems often use fully normalized schemas to optimize update/insert/delete performance, and
to guarantee data consistency.
d. Typical operations
A typical data warehouse query scans thousands or millions of rows. For example, "Find the total
sales for all customers last month."
A typical OLTP operation accesses only a handful of records. For example, "Retrieve the current
order for this customer."
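To make the contrast concrete, here is a small hedged example using an in-memory SQLite table to stand in for both kinds of system (the names are illustrative):

# Illustrative contrast: a warehouse-style aggregate scan versus an
# OLTP-style lookup of one customer's rows, against the same tiny table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (customer TEXT, amount REAL, month TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("A001", 120.0, "2015-02"), ("A002", 80.0, "2015-02"),
                 ("A001", 55.0, "2015-03")])

# Data warehouse style: scan many rows and aggregate.
print(con.execute(
    "SELECT SUM(amount) FROM sales WHERE month = '2015-02'").fetchone())

# OLTP style: fetch the handful of rows for one customer.
print(con.execute(
    "SELECT * FROM sales WHERE customer = 'A001'").fetchall())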
e. Historical data
Data warehouses usually store many months or years of data. This is to support historical analysis and
reporting.
OLTP systems usually store data from only a few weeks or months. The OLTP system stores only
historical data as needed to successfully meet the requirements of the current transaction.
A wide array of statistical functions is available, including descriptive statistics, hypothesis testing, correlation analysis, tests for distribution fit, cross-tabs with chi-square statistics, and analysis of variance (ANOVA); these functions are described in the Oracle Database SQL Language Reference.
Data Mining
Data mining uses large quantities of data to create models. These models can provide insights that are revealing, significant, and valuable. For example, data mining can be used to predict which customers are likely to respond to an offer or to detect fraudulent transactions.
Data mining is not restricted to solving business problems. For example, data mining can be used in the life sciences to discover gene and protein targets and to identify leads for new drugs.
Oracle Data Mining performs data mining inside the Oracle database. Oracle Data Mining does not require data movement between the database and an external mining server, thereby eliminating redundancy, improving the efficiency of data storage and processing, ensuring that up-to-date data is used, and maintaining data security.
i. Classification
Grouping items into discrete classes and predicting which class an item belongs to; classification
algorithms are Decision Tree, Naive Bayes, Generalized Linear Models (Binary Logistic Regression),
and Support Vector Machines.
ii. Regression
Approximating and predicting continuous numerical values; the algorithms for regression are Support
Vector Machines and Generalized Linear Models (Multivariate Linear Regression).
v. Clustering
Finding natural groupings in the data, often used for identifying customer segments; the algorithms for clustering are k-Means and O-Cluster (a minimal k-Means sketch is given after this list).
vi. Associations
Analyzing "market baskets", i.e. items that are likely to be purchased together; the algorithm for associations is Apriori.
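As referenced under Clustering above, here is a minimal plain-Python k-Means sketch on one-dimensional data; it is purely illustrative and is not Oracle Data Mining code:

# Illustrative k-Means: repeatedly assign points to the nearest centroid,
# then move each centroid to the mean of the points assigned to it.
def kmeans_1d(points, k, iterations=10):
    pts = sorted(points)
    # spread the initial centroids across the range of the data (assumes k >= 2)
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in pts:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(kmeans_1d([1, 2, 3, 20, 21, 22, 40, 41], k=3))   # about [2.0, 21.0, 40.5]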
In addition to mining structured data, ODM permits mining of text data (such as police reports,
customer comments, or physician's notes) or spatial data.
Data mining activities such as model building, testing, and scoring are accomplished through a
PL/SQL API, a Java API, and SQL Data Mining functions. The Java API is compliant with the data
mining standard JSR 73. The Java API and the PL/SQL API are fully interoperable.
Oracle Data Mining allows the creation of a supermodel, that is, a model that contains the instructions
for its own data preparation. The embedded data preparation can be implemented automatically and/or
manually. Embedded Data Preparation supports user-specified data transformations; Automatic Data
Preparation supports algorithm-required data preparation, such as binning, normalization, and outlier
treatment.
SQL Data Mining functions support the scoring of classification, regression, clustering, and feature
extraction models. Within the context of standard SQL statements, pre-created models can be applied
to new data and the results returned for further processing, just like any other SQL query.
Predictive Analytics automates the process of data mining. Without user intervention, Predictive
Analytics routines manage data preparation, algorithm selection, model building, and model scoring
so that the user can benefit from data mining without having to be a data mining expert.
• Data mining functions in Oracle SQL for high performance scoring of data
• DBMS_DATA_MINING PL/SQL packages for model creation, description, analysis, and
deployment
• DBMS_DATA_MINING_TRANSFORM PL/SQL package for transformations required for
data mining
• Java interface based on the Java Data Mining standard for model creation, description,
analysis, and deployment
• DBMS_PREDICTIVE_ANALYTICS PL/SQL package supports the following procedures:
a) EXPLAIN - Ranks attributes in order of influence in explaining a target column
b) PREDICT - Predicts the value of a target column
c) PROFILE - Creates segments and rules that identify the records that have the same target
value.
REVISION EXERCISES
1. Discuss the various types of files.
2. What are some of the methods of file organization?
3. Discuss the basis of processing of computer files.
4. Discuss the disadvantages of a computer file processing system.
5. What is a database system?
6. What are some of the characteristics of a database system?
7. What is the importance of database systems?
8. What are the limitations of a database system?
9. What is data warehousing?
10. How is information extracted from a data warehouse?
CHAPTER 5
DATA COMMUNICATION AND COMPUTER
NETWORKS
SYNOPSIS
Introduction………………………………………………………. 131
Principles of Data Communication
and Networks……………………………………………………… 136
Data Transmission Characteristics…………………………......... 139
Types of Networks………………………………………………. 142
Network Topologies……………………………………………… 143
Benefits And Challenges of Networks
in an Organisation…………………………………………………. 164
Limitations Of Networks In An Organisation………….................. 167
Cloud Computing………………………………………………….. 168
INTRODUCTION
Communication is defined as transfer of information, such as thoughts and messages between two
entities. The invention of telegraph, radio, telephone, and television made possible instantaneous
communication over long distances.
In the context of computers and information technology (IT), data are represented by binary digits,
or bits, each of which has only two possible values, 0 and 1. In fact, everything the computer deals with
is made up of 0s and 1s; for this reason it is called discrete or digital. In the digital world, messages,
thoughts, numbers and so on can all be represented as streams of 0s and 1s. Data communications
concerns itself with the transmission (sending and receiving) of information between two locations by
means of electrical signals. The two types of electrical signals are analog and digital. Data communication
is the name given to communication in which information is exchanged in the form of 0s and 1s over
some kind of medium such as wire or wireless. The subject of data communications deals with the
technology, tools, products and equipment that make this happen.
Data
Data, the raw material for information, is defined as groups of non-random symbols that represent
quantities, actions, objects, etc. In information systems, data items are formed from characters that may
be alphabetical, numeric, or special symbols. Data items are organized for processing purposes into data
structures, file structures and databases. Data relevant to information processing and decision-making
may also be in the form of text, images or voice.
Information
Information is data that has been processed into a form that is meaningful to the recipient and is of real
or perceived value in current or prospective actions or decisions. It is important to note that data for one
level of an information system may be information for another. For example, data input to the
management level is information output of a lower level of the system such as operations level.
Information resources are reusable. When retrieved and used, information does not lose value; it may
indeed gain value through the credibility added by use.
The value of information is described most meaningfully in the context of a decision. If there were no
current or future choices or decisions, information would be unnecessary. The value of information in
decision-making is the value of change in decision behaviour caused by the information less the cost
of obtaining the information. Decisions, however, are usually made without the “right” information. One
reason is that much of the information that organizations or individuals prepare has value other than in
decision-making; it may also be prepared for motivation and background building.
Data Processing
Data processing may be defined as those activities, which are concerned with the systematic recording,
arranging, filing, processing and dissemination of facts relating to the physical events occurring in the
business. Data processing can also be described as the activity of manipulating the raw facts to generate
a set or an assembly of meaningful data, what is described as information. Data processing activities
include:
i. data collection,
ii. classification,
iii. sorting, adding,
iv. merging,
v. summarizing,
vi. storing,
vii. retrieval and
viii. dissemination.
The black box model is an extremely simple view of a machine: irrespective of how the machine
operates internally, it takes an input, operates on it and then produces an output.
In dealing with digital computers, this data consists of numerical data, character data and special
(control) characters.
Communication Channels
The transmission media used in communication are called communication channels. Two ways of
connecting microcomputers for communication with each other and with other equipment are through
cable and through the air. There are five kinds of communication channels used for cable or air connections:
- Telephone lines
- Coaxial cable
- Fiber-optic cable
- Microwave
- Satellite
Coaxial cable
Coaxial cable is a high-frequency transmission cable that replaces the multiple wires of telephone lines
with a single solid copper core. It has over 80 times the transmission capacity of twisted pair. It is often
used to link parts of a computer system in one building.
Fibre-optic cable
Fibre-optic cable transmits data as pulses of light through tubes of glass. It has over 26,000 times the
transmission capacity of twisted pair. A fibre-optic tube can be half the diameter of a human hair. Fibre-
optic cables are immune to electronic interference and are more secure and reliable. Fibre-optic cable is
rapidly replacing twisted-pair telephone lines.
Microwave
Microwaves transmit data as high-frequency radio waves that travel in straight lines through air.
Microwaves cannot bend with the curvature of the earth. They can only be transmitted over short
distances. Microwaves are a good medium for sending data between buildings in a city or on a large
college campus. Microwave transmission over longer distances is relayed by means of ‘dishes’ or
antennas installed on towers, high buildings or mountaintops.
Satellite
Satellites are used to amplify and relay microwave signals from one transmitter on the ground to
another. They orbit about 22,000 miles above the earth. They rotate at a precise point and speed and
can be used to send large volumes of data. Bad weather can sometimes interrupt the flow of data from
a satellite transmission. INTELSAT (INternational TELecommunication SATellite consortium),
owned by 114 governments forming a worldwide communications system, offers many satellites that
can be used as microwave relay stations.
Modem
A modem is a hardware device that converts computer signals (digital signals) to telephone signals
(analog signals) and telephone signals (analog signals) back to computer signals (digital signals).
The process of converting digital signals to analog is called modulation while the process of converting
analog signals to digital is called demodulation.
A typical arrangement links one computer through a modem, across a telephone line, to a second modem
attached to the receiving computer.
The speed with which modems transmit data varies. Communications speed is typically measured in
bits per second (bps). The most popular speeds for conventional modems are 33.6 kbps (33,600 bps)
and 56 kbps (56,000 bps). The higher the speed, the faster you can send and receive data.
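As a rough illustration of what these speeds mean in practice, the short Python calculation below estimates how long a file transfer would take at each speed; the 1 MB file size is an assumption chosen purely for the example.

    # Transfer time = file size in bits / line speed in bits per second.
    file_size_bits = 1_000_000 * 8            # a hypothetical 1 MB file
    for speed_bps in (33_600, 56_000):        # the conventional modem speeds quoted above
        seconds = file_size_bits / speed_bps
        print(f"{speed_bps} bps -> about {seconds:.0f} seconds")
    # 33600 bps -> about 238 seconds; 56000 bps -> about 143 seconds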
Types of Modems
There are 3 types of modems as discussed below.
a) External modem
An external modem stands apart from the computer. It is connected by a cable to the computer’s serial
port. Another cable is used to connect the modem to the telephone wall jack.
b) Internal modem
An internal modem is a plug-in circuit board inside the system unit. A telephone cable connects this
type of modem to the telephone wall jack.
c) Wireless modem
A wireless modem is similar to an external modem. It connects to the computer’s serial port, but does
not connect to telephone lines. It uses new technology that receives data through the air.
An effective data communication system should satisfy the following basic requirements:
a. Delivery
The system should transmit the message to the correct intended destination. The destination can be
another user or another computer.
b. Reliability
The system should deliver the data to the destination faithfully. Any unwanted signals (noise) added
to the original data may play havoc with it.
c. Speed
The system should transmit the data as fast as possible within the technological constraints. In the case of
audio and video data, they must be received in the same order as they are produced, without any
significant delay.
Data Transmission
Technical matters that affect data transmission include:
Bandwidth
Type of transmission
Direction of data flow
Mode of transmitting data
Protocols
Bandwidth
Bandwidth is the bits-per-second (bps) transmission capability of a communication channel. There are
three types of bandwidth:
Voice band – bandwidth of standard telephone lines (9.6 to 56 kbps)
Medium band – bandwidth of special leased lines (56 to 264,000 kbps)
Broadband – bandwidth of microwave, satellite, coaxial cable and fiber optic (56 to 30,000,000
kbps)
Types of transmission
There are two types of transmission:
a) serial data transmission – bits are sent one after another along a single line
b) parallel data transmission – several bits are sent simultaneously along separate lines
Modes of transmitting data
a) Asynchronous transmission
Data is sent and received one byte at a time. It is used with microcomputers and terminals with slow speeds.
b) Synchronous transmission
Data is sent and received several bytes (blocks) at a time. It requires a synchronized clock to enable
transmission at timed intervals.
Protocols
Protocols are sets of communication rules for exchange of information. Protocols define speeds and
modes for connecting one computer with another computer. Network protocols can become very
complex and therefore must adhere to certain standards. The first set of protocol standards was IBM
Systems Network Architecture (SNA), which only works for IBM’s own equipment.
The Open Systems Interconnection (OSI) is a set of communication protocols defined by International
Standards Organization. The OSI is used to identify functions provided by any network and separates
each network’s functions into seven ‘layers’ of communication rules.
Data has to arrive intact in order to be used, so techniques are needed to detect and correct transmission errors.
Cyclic Redundancy Check (CRC) – the CRC or frame check sequence (FCS) is used for
situations where bursts of errors may be present (parity and block sum checks are not effective
at detecting bursts of errors). A single set of check digits is generated for each frame
transmitted, based on the contents of the frame and appended to the tail of the frame.
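The short Python sketch below illustrates the idea of appending a check value to the tail of a frame and re-checking it on receipt. It uses the standard library's CRC-32 routine purely as an example; the text does not say which CRC polynomial a particular protocol uses, so treat that choice as an assumption.

    import zlib

    def make_frame(payload: bytes) -> bytes:
        # Generate a 32-bit check value over the frame contents and append it to the tail.
        fcs = zlib.crc32(payload)
        return payload + fcs.to_bytes(4, "big")

    def check_frame(frame: bytes) -> bool:
        # Recompute the CRC over the received contents and compare with the appended check digits.
        payload, received_fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(payload) == received_fcs

    frame = make_frame(b"hello, receiver")
    print(check_frame(frame))                      # True  - intact frame passes
    print(check_frame(b"jello" + frame[5:]))       # False - a burst of errors is detected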
Recovery
When errors are too serious to ignore, the system needs a way to recover the data, for example by having
the sender retransmit it.
Security
What are you concerned about if you want to send an important message?
Did the receiver get it?
o Denial of service
Is it the right receiver?
o Receiver spoofing
Is it the right message?
o Message corruption
Did it come from the right sender?
o Sender spoofing
Network management
This involves configuration, provisioning, monitoring and problem-solving.
While analog transmission is the transfer of a continuously varying analog signal, digital
communications is the transfer of discrete messages. The messages are either represented by a
sequence of pulses by means of a line code (baseband transmission), or by a limited set of
continuously varying wave forms (passband transmission), using a digital modulation method. The
passband modulation and corresponding demodulation (also known as detection) is carried out by
modem equipment. According to the most common definition of digital signal, both baseband and
passband signals representing bit-streams are considered as digital transmission, while an alternative
definition only considers the baseband signal as digital, and passband transmission of digital data as a
form of digital-to-analog conversion.
Data transmitted may be digital messages originating from a data source, for example a computer or a
keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-
stream for example using pulse-code modulation (PCM) or more advanced source coding (analog-to-
digital conversion and data compression) schemes. This source coding and decoding is carried out by
codec equipment.
1. Delivery
The system must deliver data to the correct destination. Data must be received by the intended device
or user and only by that device or user.
2. Accuracy:
The system must deliver the data accurately. Data that have been altered in transmission and left
uncorrected are unusable.
3. Timeliness:
The system must deliver data in a timely manner. Data delivered late are useless. In the case of video
and audio, timely delivery means delivering data as they are produced, in the same order that they are
produced, and without significant delay. This kind of delivery is called real-time transmission.
4. Jitter
Jitter refers to the variation in the packet arrival time. It is the uneven delay in the delivery of audio or
video packets. For example, let us assume that video packets are sent every 30ms. If some of the
packets arrive with 30ms delay and others with 40ms delay, an uneven quality in the video is the
result.
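The arithmetic behind jitter is simple, as the short Python sketch below shows for a hypothetical stream of video packets sent every 30 ms; the arrival times are invented for the example.

    # Packet arrival times in milliseconds for packets sent every 30 ms.
    arrival_ms = [0, 30, 60, 100, 130, 170]

    # Inter-arrival gaps; for perfectly even delivery every gap would be 30 ms.
    gaps = [later - earlier for earlier, later in zip(arrival_ms, arrival_ms[1:])]
    jitter = max(gaps) - min(gaps)

    print("gaps:", gaps)       # [30, 30, 40, 30, 40]
    print("jitter:", jitter)   # 10 ms of variation -> uneven audio or video quality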
NETWORKS
A network is a set of devices (often referred to as nodes) connected by communication links. A node
can be a computer, printer, or any other device capable of sending and/or receiving data generated by
other nodes on the network.
Computer Networks
A computer network is a communications system connecting two or more computers that work to
exchange information and share resources (hardware, software and data). A network may consist of
microcomputers, or it may integrate microcomputers or other devices with larger computers. Networks
may be controlled by all nodes working together equally or by specialized nodes coordinating and
supplying all resources. Networks may be simple or complex, self-contained or dispersed over a large
geographical area.
Network architecture is a description of how a network is set up (configured) and what strategies are
used in the design. The interconnection of PCs over a network is becoming more important especially
as more hardware is accessed remotely and PCs intercommunicate with each other.
Distributed Processing
Most networks use distributed processing, in which a task is divided among multiple computers.
Instead of one single large machine being responsible for all aspects of a process, separate computers
(usually personal computers or workstations) each handle a subset.
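As a loose single-machine analogy (an assumption made only for illustration, since real distributed processing spreads the work across networked computers), the Python sketch below divides a task among several worker processes so that no single processor handles everything.

    from multiprocessing import Pool

    def process_subset(invoices):
        # Each worker handles only its own subset of the overall task.
        return sum(invoices)

    if __name__ == "__main__":
        all_invoices = list(range(1, 1001))                 # hypothetical workload
        subsets = [all_invoices[i::4] for i in range(4)]    # divide the task four ways
        with Pool(processes=4) as pool:
            partial_totals = pool.map(process_subset, subsets)
        print(sum(partial_totals))    # 500500 - same answer, work shared among workers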
Network Criteria
A network must be able to meet a certain number of criteria. The most important of these are
performance, reliability, and security.
Performance
Performance can be measured in many ways, including transmit time and response time.
Transmit time is the amount of time required for a message to travel from one device to another.
Response time is the elapsed time between an inquiry and a response.
The performance of a network depends on a number of factors, including the number of users, the
type of transmission medium, the capabilities of the connected hardware, and the efficiency of the
software.
Performance is often evaluated by two networking metrics: throughput and delay. We often need
more throughput and less delay.
Reliability
In addition to accuracy of delivery, network reliability is measured by the frequency of failure and the
time it takes a link to recover from a failure.
Security
Network security issues include protecting data from unauthorized access, protecting data from
damage, and implementing policies and procedures for recovery from breaches and
data losses.
Node – any device connected to a network such as a computer, printer, or data storage device.
Client – a node that requests and uses resources available from other nodes. Typically a
microcomputer.
Server – a node that shares resources with other nodes. May be called a file server, printer server,
communication server, web server, or database server.
Network Operating System (NOS) – the operating system of the network that controls and
coordinates the activities between computers on a network, such as electronic communication and
sharing of information and resources.
Distributed processing – computing power is located and shared at different locations. Common
in decentralized organizations (each office has its own computer system but is networked to the
main computer).
Host computer – a large centralized computer, usually a minicomputer or mainframe.
TYPES OF NETWORKS
Different communication channels allow different types of networks to be formed. Telephone lines
may connect communications equipment within the same building. Coaxial cable or fibre-optic cable
can be installed on building walls to form communication networks. You can also create your own
network in your home or apartment. Communication networks also differ in geographical size.
Three important networks according to geographical size are
a) LANs,
b) MANs
c) WANs.
a) Local Area Networks (LAN)
A LAN is a computer network in which computers and peripheral devices are in close physical
proximity. It is a collection of computers within a single office or building that connect to a common
electronic connection – commonly known as a network backbone. This type of network typically uses
microcomputers in a bus organization linked with telephone, coaxial, or fibre-optic cable. A LAN
allows all users to share hardware, software and data on the network. Minicomputers, mainframes or
optical disk storage devices can be added to the network. A network bridge device may be used to link
a LAN to other networks with the same configuration. A network gateway device may be used to link
a LAN to other networks, even if their configurations are different.
b) Metropolitan Area Networks (MAN)
A MAN is a computer network that may be citywide. This type of network may be used as a link
between office buildings in a city. The use of cellular phone systems expands the flexibility of a MAN
network by linking car phones and portable phones to the network.
c) Wide Area Networks (WAN)
A WAN is a computer network that may be countrywide or worldwide. It normally connects networks
over a large physical area, such as in different buildings, towns or even countries. A modem connects
a LAN to a WAN when the WAN connection is an analogue line.
For a digital connection a gateway connects one type of LAN to another LAN, or WAN, and a bridge
connects a LAN to similar types of LAN. This type of network typically uses microwave relays and
satellites to reach users over long distances. The widest of all WANs is the Internet, which spans the
entire globe.
WAN Technologies
How you get from one computer to the other across the Internet.
Computer-computer communication
(iii) Frame relay
Like packet switching
Low level error correction removed to yield higher data rates
(iv) Cell relay – ATM (Asynchronous Transfer Mode)
Frame relay with uniformly sized packets (cells)
Dedicated circuit paths
(v) ISDN (Integrated Services Digital Network)
Transmits voice and data traffic
Specialized circuit switching
Uses frame relay (narrowband) and ATM (broadband)
NETWORK TOPOLOGIES
Topology refers to the way in which the network of computers is connected. Each topology is suited
to specific tasks and has its own advantages and disadvantages.
Star network
In a star network there are a number of small computers or peripheral devices linked to a central unit
called a main hub. The central unit may be a host computer or a file server. All communications pass
through the central unit and control is maintained by polling. This type of network can be used to
provide a time-sharing system and is common for linking microcomputers to a mainframe.
Advantages:
It is easy to add new and remove nodes
A node failure does not bring down the entire network
It is easier to diagnose network problems through a central hub
Disadvantages:
If the central hub fails the whole network ceases to function
It costs more to cable a star configuration than other topologies (more cable is required than
for a bus or ring configuration).
Bus network
In a bus network each device handles its communications control. There is no host computer; however
there may be a file server. All communications travel along a common connecting cable called a bus.
It is a common arrangement for sharing data stored on different microcomputers. It is not as efficient
as a star network for sharing common resources, but it is less expensive. The distinguishing feature is that
all devices (nodes) are linked along one communication line - with endpoints - called the bus or
backbone.
Advantages:
Reliable in very small networks as well as easy to use and understand
Requires the least amount of cable to connect the computers together and therefore is less
expensive than other cabling arrangements.
Is easy to extend. Two cables can be easily joined with a connector, making a longer cable for
more computers to join the network
A repeater can also be used to extend a bus configuration
Disadvantages:
Heavy network traffic can also slow a bus considerably. Because any computer can transmit
at any time, bus networks do not coordinate when information is sent. Computers interrupting
each other can use a lot of bandwidth
Each connection between two cables weakens the electrical signal
The bus configuration can be difficult to troubleshoot. A cable break or malfunctioning
computer can be difficult to find and can cause the whole network to stop functioning.
Ring network
In a ring network each device is connected to two other devices, forming a ring. There is no central file
server or computer. Messages are passed around the ring until they reach their destination. Often used
to link mainframes, especially over wide geographical areas. It is useful in a decentralized organization
called a distributed data processing system.
Advantages:
Ring networks offer high performance for a small number of workstations or for larger
networks where each station has a similar work load
Ring networks can span longer distances than other types of networks
Ring networks are easily extendable
Disadvantages
Relatively expensive and difficult to install
Failure of one component on the network can affect the whole network
It is difficult to troubleshoot a ring network
Adding or removing computers can disrupt the network
Advantages:
Improves sharing of data and programs across the network
Offers reliable communication between nodes
Disadvantages:
Difficult and costly to install and maintain
Difficult to troubleshoot network problems
Mesh network
In a mesh network each node is connected to more than one other node, providing multiple paths between devices.
Advantages:
Yields the greatest amount of redundancy (multiple connections between the same nodes), so that in the
event that one of the nodes fails, network traffic can be redirected to another node.
Network problems are easier to diagnose
Disadvantages
The cost of installation and maintenance is high (more cable is required than any other
configuration)
Client/Server Environment
Use of client/server technology is one of the most popular trends in application development. More
and more business applications have embraced the advantages of the client/server architecture by
distributing the work among servers and by performing as much computational work as possible on
the client workstation. This allows users to manipulate and change the data that they need to change
without controlling resources on the main processing unit.
In client/server systems, applications no longer are limited to running on one machine. The applications
are split so that processing may take place on different machines. The processing of data takes place
on the server and the desktop computer (client). The application is divided into pieces or tasks so
processing can be done more efficiently.
A client/server network environment is one in which one computer acts as the server and provides data
distribution and security functions to other computers that are independently running various
applications. An example of the simplest client/server model is a LAN whereby a set of computers is
linked to allow individuals to share data. LANs (like other client/server environments) allow users to
maintain individual control over how information is processed.
Client/server computing differs from mainframe or distributed system processing in that each
processing component is mutually dependent. The ‘client’ is a single PC or workstation associated with
software that provides computer presentation services as an interface to server computing resources.
Presentation is usually provided by visually enhanced processing software known as a Graphical User
Interface (GUI). The ‘server’ is one or more multi-user computer(s) (these may be mainframes,
minicomputers or PCs). Server functions include any centrally supported role, such as file sharing,
printer sharing, database access and management, communication services, facsimile services,
application development and others. Multiple functions may be supported by a single server.
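A minimal sketch of the client/server idea using Python's standard socket library: one process acts as the server and answers requests, the other acts as the client. The loopback address and port number are assumptions for a local demonstration; in a real network the client and server would run on separate machines.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 9090            # hypothetical address and port for a local demo

    def run_server() -> None:
        # The server node: waits for a client, reads its request and sends back a reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"SERVER REPLY: " + request)

    def run_client() -> str:
        # The client node: requests a service from the server and uses the result.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"please share this file")
            return cli.recv(1024).decode()

    threading.Thread(target=run_server, daemon=True).start()
    time.sleep(0.5)                           # give the server a moment to start listening
    print(run_client())                       # SERVER REPLY: please share this file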
Network Protocols
Protocols are the set of conventions or rules for interaction at all levels of data transfer. They have
three main components:
Syntax – data format and signal types
Semantics – control information and error handling
Timing – data flow rate and sequencing
Numerous protocols are involved in transferring a single file even when two computers are directly
connected. The large task of transferring a piece of data is broken down into distinct sub tasks. There
are multiple ways to accomplish each task (individual protocols). The tasks are well described so that
they can be used interchangeably without affecting the overall system.
Application Layer
o Takes care of the needs of the specific application
o HTTP: send request, get a batch of responses from a bunch of different servers
o Telnet: dedicated interaction with another machine
Transport Layer
o Makes sure data is exchanged reliably between the two end systems
o Needs to know how to identify the remote system and package the data properly
The TCP/IP protocol suite organises these tasks into the following layers:
Application Layer
o User application protocols
Transport Layer
o Transmission control protocol
o Data reliability and sequencing
Internet Layer
o Internet Protocol
o Addressing, routing data across Internet
Network Access Layer
o Data exchange between host and local network
o Packets, flow control
o Network dependent (circuit switching, Ethernet etc)
Physical Layer
o Physical interface, signal type, data rate
Data is passed from top layer of the transmitter to the bottom, then up from the bottom layer to the top
on the recipient. However, each layer on the transmitter communicates directly with the recipient’s
corresponding layer. This creates a virtual data flow between layers. The data sent can be termed as a
data packet or data frame.
Diagram: on the transmitter, data passes down through the Presentation, Session, Transport, Network,
Data Link and Physical layers, and up through the corresponding layers on the recipient.
1. Application Layer
This layer provides network services to application programs such as file transfer and electronic mail.
It offers user level interaction with network programs and provides user application, process and
management functions.
2. Presentation Layer
The presentation layer uses a set of translations that allow the data to be interpreted properly. It may
have to carry out translations between two systems if they use different presentation standards such as
different character sets or different character codes. It can also add data encryption for security
purposes. It basically performs data interpretation, format and control transformation. It separates what
is communicated from data representation.
3. Session Layer
The session layer provides an open communications path to the other system. It involves setting up,
maintaining and closing down a session (a communication time span). The communications channel
and the internetworking should be transparent to the session layer. It manages (administration and
control) sessions between cooperating applications.
4. Transport Layer
If data packets need to go outside a network, then the transport layer routes them through the
interconnected networks. Its task may involve splitting up data for transmission and reassembling it
after arrival. It performs the tasks of end-to-end packetization, error control, flow control, and
synchronization. It offers network transparent data transfer and transmission control.
5. Network Layer
The network layer routes data frames through a network. It performs the tasks of connection
management, routing, switching and flow control over a network.
6. Data Link Layer
The data link layer packages the bits to be transmitted into frames, detects errors on the physical link,
and controls access to the shared transmission medium.
7. Physical Layer
The physical layer defines the electrical characteristics of the communications channel and the
transmitted signals. This includes voltage levels, connector types, cabling, data rate etc. It provides the
physical interface.
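To illustrate the layering idea, the toy Python sketch below wraps a piece of data with a "header" for each layer on the transmitter and strips the headers off in reverse order on the recipient. The bracketed labels are invented for the example and do not represent any real protocol's header format.

    LAYERS = ["Application", "Presentation", "Session", "Transport",
              "Network", "Data Link", "Physical"]

    def transmit(data: str) -> str:
        frame = data
        for layer in LAYERS:                  # down the transmitter's stack
            frame = f"[{layer}]{frame}"
        return frame                          # what actually travels on the channel

    def receive(frame: str) -> str:
        for layer in reversed(LAYERS):        # up the recipient's stack, outermost header first
            prefix = f"[{layer}]"
            assert frame.startswith(prefix), f"{layer} header missing"
            frame = frame[len(prefix):]
        return frame

    wire = transmit("Hello")
    print(wire)              # [Physical][Data Link]...[Application]Hello
    print(receive(wire))     # Hello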
a) Twisted-pair
Twisted-pair and coaxial cables transmit electric signals, whereas fibre-optic cables transmit light
pulses. Twisted-pair cables are not shielded and thus interfere with nearby cables. Public telephone
lines generally use twisted-pair cables. In LANs they are generally used up to bit rates of 10 Mbps and
with maximum lengths of 100m.
b) Coaxial cable
Coaxial cable has a grounded metal sheath around the signal conductor. This limits the amount of
interference between cables and thus allows higher data rates. Typically they are used at bit rates of
100 Mbps for maximum lengths of 1 km.
c) Fibre-optic
The highest specification of the three cables is fibre-optic. This type of cable allows extremely high bit
rates over long distances. Fibre-optic cables do not interfere with nearby cables and give greater
security, more protection from electrical damage by external equipment and greater resistance to harsh
environments; they are also safer in hazardous environments.
Internetworking Connections
Most modern networks have a backbone, which is a common link to all the networks within an
organization. This backbone allows users on different network segments to communicate and also
allows data into and out of the local network.
Networks are partitioned from other networks using a bridge, a gateway or a router. A bridge links
two networks of the same type. A gateway connects two networks of dissimilar type. Routers operate
rather like gateways and can either connect two similar networks or two dissimilar networks. The key
operation of a gateway, bridge or router is that it only allows data traffic through itself when the data
is intended for another network which is outside the connected network. This filters traffic and stops
traffic not intended for the network from clogging up the backbone. Modern bridges, gateways and
routers are intelligent and can determine the network topology. A spanning-tree bridge allows
multiple network segments to be interconnected. If more than one path exists between individual
segments then the bridge finds alternative routes. This is useful in routing frames away from heavy
traffic routes or around a faulty route.
A repeater is used to increase the maximum interconnection length since for a given cable
specification and bit rate, each has a maximum length of cable.
Network Standards
Standards are good because they allow many different, interoperable implementations of a technology.
However, they are slow to develop, and multiple standards organizations sometimes develop different
standards for the same functions.
Fax machines
Fax machines convert images to signals that can be sent over a telephone line to a receiving machine.
They are extremely popular in offices. They can scan the image of a document and print the image on
paper. Microcomputers use fax/modem circuit boards to send and receive fax messages.
Internet access – you can get access to the World Wide Web.
Internet
The Internet is a giant worldwide network. The Internet started in 1969 when the United States
government funded a major research project on computer networking called ARPANET (Advanced
Research Project Agency NETwork). When on the Internet you move through cyberspace.
Common activities on the Internet include the following:
Shopping
- Shopping on the Internet is called e-commerce
- You can window shop at cyber malls called web storefronts
- You can purchase goods using checks, credit cards or electronic cash called electronic
payment
Researching
- You can do research on the Internet by visiting virtual libraries and browse through
stacks of books
- You can read selected items at the virtual libraries and even check out books
Entertainment
- There are many entertainment sites on the Internet such as live concerts, movie
previews and book clubs
- You can also participate in interactive live games on the Internet
You get connected to the Internet through a computer. Connection to the Internet is referred to as access
to the Internet. Using a provider is one of the most common ways users can access the Internet. A
provider is also called a host computer and is already connected to the Internet. A provider provides a
path or connection for individuals to access the Internet.
Commonly used providers include:
(i) Colleges and universities – colleges and universities provide free access to the Internet
through their Local Area Networks,
(ii) Internet Service Providers (ISP) – ISPs offer access to the Internet for a fee. They are more
expensive than online service providers.
(iii) Online Service Providers – provide access to the Internet and a variety of other services for
a fee. They are the most widely used source for Internet access and less expensive than ISP.
Connections
There are three types of connections to the Internet through a provider:
o Direct or dedicated
o SLIP and PPP
o Terminal connection
Direct or dedicated
This is the most efficient access method to all functions on the Internet. However it is expensive and
rarely used by individuals. It is used by many organizations such as colleges, universities, service
providers and corporations.
SLIP and PPP
This type of connection is widely used by end users to connect to the Internet. It is slower and less
convenient than direct connection. However it provides a high level of service at a lower cost than
direct connection. It uses a high-speed modem and standard telephone line to connect to a provider that
has a direct connection to the Internet. It requires special software protocol: SLIP (Serial Line Internet
Protocol) or PPP (Point-to-Point Protocol). With this type of connection your computer becomes part
of a client/server network. It requires special client software to communicate with server software
running on the provider’s computer and other Internet computers.
Terminal connection
This type of connection also uses a high-speed modem and standard telephone line. Your computer
becomes part of a terminal network with a terminal connection. With this connection, your computer’s
operations are very limited because it only displays communication that occurs between provider and
other computers on the Internet. It is less expensive than SLIP or PPP but not as fast or convenient.
Internet protocols
The standard protocol for the Internet is TCP/IP. TCP/IP (Transmission Control Protocol/Internet
Protocol) is the set of rules for communicating over the Internet. Protocols control how the messages are
broken down, sent and reassembled. With TCP/IP, a message is broken down into small parts called
packets before it is sent over the Internet. Each packet is sent separately, possibly travelling through
different routes to a common destination. The packets are reassembled into correct order at the
receiving computer.
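A toy Python sketch of that idea follows: a message is broken into small numbered packets, the packets arrive out of order, and the sequence numbers let the receiving computer restore the original message. The packet size and message are assumptions for the example; real TCP/IP packets carry much richer header information.

    import random

    MESSAGE = b"With TCP/IP a message is broken into small parts called packets."
    PACKET_SIZE = 16                          # an arbitrary size chosen for the example

    # Break the message into numbered packets.
    packets = [(seq, MESSAGE[i:i + PACKET_SIZE])
               for seq, i in enumerate(range(0, len(MESSAGE), PACKET_SIZE))]

    # Packets may travel by different routes and arrive in any order...
    random.shuffle(packets)

    # ...but the sequence numbers let the receiver reassemble them correctly.
    reassembled = b"".join(chunk for _seq, chunk in sorted(packets))
    print(reassembled == MESSAGE)             # True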
Internet services
The four commonly used services on the Internet are:
Telnet
FTP
Gopher
The Web
Telnet
Telnet allows you to connect to another computer (host) on the Internet
With Telnet you can log on to the computer as if you were a terminal connected to it
There are hundreds of computers on the Internet you can connect to
Some computers allow free access; some charge a fee for their use
Search tools
Search tools developed for the Internet help users locate precise information. To access a search tool,
you must visit a web site that has a search tool available. There are two basic types of search tools
available:
- Indexes
- Search engines
Indexes
Indexes are also known as web directories
They are organized by major categories e.g. Health, entertainment, education etc
Each category is further organized into sub categories
Users can continue to search subcategories until a list of relevant documents appears
The best known search index is Yahoo
Search engines
Search engines are also known as web crawlers or web spiders
They are organized like a database
Key words and phrases can be used to search through a database
Databases are maintained by special programs called agents, spiders or bots
Widely used search engines are Google, HotBot and AltaVista.
Web utilities
Web utilities are programs that work with a browser to increase your speed, productivity and
capabilities. These utilities can be included in a browser. Some utilities are free on the Internet,
while others are available for a nominal charge. There are two categories of web utilities:
Plug-ins
Helper applications
Plug-ins
A plug-in is a program that automatically loads and operates as part of your browser.
Many websites require plug-ins for users to fully experience web page contents
Some widely used plug-ins are:
a) Shockwave from macromedia – used for web-based games, live concerts and
dynamic animations
b) QuickTime from Apple – used to display video and play audio
c) Live-3D from Netscape – used to display three-dimensional graphics and virtual
reality
Helper Applications
Helper applications are also known as add-ons. They are independent programs
that can be executed or launched from your browser. The four most common types of helper
applications are:
Off-line browsers – also known as web-downloading utilities and pull products.
An off-line browser is a program that automatically connects you to selected websites, downloads HTML
documents and saves them to your hard disk. The documents can be read later without being connected to the
Internet.
Information pushers – also known as web broadcasters or push products.
They automatically gather information on topic areas called channels. The topics are then sent to your
hard disk. The information can be read later without being connected to the Internet.
Metasearch utilities – also known as metasearch programs.
They automatically submit search requests to several indices and search engines. They receive the
results, sort them, eliminate duplicates and create an index.
Filters
Filters are programs that allow parents or organizations to block out selected sites e.g. adult sites. They
can monitor the usage and generate reports detailing time spent on activities.
Discussion Groups
There are several types of discussion groups on the Internet:
Mailing lists
Newsgroups
Chat groups
Mailing lists
In this type of discussion group, members communicate by sending messages to a list address. To
join, you send your e-mail request to the mailing list subscription address. To cancel, send your email
request to unsubscribe to the subscription address.
Newsgroups
Newsgroups are the most popular type of discussion group. They use a special network of computers called
UseNet. Each UseNet computer maintains the newsgroup listing. There are over 10,000 different
newsgroups organized into major topic areas. Newsgroup organization hierarchy system is similar to
the domain name system. Contributions to a particular newsgroup are sent to one of the UseNet
computers. UseNet computers save messages and periodically share them with other UseNet
computers. Interested individuals can read contributions to a newsgroup.
Chat groups
Chat groups are becoming a very popular type of discussion group. They allow direct ‘live’
communication (real time communication). To participate in a chat group, you need to join by selecting
a channel or a topic. You communicate live with others by typing words on your computer. Other
members of your channel immediately see the words on their computers and they can respond. The
most popular chat service is called Internet Relay Chat (IRC), which requires special chat client
software.
Instant messaging
Instant messaging is a tool to communicate and collaborate with others. It allows one or more people
to communicate with direct ‘live’ communication. It is similar to chat groups, but it provides greater
control and flexibility. To use instant messaging, you specify a list of friends (buddies) and register
with an instant messaging server e.g. Yahoo Messenger. Whenever you connect to the Internet, special
software will notify your messaging server that you are online. It will notify you if any of your friends
are online and will also notify your buddies that you are online.
E-mail addresses
The most important element of an e-mail message is the address of the person who is to receive the
letter. The Internet uses an addressing method known as the Domain Name System (DNS). The system
divides an address into three parts:
(i) User name – identifies a unique person or computer at the listed domain
(ii) Domain name – refers to a particular organization
(iii) Domain code – identifies the geographical or organizational area
Almost all ISPs and online service providers offer e-mail service to their customers.
The main standards that relate to the protocols of email transmission and reception are:
Simple Mail Transfer Protocol (SMTP) – which is used with the TCP/IP suite. It has
traditionally been limited to the text-based electronic messages.
Multipurpose Internet Mail Extensions (MIME) – which allows the transmission and reception of
mail that contains various types of data, such as speech, images and motion video. It is a newer
standard than SMTP and uses much of its basic protocol.
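As a minimal sketch of sending a text-based message over SMTP using Python's standard library: the addresses and the mail server name below are hypothetical, and the sketch assumes an SMTP server is reachable on the default port (the email package takes care of the MIME structure if richer content is attached).

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "student@example.co.ke"          # hypothetical sender
    msg["To"] = "tutor@example.co.ke"              # hypothetical recipient
    msg["Subject"] = "Revision question"
    msg.set_content("Please explain the difference between SMTP and MIME.")

    with smtplib.SMTP("mail.example.co.ke") as server:   # assumed reachable mail server
        server.send_message(msg)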
Connecting to the global Internet also exposes an organization to risks, such as:
The possible use of the Internet for non-useful applications (by employees).
The possible connection of non-friendly users from the global connection into the organization’s
local network.
For these reasons, many organizations have shied away from connection to the global network and
have set-up intranets and extranets.
Firewalls are often used to protect organizational networks from external threats.
Intranets
Intranets are in-house, tailor-made networks for use within the organization and provide limited access
(if any) to outside services and also limit the external traffic (if any) into the intranet. An intranet might
have access to the Internet but there will be no access from the Internet to the organization’s intranet.
Organizations which have a requirement for sharing and distributing electronic information normally
have three choices:
- Use a proprietary groupware package such as Lotus Notes
- Set up an Intranet
- Set up a connection to the Internet
Groupware packages normally replicate data locally on a computer whereas Intranets centralize their
information on central servers which are then accessed by a single browser package. The stored data
is normally open and can be viewed by any compatible WWW browser. Intranet browsers have the
great advantage over groupware packages in that they are available for a variety of clients, such as
PCs, Macs, UNIX workstations and so on. A client browser also provides a single GUI interface, which
offers easy integration with other applications such as electronic mail, images, audio, video, animation
and so on.
Extranets
Extranets (external Intranets) allow two or more companies to share parts of their Intranets related to
joint projects. For example two companies may be working on a common project, an Extranet would
allow them to share files related with the project.
Extranets allow other organizations, such as suppliers, limited access to the organization’s
network.
The purpose of the extranet is to increase efficiency within the business and to reduce costs
Firewalls
A firewall (or security gateway) is a security system designed to protect organizational
networks. It protects a network against intrusion from outside sources. They may be
categorized as those that block traffic or those that permit traffic.
It consists of hardware and software that control access to a company’s intranet, extranet and
other internal networks.
It includes a special computer called a proxy server, which acts as a gatekeeper.
All communications between the company’s internal networks and outside world must pass
through this special computer.
The proxy server decides whether to allow a particular message or file to pass through.
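Conceptually, the gatekeeper decision can be pictured as a simple rule check, as in the Python sketch below. The addresses, ports and policy are invented for the illustration; real firewalls and proxy servers apply far more elaborate rules.

    ALLOWED_INBOUND_PORTS = {25, 80, 443}          # assumed policy: mail and web traffic only
    BLOCKED_SOURCES = {"203.0.113.45"}             # a hypothetical hostile address

    def allow(packet: dict) -> bool:
        if packet["src_ip"] in BLOCKED_SOURCES:
            return False                           # explicitly blocked source
        if packet["dst_port"] not in ALLOWED_INBOUND_PORTS:
            return False                           # service not exposed to the outside world
        return True

    print(allow({"src_ip": "198.51.100.7", "dst_port": 80}))    # True  - web request passes
    print(allow({"src_ip": "203.0.113.45", "dst_port": 80}))    # False - blocked source
    print(allow({"src_ip": "198.51.100.7", "dst_port": 3306}))  # False - port not allowed through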
Information Superhighway
Information superhighway is a name first used by former US Vice President Al Gore for the vision of
a global, high-speed communications network that will carry voice, data, video and other forms of
information all over the world, and that will make it possible for people to send e-mail, get up-to-the-
minute news, and access business, government and educational information. The Internet is already
providing many of these features, via telephone networks, cable TV services, online service providers
and satellites.
It is commonly used as a synonym for National Information Infrastructure (NII). NII is a proposed,
advanced, seamless web of public and private communications networks, interactive services,
interoperable hardware and software, computers, databases, and consumer electronics to put vast
amounts of information at user’s fingertips.
Terminology
Multiplexors/concentrators
They are the devices that use several communication channels at the same time. A multiplexor allows
a physical circuit to carry more than one signal at one time when the circuit has more capacity
(bandwidth) than the individual signals require. It transmits and receives messages and controls the
communication lines to allow multiple users access to the system. It can also link several low-speed
lines to one high-speed line to enhance transmission capabilities.
Cluster controllers
They are the communications terminal control units that control a number of devices such as terminals,
printers and auxiliary storage devices. In such a configuration devices share a common control unit,
which manages input/output operations with a central computer. All messages are buffered by the
terminal control unit and then transmitted to the receivers.
Protocol converters
They are devices used to convert from one protocol to another such as between asynchronous and
synchronous transmission. Asynchronous terminals are attached to host computers or host
communication controllers using protocol converters. Asynchronous communication techniques do not
allow easy identification of transmission errors; therefore, slow transmission speeds are used to
minimize the potential for errors. It is desirable to communicate with the host computer using
synchronous transmission if high transmission speeds or rapid response is needed.
Multiplexing
Multiplexing is sending multiple signals or streams of information on a carrier at the same time in the
form of a single, complex signal and then recovering the separate signals at the receiving end. Analog
signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the carrier
bandwidth is divided into sub-channels of different frequency widths, each carrying a signal at the
same time in parallel. Digital signals are commonly multiplexed using time-division multiplexing
(TDM), in which the multiple signals are carried over the same channel in alternating time slots. In
some optical fibre networks, multiple signals are carried together as separate wavelengths of light in a
multiplexed signal using dense wavelength division multiplexing (DWDM).
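A toy Python sketch of time-division multiplexing follows: several low-speed streams take turns in fixed time slots on one channel and are separated again at the receiving end. The streams are invented for the example, and the '-' character simply marks an idle (padding) slot.

    from itertools import zip_longest

    streams = {"A": list("AAAA"), "B": list("BBBB"), "C": list("CC")}

    # Multiplex: each frame carries one time slot from every stream, in a fixed order.
    frames = list(zip_longest(*streams.values(), fillvalue="-"))
    channel = [slot for frame in frames for slot in frame]
    print("on the channel:", "".join(channel))      # ABCABCAB-AB-

    # Demultiplex: every third slot belongs to the same original stream.
    recovered = {name: "".join(channel[i::len(streams)]) for i, name in enumerate(streams)}
    print(recovered)    # {'A': 'AAAA', 'B': 'BBBB', 'C': 'CC--'}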
Circuit-switched
Circuit-switched is a type of network in which a physical path is obtained for and dedicated to a single
connection between two end-points in the network for the duration of the connection. Ordinary voice
phone service is circuit-switched. The telephone company reserves a specific physical path to the
number you are calling for the duration of your call. During that time, no one else can use the physical
lines involved.
Circuit-switched is often contrasted with packet-switched. Some packet-switched networks such as the
X.25 network are able to have virtual circuit-switching. A virtual circuit-switched connection is a
dedicated logical connection that allows sharing of the physical path among multiple virtual circuit
connections.
Packet-switched
Packet-switched describes the type of network in which relatively small units of data called packets
are routed through a network based on the destination address contained within each packet. Breaking
communication down into packets allows the same data path to be shared among many users in the
network. This type of communication between sender and receiver is known as connectionless (rather
than dedicated). Most traffic over the Internet uses packet switching and the Internet is basically a
connectionless network.
Virtual circuit
A virtual circuit is a circuit or path between points in a network that appears to be a discrete, physical
path but is actually a managed pool of circuit resources from which specific circuits are allocated as
needed to meet traffic requirements.
A permanent virtual circuit (PVC) is a virtual circuit that is permanently available to the user just as
though it were a dedicated or leased line continuously reserved for that user. A switched virtual circuit
(SVC) is a virtual circuit in which a connection session is set up for a user only for the duration of a
connection. PVCs are an important feature of frame relay networks and SVCs are proposed for later
inclusion.
VSAT
VSAT (Very Small Aperture Terminal) is a satellite communications system that serves home and
business users. A VSAT end user needs a box that interfaces between the user's computer and an
outside antenna with a transceiver. The transceiver receives or sends a signal to a satellite transponder
in the sky. The satellite sends and receives signals from an earth station computer that acts as a hub for
the system. Each end user is interconnected with the hub station via the satellite in a star topology. For
one end user to communicate with another, each transmission has to first go to the hub station which
retransmits it via the satellite to the other end user's VSAT. VSAT handles data, voice, and video
signals.
VSAT offers a number of advantages over terrestrial alternatives. For private applications, companies
can have total control of their own communication system without dependence on other companies.
Business and home users also get higher speed reception than if using ordinary telephone service or
ISDN.
BENEFITS AND CHALLENGES OF NETWORKS IN AN ORGANISATION
Computer networks offer an organization a number of benefits:
a. Communication
Communication is one of the biggest advantages provided by computer networks. Computer
networking technology has improved the way people communicate: people from the same or
different organizations can communicate in a matter of minutes and collaborate on work activities.
In offices and organizations, computer networks serve as the backbone of daily communication from
the top to the bottom level of the organization. Different types of software can be installed
that are useful for transmitting messages and emails at high speed.
b. Data sharing
Another important advantage of computer networks is data sharing. All kinds of data, such as
documents, files, accounts information, reports and multimedia, can be shared with the help of computer
networks. Hardware sharing and application sharing are also allowed in many organizations such as
banks and small firms.
d. Video conferencing
Before the arrival of computer networks there was no practical way to hold video conferences. LANs
and WANs have made it possible for organizations and business sectors to hold live video
conferences for important discussions and meetings.
e. Internet Service
Computer networks provide Internet service over the entire network. Every computer attached
to the network can enjoy high-speed Internet access, as well as fast processing and workload distribution.
f. Broadcasting
With the help of computer networks, news and important messages can be broadcast in a
matter of seconds, which saves a lot of time and effort. People can exchange messages immediately
over the network at any time, 24 hours a day.
h. Saves Cost
Computer networks save a lot of cost for any organization in different ways. Building links
through computer networks allows files and messages to be transferred to other people immediately,
which reduces transportation and communication expenses. Networking also raises the standard of the
organization because of the advanced technologies that are used.
j. Flexible
Computer networks are quite flexible: all of the common topologies and networking strategies support
the addition of extra components and terminals to the network. They are equally suited to large and small
organizations.
k. Reliable
Computer networks are reliable where the safety of data is concerned. If one of the attached systems
fails, the same data can be retrieved from another system attached to the same network.
l. Data transmission
Data is transferred at high speed even in scenarios where one or two terminal machines fail to
work properly. Data transmission is seldom affected in computer networks; almost complete
communication can still be achieved in critical scenarios.
The organization should strive to hire, train, and retain skilled managers and staff who understand
technology and how it can be used to satisfy organizational objectives. This is not easy, given the
highly competitive job market for network specialists, and the rapid proliferation of new networking
technologies. During the planning process, potentially serious political and organizational issues
should be identified. For instance, people may feel threatened if they believe that the proposed
network will compromise their power or influence. Consequently, they may attempt to hinder the
project’s progress. The organization must confront these fears and develop strategies for dealing with
them.
In addition to organizational challenges, numerous technical challenges must be faced when designing
a network. Perhaps the foremost challenge is the sheer multiplicity of options that must be considered.
Added to this is the fact that current networks continue to grow in size, scope, and complexity. On top
of this, the networking options available are in a constant state of flux. Keeping abreast of new
developments and relating them to organizational requirements is a formidable task, and it is rare that
an organization will have all the in-house expertise that it needs to do this well.
Often consultants and outside vendors are needed to help plan and implement the network. It is much
easier to manage the activities of the consultants if the organization has a firm grip on the business
objectives and requirements. However, sometimes consultants are needed to help develop and specify
the business objectives and requirements. Although outside consultants offer benefits such as
expertise and objectivity, they also present their own set of challenges. For instance, it is important to
develop a “technology transfer” plan when working with outside consultants, to make sure that in-
house staff can carry on as needed after the consultant leaves.
Through the 1970s and 1980s, if you wanted a network, you could call IBM and they would design
your network. It was a common adage that “the only risk is not buying IBM.” However, for the
foreseeable future, there will be increasing numbers of network vendors in the marketplace and a
decreasing likelihood that any one vendor will satisfy all of the organization’s network requirements.
While often unavoidable, using multiple vendors can pose problems, particularly when there are
problems with the network implementation and each vendor is pointing a finger at the other. Since it
is increasingly likely that a particular network vendor will provide only a part of the network solution,
it is incumbent on the network design team to make sure that the global network requirements are
addressed.
In short, the sheer volume, complexity, and pace of change in technology complicate the already
formidable task of network design. Strategies for meeting these challenges are dictated by common
sense and good management principles. We briefly summarize some of these strategies below:
LIMITATIONS OF NETWORKS IN AN ORGANISATION
Another major problem with networks is that their efficiency is very dependent on the skill of the
systems manager. A badly managed network may operate less efficiently than non-networked
computers. Also, a badly run network may allow external users into it with little protection against
them causing damage. Damage could also be caused by novices causing problems, such as deleting
important files.
1. If a network file server develops a fault, then users may not be able to run application
programs
2. A fault on the network can cause users to lose data (if the files being worked upon are not
saved)
3. If the network stops operating, then it may not be possible to access various resources
4. Users' work throughput becomes dependent upon the network and the skill of the systems
manager
5. It is difficult to make the system secure from hackers, novices or industrial espionage
6. Decisions on resource planning tend to become centralized, for example, what word processor
is used, what printers are bought, etc.
7. Networks that have grown with little thought can be inefficient in the long term.
8. As traffic increases on a network, the performance degrades unless it is designed properly
9. Resources may be located too far away from some users
10. The larger the network becomes, the more difficult it is to manage
CLOUD COMPUTING
Cloud computing is the use of computing resources (hardware and software) that are delivered as a
service over a network (typically the Internet). The name comes from the common use of a cloud-
shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud
computing entrusts remote services with a user's data, software and computation.
End users access cloud-based applications through a web browser or a light-weight desktop or mobile
application while the business software and user's data are stored on servers at a remote location.
Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and
focus on projects that differentiate their businesses instead of infrastructure. Proponents also claim
that cloud computing allows enterprises to get their applications up and running faster, with improved
manageability and less maintenance, and enables IT to more rapidly adjust resources to meet
fluctuating and unpredictable business demand.
In the business model using software as a service (SaaS), users are provided access to application
software and databases. Cloud providers manage the infrastructure and platforms that run the
applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-
per-use basis. SaaS providers generally price applications using a subscription fee.
Proponents claim that the SaaS allows a business the potential to reduce IT operational costs by
outsourcing hardware and software maintenance and support to the cloud provider. This enables the
business to reallocate IT operations costs away from hardware/software spending and personnel
expenses, towards meeting other IT goals. In addition, with applications hosted centrally, updates can
be released without the need for users to install new software. One drawback of SaaS is that the users'
data are stored on the cloud provider's server. As a result, there could be unauthorized access to the
data.
Cloud computing relies on sharing of resources to achieve coherence and economies of scale similar
to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the
broader concept of converged infrastructure and shared services.
History
The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe
computers became available in academia and corporations, accessible via thin clients / terminal
computers, often referred to as "dumb terminals" because they were used for communications but had
no internal computational capacity. To make more efficient use of costly mainframes, a practice evolved that
allowed multiple users to share both the physical access to the computer from multiple terminals as
well as to share the CPU time. This eliminated periods of inactivity on the mainframe and allowed for
a greater return on the investment. The practice of sharing CPU time on a mainframe became known
in the industry as time-sharing.
In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-
point data circuits, began offering virtual private network (VPN) services with comparable quality of
service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use
overall network bandwidth more effectively. They began to use the cloud symbol to denote the
demarcation point between what the provider was responsible for and what users were responsible for.
Cloud computing extends this boundary to cover servers as well as the network infrastructure.
As computers became more prevalent, scientists and technologists explored ways to make large-scale
computing power available to more users through time sharing, experimenting with algorithms to
provide the optimal use of the infrastructure, platform and applications with prioritized access to the
CPU and efficiency for the end users.
John McCarthy opined in the 1960s that "computation may someday be organized as a public utility."
Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility,
online, illusion of infinite supply), the comparison to the electricity industry and the use of public,
private, government, and community forms, were thoroughly explored in Douglas Parkhill's 1966
book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots
go all the way back to the 1950s when scientist Herb Grosch (the author of Grosch's law) postulated
that the entire world would operate on dumb terminals powered by about 15 large data centers. Due to
the expense of these powerful computers, many corporations and other entities could avail themselves
of computing capability through time sharing, and several organizations, such as GE's GEISCO, IBM
subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966),
National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by
Tymshare in 1968), and Bolt, Beranek and Newman (BBN) marketed time sharing as a commercial
venture.
The development of the Internet from being document-centric via semantic data towards more and
more services was described as the "Dynamic Web". This contribution focused in particular on the need
for better meta-data able to describe not only implementation details but also conceptual details of
model-based applications.
The ubiquitous availability of high-capacity networks, low-cost computers and storage devices as well
as the widespread adoption of hardware virtualization, service-oriented architecture, autonomic, and
utility computing have led to a tremendous growth in cloud computing.
After the dot-com bubble, Amazon played a key role in the development of cloud computing by
modernizing their data centers, which, like most computer networks, were using as little as 10% of
their capacity at any one time, just to leave room for occasional spikes. Having found that the new
cloud architecture resulted in significant internal efficiency improvements whereby small, fast-
moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster
and more easily, Amazon initiated a new product development effort to provide cloud computing to
external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.
In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying
private clouds. In early 2008, OpenNebula, enhanced in the Reservoir European Commission-funded
project, became the first open-source software for deploying private and hybrid clouds, and for the
federation of clouds. In the same year, efforts were focused on providing quality of service guarantees
(as required by real-time interactive applications) to cloud-based infrastructures, in the framework of
the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By
mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among
consumers of IT services, those who use IT services and those who sell them" and observed that
"organizations are switching from company-owned hardware and software assets to per-use service-
based models" so that the "projected shift to computing ... will result in dramatic growth in IT
products in some areas and significant reductions in other areas."
The main enabling technologies for Cloud Computing are virtualization and autonomic computing.
Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it
available as a soft component that is easy to use and manage. By doing so, virtualization provides the
agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On
the other hand, autonomic computing automates the process through which the user can provision
resources on-demand. By minimizing user involvement, automation speeds up the process and
reduces the possibility of human errors.
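To make the idea of autonomic, on-demand provisioning more concrete, the sketch below shows a minimal monitor-analyse-act loop in Python. The thresholds and the get_cpu_utilisation, add_server and remove_server functions are hypothetical placeholders rather than part of any real cloud API; the point is only that the system, not the user, decides when to provision or release resources.

# Minimal sketch of an autonomic (self-managing) scaling loop.
# All functions below are hypothetical stand-ins for real monitoring
# and provisioning calls offered by a cloud provider.

import random
import time

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% average CPU
SCALE_DOWN_THRESHOLD = 0.30  # remove capacity below 30% average CPU

servers = ["vm-1", "vm-2"]   # the pool currently provisioned

def get_cpu_utilisation(pool):
    """Pretend probe: returns average CPU utilisation of the pool (0.0 - 1.0)."""
    return random.uniform(0.1, 1.0)

def add_server(pool):
    pool.append(f"vm-{len(pool) + 1}")
    print(f"scaled up   -> {len(pool)} servers")

def remove_server(pool):
    if len(pool) > 1:
        pool.pop()
        print(f"scaled down -> {len(pool)} servers")

def autonomic_loop(cycles=5):
    """Monitor, analyse and act without human involvement."""
    for _ in range(cycles):
        load = get_cpu_utilisation(servers)
        if load > SCALE_UP_THRESHOLD:
            add_server(servers)
        elif load < SCALE_DOWN_THRESHOLD:
            remove_server(servers)
        time.sleep(0.1)  # a real controller would poll far less often

if __name__ == "__main__":
    autonomic_loop()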
Users face difficult business problems every day. Cloud Computing adopts concepts from Service-
oriented Architecture (SOA) that can help the user break these problems into services that can be
integrated to provide a solution. Cloud Computing provides all of its resources as services, and makes
use of the well-established standards and best practices gained in the domain of SOA to allow global
and easy access to cloud services in a standardized way.
Cloud Computing also leverages concepts from utility computing in order to provide metrics for the
services used. Such metrics are at the core of the public cloud pay-per-use models. In addition,
measured services are an essential part of the feedback loop in autonomic computing, allowing
services to scale on-demand and to perform automatic failure recovery.
Cloud Computing is a kind of Grid Computing; it has evolved from Grid by addressing the QoS and
reliability problems. Cloud Computing provides the tools and technologies to build data/compute
intensive parallel applications with much more affordable prices compared to traditional parallel
computing techniques.
Cloud computing shares characteristics with several related computing concepts:
i. Autonomic computing
Computer systems capable of self-management.
v. Utility computing
The "packaging of computing resources, such as computation and storage, as a metered service
similar to a traditional public utility, such as electricity."
vi. Peer-to-peer means distributed architecture without the need for central coordination.
Participants are both suppliers and consumers of resources (in contrast to the traditional
client–server model).
vii. Cloud gaming
Also known as on-demand gaming, this is a way of delivering games to computers. Gaming data is
stored on the provider's servers, so that gaming is independent of the client computers used to play the
game.
Characteristics
Cloud computing exhibits the following key characteristics:
v. Virtualization technology allows servers and storage devices to be shared and utilization be
increased. Applications can be easily migrated from one physical server to another.
vi. Multitenancy enables sharing of resources and costs across a large pool of users thus allowing
for:
• Centralization of infrastructure in locations with lower costs (such as real estate,
electricity, etc.)
• Peak-load capacity increases (users need not engineer for highest possible load-
levels)
• Utilisation and efficiency improvements for systems that are often only 10–20%
utilised.
vii. Reliability is improved if multiple redundant sites are used, which makes well-designed cloud
computing suitable for business continuity and disaster recovery.
viii. Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-
grained, self-service basis in near real-time, without users having to engineer for peak loads.
ix. Performance is monitored, and consistent and loosely coupled architectures are constructed
using web services as the system interface.
x. Security could improve due to centralization of data, increased security-focused resources,
etc., but concerns can persist about loss of control over certain sensitive data, and the lack of
security for stored kernels. Security is often as good as or better than other traditional
systems, in part because providers are able to devote resources to solving security issues that
many customers cannot afford. However, the complexity of security is greatly increased when
data is distributed over a wider area or greater number of devices and in multi-tenant systems
that are being shared by unrelated users. In addition, user access to security audit logs may be
difficult or impossible. Private cloud installations are in part motivated by users' desire to
retain control over the infrastructure and avoid losing control of information security.
xi. Maintenance of cloud computing applications is easier, because they do not need to be
installed on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies "five
essential characteristics":
a) On-demand self-service
A consumer can unilaterally provision computing capabilities, such as server time and network
storage, as needed automatically without requiring human interaction with each service provider.
d) Rapid elasticity.
Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly
outward and inward commensurate with demand. To the consumer, the capabilities available for
provisioning often appear unlimited and can be appropriated in any quantity at any time.
e) Measured service.
Cloud systems automatically control and optimize resource use by leveraging a metering capability at
some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and
active user accounts). Resource usage can be monitored, controlled, and reported, providing
transparency for both the provider and consumer of the utilized service.
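As an illustration of measured service, the short sketch below computes a pay-per-use bill from metered resource consumption. The resource names, unit prices and usage figures are invented for the example; real providers publish their own rate cards and metering granularity.

# Hedged sketch: computing a pay-per-use bill from metered usage.
# Prices and usage figures are illustrative only.

UNIT_PRICES = {
    "compute_hours": 0.10,       # price per VM-hour
    "storage_gb_months": 0.02,   # price per GB stored per month
    "bandwidth_gb": 0.05,        # price per GB transferred
}

def monthly_bill(usage):
    """Sum metered usage multiplied by the unit price of each resource."""
    return sum(UNIT_PRICES[resource] * amount for resource, amount in usage.items())

if __name__ == "__main__":
    usage = {"compute_hours": 720, "storage_gb_months": 500, "bandwidth_gb": 120}
    print(f"Amount due: ${monthly_bill(usage):.2f}")  # 72.00 + 10.00 + 6.00 = 88.00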
On-demand self-service
On-demand self-service allows users to obtain, configure and deploy cloud services themselves using
cloud service catalogues, without requiring the assistance of IT. This feature is listed by the National
Institute of Standards and Technology (NIST) as a characteristic of cloud computing.
The self-service requirement of cloud computing prompts infrastructure vendors to create cloud
computing templates, which are obtained from cloud service catalogues. Manufacturers of such
templates or blueprints include BMC Software (BMC), with Service Blueprints as part of its cloud
management platform; Hewlett-Packard (HP), which names its templates HP Cloud Maps; RightScale;
and Red Hat, which names its templates CloudForms.
The templates contain predefined configurations used by consumers to set up cloud services. The
templates or blueprints provide the technical information necessary to build ready-to-use clouds. Each
template includes specific configuration details for different cloud infrastructures, with information
about servers for specific tasks such as hosting applications, databases, websites and so on. The
templates also include predefined Web services, the operating system, the database, security
configurations and load balancing.
Cloud consumers use cloud templates to move applications between clouds through a self-service
portal. The predefined blueprints define all that an application requires to run in different
environments. For example, a template could define how the same application could be deployed in
cloud platforms based on Amazon Web Service, VMware or Red Hat. The user organization benefits
from cloud templates because the technical aspects of cloud configurations reside in the templates,
letting users deploy cloud services at the push of a button. Cloud templates can also be used by
developers to create a catalog of cloud services.
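The sketch below represents one such template as a plain Python dictionary, simply to show the kind of information (server roles, operating system, database, security and load-balancing settings) a blueprint bundles together. The field names and values are invented for illustration and do not follow any particular vendor's blueprint format.

# Hypothetical cloud template ("blueprint") expressed as a Python dictionary.
# Field names and values are illustrative; real products such as HP Cloud Maps
# or CloudForms define their own schemas.

web_app_template = {
    "name": "three-tier-web-app",
    "target_clouds": ["Amazon Web Services", "VMware", "Red Hat"],
    "servers": [
        {"role": "web",      "count": 2, "os": "Ubuntu 22.04", "software": ["nginx"]},
        {"role": "app",      "count": 2, "os": "Ubuntu 22.04", "software": ["python3"]},
        {"role": "database", "count": 1, "os": "Ubuntu 22.04", "software": ["postgresql"]},
    ],
    "load_balancer": {"algorithm": "round-robin", "listen_port": 443},
    "security": {"firewall_rules": ["allow 443 from 0.0.0.0/0"], "encrypt_disks": True},
}

def describe(template):
    """Print a one-line summary of what the blueprint would deploy."""
    total = sum(server["count"] for server in template["servers"])
    print(f"{template['name']}: {total} servers on {', '.join(template['target_clouds'])}")

if __name__ == "__main__":
    describe(web_app_template)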
Service models
Cloud computing providers offer their services according to several fundamental models:
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) where
IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key
components in XaaS are described in a comprehensive taxonomy model published in 2009, such as
Strategy-as-a-Service, Collaboration-as-a-Service, Business Process-as-a-Service, Database-as-a-
Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were
officially included by ITU (International Telecommunication Union) as part of the basic cloud
computing models, recognized service categories of a telecommunication-centric cloud ecosystem.
Infrastructure as a service (IaaS)
In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often)
virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines
as guests. Pools of hypervisors within the cloud operational support-system can support large numbers
of virtual machines and the ability to scale services up and down according to customers' varying
requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image
library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area
networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-demand
from their large pools installed in data centers. For wide-area connectivity, customers can use either
the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application
software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating
systems and the application software. Cloud providers typically bill IaaS services on a utility
computing basis: cost reflects the amount of resources allocated and consumed.
Examples of IaaS offerings include:
Amazon EC2,
Azure Services Platform,
DynDNS,
Google Compute Engine,
HP Cloud,
iland,
Joyent,
LeaseWeb,
Linode,
NaviSite,
Oracle Infrastructure as a Service,
Rackspace Cloud,
ReadySpace Cloud Services,
ReliaCloud,
SAVVIS,
SingleHop, and
Terremark
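As a concrete, hedged example of requesting IaaS capacity on demand, the sketch below uses the boto3 library to launch a single Amazon EC2 virtual machine. The AMI ID and instance type are placeholders, and the call assumes an AWS account and credentials are already configured; it is meant only to show that provisioning reduces to an API call rather than a hardware purchase.

# Hedged sketch: provisioning an IaaS virtual machine via the AWS API (boto3).
# Assumes AWS credentials are already configured (e.g. via environment variables)
# and that the AMI ID below is replaced with a real image in your region.

import boto3

def launch_small_vm(image_id="ami-xxxxxxxx", instance_type="t2.micro"):
    """Request one small virtual machine from the EC2 service."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=image_id,          # placeholder AMI; substitute a real one
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Requested instance {instance_id}")
    return instance_id

if __name__ == "__main__":
    launch_small_vm()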
Platform as a service (PaaS)
In the PaaS model, cloud providers deliver a computing platform typically including operating
system, programming language execution environment, database, and web server. Application
developers can develop and run their software solutions on a cloud platform without the cost and
complexity of buying and managing the underlying hardware and software layers. With some PaaS
offers, the underlying computer and storage resources scale automatically to match application
demand, such that the cloud user does not have to allocate resources manually.
Software as a service (SaaS)
In the SaaS model, cloud providers install and operate application software in the cloud and cloud
users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and
platform where the application runs. This eliminates the need to install and run the application on the
cloud user's own computers, which simplifies maintenance and support. Cloud applications are
different from other applications in their scalability—which can be achieved by cloning tasks onto
multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the
work over the set of virtual machines. This process is transparent to the cloud user, who sees only a
single access point. To accommodate a large number of cloud users, cloud applications can be
multitenant, that is, any machine serves more than one cloud user organization. It is common to refer
to special types of cloud based application software with a similar naming convention: desktop as a
service, business process as a service, test environment as a service, communication as a service.
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so price is
scalable and adjustable if users are added or removed at any point.
Examples of SaaS include: Google Apps, Microsoft Office 365, Onlive, GT Nexus, Marketo, Casengo
and TradeCard.
Network as a service (NaaS)
A category of cloud services where the capability provided to the cloud service user is to use
network/transport connectivity services and/or inter-cloud network connectivity services. NaaS
involves the optimization of resource allocations by considering network and computing resources as
a unified whole.
Traditional NaaS services include flexible and extended VPN, and bandwidth on demand. NaaS
concept materialization also includes the provision of a virtual network service by the owners of the
network infrastructure to a third party (VNP – VNO).
Cloud clients
Users access cloud computing using networked client devices, such as desktop computers, laptops,
tablets and smartphones. Some of these devices - cloud clients - rely on cloud computing for all or a
majority of their applications so as to be essentially useless without it. Examples are thin clients and
the browser-based Chromebook. Many cloud applications do not require specific software on the
client and instead use a web browser to interact with the cloud application. With Ajax and HTML5
these Web user interfaces can achieve a similar or even better look and feel as native applications.
Some cloud applications, however, support specific client software dedicated to these applications
(e.g., virtual desktop clients and most email clients). Some legacy applications (line of business
applications that until now have been prevalent in thin client Windows computing) are delivered via a
screen-sharing technology.
Deployment models
Some of the deployment models include:
a) Public cloud
b) Community cloud
c) Hybrid cloud
d) Private cloud
a. Public cloud
Public cloud applications, storage, and other resources are made available to the general public by a
service provider. These services are free or offered on a pay-per-use model. Generally, public cloud
service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and
offer access only via the Internet (direct connectivity is not offered).
b. Community cloud
Community cloud shares infrastructure between several organizations from a specific community with
common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-
party and hosted internally or externally. The costs are spread over fewer users than a public cloud
(but more than a private cloud), so only some of the cost savings potential of cloud computing are
realized.
c. Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain
unique entities but are bound together, offering the benefits of multiple deployment models. Such
composition expands deployment options for cloud services, allowing IT organizations to use public
cloud computing resources to meet temporary needs. This capability enables hybrid clouds to employ
cloud bursting for scaling across clouds.
Cloud bursting is an application deployment model in which an application runs in a private cloud or
data center and "bursts" to a public cloud when the demand for computing capacity increases. A
primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for
extra compute resources when they are needed.
Cloud bursting enables data centers to create an in-house IT infrastructure that supports average
workloads, and use cloud resources from public or private clouds, during spikes in processing
demands.
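The decision at the heart of cloud bursting can be sketched as a simple placement rule, shown below: run work in the private data center while capacity remains, and send the overflow to a public cloud only during spikes. The capacity and workload figures are invented for the example.

# Hedged sketch of the cloud-bursting placement rule:
# use in-house capacity first, "burst" the overflow to a public cloud.

PRIVATE_CAPACITY = 100  # units of work the in-house data center can absorb

def place_workload(demand):
    """Split demand between the private cloud and a public cloud."""
    in_house = min(demand, PRIVATE_CAPACITY)
    burst = max(0, demand - PRIVATE_CAPACITY)
    return in_house, burst

if __name__ == "__main__":
    for demand in (60, 100, 170):  # average load, full load, seasonal spike
        in_house, burst = place_workload(demand)
        print(f"demand={demand:3d}: private={in_house:3d}, public burst={burst:3d}")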
By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault
tolerance combined with locally immediate usability without dependency on internet connectivity.
Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based
cloud infrastructure.
Compared with purely in-house applications, hybrid clouds may sacrifice some security and certainty,
but they combine the flexibility of in-house applications with the fault tolerance and scalability of
cloud-based services.
d. Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed
internally or by a third-party and hosted internally or externally. Undertaking a private cloud project
requires a significant level and degree of engagement to virtualize the business environment, and
requires the organization to reevaluate decisions about existing resources. When done right, it can
improve the business, but every step in the project raises security issues that must be addressed to
prevent serious vulnerabilities.
Private clouds have attracted criticism because users "still have to buy, build, and manage them" and
thus do not benefit from less hands-on management, essentially "[lacking] the economic model that
makes cloud computing such an intriguing concept".
Architecture
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud
computing, typically involves multiple cloud components communicating with each other over a loose
coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of
tight or loose coupling as applied to mechanisms such as these and others.
The Intercloud
The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network
of networks" on which it is based.
Issues in cloud computing
a. Privacy
Privacy advocates have criticized the cloud model for giving hosting companies greater ease to
control, and thus to monitor at will, communication between the host company and the end user, and
to access user data (with or without permission). Instances such as the secret NSA program, which
worked with AT&T and Verizon and recorded over 10 million telephone calls between American
citizens, cause uncertainty among privacy advocates about the greater powers such arrangements give
telecommunication companies to monitor user activity. A cloud service provider (CSP) can
complicate data privacy because of the extent of virtualization (virtual machines) and cloud storage
used to implement cloud services. In CSP operations, customer or tenant data may not remain on the
same system, or in the same data center, or even within the same provider's cloud; this can lead to
legal concerns over jurisdiction.
While there have been efforts (such as US-EU Safe Harbor) to "harmonise" the legal environment,
providers such as Amazon still cater to major markets (typically the United States and the European
Union) by deploying local infrastructure and allowing customers to select "availability zones." Cloud
computing poses privacy concerns because the service provider may access the data that is on the
cloud at any point in time. They could accidentally or deliberately alter or even delete information.
b. Compliance
To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data
Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt
community or hybrid deployment modes that are typically more expensive and may offer restricted
benefits. This is how Google is able to "manage and meet additional government policy requirements
beyond FISMA"and Rackspace Cloud or QubeSpace are able to claim PCI compliance.
Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that the
hand-picked set of goals and standards determined by the auditor and the auditee are often not
disclosed and can vary widely. Providers typically make this information available on request, under
non-disclosure agreement.
Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the EU
regulations on export of personal data.
c. Legal
As with other changes in the landscape of computing, certain legal issues arise with cloud computing,
including trademark infringement, security concerns and sharing of proprietary data resources.
The Electronic Frontier Foundation has criticized the United States government for considering
during the Megaupload seizure process that people lose property rights by storing data on a cloud
computing service.
One important but not often mentioned problem with cloud computing is the problem of who is in
"possession" of the data. If a cloud company is the possessor of the data, the possessor has certain
legal rights. If the cloud company is the "custodian" of the data, then a different set of rights would
apply. The next problem in the legalities of cloud computing is the problem of legal ownership of the
data. Many Terms of Service agreements are silent on the question of ownership.
d. Vendor lock-in
Because cloud computing is still relatively new, standards are still being developed. Many cloud
platforms and services are proprietary, meaning that they are built on the specific standards, tools and
protocols developed by a particular vendor for its particular cloud offering. This can make migrating
off a proprietary cloud platform prohibitively complicated and expensive.
i. Platform lock-in: cloud services tend to be built on one of several possible virtualization
platforms, for example VMWare or Xen. Migrating from a cloud provider using one platform
to a cloud provider using a different platform could be very complicated.
ii. Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the
data once it lives on a cloud platform, are not yet developed, which could make it complicated
if cloud computing users ever decide to move data off of a cloud vendor's platform.
iii. Tools lock-in: if tools built to manage a cloud environment are not compatible with different
kinds of both virtual and physical infrastructure, those tools will only be able to manage data
or apps that live in the vendor's particular cloud environment.
Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor
lock-in, and aligns with enterprise data centers that are operating hybrid cloud models. The absence of
vendor lock-in lets cloud administrators select their choice of hypervisor for specific tasks, or to
deploy virtualized infrastructures to other enterprises without the need to consider the flavor of
hypervisor in the other enterprise.
A heterogeneous cloud is considered one that includes on-premise private clouds, public clouds and
software-as-a-service clouds. Heterogeneous clouds can work with environments that are not
virtualized, such as traditional data centers. Heterogeneous clouds also allow for the use of piece
parts, such as hypervisors, servers, and storage, from multiple vendors.
Cloud piece parts, such as cloud storage systems, offer APIs but they are often incompatible with each
other. The result is complicated migration between backends and difficulty integrating data
spread across various locations. This has been described as a problem of vendor lock-in. The solution
to this is for clouds to adopt common standards.
e. Open source
Open-source software has provided the foundation for many cloud computing implementations,
prominent examples being the Hadoop framework and VMware's Cloud Foundry. In November 2007,
the Free Software Foundation released the Affero General Public License, a version of GPLv3
intended to close a perceived legal loophole associated with free software designed to run over a
network.
f. Open standards
Most cloud providers expose APIs that are typically well-documented (often under a Creative
Commons license) but also unique to their implementation and thus not interoperable. Some vendors
have adopted others' APIs and there are a number of open standards under development, with a view
to delivering interoperability and portability. As of November 2012, the Open Standard with broadest
industry support is probably OpenStack, founded in 2010 by NASA and Rackspace, and now
governed by the OpenStack Foundation. OpenStack supporters include AMD, Intel, Canonical, SUSE
Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo and now VMware.
g. Security
As cloud computing is achieving increased popularity, concerns are being voiced about the security
issues introduced through adoption of this new model. The effectiveness and efficiency of traditional
protection mechanisms are being reconsidered as the characteristics of this innovative deployment
model can differ widely from those of traditional architectures. An alternative perspective on the topic
of cloud security is that this is but another, although quite broad, case of "applied security" and that
similar security principles that apply in shared multi-user mainframe security models apply with cloud
security.
The relative security of cloud computing services is a contentious issue that may be delaying its
adoption. Physical control of the Private Cloud equipment is more secure than having the equipment
off site and under someone else's control. Physical control and the ability to visually inspect data links
and access ports are required in order to ensure data links are not compromised. Issues barring the
adoption of cloud computing are due in large part to the private and public sectors' unease
surrounding the external management of security-based services. It is the very nature of cloud
computing-based services, private or public, that promote external management of provided services.
This delivers great incentive to cloud computing service providers to prioritize building and
maintaining strong management of secure services. Security issues have been categorised into
sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious
insiders, management console security, account control, and multi-tenancy issues. Solutions to various
cloud security issues vary, from cryptography, particularly public key infrastructure (PKI), to use of
multiple cloud providers, standardisation of APIs, and improving virtual machine support and legal
support.
Cloud computing offers many benefits, but is vulnerable to threats. As the use of cloud computing
increases, it is likely that more criminals will find new ways to exploit system vulnerabilities. Many underlying
challenges and risks in cloud computing increase the threat of data compromise. To mitigate the
threat, cloud computing stakeholders should invest heavily in risk assessment to ensure that the
system encrypts to protect data, establishes trusted foundation to secure the platform and
infrastructure, and builds higher assurance into auditing to strengthen compliance. Security concerns
must be addressed to maintain trust in cloud computing technology.
h. Sustainability
Although cloud computing is often assumed to be a form of green computing, no published study
substantiates this assumption. However, in areas where the climate favors natural cooling and
renewable electricity is readily available, the environmental effects of the servers running cloud
services will be more moderate. (The same holds true for "traditional" data
centers.) Thus countries with favorable conditions, such as Finland, Sweden and Switzerland, are
trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from
energy-aware scheduling and server consolidation. However, in the case of clouds distributed over
data centers with different sources of energy, including renewable sources, a small compromise on
energy consumption reduction could result in a large reduction in carbon footprint.
i. Abuse
As with privately purchased hardware, customers can purchase the services of cloud computing for
nefarious purposes. This includes password cracking and launching attacks using the purchased
services. In 2009, a banking trojan illegally used the popular Amazon service as a command and
control channel that issued software updates and malicious instructions to PCs that were infected by
the malware.
j. IT governance
The introduction of cloud computing requires an appropriate IT governance model to ensure a secured
computing environment and to comply with all relevant organizational information technology
policies. As such, organizations need a set of capabilities that are essential when effectively
implementing and managing cloud services, including demand management, relationship
management, data security management, application lifecycle management, risk and compliance
management. A danger lies with the explosion of companies joining the growth in cloud computing
by becoming providers. However, many of the infrastructural and logistical concerns regarding the
operation of cloud computing businesses are still unknown. This over-saturation may have
ramifications for the industry as a whole.
l. Ambiguity of terminology
Outside of the information technology and software industry, the term "cloud" can be found to
reference a wide range of services, some of which fall under the category of cloud computing, while
others do not. The cloud is often used to refer to a product or service that is discovered, accessed and
paid for over the Internet, but is not necessarily a computing resource. Examples of service that are
sometimes referred to as "the cloud" include, but are not limited to, crowd sourcing, cloud printing,
crowd funding, cloud manufacturing.
REVISION EXERCISES
1. Discuss the principles of data communication and networks.
2. What are some of the characteristics of data transmission?
3. There is a global trend towards adopting digital communication as opposed to analogue
systems. Analogue data has therefore to be converted to digital data in a process known as
digitisation. Why is it advantageous to digitise data?
4. Briefly describe the main components of a protocol.
5. What is a network topology?
6. Discuss the four types of network topology
7. Why has the use of fiber optic systems become popular in the recent past?
8. List three examples of network protocols.
9. What are some of the benefits of networks in an organization?
10. Discuss the challenges and limitations of networks in an organization
11. Identify the main components of a Local Area Network (LAN)
12. Describe how fiber optic systems are used in communications systems.
13. Define the following terms:
(i) Attenuation
(ii) Delay distortion
(iii) Noise
14. What is cloud computing and what are some of the characteristics of cloud computing
15. There are three main types of network topologies namely; star, ring and bus. As a network
administrator, you have been asked to produce a briefing document that discusses each
topology in terms of cabling cost, fault tolerance, data redundancy and performance as the
number of nodes increases.
CHAPTER 6
E-COMMERCE
SYNOPSIS
Introduction……………………………………………………. 183
Impact of The Internet on Business……………………………. 183
Models of E-Commerce……………………………………….. 190
Business Opportunities in E-Commerce……………………… 198
Challenges of E-Commerce………………………………….. 200
Mobile Computing……………………………………………. 203
Internet Labs…………………………………………………… 204
INTRODUCTION
Electronic commerce, commonly known as ecommerce, is a type of industry where buying and selling
of product or service is conducted over electronic systems such as the Internet and other computer
networks. Electronic commerce draws on technologies such as mobile commerce, electronic funds
transfer, supply chain management, Internet marketing, online transaction processing, electronic data
interchange (EDI), inventory management systems, and automated data collection systems. Modern
electronic commerce typically uses the World Wide Web at least at one point in the transaction's life-
cycle, although it may encompass a wider range of technologies such as e-mail, mobile devices, social
media, and telephones as well.
Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of
the exchange of data to facilitate the financing and payment aspects of business transactions. E-
commerce commonly involves:
• E-tailing or "virtual storefronts" on websites with online catalogs, sometimes gathered into a
"virtual mall"
• The gathering and use of demographic data through Web contacts and social media
• Electronic Data Interchange (EDI), the business-to-business exchange of data
• E-mail and fax and their use as media for reaching prospective and established customers (for
example, with newsletters)
• Business-to-business buying and selling
• The security of business transactions
1. Technical Papers
Originally, the Internet was only used by the government and universities. Research scientists used
the Internet to communicate with other scientists at different labs and to access powerful computer
systems at distant computing facilities. Scientists also shared the results of their work in technical
papers stored locally on their computer system in ftp sites. Researchers from other facilities used the
Internet to access the ftp directory and obtain these technical papers. Examples of research sites are
NASA and NASA AMES.
2. Company Information
Commercial companies are now using the Web for many purposes. One of the first ways that
commercial companies used the Web was to share information with their employees. Sterling
Software's Web page informs employees about such things as training schedules and C++ Guidelines.
There is also some information which is company private and access is restricted to company
employees only. Another company example is Sun Microsystems, whose site similarly contains
general information about the company.
3. Product Information
One of the ways businesses share information is to present their product information on a Web page.
Some examples are: Cray Research, Sun Microsystems, Hewlett-Packard, and GM's Pontiac Site. The
Web provides an easy and efficient way for companies to distribute product information to their
current and potential customers.
4. Advertising
Along these lines, companies are beginning to actually advertise online. Some examples of different
ways to advertise online are Netscape's Ad Page. Netscape has a list of advertising companies. They
also use a banner for advertisements on their Yahoo Web Page. Starware similarly uses banner
advertisement. These advertisements are created in the established advertising model where the
advertising is positioned between rather than within editorial items. Another type of advertising
focuses on entertaining the customers and keeping them at the companies' site for a longer time
period.
Commercial use restrictions of the Internet were lifted in 1991. This has caused an explosion of
commercial use. More information about business on the Internet can be found at the Commerce Net.
This site has information such as the projected growth of advertising on the Internet and online
services. Commercial Services on the Net has a list of various businesses on the Internet. There are
many unusual businesses listed here, such that you begin to wonder if they are legitimate businesses.
This topic is discussed in more detail in the section on risks and consumer confidence. Business and
Commerce provides consumer product information. The Federal Trade Commission is also quite
concerned about legal business on the Internet.
5. Demographics
WWW users are clearly upscale, professional, and well educated compared with the population as a
whole. For example, from CommerceNet's Survey (CommerceNet is a not-for-profit 501(c)(6) mutual
benefit corporation which is conducting the first large-scale market trial of technologies and business
processes to support electronic commerce via the Internet) as of 10/30/95:
• 25% of WWW users earn household income of more than $80,000 whereas only 10% of the
total US and Canadian population has that level of income.
• 50% of WWW users consider themselves to be in professional or managerial occupations. In
contrast, 27% of the total US and Canadian population categorize themselves to have such
positions.
• 64% of WWW users have at least college degrees while the US and Canadian national level is
29%.
CommerceNet's study also found that there is a sizable base of Internet Users in the US and Canada.
With 24 million Internet users (16 years of age or older) and 18 million WWW users (16 years of age
or older), WWW users are a key target for business applications. Approximately 2.5 million people
have made purchases using the WWW. The Internet is, however, heavily skewed to males in terms of
both usage and users. Access through work is also an important factor for both the Internet and online
services such as America Online and CompuServe. For an example of the size of the market, the total
Internet usage exceeds online services and is approximately equivalent to playback of rented
videotapes.
6. Magazines
Magazines are starting to realize that they can attract customers online. Examples of magazines now
published online are Outside, Economist, and Business Week. These magazines are still published in
hard copy, but they are now also available online. Many of these publications are available free,
sometimes because of a time delay (i.e. the online publications are past issues) or, more usually, to
draw in subscribers for a free initial trial period. Some of these publications may remain free online if
advertisers pay for the publications with their advertisement banners.
7. Newspapers
Some newspapers are beginning to publish online. The San Jose Mercury News is a full newspaper
online, while the Seattle Times offers just classified ads and educational information. The Dow Jones
Wall Street Journal publishes its front page online with highlighted links from the front page to
complete stories. The Journal also provides links to briefing books, which provide financial
information on the company, stock performance, and recent articles and press releases. For an
example of a briefing book, see the Netscape Briefing Book. This is all provided free by the Wall
Street Journal during a trial period, which should last until mid-1996.
8. Employment Ads
Companies are also beginning to list their employment ads online to attract talented people whom they
might not have been able to reach by the more traditional method of advertising in local papers. Sun
Microsystems provides a list of job openings on the Internet. Interested parties can submit a resume or
call to schedule an interview, which saves time for everyone involved. Universities can also help their
students find jobs more easily by using job listings on the Internet. The University of Washington has
a job listing site. Local papers can also make it easier for job searchers by creating a database search
feature. The job searchers can select the type of jobs that they are interested in and the search will
return a list of all the matching job openings. San Jose Mercury News is a good example of this
approach.
9. Stock Quotes
There are several time delayed (15 minutes) ways to track stock performance, and they are all free.
The first to provide this service was PAWWS Financial Network, and now CNN also lets you track
stocks. These are commercial companies which provide stock quotes for free but charge for other
services. A non-commercial site, MIT's Stock & Mutual Fund Charts, updates information daily and
provides a history file for a select number of stocks and mutual funds. Information in these history
files can be graphically displayed so that it is easier to see a stock's performance over time.
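As an illustration of what such a history file makes possible, the sketch below reads daily closing prices from a small CSV file and reports each stock's percentage change over the period. The file layout and figures are invented for the example; the actual services mentioned above formatted their data differently.

# Hedged sketch: computing a stock's performance over time from a history file.
# The CSV layout (symbol,date,close) and the figures are illustrative only.

import csv
from collections import defaultdict

def performance(history_csv):
    """Return {symbol: percent change} from first to last closing price."""
    closes = defaultdict(list)
    with open(history_csv, newline="") as f:
        for row in csv.DictReader(f):
            closes[row["symbol"]].append(float(row["close"]))
    return {sym: 100.0 * (prices[-1] - prices[0]) / prices[0]
            for sym, prices in closes.items()}

if __name__ == "__main__":
    # Create a tiny example file so the sketch is self-contained.
    with open("history.csv", "w", newline="") as f:
        f.write("symbol,date,close\n"
                "NSCP,1995-10-01,60.0\n"
                "NSCP,1995-12-01,81.0\n")
    for symbol, change in performance("history.csv").items():
        print(f"{symbol}: {change:+.1f}% over the period")  # NSCP: +35.0%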
Thinking about investing in a particular country? Information on countries can be found online. For
example, check out the graphical information (GDP, inflation, direct foreign investment, etc.) on
Indonesia.
You can order a pizza online. This Web site is actually a joke, but you can easily imagine people
working late at their offices and ordering out for food online.
A very effective and efficient use of the Web is to order software online. This reduces the
packaging and shipping costs. Also documentation can now be provided online. A good example is
Netscape Navigator. Another example is Macromedia's Shockwave. What is Shockwave for Director?
The description online is as following:
"Shockwave for Director is the product name for the Macromedia Director-on-the-Internet project.
Shockwave for Director includes two distinct pieces of functionality:
(1) Shockwave Plug-In for Web browsers like Netscape Navigator 2.0 which allows movies to be
played seamlessly within the same window as the browser page.
(2) Afterburner is a post-processor for Director movie source files. Multimedia developers use it to
prepare content for Internet distribution. Afterburner compresses movies and makes them ready for
uploading to an HTTP server, from which they'll be accessed by Internet users."
So by reading about the product online, you can decide if it sounds interesting. You can then
immediately get the software by downloading it from Macromedia's computer to yours. Next, you
install it on your system and you're all set. You didn't even have to leave your terminal, and there was
no shipping cost to you or the company.
Ever wonder what the rush hour traffic was like before you head home and get stuck in it? Many
different cities are putting traffic information online. In Seattle, a graphical traffic report is available.
14. Tourism
Plan a trip to Australia or New Zealand with information gathered off the Internet. These and other
countries are on the Internet. So you can plan your vacation from your computer.
Who needs Siskel and Ebert, when you can be your own movie critic? Buena Vista Movie Clips
provides movie clips from many of their new releases.
Chat rooms are a more interactive technology. America Online provides areas where people can "log
on" and converse with others with similar interests in real time. This is the first popular use of
interactivity by the general public. The other uses up until recently have been more static, one-way
distribution of information. Interactivity is the future of the Internet.
Forecast of How the Internet & WWW Might Be Used in the Future
There are many ways that the Internet could be used in the next 3 to 5 years. The main aspect that
they all have in common is the increased use of interactivity on the Internet. This means that the
Internet will shift from being a one-way distribution of information to a two-way information stream.
Scientists will continue to lead the way in this area by watching the results from scientific
experiments and exchanging ideas through live audio and video feeds. Due to budget cuts, this
collaboration should be expected to increase even more to stretch what budget they do have.
One of the first areas where interactivity will increase on the Internet is computer games. People will
no longer have to take turns playing alone or crowd around one machine. Instead they will join a
computer network game and compete against players located at distant sites. An example of this is
Starwave's Fantasy Sports Game. This game is still a more traditional approach of updating statistics
on the computer and players looking at their status. A more active game is Marathon Man, which
portrays players on the screen reacting to various situations. In the future, many of these games will
also include virtual reality.
2. Real Estate
Buying a home online will become possible. While very few people would want to buy a home
without seeing it in person, having house listings online will help reduce the time it takes to purchase
a home. People can narrow down which houses they are actually interested in viewing by seeing
their description and picture online. An example is a list of house descriptions by region of the
country. This will be improved when database search capabilities are added. People can select the
features that they are interested in and then search the database. In response, they will receive a list of
houses that meet their criteria. Also, having several different images of the house, as well as a short
video clip of a walk-through, will help buyers make their selections more quickly. This area is
growing rapidly.
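A database search of the kind described above amounts to filtering listings against the buyer's criteria; the short sketch below shows the idea with an in-memory list of hypothetical listings. The field names, regions and prices are invented for illustration.

# Hedged sketch: searching house listings against a buyer's criteria.
# Listings and field names are invented for illustration.

listings = [
    {"id": 1, "region": "Seattle",  "price": 250000, "bedrooms": 3, "has_photo": True},
    {"id": 2, "region": "Seattle",  "price": 420000, "bedrooms": 4, "has_photo": True},
    {"id": 3, "region": "Portland", "price": 230000, "bedrooms": 3, "has_photo": False},
]

def search(region, max_price, min_bedrooms):
    """Return listings matching the buyer's criteria."""
    return [h for h in listings
            if h["region"] == region
            and h["price"] <= max_price
            and h["bedrooms"] >= min_bedrooms]

if __name__ == "__main__":
    for house in search("Seattle", max_price=300000, min_bedrooms=3):
        print(house)  # only listing 1 matches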
3. Mortgages
After a house is chosen, potential buyers can apply for a mortgage online. No longer will buyers be
restricted to local lending institutions, since many lenders will be able to compete online for business.
Visit an example of an online mortgage computation. In the future, each lender will have a Web page
which will process the mortgage application. One of the main reasons this has not been implemented
is security, which is discussed further under the strategic risks and security section.
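An online mortgage computation of the kind mentioned above typically applies the standard amortization formula, M = P * r(1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly interest rate and n the number of monthly payments. A short sketch, with purely illustrative figures:

# Standard fixed-rate mortgage payment: M = P * r(1+r)^n / ((1+r)^n - 1)
# where P = principal, r = monthly rate, n = number of monthly payments.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    if r == 0:
        return principal / n      # degenerate zero-interest case
    factor = (1 + r) ** n
    return principal * r * factor / (factor - 1)

if __name__ == "__main__":
    # Illustrative figures: $150,000 borrowed at 7.5% over 30 years.
    print(f"Monthly payment: ${monthly_payment(150_000, 0.075, 30):,.2f}")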
4. Buying stocks
Stocks will soon be able to be purchased over the Internet without the assistance of a broker. Charles
Schwab has a prototype that is being tested currently in Florida. Once the security issues are ironed
out, this application will also be active.
5. Ordering products.
Ordering products online is an important application. As mentioned above, the Pizza Page showed
how easy it could be done. Other companies are setting up Web pages to actually do this. An example
is TSI Soccer. Customers can actually order online if they choose to do so. They can even send their
credit card number over the network. Since this is non-secure, most people probably still call the
company to order any item.
6. Live Video
Viewing live video clips will become more common in the future. CNN has files of video clips of
news stories at video vault which can be downloaded and viewed on a home computer. Seeing actual
live video feed is dependent on network speed, and most home users do not have fast enough
connections to make this a practical application yet. Once the speed of network connection increases,
more people will be interested in live video clips.
While AOL users are currently accessing "Chat Rooms" to communicate with other people on the
Internet, they are restricted to text-based communication or possibly an icon as their identity online.
CU-SeeMe from Cornell University provides a means for people to actually see other people online.
However, network speed is once again a limiting factor. If a user is not directly connected to the
Internet (most connections are via modem), then the image is extremely slow. This application will
become more popular with increased network connections.
8. Video Conferencing
On the other hand, businesses will begin using video to communicate with others. There should also
be some applications that businesses can choose to help set up video conferencing. IBM bought
Lotus Notes for this reason last summer. IBM needs to make it a more flexible solution by
integrating Lotus Notes with the Internet, and it is currently in the process of doing so. Netscape
also offers a solution based on Collabra, a software company it purchased. These possible
solutions should encourage businesses to use video conferencing and online training. Additional
information on Video Conferencing is also available.
Some of the risks associated with conducting business through the internet include:
1. Advertising
It is important for advertisers to spend their advertisement money wisely. They can achieve this by
using appropriate methods of advertising and targeting the right market segments. Two different types
of advertising are entertainment ads and traditional advertising. Entertainment ads focus on
entertaining a customer whereas traditional advertising is more direct and usually positioned between
rather than within editorial items. When the entertainment ads work well, they can be quite successful
in drawing customers to their site; however, it is very easy for this type of ad to flop resulting in no
one returning to visit the advertisement site after they see it once. Traditional advertising has better
readership. It can also be used well in targeting the right market segments. For instance, the ESPN
Sports page would be a good site to place ads by Gatorade and Nike. Sports minded people that might
be interested in these products would be likely to access these pages. A good reference for researching
this topic further is at Advertising Age.
2. Security
One of the main factors holding back businesses' progress on the Internet is the issue of security.
Customers do not feel confident sending their credit card numbers over the Internet. Computer
hackers can grab this information off the Internet if it is not encrypted. Netscape and several other
companies are working on encryption methods. Strong encryption algorithms and public education in
the use of the Internet should increase the number of online transactions. After all, it is easier to have a
credit card number stolen in everyday offline transactions. In addition, securing private company
information and enforcing copyright issues still need to be resolved before the business community
really takes advantage of Internet transactions. There are, however, currently some methods within
Netscape for placing the information online yet restricting it to only certain people such as company
employees.
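To illustrate the role of encryption in protecting a credit card number in transit, the sketch below uses symmetric encryption from the third-party Python cryptography package (Fernet). Real online payments rely on negotiated protocols such as SSL/TLS rather than a pre-shared key, so this is only a conceptual sketch and assumes the package is installed.

# Conceptual sketch: symmetric encryption of a card number before transmission.
# Requires the third-party "cryptography" package (pip install cryptography).
# Real e-commerce relies on SSL/TLS; this only illustrates the basic idea.

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, key exchange is the hard part
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"        # illustrative test number
token = cipher.encrypt(card_number)          # what travels over the network
print("ciphertext:", token[:24], "...")

recovered = cipher.decrypt(token)            # only the key holder can do this
assert recovered == card_number
print("decrypted :", recovered.decode())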
3. Consumer confidence
Consumer confidence is essential for conducting business online. Although related to security,
consumer confidence also deals with feeling confident about doing business online. For instance, can
consumers believe that a company is legitimate if it is on the Internet, or could it be some kind of
boiler room operation? Also, companies must be able to substantiate their advertising claims if they
are published online.
4. Network access speed
The speed of network access is a risk for businesses. If businesses spend a lot of money on fast network connections and design their sites with this in mind, yet customers have lower-speed connections, fewer consumers may access the site. Fewer consumers accessing the site most likely means lower profits, on top of the extra cost of the faster network connection. On the other hand, if the company designs for slower access yet customers have faster access, it could still lose out on profits. Currently, some of the options that home users can choose from are traditional modems, ISDN and cable modems. Traditional modems are cheaper, but the current speed is a maximum of 28.8 Kbps. ISDN is faster at 56 Kbps, but more expensive. Cable modems are faster still, with a speed of around 4 Mbps; however, two-way interaction with a cable modem needs more testing to be sure that it works as well as ISDN.
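To see why this matters, a rough back-of-the-envelope comparison, sketched below in Python purely for illustration, shows how long a customer would wait for a one-megabyte page at the nominal speeds quoted above; actual throughput is lower once protocol overheads are included.

# Approximate time to download a 1 MB web page at the nominal line speeds
# quoted above. Real throughput is lower because of protocol overheads.

PAGE_SIZE_BITS = 1 * 1024 * 1024 * 8  # 1 megabyte expressed in bits

line_speeds_bps = {
    "Traditional modem (28.8 Kbps)": 28_800,
    "ISDN (56 Kbps)": 56_000,
    "Cable modem (4 Mbps)": 4_000_000,
}

for name, bps in line_speeds_bps.items():
    seconds = PAGE_SIZE_BITS / bps
    print(f"{name}: roughly {seconds:.0f} seconds")

At these nominal rates the same page that a cable-modem user sees in a couple of seconds keeps a modem user waiting several minutes, which is exactly the design trade-off described above.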
5. Industry standards
Along similar lines of picking industry standards, companies must also be sure that the web browser they develop for is the standard; otherwise, some of the features they are using to highlight their site may not work. Currently the de facto standard is Netscape. There also needs to be a standard language that adds high-quality features such as animation, so that software applications written for the Internet will run on all the different types of architectures customers may have. Major computer industry players have backed Java by Sun Microsystems. So while some areas are becoming standardized, companies must be alert to industry changes to avoid becoming obsolete in hardware, software and data communications.
6. Internet culture
The Internet was originally developed with a philosophy of sharing information and assisting others in their research. The original intent emphasized concern for others, technological advances and not-for-profit organizations.
With the lifting of commercial restrictions in 1991, businesses began joining the Internet community. As with any small town that experiences a sudden increase in population, fast growth can cause problems. Long-standing residents may feel animosity if they believe the newcomers are taking over their community, causing congestion and driving up prices. Businesses need to be conscious of this phenomenon.
While businesses can expect help from Internet users, they will lose this help if they use the Internet only to make a quick profit. As in a large city, people will start to feel less like helping others in need.
Businesses will be more successful on the Internet if they can emphasize how they can help add value
to the Internet rather than focusing on how to make a quick profit. For example, businesses can take
advantage of the opportunity to provide additional Internet services (e.g., services discussed in the
sections on current uses of the Internet and future uses) now that funding from the government is
being reduced.
An example of a city that has grown rapidly, yet is still considered very livable, is Seattle. One of the reasons attributed to Seattle's successful growth is that, despite it being a large city, there are numerous small communities within the city. These small communities retain such benefits as
concern for others within the framework of services that a large city can provide. If businesses along
with the Internet community follow this model, the Internet will have a chance to keep its successful
small town atmosphere while adding increased services for more people.
MODELS OF E-COMMERCE
E-commerce is one of the most popular ways of spreading business on a large scale. Online media are used as a platform to carry out business transactions, and an e-commerce site requires proper e-commerce web development. E-commerce involves the buying and selling of products or services using an electronic payment processor. It can be either a business-to-business (B2B) or business-to-consumer (B2C) transaction. Business activities take place on the Internet, or more specifically, the web. It is the newest form of business transaction and has grown exponentially since the start of the 21st century.
Features
E-commerce has made it possible for customers to contact a business at any time of the day at the
customer's convenience. It makes use of the Internet's communication capabilities through product
displays, sales presentations and order processing and delivery. Using a website as the storefront, a
business carries out the same interactions and transactions as occur within a physical storefront, minus
the face-to-face interaction. Products are selected and placed in shopping carts, then customers
purchase their selected items through an order form or payment page. These pages are typically set up
through a merchant account provider and provide the security encryption that protects the customer's payment information.
Function
The actual e-commerce transaction is where the sale is made. This is where the customer provides her financial information (credit card details, e-check data and shipping information, if applicable) in exchange for delivery of the product. In the case of electronic products, such as e-books or software applications, product delivery is immediate. The seller sets up some form of automated delivery by which customers are redirected to a download page or sent an email with the download link. With physical products, retail sites typically go through a third-party distributor that handles the shipping and delivery process.
1. Business-to-Business (B2B) is one of the major forms of e-commerce. Here the seller and the
buyer participate as business entities, and the business is carried out in the same way a
manufacturer supplies goods to a wholesaler.
2. Business-to-Consumer (B2C). In this case transactions take place between consumers and
business houses, so individuals are also involved in online business transactions.
3. Consumer-to-Consumer (C2C) applies when the business transaction is carried out between
two individuals. For this type of e-commerce, however, the individuals require a platform or
an intermediary for business transactions.
4. Peer-to-Peer (P2P) is another model of e-commerce. This model is technologically more
advanced than the other e-commerce models. In this type of transaction people can share
computer resources; a common server is not required, as a common platform can be used for
the transactions.
5. Mobile Commerce (M-Commerce). With technological advancements, business transactions
can also be done through mobile devices. E-commerce sites can be specially optimized and
programmed so that they can be viewed and used on mobiles, and two mobile users can
contact each other to carry out business transactions.
Considerations
Not unlike the "brick-and-mortar" storefronts, e-commerce sites have to let customers know where
they are and what they have to offer. As new as the e-commerce model is, new methods for marketing
are popping up every day. Some of the most popular methods include email marketing, banner
exchanges, classified ads, pay-per-click advertising and article marketing.
Online businesses are increasingly becoming aware of the importance of building a relationship of
trust with potential and current customers. A common practice is to offer free products, services or
information as a way to get customers interested in a business's products. The built-in speed and
convenience of the Internet has become a new business world in which marketing, product display,
customer relations and product purchase can all happen at one virtual site.
It ensures that customers are able to buy the products they want.
It ensures that producers are able to sell products in a free market.
It ensures a stable financing is available to conduct production.
It ensures that perishable goods are stored in an appropriate manner for consumption.
It ensures that products are transported to all customer markets.
It ensures that quality standards are always maintained.
Market Environment
The market environment directly impacts the functioning of an organization. The three categories of market environment are the internal environment, the micro environment and the macro environment. Organizations develop strategies so as to be successful in all three environments.
The culture and environment of organizations play an important role in delivering value to customers. Internal customers of the organization are those who contribute to delivering the final products. The organization needs strategies to motivate these internal customers so that external customers are satisfied.
Companies deploy different marketing strategies during each stage of the product life cycle. These strategies are closely associated with revenue generation from product sales.
Marketing information comes from internal records and external records. Internal records include day-to-day production data as well as product sales data. Internal data help managers track the marketing impact on the different product mix.
External data, such as the market performance of competitors, also play an important role in the decision-making process. The company's sales force is a huge data source; therefore, it is essential for the system to capture its market intelligence input.
Data collected through external or internal market research agencies also play an important role in providing a holistic market view to managers.
An information system captures information from all the different sources. The information is
analyzed and then distributed to managers for decision-making process.
With the open proliferation of information, customer expectations are reaching new heights. Companies need to figure out the right channel mix with multi-channel strategies. From a manager's standpoint, a marketing channel is defined as any external agency which facilitates the distribution of products and services.
The marketing channel is one of the key drivers for strategies around the marketing mix, i.e. product,
price, place and promotion.
The channel structure is referred to as the combination of different channel members in achieving the organization's marketing mix strategy.
Channel Participants
The marketing channel consists of various players like manufacturers, producers, wholesalers and
retailers. Manufacturers and producers develop their own marketing channel to reach the end user.
However, not all manufacturers have the expertise in managing channel participants. Therefore, they
need wholesalers and retailers for distribution of goods.
There are three types of wholesalers: merchant wholesalers, agents and producers' branch offices. Merchant wholesalers usually have good capacity for storing and managing goods. In contrast, agents work as middlemen between producers and end users. Retailers are responsible for selling goods and products to end users.
However, not every issue in the channel can be considered a conflict. The channel manager needs to assess the frequency of disagreement, the level of disagreement and the importance of the issue.
The main reasons for the emergence of conflict among channel partners are as follows. The first reason is the different business objectives of the channel partners (producers, wholesalers and retailers). The other reason is the narrow vision of each channel partner, i.e. they do not view the channel as a whole but only at their own level.
Conflict between channel partners can be resolved by improving communication among themselves and with the producer. Another way of resolving conflict is by directing all channels towards a single objective of creating customer delight.
Franchise
Another innovation in the marketing channel system is the franchise. Franchising enables brand recognition, standardization of the operating structure, access to a learning curve and a lower financial investment.
Before the advent of information systems, customer-related information was recorded in individual sales representatives' personal books rather than in a centralized data center. This meant that customer information would be lost with the movement of a sales representative.
Therefore, to preserve and utilize customer relationships and improve the performance of the sales force, the sales support system was developed.
Activity Management
This module of the sales support system offers calendar-based activities to plan and coordinate meetings with customer relationship accounts. It consolidates team activities for a given period of time and provides in-depth analysis of the historical and current sales cycle.
Sales Team Management
Individual sales representatives roll up into sales teams, and sales teams report to the sales manager, so a sales manager has to monitor the activities of more than one sales team. This module helps the sales manager generate reports that provide data points on the current sales activities performed by the different sales teams. It also helps the sales force connect with product specialists based in various sales office locations, and territory-wise pipeline management becomes easy for the sales manager.
Contact Management
One of the important needs of the sales force and sales teams is the management of various contact points across different organizations. The contact management module should be able to organize contacts across current and potential client organizations.
Lead Management
The sales force works relentlessly to generate sales leads. The lead management module provides management of leads that come through marketing campaigns and referral management. This module also tracks the characteristics of each lead so as to highlight other possible leads.
Configuration Support
Every organization has distinctive and varied product requirements. It is important for the sales force to have ready access to the different product configurations and their associated prices. This module facilitates configuration support.
Knowledge Management
The modern information system can hold large volumes of data, which can be effectively converted into information.
The advent of the Internet has simplified sales force access to centralized databases. It helps the sales force stay in touch with each other as well as with the sales manager. The availability of the Internet has reduced the cost of managing communication, and mobile devices have further contributed to the proliferation of sales support systems.
Another aspect of sales is after-sales product support. Organizations have on-site or field service staff, and the Internet has made field service management possible on a real-time basis.
A sales support system enhances the productivity and efficiency of the sales force. The sales force remains aware of developments around potential clients on a real-time basis, which increases the probability of closing a sales deal. A productive sales force not only increases market share but also improves the profitability of the organization.
Information Flow
It is very important for the retailer to communicate with the supplier as well as the consumer. From the producer, the retailer should know the following:
The retailer should know when a new product is being launched or whether the producer is introducing a new variant of an existing product.
Retailers should receive regular training from the manufacturer about new products and new technology.
The retailer should have information well in advance about any impending pricing change.
The retailer should also know the sales forecast from the producer for a given line of products.
The consumer is as important to the retailer as the producer, and the retailer also needs corresponding information from the consumer side.
Objective
i. A retail information system should connect all the stores under the company's management.
ii. A retail information system should allow instant information exchange between stores and management.
iii. A retail information system should handle the various aspects of product management.
iv. A retail information system should handle customer analysis.
v. A retail information system should allow the store manager flexible pricing over a financial year.
A retail information system should support basic retail functions like material procurement, storage and dispatch. It should allow the manager to monitor sales of the product mix and daily sales volume, and it should help in inventory management.
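Purely as an illustration of the daily monitoring just described, the short Python sketch below summarises sales volume and value by product from one day's transactions; the record layout (SKU, quantity, unit price) is a hypothetical simplification rather than any particular retail package's data model.

# Illustrative sketch: summarise daily sales volume and value by product,
# the kind of product-mix monitoring a retail information system supports.
# The transaction records here are hypothetical.

from collections import defaultdict

sales = [
    ("MILK-1L", 40, 1.20),
    ("BREAD-400G", 25, 0.90),
    ("MILK-1L", 15, 1.20),
]

volume = defaultdict(int)
value = defaultdict(float)
for sku, quantity, unit_price in sales:
    volume[sku] += quantity
    value[sku] += quantity * unit_price

for sku in sorted(volume):
    print(f"{sku}: {volume[sku]} units sold, total value {value[sku]:.2f}")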
A retail information system is applicable to different types of industries within retail management. An information system can be developed to manage a fashion store, a pharmacy, a grocery store or a toy store.
The current trends show that the use of the Internet and smartphones, and people's confidence in using their credit cards online, are growing exponentially. Hence, e-commerce is here to stay, and we have to adapt ourselves to become smarter online buyers, sellers and web entrepreneurs, because all the basic principles of real-world business apply to e-commerce as well.
E-commerce spending, the number of online buyers and the penetration of e-commerce will surely grow, but the growth will vary from country to country and affect online markets at different times. Eventually, as the markets on every continent mature, the global market will shrink geographic boundaries further, giving additional impetus to a favorable online scenario.
The development of the Internet in the 20th century led to the birth of the electronic marketplace, or e-marketplace, which is now the kernel of electronic commerce (e-commerce). An e-marketplace provides a virtual space where sellers and buyers trade with each other as in a traditional marketplace.
Various kinds of economic transactions and buying and selling of goods and services, as well as
exchanges of information, take place in e-marketplaces. E-marketplaces have become an alternative
place for trading. Finally, an e-marketplace can serve as an information agent that provides buyers and
sellers with information on products and other participants in the market. These features have been
reshaping the economy by affecting the behavior of buyers and sellers.
a. E-business
E-business affects the whole business and the value chains in which it operates. It enables a much
more integrated level of collaboration between the different components of a value chain than ever
before. Adopting e-Business also allows companies to reduce costs and improve customer response
time. Organizations that transform their business practices stand to benefit immensely from
innumerable new possibilities brought about by technology.
E-commerce can be defined as anything that involves an online transaction. This can range from ordering online, through online delivery of paid content, to financial transactions such as the movement of money between bank accounts. One area where there are some positive indications of e-commerce is financial services. Online stock trading saw sustained growth throughout the period of broadband diffusion. E-shopping is available to all those who use a computer.
b. E-commerce integration
The rationale for infusing e-commerce education into all business courses is that technological developments are significantly affecting all aspects of today's business. An e-commerce dimension can be added to the business curriculum by integrating e-commerce topics into existing upper-level business courses. Students would be introduced to e-commerce education and topics in a variety of business courses across different disciplines, e.g. accounting, economics, finance, marketing, management and management information systems. To help ensure that all related business courses in all disciplines pay proper attention to the critical aspects of e-commerce, certain e-commerce topics should be integrated into existing business courses.
c. E-insurance
Traditional insurance requires a certificate for every policy issued by the insurance company. However, paper certificates are prone to problems including loss, duplication and forgery. The conventional certificate is now replaced with an electronic certificate that can be digitally signed by both the insured and the insurance company and verified by a certifying authority.
Online policy purchase is faster, more user-friendly and definitely more secure than the traditional processes, and is therefore more attractive to the insured. At the same time it incurs less cost and requires fewer resources than traditional insurance and is therefore more profitable for the insurance company.
E-insurance also makes the insurance procedure more secure since the policy details are stored
digitally and all transactions are made over secure channels. These channels provide additional market
penetration that is absent in traditional channels and help in earning more revenue than traditional
insurance processes.
CHALLENGES OF E-COMMERCE
Internet-based e-commerce has, besides its great advantages, posed many threats because it is what is popularly called faceless and borderless.
The following are some examples of ethical issues that have emerged as a result of electronic commerce; each is both an ethical issue and an issue uniquely related to electronic commerce.
A. Ethical issues
The following ethical issues relate to e-commerce.
1) Privacy
Privacy has been and continues to be a significant issue of concern for both current and prospective
electronic commerce customers. With regard to web interactions and e- commerce the following
dimensions are most salient:
a. Privacy consists of not being interfered with, having the power to exclude; individual privacy
is a moral right.
b. Privacy is "a desirable condition with respect to possession of information by other persons
about him/herself on the observation/perceiving of him/herself by other persons"
2) Security concerns
In addition to privacy concerns, other ethical issues are involved with electronic commerce. The
Internet offers unprecedented ease of access to a vast array of goods and services. The rapidly
expanding arena of "click and mortar" and the largely unregulated cyberspace medium have however
prompted concerns about both privacy and data security.
C. E-commerce Integration
Besides the many advantages offered by e-commerce education, a number of challenges have been posed to the education system.
Zabihollah Rezaee, Kenneth R. Lambert and W. Ken Harmon (2006) reported that e-commerce integration assures coverage of all critical aspects of e-commerce, but it also has several obstacles.
First, adding e-commerce materials to existing business courses can overburden faculty and students
alike trying to cope with additional subject matter in courses already saturated with required
information. Second, many business faculty members may not wish to add e-commerce topics to their
courses primarily because of their own lack of comfort with technology-related subjects. Third and
finally, this approach requires a great deal of coordination among faculty and disciplines in business
schools to ensure proper coverage of e-commerce education.
D. Legal system
Besides the many advantages offered by IT, a number of challenges have been posed to the legal system. The information transferred by electronic means which culminates in a contract raises many legal issues that cannot be answered within the existing provisions of the Contract Act. The IT Act does not form a complete code for electronic contracts.
Farooq Ahmed (2001) reported that some of the multifaceted issues raised may be summarized as follows.
1. Formation of e-contracts
b) Cyber contracts
2. Validity of e-transactions.
5. Mistake in e-commerce
a) Mutual mistake
b) Unilateral mistake
6. Jurisdiction: cyberspace transactions know no national or international boundaries and are not analogous to the three-dimensional world in which common law principles evolved.
7. Identity of parties
The issues of jurisdiction, applicable law and enforcement of the judgments are not confined to only
national boundaries. The problems raised are global in nature and need global resolution.
• Security
• E-commerce payments
• Supply-chain management
• Sales force, data warehousing, customer relations
• Integrating all of this with existing back-end operations.
For more than two decades, organizations have conducted business electronically by employing a
variety of electronic commerce solutions. In the traditional scenario, an organization enters the
electronic market by establishing trading partner agreements with retailers or wholesalers of their
choosing. These agreements may include any items that cannot be reconciled electronically, such as
terms of transfer, payment mechanisms, or implementation conventions. After establishing the proper
business relationships, an organization must choose the components of their electronic commerce
system. Although these systems differ substantially in terms of features and complexity, the core
components typically include:
a. Workflow application - a forms interface that aids the user in creating outgoing requests or
viewing incoming requests. Information that appears in these forms may also be stored in a
local database.
b. Electronic Data Interchange (EDI) translator - a mapping between the local format and a
globally understood format.
c. Communications - a mechanism for transmitting the data; typically asynchronous or
bisynchronous
d. Value-Added Network (VAN) - a store and forward mechanism for exchanging business
messages
Using an electronic commerce system, a retailer may maintain an electronic merchandise inventory
and update the inventory database when items are received from suppliers or sold to customers. When
the inventory of a particular item is low, the retailer may create a purchase order to replenish his
inventory. As the purchase order passes through the system, it will be translated into its EDI
equivalent, transmitted to a VAN, and forwarded to the supplier’s mailbox. The supplier will check
his mailbox, obtain the EDI purchase order, translate it into his own local form, process the request,
and ship the item.
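By way of illustration only, the retailer-side portion of the flow just described might be sketched in a few lines of Python; the Item record, the to_edi_850 mapping and the VanMailbox class below are hypothetical simplifications, not the interface of any real EDI translator or value-added network.

# Minimal sketch of the retailer-side flow described above: when stock for an
# item falls below its reorder point, build a purchase order, translate it to
# an EDI-style message and drop it in the supplier's VAN mailbox.
# All names (Item, to_edi_850, VanMailbox) are illustrative, not a real EDI API.

from dataclasses import dataclass

@dataclass
class Item:
    sku: str
    on_hand: int
    reorder_point: int
    reorder_qty: int
    supplier_mailbox: str

def to_edi_850(sku: str, qty: int) -> str:
    # Map the local purchase-order format to a globally understood format
    # (very loosely modelled on an ANSI X12 850 purchase order).
    return f"ST*850~PO1*1*{qty}*EA***VP*{sku}~SE~"

class VanMailbox:
    """Store-and-forward mailbox, standing in for a Value-Added Network."""
    def __init__(self):
        self.messages = {}
    def send(self, mailbox_id: str, message: str):
        self.messages.setdefault(mailbox_id, []).append(message)

def replenish(inventory, van: VanMailbox):
    for item in inventory:
        if item.on_hand < item.reorder_point:
            purchase_order = to_edi_850(item.sku, item.reorder_qty)
            van.send(item.supplier_mailbox, purchase_order)  # forwarded to supplier

van = VanMailbox()
replenish([Item("WIDGET-01", on_hand=3, reorder_point=10, reorder_qty=50,
                supplier_mailbox="SUPPLIER-A")], van)
print(van.messages)

The supplier would then poll its mailbox, translate the EDI message back into its own local format, process the request and ship the goods, mirroring the description above.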
These technologies have primarily been used to support business transactions between organizations that have established relationships (i.e. retailer and wholesaler). More recently, due largely to the popularity of the Internet and the World Wide Web, vendors are bringing products directly to the consumer via electronic shopping malls. These electronic malls provide the consumer with powerful browsing and searching capabilities, somewhat duplicating the traditional shopping experience. In this emerging business-to-consumer model, where consumers and businesses meet electronically, business relationships will have to be negotiated automatically.
MOBILE COMPUTING
Mobile computing is human–computer interaction by which a computer is expected to be transported
during normal usage. Mobile computing involves mobile communication, mobile hardware, and
mobile software. Communication issues include ad-hoc and infrastructure networks as well as
communication properties, protocols, data formats and concrete technologies. Hardware includes
mobile devices or device components. Mobile software deals with the characteristics and
requirements of mobile applications.
Devices
Many types of mobile computers have been introduced since the 1990s, including portable computers, personal digital assistants (PDAs), laptops, smartphones and tablet computers.
Limitations
2. Security standards
When working on the move, one is dependent on public networks, requiring careful use of a VPN. Security is a major concern in mobile computing because traffic passes over a large number of interconnected public networks, which makes the VPN easier to attack.
3. Power consumption
When a power outlet or portable generator is not available, mobile computers must rely entirely on
battery power. Combined with the compact size of many mobile devices, this often means unusually
expensive batteries must be used to obtain the necessary battery life.
4. Transmission interferences
Weather, terrain, and the range from the nearest signal point can all interfere with signal reception.
Reception in tunnels, some buildings, and rural areas is often poor.
INTERNET LAB
A Laboratory Information Management System (LIMS), sometimes referred to as a Laboratory
Information System (LIS) or Laboratory Management System (LMS), is a software-based laboratory
and information management system that offers a set of key features that support a modern
laboratory's operations. Those key features include — but are not limited to — workflow and data
tracking support, flexible architecture, and smart data exchange interfaces, which fully "support its
use in regulated environments." The features and uses of a LIMS have evolved over the years from
simple sample tracking to an enterprise resource planning tool that manages multiple aspects of
laboratory informatics.
Due to the rapid pace at which laboratories and their data management needs shift, the definition of
LIMS has become somewhat controversial. As the needs of the modern laboratory vary widely from
lab to lab, what is needed from a laboratory information management system also shifts. The end
result: the definition of a LIMS will shift based on who you ask and what their vision of the modern lab is. Dr. Alan McLelland of the Institute of Biochemistry, Royal Infirmary, Glasgow highlighted this
problem in the late 1990s by explaining how a LIMS is perceived by an analyst, a laboratory manager,
an information systems manager, and an accountant, "all of them correct, but each of them limited by
the users' own perceptions."
Historically the LIMS, LIS, and Process Development Execution System (PDES) have all performed
similar functions. Historically the term "LIMS" has tended to be used to reference informatics systems
targeted for environmental, research, or commercial analysis such as pharmaceutical or petrochemical
work. "LIS" has tended to be used to reference laboratory informatics systems in the forensics and
clinical markets, which often required special case management tools. The term "PDES" has generally
applied to a wider scope, including, for example, virtual manufacturing techniques, while not
necessarily integrating with laboratory equipment.
In recent times LIMS functionality has spread even farther beyond its original purpose of sample
management. Assay data management, data mining, data analysis, and electronic laboratory notebook
(ELN) integration are all features that have been added to many LIMS, enabling the realization of
translational medicine completely within a single software solution. Additionally, the distinction
between a LIMS and a LIS has blurred, as many LIMS now also fully support comprehensive case-
centric clinical data.
Technology
The LIMS is an evolving concept, with new features and functionality being added often. As
laboratory demands change and technological progress continues, the functions of a LIMS will likely
also change. Despite these changes, a LIMS tends to have a base set of functionality that defines it.
That functionality can roughly be divided into five laboratory processing phases, with numerous
software functions falling under each:
i. the reception and log in of a sample and its associated customer data
ii. the assignment, scheduling, and tracking of the sample and the associated analytical workload
iii. the processing and quality control associated with the sample and the utilized equipment and
inventory
iv. the storage of data associated with the sample analysis
v. the inspection, approval, and compilation of the sample data for reporting and/or further
analysis
There are several pieces of core functionality associated with these laboratory processing phases that
tend to appear in most LIMS:
Sample Management
In a laboratory, a worker matching blood samples to the correct patient documents is a typical example; with a LIMS, this sort of sample management is made more efficient.
The core function of LIMS has traditionally been the management of samples. This typically is
initiated when a sample is received in the laboratory, at which point the sample will be registered in
the LIMS. Some LIMS will allow the customer to place an "order" for a sample directly to the LIMS
at which point the sample is generated in an "unreceived" state. The processing could then include a
step where the sample container is registered and sent to the customer for the sample to be taken and
then returned to the lab. The registration process may involve accessioning the sample and producing
barcodes to affix to the sample container. Various other parameters such as clinical or phenotypic
information corresponding with the sample are also often recorded. The LIMS then tracks chain of
custody as well as sample location. Location tracking usually involves assigning the sample to a
particular freezer location, often down to the granular level of shelf, rack, box, row, and column.
Other event tracking such as freeze and thaw cycles that a sample undergoes in the laboratory may be
required.
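The registration, location-tracking and chain-of-custody ideas above can be illustrated with a small Python sketch; the Sample class and its fields are invented for illustration and do not correspond to any specific LIMS product's data model.

# Illustrative-only model of the sample management described above: register a
# sample (initially in an "unreceived" state), receive it, and track its
# freezer location and chain-of-custody events with timestamps.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class Sample:
    accession_id: str                            # barcode-style identifier
    state: str = "unreceived"                    # unreceived -> received -> reported
    location: Optional[str] = None               # freezer / shelf / rack / box / row / column
    custody_log: List[Tuple[str, str]] = field(default_factory=list)

    def _log(self, action: str) -> None:
        # Chain-of-custody entry with a timestamp, as audit logging requires.
        self.custody_log.append((datetime.utcnow().isoformat(), action))

    def receive(self) -> None:
        self.state = "received"
        self._log("sample received and registered in the laboratory")

    def store(self, location: str) -> None:
        self.location = location
        self._log(f"stored at {location}")

sample = Sample(accession_id="LAB-2024-000123")
sample.receive()
sample.store("Freezer 2 / Shelf 3 / Rack A / Box 5 / Row 2 / Column 7")
print(sample.state, sample.location, len(sample.custody_log), "custody events")

A production LIMS would, of course, persist these records in a database, enforce audit logging and electronic signatures, and expose them to the workflow, reporting and instrument-interface functions described elsewhere in this section.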
Modern LIMS have implemented extensive configurability, as each laboratory's needs for tracking
additional data points can vary widely. LIMS vendors cannot typically make assumptions about what
these data tracking needs are, and therefore vendors must create LIMS that are adaptable to individual
environments. LIMS users may also have regulatory concerns to comply with such as CLIA, HIPAA,
GLP, and FDA specifications, affecting certain aspects of sample management in a LIMS solution.
One key to compliance with many of these standards is audit logging of all changes to LIMS data, and
in some cases a full electronic signature system is required for rigorous tracking of field-level changes
to LIMS data.
Modern LIMS products now also allow for the import and management of raw assay data results.
Modern targeted assays such as QPCR and deep sequencing can produce tens of thousands of data
points per sample. Furthermore, in the case of drug and diagnostic development as many as 12 or
more assays may be run for each sample. In order to track this data, a LIMS solution needs to be
adaptable to many different assay formats at both the data layer and import creation layer, while
maintaining a high level of overall performance. Some LIMS products address this by simply
attaching assay data as BLOBs to samples, but this limits the utility of that data in data mining and
downstream analysis.
Client-side options
A LIMS has utilized many architectures and distribution models over the years. As technology has
changed, how a LIMS is installed, managed, and utilized has also changed with it.
The following represents architectures which have been utilized at one point or another:
Thick-client
A thick-client LIMS is a more traditional client/server architecture, with some of the system residing
on the computer or workstation of the user (the client) and the rest on the server. The LIMS software
is installed on the client computer, which does all of the data processing. Later it passes information to
the server, which has the primary purpose of data storage. Most changes, upgrades, and other
modifications will happen on the client side.
This was one of the first architectures implemented into a LIMS, having the advantage of providing
higher processing speeds (because processing is done on the client and not the server) and slightly
more security (as access to the server data is limited only to those with client software). Additionally,
thick-client systems have also provided more interactivity and customization, though often at a greater
learning curve. The disadvantages of client-side LIMS include the need for more robust client
computers and more time-consuming upgrades, as well as a lack of base functionality through a web
browser. The thick-client LIMS can become web-enabled through an add-on component.
Thin-client
A thin-client LIMS is a more modern architecture which offers full application functionality accessed
through a device's web browser. The actual LIMS software resides on a server (host) which feeds and
processes information without saving it to the user's hard disk. Any necessary changes, upgrades, and
other modifications are handled by the entity hosting the server-side LIMS software, meaning all end-
users see all changes made. To this end, a true thin-client LIMS will leave no "footprint" on the
client's computer, and only the integrity of the web browser need be maintained by the user. The
advantages of this system include significantly lower cost of ownership and fewer network and client-
side maintenance expenses. However, this architecture has the disadvantage of requiring real-time
server access, a need for increased network throughput, and slightly less functionality. A sort of
hybrid architecture that incorporates the features of thin-client browser usage with a thick client
installation exists in the form of a web-based LIMS.
Some LIMS vendors are beginning to rent hosted, thin-client solutions as "software as a service"
(SaaS). These solutions tend to be less configurable than on-premises solutions and are therefore
considered for less demanding implementations such as laboratories with few users and limited
sample processing volumes.
Web-enabled
A web-enabled LIMS architecture is essentially a thick-client architecture with an added web browser
component. In this setup, the client-side software has additional functionality that allows users to
interface with the software through their device's browser. This functionality is typically limited only
to certain functions of the web client. The primary advantage of a web-enabled LIMS is the end-user
can access data both on the client side and the server side of the configuration. As in a thick-client
architecture, updates in the software must be propagated to every client machine. However, the added
disadvantages of requiring always-on access to the host server and the need for cross-platform
functionality mean that additional overhead costs may arise.
Web-based
Arguably one of the most confusing architectures, web-based LIMS architecture is a hybrid of the
thick- and thin-client architectures. While much of the client-side work is done through a web
browser, the LIMS also requires the additional support of Microsoft's .NET Framework technology
installed on the client device. The end result is a process that is apparent to the end-user through the
Microsoft-compatible web browser, but perhaps not so apparent as it runs thick-client-like
processing in the background. In this case, web-based architecture has the advantage of providing
more functionality through a more friendly web interface. The disadvantages of this setup are more
sunk costs in system administration and support for Internet Explorer and .NET technologies, and
reduced functionality on mobile platforms.
Configurability
LIMS implementations are notorious for often being lengthy and costly. This is due in part to the
diversity of requirements within each lab, but also to the inflexible nature of LIMS products for
adapting to these widely varying requirements. Newer LIMS solutions are beginning to emerge that
take advantage of modern techniques in software design that are inherently more configurable and
adaptable — particularly at the data layer — than prior solutions. This means not only that
implementations are much faster, but also that the costs are lower and the risk of obsolescence is
minimized.
i. A LIMS traditionally has been designed to process and report data related to batches of
samples from biology labs, water treatment facilities, drug trials, and other entities that handle
complex batches of data. A LIS has been designed primarily for processing and reporting data
related to individual patients in a clinical setting.
ii. A LIMS needs to satisfy good manufacturing practice (GMP) and meet the reporting and
audit needs of the U.S. Food and Drug Administration and research scientists in many
different industries. A LIS, however, must satisfy the reporting and auditing needs of hospital
accreditation agencies, HIPAA, and other clinical medical practitioners.
iii. A LIMS is most competitive in group-centric settings (dealing with "batches" and "samples")
that often deal with mostly anonymous research-specific laboratory data, whereas a LIS is
usually most competitive in patient-centric settings (dealing with "subjects" and "specimens")
and clinical labs.
REVISION EXERCISES
1. What are some of the impacts of the Internet on business?
2. Discuss the future of the Internet in business.
3. What are the risks associated with the use of the Internet in business?
4. Discuss the models of e-commerce in business.
5. What is the importance of channel participants in marketing?
6. What is a sales support system?
7. What are the characteristics of a retail information system?
8. Discuss the challenges of e-commerce.
9. What is mobile computing and what are some of its limitations?
10. Discuss the business opportunities in e-commerce.
CHAPTER 7
INFORMATION SYSTEMS STRATEGY
SYNOPSIS
Introduction…………………………………………………………… 210
Overview of Business Strategy Hierarchy……………………………. 212
The Strategic Process and Information Systems Planning……………. 224
Development of an Information Systems Strategy……………………. 235
Aligning Information Systems to The Organisation's Corporate Strategy…. 238
Managing Information Systems Strategy………………………………. 241
Information Systems For Competitive Advantage……………………… 265
INTRODUCTION
Through in-depth analyses of the business environment and the strategy of the business as well as an
examination of the role that information and systems can and could fulfill in the business, a set of
known requirements and potential opportunities can be identified. These needs and options will result
from business pressures, the strategy of the business and the organization of the various activities,
resources and people in the organization. Information needs and relationships can then be converted
into systems requirements and an appropriate organization of data and information resources.
To enable these 'ideal' applications to be developed and managed successfully, resources and technologies will have to be acquired and deployed effectively. In all cases, systems and information will already exist and, normally, IS resources and technology will already be deployed.
Any strategy, therefore, must not only identify what is eventually required but must also understand accurately how much has already been achieved.
The IS/IT strategic plan must therefore define a migration path that overcomes existing weaknesses, exploits strengths and enables the new requirements to be achieved in such a way that it can be resourced and managed appropriately.
A strategy has been defined as 'an integrated set of actions aimed at increasing the long-term well-
being and strength of the enterprise.'
The IS/IT strategy must be integrated not only in terms of information, systems and technology via a
coherent set of actions but also in terms of a process of adaptation to meet the changing needs of the
business as they evolve. 'Long term' suggests uncertainty, both in terms of the business requirements
and the potential benefits that the various applications and technologies will offer. Change is the only
thing that is certain. These changing circumstances will mean that the organization will have to be
capable of effective responses to unexpected opportunities and problems.
Prior research on IS strategy has been heavily influenced by the treatment of strategy in the field of
strategic management.
The first of these streams focuses on the central question of what is strategy, or what constitutes a
strategy. Although, to date, there is no model that has received consensus, there are several strategy
models, including Porter’s five-forces and the value chain model, core competency theory, the
resource based view of the firm, and other tools that aid in the analysis, development, and execution
of strategy. While each of these tools reflects a useful perspective of strategy, they do not provide
direct help in providing a clear definition of strategy.
The second major stream emphasizes characteristics for distinguishing strategic decisions from non-
strategic decisions. Frequently cited characteristics of strategic decisions include their irreversible nature, their expected impact on long-term firm performance, and their directional nature, which gives guidance to non-strategic decisions. Similar to the first stream of research, this line of strategy
research does not offer a tight definition of strategy per se.
The third stream has focused on the central questions that emerge from the existence of strategy at
different organizational levels. For example, at a corporate level, strategy that involves answering
what businesses the corporation should be in is viewed as a major area of interest.
In contrast, business unit strategy deals primarily with addressing how to gain competitive advantage
in a given business and hence is also referred to as competitive strategy. Finally, functional strategy is
primarily concerned with resource allocations to achieve the maximization of resource productivity.
While strategy may include various decisions at different organizational levels, strategy is
nevertheless recognized to be more than the sum of the strategic decisions it includes. In this sense,
Lorange and Vancil (1977) consider strategy as a “conceptual glue” that ensures coherence between
individual strategic decisions. However, whether this form of integration is achieved ex ante (i.e.,
through planning) or ex post (i.e., emergent) has remained a point of debate.
Prior IS research, however, has not explicitly defined what IS strategy is and, instead, has focused more on how to conduct strategic planning, how to align IS strategy with a given business strategy, or who should be involved in forming the strategy. On one
hand, it is quite clear that, applying Whittington’s (1993) framework, most IS strategies described in
the extant literature fall into the “classical” quadrant of strategy (i.e., IS strategic planning is a product
of calculated deliberation with profit maximization as the goal). On the other hand, there remains a
large degree of obscurity about IS strategy due to the absence of established typologies such as those
found within business strategy literature. Moreover, a variety of terms have been employed to
represent similar constructs such as IT strategy, IS strategy, IS/IT strategy or information strategy,
among others. This plethora of terms creates confusion among researchers trying to interpret existing
works. As stated earlier, information systems is a broad concept (covering the technology components
and human activities related to the management and employment process of technology within the
organization); therefore, we find it most meaningful to use the term IS strategy throughout this paper.
More specifically, following Mintzberg’s (1987) fifth definition of strategy as a perspective, we
define IS strategy as the organizational perspective on the investment in, deployment, use, and
management of information systems. We note that the term IS strategy is chosen to embrace rather
than to exclude the meanings of the other terms. With this definition, we do not regard the notion of
IS strategy as an ex post only or “realized IS strategy” as defined in the IS strategic alignment
literature. Nor do we suggest that an IS strategy must be intentional as implied in the strategic
information systems planning literature. This is because organizations, even without a (formal or intentional) IS strategy, do use IS and hence make decisions regarding IS. For example, recent
research has examined the pattern of IS deployment as an indication of IS strategy. However, we
cannot infer an intentional IS strategy from the mere existence of IS within a company. Therefore, we
contend that examining IS strategy as a perspective may resolve this dilemma. Furthermore, our
definition of IS strategy suggests that while IS strategy is part of a corporate strategy, conceptually it
should not be examined as part of a business strategy. Rather, it is a separate perspective from the
business strategy that addresses the scope of the entire organization (i.e., IS investment, deployment,
and management) to improve firm performance. This view is consistent with Earl’s (1989) work,
which argues that IS strategy should both support and question business strategy. Therefore, this
definition also implies that IS strategy should be examined at the organizational level, rather than at a
functional level. Hence, while each individual business and IS executive can have his/her own view of
IS, organizational IS strategy should reflect the collective view shared across the upper echelon of the
organization. Meanwhile, this notion has implications for advancements in the stream of research that
seeks to “align” the two separate strategies—business and IS.
The origin of the hierarchical view of strategies dates back to the 1920s when some of the largest US
firms started pursuing a strategy of diversification. At that time, these firms were typically organized
functionally. But diversified growth using these organization structures soon led to severe
coordination and resource allocation problems. Top management, in firms such as Dupont and
General Motors, responded to this problem with the creation of the multidivisional organization
structure, or the M-Form.
Following Chandler’s (1962) pioneering work showing how a strategy of diversification led to the use
of a multidivisional structure, other researchers sought theoretical reasons for the emergence and
adoption of the M-form organization structure. Using transaction cost economics reasoning,
Williamson (1975) argued that the M-form was adopted because it did a better job than capital
markets in allocating scarce capital between competing investment proposals. He suggested that both
the monitoring and policing costs were also lower in the multidivisional structure when compared to
capital markets.
However, the multidivisional structure was itself becoming unwieldy. Leading firms like General
Electric (GE) invited McKinsey & Company, one of the founders of the now flourishing management
consulting industry, to examine its corporate structure. GE had at that time nearly 200 profit centers
and 145 departments. The McKinsey consultants advised GE’s top management to organize their
firm’s businesses along strategic lines, influenced more by external industry conditions than internal
organizational considerations. GE’s profit centers and departments were consolidated into a smaller
number of Strategic Business Units (SBU).
Each SBU became a stand-alone entity deserving of its own strategy and dedicated functional support.
While corporate strategy was concerned with domain selection (the portfolio of businesses that the
firm should have in order to deliver value to its shareholders); business unit strategy was concerned
with domain navigation (competitive positioning of each of the firm’s business within its industry
environment). Finally, functional strategies specified the contributions that were expected from each
function and their relative salience to the success of the firm’s business strategy.
Corporations also turned to consultants for answers regarding resource allocation. Starting with
BCG’s growth share matrix, numerous other consulting firms introduced portfolio planning matrix as
an answer to the resource allocation problem. The two axes of the matrix were typically the industry’s
attractiveness and the company’s position within the industry. Each of the corporation’s strategic
business units could be mapped onto this matrix. SBUs with strong market positions in growing
industries, the “star” businesses, were lavished with additional resources; even as SBUs with weak
positions in stagnating or declining industries, the so-called "dog" businesses, were slated for divestment. By the mid-1970s, portfolio planning became very popular. Indeed, by the early 1980s
over half of the Fortune 500 had introduced portfolio planning techniques.
Further, in order to bridge the multiple levels of decision making within the firm, top management needed a process. Formal planning and control systems began filling this void. A study by Stanford
Research Institute showed that a majority of US companies used formal planning systems by 1963.
Vancil and Lorange (1977) and Lorange (1980) describe three distinct phases in a typical strategic
planning process: agenda setting, strategic programming and budgeting. Aspirations of top
management when cycled through these three phases and three layers of management (corporate,
divisional and functional) resulted in concrete budgets for business units and functions within the
firm. When the three phases were followed in a rigid sequential fashion, the intent was frozen when
strategic programs began to be developed. In turn, the programs were non-negotiable once budgets
were decided.
By the early 1980s, with the diffusion of M-form structure, the creation of SBUs, the adoption of
formal planning systems and portfolio planning techniques, the separation of business unit and
corporate strategies was complete in the US and Europe. Functional strategies had to be subservient to
the business strategies that they supported, and business strategies in turn had to be aligned with
corporate strategy.
Furthermore, this hierarchical view of strategy was also mapped on to levels of management within
the firm. The locus of decision making for each strategy was thus clearly specified. The corporate
office was the primary architect of strategy.
Divisional managers helped in a more restricted fashion by detailing their business strategy within
strict corporate guidelines. Functional managers supported their divisional heads with well aligned
functional strategies.
It was assumed then that this unidirectional causality and hierarchically determined locus of decision
making was the sine qua non for superior firm performance. No theoretical basis was provided for this
assertion. Nor were there systematic empirical studies conducted to verify this claim. The assumption
was that since the framework emerged from the practices of high performing companies like General
Motors, Dupont, ITT and GE, it had to have universal appeal. It appeared to be a useful framework in
practice and that seemed to have sufficed.
However, the hierarchical view of strategies has since unraveled because of both empirical and
theoretical developments on corporate, business and functional strategies. It has also lost its relevance
today mostly because strategic management has changed dramatically due to an increasingly turbulent
business context. Strategy making in a transnational corporation cannot afford to be hierarchical.
Business strategy
In the business world, managers must take part in decisions about information systems, even though they do not have to understand every technical concept involved. If managers leave IT decisions entirely to other people, it could cause problems for their company. The information systems (IS) function manages the company's information infrastructure and must be aligned with the business in the same way the company aligns its employees. A framework for understanding IS's impact on companies is the Information Systems Strategy Triangle, which relates business, organizational and IS strategies. Companies try to balance and complement these three strategies: if you make a change in one strategy, you must reflect that change in the other two, and all three strategies must constantly be adjusted to keep up with the changing world.
These strategies, in order to work, must be aligned. Alignment in this sense means that the company's current business strategy is enabled, supported and unconstrained by technology. Two other concepts
that are similar are synchronization and convergence. Synchronization means technology helps
current business strategies and helps create new ones to use for the future. Convergence means that
business and IS strategies are combined and the leaders of these two understand both concepts.
Alignment is the most important concept of these and is important in achieving harmony in
organization, business and IS strategies.
A strategy is a coordinated set of actions to fulfill goals, objectives and purposes. You must set
certain limits on what you want to achieve. To formulate a strategy you must have a mission, a clear
and compelling statement that unifies your effort and describes what your organization is about. A
mission statement describes what your company can do and why it exists. A business strategy is a
strategy stating where the business is going and how it expects to achieve its results. It also shows
how a company can communicate its goals. A business strategy is formulated in response to market
forces, customer demands and organizational capabilities.
Michael Porter created the generic strategies framework. This framework helps managers develop strategies to enhance their competitive advantage, since all businesses must sell their products against other competitors. There are three primary strategies in Porter's framework. The first is cost leadership, which results when a company's goal is to have the lowest costs without diminishing the quality of its products. Only one cost leader can emerge, and if everyone starts cutting costs a price war can begin, which can eventually lead to higher costs or loss of profit.
The second Porter strategy is differentiation, where the company's products or services are unique compared with others in the marketplace. For this to work, the uniqueness of the product or service must matter enough to the consumer to justify its price. The third Porter strategy is focus, in which a company limits its scope to a smaller segment of the market. Focus has two variants: cost focus, where the goal is to seek a cost advantage within that segment, and differentiation focus, where the company distinguishes its products or services within that segment. By doing this, the goal is to gain a local competitive advantage within the segment rather than an advantage across the entire market.
There are variations on Porter's differentiation strategy. The shareholder value model states that the timing of the use of specialized knowledge can create a differentiation advantage as long as that knowledge remains unique; customers buy products from the company to gain access to this unique knowledge, but it is a one-time event and the advantage is static. Another variant is the unlimited resources model, in which a company with a larger base of resources can sustain losses more easily than others and so outlast them while pursuing a differentiation strategy. The Porter models and their variants are useful for understanding how a company seeks its profits and builds new advantages. They balance the competitive forces exerted by buyers, suppliers, competitors in the market and new products and services within the industry.
The Porter models were created at a time when the rate of change was much slower than it is today. The D'Aveni (hypercompetition) framework identifies seven approaches through which an organization can create its business strategy:
These provide a useful framework for identifying different aspects of the business strategy and help make the company more competitive. Managers can identify new answers to their competition and new opportunities to strengthen their current abilities. One application of hypercompetition is to "destroy your business": take apart your current business model and create a new one that will actually help the business grow.
When a manager leaves IS decisions entirely to others, the business strategy will suffer. The business strategy needs to drive the IS strategy, not the other way around, and changes in either should be reflected in the other. To understand the business strategy you must know what the business goal is, what the plan for achieving it is, and who the crucial competitors in the field are. The Porter and D'Aveni frameworks help answer these questions.
Organizational strategy includes the organization's design and the choices it makes to define, set up, coordinate and control its work processes. It is a plan that answers how the company will organize to pursue its goals and implement its business strategy. One simple framework for understanding organizational design
is a business diamond. The business diamond includes the organizational plan and the following four
concepts: business processes, values and beliefs, tasks and structures and management/measurement
systems. The business diamond states that the execution of the organization strategy is composed of
the best combination of control, cultural and organizational variables. Organizational variables are
decision rights, business processes, formal reporting relationships and informal networks. Control
variables are the availability of data, nature and quality of planning and effectiveness of performance
measurement/evaluation systems. Cultural variables are the values of the company. These three
variables are managerial levers used by decision makers to enforce necessary changes in their
company. To understand organizational strategy you must answer these questions:
What are the important structures and reporting relationships within the company?
Who holds the decision rights to critical decisions?
What are the characteristics, experiences and skill levels of people within the company?
What are the key business processes?
What control systems are in place? What is the culture of the company?
Answers to these inform any assessment of the company’s use of IS.
IS strategy is the plan the company uses to provide its information services and enables it to carry out its business strategy. Business strategy is a function of competition, positioning and capabilities. There are four IS infrastructure components: hardware – the physical components such as computers; software – the programs on the computers; network – how information is exchanged with others; and data – how information is stored.
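These four components can be thought of as a simple inventory that an IS strategy discussion can work from. The short Python sketch below is only an illustration; the class name and the example entries are assumptions made for this text, not part of any standard framework.

from dataclasses import dataclass, field

@dataclass
class ISInfrastructure:
    # Minimal inventory of the four IS infrastructure components.
    hardware: list = field(default_factory=list)  # physical components, e.g. servers and PCs
    software: list = field(default_factory=list)  # programs running on the hardware
    network: list = field(default_factory=list)   # how information is exchanged with others
    data: list = field(default_factory=list)      # how and where information is stored

# Hypothetical example entries, for illustration only
infra = ISInfrastructure(
    hardware=["branch PCs", "central server"],
    software=["payroll package", "sales ledger"],
    network=["office LAN", "leased line to head office"],
    data=["customer master file", "transactions database"],
)
print(infra.data)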
The Halo Effect is an error arising from the basic human tendency to make specific inferences on the basis of a general impression. Three misconceptions created by the halo effect are:
1) There exists a formula that companies can apply to make them succeed.
2) Firm performance is driven completely by internal factors.
3) A decision that turned out badly must have been poorly made.
Managers should avoid formulas and understand that success is relative, think of decisions as probabilities, and evaluate the decision-making process, not just the outcomes.
Strategy can be considered at three levels:
a) corporate level
b) business unit level
c) functional or departmental level.
While strategy may be about competing and surviving as a firm, one can argue that products, not corporations, compete, and products are developed by business units. The role of the corporation then
is to manage its business units and products so that each is competitive and so that each contributes to
corporate purposes.
i. Reach
Defining the issues that are corporate responsibilities; these might include identifying the overall
goals of the corporation, the types of businesses in which the corporation should be involved, and the
way in which businesses will be integrated and managed.
At the business unit level, the strategic issues are less about the coordination of operating units and
more about developing and sustaining a competitive advantage for the goods and services that are
produced. At the business level, the strategy formulation phase deals with:
The functional level of the organization is the level of the operating divisions and departments. The
strategic issues at the functional level are related to business processes and the value chain. Functional
level strategies in marketing, finance, operations, human resources, and R&D involve the
development and coordination of resources through which business unit level strategies can be
executed efficiently and effectively.
Functional units of an organization are involved in higher level strategies by providing input into the
business unit level and corporate level strategy, such as providing information on resources and
capabilities on which the higher level strategies can be based. Once the higher-level strategy is
developed, the functional units translate it into discrete action-plans that each department or division
must accomplish for the strategy to succeed.
Organizations invest in research and development for superior content production, or they acquire or merge with other companies. The purpose of an acquisition is either to expand the current product offering or to add content so as to provide end-to-end solutions.
Organization strategy can be devised using Porter's Five Forces model. The organization's strategy should be to increase its customer base and provide customized solutions. Service also plays an important role in organization strategy: service is the key factor in maintaining good customer relationships. The organization needs to devise a strategy which is a convergence of technology, brand marketing, product innovation and world-class service.
A physical value chain consists of procurement of raw materials, operations, delivery, sales and marketing, and service. Information technology has changed the way we look at the value chain and has introduced the concept of the virtual value chain.
a) Gather
The information age has aided the digitization of information, and the proliferation of information is higher than ever before. The internet provides data and information about markets, economies, government policies, etc. Companies gather the information relevant to them as the first stage in the virtual value chain.
b) Organizing
Information gathered in the first stage of the virtual value chain is in the form of text, data tables, video, etc. The challenge in the second stage is to organize the gathered information in a way that allows it to be retrieved easily for further analysis.
c) Selection
In the third stage of the virtual value chain, organizations analyze the captured information to add value for customers. Organizations develop better ways of dealing with customers, product delivery, etc. using this information.
d) Synthesization
In the fourth stage of the virtual value chain, organizations synthesize the available data so that it reaches the end user in the desired format.
e) Distribution
The last stage of the virtual value chain is the delivery of information to the end user. In a physical value chain products are delivered to customers; in the virtual value chain this is replaced by a digital product, for example, digital streaming of movies compared with mail delivery of DVDs. Therefore, today's businesses are also known as information businesses.
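Read together, the five stages form a simple information pipeline: gather, organize, select, synthesize and distribute. The Python sketch below is a minimal illustration of that pipeline; the functions and the sample records are hypothetical and are included only to make the flow of information concrete.

# The five virtual value chain stages expressed as a small pipeline (illustration only).

def gather(sources):
    # Stage 1: collect raw information from internal and external sources.
    return [record for source in sources for record in source]

def organize(records):
    # Stage 2: store the gathered information so it can be retrieved easily.
    return sorted(records, key=lambda r: r["topic"])

def select(records, topic):
    # Stage 3: analyze the captured information and pick what adds value for a decision.
    return [r for r in records if r["topic"] == topic]

def synthesize(records):
    # Stage 4: combine the selected information into the format the end user needs.
    return {"points": [r["text"] for r in records]}

def distribute(report):
    # Stage 5: deliver the digital product to the end user.
    print(report)

market_feed = [{"topic": "pricing", "text": "competitor has cut prices"}]
policy_feed = [{"topic": "policy", "text": "new import duty announced"}]

distribute(synthesize(select(organize(gather([market_feed, policy_feed])), "pricing")))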
Importance of Virtual Value Chain
The concept of a virtual value chain was devised in light of current internet penetration. It is an addition to the existing value chain. Information technology provides a holistic view of the physical value chain and helps make it efficient and effective.
Today's information systems are capable of capturing information from every part of the value chain. This information is utilized to optimize performance at each stage. The same information can also be utilized to improve the customer experience at each stage, for example through new products and services, thus generating more revenue for the company.
Primary Activities
The primary activities are directly associated with the manufacturing of products like supply
management, plant operations, etc.
Secondary Activities.
The secondary activities are referred to as support functions such as finance, HR, information
technology, etc.
In the era of advanced information and communication technology, many businesses have started operating with the internet as their medium. Through the internet, many commercial activities like buying, selling and auctioning take place. This online commercial activity is known as e-commerce. The e-commerce value chain includes a series of activities such as electronic funds transfer, internet marketing, distribution channels, supply chain, etc.
Every activity within a physical value chain has an inherent information component. The amount of information present in its activities determines a company's orientation towards e-commerce. It has been observed that companies with a high information presence will adopt e-commerce faster than companies with a lower information presence.
For example, a computer manufacturer has a high information presence, i.e. it can provide a great deal of product information through its website, and consumers also have the flexibility to determine the product configuration using the website. Such computer manufacturers, and companies with comparable business models, are also likely to adopt e-commerce.
The activities which comprise the value chain are undertaken by companies to produce and sell products and services. Some of the activities within the value chain are understanding customer needs, designing products, procuring materials for production, production itself, storage of products, distribution of products, after-sales service and customer care.
There are two ways to assess information presence. The first way is by looking at the industry, and the second way is by looking at the product. In an industry with high information presence, it has been observed that:
E-Strategy
Companies with a high information presence were the first to look at e-commerce as an alternative way of conducting business. For example, for software companies much of the business is done through the internet: their websites provide in-depth product information through e-brochures, videos, client opinions, etc.; sales leads are generated online; purchases and funds transfers are completed online; and after-sales service is also provided online.
These high information companies have made substantial investment in human resources and
information/communication technology.
Challenges
Companies which are moving towards e-commerce need to have a business model developed to support online activities. The dotcom bust of 2000 served as a hard lesson for companies doing e-commerce.
The concept of the value chain was introduced by Michael Porter. The concept helps categorize the activities undertaken by an enterprise to deliver a successful product to a customer. Since its introduction in the 1980s, the concept has been at the forefront of developing strategies around customer delight and commercial success. The value chain is a series of activities undertaken by an organization to deliver a product to end users, and the concept applies not only to a single manufacturing organization but to all the players in the value chain. One of the purposes of the value chain is to understand which activities add value during the creation of the end product.
Value Chain
An enterprise undertakes several primary activities as well as secondary activities to deliver the final product to customers. Primary activities are defined as activities which directly support the production of the product or service. Secondary activities, or support activities, are activities which support the primary activities.
Primary Activities
Primary activities in the value chain are directly related to the production and delivery of the final product. The objective of these activities is to add value to the product that is greater than the cost of the activities, which ensures that the company can generate a healthy margin and stay in business. Primary activities mainly consist of the inbound supply chain, operations, dispatch, sales and marketing, and service, as listed below; a simple worked illustration of the value-over-cost idea follows the list.
Inbound supply chain is made up of activities like receiving raw materials, storing raw materials and inventory management.
Operations consist of activities which convert the different raw materials into the final product.
Dispatch activities consist of sending the final product to distributors, retailers, etc.
Sales and marketing activities include promotion of products to potential as well as existing customers, networking with channel partners, etc.
Service consists of activities like solving customer issues before the sale of the product as well as after the sale of the product, i.e. customer care or customer support.
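The value-over-cost objective can be expressed as simple arithmetic across the primary activities. In the Python sketch below the activity names follow the list above, while the cost and value-added figures are invented purely for illustration.

# Hypothetical cost and value-added figures for the primary activities (illustration only).
primary_activities = [
    {"name": "inbound supply chain", "cost": 40,  "value_added": 50},
    {"name": "operations",           "cost": 120, "value_added": 160},
    {"name": "dispatch",             "cost": 20,  "value_added": 25},
    {"name": "sales and marketing",  "cost": 30,  "value_added": 45},
    {"name": "service",              "cost": 15,  "value_added": 20},
]

total_cost = sum(a["cost"] for a in primary_activities)
total_value = sum(a["value_added"] for a in primary_activities)

# A positive margin means the chain adds more value than the activities cost.
print(f"cost={total_cost}, value added={total_value}, margin={total_value - total_cost}")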
A commercial value chain is any value chain an organization uses to achieve its organizational goal. Every company in a given industry will have its own value chain. However, the objective of all the different value chains is to add value at every stage until the product is delivered. The value chain of an e-commerce business includes activities such as:
b) Customer Interaction
Website design and navigation should ensure that potential buyers are able to reach the required web
page. Another option available is customers entering their requirement and website displaying
potential products.
E-strategy Formulation
The two most important factors which determine a successful strategy are customer requirements and commercial scalability. Without either, the business will fail in its venture. Customers expect superior quality in the products and services they purchase; for e-commerce, quality means an easily navigable website, secure transactions and sound website management.
For companies to develop and manage e-commerce sites, they have to invest in manpower and technology. E-commerce sites consist of complex software and hardware structures. Companies choose the technology to run their sites based on cost-benefit analysis and projected scalability.
A successful e-commerce strategy model consists of organizational structure and organizational positioning.
Organization Structure
A successful strategy starts with vision and mission statements. This vision comes from corporate leadership. Corporate leadership should keep an open mind about prevailing and emerging technology and should be flexible in changing strategy in tune with an ever-changing world.
The last important portion of organizational structure is organizational learning. The organization needs to maintain and encourage a culture of organizational learning, which prepares the company for the adoption of new strategy and the introduction of new technology.
Organizational Positioning
The second important factor of e-strategy is the organizational positioning in technology, brand,
service and market.
The internet has provided an alternative medium through which an organization can benefit in brand development. People are logging onto the internet more than ever, which has provided a golden opportunity for an organization to reinforce its brand leadership or create brand awareness. Organizations have managed to achieve phenomenal growth using the internet by assessing market conditions pre-emptively and responding with the correct market offering.
Clearly, in the current business environment it is important to acknowledge the importance of e-commerce and prepare a strategy which provides an organization with a competitive advantage.
Strategic information systems planning, or SISP, is based on two core arguments. The first is that, at a minimum, a firm's information systems investments should be aligned with the overall business strategy and in some cases may even become an emerging source of competitive advantage. While no one disagrees with this, operations management researchers are just starting to study how this alignment takes place and what the measurable benefits are. One issue under examination is how a manufacturer's business strategy, characterized as either "market focused" or "operations focused," affects its ability to garner efficiency versus customer service benefits from its Enterprise Resource Planning (ERP) investments.
The second core argument behind SISP is that companies can best achieve IS-based alignment or
competitive advantage by following a proactive, formal and comprehensive process that includes the
development of broad organizational information requirements. This is in contrast to a “reactive”
strategy, in which the IS group sits back and responds to other areas of the business only when a need
arises. Such a process is especially relevant to ERP investments, given their costs and long-term
impact. Segars, Grover and Teng have identified six dimensions that define an excellent SISP
process (notice that many of these would apply to the strategic planning process in other areas as
well):
1. Comprehensiveness
2. Formalization
Formalization is “the existence of structures, techniques, written procedures, and policies that guide
the planning process”.
3. Focus
Focus is “the balance between creativity and control orientations inherent within the strategic
planning system”. An innovative orientation emphasizes innovative solutions to deal with
opportunities and threats. An integrative orientation emphasizes control, as implemented through
budgets, resource allocation, and asset management.
4. Top-down flow
SISP should be initiated by top managers, with the aid of support staff.
5. Broad participation
Even though the planning flow is top-down, participation must involve multiple functional areas and,
as necessary, key stakeholders at lower levels of the organization.
6. High consistency
SISP should be characterized by frequent meetings and reassessments of the overall strategy.
The recommendations found in the SISP literature have been echoed in the operations management
literature. It has been suggested that firms should institutionalize a formal top-down planning process
for linking information systems strategy to business needs as they move toward evolution in their
management orientation, planning, organization, and control aspects of the IT function.
Background
For a long time the relationship between the information systems function and corporate strategy was not of much interest to the top management of firms. Information systems were thought to be synonymous with corporate data processing and treated as a back-room operation supporting day-to-day mundane tasks. In the 80's and 90's, however, there was a growing realization of the need to make information systems of strategic importance to an organization. Consequently, strategic information systems planning (SISP) is a critical issue. In many industry surveys, improved SISP is often mentioned as the most serious challenge facing IS managers.
Planning for information systems, as for any other system, begins with the identification of needs. In order to be effective, development of any type of computer-based system should be a response to need, whether at the transaction processing level or at the more complex information and support systems levels. Such planning for information systems is much like strategic planning in management.
Objectives, priorities, and authorization for information systems projects need to be formalized. The
systems development plan should identify specific projects slated for the future, priorities for each
project and for resources, general procedures, and constraints for each application area. The plan must
be specific enough to enable understanding of each application and to know where it stands in the
order of development. Also the plan should be flexible so that priorities can be adjusted if necessary.
A strategic capability architecture - a flexible and continuously improving infrastructure of organizational capabilities - is the primary basis for a company's sustainable competitive advantage. Proponents of this view have emphasized the need for continuously updating and improving the strategic capabilities architecture.
SISP is the analysis of a corporation’s information and processes using business information models
together with the evaluation of risk, current needs and requirements. The result is an action plan
showing the desired course of events necessary to align information use and needs with the strategic
direction of the company (Battaglia, 1991). The same article emphasizes the need to note that SISP is
a management function and not a technical one. This is consistent with the earlier distinction between
the older data processing views and the modern strategic importance view of Information Systems.
SISP thus is used to identify the best targets for purchasing and installing new management
information systems and help an organization maximize the return on its information technology
investment. A portfolio of computer-based applications is identified that will assist an organization in executing its business plans and realizing its business goals. There is a growing realization that the
application of information technology (IT) to a firm’s strategic activities has been one of the most
common and effective ways to improve business performance.
Overview
Strategic systems are information systems that are developed in response to corporate business
initiative. They are intended to give competitive advantage to the organization. They may deliver a
product or service that is at a lower cost, that is differentiated, that focuses on a particular market
segment, or is innovative.
Strategic information management is a salient feature in the world of information technology (IT). In
a nutshell, strategic information management helps businesses and organizations categorize, store,
process and transfer the information they create and receive. It also offers tools for helping companies
apply metrics and analytical tools to their information repositories, allowing them to recognize
opportunities for growth and pinpoint ways to improve operational efficiency.
General Definition
Strategic information systems are those computer systems that implement business strategies; they are those systems where Information Services resources are applied to strategic business opportunities
in such a way that the computer systems have an impact on the organization’s products and business
operations. Strategic information systems are always systems that are developed in response to
corporate business initiative. The ideas in several well-known cases came from Information Services
people, but they were directed at specific corporate business thrusts. In other cases, the ideas came
from business operational people, and Information Services supplied the technological capabilities to
realize profitable results.
Most information systems are looked on as support activities to the business. They mechanize
operations for better efficiency, control, and effectiveness, but they do not, in themselves, increase
corporate profitability. They are simply used to provide management with sufficient dependable
information to keep the business running smoothly, and they are used for analysis to plan new
directions. Strategic information systems, on the other hand, become an integral and necessary part of
the business, and they affect the profitability and growth of a company. They open up new markets
and new businesses. They directly affect the competitive stance of the organization, giving it an
advantage against the competitors.
Most literature on strategic information systems emphasizes the dramatic breakthroughs in computer
systems, such as American Airlines' Sabre System and American Hospital Supply’s terminals in
customer offices. These, and many other highly successful approaches, are most attractive to think about, and it is always possible that an equivalent success may be attained in your organization. There are many possibilities for strategic information systems, however, which may not be dramatic breakthroughs, but which will certainly become a part of corporate decision making and will increase corporate profitability. The development of any strategic information system always enhances the image of Information Services in the organization, and leads to information management having a more participatory role in the operation of the organization.
The three general types of information systems that are developed and in general use are financial systems, operational systems, and strategic systems. These categories are not mutually exclusive and, in fact, they always overlap to some extent. Well-directed financial systems and operational systems may well become the strategic systems for a particular organization.
Financial systems are the basic computerization of the accounting, budgeting, and finance operations of an organization. These are similar and ubiquitous in all organizations because the computer has proven to be ideal for the mechanization and control of financial systems; these include the personnel systems because the headcount control and payroll of a company are of prime financial concern.
Financial systems should be one of the bases of all other systems because they give a common,
controlled measurement of all operations and projects, and can supply trusted numbers for indicating
departmental or project success. Organizational planning must be tied to financial analysis. There is
always a greater opportunity to develop strategic systems when the financial systems are in place, and
required figures can be readily retrieved from them.
Operational systems, or services systems, help control the details of the business. Such systems will
vary with each type of enterprise. They are the computer systems that operational managers need to
help run the business on a routine basis. They may be useful but mundane systems that simply keep
track of inventory, for example, and print out reorder points and cost allocations. On the other hand,
they may have a strategic perspective built into them, and may handle inventory in a way that
dramatically impacts profitability. A prime example of this is the American Hospital Supply inventory
control system installed on customer premises. Where the great majority of inventory control systems
simply smooth the operations and give adequate cost control, this well-known hospital system broke through with a new vision of the use of an operational system for competitive advantage. The great
majority of operational systems for which many large and small computer systems have been
purchased, however, simply help to manage and automate the business. They are important and
necessary, but can only be put into the "strategic" category if they have a pronounced impact on the
profitability of the business.
All businesses should have both long-range and short-range planning of operational systems to ensure that the possibilities of computer usefulness will be seized in a reasonable time. Such planning will involve project analysis and costing, system development life cycle considerations, and specific technology planning, such as for computers, databases, and communications. There must be computer capacity
planning, technology forecasting, and personnel performance planning. It is more likely that those in
the organization with entrepreneurial vision will conceive of strategic plans when such basic
operational capabilities are in place and are well managed.
Operational systems, then, are those that keep the organization operating under control and most cost
effectively. Any of them may be changed to strategic systems if they are viewed with strategic vision.
They are fertile grounds for new business opportunities.
Strategic systems are those that link business and computer strategies. They are the systems where new business strategies have been developed and can be realized using Information Technology.
They may be systems where new computer technology has been made available on the market, and
planners with an entrepreneurial spirit perceive how the new capabilities can quickly gain competitive
advantage. They may be systems where operational management people and Information Services
people have brainstormed together over business problems, and have realized that a new competitive
thrust is possible when computer methods are applied in a new way.
There is a tendency to think that strategic systems are only those that have been conceived at what popular, scientific writing sometimes calls the "achtpunkt." This is simply synthetic German for "the point where you say 'acht!' or 'that's it!'" The classical story of Archimedes discovering the principle of the density of matter by getting into a full bathtub, seeing it overflow, then shouting "Eureka!" or "I have found it!" is a perfect example of an achtpunkt. It is most pleasant and profitable if someone is brilliant enough, or lucky enough, to have such an experience. The great majority of people must be content, however, to work step-by-step at the process of trying to get strategic vision, trying to integrate information services thinking with corporate operational thinking, and trying to conceive of new directions to take in systems development. This is not an impossible task, but it is a slow task that requires a great deal of communication and cooperation. If the possibilities of strategic systems are clearly understood by all managers in an enterprise, and they approach the development of ideas and the planning systematically, the chances are good that strategic systems will result. These may not be as dramatic as American Airlines' Sabre, but they can certainly be highly profitable.
There is general agreement that strategic systems are those information systems that may be used for gaining competitive advantage. How is competitive advantage gained? At this point different writers list different possibilities, but none of them claim that there may not be other openings to move through.
Some of the more common ways of thinking about gaining competitive advantage are:
d) Innovation
Develop products or services through the use of computers that are new and appreciably different from other available offerings. Examples of this are automatic credit card handling at service stations and automatic teller machines at banks. Such innovative approaches not only give new opportunities to attract customers, but also open up entirely new fields of business, so that their use has very elastic demand.
Almost any data processing system may be called "strategic" if it aligns the computer strategies with
the business strategies of the organization, and there is close cooperation in its development between
the information Services people and operational business managers. There should be an explicit
connection between the organization’s business plan and its systems plan to provide better support of
the organization’s goals and objectives, and closer management control of the critical information
systems.
Many organizations that have done substantial work with computers since the 1950s have long used
the term "strategic planning" for any computer developments that are going to directly affect the
conduct of their business. Not included are budget or annual planning, the planning of developing Information Services facilities, and the many "housekeeping" tasks that are required in any
corporation. Definitely included in strategic planning are any information systems that will be used by
operational management to conduct the business more profitably. A simple test would be to ask
whether the president of the corporation, or some senior vice presidents, would be interested in the
immediate outcome of the systems development because they felt it would affect their profitability. If
the answer is affirmative, then the system is strategic.
Strategic systems, thus, attempt to match Information Services resources to strategic business opportunities where the computer systems will have an impact on the products and the business operations. Planning for strategic systems is not defined by calendar cycles or routine reporting. It is defined by the effort required to impact the competitive environment and the strategy of a firm at the point in time that management wants to move on the idea.
Effective strategic systems can only be accomplished, of course, if the capabilities are in place for the
routine basic work of gathering data, evaluating possible equipment and software, and managing the
routine reporting of project status. The calendarized planning and operational work is absolutely
necessary as a base from which a strategic system can be planned and developed when a priority
situation arises. When a new strategic need becomes apparent, Information Services should have laid
the groundwork to be able to accept the task of meeting that need.
Strategic systems that are dramatic innovations will always be the ones that are written about in the literature. Consultants in strategic systems must have clearly innovative and successful examples to attract the attention of senior management. It should be clear, however, that most Information Services personnel will have to leverage the advertised successes to gain funding for their own systems. These systems may not have an Olympic effect on an organization, but they will have a good chance of being clearly profitable. That will be sufficient for most operational management, and will draw out the necessary funding and support. It helps to talk about the possibilities of great breakthroughs, as long as it is always kept in mind that there are many strategic systems developed and installed that are successful enough to be highly praised within the organization and offer a competitive advantage, but will not be written up in the Harvard Business Review.
• Main approach in the present SIS era: entrepreneurial (user innovation), with multiple approaches (bottom-up development, top-down analysis, etc.) used at the same time.
Strategic Information Systems Planning in the present SIS era is not an easy task because such a
process is deeply embedded in business processes. These systems need to cater to the strategic
demands of organizations, i.e., serving the business goals and creating competitive advantage as well
as meeting their data processing and MIS needs. The key point here is that organizations have to plan
for information systems not merely as tools for cutting costs but as means to adding value. The
magnitude of this change in perspective of IS/IT’s role in organizations is highlighted in a Business
Week article, ‘The Technology Payoff’ (Business Week, June 14, 1993).
Throughout the 1980s US businesses invested a staggering $1 trillion in information technology.
This huge investment did not result in a commensurate productivity gain - overall national
productivity rose at a 1% annual rate compared with nearly 5% in Japan. Using information technology merely to automate routine tasks without altering the business processes is identified as
the cause of the above productivity paradox. As IT is used to support breakthrough ideas in business
processes, essentially supporting direct value adding activities instead of merely cost saving, it has
resulted in major productivity gains. In 1992, productivity rose nearly 3% and the corporate profits
went up sharply. According to an MIT study quoted in the above article, the return on investment in
information systems averaged 54% for manufacturing and 68% for all businesses surveyed. This
impact of information technology on re-defining, re-engineering businesses is likely to continue and it
is expected that information technology will play increasingly important roles in the future. For example,
Pant, et al. (1994) point out that the emerging vision of virtual corporations will become a reality only
if it is rooted in new visionary information technology. It is information technology alone which will
carve multiple ‘virtual corporations’ simultaneously out of the same physical resources and adapt
them without having to change the actual organizations. Thus, it is obvious that information
technology has indeed come a long way in the SIS era, offering unprecedented possibilities which, if not capitalized on, would turn into unprecedented risks. As Keen (1993) has morbidly but realistically pointed out, organizations not planning for strategic information systems may fail to spot the business implications of competitors' use of information technology until it is too late for them to
react. In situations like this, when information technology changes the basics of competition in an
industry, 50% of the companies in that industry disappear within ten years.
SISP methodologies can be divided into two broad categories:
a) impact and
b) alignment.
a) Impact Methodologies
Impact methodologies help create and justify new uses of IT, while the methodologies in the
“alignment” category align IS objectives with organizational goals.
Once the value chain is charted, executives can rank order the steps in importance to determine which
departments are central to the strategic objectives of the organization. Also, executives can then
consider the interfaces between primary functions along the chain of production, and between support
activities and all of the primary functions. This helps in identifying critical points of inter-
departmental collaboration. Thus, value chain analysis:
(a) is a form of business activity analysis which decomposes an enterprise into its parts. Information
systems are derived from this analysis.
(b) helps in devising information systems which increase the overall profit available to a firm.
(c) helps in identifying the potential for mutual business advantages of component businesses, in the
same or related industries, available from information interchange.
Strengths
The main strength of value chain analysis is that it concentrates on direct value adding activities of a
firm and thus pitches information systems right into the realm of value adding rather than cost cutting.
Weaknesses
Although very useful and intuitively appealing, value chain analysis suffers from a few weaknesses, namely:
a) it only provides a higher level information model for a firm and fails to address the developmental and implementation issues;
b) it fails to define a data structure for the firm because of its focus on internal operations instead of data;
c) the basic concept of a value chain is difficult to apply to non-manufacturing organizations where the product is not tangible and there are no obvious raw materials;
d) it does not provide automated support for carrying out analysis.
Value chain analysis, therefore, needs to be used in conjunction with some other methodology which
addresses the development and implementation issues and defines a data structure.
Consequently, critical success factors are areas of activity that should receive constant and careful
attention from management.
Rockart originally developed the CSF approach as a means to understanding the information needs of
CEOs. The approach has subsequently been applied to the enterprise as a whole and has been
extended into a broader planning methodology. It has been made the basis of many consulting
practices and has achieved major results where it has been used well.
CSFs can exist at a number of levels, i.e., industry, organizational, business unit, or manager’s. CSFs
at a lower level are derived from those at the preceding higher level. The CSF approach introduces information technology into the initial stages of the planning process and helps provide a realistic assessment of IT's contribution to the organization.
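Because CSFs at each level are derived from the level above, a CSF set can be recorded as a simple hierarchy from which information requirements are read off. The Python sketch below is only a hypothetical illustration; the example factors are invented and do not come from Rockart's work.

# Hypothetical hierarchy of critical success factors (illustration only).
csf_hierarchy = {
    "industry":       ["regulatory compliance", "distribution reach"],
    "organizational": ["low-cost operations", "strong brand reputation"],
    "business unit":  ["on-time delivery", "retail partner loyalty"],
    "manager":        ["weekly stock-out report", "delivery exception alerts"],
}

# Walk from the highest level down; each level's factors should trace back to the one above,
# and the manager-level factors point to the key information requirements.
for level in ["industry", "organizational", "business unit", "manager"]:
    print(level + ":", ", ".join(csf_hierarchy[level]))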
Strengths
CSF analysis provides a very powerful method for concentrating on key information requirements of
an organization, a business unit, or of a manager. This allows the management to concentrate
resources on developing information systems around these requirements. Also, CSF analysis is easy to
perform and can be carried out with few resources.
Weaknesses
(a) although a useful and widely used technique, CSF analysis by itself is not enough to perform comprehensive SISP - it does not define a data architecture or provide automated support for analysis.
(b) to be of value, the CSF analysis should be easily and directly related back to the objectives of the
business unit under review. It has been the experience of the people using this technique that generally
it loses its value when used below the third level in an organizational hierarchy (Ward, 1990, p.164).
(c) CSFs focus primarily on management control and thus tend to be internally focused and analytical rather than creative.
(d) CSFs partly reflect a particular executive’s management style. Use of CSFs as an aid in identifying
systems, with the associated long lead-times for developing these systems, may lead to giving an
executive information that s/he does not regard as important (Ibid.).
(e) CSFs do not draw attention to the value-added aspect of information systems. While CSF analysis
facilitates identification of information systems which meet the key information needs of an
organization/business unit, the value derived from these systems is not assessed.
b) Alignment Methodologies
Strengths
Because BSP combines a top down business analysis approach with a bottom up implementation
strategy, it represents an integrated methodology. In its top down strategy, BSP is similar to the CSF
method in that it develops an overall understanding of business plans and supporting IS needs through
joint discussions. IBM being the vendor of this methodology, it has the advantage of being better
known to the top management than other methodologies.
Weaknesses
(a) BSP requires a firm commitment from the top management and their substantial involvement.
(b) it requires a high degree of IT experience within the BSP planning team.
(c) there is a problem of bridging the gap between top down planning and bottom up implementation.
(d) a major weakness of BSP is the considerable time and effort required for its successful implementation.
Also known as PRO planner and developed by Robert Holland, this methodology is similar to BSP. A business functional model is defined by analyzing the major functional areas of a business. A data architecture is derived from the business function model by combining information requirements into generic data entities and subject databases. New systems and their implementation schedules are then derived from this architecture. Although the steps in the SSP procedure are similar to those in BSP, a major difference between SSP and BSP is SSP's automated handling of the data collected during the SISP process. The software produces reports in a wide range of formats and with various levels of detail.
Affinity reports show the frequencies of accesses to data and clustering reports give guidance for
database design. Users are guided through menus for on-line data collection and maintenance. The
software also provides a data dictionary interface for sharing SSP data with an existing data dictionary
or other automated design tools.
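An affinity report of this kind is essentially a count of how often business functions access each data entity; entities that are accessed together are candidates for the same subject database. The Python sketch below is a hypothetical illustration of such a count; it is not the PRO planner software, and the functions, entities and access log are invented.

from collections import Counter

# Hypothetical log of (business function, data entity) accesses collected during planning.
accesses = [
    ("order entry", "customer"), ("order entry", "product"), ("order entry", "order"),
    ("invoicing", "customer"), ("invoicing", "order"), ("invoicing", "order"),
    ("inventory control", "product"), ("inventory control", "warehouse"),
]

# Affinity report: frequency of access to each data entity.
for entity, count in Counter(entity for _, entity in accesses).most_common():
    print(f"{entity}: accessed {count} times")

# Entities frequently accessed by the same functions can be clustered into one subject database.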
In addition to SSP, Holland System’s Corporation also offers two other methodologies - one for
guiding the information system architecture and another for developing data structures for modules
from the SISP study. The strengths and weaknesses of BSP apply to SSP as well.
This methodology was developed by James Martin (1982) and provides techniques for building
enterprise, data and process models. These models combine to form a comprehensive knowledge base
which is used to create and maintain information systems.
The basic philosophy underlying this technique is the use of structured techniques in all the tasks relating to planning, analysis, design and construction of enterprise-wide information systems. Such structured techniques are expected to result in well integrated information systems. IE relies on an information systems pyramid for an enterprise. The pyramid has three sides which represent the organization's data, the activities the organization carries out using the data, and the technology that is employed in implementing information systems. IE views all three aspects of information systems from a high-level, management oriented perspective at the top to a fully detailed implementation at the bottom. The pyramid describes the four levels of activities, namely strategy, analysis, systems design and construction, that involve data, activities and technology.
In addition to information engineering, Martin advocates the use of critical success factors. A major
difference between IE and other methodologies is the automated tools provided by IE to link its
output to subsequent systems development efforts, and this is the major strength of this methodology.
Major weaknesses of IE have been identified as difficulty in securing top management commitment, difficulty in finding a team leader meeting the criteria, too much user involvement and the fact that the planning exercise takes a long time.
By a technique we mean a set of steps and a set of rules which define how a representation of an IS is derived and handled using some conceptual structure and related notation. By using a technique, system developers perceive, define and communicate certain aspects of the current or desired object system. These aspects are defined by the conceptual structure of the technique and represented by the notation. By a tool we generally mean a computer-based application which supports the use of a modeling technique. Tool-supported modeling functionality includes abstraction of the object system into models, checking that models are consistent, converting results from one form of model and representation to another, and providing specifications for review.
Examples of modeling techniques are data flow diagrams and activity models. As a technique, a data
flow diagram identifies and names the objects (e.g. process, store) and relationships (e.g. data flow,
control flow) which it considers important in developing an IS. Other techniques include other sets of
objects and relationships. Modeling techniques also have a notation and a representation form. In a
data flow diagram the notation for a process is a circle, and for a data flow a solid line with an arrow-
head. The representation form of a data flow diagram is a graphical diagram. Furthermore, a
technique defines some principles on how the models should be derived (e.g. decomposition of
processes while modeling with data flow diagrams). In other words, a modeling technique specifies
which kind of aspects of an object system need to be perceived, in what notation each aspect is
represented, and how such representations should be produced.
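The same idea can be shown in a few lines of code: a technique fixes the object types, the relationships and the notation, and a tool can then check simple consistency rules over a model. The Python sketch below is a minimal, hypothetical illustration of a data flow diagram recorded in this way; it is not taken from any particular CASE tool.

# A data flow diagram reduced to its conceptual structure: objects, relationships, notation.
NOTATION = {"process": "circle", "store": "parallel lines", "flow": "solid line with arrow-head"}

processes = {"validate order", "update stock"}
stores = {"order file", "stock file"}
flows = [
    ("customer", "validate order", "order details"),
    ("validate order", "order file", "accepted order"),
    ("validate order", "update stock", "stock movement"),
    ("update stock", "stock file", "new stock level"),
]

# One simple rule of the technique: every data flow must start or end at a named process
# (external entities such as "customer" may act as sources or sinks).
for source, target, label in flows:
    assert source in processes or target in processes, f"dangling flow: {label}"

print(f"{len(processes)} processes, {len(stores)} stores, {len(flows)} data flows")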
A method can be considered as a predefined and organized collection of techniques and a set of rules
which state by whom, in what order, and in what way the techniques are used to achieve or maintain
some objectives. In short, we call this method knowledge. Thus, our definition of method includes
both the product and process aspects, although dictionaries define the term “method” as meaning “the
procedure of obtaining an object” and therefore emphasize the process rather than the representation
(i.e. product of the method use). In contrast, Wijers (1991) notes that most ISD method text-books
focus on feasible specifications rather than on the process of how to develop such specifications. In
addition, a method also includes knowledge about method users, development objectives and values.
We will analyze the types of method knowledge in more detail in the next section.
Examples of methods include Structured Analysis and Design, and the object-oriented methods of
Booch (1991) and Rumbaugh et al. (1991). A short example of method knowledge is in order. The
method knowledge of SA/SD can be discussed in terms of the techniques (e.g. data flow diagram,
entity-relationship diagram) and their interrelations. In SA/SD the overall view of the object system is
perceived through a hierarchical structure of the processes that the system includes. This overall
topology is completed by data transformations; how data is used and produced by different processes,
how it is transformed between processes, and where it is stored. Moreover, the data used in the system
needs to be defined in a data-dictionary and interrelations between data need to be specified with
entity-relationship diagrams. Thus, methods describe not only how models are developed but also
how they are organized and structured. Furthermore, since ISD methods aim to carry out the change
process from a current to a desired state they should also include knowledge for creating alternative
design solutions and provide guidelines to select among them (Tolvanen and Lyytinen 1994).
SA/SD and other methods put forward a defined and a limited number of techniques including their
conceptual structures and notations. In the same way as there is variety in techniques, there is also
diversity among methods (Welke and Konsynski 1980). Different methods include different types and
sets of techniques. Interrelations between techniques can be defined differently even between methods
which use the same techniques, and the procedures for building and analyzing models can be
different. Although there is diversity among ISD methods they include similarities, e.g. they apply the
same concepts and notations. To understand these differences and similarities we shall analyze several
methods in more detail by describing types of method knowledge.
The categorization applied here is illustrated in the figure below whose shape leads us to call it a shell
model. According to the model, methods are based on a number of concepts and their interrelations.
These concepts are applied in modeling techniques to represent models of ISs according to a notation.
Processes must be based on the concepts and they describe how models are created, manipulated, and
used with the notation. The concepts and their representations are derived, analyzed, corrected etc. by
various stakeholders. In addition, methods include specific development objectives about a ‘good’ IS, and have some underlying values, “weltanschauung” and other philosophical assumptions.
The shape of a shell emphasizes that different types of method knowledge are neither exclusive, nor
orthogonal. Each type of knowledge complements the others and all are required to yield a
“complete” method, although many methods focus only on the concepts and notations included in
modeling techniques. Consider, for example, the concept of decomposition. In the procedural guidelines of Structured Analysis (DeMarco 1979) this concept is described as a top-down refinement of the system starting from the high level diagram. In the modeling technique this concept is implemented as the possibility for every process to have a sub-diagram, and in the balancing of the data flows between the decomposed process and its sub-diagram.
The concept of decomposition also affects other method knowledge in several ways: the method
should explain who identifies, specifies, and reviews decompositions; the partitioning of the system
into a hierarchical structure dominates the design decisions and reveals the underlying assumptions of
the method, i.e. that an IS can be effectively designed by partitioning the system based on its
processes.
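Balancing can be checked mechanically: the data flows crossing the boundary of a parent process must match the flows crossing the boundary of its sub-diagram. The short Python sketch below illustrates such a check using the same simplified flow representation as the earlier sketch; the process names and flows are hypothetical.

# Hypothetical balancing check between a parent process and its sub-diagram (illustration only).

def boundary_flows(flows, members):
    # Labels of flows that cross into or out of the given set of processes.
    return {label for source, target, label in flows
            if (source in members) != (target in members)}

parent_flows = [("customer", "process order", "order details"),
                ("process order", "warehouse", "picking list")]

sub_diagram_flows = [("customer", "check credit", "order details"),
                     ("check credit", "allocate stock", "approved order"),
                     ("allocate stock", "warehouse", "picking list")]

parent = boundary_flows(parent_flows, {"process order"})
children = boundary_flows(sub_diagram_flows, {"check credit", "allocate stock"})

print("balanced" if parent == children else f"unbalanced: {parent ^ children}")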
In such a scenario, it is important that a company invests in technology which is aligned with the overall strategy of the company. This calls for technology strategy formulation.
This alignment between the CIO and the CEO revolves around issues like:
Planning
Corporate planning plays an important role in the alignment of technology with organization strategy. In a perfect scenario the CIO and CEO would have the same planning horizon; however, it is observed that the CEO and CIO often do not share the same vision, from planning to execution.
This introduces the concept of planning lead time. In some organizations, strategy execution does not match the technology planning horizon and execution: by the time the technology strategy is executed, further advancements have appeared in that area, and the competitive edge is lost.
In the above scenario, companies become reactive rather than proactive. Companies need to adjust to challenges posed by market leaders and trend setters. A strong CIO-CEO relationship ensures the organization develops an understanding of technological challenges and their impact on the overall organization.
Organizational Structure
An organization needs to ensure that its structure is agile and flexible enough to accommodate changes in technology, and efficient and effective enough to deal with the demands of market change. The organization needs to develop and maintain technology systems which are flexible and adaptive. Three types of technology infrastructure are available to companies: ERP, data warehousing and knowledge management. All three dimensions - ERP, data warehousing and knowledge management - provide a cutting edge to the organization.
Organizational Systems
An organization invests in technology looking at its present needs, future requirements and its capability to provide a competitive edge. Systems can be classified into three categories depending upon the technology timeline: new systems, matured systems and declining systems. New systems have the latest technology and provide a competitive edge. As time progresses the system and technology are adopted by more companies, and the competitive edge is lost. Finally, systems and technology reach the obsolete stage where their usage has declined and they are to be phased out.
Treating information systems as strictly under the purview of the IT department can lead to an adverse situation for the company. Therefore, it is essential for the organization to recognize the contribution of information systems to business effectiveness.
Developments in information systems have brought opportunities as well as threats. The onus is on the organization to identify opportunities and implement them. The organization needs to develop strategies which can best utilize information systems to increase overall productivity.
The most common practice with regard to information systems is automation. Though automation is helpful, innovation using information systems gives the organization a competitive edge.
Organizations are fully aware that proliferation of information systems has reduced product life cycle,
reduced margin and brought in new products. In such scenario customer satisfaction alone will not
suffice, organization needs to strive for customer delight. Information systems with data warehousing
and analytics capability can help organization collect customer feedback and develop products, which
exceed customer expectation. This customer delight will lead to a loyal customer base and brand
ambassador.
Organizations require different types of information systems to meet distinct processes and
requirements. Efficient business transaction systems make an organization productive. Business
transaction systems ensure that routine processes are captured and acted upon effectively, for example
sales transactions, cash transactions, payroll, etc.
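As a simple illustration of how a business transaction system captures a routine event, the Python sketch below records a hypothetical sales transaction; the field names and figures are invented for the example and do not describe any particular package.

# Minimal sketch of a business transaction system capturing routine sales
# transactions. Field names and figures are hypothetical.
from datetime import datetime

transactions = []   # in a real system this would be a database table

def record_sale(customer, items):
    # Capture one routine sales transaction so that later processes
    # (billing, stock control, reporting) can act on it.
    total = sum(qty * price for _, qty, price in items)
    txn = {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "customer": customer,
        "items": items,
        "total": total,
    }
    transactions.append(txn)
    return txn

record_sale("ACME Ltd", [("widget", 10, 2.50), ("bolt", 100, 0.10)])
print(transactions[0]["total"])   # 35.0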
Further, information systems are required for executive decisions. Top leadership requires precise
internal as well as external information to devise a strategy for the organization. Decision support systems
are designed to execute this exact function.
Business transaction systems and executive decision support systems contribute to overall
organizational productivity.
Information systems have facilitated the increase in workers' productivity. With the introduction of email,
video conferencing and shared whiteboards, collaboration across organizations and departments has
increased. This increased collaboration ensures smooth execution and implementation of various
projects across geographies and locations.
Organizations use information systems to achieve their various strategies as well as short-term and long-
term goals. Information systems were developed to improve the productivity and business
effectiveness of organizations. The success of information systems is highly dependent on the prevalent
organization structure, management style and overall organization environment.
With correct development, deployment and usage of information systems, an organization can achieve
lower costs, improved productivity, growth in the top line as well as the bottom line, and competitive
advantage in the market.
The readiness of workers to accept information systems is the key to realizing their full
potential.
The development and deployment of information systems have revolutionized the way business is
conducted. They have contributed to business effectiveness and increased productivity.
Information enables us to determine the need to create new products and services. Information tells us
to move into new markets or to withdraw from other markets. Without information, the goods do not
get made, the orders are not placed, the materials are not procured, the shipments are not delivered,
the customers are not billed, and the business cannot survive.
But information has far less impact when presented as raw data. In order to maximize the value of
information, it must be captured, analyzed, quantified, compiled, manipulated, made accessible, and
shared. In order to accomplish those tasks, an information system (IS) must be designed, developed,
administered, and maintained.
Improving information management practices is a key focus for many organisations, across both the
public and private sectors. This is being driven by a range of factors, including a need to improve the
efficiency of business processes, the demands of compliance regulations and the desire to deliver new
services.
In many cases, ‘information management’ has meant deploying new technology solutions, such as
content or document management systems, data warehousing or portal applications. These projects
have a poor track record of success, and most organisations are still struggling to deliver an integrated
information management environment.
Effective information management is not easy. There are many systems to integrate, a huge range of
business needs to meet, and complex organisational (and cultural) issues to address. This topic draws
together a number of ‘critical success factors’ for information management projects. These do not
provide an exhaustive list, but do offer a series of principles that can be used to guide the planning
and implementation of information management activities.
Information is a vital ingredient for the operations and management of any organization. A computer-
based management information system is designed both to reduce the costs and to increase the
capabilities of organizational information processing. Information systems support the operations and
effective management of major functions in an organization. Online operations facilitate user–machine
dialogue, interactive analysis, planning, and decision making. Information systems may be viewed as
a substantial extension of the concepts of managerial accounting, operations research, and
organizational theories related to management and decision making. Information systems call for
analysis of a business, management views and policies, organizational cultures and management
styles. An open information system offers the ability for continuous changes, adjustments
or corrections in line with changes in the environment in which it works. An understanding
of the effective and responsible use and management of information systems and technologies is
important for managers, business professionals and other knowledge workers in today's internetworked
enterprises.
Information systems play a vital role in e-business and e-commerce operations, enterprise
collaboration and the strategic success of a business. An information system, like any other system, receives
inputs of data and instructions, processes the data according to these instructions and produces
outputs. This information-processing model can be used to depict an information system. The major
purpose of an information system is to convert data into valuable information. Information is data
with meaning. In a business context, regarding an organization like IBM, an information system is a
subsystem of the business system of the organization. Each business system has goals such as
increasing profits, expanding market share and providing service to potential customers. Any
organization operates at three main levels, namely the Operational Level, the Tactical Level,
and the Strategic Level. Illustratively, the operational information systems of an organization provide
information on the day-to-day activities of a business, such as processing a sales order, checking
credit or ordering new stock. These activities are decided and judged by junior managers, and are done
almost instantly.
Information systems that provide information that lets management allocate resources effectively to
achieve business objectives are known as tactical systems; this may include the promotion of a particular
product. Tactical information is used by middle-level managers. Finally, information systems that
support the strategic plans of the business are known as strategic planning systems. Strategic decisions
are effectively made by senior managers. These decisions need time and care, particularly if they
require major investment, like setting up a new plant. Furthermore, information provides managers
with the feedback they need about a system and its operations, which they can use for decision
making. Using this information, a manager can reallocate resources, redesign jobs or reorganize
procedures to accomplish, successfully, the objectives set for business growth.
Management information systems provide information in the form of reports and displays to managers
and many business professionals. Decision support systems give direct computer support to
managers during the decision-making process. Executive information systems provide critical
information from a wide variety of internal and external sources in easy-to-use displays to executives
and managers. However, several other categories of information systems can support either
operations or management applications; for example, expert systems can provide expert advice for
operational chores like equipment diagnostics or for managerial decisions. Knowledge management
systems support the creation, organization and dissemination of business knowledge to employees and
managers throughout a company. Finally, strategic information systems apply information technology to a
firm's products, services or business processes to help it gain a strategic advantage over its competitors.
In literal terms, implementation is doing what you have planned to do. Thus implementation is the
most important responsibility of a manager. Implementation can be viewed as a process that carries
out the plans for changes in business/IT strategies and applications. The figure below illustrates the
business/IT planning process of a large-scale organization, here IBM, which focuses on
discovering innovative approaches to satisfying a company's customer value and business value
goals. This planning process leads to the development of strategies and business models for new e-
business and e-commerce platforms, processes, products and services. A company can then develop IT
strategies and an IT architecture that supports building and implementing its newly planned
business applications. Both the C.E.O and the Chief Information Officer (C.I.O) of a company must
efficiently manage the development of complementary business and IT strategies to meet its customer
value and business value vision. This co-adaptation process is necessary because information
technologies are fast changing but are a vital component in many strategic business initiatives. With the
introduction of information technology in business, organizations like IBM have undergone major
changes by implementing new e-business strategies and applications, as shown in the figure below. It
clearly illustrates the impact, and the levels and scope, of the business changes that applications of
information technology introduce into an organization.
[Figure: Levels and scope of business change enabled by IT. The vertical axis shows increasing levels of
change – from efficiency improvements, through modelling best practices and process reengineering, to
redefining core businesses with new business initiatives – while the horizontal axis shows the widening
scope of change, from a single function to core processes, the supply chain and the extended enterprise.]
For instance, IBM exhaustively uses and implements in its day-to-day operations applications
like online transaction processing that bring efficiency to single functions or core business processes.
However, implementing e-business applications such as Enterprise Resource Management or Customer
Relationship Management (CRM) requires a reengineering of core business processes internally and
with supply chain partners, thus forcing a company to model and implement the business practices being
implemented by leading firms in their industry. Of course, major new business initiatives can
enable a company to redefine its core lines of business and precipitate dramatic changes within the
entire inter-enterprise value chain of a business. Implementing new business/IT strategies requires
managing the effects of major changes in key organizational dimensions such as business processes,
organizational structures, managerial roles, employee work assignments, and stakeholder relationships
that arise from the deployment of new business information systems (Chou). The induction of EDI and
e-commerce (EC) as part of an organization's infrastructure, while providing many benefits, can also result in
resistance to the change that is brought about by the new ways of working. IBM is a real-world example
that demonstrates the challenges of implementing major business/IT strategies and applications, and
the change management challenges that confront management.
IBM embraces customer relationship management (CRM) as a primary key for e-business
applications. It is designed to implement a business strategy of using IT to support a total customer
care focus for all areas of the company. Business challenges also include aggregating business
functions and information to drive greater efficiency and responsiveness, automating processes for
managing data to improve quality, efficiency and reduce costs, utilizing actionable information to
enable better business decision-making, adhering to regulatory requirements, and improving data
storage and distribution processes to increase efficiency and reduce overall costs, and lastly, enabling
brand new business functions and processes through better access to data and diverse applications.
IBM’s high level industry expertise and global investment in diverse application platforms and
application skills helps to provide a strong foundation for leadership in application design,
development, implementation, and management.
Even more important is end-user involvement in organizational changes and in the development of
new information systems. Organizations have a variety of strategies to help manage business change,
so planning for change is carried out well in advance of the introduction of EDI/EC so that the result is a
win-win situation across the organization. Direct end-user participation in business planning and
application development projects before a new system is implemented is especially important in
reducing the potential for end-user resistance. Such involvement helps ensure that the system design
meets end-user needs. The following section illustrates some of the key dimensions of
organizational change management, and the level of difficulty and business impact involved, noting
some of the people, process, and technology factors involved in the implementation of e-business
strategies and applications, or in other changes caused by introducing new information technologies.
Thus people are a major focus of organizational change management. This includes activities such as
developing innovative ways to measure, motivate and reward performance, as well as designing programs
to recruit and train employees in the core competencies required in a changing workplace. Change
management also involves analyzing and defining all changes facing the organization, and developing
programs to reduce the risks and costs and to maximize the benefits of the change. For example,
implementing a new e-business process like customer relationship management might involve
developing a change action plan, assigning selected managers as change sponsors, developing
employee change teams and encouraging open communication and feedback about organizational
changes. Some key tactics change experts recommend include: involve as many people as possible in
e-business planning and application development; make constant change an expected part of the
culture; tell everyone as much as possible about everything as often as possible, preferably in person;
make liberal use of financial incentives and recognition; and lastly, work within the company culture
and not around it. The e-business vision created in the strategic planning phase should be communicated in a
compelling change story to the people in the organization. Evaluating the readiness for e-business
changes within an organization, developing change strategies, and choosing and training change leaders
and champions based on that assessment could be the next steps in managing organizational changes.
An enterprise resource planning (ERP) system provides a holistic view of the enterprise and is devised to draw
benefits from IT. It works around the core activities of the organization and facilitates a seamless flow
of information across departmental barriers. ERP systems optimally plan and manage all the resources
of the organization, and hence cover the techniques and concepts employed for the integrated
management of businesses as a whole, from the viewpoint of the effective usage of management
resources to improve the efficiency of an enterprise. Direct benefits of ERP include:
improved efficiency,
information integration for better decision making, and
faster response time to customer queries.
The indirect advantages of ERP include a better corporate image, improved customer
goodwill, and customer satisfaction. Thus, ERP's best hope for demonstrating value is as a sort of
battering ram for improving business performance.
Strategic Management
Strategic management is the set of decisions and actions used to formulate and implement strategies
that will provide a competitively superior fit between the organization and its environment so as to
achieve organizational goals. Managers ask questions such as “What changes and trends are occurring
in the competitive environment? Who are our customers? What products or services should we offer?
How can we offer those products and services most efficiently?” Answers to these questions help
managers make choices about how to position their organization in the environment with respect to
rival companies. Superior organizational performance is not a matter of luck. It is determined by the
choices managers make. Top executives use strategic management to define an overall direction for
the organization, which is the firm’s grand strategy. Grand strategy is the general plan of major action
by which a firm intends to achieve its long-term goals. Within the overall grand strategy of an
organization, executives define an explicit strategy, which is the plan of action that describes resource
allocation and activities for dealing with the environment and attaining the organization's goals. The
essence of strategy is choosing to perform different activities or to execute activities differently than
competitors do. Strategy necessarily changes over time to fit environmental conditions, but to remain
competitive, companies develop strategies that focus on core competencies, develop synergy, and
create value for customers (Sethi 2009).
The final aspect of strategic management involves the stages of formulation and implementation.
Strategy formulation includes the planning and decision making that lead to the establishment of the
firm's goals and the development of a specific strategic plan. It may include assessing the external
environment and internal problems and integrating the results into goals and strategy. This is in contrast to
strategy implementation, which is the use of managerial and organizational tools to direct resources
and information toward accomplishing strategic results. Strategy implementation is the administration
and execution of the strategic plan. Managers may use persuasion, new equipment, changes in
organization structure, or reward systems to ensure that employees and resources are used to make the
formulated strategy a reality.
The development of the strategy also considers environmental factors such as the technology, the
markets, the lifestyle, the work culture, the attitudes, the policies of the government and so on. A
strategy helps to meet the external forces affecting business development effectively and further
ensures that the goals and objectives are achieved. The development of the strategy considers the
strength of the organization in deploying the resources, and at the same time it compensates for the
weaknesses. Strategy formulation is therefore an unstructured exercise of a complex nature,
riddled with uncertainties. It sets the guidelines for the use of resources, in kind and manner, during
the planning period. Myburgh has defined strategic information management as management that "focuses on
corporate strategy and direction. It emphasizes the quality of decision making and information use
needed to improve overall business performance."
[Figure: Components of information management – knowledge management, information governance and
information use, supported by information acquisition, information processing and information
distribution, and underpinned by human resources and the IT infrastructure.]
Information management is a set of activities that travels along the logical succession of
interdependent stages of organization development. Information management strategies involve
harnessing information resources and information capabilities, to enable the organization to learn and
adapt to its changing environment. In other words, information management centers on effectively
managing and controlling the use of information with respect to coordination and control, strategic
decision making and tactical problem solving. Information system strategy is a classic model of
representing decision making processes in information systems. Information systems strategy is the
plan and steps of execution taken by the organization in providing information systems and services.
Improving information management is a key focus for many firms and organizations. This is driven
by an array of factors, including a need to improve the efficiency of business processes,
the desire to deliver new services, and the demands of compliance regulations. In most cases,
information management involves deploying new technology solutions, like portal applications,
content or document management systems or data warehousing.
With these specialized software solutions, IBM delivers an integrated information management
environment for the deployment of applications. Information management strategy is the collection and
management of valuable data and information extracted from one or more sources and the distribution
of that information to its intended audience. It increases the efficiency of all the business
functions such as marketing, finance, administration, production, personnel, purchasing and inventory.
A knowledge base is created for people in the organization, and forecasting and long-term perspective
planning are effectively executed. Information management also impacts enterprises in the following ways:
exceptional situations are brought to notice well in time; information about the
achievements and shortfalls in the implementation of the set goals is kept; probable trends
in various aspects of the business are traced; the business is understood with clarity by defining data
entities and their attributes; decision-making ability improves considerably; business operations are
systematized; an information-based work culture is created in the organization; and a data dictionary is
made and used, providing a common understanding of terms and terminologies in the organization.
Only those companies that create new knowledge and disseminate it widely throughout the
organization and quickly embody it in the new technologies and products will survive in today’s
competitive world. Additionally, knowledge management strategies are developed to effectively
implement a range of policies and practices that the organization uses to create, develop, identify,
represent, distribute, and enable the adoption of experiences and valuable insights. Such experiences and
insights comprise knowledge that is either embedded in organizational practices or processes, or is
embodied in individuals. Knowledge management strategies are derived from information
management as a discipline.
The value of information-as-knowledge and knowledge management essentially lies in the conversion
of tacit information resources to manageable information products, and the resulting expansion of the
organization's information resource base (Schlögl 2005). Furthermore, KM strategies are aimed at
facilitating individual as well as organizational learning and focus on efficiency gains for the
organization. Information ecology and organizational culture are most important with respect to
knowledge management. Successful knowledge management is
achieved as an outcome of the willingness of organizational members and staff to share their insights
and expertise in enhancing organizational activities, thereby increasing the chances of
achieving desired goals and targets. Knowledge management has thus become one of the major
strategic uses of information technology. Another factor on which information management,
knowledge management and information system strategies depend is information acquisition.
This key factor is essential in identifying market trends, environmental risks, opportunities, customer
preferences, internal process inefficiencies, demand patterns and an array of other information
resources that are leveraged to create challenging outcomes and competitive advantages. Enterprise
content management (ECM) refers to the strategies, tools and methods, used in the context of knowledge
management, to capture, store, manage, preserve, distribute and deliver documents and content
related to organizational processes.
Information governance encompasses the internal guidelines and policies for effectively
handling the organization's information resources, namely information acquisition, storage, processing,
security, distribution, maintenance, and disposal. The value of information governance relies on
the development of common organization-wide policies and standards for obtaining information
resources based on the organization's information requirements.
The overall company strategy considers a very long-term business perspective, deals with the overall
strength of the entire organization and evolves those policies of the business which will dominate the
course of the business's movement. It is the most productive strategy if chosen correctly, and fatal if
chosen wrongly. These strategies are broad-based, having a far-reaching effect on the different facets
of the business and forming the basis for generating strategies in the other potential areas of business.
There are different ways to construct an information system, based upon organizational requirements,
both in the functional aspect and the financial sense. Of course, the company needs to take into
consideration that hardware that is purchased and assembled into a network will become outdated
rather quickly. It is almost axiomatic that the technologies used in information systems steadily
increase in power and versatility on a rapid time scale. Perhaps the trickiest part of designing an
information system from a hardware standpoint is straddling the fine line between too much and not
enough, while keeping an eye on the requirements that the future may impose.
Applying foresight when designing a system can bring substantial rewards in the future, when system
components are easy to repair, replace, remove, or update without having to bring the whole
information system to its knees. When an information system is rendered inaccessible or inoperative,
the system is considered to be "down." Downtime can be costly for the firm itself, and the resulting
inconvenience to customers can cost the firm even more if sales are lost as a result, in addition to any
added costs the customers might incur.
Another vital consideration regarding the design and creation of an information system is to determine
which users have access to which information. The system should be configured to grant access to the
different partitions of data and information by granting user-level permissions for access. A common
method of administering system access rights is to create unique profiles for each user, with the
appropriate user-level permissions that provide proper clearances.
Individual passwords can be used to delineate each user and their level of access rights, as well as
identify the tasks performed by each user. Data regarding the performance of any user unit, whether
individual, departmental, or organizational can also be collected, measured, and assessed through the
user identification process.
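The Python sketch below illustrates the idea of per-user profiles carrying user-level permissions, with each access attempt logged so that performance data can later be collected; the user names, roles and data partitions are hypothetical.

# Minimal sketch of user profiles with user-level permissions to data
# partitions, as described above. Roles and partitions are hypothetical.
profiles = {
    "jsmith": {"role": "payroll clerk", "partitions": {"payroll"}},
    "akamau": {"role": "sales manager", "partitions": {"sales", "customers"}},
}

access_log = []   # every access attempt is recorded against the user

def can_access(user, partition):
    # Grant access only if the partition is within this user's clearance,
    # and log the attempt to support later measurement and assessment.
    allowed = partition in profiles.get(user, {}).get("partitions", set())
    access_log.append((user, partition, allowed))
    return allowed

print(can_access("jsmith", "payroll"))   # True
print(can_access("jsmith", "sales"))     # False - outside this user's clearance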
The OSI seven-layer model attempts to provide a way of partitioning any computer network into
independent modules from the lowest (physical/hardware) layer to the highest (application/program)
layer. Many different specifications can exist at each of these layers.
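For reference, the seven OSI layers are listed below, lowest to highest; the example protocols shown beside each layer are common illustrations rather than part of the model itself.

# The seven OSI layers, from lowest to highest, with commonly cited examples.
OSI_LAYERS = [
    (1, "Physical",     "cables, radio signals"),
    (2, "Data Link",    "Ethernet frames"),
    (3, "Network",      "IP"),
    (4, "Transport",    "TCP, UDP"),
    (5, "Session",      "session set-up and teardown"),
    (6, "Presentation", "encryption, character encoding"),
    (7, "Application",  "HTTP, SMTP"),
]

for number, name, example in OSI_LAYERS:
    print(f"Layer {number}: {name:<12} e.g. {example}")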
There is more to maintaining an information system than applying technical knowledge to hardware
or software. IS professionals have to bridge the gap between technical issues and practicality for the
users. The information system should also have a centralized body that functions to provide
information, assistance, and services to the users of the system. These services will typically include
telephone and electronic mail "help desk" type services for users, as well as direct contact between the
users and IS personnel.
The location and retrieval of archived information can be a direct and logical process, if careful
planning is employed during the design of the system. Creating an outline of how the information
should be organized and indexed can be a very valuable tool during the design phase of a system. A
critical feature of any information system should be the ability to not only access and retrieve data,
but also to keep the archived information as current as possible.
2. Collaborative Tools
Collaborative tools can consist of software or hardware, and serve as a base for the sharing of data and
information, both internally and externally. These tools allow the exchange of information between
users, as well as the sharing of resources. As previously mentioned, real-time communication is also a
possible function that can be enabled through the use of collaborative tools.
3. Data Mining
Data mining, or the process of analyzing empirical data, allows for the extrapolation of information.
The extrapolated results are then used in forecasting and defining trends.
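A minimal sketch of this idea, assuming twelve months of hypothetical sales figures: a least-squares line is fitted to the historical data and projected one period ahead to define the trend.

# Minimal sketch of data mining as trend extrapolation: fit a least-squares
# line to hypothetical monthly sales and forecast the next month.
sales = [120, 125, 131, 128, 140, 146, 150, 158, 155, 163, 170, 176]
months = list(range(1, len(sales) + 1))

n = len(sales)
mean_x = sum(months) / n
mean_y = sum(sales) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

next_month = n + 1
forecast = intercept + slope * next_month
print(f"Trend: {slope:.1f} units per month; forecast for month {next_month}: {forecast:.0f}")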
4. Query Tools
Query tools allow the users to find the information needed to perform any specific function. The
inability to easily create and execute functional queries is a common weak link in many information
systems. A significant cause of that inability, as noted earlier, can be the communication difficulties
between a management information systems department and the system users.
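As an illustration of a functional query, the sketch below uses Python's built-in sqlite3 module against an in-memory database; the table, columns and figures are hypothetical.

# Minimal sketch of a query tool: a user asks a specific question of the
# data store. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("ACME Ltd", "West", 1200.0),
    ("Beta Co",  "East",  450.0),
    ("ACME Ltd", "West",  300.0),
])

# "Which customers in the West region have ordered more than 1,000 in total?"
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "WHERE region = 'West' GROUP BY customer HAVING SUM(amount) > 1000"
).fetchall()
print(rows)   # [('ACME Ltd', 1500.0)]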
Another critical issue toward ensuring successful navigation of the varied information levels and
partitions is the compatibility factor between knowledge bases. For maximum effectiveness, the
system administrator should ascertain that the varied collection, retrieval, and analysis levels of the
system either operate on a common platform, or can export the data to a common platform. Although
much the same as query tools in principle, intelligent agents allow the customization of the
information flow through sorting and filtering to suit the individual needs of the users. The primary
difference between query tools and intelligent agents is that query tools allow the sorting and filtering
processes to be employed to the specifications of management and the system administrators, and
intelligent agents allow the information flow to be defined in accord with the needs of the user.
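The contrast can be sketched in a few lines of Python: a fixed, administrator-defined query versus a filter configured by the individual user. All item, topic and preference names are hypothetical.

# Minimal sketch: an administrator-defined query versus a user-configurable
# "intelligent agent" style filter. Names are hypothetical.
news_items = [
    {"topic": "payroll", "region": "West", "title": "Payroll cut-off moved"},
    {"topic": "sales",   "region": "East", "title": "Q3 sales targets"},
    {"topic": "sales",   "region": "West", "title": "New discount policy"},
]

def admin_query(items):
    # Fixed specification set by management / system administrators.
    return [i for i in items if i["topic"] == "sales"]

def agent_filter(items, preferences):
    # Sorting and filtering driven by the individual user's own preferences.
    return [i for i in items
            if i["topic"] in preferences["topics"]
            and i["region"] in preferences["regions"]]

print([i["title"] for i in admin_query(news_items)])
print([i["title"] for i in agent_filter(
    news_items, {"topics": {"payroll", "sales"}, "regions": {"West"}})])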
Key Points
Managers should keep in mind the following advice in order to get the most out of an information
system:
Use the available hardware and software technologies to support the business. If the
information system does not support quality and productivity, then it is misused.
Use the available technologies to create and facilitate the flow of communication within your
organization and, if feasible, outside of it as well. Collaboration and flexibility are the key
advantages offered for all involved parties. Make the most of those advantages.
Determine if any strategic advantages are to be gained by use of your information system,
such as in the areas of order placement, shipment tracking, order fulfillment, market
forecasting, just-in-time supply, or regular inventory. If you can gain any sort of advantage by
virtue of the use of your information system, use it.
Use the quantification opportunities presented by your information system to measure,
analyze, and benchmark the performances of an individual, department, division, plant, or
entire organization.
An information system is more than hardware or software. The most integral and important
components of the system are the people who design it, maintain it, and use it. While the overall
system must meet various needs in terms of power and performance, it must also be usable for the
organization's personnel. If the operation of day-to-day tasks is too daunting for the workforce, then
even the most humble of aspirations for the system will go unrealized.
A company will likely have a staff entrusted with the overall operation and maintenance of the system
and that staff will be able to make the system perform in the manner expected of it. Pairing the
information systems department with a training department can create a synergistic solution to the
quandary of how to get non-technical staff to perform technical tasks. Oft times, the individuals
staffing an information systems department will be as technical in their orientation as the operative
staff is non-technical in theirs. This creates a language barrier between the two factions, but the
communication level between them may be the most important exchange of information within the
organization. Nomenclature out of context becomes little more than insular buzzwords.
If a company does not have a formal training department, the presence of staff members with a natural
inclination to demonstrate and teach could mitigate a potentially disastrous situation. Management
should find those employees who are most likely to adapt to the system and its operation. They should
be taught how the system works and what it is supposed to do. Then they can share their knowledge
with their fellow workers. There may not be a better way to bridge the natural chasm between the IS
department and non-technical personnel. When the process of communicating information flows
smoothly and can be used for enhancing and refining business operations, the organization and its
customers will all profit.
Organisations are very complex environments in which to deliver concrete solutions. As outlined
above, there are many challenges that need to be overcome when planning and implementing
information management projects.
When confronted with this complexity, project teams often fall back upon overly simple approaches.
Such approaches will fail, as they attempt to convert a complex set of needs and
problems into simple (even simplistic) solutions. The hope is that the complexity can be limited or
avoided when planning and deploying solutions.
In practice, however, there is no way of avoiding the inherent complexities within organisations. New
approaches to information management must therefore be found that recognise (and manage) this
complexity.
Organisations must stop looking for simple approaches, and must stop believing vendors when they
offer ‘silver bullet’ technology solutions.
Instead, successful information management is underpinned by strong leadership that defines a clear
direction (principle 6). Many small activities should then be planned to address in parallel the many
needs and issues (principle 5).
Risks must then be identified and mitigated throughout the project (principle 7), to ensure that
organisational complexities do not prevent the delivery of effective solutions.
Information systems are only successful if they are used
Information management systems are only successful if they are actually used by staff, and it is not
sufficient to simply focus on installing the software centrally.
In practice, most information management systems need the active participation of staff throughout
the organisation.
For example:
Staff must save all key files into the document/records management system.
Decentralised authors must use the content management system to regularly update the
intranet.
Lecturers must use the learning content management system to deliver e-learning packages to
their students.
Front-line staff must capture call details in the customer relationship management system.
In all these cases, the challenge is to gain sufficient adoption to ensure that required information is
captured in the system. Without a critical mass of usage, corporate repositories will not contain
enough information to be useful.
This presents a considerable change management challenge for information management projects. In
practice, it means that projects must be carefully designed from the outset to ensure that sufficient
adoption is gained. Approaches that can help include:
Identifying the ‘what’s in it for me’ factors for end users of the system.
Communicating clearly to all staff the purpose and benefits of the project.
Carefully targeting initial projects to build momentum for the project (see principle 10).
Conducting extensive change management and cultural change activities throughout the
project.
Ensuring that the systems that are deployed are useful and usable for staff.
These are just a few of the possible approaches, and they demonstrate the wide implications of
needing to gain adoption by staff.
It is not enough to deliver 'behind the scenes' fixes
It is not enough to simply improve the management of information ‘behind the scenes’. While this
will deliver real benefits, it will not drive the required cultural changes, or assist with gaining
adoption by staff (principle 2).
In many cases, information management projects initially focus on improving the productivity of
publishers or information managers.
While these are valuable projects, they are invisible to the rest of the organisation. When challenged,
it can be hard to demonstrate the return on investment of these projects, and they do little to assist
project teams to gain further funding.
Instead, information management projects must always be designed so that they deliver tangible and
visible benefits.
Delivering tangible benefits involves identifying concrete business needs that must be met (principle
4). This allows meaningful measurement of the impact of the projects on the operation of the
organisation.
The projects should also target issues or needs that are very visible within the organisation. When
solutions are delivered, the improvement should be obvious, and widely promoted throughout the
organisation.
For example, improving the information available to call centre staff can have a very visible and
tangible impact on customer service.
In contrast, creating a standard taxonomy for classifying information across systems is hard to
quantify and rarely visible to general staff.
This is not to say that ‘behind the scenes’ improvements are not required, but rather that they should
always be partnered with changes that deliver more visible benefits.
This also has a major impact on the choice of the initial activities conducted (principle 10).
Tackle the most urgent business needs first
It can be difficult to know where to start when planning information management projects.
While some organisations attempt to prioritise projects according to the ‘simplicity’ of the technology
to be deployed, this is not a meaningful approach. In particular, this often doesn’t deliver short-term
benefits that are tangible and visible (principle 3).
Instead of this technology-driven approach, the planning process should be turned around entirely, to
drive projects based on their ability to address business needs.
In this way, information management projects are targeted at the most urgent business needs or issues.
These in turn are derived from the overall business strategy and direction for the organisation as a
whole.
For example, the rate of errors in home loan applications might be identified as a strategic issue for
the organisation. A new system might therefore be put in place (along with other activities) to better
manage the information that supports the processing of these applications.
Alternatively, a new call centre might be in the process of being planned. Information management
activities can be put in place to support the establishment of the new call centre, and the training of
new staff.
Avoid 'silver bullet' solutions that promise to fix everything
There is no single application or project that will address and resolve all the information management
problems of an organisation.
Where organisations look for such solutions, large and costly strategic plans are developed. Assuming
the results of this strategic planning are actually delivered (which they often aren’t), they usually
describe a long-term vision but give few clear directions for immediate actions.
In practice, anyone looking to design the complete information management solution will be trapped
by ‘analysis paralysis’: the inability to escape the planning process.
Organisations are simply too complex to consider all the factors when developing strategies or
planning activities.
The answer is to let go of the desire for a perfectly planned approach. Instead, project teams should
take a ‘journey of a thousand steps’.
This approach recognises that there are hundreds (or thousands) of often small changes that are
needed to improve the information management practices across an organisation. These changes will
often be implemented in parallel.
While some of these changes are organisation-wide, most are actually implemented at business unit
(or even team) level. When added up over time, these numerous small changes have a major impact
on the organisation.
This is a very different approach to that typically taken in organisations, and it replaces a single large
(centralised) project with many individual initiatives conducted by multiple teams.
While this can be challenging to coordinate and manage, this 'thousand steps' approach recognises the
inherent complexity of organisations (principle 1) and is a very effective way of mitigating risks
(principle 7). It also ensures that 'quick wins' can be delivered early on (principle 3), and allows
solutions to be targeted to individual business needs (principle 4).
Successful projects require strong leadership
Successful information management is about organisational and cultural change, and this can only be
achieved through strong leadership.
The starting point is to create a clear vision of the desired outcomes of the information management
strategy. This will describe how the organisation will operate, more than just describing how the
information systems themselves will work.
Effort must then be put into generating a sufficient sense of urgency to drive the deployment and
adoption of new systems and processes.
Stakeholders must also be engaged and involved in the project, to ensure that there is support at all
levels in the organisation.
This focus on leadership then underpins a range of communications activities (principle 8) that ensure
that the organisation has a clear understanding of the projects and the benefits they will deliver.
When projects are solely driven by the acquisition and deployment of new technology solutions, this
leadership is often lacking. Without the engagement and support of key stakeholders outside the IT
area, these projects often have little impact.
Apply good risk management to ensure success
Due to the inherent complexity of the environment within organisations (principle 1), there are many
risks in implementing information management solutions.
Risk management approaches should then be used to plan all aspects of the project, including the
activities conducted and the budget spent.
activities conducted and the budget spent.
For example, a simple but effective way of mitigating risks is to spend less money. This might involve
conducting pilot projects to identify issues and potential solutions, rather than starting with
enterprise-wide deployments.
Extensive communication from the project team (and project sponsors) is critical for a successful
information management initiative.
This communication ensures that staff have a clear understanding of the project, and the benefits it
will deliver. This is a pre-requisite for achieving the required level of adoption.
With many projects happening simultaneously (principle 5), coordination becomes paramount. All
project teams should devote time to work closely with each other, to ensure that activities and
outcomes are aligned.
Instead, a clear end point (‘vision’) must be created for the information management project, and
communicated widely. This allows each project team to align themselves to the eventual goal, and to
make informed decisions about the best approaches.
For all these reasons, the first step in an information management project should be to develop a clear
communications ‘message’. This should then be supported by a communications plan that describes
target audiences, and methods of communication.
Project teams should also consider establishing a 'project site' on the intranet at the outset, to provide
a location for planning documents, news releases, and other updates.
Staff do not understand the distinction between systems
Users don’t understand systems. When presented with six different information systems, each
containing one-sixth of what they want, they generally rely on a piece of paper instead (or ask the
person next to them).
Educating staff in the purpose and use of a disparate set of information systems is difficult, and
generally fruitless. The underlying goal should therefore be to deliver a seamless user experience, one
that hides the systems that the information is coming from.
This is not to say that there should be one enterprise-wide system that contains all information.
There will always be a need to have multiple information systems, but the information contained
within them should be presented in a human-friendly way. In practice, this means:
Delivering a single intranet (or equivalent) that gives access to all information and tools.
Ensuring a consistent look-and-feel across all applications, including standard navigation and
page layouts.
Providing ‘single sign-on’ to all applications.
Ultimately, it also means breaking down the distinctions between applications, and delivering tools
and information along task and subject lines.
For example, many organisations store HR procedures on the intranet, but require staff to log in to a
separate 'HR self-service' application that provides a completely different menu structure and
appearance.
Improving on this, leave details should be located alongside the leave form itself. In this model, the
HR application becomes a background system, invisible to the user.
Care should also be taken, however, when looking to a silver-bullet solution for providing a seamless
user experience. Despite the promises, portal applications do not automatically deliver this.
Instead, a better approach may be to leverage the inherent benefits of the web platform. As long as the
applications all look the same, the user will be unaware that they are accessing multiple systems and
servers behind the scenes.
Of course, achieving a truly seamless user experience is not a short-term goal. Plan to incrementally
move towards this goal, delivering one improvement at a time.
The first project must build momentum for further work
The choice of the first project conducted as part of a broader information management strategy is
critical. This project must be selected carefully.
Actions speak louder than words. The first project is the single best (and perhaps only) opportunity to
set the organisation on the right path towards better information management practices and
technologies.
The first project must therefore be chosen according to its ability to act as a ‘catalyst’ for further
organisational and cultural changes.
In practice, this often involves starting with one problem or one area of the business that the
organisation as a whole would be interested in, and cares about.
For example, starting by restructuring the corporate policies and procedures will generate little
interest or enthusiasm. In contrast, delivering a system that greatly assists salespeople in the field
would be something that could be widely promoted throughout the organisation.
Conclusion
The challenges inherent in information management projects mean that new approaches need to be
taken, if they are to succeed.
This topic has outlined ten key principles of effective information management. These focus on the
organisational and cultural changes required to drive forward improvements.
They also outline a pragmatic, step-by-step approach to implementing solutions that starts with
addressing key needs and building support for further initiatives. A focus on adoption then ensures
that staff actually use the solutions that are deployed.
Of course, much more can be written on how to tackle information management projects. Future
articles will further explore this topic, providing additional guidance and outlining concrete
approaches that can be taken.
Recruitment and selection are two of the main functions carried out by the human resource department. An
organization undertakes recruitment under the following circumstances:
If the organization is implementing business expansion plans. This expansion may be in line
with an increase in sales, or the company may be looking to explore brand new markets
or to come out with new products.
If there is attrition within the existing workforce. This attrition could be that existing
employees are moving to other employers or changing industries, or an employee has
personal reasons like sickness, maternity, etc.
Organizations also undertake recruitment if they require employees with a specific skill set
which they currently do not have.
If the business is changing its base of operations. In such a case many employees may prefer not to
relocate, hence the need for recruitment.
The current workforce is constantly evolving with regard to the employee mix. Organizations are
moving more and more toward temporary employees. Furthermore, there is an increase in single-
parent employees. Women as a percentage of the workforce have also significantly increased. The human
resource manager needs to be aware of these changes and develop the recruitment process accordingly.
Every Human resource department has a team to manage the recruitment and selection process.
Information systems have made it possible for companies to have a dedicated tool which helps in
organizing the complete recruitment and selection process.
A recruitment management system greatly enhances the performance of the recruitment process and
delivers efficiency to the organization. The key characteristics of a recruitment management system
are as follows:
i. The system consolidates online applications, the outside recruitment agency process, interview
stages, etc.
ii. The system stores all applicant information within a database so as to facilitate faster future
requirement processing.
iii. The system facilitates a user-friendly interface between the applicant, the talent acquisition team and
the online application link.
The system has various tools to improve overall productivity of the recruitment process.
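As a minimal illustration of the consolidation and storage characteristics above, the Python sketch below gathers applicants from different channels into one store and searches it when a later requirement arises; the names, channels and skills are hypothetical and do not describe any particular product.

# Minimal sketch of a recruitment management system consolidating applicants
# from several channels into one store. Names and channels are hypothetical.
applicants = []   # in practice a database, kept for future requirements

def register(name, channel, skills):
    # Capture an application regardless of the channel it arrived through.
    applicants.append({"name": name, "channel": channel,
                       "skills": set(skills), "stage": "application received"})

def shortlist(required_skills):
    # Faster future processing: search the stored pool rather than re-advertise.
    return [a["name"] for a in applicants if set(required_skills) <= a["skills"]]

register("A. Njoroge", "online application", ["accounting", "ERP"])
register("B. Otieno",  "recruitment agency", ["accounting"])
print(shortlist(["accounting", "ERP"]))   # ['A. Njoroge']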
Selection
Selection is a process through which a candidate's qualifications and the job's requirements are matched so
as to establish suitability for the open position. Selection needs to have a structured and definite process
flow.
The selection process consists of various steps like an interview, an aptitude test, interaction with the hiring
manager, background verification, job offer and job acceptance.
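A minimal Python sketch of such a structured flow, advancing a candidate through the stages listed above; the stage order follows the text, and the candidate names and outcomes are hypothetical.

# Minimal sketch of a structured selection process flow. Stage order follows
# the text; candidates and outcomes are hypothetical.
STAGES = ["interview", "aptitude test", "hiring manager interaction",
          "background verification", "job offer", "job acceptance"]

def run_selection(candidate, results):
    # Advance through the stages until one fails or all pass.
    for stage in STAGES:
        if not results.get(stage, False):
            return f"{candidate}: rejected at '{stage}'"
    return f"{candidate}: selected"

print(run_selection("A. Njoroge", {s: True for s in STAGES}))
print(run_selection("B. Otieno",  {"interview": True, "aptitude test": False}))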
Recruitment is a process in which there is a search for potential applicants for various open positions,
whereas selection is a process in which candidates are shortlisted based on their potential.
Employee recruitment and selection are the building blocks of any successful organization. In recent years,
information systems have played a major role in driving efficiency in the process through standardization
and process evolution.
A trained and developed staff will contribute to increased productivity, improved profitability and a
significant increase in market share. Therefore, it is very important for companies to design and
maintain efficient training and development systems for employees.
Training and development are different from each other. The focus of training is short term while for
development, it is long term. The utilization of work experience is low in training and high in
development. The aim of training is preparation for current assignment while development looks at
upcoming assignment. Employee participation is voluntary in training while it is mandatory in
development.
The aim of employee development is not only to make them progress in their career but also to train
them as per company’s requirement.
Companies should identify a high-performing development system before investing in it. They should
continuously strive to improve development systems. There is a possibility that the existing system,
sessions and procedures may become monotonous in the long term, thereby affecting employee motivation.
One of the biggest employer fears is that, post training, employees will look for a change of employment,
and hence employers do not encourage training. Though this concern is valid in some cases, overall it has
been shown that trained employees show better motivation levels and loyalty.
A healthy employer-employee relationship is maintained through a three-way approach of continuous
communication, conflict resolution and employee development.
Employees are crucial and critical for the overall progress of an organization. The employer-employee
relationship is a complicated association and at times strenuous to manage. Employee relationship
management is much more difficult than customer relationship management. For example, if a customer is
not satisfied with the association with a given company, they can move on to another company. However,
if an employee is unhappy with an employer, there is a possibility that he or she will continue the
association with the company, but this employee-employer relationship will not be fruitful or
convenient for either party.
If the employee and employer are in a cordial relationship, then the overall efficiency and competitiveness
of the company will improve. An improvement in relations results in employees with high morale,
which increases their loyalty towards the company. If there is an increase in loyalty, employee turnover
can be reduced and a corresponding improvement in communication can be established. Information
systems support the employer-employee relationship in several ways:
Current payroll systems are linked with an information system, which ensures that
employees receive timely as well as accurate salaries.
Online learning and development tools can easily be managed by employees.
Information systems facilitate leave, tax, and insurance management of employees.
Performance appraisal and individual development management are done online with the help of
information systems.
Employees are aware of the latest development within the organization through access to
Company’s blog and news board.
Executive management of the company can communicate directly to staff through email.
Online staff meeting brings together employees from all parts of the world.
Employee Relation Life Cycle
The employee relation life cycle starts as soon as talent is shortlisted for an interview. After the hiring
process, the employee undergoes training to become a full-time contributing team member. Over time, with
involvement in projects and various other associations, the employee comes to be considered a family
member. Finally, the employee reaches the stage of being a brand ambassador.
There are several factors which drive the employee-employer relationship. The correct management of
these factors creates a long-term and fruitful association for the employee as well as the employer; for
example, compensation, work culture and environment, rewards and recognition, etc.
Every organization has its own work culture and environment. Any job within an organization requires a
certain skill set. The human resource team, along with the hiring manager, scouts for talent and hires an
employee. Companies invest time and resources in training the employee. This training in turn enables
the employee to excel and helps the company meet its business objectives. For this whole process to
reach the desired end, it is essential that a healthy employee-employer relationship is maintained.
Information systems contribute a lot to this success.
Although many information systems are built to solve problems, many others are built to seize
opportunities. And, as anyone in business can tell you, identifying a problem is easier than creating an
opportunity. Why? Because a problem already exists; it is an obstacle to a desired mode of operation
and, as such, calls attention to itself. An opportunity, on the other hand, is less tangible. It takes a
certain amount of imagination, creativity, and vision to identify an opportunity, or to create one and
seize it. Information systems that help seize opportunities are often called strategic information
systems (SISs). They can be developed from scratch, or they can evolve from an organization’s
existing ISs.
In a free-market economy, it is difficult for a business to do well without some strategic planning.
Although strategies vary, they tend to fall into some basic categories, such as developing a new
product, identifying an unmet consumer need, changing a service to entice more customers or retain
existing clients, or taking any other action that increases the organization’s value through improved
performance.
Many strategies do not, and cannot, involve information systems. But increasingly, corporations are
able to implement certain strategies—such as maximizing sales and lowering costs—thanks to the
innovative use of information systems. In other words, better information gives corporations a
competitive advantage in the marketplace. A company achieves strategic advantage by using strategy
to maximize its strengths, resulting in a competitive advantage. When a business uses a strategy with
the intent to create a market for new products or services, it does not aim to compete with other
organizations, because that market does not yet exist. Therefore, a strategic move is not always a
competitive move. However, in a free-enterprise society, a market rarely remains the domain of one
organization for long; thus, competition ensues almost immediately. So, we often use the terms
“competitive advantage” and “strategic advantage” interchangeably.
You might have heard statements about using the Web strategically. Business competition is no
longer limited to a particular country or even a region of the world. To increase the sale of goods and
services, companies must regard the entire world as their market. Because thousands of corporations
and hundreds of millions of consumers have access to the Web, augmenting business via the Web has
become strategic: many companies that utilized the Web early on have enjoyed greater market shares,
more experience with the Web as a business enabler, and larger revenues than latecomers. Some
companies developed information systems, or features of information systems, that are unique, such
as Amazon’s “one-click” online purchasing and Priceline’s “name your own price” auctioning.
Practically any Web-based system that gives a company competitive advantage is a strategic
information system.
i. Reduce costs
A company can gain advantage if it can sell more units at a lower price while providing quality and
maintaining or increasing its profit margin.
Companies can implement some of the strategic initiatives described in the previous section by using
information systems. As we mentioned at the beginning of the chapter, a strategic information system
(SIS) is any information system that can help an organization achieve a long-term competitive
advantage. An SIS can be created from scratch, developed by modifying an existing system, or
“discovered” by realizing that a system already in place can be used to strategic advantage. While
companies continue to explore new ways of devising SISs, some successful SISs are the result of less
lofty endeavors: the intention to improve mundane operations using IT has occasionally yielded a
system with strategic qualities.
Creating an SIS
To develop an SIS, top management must be involved from initial consideration through development
and implementation. In other words, the SIS must be part of the overall organizational strategic plan.
There is always the danger that a new SIS might be considered the IS unit’s exclusive property.
However, to succeed, the project must be a corporate effort, involving all managers who use the
system.
The answer often leads to the decision to eliminate one set of operations and build others from the
ground up. Changes such as these are called reengineering. Reengineering often involves adoption of
new machinery and elimination of management layers. Frequently, information technology plays an
important role in this process.
Reengineering’s goal is not to gain small incremental cost savings, but to achieve great efficiency
leaps—of 100 percent and even 1000 percent. With that degree of improvement, a company often
gains competitive advantage. Interestingly, a company that undertakes reengineering along with
implementing a new SIS cannot always tell whether the SIS was successful.
The reengineering process makes it impossible to determine how much each change contributed to the
organization’s improved position.
Part of GM’s initiative was to recognize the importance of Saturn dealerships in gaining competitive
advantage. Through satellite communications, the new company gave dealers access to factory
information. Clients could find out if, and exactly when, different cars with different features would
be available.
Another feature of Saturn’s SIS was improved customer service. Saturn embeds an electronic
computer chip in the chassis of each car. The chip maintains a record of the car’s technical details and
the owner’s name. When the car is serviced after the sale, new information is added to the chip.
At their first service visit, many Saturn owners were surprised to be greeted by name as they rolled
down their windows. While the quality of the car itself has been important to Saturn’s success, the
new SIS also played an important role. This technology was later copied by other automakers.
In an environment where most information technology is available to all, SISs originally developed to
create a strategic advantage quickly become an expected standard business practice.
A prime example is the banking industry, where surveys indicate that increased IS expenditures did
not yield long-range strategic advantages. The few banks that provided services such as ATMs and
online banking once had a powerful strategic advantage, but now almost every bank provides these
services.
A system can only help a company sustain competitive advantage if the company continuously
modifies and enhances it, creating a moving target for competitors. American Airlines’ Sabre—the
online reservation system for travel agents—is a classic example. The innovative IS was redesigned in
the late 1970s to expedite airline reservations and sell travel agencies a new service. But over the
years, the company spun off an office automation package for travel agencies called Agency Data
Systems. The reservation system now encompasses hotel reservations, car rentals, train schedules,
theater tickets, and limousine rentals. It later added a feature that let travelers use Sabre from their
own computers. The system has been so successful that in its early years American earned more from
it than from its airline operations. The organizational unit that developed and operated the software
became a separate IT powerhouse at AMR Corp., the parent company of American Airlines, and now
operates as Sabre Inc., an AMR subsidiary. It is the leading provider of technology for the travel
industry. Travelocity, Inc., the popular Web-based travel site, is a subsidiary of Sabre, and, naturally,
uses Sabre’s software. Chances are you are using Sabre technology when you make airline
reservations through other Web sites, as well.
Firms that “do better” than others are said to have a competitive advantage over others: They either
have access to special resources that others do not, or they are able to use commonly available
resources more efficiently—usually because of superior knowledge and information assets. In any
event, they do better in terms of revenue growth, profitability, or productivity growth (efficiency), all
of which ultimately in the long run translate into higher stock market valuations than their
competitors.
But why do some firms do better than others and how do they achieve competitive advantage? How
can you analyze a business and identify its strategic advantages? How can you develop a strategic
advantage for your own business? And how do information systems contribute to strategic
advantages?
This model provides a general view of the firm, its competitors, and the firm’s environment. Porter’s
model is all about the firm’s general business environment. In this model, five competitive forces
shape the fate of the firm.
Traditional Competitors
All firms share market space with other competitors who are continuously devising new, more
efficient ways to produce by introducing new products and services, and attempting to attract
customers by developing their brands and imposing switching costs on their customers.
New Market Entrants
In a free economy with mobile labour and financial resources, new companies are always entering the
marketplace. In some industries, there are very low barriers to entry, whereas in other industries, entry
is very difficult.
For instance, it is fairly easy to start a pizza business or just about any small retail business, but it is
much more expensive and difficult to enter the computer chip business, which has very high capital
costs and requires significant expertise and knowledge that is hard to obtain. New companies have
several possible advantages: They are not locked into old plants and equipment, they often hire
younger workers who are less expensive and perhaps more innovative, they are not encumbered by
old, worn-out brand names, and they are “more hungry” (more highly motivated) than traditional
occupants of an industry. These advantages are also their weakness: They depend on outside financing
for new plants and equipment, which can be expensive; they have a less experienced workforce; and
they have little brand recognition.
Substitute Products and Services
In just about every industry, there are substitutes that your customers might use if your prices become
too high. New technologies create new substitutes all the time. Even oil has substitutes: Ethanol can
substitute for gasoline in cars; vegetable oil for diesel fuel in trucks; and wind, solar, coal, and hydro
power for industrial electricity generation. Likewise, Internet telephone service can substitute for
traditional telephone service, and fiber-optic telephone lines to the home can substitute for cable TV
lines. And, of course, an Internet music service that allows you to download music tracks to an iPod is
a substitute for CD-based music stores. The more substitute products and services in your industry, the
less you can control pricing and the lower your profit margins.
Customers
A profitable company depends in large measure on its ability to attract and retain customers (while
denying them to competitors), and charge high prices.
The power of customers grows if they can easily switch to a competitor’s products and services, or if
they can force a business and its competitors to compete on price alone in a transparent marketplace
where there is little product differentiation, and all prices are known instantly (such as on the
Internet). For instance, in the used college textbook market on the Internet, students (customers) can
find multiple suppliers of just about any current college textbook. In this case, online customers have
extraordinary power over used-book firms.
Suppliers
The market power of suppliers can have a significant impact on firm profits, especially when the firm
cannot raise prices as fast as can suppliers. The more different suppliers a firm has, the greater control
it can exercise over suppliers in terms of price, quality, and delivery schedules. For instance,
manufacturers of laptop PCs almost always have multiple competing suppliers of key components,
such as keyboards, hard drives, and display screens.
How do you prevent substitutes and inhibit new market entrants? There are four generic strategies,
each of which often is enabled by using information technology and systems:
a) low-cost leadership,
b) product differentiation,
c) focus on market niche, and
d) strengthening customer and supplier intimacy.
a) Low-Cost Leadership
Use information systems to achieve the lowest operational costs and the lowest prices. The classic
example is Wal-Mart. By keeping prices low and shelves well stocked using a legendary inventory
replenishment system, Wal-Mart became the leading retail business in the United States. Wal-Mart’s
continuous replenishment system sends orders for new merchandise directly to suppliers as soon as
consumers pay for their purchases at the cash register. Point-of-sale terminals record the bar code of
each item passing the checkout counter and send a purchase transaction directly to a central computer
at Wal-Mart headquarters. The computer collects the orders from all Wal-Mart stores and transmits
them to suppliers. Suppliers can also access Wal-Mart’s sales and inventory data using Web
technology.
Because the system replenishes inventory with lightning speed, Wal-Mart does not need to spend
much money on maintaining large inventories of goods in its own warehouses. The system also
enables Wal-Mart to adjust purchases of store items to meet customer demands. Competitors, such as
Sears, have been spending 24.9 percent of sales on overhead. But by using systems to keep operating
costs low, Wal-Mart pays only 16.6 percent of sales revenue for overhead. (Operating costs average
20.7 percent of sales in the retail industry.)
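As a simple illustration only (this is not Wal-Mart's actual software, and all names, items and suppliers
are hypothetical), the following Python sketch shows the basic idea behind continuous replenishment:
point-of-sale transactions are collected centrally and aggregated into replenishment orders, one per
supplier and item.

from collections import defaultdict

# Hypothetical catalogue mapping each bar code to its supplier.
CATALOGUE = {
    "0001": {"description": "detergent", "supplier": "Acme Soap Co."},
    "0002": {"description": "batteries", "supplier": "VoltMax Ltd."},
}

sales_log = []  # purchase transactions sent in from store POS terminals

def record_sale(store_id, bar_code, quantity):
    """Simulates a POS terminal sending one purchase transaction to head office."""
    sales_log.append({"store": store_id, "bar_code": bar_code, "qty": quantity})

def orders_by_supplier():
    """Aggregates all logged sales into replenishment orders, one per supplier and item."""
    orders = defaultdict(int)
    for sale in sales_log:
        supplier = CATALOGUE[sale["bar_code"]]["supplier"]
        orders[(supplier, sale["bar_code"])] += sale["qty"]
    return dict(orders)

record_sale("store-17", "0001", 3)
record_sale("store-42", "0001", 2)
record_sale("store-42", "0002", 8)
print(orders_by_supplier())
# {('Acme Soap Co.', '0001'): 5, ('VoltMax Ltd.', '0002'): 8}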
b) Product Differentiation
Use information systems to enable new products and services, or greatly change the customer
convenience in using your existing products and services.
For instance, Google continuously introduces new and unique search services on its Web site, such as
Google Maps. By purchasing PayPal, an electronic payment system, in 2003, eBay made it much
easier for customers to pay sellers and expanded use of its auction marketplace. Apple created iPod, a
unique portable digital music player, plus a unique online Web music service where songs can be
purchased for 99 cents. Continuing to innovate, Apple recently introduced a portable iPod video
player.
Manufacturers and retailers are using information systems to create products and services that are
customized and personalized to fit the precise specifications of individual customers. Dell Computer
Corporation sells directly to customers using assemble-to-order manufacturing. Individuals,
businesses, and government agencies can buy computers directly from Dell, customized with the
exact features and components they need. They can place their orders directly using a toll-free
telephone number or by accessing Dell’s Web site.
Once Dell’s production control receives an order, it directs an assembly plant to assemble the
computer using components from an on-site warehouse based on the configuration specified by the
customer.
Lands’ End customers can use its Web site to order jeans, dress pants, chino pants, and shirts custom-
tailored to their own specifications. Customers enter their measurements into a form on the Web site,
which then transmits each customer’s specifications over a network to a computer that develops an
electronic made-to-measure pattern for that customer. The individual patterns are then transmitted
electronically to a manufacturing plant, where they are used to drive fabric-cutting equipment. There
are almost no extra production costs because the process does not require additional warehousing,
production overruns, and inventories, and the cost to the customer is only slightly higher than that of a
mass-produced garment. This ability to offer individually tailored products or services using the same
production resources as mass production is called mass customization.
c) Focus on Market Niche
The data come from a range of sources—credit card transactions, demographic data, purchase data
from checkout counter scanners at supermarkets and retail stores, and data collected when people
access and interact with Web sites.
Sophisticated software tools find patterns in these large pools of data and infer rules from them to
guide decision making. Analysis of such data drives one-to-one marketing that creates personal
messages based on individualized preferences. Contemporary customer relationship management
(CRM) systems feature analytical capabilities for this type of intensive data analysis.
Hilton Hotels uses a customer information system called OnQ, which contains detailed data about
active guests in every property across the eight hotel brands owned by Hilton. Employees at the front
desk tapping into the system instantly search through 180 million records to find out the preferences
of customers checking in and their past experiences with Hilton so they can give these guests exactly
what they want. OnQ establishes the value of each customer to Hilton, based on personal history and
on predictions about the value of that person’s future business with Hilton. OnQ can also identify
customers who are clearly not profitable. Profitable customers receive extra privileges and attention,
such as the ability to check out late without paying additional fees. After Hilton started using the
system, the rate of staying at Hilton Hotels rather than at competing hotels soared from 41 percent to
61 percent (Kontzer, 2004).
The Interactive Session on Technology shows how 7-Eleven improved its competitive position by
wringing more value out of its customer data. This company’s early growth and strategy had been
based on face-to-face relationships with its customers and intimate knowledge of exactly what they
wanted to purchase. As the company grew over time, it was no longer able to discern customer
preferences through personal face-to-face relationships.
A new information system helped it obtain intimate knowledge of its customers once again by
gathering and analyzing customer purchase transactions.
Some companies focus on one of these strategies, but you will often see companies pursuing several
of them simultaneously. For example, Dell Computer tries to emphasize low cost as well as the ability
to customize its personal computers.
In the second wave, eight new industries are facing a similar transformation scenario: telephone
services, movies, television, jewelry, real estate, hotels, bill payments, and software. The breadth of e-
commerce offerings grows especially in travel, information clearinghouses, entertainment, retail
apparel, appliances, and home furnishings.
For instance, the printed encyclopedia industry and the travel agency industry have been nearly
decimated by the availability of substitutes over the Internet. Likewise, the Internet has had a
significant impact on the retail, music, book, brokerage, and newspaper industries. At the same time,
the
Internet has enabled new products and services, new business models, and new industries to spring up
every day, from eBay and Amazon.com to iTunes and Google. In this sense, the Internet is
“transforming” entire industries, forcing firms to change how they do business.
Because of the Internet, the traditional competitive forces are still at work, but competitive rivalry has
become much more intense (Porter, 2001). Internet technology is based on universal standards that
any company can use, making it easy for rivals to compete on price alone and for new competitors to
enter the market. Because information is available to everyone, the Internet raises the bargaining
power of customers, who can quickly find the lowest-cost provider on the Web. Profits have been
dampened. Some industries, such as the travel industry and the financial services industry, have been
more impacted than others.
However, contrary to Porter’s somewhat negative assessment, the Internet also creates new
opportunities for building brands and building very large and loyal customer bases that are willing to
pay a premium for the brand, for example, Yahoo, eBay, BlueNile, RedEnvelope, Overstock.com,
Amazon.com, Google, and many others. In addition, as with all IT-enabled business initiatives, some
firms are far better at using the Internet than other firms are, which creates new strategic opportunities
for the successful firms.
REVISION EXERCISES
1. Define information system strategy.
2. What is a business strategy hierarchy?
3. What are the components of a virtual value system?
4. Discuss the six dimensions of an excellent strategic process and information system process.
5. What are the characteristics of a strategic information system plan?
6. How can information systems be applied so that a business is effective?
7. What are the functions of an information system?
8. What is the meaning of competitive advantage in business?
9. What are some ways in which a business can gain competitive advantage?
10. Discuss Porter's competitive forces model.
11. How can the Internet be used to gain competitive advantage?
CHAPTER 8
MANAGING INFORMATION SYSTEMS SECURITY
SYNOPSIS
Introduction……………………………………………….. 275
Information Systems Threats………………………………. 278
Threats Control………………………………………….. 279
Systems Integrity…………………………………………. 288
Information Systems Risk Management………………… 315
Disaster Recovery And Business
Continuity Planning………………………………………. 317
INTRODUCTION
Information Systems Security Management (ISSM) from the perspective of the emergent organization, e.g.
e-commerce, is under-studied and requires attention from academics. Although the emergent
organization may be smaller in size and resources, the threats to its information systems are very
similar to, and just as disastrous as, those faced by hierarchical organizations. Even though the current
security threats are much the same, the steps towards managing information systems security in the
emergent organization and in the hierarchical organization differ considerably in terms of the
technology, the people and the procedures.
The evolution of the business model from hierarchical to emergent forms one of the crucial challenges
requiring serious attention in Information Systems Security Management. Current policies and
procedures are not ready to support the emergent organization. The emergent organization is very
dynamic and exhibits higher volatility. This is borne out by evaluations of selected standard
approaches, which show that current standards are geared towards supporting stable environments
rather than emergent ones. Moreover, bigger organizations face bigger threats, and different types of
organization face different types of threat. As there are many types of organization and business
models, the IT environment in each company is unique: it has its own set of software products, which
may have been evaluated under different IS evaluation schemes, either in part or in full. Due to these
factors, evaluators are advised, particularly in emergent organizations, to take more liberty in modifying
the evaluation process for their own purposes.
Critical success factors of e-commerce show that trust factors and security issues are part and parcel
of e-commerce success. These critical success factors indicate that all businesses wishing to adopt
e-commerce as their business model, or as an alternative profit generator, must implement the security
measures that are vital for competitive advantage. On the whole, these e-commerce companies have to
understand and implement security measures appropriate to the business, based on current security
standards. An e-commerce company must be smart in choosing the security measures most appropriate
for the business and best suited to supporting the business objective. If the wrong measures are adopted,
the company may face serious problems such as wasted resources. Widely used standards such as
BS ISO/IEC 17799:2000 do not examine the content of security practice, focusing instead on processes,
and those processes are often abstract and oversimplified.
Little advice is given to assist companies in the practice of IS security management. Hierarchical
companies with a mountain of resources may not face too many problems in practising the standards,
but an e-commerce company will. Limited resources and time constraints make the task of IS security
management tiresome and unattractive, so this most important task is often set aside. Most e-commerce
retailers have a business model unique to their own entity and require a dynamic procedure to
safeguard their information systems. Thus, an Information Systems Security Management (ISSM)
approach supporting a dynamic business model is highly needed. A method appropriate to this business
context is required to fast-forward their business and enter the market before their competitors.
Strategic management has the ambition to be the field that informs the decisions and actions of
general managers. In pursuit of this high goal the field has from time to time worshipped at various
theoretical altars, both in economics and sociology. For example, it has looked to industrial-
organization and transaction-cost economics, agency, network, contingency and, more recently,
resource-based (or its cousin the dynamic-capabilities-based) theories of the firm, for inspiration.
While some of these theories have helped guide general management decisions and actions, many
have been hard to operationalize. The field still lacks an actionable theory.
While theoretical anchors are seen as giving the field academic respectability, strategic management
has helped practitioners more by its frameworks and typologies.
The traditional hierarchical view of strategies: corporate, business unit and functional, must be viewed
in this light. While the hierarchical view of strategies has never had the pretensions of being a theory,
it did capture the essence of what was seen as best practice in the 1960s and 1970s. It was a useful
framework.
While the hierarchy of strategy is still often taught in business schools today, its theoretical relevance
and empirical support have been severely questioned. It does not mirror the actual locus of decision
making or the causality of strategy making in a global firm today. In a transnational firm, the
corporate office continues to drive corporate strategy for optimal portfolio balance. But this portfolio
is defined not just along business lines but also along geography and resource dimensions, traditionally
the prerogatives of business units and functions. Business units and functions are run globally, and the
heads of these business units and functions are also corporate officers.
Strategic initiatives at a business or functional level may indeed drive the development of corporate
strategy, which, in the hierarchy of strategy, is viewed from the top down.
Corporate, business and functional strategies are not hierarchical anymore; they are contemporaneous
and interactive. Instead of a hierarchy of strategies, we should think more in terms of a heterarchy of
strategies. In a hierarchy, every strategic decision-making node is connected to at most one parent
node. In a heterarchy, however, a node can be connected to any of its surrounding nodes without
needing to go through or get permission from some other node.
A security attack is an act or attempt to exploit a vulnerability in a system. Security controls are the
mechanisms used to control an attack. Attacks can be classified into active and passive attacks.
Passive attacks – the attacker observes information without interfering with the information or its flow.
He/she does not interfere with operation. Message content and message traffic are what is
observed.
Active attacks – involve more than message or information observation. There is interference with
traffic or message flow, which may involve modification, deletion or destruction. This may be done
through the attacker masquerading or impersonating another user. There is also denial or repudiation,
where someone does something and later denies it. This is a threat against authentication and, to some
extent, integrity.
Security Goals
To retain a competitive advantage and to meet basic business requirements organizations must
endeavour to achieve the following security goals.
a. Confidentiality
Protect information value and preserve the confidentiality of sensitive data. Information should not be
disclosed without authorization. Information the release of which is permitted to a certain section of
the public should be identified and protected against unauthorized disclosure.
b. Integrity
Ensure the accuracy and reliability of the information stored on the computer systems. Information has
integrity if it reflects, and is consistent with, the real-world situation it represents. Information
should not be altered without authorization. Hardware designed to perform certain functions has lost
integrity if it does not perform those functions correctly. Software has lost integrity if it does not
perform according to its specifications. Communication channels should relay messages in a secure
manner so as to preserve integrity, and people should ensure the system functions according to its
specifications.
c. Availability
Ensure the continued availability of the information system and all its assets to legitimate users at an
acceptable level of service or quality of service. Any event that degrades performance or quality of a
system affects availability.
Security Threats
These are circumstances that have the potential to cause loss or harm, i.e. circumstances that have the
potential to bring about exposures. They include:
Human error
Disgruntled employees
Dishonest employees
Greedy employees who sell information for financial gain
Outsider access – hackers, crackers, criminals, terrorists, consultants, ex-consultants, ex-
employees, competitors, government agencies, spies (industrial, military etc), disgruntled
customers
Acts of God/natural disasters – earthquakes, floods, hurricanes
Foreign intelligence
Accidents, fires, explosion
Equipment failure
Utility outage
Water leaks, toxic spills
Viruses – these are programmed threats
Vulnerability
Vulnerability is a weakness within the system that can potentially lead to loss or harm. For example, a
system sited where natural disasters are likely is vulnerable to that threat, and a system that contains
erroneous programs is vulnerable to the threats those errors introduce.
THREATS CONTROL
The 2005 CSI/FBI Computer Crime and Security Survey of 700 computer security practitioners
revealed that the frequency of system security breaches had been steadily decreasing since 1999 for
almost all threats except the abuse of wireless networks. Financial losses have resulted from each of
these threats individually. Note, however, that the survey report pointed out that implicit losses
(e.g., lost sales) are difficult to measure and might not have been included by survey participants.
Some of the system security threats are discussed below.
a. Viruses
A computer virus is a software code that can multiply and propagate itself. A virus can spread into
another computer via e-mail, downloading files from the Internet, or opening a contaminated file. It is
almost impossible to completely protect a network computer from virus attacks; the CSI/FBI survey
indicated that virus attacks were the most widespread attack for six straight years since 2000.
Viruses are just one of several programmed threats or malicious codes (malware) in today’s
interconnected system environment. Programmed threats are computer programs that can create a
nuisance, alter or damage data, steal information, or cripple system functions. Programmed threats
include computer viruses, Trojan horses, logic bombs, worms, spam, spyware, and adware.
According to a recent study by the University of Maryland, more than 75% of participants received e-
mail spam every day. There are two problems with spam: Employees waste time reading and deleting
spam, and it increases the system overhead to deliver and store junk data.
Spyware is a computer program that secretly gathers users’ personal information and relays it to third
parties, such as advertisers. Common functionalities of spyware include monitoring keystrokes,
scanning files, snooping on other applications such as chat programs or word processors, installing
other spyware programs, reading cookies, changing the default homepage on the Web browser, and
consistently relaying information to the spyware home base. Unknowing users often install spyware
as the result of visiting a website, clicking on a disguised pop-up window, or downloading a file from
the Internet.
Adware is a program that can display advertisements such as pop-up windows or advertising banners
on webpages. A growing number of software developers offer free trials for their software until users
pay to register. Free-trial users view sponsored advertisements while the software is being used. Some
adware does more than just present advertisements, however; it can report users’ habits, preferences,
or even personal information to advertisers or other third parties, similar to spyware.
To protect computer systems against viruses and other programmed threats, companies must have
effective access controls and install and regularly update quarantine software. With effective
protection against unauthorized access and by encouraging staff to become defensive computer users,
virus threats can be reduced. Some viruses can infect a computer through operating system
vulnerabilities. It is critical to install system security patches as soon as they are available.
Furthermore, effective security policies can be implemented with server operating systems such as
Microsoft Windows XP and Windows Server 2003. Other kinds of software (e.g., Deep Freeze) can
protect and preserve original computer configurations. Each system restart eradicates all changes,
including virus infections, and resets the computer to its original state. The software eliminates the
need for IT professionals to perform time-consuming and counterproductive rebuilding, re-imaging,
or troubleshooting when a computer becomes infected.
Fighting against programmed threats is an ongoing and ever-changing battle. Many organizations,
especially small ones, are understaffed and underfunded for system security. Organizations can use
one of a number of effective security suites (e.g., Norton Internet Security 2005, ZoneAlarm Security
Suite 5.5, McAfee Virus Scan) that offer firewall, anti-virus, anti-spam, anti-spyware, and parental
controls (for home offices) at the desktop level. Firewalls and routers should also be installed at the
network level to eliminate threats before they reach the desktop. Anti-adware and anti-spyware
software are signature-based, and companies are advised to install more than one to ensure effective
protection. Installing anti-spam software on the server is important because increasing spam results in
productivity loss and a waste of computing resources. Important considerations for selecting anti-
spam software include a system’s effectiveness, impact on mail delivery, ease of use, maintenance,
and cost. Many Internet service providers conveniently reduce spam on their servers before it reaches
subscribers. Additionally, companies must maintain in-house and off-site backup copies of corporate
data and software so that data and software can be quickly restored in the case of a system failure.
The 2005 Electronic Monitoring and Surveillance Survey conducted by the American Management
Association (AMA) and the Policy Institute revealed that 76% of employers monitor employees’ web
connections, while 50% of employers monitor and store employee computer files. The survey also
revealed that 26% of participating employers have fired workers for workplace offenses related to the
Internet; 25% have fired employees for misuse of e-mail; and 65% of those surveyed used software to
block employee access to inappropriate websites. Most U.S. companies allow reasonable use of
computers for personal reasons, but many never define “reasonable.” As a preventive control, every
company should have a written policy regarding the use of corporate computing facilities. In addition,
companies should update their monitoring policies periodically, because IT evolves rapidly.
If an Internet monitoring policy is clearly stated, companies need not worry about employee privacy
concerns; the Electronic Communications Privacy Act does give companies the right to monitor
electronic communications in the ordinary course of business.
The following suggestions can help minimize the chance of theft when outside the office:
d. Denial of Service
A denial of service (DoS) attack is specifically designed to interrupt normal system functions and
affect legitimate users’ access to the system. Hostile users send a flood of fake requests to a server,
overwhelming it and making a connection between the server and legitimate clients difficult or
impossible to establish. The distributed denial of service (DDoS) allows the hacker to launch a
massive, coordinated attack from thousands of hijacked (zombie) computers remotely controlled by
the hacker. A massive DDoS attack can paralyze a network system and bring down giant websites.
For example, the 2000 DDoS attacks brought down websites such as Yahoo! and eBay for hours.
Unfortunately, any computer system can be a hacker’s target as long as it is connected to the Internet.
DoS attacks can result in significant server downtime and financial loss for many companies, but the
controls to mitigate the risk are very technical. Companies should evaluate their potential exposure to
DoS attacks and determine the extent of control or protection they can afford.
cracking the passwords and reading the network data without leaving a trace. One option to prevent an
attack is to use one of several encryption standards that can be built into wireless network devices.
One example, wired equivalent privacy (WEP) encryption, can be effective at stopping amateur
snoopers, but it is not sophisticated enough to foil determined hackers. Consequently, any sensitive
information transmitted over wireless networks should be encrypted at the data level as if it were
being sent over a public network.
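As a minimal sketch of such data-level encryption (an illustration rather than a prescribed control; it
assumes the third-party Python package "cryptography" is installed and that the key has been shared
out of band), the payload is encrypted before it ever reaches the wireless link, so a snooper who cracks
the network key still sees only ciphertext:

# pip install cryptography  (third-party package, assumed available)
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared secret distributed out of band
cipher = Fernet(key)

sensitive = b"card=4111111111111111;amount=99.00"
ciphertext = cipher.encrypt(sensitive)   # what actually travels over the wireless LAN
print(ciphertext)                        # opaque token, useless to a snooper
print(cipher.decrypt(ciphertext))        # recovered only by a holder of the key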
g. System Penetration
Hackers penetrate systems illegally to steal information, modify data, or harm the system. The
following factors are related to system penetration:
i. System holes
Design deficiencies of operating systems or application systems that allow hijacking, security
bypass, data manipulation, privilege escalation, and system access.
iv. IP spoofing
A technique used to gain unauthorized access to computers, whereby hackers send messages to a
computer with a deceived IP address as if it were coming from a trusted host.
vi. Tunneling
A method for circumventing a firewall by hiding a message that would be rejected by the firewall
inside another, acceptable message.
According to Symantec, unpatched operating system (OS) holes are one of the most common ways to
break into a system network; using a worm is also becoming more common. Therefore, the first step
to guard against hackers is to download free patches to fix security holes when OS vendors release
them. Routinely following this step can dramatically improve network security for many companies.
Companies can use patch-management software to automate the distribution of authentic patches from
multiple software vendors throughout the entire organization. Not all patches can work flawlessly
with existing applications, however, and sometimes the patches may conflict with a few applications,
especially the older ones. If possible, patches should first be tested in a simulated environment, and
existing systems should be backed up before the patch is installed.
Companies can use software tools or system-penetration testing to scan the system and assess
systems’ susceptibility and the effectiveness of any countermeasures in place. The testing techniques
must be updated regularly to detect ever-changing threats and vulnerabilities. Other controls to
mitigate system penetration are as follows:
i. Install anti-sniffer software to scan the networks; use encryption to mitigate data-sniffing
threats.
ii. Install all the server patches released by vendors. Servers have incorporated numerous
security measures to prevent IP spoofing attacks.
iii. Install a network firewall so that internal addresses are not revealed externally.
iv. Establish a good system-development policy to guard against a back door/trap door; remove
the back door as soon as the new system development is completed.
v. Design security and audit capabilities to cover all user levels.
h. Telecom Fraud
In the past, telecom fraud involved fraudulent use of telecommunication (telephone) facilities.
Intruders often hacked into a company’s private branch exchange (PBX) and administration or
maintenance port for personal gains, including free long-distance calls, stealing (changing)
information in voicemail boxes, diverting calls illegally, wiretapping, and eavesdropping.
As analog and digital data communications have converged, some companies have utilized the Voice
over Internet Protocol (VOIP) to lower phone bills. The originating and receiving phone numbers are
converted to IP addresses and the PBX is linked to a company’s networked computers, and hackers
can get into systems through PBX or computerized branch exchange (CBX). In addition, every
PBX/CBX system is equipped with a software program that makes it vulnerable to remote-access
fraud, and intruders use sophisticated software to find an easy target. Once a PBX is hacked, hackers
have the same access to a company’s phone system and computer network as do the employees.
Companies should install software to monitor service usage at various points on the network,
including the VOIP gatekeeper, VOIP media controller, and broadcast server. The software can
monitor the system packet performance and the router applications on the converged network. The
software can also automatically alert the responsible person if any abnormal activities have been
detected.
Access privilege and data encryption are good preventive controls against data theft by unauthorized
employees who steal for personal gain. The access controls include the traditional passwords, smart-
card security, and more-sophisticated biometric security devices. Companies can implement some
appropriate controls, including limiting access to proprietary information to authorized employees,
controlling access where proprietary information is available, and conducting background checks on
employees who will have access to proprietary information. There will, however, always be some risk
that authorized employees will misuse data they have access to in the course of their work. Companies
can also work with an experienced intellectual property attorney, and require employees to sign
noncompete and nondisclosure agreements.
j. Financial Fraud
The nature of financial fraud has changed over the years with information technology. System-based
financial fraud includes scam e-mails, identity theft, and fraudulent transactions. With spam, con
artists can send scam e-mails to thousands of people in hours. Victims of the so-called 419 scam are
often promised a lottery winning or a large sum of unclaimed money sitting in an offshore bank
account, but they must pay a “fee” first to get their shares. Anyone who gets this kind of e-mail is
recommended to forward a copy to the U.S. Secret Service.
Companies should review bank statements as soon as they arrive and report any suspicious or
unauthorized electronic transactions. Under the Electronic Fund Transfer Act, if victims notify the
bank of an unauthorized transaction within 60 days of the date the statement is delivered, they are not
liable for any loss. Otherwise, victims could lose all the money in their account, and the unused
portion of the maximum line of credit established for overdrafts.
Phishing is a form of identity theft. Spam is sent claiming to be from an individual’s bank or credit
union or a reputable e-commerce organization. The e-mail urges the recipient to click on a link to
update their personal data. The link takes the victim to a fake website designed to elicit personal or
financial information and transmit it to the criminals.
Users should never give out credit card numbers, PINs, or any personal information in response to
unsolicited e-mail. Instead of clicking a link in a suspicious e-mail, call the institution or type a known,
legitimate URL into the browser to verify an e-mail that claims to be from a bank or financial
institution. When submitting sensitive financial and personal information over the Internet, make sure
the server uses the Secure Sockets Layer protocol (the URL should begin with https:// instead of the
typical http://).
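The check described above can be expressed in a few lines of code. The sketch below uses a
hypothetical bank address; it simply refuses to submit data unless the URL uses the https scheme and
points at the expected host:

from urllib.parse import urlparse

EXPECTED_HOST = "www.examplebank.com"   # hypothetical legitimate host

def safe_to_submit(url):
    """Returns True only if the URL uses SSL/TLS (https) and points at the expected host."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname == EXPECTED_HOST

print(safe_to_submit("https://www.examplebank.com/login"))        # True
print(safe_to_submit("http://www.examplebank.com/login"))         # False - no SSL
print(safe_to_submit("https://examplebank.account-verify.biz/"))  # False - wrong host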
User authentication is the foundation of Web application security, and inadequate authentication may
make applications vulnerable. Companies must install a Web application firewall to ensure that all
security policies are closely followed. The following additional controls can mitigate Web application
abuses:
l. Website Defacement
Website defacement is the sabotage of webpages by hackers inserting or altering information. The
altered webpages may mislead unknowing users and represent negative publicity that could affect a
company’s image and credibility. Web defacement is in essence a system attack, and the attackers
often take advantage of undisclosed system vulnerabilities or unpatched systems.
Network firewalls cannot guard against all web vulnerabilities. Companies should install additional
Web application security to mitigate the defacement risk. All known vulnerabilities must be patched
to prevent unauthorized remote command execution and privilege escalation. It is also important that
only a few authorized users are allowed root access to a website’s contents. Access to different Web
server resources, such as executables, processes, data files, and configuration files, should be
monitored. Commercial website monitoring services are also available.
m. Sabotage
System security incidents are committed by insiders about as often as by outsiders. Some of the
controls discussed above can provide protection against the sabotages committed by outsiders, but no
organization is immune from an employee abusing its trust. For example, Omega Engineering was a
thriving defense manufacturing firm in the 1990s; it used more than 1,000 programs to produce
various products with 500,000 different designs for their customers, including NASA and the U.S.
Navy. On July 31, 1996, Omega Engineering’s server crashed and all of the software programs were
lost. To make matters worse, on the same day the backup tape also disappeared. The investigation
quickly revealed that it was a deliberate sabotage by the former system administrator, Tim Lloyd, who
had been terminated 30 days before the catastrophe. Lloyd designed and planted a time bomb to erase
all the programs on the server. The crash resulted in $10 million in lost revenues and led to 80 layoffs.
When it comes to security, companies often pay attention only to the perimeter of the organization,
not the inside. Sabotage by insiders is often orchestrated when employees know their termination is
coming. In some cases, disgruntled employees are still able to gain access after being terminated. The
2005 insider-threat case study results by CERT/SEI (www.cert.org/archive/pdf/inside
cross051105.pdf) help identify, assess, and manage sabotage threats from insiders. Their key findings
were as follows:
As indicated by the CERT/SEI study, the convenience of remote access facilitates the majority of
sabotage attacks. Another potential threat of unauthorized use is when employees quit or are
terminated but there is no coordination between the personnel department and the computer center. In
some cases, employees still have system access and an e-mail account after they have left an
organization. It is also not unusual that employees know the user IDs and passwords of their
colleagues. Companies can adopt some of the following steps to protect against such threats:
n. Company Awareness
Business operations can be disrupted by many factors, including system security breaches. System
downtime, system penetrations, theft of computing resources, and lost productivity have quickly
become critical system security issues. The financial loss of these security breaches can be significant.
In addition, system security breaches often taint a company’s image and may compromise a
company’s compliance with applicable laws and regulations. The key to protecting a company’s
accounting information system against security breaches is to be well prepared for all possible major
threats. A combination of preventive and detective controls can mitigate security threats.
Security controls
These include:
1. Administrative controls – they include
a. Policies – a policy can be seen as a mechanism for controlling security
b. Administrative procedures – may be put in place by an organization to ensure that users
only do that which they have been authorized to do
c. Legal provisions – serve as security controls and discourage some forms of physical
threats
d. Ethics
2. Logical security controls – measures incorporated within the system to provide protection
from adversaries who have already gained physical access
3. Physical controls – any mechanism that has a physical form e.g. lockups
4. Environmental controls
Administering security
It includes:
Risk analysis
Security planning – a security plan identifies and organizes the security activities of an
organization.
Security policy
Risk Analysis
The process involves:
- Identification of the assets
- Determination of the vulnerabilities
- Estimate the likelihood of exploitation
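The text does not prescribe a particular formula, but a common way to combine these steps is the
annualised loss expectancy, ALE = single loss expectancy x annual rate of occurrence. The sketch
below uses invented assets, vulnerabilities and figures purely for illustration:

# Hypothetical register of assets, vulnerabilities and estimates.
risk_register = [
    {"asset": "customer database", "vulnerability": "SQL injection",
     "single_loss": 50_000, "annual_rate": 0.5},
    {"asset": "web server", "vulnerability": "unpatched OS hole",
     "single_loss": 10_000, "annual_rate": 2.0},
]

for item in risk_register:
    ale = item["single_loss"] * item["annual_rate"]   # annualised loss expectancy
    print(f'{item["asset"]:20} {item["vulnerability"]:20} ALE = {ale:,.0f}')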
Security Policy
Security failures can be costly to business. Losses may be suffered as a result of the failure itself or
costs can be incurred when recovering from the incident, followed by more costs to secure systems and
prevent further failure. A well-defined set of security policies and procedures can prevent losses and
save money.
The information systems security policy is the responsibility of top management of an organization
who delegate its implementation to the appropriate level of management with permanent control. The
policy contributes to the protection of information assets. Its objective is to protect the information
capital against all types of risks, accidental or intentional. An existing and enforced security policy
should ensure systems conformity with laws and regulations, integrity of data, confidentiality and
availability.
SYSTEM INTEGRITY
System integrity begins with selecting and deploying the right hardware and software components to
authenticate a user’s identity—and help prevent others from assuming it. In doing so, it needs to offer
efficient administrative functions to restrict access to administrator-level functions, and give
administrators processes and controls to manage changes to the system. There are many individual
components to system integrity, such as vulnerability assessment, antivirus, and anti-malware
solutions. However, the ultimate goal from an access control standpoint is to prevent the installation
and execution of malicious code—while protecting valuable data—from the outset.
(i) Domain integrity
This testing is really aimed at verifying that the data conform to definitions; that is, that the data items
are all in the correct domains. The major objective of this exercise is to verify that edit and validation
routines are working satisfactorily. These tests are field level based and ensure that the data item has a
legitimate value in the correct range or set.
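A minimal sketch of such a field-level edit and validation routine is shown below; the field names and
permitted ranges are invented for illustration:

VALID_DEPARTMENTS = {"SALES", "HR", "IT", "FINANCE"}   # permitted domain for one field

def domain_errors(record):
    """Returns a list of field-level domain violations found in one record."""
    errors = []
    if not (18 <= record.get("age", -1) <= 70):
        errors.append("age outside permitted range 18-70")
    if record.get("department") not in VALID_DEPARTMENTS:
        errors.append("department not in permitted set")
    return errors

print(domain_errors({"age": 34, "department": "SALES"}))   # []
print(domain_errors({"age": 12, "department": "MARS"}))    # two violations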
(ii) Relational integrity
These tests are performed at the record based level and usually involve calculating and verifying
various calculated fields such as control totals. Examples of their use would be in checking aspects
such as payroll calculations or interest payments. Computerized data frequently have control totals
built into various fields and by the nature of these fields, they are computed and would be subject to
the same type of tests. These tests will also detect direct modification of sensitive data i.e. if someone
has bypassed application programs, as these types of data are often protected with control totals.
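A minimal record-level check in the spirit of the control totals described above, using invented payroll
fields, recomputes the stored net pay from the other fields and flags any record that does not agree:

def net_pay_consistent(record):
    """Recomputes net pay from gross pay and deductions and compares it with the stored value."""
    expected = round(record["gross_pay"] - record["tax"] - record["pension"], 2)
    return expected == record["net_pay"]

payroll = [
    {"employee": "E001", "gross_pay": 3000.00, "tax": 600.00, "pension": 150.00, "net_pay": 2250.00},
    {"employee": "E002", "gross_pay": 2500.00, "tax": 500.00, "pension": 125.00, "net_pay": 1975.00},
]

for rec in payroll:
    if not net_pay_consistent(rec):
        print("relational integrity failure in record", rec["employee"])
# flags E002: 2500 - 500 - 125 = 1875, not 1975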
(iii) Referential integrity
Database software will sometimes offer various procedures for checking or ensuring referential
integrity (mainly offered with hierarchical and network-based databases). Referential integrity checks
involve ensuring that all references to a primary key from another file (called foreign key) actually
exist in their original file. In non-pointer databases e.g. relational databases, referential integrity checks
involve making sure that all foreign keys exist in their original table.
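A minimal sketch of such a foreign-key check, using two invented in-memory tables, verifies that
every department code referenced in the employee table exists in the department table:

departments = {"D01": "Finance", "D02": "Sales"}          # primary keys: department codes

employees = [
    {"emp_no": "E001", "name": "Achieng", "dept": "D01"},
    {"emp_no": "E002", "name": "Mwangi",  "dept": "D07"}, # D07 has no parent row
]

orphans = [e for e in employees if e["dept"] not in departments]
for e in orphans:
    print("referential integrity violation:", e["emp_no"], "references missing", e["dept"])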
Access Control
This is a function implemented at the operating system level and usually also availed at the application
level by the operating system. It controls access to the system and system resources so that only
authorized accesses are allowed, e.g.
It is a form of logical access control, which involves protection of resources from users who have
physical access to the computer system.
The access control reference monitor model has a reference monitor, which intercepts all access
attempts. It is always invoked when the target object is referenced and decides whether to deny or grant
requests as per the rules incorporated within the monitor.
The components of an access control system can be categorized into identification, authentication and
authorization components. Typical operating system based access control mechanisms are:
Identification
Involves establishing identity of the subject (who are you?). Identification can use:
- ID, full name
- Workstation ID, IP address
- Magnetic card (requires a reader)
- Smart card (inbuilt intelligence and computation capability)
Biometrics is the identification based on unique physical or behavioural patterns of people and may
be:
They are quite effective when thresholds are sensible (there is a substantial difference between two
different people) and the physical condition of the person is normal (comparable to when the reference
was first taken). However, they require expensive equipment and are still rare. Buyers are also deterred
by the risk of impersonation or the belief that the devices will be difficult to use, and users dislike being
measured.
Authentication
Involves verification of identity of subject (Are you who you say you are? Prove it!). Personal
authentication may involve:
- Something you know: password, PIN, code phrase
- Something you have: keys, tokens, cards, smart cards
- Something you are: fingerprints, retina patterns, voice patterns
Authorization
Involves determining the access rights to various system objects/resources. The security requirement
to be addressed is the protection against unauthorized access to system resources. There is need to
define an authorization policy as well as implementation mechanisms. An authorization policy defines
the activities permitted or prohibited within the system. Authorization mechanisms implement the
authorization policy and include directories of access rights, access control lists (ACLs) and access
tickets or capabilities.
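A minimal sketch of an access control list of the kind mentioned above, with invented users, objects
and rights, reduces the reference-monitor idea to a single function through which every access request
passes:

# Hypothetical ACL: for each object, the rights granted to each subject.
ACL = {
    "payroll_file":  {"alice": {"read", "write"}, "bob": {"read"}},
    "audit_log":     {"carol": {"read"}},
}

def access_allowed(subject, obj, right):
    """Reference-monitor style check: grant only rights explicitly listed in the ACL."""
    return right in ACL.get(obj, {}).get(subject, set())

print(access_allowed("bob", "payroll_file", "read"))    # True
print(access_allowed("bob", "payroll_file", "write"))   # False - not authorized
print(access_allowed("alice", "audit_log", "read"))     # False - no entry at all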
Logical Security
Logical access into the computer can be gained through several avenues. Each avenue is subject to
appropriate levels of access security. Methods of access include the following:
1. Operator console
These are privileged computer terminals which control most computer operations and functions. To
provide security, these terminals should be located in a suitably controlled location so that physical
access can only be gained by authorized personnel. Most operator consoles do not have strong logical
access controls and provide a high level of computer system access; therefore, the terminal must be
located in a physically secured area.
2. Online terminals
Online access to computer systems through terminals typically requires entry of at least a logon-
identifier (logon-ID) and a password to gain access to the host computer system and may also require
further entry of authentication data for access to application specific systems. Separate security and
access control software may be employed on larger systems to improve the security provided by the
operating system or application system.
4. Dial-up ports
Use of dial-up ports involves hooking a remote terminal or PC to a telephone line and gaining access
to the computer by dialling a telephone number that is directly or indirectly connected to the computer.
Often a modem must interface between the remote terminal and the telephone line to encode and
decode transmissions. Security is achieved by providing a means of identifying the remote user to
determine authorization to access. This may be a dial-back line, use of logon-ID and access control
software or may require a computer operator to verify the identity of the caller and then provide the
connection to the computer.
5. Telecommunications network
Telecommunications networks link a number of computer terminals or PCs to the host computer
through a network of telecommunications lines. The lines can be private (i.e. dedicated to one user) or
public such as a nation’s telephone system. Security should be provided in the same manner as that
applied to online terminals.
Inadequate logical access controls increase an organization’s potential for losses resulting from
exposures. These exposures can result in minor inconveniences or total shutdown of computer
functions. Logical access controls reduce exposure to unauthorized alteration and manipulation of data
and programs. Exposures that exist from accidental or intentional exploitation of logical access control
weaknesses include technical exposures and computer crime.
Technical Exposures
1. Data diddling involves changing data before or as it is being entered into the computer. This is
one of the most common abuses because it requires limited technical knowledge and occurs before
computer security can protect data.
2. Trojan horses involve hiding malicious, fraudulent code in an authorized computer program. This
hidden code will be executed whenever the authorized program is executed. A classic example is
the Trojan horse in the payroll-calculating program that shaves a barely noticeable amount off each
paycheck and credits it to the perpetrator’s payroll account.
3. Rounding down involves drawing off small amounts of money from a computerized transaction
or account and rerouting this amount to the perpetrator’s account. The term ‘rounding down’ refers
to rounding small fractions of a denomination down and transferring these small fractions into the
unauthorized account. Since the amounts are so small, they are rarely noticed.
4. Salami techniques involve slicing small amounts of money from a computerized transaction or
account and are similar to the rounding down technique. The difference is that rounding down skims
only the small fraction that would normally be rounded off (for example, a transaction of 234.392
might be recorded as 234.39, with the fraction diverted), whereas the salami technique truncates the
last few digits of the transaction amount itself, so 234.39 becomes 234.30 or 234.00 depending on
the calculation built into the program.
5. Viruses are malicious program code inserted into other executable code that can self-replicate and
spread from computer to computer, via sharing of computer diskettes, transfer of logic over
telecommunication lines or direct contact with an infected machine or code. A virus can harmlessly
display cute messages on computer terminals, dangerously erase or alter computer files or simply
fill computer memory with junk to a point where the computer can no longer function. An added
danger is that a virus may lie dormant for some time until triggered by a certain event or occurrence,
such as a date (1 January – Happy New Year!) or being copied a pre-specified number of times.
During this time the virus has silently been spreading.
6. Worms are destructive programs that may destroy data or utilize tremendous computer and
communication resources but, unlike viruses, do not attach themselves to other programs. Such
programs can run independently and travel from machine to machine across network
connections. Worms may also have portions of themselves running on many different machines.
7. Logic bombs are similar to computer viruses, but they do not self-replicate. The creation of logic
bombs requires some specialized knowledge, as it involves programming the destruction or
modification of data at a specific time in the future. However, unlike viruses or worms, logic bombs
are very difficult to detect before they blow up; thus, of all the computer crime schemes, they have
the greatest potential for damage. Detonation can be timed to cause maximum damage and to take
place long after the departure of the perpetrator. The logic bomb may also be used as a tool of
extortion, with a ransom being demanded in exchange for disclosure of the location of the bomb.
8. Trap doors are exits out of an authorized program that allow insertion of specific logic, such as
program interrupts, to permit a review of data during processing. These holes also permit insertion
of unauthorized logic.
9. Asynchronous attacks occur in multiprocessing environments where data move asynchronously
(one character at a time with a start and stop signal) across telecommunication lines. As a result,
numerous data transmissions must wait for the line to be free (and flowing in the proper direction)
before being transmitted. Data that is waiting is susceptible to unauthorized accesses called
asynchronous attacks. These attacks, which are usually very small pinlike insertions into cable,
may be committed via hardware and are extremely hard to detect.
10. Data leakage involves siphoning or leaking information out of the computer. This can involve
dumping files to paper or can be as simple as stealing computer reports and tapes.
11. Wire-tapping involves eavesdropping on information being transmitted over telecommunications
lines.
12. Piggybacking is the act of following an authorized person through a secured door or electronically
attaching to an authorized telecommunication link to intercept and possibly alter transmissions.
13. Shut down of the computer can be initiated through terminals or microcomputers connected
directly (online) or indirectly (dial-up lines) to the computer. Only individuals knowing a high-
level systems logon-ID can usually initiate the shut down process. This security measure is
effective only if proper security access controls are in place for the high-level logon-ID and the
telecommunications connections into the computer. Some systems have proven to be vulnerable
to shutting themselves down under certain conditions of overload.
14. Denial of service is an attack that disrupts or completely denies service to legitimate users,
networks, systems or other resources. The intent of any such attack is usually malicious in nature
and often takes little skill because the requisite tools are readily available.
Viruses
Viruses are a significant and a very real logical access issue. The term virus is a generic term applied
to a variety of malicious computer programs. Traditional viruses attach themselves to other executable
code, infect the user’s computer, replicate themselves on the user’s hard disk and then damage data,
hard disk or files. Viruses usually attack four parts of the computer:
Boot and system areas that are needed to start the computer
Data files
Computer viruses are a threat to computers of any type. Their effects can range from the annoying but
harmless prank to damaged files and crashed networks. In today’s environment, networks are the ideal
way to propagate viruses through a system. The greatest risk is from electronic mail (e-mail)
attachments from friends and/or anonymous people through the Internet. There are two major ways to
prevent and detect viruses that infect computers and network systems.
Some of the policy and procedure controls that should be in place are:
Build any system from original, clean master copies. Boot only from original diskettes whose
write protection has always been in place.
Allow no disk to be used until it has been scanned on a stand-alone machine that is used for
no other purpose and is not connected to the network.
Update virus software scanning definitions frequently
Write-protect all diskettes with .EXE or .COM extensions
Have vendors run demonstrations on their machines, not yours
Enforce a rule of not using shareware without first scanning the shareware thoroughly for a
virus
Commercial software is occasionally supplied with a Trojan horse (viruses or worms). Scan
before any new software is installed.
Insist that field technicians scan their disks on a test machine before they use any of their disks
on the system
Ensure that the network administrator uses workstation and server anti-virus software
Ensure that all servers are equipped with an activated current release of the virus detection
software
Create a special master boot record that makes the hard disk inaccessible when booting from
a diskette or CD-ROM. This ensures that the hard disk cannot be contaminated by the diskette
or optical media
Consider encrypting files and then decrypt them before execution
Ensure that bridge, route and gateway updates are authentic. This is a very easy way to place
and hide a Trojan horse.
Backups are a vital element of anti-virus strategy. Be sure to have a sound and effective backup
plan in place. This plan should account for scanning selected backup files for virus infection
once a virus has been detected.
Educate users so they will heed these policies and procedures
Review anti-virus policies and procedures at least once a year
Technical means
Technical methods of preventing viruses can be implemented through hardware and software means.
The following are hardware tactics that can reduce the risk of infection:
Use workstations without floppy disks
Use boot virus protection (i.e. built-in firmware based virus protection)
Use remote booting
Use a hardware based password
Use write protected tabs on floppy disks
Software is by far the most common anti-virus tool. Anti-virus software should primarily be used as a
preventative control. Unless updated periodically, anti-virus software will not be an effective tool
against viruses.
The best way to protect the computer against viruses is to use anti-viral software. There are several
kinds. Two types of scanners are available:
One checks to see if your computer has any files that have been infected with known viruses
The other checks for atypical instructions (such as instructions to modify operating system files)
and prevents completion of the instruction until the user has verified that it is legitimate.
Once a virus has been detected, an eradication program can be used to wipe the virus from the hard
disk. Sometimes eradication programs can kill the virus without having to delete the infected program
or data file, while other times those infected files must be deleted. Still other programs, sometimes
called inoculators, will not allow a program to be run if it contains a virus.
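The first kind of scanner described above compares files against a database of known virus signatures. The fragment below is a deliberately simplified, hypothetical illustration in Python; real anti-virus products use far more sophisticated pattern matching, heuristics and real-time monitoring.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database of SHA-256 hashes of known malicious files
KNOWN_BAD_HASHES = {
    "0" * 64,   # placeholder value, not a real signature
}

def scan_directory(root):
    """Report any file whose hash matches a known-bad signature."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                print(f"ALERT: {path} matches a known virus signature")

scan_directory(".")
```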
Computer crime can be performed with absolutely nothing physically being taken or stolen. Simply
viewing computerized data can provide an offender with enough intelligence to steal ideas or
confidential information (intellectual property).
Committing crimes that exploit the computer and the information it contains can be damaging to the
reputation, morale and very existence of an organization. Loss of customers, embarrassment to
management and legal actions against the organization can be a result.
Legal repercussions – there are numerous privacy and human rights laws an organization should
consider when developing security policies and procedures. These laws can protect the
organization but can also protect the perpetrator from prosecution. In addition, not having proper
security measures could expose the organization to lawsuits from investors and insurers if a
significant loss occurs from a security violation. Most companies also must comply with industry-
specific regulatory agencies.
Loss of credibility or competitive edge – many organizations, especially service firms such as
banks, savings and loans and investment firms, need credibility and public trust to maintain a
competitive edge. A security violation can severely damage this credibility, resulting in loss of
business and prestige.
Sabotage – some perpetrators are not looking for financial gain. They merely want to cause
damage due to dislike of the organization or for self-gratification.
Logical access violators are often the same people who exploit physical exposures, although the skills
needed to exploit logical exposures are more technical and complex. Such people include:
a) Hackers – hackers are typically attempting to test the limits of access restrictions to prove their
ability to overcome the obstacles. They usually do not access a computer with the intent of
destruction; however, this is quite often the result.
b) Employees – both authorized and unauthorized employees
c) Information system personnel – these individuals have the easiest access to computerized
information since they are the custodians of this information. In addition to logical access
controls, good segregation of duties and supervision help reduce logical access violations by
these individuals.
d) End users
e) Former employees
f) Interested or educated outsiders
Competitors
Foreigners
Organized criminals
Crackers (hackers paid by a third party)
Phreakers (hackers attempting access into the telephone/communication system)
Part-time and temporary personnel – remember that office cleaners often have a great
deal of physical access and may well be competent in computing
Vendors and consultants
Accidental ignorant – someone who unknowingly perpetrates a violation
Access control software generally processes access requests in the following way:
Identification of users – users must identify themselves to the access control software such as
name and account number
Authentication – users must prove that they are who they claim to be. Authentication is a two-
way process where the software must first verify the validity of the user and then proceed to
verify prior knowledge information. For example, users may provide something they know
(such as a password), something they have (such as a card or token) or something they are
(such as a fingerprint). Good password practice includes the following guidelines (a simple
validation sketch follows the list):
Ideally, passwords should be five to eight characters in length. Anything shorter is too easy to
guess, anything longer is too hard to remember.
Passwords should allow for a combination of alpha, numeric, upper and lower case and special
characters.
Passwords should not be particularly identifiable with the user (such as first name, last name,
spouse name, pet’s name etc). Some organizations prohibit the use of vowels, making word
association/guessing of passwords more difficult.
The system should not permit previous password(s) to be used after being changed.
Logon-IDs not used after a number of days should be deactivated to prevent possible misuse.
The system should automatically disconnect a logon session if no activity has occurred for a
period of time (one hour). This reduces the risk of misuse of an active logon session left
unattended because the user went to lunch, left home, went to a meeting or otherwise forgot to
logoff. This is often referred to as ‘time out’.
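As a rough illustration, the Python sketch below checks a candidate password against some of the guidelines listed above (length, character mix, identifiable user information and password reuse). The rule set and names are assumptions chosen for illustration, not a prescribed standard.

```python
import re

def validate_password(password, user_details, previous_passwords):
    """Return a list of rule violations for a candidate password (empty list = acceptable)."""
    errors = []
    if not 5 <= len(password) <= 8:
        errors.append("length should be five to eight characters")
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
            and re.search(r"\d", password)):
        errors.append("should combine upper case, lower case and numeric characters")
    if any(detail.lower() in password.lower() for detail in user_details if detail):
        errors.append("should not contain identifiable user information")
    if password in previous_passwords:
        errors.append("previously used passwords may not be reused")
    return errors

# 'Tom' appears in the password, so the user-information rule is flagged
print(validate_password("Tom24xy!", ["tom", "nairobi"], ["Oldpw1!"]))
```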
Terminal security – this security feature restricts the number of terminals that can
access certain transactions based on the physical/logical address of the terminal.
Terminal locks – this security feature prevents turning on a computer terminal until a
key lock is unlocked by a turnkey or card key.
6) Dial-back procedures
When a dial-up line is used, access should be restricted by a dial-back mechanism. Dial-back interrupts
the telecommunications dial-up connection to the computer by dialling back the caller to validate user
authority.
7) Restrict and monitor access to computer features that bypass security
Generally, only system software programmers should have access to these features:
Bypass Label Processing (BLP) – BLP bypasses computer reading of the file label. Since
most access control rules are based on file names (labels), this can bypass access security.
System exits – this system software feature permits the user to perform complex system
maintenance, which may be tailored to a specific environment or company. They often exist
outside of the computer security system and thus are not restricted or reported in their use.
Special system logon-IDs – these logon-IDs are often provided with the computer by the
vendor. The names can be easily determined because they are the same for all similar computer
systems. Passwords should be changed immediately upon installation to secure them.
8) Logging of online activity
Many computer systems can automatically log computer activity initiated through a logon-ID or
computer terminal. This is known as a transaction log. The information can be used to provide a
management/audit trail.
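A minimal sketch of such a transaction log is shown below; it simply appends one timestamped record per user action to an audit file. The field names and record format are assumptions chosen for illustration.

```python
import json
from datetime import datetime, timezone

def log_activity(logon_id, terminal, action, outcome, logfile="transaction.log"):
    """Append one audit-trail record describing a user action."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "logon_id": logon_id,
        "terminal": terminal,
        "action": action,
        "outcome": outcome,
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_activity("jdoe01", "TERM-07", "LOGON", "SUCCESS")
log_activity("jdoe01", "TERM-07", "UPDATE payroll record 4411", "DENIED")
```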
9) Data classification
Computer files, like documents have varying degrees of sensitivity. By assigning classes or levels of
sensitivity to computer files, management can establish guidelines for the level of access control that
should be assigned. Classifications should be simple, such as high, medium and low. End user
managers and the security administrator can then use these classifications to assist in determining
who should be able to access what (a small illustrative sketch follows the classification list below).
A typical classification has four data classifications:
Sensitive – applies to information that requires special precautions to assure the integrity of
the information, by protecting it from unauthorized modification or deletion. It is information
that requires a higher than normal assurance of accuracy and completeness e.g. passwords,
encryption parameters.
Confidential – applies to the most sensitive business information that is intended strictly for
use within an organization. Its unauthorized disclosure could seriously and adversely impact
the organization’s image in the eyes of the public e.g. application program source code, project
documentation etc.
Private – applies to personal information that is intended for use within the organization. Its
unauthorized disclosure could seriously and adversely impact the organization and/or its
customers e.g. customer account data, e-mail messages etc.
Public – applies to data that can be accessed by the public but can be updated/deleted by
authorized people only e.g. company web pages, monetary transaction limit data etc.
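One simple way to make a classification scheme operational is to treat the classes as an ordered hierarchy and compare a user's clearance against the data's class, as in the illustrative Python sketch below. Treating the four classes above as a strict hierarchy is an assumption made purely for this example.

```python
# The four classes treated as an ordered hierarchy, least to most restrictive
# (an assumption made purely for this illustration)
LEVELS = ["public", "private", "confidential", "sensitive"]

def can_access(user_clearance, data_classification):
    """A user may access data classified at or below their own clearance level."""
    return LEVELS.index(user_clearance) >= LEVELS.index(data_classification)

print(can_access("private", "public"))        # True  - public data is open to all readers
print(can_access("private", "confidential"))  # False - insufficient clearance
```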
Possible Perpetrators
These include employees with authorized or unauthorized access.
The most likely source of exposure is from the uninformed, accidental or unknowing person, although
the greatest impact may be from those with malicious or fraudulent intent.
Physical Access Controls
Bolting door locks – these locks require the traditional metal key to gain entry. The key should be
stamped ‘Do not duplicate’.
Combination door locks (cipher locks) – this system uses a numeric keypad or dial to gain entry.
The combination should be changed at regular intervals or whenever an employee with access is
transferred, fired or subject to disciplinary action. This reduces the risk of the combination being
known by unauthorized people.
Electronic door locks – this system uses a magnetic or embedded chip-based plastic card key or
token entered into a sensor reader to gain access. A special code internally stored in the card or
token is read by the sensor device that then activates the door locking mechanism. Electronic door
locks have the following advantages over bolting and combination locks:
o Through the special internal code, cards can be assigned to an identifiable individual.
o Through the special internal code and sensor devices, access can be restricted based on the
individual’s unique access needs. Restriction can be assigned to particular doors or to
particular hours of the day.
o They are difficult to duplicate
o Card entry can be easily deactivated in the event an employee is terminated or a card is
lost or stolen. Silent or audible alarms can be automatically activated if unauthorized entry
is attempted. Issuing, accounting for and retrieving the card keys is an administrative
process that should be carefully controlled. The card key is an important item to retrieve
when an employee leaves the firm.
Biometric door locks – an individual’s unique body features, such as voice, retina, fingerprint or
signature, activate these locks. This system is used in instances when extremely sensitive facilities
must be protected, such as in the military.
Manual logging – all visitors should be required to sign a visitor’s log indicating their name,
company represented, reason for visiting and person to see. Logging typically is at the front
reception desk and entrance to the computer room. Before gaining access, visitors should also be
required to provide verification of identification, such as a driver’s license, business card or vendor
identification tag.
Electronic logging – this is a feature of electronic and biometric security systems. All access can
be logged, with unsuccessful attempts being highlighted.
Identification badges (photo IDs) – badges should be worn and displayed by all personnel.
Visitor badges should be a different colour from employee badges for easy identification.
Sophisticated photo IDs can also be utilized as electronic card keys. Issuing, accounting for and
retrieving the badges is an administrative process that must be carefully controlled.
Video cameras – cameras should be located at strategic points and monitored by security guards.
Sophisticated video cameras can be activated by motion. The video surveillance recording should
be retained for possible future playbacks.
Security guards – guards are very useful if supplemented by video cameras and locked doors.
Guards supplied by an external agency should be bonded to protect the organization from loss.
Controlled visitor access – all visitors should be escorted by a responsible employee. Visitors
include friends, maintenance personnel, computer vendors, consultants (unless long-term, in which
case special guest access may be provided) and external auditors.
Bonded personnel – all service contract personnel, such as cleaning people and off-site storage
services, should be bonded. This does not improve physical security but limits the financial
exposure of the organization.
Deadman doors – this system uses a pair of (two) doors, typically found in entries to facilities
such as computer rooms and document stations. For the second door to operate, the first entry door
must close and lock, with only one person permitted in the holding area. This reduces risk of
piggybacking, when an unauthorized person follows an authorized person through a secured entry.
Not advertising the location of sensitive facilities – facilities such as computer rooms should not
be visible or identifiable from the outside, that is, no windows or directional signs. The building
or department directory should discreetly identify only the general location of the information
processing facility.
Computer terminal locks – these locks secure the device to the desk, prevent the computer from
being turned on, or disengage keyboard recognition, preventing use.
Controlled single entry point – a controlled entry point monitored by a receptionist should be
used by all incoming personnel. Multiple entry points increase the risk of unauthorized entry.
Unnecessary or unused entry points should be eliminated or deadlocked.
Alarm system – an alarm system should be linked to inactive entry points, motion detectors and
the reverse flow of enter or exit only doors. Security personnel should be able to hear the alarm
when activated.
Secured report/document distribution cart – secured carts, such as mail carts, should be covered
and locked and should not be left unattended.
Personnel Issues
Employee responsibilities for security policy are:
Maintaining good physical security by keeping doors locked, safeguarding access keys, not
disclosing access door lock combinations and questioning unfamiliar people
Conforming to local laws and regulations
Adhering to privacy regulations with regard to confidential information e.g. health, legal etc.
Non-employees with access to company systems should be held accountable for security policies and
responsibilities. This includes contract employees, vendors, programmers, analysts, maintenance
personnel and clients.
Segregation of Responsibilities
A traditional security control is to ensure that there are no instances where one individual is solely
responsible for setting, implementing and policing controls and, at the same time, responsible for the
use of the systems. The use of a number of people, all responsible for some part of information system
controls or operations, allows each to act as a check upon another. Since no employee is performing
all the steps in a single transaction, the others involved in the transaction can monitor for accidents and
crime.
Functions that should be separated include:
Systems development
Management of input media
Operating the system
Management of documentation and file archives
Distribution of output
Where possible, to segregate responsibilities fully, no one person should cross these task boundaries.
Associated with this type of security control is the use of rotation of duties and unannounced audits.
Hiring practices – to ensure that the most effective and efficient staff is chosen and that
the company is in compliance with legal requirements. Practices include:
a) Background checks
b) Confidentiality agreements
c) Employee bonding to protect against losses due to theft
d) Conflict of interest agreements
e) Non-compete agreements
Employee handbook – distributed to all employees upon being hired, should explain items
such as
a) Security policies and procedures
b) Company expectations
c) Employee benefits
d) Disciplinary actions
e) Performance evaluations etc.
Network Security
Communication networks (wide area or local area networks) generally include devices connected to
the network, and programs and files supporting the network operations. Control is accomplished
through a network control terminal and specialized communications software.
Network controls include the following:
Network access by system engineers should be closely monitored and reviewed to detect
unauthorized access to the network.
Analysis should be performed to ensure workload balance, fast response time and system
efficiency.
A terminal identification file should be maintained by the communication software to
check the authentication of a terminal when it tries to send or receive messages.
Data encryption should be used where appropriate to protect messages from disclosure
during transmission.
Some common network management and control software include Novell NetWare, Windows NT,
UNIX, NetView, NetPass etc.
LAN security
Local area networks (LANs) facilitate the storage and retrieval of programs and data used by a group
of people. LAN software and practices also need to provide for the security of these programs and data.
Risks associated with the use of LANs centre on unauthorized access to the programs and data they
hold, including access gained remotely over dial-up connections. The LAN security provisions
available depend on the software product, product version and implementation, and on the network
security administrative capabilities the product provides.
Unauthorized dial-in access is a particular concern: an intruder who obtains a valid dial-in connection
can log in from anywhere, pass through Wide Area Network (WAN) links to other systems and
generally cause as much or as little havoc as they like.
To minimize the risk of unauthorized dial-in access, remote users should never store their
passwords in plain text login scripts on notebooks and laptops. Furthermore, portable PCs
should be protected by physical keys and/or basic input output system (BIOS) based
passwords to limit access to data if stolen.
Client/Server Security
A client/server system typically contains numerous access points. Client/server systems utilize
distributed techniques, creating increased risk of access to data and processing. To effectively secure
the client/server environment, all access points should be identified. In mainframe-based applications,
centralized processing techniques require the user to go through one pre-defined route to access all
resources. In a client/server environment, several access points exist, as application data may exist on
the client or the server. Each of these routes must therefore be examined individually and in relation to
each other to determine that no exposures are left unchecked.
In order to increase security in a client/server environment, appropriate control techniques should be
in place at each of these access points.
Internet Threats
The very nature of the Internet makes it vulnerable to attack. It was originally designed to allow for
the freest possible exchange of information, data and files. However, today the freedom carries a price.
Hackers and virus-writers try to attack the Internet and computers connected to the Internet and those
who want to invade other’s privacy attempt to crack into databases of sensitive information or snoop
on information as it travels across Internet routes.
It is therefore important in this situation to understand the risks and security factors that are needed to
ensure proper controls are in place when a company connects to the Internet. There are several areas
of control risks that must be evaluated to determine the adequacy of Internet security controls:
Corporate Internet policies and procedures
Firewall standards
Firewall security
Data security controls
a) Disclosure
It is relatively simple for someone to eavesdrop on a ‘conversation’ taking place over the Internet.
Messages and data traversing the Internet can be seen by other machines including e-mail files,
passwords and in some cases key-strokes as they are being entered in real time.
b) Masquerade
A common attack is a user pretending to be someone else to gain additional privileges or access to
otherwise forbidden data or systems. This can involve a machine being reprogrammed to masquerade
as another machine (such as changing its Internet Protocol – IP address). This is referred to as spoofing.
c) Unauthorized access
Many Internet software packages contain vulnerabilities that render systems subject to attack.
Additionally, many of these systems are large and difficult to configure, resulting in a large percentage
of unauthorized access incidents.
d) Loss of integrity
Just as it is relatively simple to eavesdrop on a conversation, so it is also relatively easy to intercept the
conversation and change some of its contents or to repeat a message. This could have disastrous effects
if, for example, the message was an instruction to a bank to pay money.
e) Denial of service
Denial of service attacks occur when a computer connected to the Internet is inundated (flooded) with
data and/or requests that must be serviced. The machine becomes so tied up with dealing with these
messages that it becomes useless for any other purpose.
It is difficult to assess the impact of the threats described above, but in generic terms the following
types of impact could occur:
Loss of income
Increased cost of recovery (correcting information and re-establishing services)
Increased cost of retrospectively securing systems
Loss of information (critical data, proprietary information, contracts)
Loss of trade secrets
Damage to reputation
Legal and regulatory non-compliance
Failure to meet contractual commitments
Encryption
Encryption is the process of converting a plaintext message into a secure coded form of text called
cipher text that cannot be understood without converting back via decryption (the reverse process) to
plaintext again. This is done via a mathematical function and a special encryption/decryption password
called the key.
The limitations of encryption are that it can’t prevent loss of data and encryption programs can be
compromised. Therefore encryption should be regarded as an essential but incomplete form of access
control that should be incorporated into an organization’s overall computer security program.
Encryption may be symmetric, where the same secret key is used to encrypt and decrypt, or
asymmetric, where two mathematically related keys are used: a private key kept secret by its owner
and a key that can be distributed to other people – the public key. A common form of asymmetric
encryption is RSA (named after its inventors Rivest, Shamir and Adleman).
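As a concrete illustration of encryption and decryption with a shared secret key, the sketch below uses the widely available third-party Python cryptography package; the choice of tool and the sample message are assumptions, since the study text does not prescribe any particular product.

```python
# Requires the third-party 'cryptography' package (pip install cryptography)
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # the secret key shared by sender and receiver
cipher = Fernet(key)

plaintext = b"Transfer 150,000 to account 0012345"
ciphertext = cipher.encrypt(plaintext)   # coded form; unreadable without the key
recovered = cipher.decrypt(ciphertext)   # the reverse process (decryption)

print(ciphertext != plaintext)    # True - the message is no longer readable
print(recovered == plaintext)     # True - decryption restores the original
```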
Firewall security
A firewall is a set of hardware and software equipment placed between an organization’s internal
network and an external network to prevent outsiders from invading private networks.
Companies should build firewalls to protect their networks from attacks. In order to be effective,
firewalls should allow individuals on the corporate network to access the Internet and at the same time
stop hackers or others on the Internet from gaining access to the corporate network to cause damage.
Firewalls are hardware and software combinations that are built using routers, servers and a variety of
software. They should sit in the most vulnerable point between a corporate network and the Internet
and they can be as simple or complex as system administrators want to build them.
There are many different types of firewalls, but many enable organizations to:
Block access to particular sites on the Internet
Prevent certain users from accessing certain servers or services
Monitor communications between internal and external networks
Eavesdrop and record all communications between an internal network and the outside world
to investigate network penetrations or detect internal subversions.
Encrypt packets that are sent between different physical locations within an organization by
creating a virtual private network over the Internet.
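To make the idea of packet filtering concrete, the toy Python sketch below applies an ordered rule list to each packet and denies anything not explicitly allowed. The rules, addresses and port numbers are invented for illustration and do not represent any real configuration.

```python
import ipaddress

# Ordered, invented rule list: the first matching rule decides; anything else is denied
RULES = [
    {"action": "deny",  "src": "203.0.113.0/24", "dst_port": None},  # blocked network
    {"action": "allow", "src": None,             "dst_port": 443},   # HTTPS
    {"action": "allow", "src": None,             "dst_port": 25},    # inbound mail
]

def decide(src_ip, dst_port):
    """Return 'allow' or 'deny' for a packet, defaulting to deny."""
    for rule in RULES:
        src_ok = rule["src"] is None or ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        port_ok = rule["dst_port"] is None or rule["dst_port"] == dst_port
        if src_ok and port_ok:
            return rule["action"]
    return "deny"

print(decide("203.0.113.5", 443))   # deny  - source is on the blocked network
print(decide("198.51.100.7", 443))  # allow - HTTPS permitted
print(decide("198.51.100.7", 23))   # deny  - telnet is not explicitly allowed
```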
However, firewalls also have problems and limitations:
A false sense of security can exist where management feels that no further security checks and
controls are needed on the internal network.
Firewalls are circumvented through the use of modems connecting users to Internet Service
Providers.
Mis-configured firewalls, allowing unknown and dangerous services to pass through freely.
Misunderstanding of what constitutes a firewall e.g. companies claiming to have a firewall
merely having a screening router.
Monitoring activities do not occur on a regular basis i.e. log settings not appropriately applied
and reviewed.
Intrusion Detection Systems (IDS)
Network-based IDSs identify attacks within the network that they are monitoring and issue a warning
to the operator. If a network-based IDS is placed between the Internet and the firewall, it will detect all
attack attempts, whether or not they penetrate the firewall. If the IDS is placed between the firewall
and the corporate network, it will detect only those attacks that get past the firewall, i.e. actual
intruders. The IDS is not a substitute for a firewall, but complements the function of a firewall.
Host-based IDSs are configured for a specific environment and will monitor various internal resources
of the operating system to warn of a possible attack. They can detect the modification of executable
programs, the deletion of files and issue a warning when an attempt is made to use a privileged
command.
Environmental Exposures
Environmental exposures include the following threats:
Fire
Natural disasters – earthquake, volcano, hurricane, tornado
Power failure
Power spike
Air conditioning failure
Electrical shock
Equipment failure
Water damage/flooding – even with facilities located on upper floors of high-rise buildings,
water damage is a risk, typically occurring from broken water pipes
Bomb threat/attack
d) Smoke detectors
They supplement, but do not replace, fire suppression systems. Smoke detectors should be above and below
the ceiling tiles throughout the facility and below the raised computer room floor. They should produce
an audible alarm when activated and be linked to a monitored station (preferably by the fire
department).
Fire suppression systems include:
Water-based systems (sprinkler systems) – effective but unpopular because they damage
equipment
Dry-pipe sprinkling – sprinkler systems that do not have water in the pipes until an
electronic fire alarm activates the water pumps to send water to the dry pipe system.
Halon systems – release pressurized halon gases that remove oxygen from the air, thus
starving the fire. Halon is popular because it is an inert gas and does not damage
equipment.
Carbon dioxide systems – release pressurized carbon dioxide gas into the area protected
to replace the oxygen required for combustion. Unlike halon, however, carbon dioxide is
unable to sustain human life and can therefore not be set to automatic release.
n) Prohibitions against eating, drinking and smoking within the information processing
facility
Food, drink and tobacco use can cause fires, a build-up of contaminants or damage to sensitive
equipment, especially in the case of liquids. They should therefore be prohibited from the information
processing facility. This prohibition should be overt, for example, a sign on the entry door.
Evacuation plans should emphasize human safety, but should not leave information processing
facilities physically unsecured. Procedures should exist for a controlled shutdown of the computer in
an emergency situation, if time permits.
All organizations are exposed to uncertainties, some of which impact the organization in a negative
manner. In order to support the organization, IT security professionals must be able to help their
organizations’ management understand and manage these uncertainties.
Managing uncertainties is not an easy task. Limited resources and an ever-changing landscape of
threats and vulnerabilities make completely mitigating all risks impossible. Therefore, IT security
professionals must have a toolset to assist them in sharing a commonly understood view with IT and
business managers concerning the potential impact of various IT security related threats to the
mission. This toolset needs to be consistent, repeatable, cost-effective and reduce risks to a reasonable
level.
Risk management is nothing new. There are many tools and techniques available for managing
organizational risks. There are even a number of tools and techniques that focus on managing risks to
information systems. This section explores the issue of risk management with respect to information
systems.
Threats
A threat can be defined as the potential for a threat source to exercise (accidentally trigger or
intentionally exploit) a specific vulnerability.
Threat-source: either the intent and method targeted at the intentional exploitation of a vulnerability,
or a situation and method that may accidentally trigger a vulnerability.
The threat is merely the potential for the exercise of a particular vulnerability. Threats in themselves
are not actions. Threats must be coupled with threat-sources to become dangerous.
This is an important distinction when assessing and managing risks, since each threat-source may be
associated with a different likelihood, which, as will be demonstrated, affects risk assessment and risk
management. It is often expedient to incorporate threat-sources into threats.
Vulnerabilities
Vulnerability is a flaw or weakness in system security procedures, design, implementation, or internal
controls that could be exercised (accidentally triggered or intentionally exploited) and result in a
security breach or a violation of the system’s security policy.
Notice that the vulnerability can be a flaw or weakness in any aspect of the system.
Vulnerabilities are not merely flaws in the technical protections provided by the system.
Significant vulnerabilities are often contained in the standard operating procedures that systems
administrators perform, the process that the help desk uses to reset passwords or inadequate log
review. Another area where vulnerabilities may be identified is at the policy level. For instance, a lack
of a clearly defined security testing policy may be directly responsible for the lack of vulnerability
scanning.
Here are a few examples of vulnerabilities related to contingency planning/ disaster recovery:
• Inadequate information system recovery procedures for all processing areas (including networks)
It is vital to manage risks to systems. Understanding risk, and in particular, understanding the specific
risks to a system allow the system owner to protect the information system commensurate with its
value to the organization. The fact is that all organizations have limited resources and risk can never
be reduced to zero. So, understanding risk, especially the magnitude of the risk, allows organizations
to prioritize scarce resources.
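One common way of building such a shared view of risk is to score each risk by likelihood and impact and rank the results, so that scarce resources go to the largest exposures first. The Python sketch below is a generic illustration; the register entries and the 1-5 scoring scale are invented.

```python
# Invented risk register entries: (description, likelihood 1-5, impact 1-5)
risks = [
    ("Unpatched web server exploited from the Internet", 4, 4),
    ("Backup tapes lost in transit",                     2, 5),
    ("Disgruntled employee deletes payroll records",     2, 4),
    ("Laptop stolen, but its disk is encrypted",         3, 1),
]

# Risk exposure = likelihood x impact; rank highest first to direct scarce resources
for description, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{likelihood * impact:>2}  {description}")
```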
Answering questions about threats, vulnerabilities and their potential impact requires an understanding
of the services that support business continuity decisions.
Technology Basics
The September 2001 attack on the World Trade Center in New York City tested the contingency plans
of American businesses to an unanticipated degree. Companies that had business continuity plans and
contracts in place with vendors of recovery services were able to continue business at alternate sites
with minimum downtime and minimum loss of data, and the alternate facilities provided by the
vendors were not overcrowded even in this largest of disasters. Unfortunately, the massive loss of life
and its dramatic impact on co-workers, business processes, and communities was not anticipated. As
organizations throughout the world attempt to return to business as usual, they must not neglect the
very necessary review and updating of their business continuity plans and contracts. Only then will
the lessons of the World Trade Center disaster have value going forward.
• Customers expect supplies and services to continue— or resume rapidly— in all situations.
Even among corporations with business continuity plans, a KPMG study shows that less than one half
meet an acceptable portion of their recovery objectives. The business infrastructure seems to be less
protected than its stewards think it is, and such surprises usually lie in failure to tend the corporate
domain. Two curable causes of disappointing continuity plan performance may be viewed as “spotty
plans” and “plan rust.” Spotty plans suffer from gaps either in the initial continuity plan or in the
current plan; plan rust develops from lack of exercise (testing).
A business continuity plan, adequately supported throughout the organization, embodies the strategic
framework for a corporate culture that embraces a variety of tactics to mitigate risks that might cause:
• Asset loss
• Regulatory liability
• Customer service failure
• Damage to reputation or brand
These alignment and analysis steps are necessary to obtain executive sponsorship and the commitment
of resources from all stakeholders. Without a basis of business impact analysis and risk assessment,
the plan cannot succeed and may not even be developed.
Here, attention to detail and active participation by all stakeholders ensure the development of a plan
worth implementing. The plan itself must include the recovery strategy with all of its detailed
components and the test plan.
The best plan is only as effective as it is current. Every tactic of business resumption and recovery
must be kept up to date and tested regularly.
Types of Plans
The separate plans that make up a business continuity plan include:
c. Contingency plan
to manage an external event that has far-reaching impact on the business.
Service Options
One significant trend among business continuity service vendors is to focus on business continuity as
a whole. Recovery itself must be speedy (under 24 hours) for high-availability systems— and the
facilities must provide continuity not only of the data center (the “glass house”), but also of all critical
aspects of its clients’ businesses. This focus provides clients a more integrated service while allowing
the vendor to maintain better account control.
Many service providers offer combinations of tactical consulting with business continuity planning
and management software, sometimes including full continuity management services and hot-site
facilities.
Hardware vendors may combine continuity planning consultancy with rapid hardware replacement
shipment, mobile-site delivery, or hot-site facilities.
Communications and networking vendors may offer high-availability networking and rapid recovery
solutions with tactical consulting.
• Product-Independent Consulting.
Consultants who provide analyses, audits, and tactical recommendations based upon such studies
offer objectivity in the development of the specifications a company should use to select business
continuity products and services.
Virtually all hot-site vendors offer some form of PC-based disaster recovery plan development tool. In
many cases (like consulting services), these packages are provided to a client organization as an
enticement to acquire full hot-site services.
Recovery Assistance
Stand-alone considerations for offsite recovery remain a significant part of the continuity management
strategy. Specific types of service may be combined to provide the exact package any company
specifies:
• OEM Insurance.
Hardware companies may offer a form of insurance guaranteeing that they will replace damaged
computer equipment with a system of equal or greater processing capacity within a specified period of
time. The insurance cost is usually six to eight percent of the monthly maintenance bill.
• Quick Ship.
Most third-party leasing vendors provide guaranteed rapid shipment of replacement hardware as a
recovery option. Customers pay a priority equipment search fee and the normal leasing charges plus a
premium when they request shipment.
Creating and maintaining a BCP helps ensure that an institution has the resources and information
needed to deal with such emergencies. A BCP typically includes the following elements:
a) BCP Governance
b) Business Impact Analysis (BIA)
c) Plans, measures, and arrangements for business continuity
d) Readiness procedures
e) Quality assurance techniques (exercises, maintenance and auditing)
Establish control
A BCP contains a governance structure, often in the form of a committee, that ensures senior
management commitment and defines senior management roles and responsibilities.
The BCP senior management committee is responsible for the oversight, initiation, planning,
approval, testing and audit of the BCP. It also implements the BCP, coordinates activities, approves
the BIA survey, oversees the creation of continuity plans and reviews the results of quality assurance
activities.
Executive sponsor has overall responsibility for the BCP committee; elicits senior
management's support and direction; and ensures that adequate funding is available for the
BCP program.
BCP Coordinator secures senior management's support; estimates funding requirements;
develops BCP policy; coordinates and oversees the BIA process; ensures effective participant
input; coordinates and oversees the development of plans and arrangements for business
continuity; establishes working groups and teams and defines their responsibilities;
coordinates appropriate training; and provides for regular review, testing and audit of the
BCP.
Security Officer works with the coordinator to ensure that all aspects of the BCP meet the
security requirements of the organization.
Chief Information Officer (CIO) cooperates closely with the BCP coordinator and IT
specialists to plan for effective and harmonized continuity.
Business unit representatives provide input, and assist in performing and analyzing the results
of the business impact analysis.
The BCP committee is commonly co-chaired by the executive sponsor and the coordinator.
g. Insurance requirements
Since few organizations can afford to pay the full costs of a recovery, having insurance ensures that
recovery is fully or partially financed.
When considering insurance options, decide what threats to cover. It is important to use the BIA to
help decide both what needs insurance coverage, and the corresponding level of coverage. Some
aspects of an operation may be overinsured or underinsured. Minimize the possibility of overlooking
a scenario, and ensure coverage for all eventualities.
Document the level of coverage of your institutional policy, and examine the policy for uninsured
areas and non-specified levels of coverage. Property insurance may not cover all perils (steam
explosion, water damage, and damage from excessive ice and snow not removed by the owner).
Coverage for such eventualities is available as an extension in the policy.
h. Ranking
Once all relevant information has been collected and assembled, rankings for the critical business
services or products can be produced. Ranking is based on the potential loss of revenue, time of
recovery and severity of impact a disruption would cause. Minimum service levels and maximum
allowable downtimes are then determined.
i. Identify dependencies
It is important to identify the internal and external dependencies of critical services or products, since
service delivery relies on those dependencies.
Internal dependencies include employee availability, corporate assets such as equipment, facilities,
computer applications, data, tools, vehicles, and support services such as finance, human resources,
security and information technology support.
External dependencies include suppliers, any external corporate assets such as equipment, facilities,
computer applications, data, tools, vehicles, and any external support services such as facility
management, utilities, communications, transportation, finance institutions, insurance providers,
government services, legal services, and health and safety service.
For example, an organization that relies on internal and external telecommunications to function
effectively can minimize the impact of communications failures by using alternate communications
networks or installing redundant systems.
Another example would be a company that uses paper forms to keep track of inventory until
computers or servers are repaired, or electrical service is restored. For other institutions, such as large
financial firms, any computer disruptions may be unacceptable, and an alternate site and data
replication technology must be used.
The risks and benefits of each possible option for the plan should be considered, keeping cost,
flexibility and probable disruption scenarios in mind. For each critical service or product, choose the
most realistic and effective options when creating the overall plan.
4. Response preparation
Proper response to a crisis for the organization requires teams to lead and support recovery and
response operations. Team members should be selected from trained and experienced personnel who
are knowledgeable about their responsibilities.
The number and scope of teams will vary depending on the organization's size, function and
structure.
For the teams to function in spite of personnel loss or unavailability, it may be necessary to multitask
teams and provide cross-team training.
5. Alternate facilities
If an organization's main facility or Information Technology assets, networks and applications are
lost, an alternate facility should be available. There are three types of alternate facility:
Cold site is an alternate facility that is not furnished and equipped for operation. Proper equipment
and furnishings must be installed before operations can begin, and a substantial time and effort is
required to make a cold site fully operational. Cold sites are the least expensive option.
Warm site is an alternate facility that is electronically prepared and almost completely equipped and
furnished for operation. It can be fully operational within several hours. Warm sites are more
expensive than cold sites.
Hot site is fully equipped, furnished, and often even fully staffed. Hot sites can be activated within
minutes or seconds. Hot sites are the most expensive option.
When considering the type of alternate facility, consider all factors, including threats and risks,
maximum allowable downtime and cost.
For security reasons, some organizations employ hardened alternate sites. Hardened sites contain
security features that minimize disruptions. Hardened sites may have alternate power supplies; back-
up generation capability; high levels of physical security; and protection from electronic surveillance
or intrusion.
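As a purely hypothetical illustration, the maximum allowable downtime identified in the BIA could be used to suggest which of the three facility types to consider; the thresholds in the Python sketch below are invented, not drawn from the text.

```python
def suggest_site(max_allowable_downtime_hours):
    """Map tolerated downtime to a facility type (thresholds are illustrative only)."""
    if max_allowable_downtime_hours < 1:
        return "hot site"     # activated within minutes, highest cost
    if max_allowable_downtime_hours <= 24:
        return "warm site"    # operational within several hours
    return "cold site"        # cheapest, but substantial set-up time and effort

for hours in (0.5, 8, 72):
    print(f"{hours} hours tolerated -> {suggest_site(hours)}")
```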
Readiness Procedures
Readiness procedures include the following:
1. Training
Business continuity plans can be smoothly and effectively implemented by:
Having all employees and staff briefed on the contents of the BCP and aware of their
individual responsibilities
Having employees with direct responsibilities trained for the tasks they will be required to
perform, and made aware of other teams' functions.
2. Exercises
After training, exercises should be developed and scheduled in order to achieve and maintain high
levels of competence and readiness. While exercises are time and resource consuming, they are the
best method for validating a plan. The following items should be incorporated when planning an
exercise:
a. Goal
The part of the BCP to be tested.
b. Objectives
The anticipated results. Objectives should be challenging, specific, measurable, achievable, realistic
and timely.
c. Scope
Identifies the departments or organizations involved, the geographical area, and the test conditions
and presentation.
e. Participant Instructions
Explains that the exercise provides an opportunity to test procedures before an actual disaster.
f. Exercise Narrative
Gives participants the necessary background information, sets the environment and prepares
participants for action. It is important to include factors such as time, location, method of discovery
and sequence of events, whether events are finished or still in progress, initial damage reports and any
external conditions.
g. Communications for Participants
Enhanced realism can be achieved by giving participants access to emergency contact personnel who
share in the exercise. Messages can also be passed to participants during an exercise to alter or create
new conditions.
The complexity of an exercise can also be adjusted by focusing it on one part of the BCP instead of
involving the entire organization.
j. Internal review
It is recommended that organizations review their BCP:
External audit
a) Response
b) Continuation of critical services
c) Recovery and restoration
a) Response
Incident response involves the deployment of teams, plans, measures and arrangements. The
following tasks are accomplished during the response phase:
a) Incident management
b) Communications management
c) Operations management
a) Incident management
Incident management includes the following measures:
b) Communications management
Communications management is essential to control rumors, maintain contact with the media,
emergency services and vendors, and assure employees, the public and other affected stakeholders.
Communications management requirements may necessitate building redundancies into
communications systems and creating a communications plan to adequately address all requirements.
c) Operations management
An Emergency Operations Center (EOC) can be used to manage operations in the event of a
disruption. Having a centralized EOC where information and resources can be coordinated, managed
and documented helps ensure effective and efficient response.
b) Continuation
Ensure that all time-sensitive critical services or products are continuously delivered or not disrupted
for longer than is permissible.
c) Recovery and restoration
Recovery and restoration tasks include:
Re-deploying personnel
Deciding whether to repair the facility, relocate to an alternate site or build a new facility
Acquiring the additional resources necessary for restoring business operations
Re-establishing normal operations
Resuming operations at pre-disruption levels
Conclusion
When critical services and products cannot be delivered, consequences can be severe. All
organizations are at risk and face potential disaster if unprepared. A business continuity plan is a tool
that allows institutions to not only to moderate risk, but also continuously deliver products and
services despite disruption.
REVISION EXERCISES
1. Discuss some of the information system threats
2. How can an organization control some of the information threats it faces?
3. A trap door is a secret and undocumented entry point within a program which typically
bypasses normal methods of authentication. It is usually included for debugging purposes but
may be forgotten or left in deliberately, and trap doors can also be inserted by intruders who
have gained access. Suggest four counter measures for controlling trap doors.
4. Define system integrity
5. Define intrusion detection system.
6. What is a firewall and what functions does it perform in relation to organizational network
security?
7. What are some of the vulnerabilities in a contingency plan?
8. Briefly describe three advantages of implementing an online banking system
9. Identify six types of operational information systems in a bank.
10. Define the following terms:
(i) Virus
(ii) Worm
(iii) Logic bomb
(iv) Denial of service
11. Identify four hardware tactics of controlling viruses in an organization.
12. Briefly describe the following systems:
(i) CAD/CAM
(ii) Image Management Software
(iii) Automated Materials Handling Software
(iv) CIM
13. (a) List ten controls over environmental exposures.
(b) What is meant by the term hacking? Identify four exposures that can be caused by
hackers.
(c) Describe three major factors on which the vulnerability of a system to hacking depends.
CHAPTER 9
INTRODUCTION
There is a frequently used expression that emphasizes that information has no ethics. The ethical
aspect of organizations and the manner in which information is managed resides with the values that
are inherent in the people that comprise the organization. The manner in which information is used is
dependent on the ethics and beliefs of the people that make up the organization, especially the
organization’s leadership. It has become increasingly clear that information is a valuable
organizational resource that must be carefully safeguarded and effectively managed just as other
organizational resources are managed. Information cannot secure itself or protect itself from phishers,
spyware, or identity thieves.
In general, people have become much more technologically savvy. Largely due to the dramatically
increased scope of information available via the Internet, the ease of access to information, and the
broadened scope of computer literacy, the security of information and the privacy of individuals have
become areas of significant concern. Concerns about security and privacy as well as ethical dilemmas
dominate our daily lives. As a result of personal concerns and fears, and the rapid increase of theft of
personal information, organizations have developed and / or revised codes of ethical conduct.
Simultaneously, our government agencies have enacted laws and legislation that are specifically
related to ensuring the privacy and security of information and individuals.
Ethics refers to the principles of right and wrong that individuals use to make choices that guide their
behaviors. IT can be used to achieve social progress, but it can also be used to commit crimes and
threaten cherished social values. Ethical issues are governed by the general norms of behaviour and
by specific codes of ethics. Ethical considerations go beyond legal liability.
Knowledge of ethics as it applies to the issues arising from the development and use of information
systems helps us make decisions in our professional life. Professional knowledge is generally
assumed to confer a special responsibility within its domain. This is why the professions have evolved
codes of ethics, that is, sets of principles intended to guide the conduct of the members of the
profession.
End users and IS professionals can live up to their ethical responsibilities by voluntarily following the
guidelines set out in a code of conduct; for example, you can be a responsible end user by following such guidelines.
Computer Ethics
Although ethical decision-making is a thoughtful process based on one’s own fundamental
principles, we need codes of ethics and professional conduct for several reasons.
The following issues distinguish computing professionals’ ethics from other professionals’ ethics.
Computing (automation) affects such a large segment of the society (personal,
professional, business, government, medical, industry, research, education, entertainment,
law, agriculture, science, art, etc); it changes the very fabric of society.
Information technology is a very public business
Computing is a young discipline
It changes relationships between: people, businesses, industries, governments, etc
o Communication is faster
o Data can be fragile: it may be insecure, invalid, outdated, leaked, lost,
unrecoverable, misdirected, copied, stolen, misrepresented etc.
o The well-being of people, businesses, governments, and social agencies may be
jeopardized through faulty computing systems and/or unethical behaviour by
computing professionals
o Computing systems can change the way people work: they can make people more
productive but can also isolate them from one another
o Computing could conceivably create a society divided into a lower and an upper class
o People can lose their identity in cyberspace
o Honour confidentiality
The principle of honesty extends to issues of confidentiality of information whenever one has made an
explicit promise to honour confidentiality or, implicitly, when private information not directly related
to the performance of one’s duties becomes available. The ethical concern is to respect all obligations
of confidentiality to employers, clients, and users unless discharged from such obligations by
requirements of the law or other principles of this code.
o Strive to achieve the highest quality, effectiveness and dignity in both the process and
product of professional work.
o Acquire and maintain professional competence
o Know and respect existing laws pertaining to professional work
o Accept and provide appropriate professional review
o Give comprehensive and thorough evaluations of computer systems and their impacts,
including analysis of possible risks.
o Honour contracts, agreements and assigned responsibilities
o Improve public understanding of computing and its consequences
o Access computing and communication resources only when authorized to do so
Software engineers shall commit themselves to making the analysis, specification, design,
development, testing and maintenance of software a beneficial and respected profession. In accordance
with their commitment to the health, safety and welfare of the public, software engineers shall adhere
to the following eight principles.
a) Public – software engineers shall act consistently with the public interest.
b) Client and employer – software engineers shall act in a manner that is in the best interests
of their client and employer, consistent with the public interest.
c) Product – software engineers shall ensure that their products and related modifications
meet the highest professional standards possible.
d) Judgment – software engineers shall maintain integrity and independence in their
professional judgment.
e) Management – software engineering managers and leaders shall subscribe to and promote
an ethical approach to the management of software development and maintenance.
f) Profession – software engineers shall advance the integrity and reputation of the
profession consistent with the public interest.
g) Colleagues – software engineers shall be fair to and supportive of their colleagues
h) Self – software engineers shall participate in lifelong learning regarding the practice of
their profession and shall promote an ethical approach to the practice of the profession.
Ethical Theories
Ethical theories give us the foundation from which we can determine what course of action to take
when an ethical issue is involved. At the source of ethics lies the idea of reciprocity. There are two
fundamental approaches to ethical reasoning:
1. Consequentialist theories
These theories tell us to choose the action with the best possible consequences. Thus, the utilitarian
theory that represents this approach holds that our chosen action should produce the greatest overall
good for the greatest number of people affected by our decision. This approach is often difficult to
apply, since it is not easy to define what is good, or how to measure and compare the resulting good.
2. Deontological (duty-based) theories
These theories argue that it is our duty to do what is right. Your actions should be such that they could
serve as a model of behaviour for others - and, in particular, you should act as you would want others
to act toward you. Our fundamental duty is to treat others with respect, and thus not to treat them
solely as a means to our own purposes.
Treating others with respect means not violating their rights. The principal individual rights are:
1. Privacy
2. Accuracy
3. Property
4. Access
Tracing an ethical issue to its source, and understanding which individual rights could be violated,
helps us understand the issue.
1. Privacy
Privacy is the right of individuals to retain certain information about themselves without disclosure
and to have any information collected about them with their consent protected against unauthorized
access.
The Privacy Act serves as a guideline for a number of ethics codes adopted by various organizations.
The Act specifies the limitations on the data records that can be kept about individuals. The following
are the principal privacy safeguards specified:
2. No use can be made of the records for other than the original purposes without the individual’s
consent.
3. The individual has the right of inspection and correction of records pertaining to him or her.
4. The collecting agency is responsible for the integrity of the record-keeping system
The power of information technology to store and retrieve information can have a negative effect on
the right to privacy of every individual. Computers and related technologies enable the creation of
massive databases containing minute details of our lives which can be assembled at a reasonable cost
and can be made accessible anywhere and at any time over telecommunications networks throughout
the world.
i. Database matching
Database matching makes it possible to merge separate facts collected about an individual in several
databases. If minute facts about a person are put together in this fashion in a context unrelated to the
purpose of the data collection and without the individual's consent or ability to rectify inaccuracies,
serious damage to the rights of the individual may result.
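The mechanics of database matching can be illustrated with a short Python sketch that merges facts held about the same person in two separate databases using a shared identifier. The databases, identifiers and fields below are invented for illustration.

# Minimal sketch of database matching: facts about the same person held in
# two unrelated databases are merged on a shared identifier (here a national
# ID number). All records and field names are hypothetical.

health_db = {
    "ID-1001": {"name": "A. Mwangi", "clinic_visits": 7},
}
credit_db = {
    "ID-1001": {"name": "A. Mwangi", "loan_balance": 250_000},
}

def match_records(db_a, db_b):
    """Return a combined profile for every identifier found in both databases."""
    merged = {}
    for person_id in db_a.keys() & db_b.keys():
        profile = {}
        profile.update(db_a[person_id])
        profile.update(db_b[person_id])
        merged[person_id] = profile
    return merged

print(match_records(health_db, credit_db))
# {'ID-1001': {'name': 'A. Mwangi', 'clinic_visits': 7, 'loan_balance': 250000}}

A profile assembled in this way combines details collected for entirely different purposes, which is precisely why database matching raises the privacy concerns discussed above.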
Legislation and enforcement in the area of privacy in the United States are behind those in a number
of other countries. The countries of the European Union offer particularly extensive legal safeguards
of privacy. In the environment of business globalization, this creates difficulties in the area of
transborder data flow, or transfer of data across national boundaries. Countries with more stringent
measures for privacy protection object to a transfer of personal data into states where this
protection is more lax. The United Nations has stated the minimum privacy guarantees recommended
for incorporation into national legislation.
Privacy protection relies on the technical security measures and other controls that limit access to
databases and other information stored in computer memories or transmitted over the
telecommunication networks.
2. Accuracy
Pervasive use of information in our societal affairs means that we have become more vulnerable to
misinformation. Accurate information is error-free, complete, and relevant to the decisions that are to
be based on it.
2. A professional should indicate to his or her employer the consequences to be expected if his or her
judgment is overruled
3. System safeguards, such as control audits are necessary to maintain information accuracy. Regular
audits of data quality should be performed and acted upon.
4. Individuals should be given an opportunity to correct inaccurate information held about them in
databases.
5. Contents of databases containing data about individuals should be reviewed at frequent intervals,
with obsolete data discarded.
3. Property
The right to property is largely secured in the legal domain. However, the intangibility of information
is at the source of dilemmas which take clarity away from the laws, moving many problems into the
ethical domain. At issue primarily are the rights to intellectual property: the intangible property that
results from an individual's or a corporation's creative activity.
a. Copyright
A method of protecting intellectual property that protects the form of expression (for example, a given
program) rather than the idea itself (for example, an algorithm).
b. Patent
It is a method of protecting intellectual property that protects a non-obvious discovery falling within
the subject matter of the Patent Act.
c. Trade secret
Intellectual property protected by a license or a non-disclosure agreement
Computer programs are valuable property and thus are the subject of theft from computer systems.
Unauthorized copying of software (software piracy) is a major form of software theft because
software is intellectual property which is protected by copyright law and user licensing agreements.
4. Access
It is the hallmark of an information society that most of its workforce is employed in the handling of
information and most of the goods and services available for consumption are information-related.
Three necessities for access to the benefits of an information society include:
1. Literacy and the skills needed to deal with information
2. Access to information technology
3. Access to information
One should strive to broaden the access of individuals to the benefits of information society. This
implies broadening access to skills needed to deal with information by further enabling literacy,
access to information technology, and the appropriate access to information itself.
Intensive work is being done on developing assistive technologies - specialized technologies that
enhance access of the handicapped to information technology and, in many cases, to the world at
large.
2. They feel a sense of responsibility for the results of their work and have a sense of autonomy and
control
Some of the negative effects of information technology include:
1. Use of computers has displaced workers in middle management (whose primary purpose
was to gather and transfer information) and in clerical jobs.
2. Some categories of work have virtually disappeared which has created unemployment for a
number of workers
3. May create a permanent underclass that will not be able to compete in the job market
4. Computer crime is a growing threat (money theft, service theft, software theft, data
alteration or theft, computer viruses, malicious access, crime on the internet).
5. Health issues
6. Societal issues (privacy, accuracy, property, and access)
Some of the positive effects of information technology include:
Health issues - the use of technology in the workplace raises a variety of health issues. Heavy use of
computers is reportedly causing health problems such as:
1. Job stress
2. Damaged arm and neck muscles
3. Eye strain
4. Radiation exposure
5. Death by computer-caused accidents
Ergonomics - solutions to some health problems are based on the science of ergonomics, sometimes
called human factors engineering. The goal of ergonomics is to design healthy work environments that
are safe, comfortable, and pleasant for people to work in, thus increasing employee morale and
productivity.
Ergonomics stresses the healthy design of the workplace, workstations, computers and other
machines, and even software packages. Other health issues may require ergonomic solutions
emphasizing job design, rather than workplace design.
Ethical behaviour of employees is highly dependent on the corporate values and norms - on the
corporate culture as a whole. Open debate of ethical issues in the workplace and continuing self-
analysis help keep ethical issues in focus. Many corporations have codes of ethics and enforce them as
part of a general posture of social responsibility.
Morality is a social attribute. Whatever I myself count as 'moral' is not decisive, because I am the
'doer' and the judgement comes from my own ego. Only a person I have 'acted upon' can say whether
he felt my action to be 'moral' or not; he can teach me, and through that I can change.
Meaning, morality cannot exist in a singular dimension (between me and myself), but is sustained in
duality, in relativity, when two forces gravitate towards each other (when my friend teaches me about
morality). Hence the 'moral dimension' can be revealed only within a group of people.
Laudon proposes five (5) moral dimensions of the information age:
1) Information rights and obligations
Technology and information systems threaten the privacy of individuals by making invasion of
privacy cheap, efficient and effective.
Due process requires the existence of a set of rules or laws that clearly define how information about
individuals is to be treated, and that appeal mechanisms are available.
2) Property rights
How do the classical concepts of patent and intellectual property apply to digital technology? What
are these rights and how can they be protected? Information technology has made the protection of
property more difficult because it is very easy to copy or distribute information over computer
networks. Intellectual property is subject to various protections under three different legal traditions:
Trade secrets: Any intellectual work product used for business purposes may be classified as a trade
secret.
Patents: A patent gives the holder, for 17 years, an exclusive monopoly on the ideas on which an
invention is based.
4) Quality systems
What standards of data and information processing should be required to ensure the protection of
individual rights and the safety of society? Individuals and organizations can be held responsible for
avoidable and foreseeable consequences that they have an obligation to perceive and correct.
5) Quality of life
What values should be preserved and protected in a society based on information and knowledge?
Which institutions should we protect, and which values should be protected? The negative social
costs of introducing information technologies and systems are growing along with the power of the
technology. Computers and information technologies can destroy valuable elements of culture and
society even while providing benefits.
These five dimensions provide a useful guide to the ethical questions a company should consider and
answer when introducing a new technology.
Data is a formal representation of concepts, facts or instructions. Information is the meaning that data
has for human beings. Data has, therefore, two different aspects: as potential information for human
beings or as instructions meant for a computer.
Information is not material, but a process or relationship that occurs between a person's mind and
some sort of stimulus. Information, therefore, is a subjective notion that can be drawn from its
objective representation which we call data.
Different information may be received from the same data. As in the various natural languages the
same word may have different meanings, so in computer programming the same byte or set of digits
(e.g. 01100010) may serve as a carrier of different content.
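This can be demonstrated with a few lines of Python: the same bit pattern 01100010 can be read as the number 98 or as the character 'b', depending on the interpretation applied to it.

# The same bit pattern carries different information depending on interpretation.
bits = "01100010"

as_integer = int(bits, 2)        # 98 when read as an unsigned binary number
as_character = chr(as_integer)   # 'b' when read as an ASCII/Unicode code point

print(as_integer)    # 98
print(as_character)  # b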
The new legal doctrine of information law and law on information technology recognises information
as a third fundamental factor besides matter and energy. This concept realises that modern
information technology alters the characteristics of information, especially by strengthening its
importance and by treating it as an active factor that works without human intervention in automatic
processing systems. In this new approach, it is obvious that the legal evaluation of corporeal and
incorporeal (information) objects differs considerably.
Information, being an intangible entity that can be possessed, shared and reproduced by many, is not
capable of being property in the way most corporeal objects are. Unlike corporeal objects, which are
more exclusively attributed to certain persons, information is rather a public good. As such it must
in principle flow freely in a free society. This basic principle of free flow of information is essential
for the economic and political system, and indispensable for the government's accountability and the
maintenance of a democratic order.
A second difference between the legal regime of tangibles and intangibles is that the protection of
information has not only to consider the economic interests of its proprietor or holder, but at the same
time must preserve the interests of those, who are concerned with the contents of information - an
aspect resulting in new issues of privacy protection.
A third difference originates from the vulnerability of data to manipulation, interception and erasure -
properties that constitute a major concern of computer security and of the criminal law provisions on
computer crime.
Generally, access to government information can be defined as the availability for inspection or
copying of both records and recordings possessed or controlled by a public authority. This mechanism
appeared for the first time in history in eighteenth-century Sweden, with the passage of the Act on
Freedom of the Press (1766). After 1945 this regulatory approach was followed in other Scandinavian
countries, in the United States (since 1966, when the Freedom of Information Act was enacted), and
in several other countries, among them Australia, Canada, France, the Netherlands, and New
Zealand. Some other countries have constitutional clauses relating to a right of access, but not always
implementing legislation.
The route by which the promotion of the right of access to official information has become a strong
political issue is varied. Initially, the public's right to government information was found to be
closely related to the concept of human rights. Because of its importance for democratic society, the
public's right to information was even acknowledged to constitute a third generation of human rights,
after the civil and political rights of the eighteenth century, and the economic and social rights of the
first half of the twentieth century. As was stressed in the Council of Europe Recommendation on
"Access by the Public to Government Records and Freedom of Information": "A parliamentary
democracy can function adequately only if people in general and their elected representatives are fully
informed".
The most recent emphasis, however, is on the commercial rather than human rights aspect of public
sector information. There is now a widespread recognition by the private sector of the commercial
value of much government information. Large data sets, such as land registers, company registers,
demographic statistics, and topographic information (maps) are routinely produced as a by-product of
the day-to-day functioning of public administration. Information is not an end in itself. Sound and
comprehensive information is needed if government is to frame workable public policies, plan
effective services and distribute resources fairly and equitably. Government information, therefore,
constitutes a resource of considerable importance. The potential of such data for exploitation via the
digital network was noted and encouraged.
Impact of Computerisation
Over the 1970s and 1980s, when computerisation of public sector information systems in the most
developed countries was in its infancy, there were fears that government agencies would use
computerisation as a technology of secrecy rather than a technology of freedom.
In fact, in some countries computerisation of government information had a strong impact on the way
the right of public access has been interpreted by the authorities. For example, when new
programming was necessary to extract information from computer systems, agencies and courts have
sometimes held that such programming is analogous to record creation, and is therefore not required
under the freedom of information laws, which only oblige agencies to search for available records. It is a
common feature of these laws to grant access only to information which is available or can be made
available through reasonable effort.
As electronic records became more common, the freedom of information laws proved to be less useful
in the new environment. Because the wording of these laws usually provides access to paper records,
an authority was not obliged to accommodate a requester's preference for access in an electronic form,
for example a copy on computer tape or disk. There are well-known cases, especially in the United
States, of a government agency refusing to make computerised records available to the party
concerned.
Today, in the United States these definitional problems have been successfully solved. With the
adoption of the Electronic Freedom of Information Act Amendments of 1996, government
information maintained in electronic format has become accessible to the public on an equal footing
with paper-based documents. Although some national legislation still does not allow requesters to
obtain data in machine-readable format, the commercialisation of public sector information is a
present development both in the United States and in most countries of Western Europe. Moreover,
due to the traditional concept of the right of access as a right to request the handing out of identified
documents, the right to search for documents has so far not been a recognised part of the principle of
public domain.
Fast-growing information networks, powerful search engines and, generally speaking, the expanding
retrieval possibilities of electronic information increase the significance of search rights as an integral
element of the traditional right of access.
New developments in hardware and software technology, such as relational databases and hypertext, not
only enhance computer flexibility and responsiveness to unanticipated forms of requests, but also make
it easy to compile and format information for network access. The cost in money and effort to share
information is much lower. As a result, public access to government information can be enhanced.
The most recent event illustrating the tendency of making legal text databases freely available to
citizens is a decision of the Swedish parliament to make its on-line legal information service (Rixlex)
available to the public on a free of charge basis via the Internet.
Public access to official information does not prevent the Government from protecting information
from disclosure for their legitimate aims as stipulated by legal provisions.
In the United States, nine exemptions permit the withholding of records to protect legitimate
government or private interests. Thus, national security information, trade secrets, law enforcement
investigative files, personal data, pre-decisional documents, and other categories of government
records can lawfully be denied to a FOIA requester. The early experience under the Act on Freedom
of Information shows some negative consequences of this legislation for effective law enforcement. It
was estimated that only 7 percent of the 30,000 FOIA requests received annually by the Department
of Justice came from media and other researchers. Many requests came from persons who were
obviously seeking improper personal advantage, including convicted offenders, organised crime
people, drug traffickers, and persons in litigation with the United States who are attempting to use the
FOIA to circumvent the rules of discovery contained in the rules of criminal or civil procedure.
Consequently, the ability of the federal, state, and local governments to combat crime was thought to
be affected, mainly by a decline in the number of informants. The highly detailed Swedish Secrecy Act
contains 16 chapters and more than a hundred articles.
These provide specific requirements of damage to the interest concerned, as well as a maximum
period of time during which secrecy applies. For example, where the protection of personal
circumstances of individuals is concerned, usually a term of 50 or 70 years is applicable. With regard
to secret information on matters of national defence or foreign relations a maximum period of 40
years has been established. In principle the restrictions laid down in the Secrecy Act are mandatory in
nature, i.e. if a restriction applies the authority involved must refuse access. Under the United States
Copyright Act, §105 (1994), the prohibition on copyright protection for United States Government
works is not intended to limit protection abroad. Thus, under the Copyright Act, the Federal
Government can seek copyright for its information in other countries.
In Germany and Switzerland, for instance, legislation and jurisprudence are not copyrighted. The
Italian law explicitly bars statutes, regulations, rulings and the like from being copyrighted, whether by
the Italian Government, local authorities or a foreign government. In Turkey, legislation and jurisprudence are not
copyrighted as far as they are published officially. Speeches are not copyrighted in the scope of mass
communications, otherwise they are copyrighted. All other governmental works, such as reports,
plans, maps, drawings etc. are copyrighted.
The legal nature of the restrictions based on secrecy interests differs among the various jurisdictions.
In the United States of America, Denmark and France for example the limitations are not mandatory
as is the case in Sweden and the Netherlands but are discretionary in nature. This means that if a
restriction is applicable, the public authority concerned is under no obligation to give access to the
information, but is nevertheless entitled to do so.
3. Susceptibility of computerised information systems to unauthorised access to stored data, and the
possible abuses of such access, has constituted another cause of concern;
4. Use of information provided by centralised computer systems on large sectors of the population,
who have no opportunity to inspect the accuracy of the information held, may also affect the legal
position of the data subjects in a way that is harmful to their civil liberties.
Firewall Technology
A firewall is one of several methods of protecting one's network from another, mistrusted, network. It
is regarded as indispensable for Internet users who are running their own World Wide Web site. The
hardware and software that make up the firewall screen all traffic. The firewall can be thought of as a
pair of mechanisms: one which blocks traffic, and one which permits traffic. Some firewalls permit
only e-mail traffic through them, thereby protecting the network against any attacks other than attacks
against the e-mail service. Other firewalls provide less strict protection, and block only services that
are known to be problems.
Generally, firewalls are configured to protect against unauthenticated interactive log-ins from the
outside world.
This, more than anything, helps prevent vandals from logging into computers on the network. More
elaborate firewalls block traffic from the outside to the inside, but permit users on the inside to
communicate freely with the outside.
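Conceptually, such a firewall works through an ordered list of rules applied to each connection attempt. The following Python sketch is a highly simplified, hypothetical packet filter that blocks inbound interactive log-ins while allowing inbound e-mail and all outbound traffic; real firewalls are considerably more sophisticated.

# Simplified sketch of firewall rule evaluation (not a real firewall).
# Each rule matches on direction and destination port; the first match wins.

RULES = [
    {"direction": "inbound",  "port": 25,   "action": "allow"},  # e-mail (SMTP)
    {"direction": "inbound",  "port": 22,   "action": "deny"},   # interactive log-in (SSH)
    {"direction": "inbound",  "port": 23,   "action": "deny"},   # interactive log-in (telnet)
    {"direction": "outbound", "port": None, "action": "allow"},  # inside users go out freely
    {"direction": "inbound",  "port": None, "action": "deny"},   # default: block everything else
]

def filter_packet(direction, dest_port):
    """Return 'allow' or 'deny' for a connection attempt."""
    for rule in RULES:
        if rule["direction"] == direction and rule["port"] in (None, dest_port):
            return rule["action"]
    return "deny"  # fail safe if no rule matches

print(filter_packet("inbound", 22))    # deny  - outside log-in attempt is blocked
print(filter_packet("inbound", 25))    # allow - mail is let through
print(filter_packet("outbound", 443))  # allow - inside users may reach the outside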
The most straightforward use of a firewall is to create a so-called internal site, one that is accessible
only to computers within one's own local network. In that case, all that needs to be done is to place
the server inside the firewall.
Web servers that must be reachable from the Internet, however, need to be placed somewhere outside
the firewall. From the point of view of the security of the organisation as a whole, the safest place to
put such a server is outside the local network.
This is called a sacrificial lamb configuration. The server is at risk of being broken into, but at least a
break-in does not breach the security of the inner network. On the other hand, the web pages on the
server are vulnerable to unauthorised alteration and other forms of vandalism. An intermediate
arrangement is to give the world access to public information while giving the internal network access
to private documents. In any case, a system holding really secret data should be isolated from the rest
of the corporate network, and should not be hooked up to the Internet at all.
Encryption
Encryption is the transformation of data into a form unreadable by anyone without a secret decryption
key. Its purpose is to ensure privacy by keeping the information hidden from anyone for whom it is
not intended, even those who can see the encrypted data. For example, one may wish to encrypt files
on a hard disk to prevent an intruder from reading them. Encryption can also be used to protect e-mail
messages and to verify the identity of the sending party.
The combination of advanced mathematical techniques with the enormous growth of the possibilities
for automatic data processing has resulted in very strong cryptographic systems, which are almost
impossible to break. In the open and unsecured networks like the Internet, strong encryption has
become one of the main tools for the protection of privacy, trust, access control and corporate
security, to name only the basic applications of so-called public-private key encryption systems.
Under a more traditional single key system, the same key is used both for encrypting and decrypting
the message. Although this is reasonably secure, there is a risk that this key will be intercepted when
parties involved exchange keys. A public key system, however, does not necessitate the exchange of a
secret key in the transmission of messages. The sender encrypts the message with the recipient's
freely-disclosed, unique public key. The recipient, in turn, uses his unique private key to decrypt the
message. It is also possible to encrypt messages with the sender's private key, allowing anyone who
knows the sender's public key to decrypt the message. This process is crucial to creating a digital
signature that provides verification of the identity of the message sender.
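The public/private key mechanism can be illustrated with a deliberately tiny, textbook RSA example in Python. The numbers used are far too small to be secure and real systems must rely on established cryptographic libraries, but the sketch shows how a message encrypted with the recipient's public key can only be decrypted with the matching private key, and how signing reverses the roles of the keys.

# Toy RSA illustration of public/private key encryption (NOT secure - the key
# is tiny and real systems must use an established cryptographic library).
# Requires Python 3.8+ for pow(e, -1, phi) (modular inverse).

p, q = 61, 53                 # two (very small) primes known only to the key owner
n = p * q                     # 3233 - part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent; (e, n) is the public key
d = pow(e, -1, phi)           # 2753 - private exponent; (d, n) is the private key

message = 65                  # a message encoded as a number smaller than n

# Confidentiality: anyone encrypts with the recipient's PUBLIC key,
# but only the recipient can decrypt with the PRIVATE key.
ciphertext = pow(message, e, n)          # 2790
recovered = pow(ciphertext, d, n)        # 65

# Digital signature: the sender "encrypts" with the PRIVATE key,
# and anyone can verify with the sender's PUBLIC key.
signature = pow(message, d, n)
verified = pow(signature, e, n) == message

print(ciphertext, recovered, verified)   # 2790 65 True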
Currently, the two main cryptographic systems providing for secure e-mail are Pretty Good Privacy
(PGP) and Privacy Enhanced Mail (PEM). Despite export restrictions, PGP is widely available
outside the United States in different versions, becoming a de facto international standard. It is
available for most computers and can be easily configured to work in several different languages,
including Spanish, French and German.
Today, an acute and mostly unresolved conflict exists, however, between the private interests in
protection of secrecy of information by means of encryption, and the interests of the investigating
authorities to obtain timely access to the content of seized or intercepted data. To minimise the
negative effects of the use of cryptography on the investigation of criminal offences two different
approaches have been developed at national level. The legislation of France and the Russian
Federation prohibits the use, distribution, development and export of any cryptographic tool without a
license granted by a special government agency. An alternative approach, supported by a number of
the most developed countries and some international organisations such as the Organisation for
Economic Cooperation and Development, the Council of Europe, the European Commission and the
International Chamber of Commerce, is the key-escrow scheme, based on the cooperation of one or
more trusted third parties who hold keys and are required to hand them over to law enforcement
authorities under certain conditions.
Encryption is often recommended as the solution to all security problems. Unfortunately, this is not
the case. Encryption does nothing to protect against many common methods of attack including those
that exploit bad default settings or vulnerabilities in network protocols or software. Information
security requires much more than just encryption. Authentication, configuration management, good
design, access controls, firewalls, auditing, security practices, and security awareness training are a
few of the other techniques needed.
REVISION EXERCISES
1. What are some of the responsibilities of end users of information systems?
2. What are some of the ethical theories related to information systems?
3. Discuss some of the ethics involved in the development and use of information systems.
4. What is the impact of information technology in the workplace?
5. What are the emerging trends and opportunities in the workplace?
6. What are the moral dimensions of information systems?
7. What are the legal issues associated with the management of information systems?
CHAPTER 10
EMERGING ISSUES IN MANAGEMENT
INFORMATION SYSTEMS
INTRODUCTION
The demand for MIS skills has seen a tremendous resurgence in the past few years. Forecasts are
extremely strong with MIS skill sets dominating the top job roles expected to grow in the future. While
the MIS careers are expected to expand at an accelerated rate, the mix of skill requirements has changed
considerably. With the explosive growth of technology accompanying the usage of the Internet in the
late 1990s, the role of application development (programming) dominated the MIS field. Since then,
outsourcing has moved many of the low level programming jobs overseas. However, the increased
need for higher level technology jobs has become prevalent. Now, the web, communication and
database technologies are maturing and their usage has begun to extend throughout every area of
business practices. These new information technologies are being employed in expansive and creative
ways. The result is that the need for MIS professionals has increased -- but in a different way than in
decades past. MIS is now a "people skill" rather than a purely "technical skill".
ELECTRONIC COMMERCE
Electronic commerce (e-commerce) is the buying and selling of goods and services over the Internet.
Businesses on the Internet that offer goods and services are referred to as web storefronts. Electronic
payment to a web storefront can include check, credit card or electronic cash.
Web Storefronts
Web storefronts are also known as virtual stores. This is where shoppers can go to inspect merchandise
and make purchases on the Internet. Web storefront creation package is a new type of program to help
businesses create virtual stores. Web storefront creation packages (also known as commerce servers)
do the following:
Allow visitors to register, browse, place products into virtual shopping carts and purchase
goods and services.
Calculate taxes and shipping costs and handle payment options
Update and replenish inventory
Ensure reliable and safe communications
Collect data on visitors
Generate reports to evaluate the site’s profitability
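To illustrate the tax and shipping calculation mentioned in the list above, the following short Python sketch totals a hypothetical shopping cart. The tax rate, shipping charge and items are invented for illustration and are not taken from any particular commerce server.

# Minimal sketch of the order-costing step a commerce server performs.
# The tax rate and shipping charge below are hypothetical.

TAX_RATE = 0.16          # e.g. 16% VAT (illustrative)
SHIPPING_FLAT = 500.00   # flat shipping charge in the store's currency

cart = [
    {"item": "Study text", "unit_price": 1200.00, "quantity": 2},
    {"item": "Flash disk", "unit_price": 800.00,  "quantity": 1},
]

def order_total(cart):
    """Return (subtotal, tax, shipping, total) for a shopping cart."""
    subtotal = sum(line["unit_price"] * line["quantity"] for line in cart)
    tax = subtotal * TAX_RATE
    shipping = SHIPPING_FLAT if cart else 0.0
    return subtotal, tax, shipping, subtotal + tax + shipping

print(order_total(cart))  # (3200.0, 512.0, 500.0, 4212.0)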
Web Auctions
Web auctions are a recent trend in e-commerce. They are similar to traditional auctions but buyers and
sellers do not meet face to face. Sellers post descriptions of products at a web site and buyers submit
bids electronically. There are two basic types of web auction sites:
Electronic Payment
The greatest challenge for e-commerce is how to pay for the purchases. Payment methods must be fast,
secure and reliable. Three basic payment methods now in use are:
(i) Checks
After an item is purchased on the Internet, a check for payment is sent in the mail
It requires the longest time to complete a purchase
It is the most traditional and safest method of payment
Electronic Data Interchange (EDI)
EDI is similar to any other application system in that the functions it performs are based on business
needs and activities. The applications, transactions and trading partners supported will change over
time, and the co-mingling of transactions, purchase orders, shipping notices, invoices and payments in
the EDI process makes it necessary to include application processing procedures and controls in the
EDI process.
EDI promotes a more efficient paperless environment. EDI transmissions may replace the use of
standard documents including invoices or purchase orders. Since EDI replaces the traditional paper
document exchange such as purchase orders, invoices or material release schedules, the proper controls
and edits need to be built within each company’s application system to allow this communication to
take place.
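To illustrate the kind of application-level controls and edits described above, the following Python sketch checks a hypothetical inbound purchase-order record before it is passed to the application system. The field names and trading-partner list are invented for illustration.

# Hypothetical application-level edits applied to an inbound EDI purchase order.
# In a real system these controls sit inside the company's application software.

AUTHORISED_PARTNERS = {"PARTNER-001", "PARTNER-007"}
REQUIRED_FIELDS = {"partner_id", "po_number", "item_code", "quantity", "unit_price"}

def validate_purchase_order(po):
    """Return a list of control violations for one EDI purchase order record."""
    errors = []
    missing = REQUIRED_FIELDS - po.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if po.get("partner_id") not in AUTHORISED_PARTNERS:
        errors.append("unauthorised trading partner")
    if po.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    return errors

incoming = {"partner_id": "PARTNER-001", "po_number": "PO-553",
            "item_code": "A-10", "quantity": 12, "unit_price": 45.0}
print(validate_purchase_order(incoming))  # [] - the record passes all edits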
OUTSOURCING PRACTICES
Outsourcing is a contractual agreement whereby an organization hands over control of part or all of
the functions of the information systems department to an external party. The organization pays a fee
and the contractor delivers a level of service that is defined in a contractually binding service level
agreement. The contractor provides the resources and expertise required to perform the agreed service.
Outsourcing is becoming increasingly important in many organizations.
The specific objectives for IT outsourcing vary from organization to organization. Typically, though,
the goal is to achieve lasting, meaningful improvement in information systems through corporate
restructuring that takes advantage of a vendor’s competencies.
Reasons for embarking on outsourcing include:
A desire to focus on a business’ core activities
Pressure on profit margins
Increasing competition that demands cost savings
Flexibility with respect to both organization and structure
Business risks associated with outsourcing are hidden costs, contract terms not being met, service costs
not being competitive over the period of the entire contract, obsolescence of vendor IT systems and
the balance of power residing with the vendor. Some of the ways that these risks can be reduced are:
By establishing measurable, partnership-enacted shared goals and rewards
Utilizing multiple suppliers or withholding a piece of business as an incentive
Formalization of a cross-functional contract management team
Contract performance metrics
Periodic competitive reviews and benchmarking/benchtrending
Implementation of short-term contracts
Outsourcing is the term used to encompass three quite different levels of external provision of
information systems services. These levels relate to the extent to which the management of IS, rather
than the technology component of it, has been transferred to an external body. These are time-share
vendors, service bureaus and facilities management.
TIME-SHARE VENDORS
These provide online access to an external processing capability that is usually charged for on a time-
used basis. Such arrangements may merely provide for the host processing capability onto which the
purchaser must load software. Alternatively the client may be purchasing access to the application. The
storage space required may be shared or private. This style of provision of the ‘pure’ technology gives
a degree of flexibility, allowing ad hoc but processor-intensive jobs to be economically feasible.
SERVICE BUREAUS
These provide an entirely external service that is charged by time or by the application task. Rather
than merely accessing some processing capability, as with time-share arrangements, a complete task is
contracted out. What is contracted for is usually only a discrete, finite and often small, element of
overall IS.
The specialist and focused nature of this type of service allows the bureau to be cost-effective at the
tasks it does, since the mass coverage allows up-to-date, efficiency-oriented facilities ideal for routine
processing work. The specific nature of tasks done by service bureaus tends to make them slow to
respond to change, and so this style of contracting out is a poor choice where fast-changing data is
involved.
FACILITIES MANAGEMENT (FM)
FM deals are increasingly appropriate for stable IS activities in those areas that have long been
automated so that accurate internal versus external cost comparisons can be made. FM can also be
appealing for those areas of high technology uncertainty since it offers a form of risk transfer. The
service provider must accommodate unforeseen changes or difficulties in maintaining service levels.
SOFTWARE HOUSES
A software house is a company that creates custom software for specific clients. They concentrate on
the provision of software services. These services include feasibility studies, systems analysis and
design, development of operating systems software, provision of application programming packages,
‘tailor-made’ application programming, specialist system advice etc. A software house may offer a
wide range of services or may specialize in a particular area.
DATA WAREHOUSING
A data warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data in
support of management’s decision-making process.
Data warehouses organize around subjects, as opposed to traditional application systems which
organize around processes. Subjects in a warehouse include items such as customers, employees,
financial management and products. The data within the warehouse is integrated in that the final
product is a fusion of various other systems’ information into a cohesive set of information. Data in
the warehouse is accurate to some date and time (time-variant). An indication of time is generally
included in each row of the database to give the warehouse time variant characteristics. The warehouse
data is non-volatile in that the data which enters the database is rarely, if ever, changed. Change is
restricted to situations where accuracy problems are identified. Information is simply appended to or
removed from the database, but never updated. Because the data is non-volatile, a query made by a
decision support analyst last week will render exactly the same results when repeated a week from now.
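The time-variant, non-volatile character described above can be illustrated with a short Python sketch. The table structure and values are purely illustrative; a real warehouse would use a database platform rather than in-memory lists.

# Minimal sketch of a time-variant, non-volatile warehouse table.
# Rows carry a load timestamp and are appended, never updated in place.

from datetime import datetime

warehouse_sales = []   # the (illustrative) fact table

def load_fact(customer, product, amount):
    """Append a new fact row stamped with the time it was loaded."""
    warehouse_sales.append({
        "loaded_at": datetime.now(),   # gives the warehouse its time-variant character
        "customer": customer,
        "product": product,
        "amount": amount,
    })

load_fact("C-001", "P-15", 4500.00)
load_fact("C-002", "P-15", 1200.00)

# Analysts query the accumulated history; nothing is overwritten, so a query
# repeated next week over the same rows returns the same answer.
total = sum(row["amount"] for row in warehouse_sales)
print(total)   # 5700.0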
DATA MINING
This is the process of discovering meaningful new correlations, patterns, and trends by digging into
(mining) large amounts of data stored in warehouses, using artificial intelligence and statistical and
mathematical techniques.
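As a very small illustration of the idea, the following Python sketch scans hypothetical sales transactions for product pairs that are frequently bought together, a simplified form of association analysis. Real data mining tools apply far more sophisticated statistical and artificial-intelligence techniques to much larger volumes of data.

# Simplified association analysis: count how often product pairs occur together
# in sales transactions. The transactions below are invented for illustration.

from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# "Support" of a pair = fraction of all transactions containing both items.
for pair, count in pair_counts.most_common(3):
    print(pair, count / len(transactions))
# ('bread', 'milk') 0.75  - a pattern the retailer might act on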
Industries that are already taking advantage of data mining include retail, financial, medical,
manufacturing, environmental, utilities, security, transportation, chemical, insurance and aerospace
industries. Most organizations engage in data mining to:
Discover knowledge – the goal of knowledge discovery is to determine previously hidden
relationships, patterns, or correlations from data stored in an enterprise’s database.
Specifically, data mining can be used to perform a range of such tasks.
criminal activities. A number of laws have been passed in this direction in these countries. In
Kenya, the law is yet to reflect clearly how computer crime is to be dealt with.
The Internet does not create new crimes but causes problems of enforcement and jurisdiction. The
following discussion shows how a country like England deals with computer crime through legislation,
and may offer a point of reference for other countries.
Computer crime is usually in the form of software piracy, electronic break-ins and computer sabotage
be it industrial, personal, political etc.
Computer fraud is any fraudulent behaviour connected with computerization by which someone
intends to gain financial advantage. The different kinds of computer fraud include:
(i) Input fraud – entry of unauthorized instructions, alteration of data prior to entry or entry
of false data. Requires few technical skills.
(ii) Data fraud – alteration of data already entered on computer, requires few technical skills.
(iii) Output fraud – fraudulent use of or suppression of output data. Less common than input
or data fraud but evidence is difficult to obtain.
(iv) Program fraud – creating or altering a program for fraudulent purposes. This is the real
computer fraud and requires technical expertise and is apparently rare.
damages any property belonging to another intending to destroy or damage such property shall be
guilty of an offence.
Hacking
Hacking is gaining unauthorized access to computer programs and data. This was not criminal in
England prior to the Computer Misuse Act 1990.
The Act is not a comprehensive statute for computer crime and does not generally replace the existing
criminal law. It does, however, create three new offences.
Pornography is perceived as one of the major problems of computer and Internet use. Use of computers
and the Internet has facilitated distribution of and access to illegal pornography, but has not created
many new legal issues. Specific problems and how they are addressed include:
a. Pseudo-photographs
These are images that have been combined and edited to make a single image. The Criminal Justice
Act 1988 and the Protection of Children Act 1978 (where the image appears to be an indecent image
of a child) were amended to extend certain indecency offences to pseudo-photographs.
b. Multimedia pornography
Video Recordings Act 1984: supply of video recordings without classification certificate is an offence.
Cyberstalking
Using a public telecommunication system to harass another person may be an offence under the
Telecommunications Act 1984. Pursuing a course of harassing conduct is an offence under the
Protection from Harassment Act 1997.
Rights differ according to the subject matter being protected, the scope of protection and the manner
of creation. Broadly, they include:
Patents – a patent is the monopoly to exploit an invention for up to twenty years (in UK). Computer
programs as such are excluded from patenting – but may be patented if applied in some technical
or practical manner. The process of making semiconductor chips falls into the patent regime.
Copyrights – a copyright is the right to make copies of a work. A wide range of subject matter is
protected by copyright.
Computer programs are protected as literary works. Literal copying is the copying of program code
while non-literal copying is judged on objective similarity and “look and feel”. Copyright protects most
material on the Internet; issues arise over linking (in particular deep links), framing (displaying a
website within another site), caching and service provider liability.
Registered designs
Trademarks – A trademark is a sign that distinguishes goods and services from each other.
Registration gives partial monopoly over right to use a certain mark. Most legal issues of
trademarks and information technology have arisen from the Internet such as:
o Meta tags – use of a trademarked name in a meta tag by someone not entitled to use it may
be infringement.
o Search engines – sale of “keywords” that are also trademarked names to advertisers may
be infringement
o Domain names – involves hijacking and “cybersquatting” of trademarked domain names
Design rights
Passing off
Law of confidence
Rights in performances
a) Plagiarism
The Internet has increased plagiarism. Plagiarism constitutes academic dishonesty because copying
does not develop writing and synthesis skills. One must give credit to the original author.
b) Piracy
In 1994 an MIT student was indicted for placing commercial software on a website for copying purposes.
The student was accused of wire fraud and the interstate transportation of stolen property. The case was
thrown out on a technicality, since the student did not benefit from the arrangement and did not
download the software himself. His offence also did not fall under any existing law.
Software publishers estimate that more than 50% of the software in the US is pirated, and 90% in some
foreign countries. In the US, software companies can copyright software and thus control its distribution.
It is illegal to make copies without authorization.
c) Repackaging data and databases
information contained in the books they publish. But an ISP may be liable if it is shown that it had been
warned that the information was inaccurate and did nothing to remove it.
Defamatory statements may be published on the WWW, in newsgroups and by email. Author of the
statements will be liable for defamation, but may be difficult to trace or not worth suing. But employers
and Internet service providers may be liable. Defamation is a delict (tort) and employers are vicariously
liable for delicts committed by their employees in the course of their employment. Many employers
try to avoid the possibility of actionable statements being published by their staff by monitoring email
and other messages. Print publishers are liable for defamatory statements published by them, whether
they were aware of them or not. ISPs could be liable in the same way.
TERMINOLOGY
Data Mart
A data mart is a repository of data gathered from operational data and other sources that is designed to
serve a particular community of knowledge workers. In scope, the data may derive from an enterprise-
wide database or data warehouse or be more specialized. The emphasis of a data mart is on meeting
the specific demands of a particular group of knowledge users in terms of analysis, content,
presentation, and ease-of-use. Users of a data mart can expect to have data presented in terms that are
familiar.
In practice, the terms data mart and data warehouse each tend to imply the presence of the other in
some form. However, most writers using the term seem to agree that the design of a data mart tends to
start from an analysis of user needs and that a data warehouse tends to start from an analysis of what
data already exists and how it can be collected in such a way that the data can later be used.
A data warehouse is a central aggregation of data (which can be distributed physically); a data mart is
a data repository that may derive from a data warehouse or not and that emphasizes ease of access and
usability for a particular designed purpose. In general, a data warehouse tends to be a strategic but
somewhat unfinished concept; a data mart tends to be tactical and aimed at meeting an immediate need.
In practice, many products and companies offering data warehouse services also tend to offer data mart
capabilities or services.
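To make the distinction concrete, the following Python sketch derives a small, purpose-built data mart from illustrative warehouse rows, selecting and renaming just the data one user community needs. The structure and names are hypothetical.

# Illustrative sketch: a data mart as a purpose-built subset of warehouse data,
# reshaped for one community of users (here, the marketing department).

warehouse_rows = [
    {"customer": "C-001", "region": "Nairobi", "product": "P-15", "amount": 4500.00},
    {"customer": "C-002", "region": "Mombasa", "product": "P-15", "amount": 1200.00},
    {"customer": "C-003", "region": "Nairobi", "product": "P-20", "amount": 900.00},
]

def build_marketing_mart(rows, region):
    """Select and rename just the data one user group needs, in familiar terms."""
    return [
        {"Customer": r["customer"], "Product": r["product"], "Sales value": r["amount"]}
        for r in rows
        if r["region"] == region
    ]

print(build_marketing_mart(warehouse_rows, "Nairobi"))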
REVISION EXERCISES
1. Name the goals that are achieved through the implementation of a computer network.
2. List three advantages of adopting network protocols.
3. E-mail communication has become a popular mode of communication. What advantages do
users of e-mail gain from using this mode of communication?