PCSM Notes
CHAPTER 1:
BASIC COMPUTER CONCEPTS
➢ Computer systems are composed of hardware, software, and firmware.
• Hardware is something you can touch and feel; the physical computer itself is an example of
hardware
• Software is the operating system and applications that make the hardware work; the software
provides instructions for the hardware to carry out. Examples are: Windows XP, Microsoft Office,
Adobe Acrobat Reader, and WordPerfect.
• The operating system is an important piece of software that coordinates the interaction between
hardware and software applications, as well as the interaction between a user and the computer.
Operating system examples include: DOS, Windows 98, NT Workstation, Windows 2000,
Windows XP, and Unix.
• A microcomputer, also called a computer or PC, is a unit that performs tasks using software. Its
basic parts include:
– A case (chassis).
– power supply - Converts AC voltage from the wall outlet to DC voltage the computer can
use, supplies DC voltages for internal computer components and has a fan to keep the
computer cool.
– floppy drive - Common storage device that allows data storage to floppy disks (storage
media) which can be used in other computers.
– hard drive - Or hard disk, is a common storage device for maintaining files inside the
computer, usually mounted below or beside the floppy drive.
– CD drive - Holds disks (CDs) that have data, music, or software applications.
– DVD (Digital Versatile Disk) drive - Popular alternative to a CD drive that supports CDs
as well as music and video DVDs.
– Adapters - Smaller electronic circuit cards that normally plug into an expansion slot on the
motherboard, allowing other devices to interface with the motherboard; they may also
control some devices.
– Expansion slot - A special connector on the motherboard that allows an adapter to plug in
and connect to the motherboard.
– Riser board - A small board with expansion slots that plugs into the motherboard and
allows adapters to connect at a different angle.
– RAM (Random Access Memory)- volatile memory (loses data inside the chips when
power is shut off) that holds applications and user data while the computer is operating.
– ROM (Read-Only Memory)- non-volatile memory (retains data when power is shut off).
– ROM BIOS- an important chip on the motherboard that holds the start-up software for the
computer to operate, and software instructions for communication of the input/output
devices and important hardware parameters.
– Turning on a computer that is not running by using the power switch is known as a cold
boot; a technician can use this technique when POST must run to help diagnose a
problem.
– A warm boot is performed when a computer that is already on is restarted without using
the power switch. This can be accomplished by pressing the CTRL, ALT, and DEL keys at
the same time, or by pressing the computer's reset switch. A warm boot is helpful when a
technician has made changes to the files that execute when the computer powers on and
needs these changes to take effect; note that a warm boot does not run POST.
– Male ports – Have pins that protrude out from the connector and require a cable with a
female connector.
– Female ports – Have holes in the connector to accept the male cable’s pins.
– D-shell connector – A connector with more pins or holes on the top row than on the
bottom so a connected cable can only be attached in one direction and not accidentally
connected the wrong way; generally represented with the letters DB and the number of pins,
such as DB-9, DB-15, or DB-25.
– Network ports – Used to connect a computer to other computers, including a server. They
are available in two types, Ethernet and Token Ring; a network cable connects to the
network port.
– Ethernet – These adapters are the most common type of network card with BNC, RJ-45
(most common today), a 15-pin female D-shell connector (sometimes called AUI), or any
combination of all of them.
– Game ports – A 15-pin D-shell connector for attaching gaming devices such as a joystick;
it is sometimes confused with a network connector.
Information resources.
The Internet is a useful resource for gaining information about a particular device or
application, or for learning how others have dealt with a particular problem you are having.
The first place to look for information is the manufacturer's website.
Other, more generic troubleshooting sites include: www.pcguide.com,
www.everythingcomputers.com, and www.pcsupport.about.com
CHAPTER 2:
POWER SUPPLY
• PCs use DC voltage but power companies supply AC voltage.
• The power supply in a computer converts high-voltage AC power to low-voltage DC power.
The primary functions of a PC power supply are voltage conversion, rectification, filtering, regulation, isolation,
cooling, and power management.
• Voltage conversion: changing the 110V AC primary power source into the +12V DC and +5V DC
used by older systems and the +3.3V DC used by newer computer systems.
• Rectification: converting the AC power of the power source to the DC
power needed by the PC's components.
• Filtering: rectification usually introduces a ripple in the DC voltage, which filtering smoothes out.
• Regulation: along with filtering, voltage regulation removes any line or load variations in the DC voltage
produced by the power supply.
• Isolation: refers to separating the AC power supply from the converted, rectified, filtered, and regulated
DC power.
• Cooling : the system fan, which controls the air flow through the system case, is located inside the power
supply.
• Power management: modern computers have energy-efficiency tools and power management functions
that help reduce the amount of electrical power used by the PC.
In areas where the power source is already a DC, the power supply performs all of the same tasks, except
rectification.
In addition to providing converted power to motherboard and other parts, the power supply sends a very
important signal to the motherboard through its umbilical connection – the power_good (or pwr_ok on an
ATX form factor power supply) signal.
Form Factors
Power supplies, like motherboards are available in a variety of different form factors, typically matching
the form factor of the motherboard and system case as listed below:
• PC XT – placed in the rear right corner of the case, with an up-and-down toggle switch on
the exterior used to power it on and off.
• PC AT – a little larger, with a slightly different shape and about three times the power wattage
of the PC XT.
• Baby AT – a smaller version of the AT form factor, only 2 inches narrower than the AT, with the
same height and depth. It is also compatible with the AT form factor, in either desktop or tower case
styles.
• LPX / slimline / PS/2 – has reduced height and general dimensions, while maintaining the same power
production, cooling ability, and connectors as the Baby AT and AT.
• ATX – removes the LPX's AC power pass-through outlet used for PC monitors.
• NLX: uses the same power supply as ATX.
• SFX: was designed for use in the microATX and FlexATX form factors.
Protecting the PC
The power supply accounts for nearly a third of the problems of a PC. What generally causes the most
problems with a power supply is the AC power source, which is usually unreliable, noisy, and
fluctuating.
Common electrical problems:
a) Spikes: an electrical spike is an unexpected, short-duration, high-voltage event on the AC power
line. Spikes can be caused by lightning strikes, generator switchovers, and power pole incidents. To
protect against spikes, use a surge suppressor or a UPS that includes surge suppression.
b) Blackouts: a blackout is a total loss of power. It can last from a split second to days. The best defense
against a blackout is a UPS.
c) Brownouts: a brownout is a drop in voltage below normal levels (the opposite of a spike), except
that it can last for a relatively long time. If the voltage lingers too long below the normal point,
the result can be the same as a blackout, or worse. Brownouts can destroy components by causing
a power supply to draw too much current to make up for the low voltage.
d) Power surge: a surge, or overvoltage, is a high-voltage situation that raises the voltage above normal
levels, much like a spike but for a longer period of time. A surge suppressor or a UPS, which absorbs
an increase in power, is good protection against a power surge.
e) Noise: electromagnetic interference and radio frequency interference are the two main causes of
line noise on the AC power line. A UPS is the best bet for filtering out line noise.
TYPES OF UPS DEVICES
A UPS is a large battery and a battery charger that protects a PC or server against short-term
power outages, surges, spikes, and brownouts. A UPS monitors its input voltage, and when the voltage level
deviates more than a certain percentage from normal, it switches to providing electrical service from its battery.
UPS units are available in two categories:
a) Standby UPS – generally does nothing more than provide a battery backup to the PC connected to
it as a safeguard against a power failure. In standby mode, the PC draws its power directly from the
AC line, and the UPS switches to its battery only when the AC source fails.
b) Online/Inline UPS – provides power to a PC through an AC power service supplied from the
UPS's battery and a power inverter that converts the battery's DC power to AC power.
•Upgrading the system—Suppose you are planning a big upgrade (new motherboard, new hard drive, digital
versatile disk (DVD), and the works) and you are worried that your power supply may be too weak to handle
the new load. When upgrading, remember that a power supply is rated by its power output in watts. You
can get from 100- to 600-watt power supplies to fit the common form factors (ATX and LPX). A power
supply rated between 230 to 350 watts works well for most average systems, unless you are planning to
build a super server with quad Pentium III Xeons, a DVD, an internal tape drive, and four or five internal
small computer system interface (SCSI) drives, in which case you'll need to look into the WTX form factor.
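The wattage rule of thumb above can be sketched as a simple budget check. The component wattages and the 30% headroom margin below are illustrative assumptions, not vendor specifications:

```python
# Rough power-budget check before an upgrade.
# Component wattages are illustrative estimates, not measured values.
TYPICAL_DRAW_WATTS = {
    "motherboard": 30,
    "cpu": 65,
    "hard drive": 10,
    "dvd drive": 20,
    "agp video card": 30,
}

def total_draw(components):
    """Sum the estimated draw of the listed components, in watts."""
    return sum(TYPICAL_DRAW_WATTS[c] for c in components)

def supply_is_adequate(rating_watts, components, headroom=0.3):
    """Leave ~30% headroom so the supply is not run at its limit."""
    return rating_watts >= total_draw(components) * (1 + headroom)

build = ["motherboard", "cpu", "hard drive", "dvd drive", "agp video card"]
print(total_draw(build))               # 155
print(supply_is_adequate(230, build))  # True: 230 >= 155 * 1.3
```

With these example figures, a 230-watt supply covers an average build, which matches the 230-to-350-watt guidance above.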
•Intermittent problems—If you have tried everything you can to track down an intermittent problem on the
motherboard without isolating the problem, the power supply may be the real culprit if the problem is at all
related to a power issue. But how can you tell whether the power supply is going bad? Some of the telltale
signs that can tip you off that the power supply is on its way to failure are overheating, occasional boot
failures or errors, frequent parity errors, noisy operation, or mild electrical shocks when you touch the case.
•Catastrophic problems—If smoke is coming out of the power supply or off the motherboard, it is very likely
that the power supply has gone awry and needs to be replaced. If the system fan has stopped turning, then
you absolutely need to replace the power supply. You should also test the motherboard with a new power
supply, and be on watch for parity errors, system lockups that are becoming more frequent, and disk read
and peripheral input/output (I/O) errors. These are signs of damaged motherboard components beginning to
fail.
Steps you should use any time you suspect the power supply to be the source of a PC problem include:
1. First, determine that the problem is not something as trivial as a blown fuse caused by a legitimate overload.
Be sure to remove the source of the overload before beginning work.
2. Try to classify the problem by when it is occurring and what it is affecting. The categories you might use are:
• BIOS, boot, or startup problem
• An output power-related problem
• Excessive noise, ripple, or other power conversion errors
• Catastrophic failure that poses danger to the system or the operator (especially the technician)
3. Determine, based on the form factor, what the proper output voltages should be, and measure the
output of the appropriate pins.
• Simple or interactive displays—indicators on the UPS that give a warning when the battery is near the end of its charge.
• Warning mechanisms—A UPS designed to support a single computer will generally have a serial
"heartbeat" cable that is attached to a serial (Com) port on the PC. The UPS generates a regular signal that
is monitored by a background process running on the PC. If the UPS fails to signal (i.e., misses too many
heartbeats), the monitoring software (typically supplied by the UPS's manufacturer or it could be a part of
the PC's operating system) tries to gracefully shut down the PC.
This is a very important feature for servers that cache a lot of data in memory instead of on a hard disk to
speed data access times. In this case, should the power suddenly fail, all of the cached data would be lost if
it could not be saved to disk before a shutdown or sync request.
• Software interfaces—The software monitor that interacts with the UPS in real time (see previous
description of warning mechanisms) is typically supplied by the manufacturer of the UPS. At a minimum,
these software programs monitor the heartbeat signal sent by the UPS to indicate that power is still
available. Should the UPS stop sending the signal, the software begins the process of performing a system
shutdown.
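The heartbeat-monitoring behavior described above can be sketched as a small state machine. The `HeartbeatMonitor` class, the missed-beat threshold, and the shutdown callback are hypothetical illustrations, not a real UPS vendor API:

```python
# Minimal sketch of UPS heartbeat monitoring: after too many missed
# heartbeats, a graceful shutdown is started exactly once.
class HeartbeatMonitor:
    def __init__(self, max_missed=3, shutdown=lambda: None):
        self.max_missed = max_missed
        self.missed = 0
        self.shutdown = shutdown
        self.shutting_down = False

    def tick(self, heartbeat_received):
        """Called once per expected heartbeat interval."""
        if heartbeat_received:
            self.missed = 0          # UPS still signaling; reset the count
        else:
            self.missed += 1
            if self.missed >= self.max_missed and not self.shutting_down:
                self.shutting_down = True
                self.shutdown()      # begin graceful OS shutdown

events = []
mon = HeartbeatMonitor(max_missed=3, shutdown=lambda: events.append("shutdown"))
for beat in [True, True, False, False, False]:   # UPS goes silent
    mon.tick(beat)
print(events)   # ['shutdown']
```

Real monitoring software reads the heartbeat from a serial port rather than a list, but the decision logic is of this shape.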
• Line conditioners and alarm systems—A true line conditioner (also known as a power
conditioner) filters the incoming power to isolate line noise and keep voltage levels normal. It
isolates the input power source from the output power in a transformer stage. A line conditioner
cannot protect against a power outage, but it can smooth out any intermittent under- and
overvoltage events (surges and spikes, respectively) that occur on the input source.
CHAPTER 3:
MOTHERBOARD/SYSTEM BOARD/PLANAR
The motherboard is a large printed circuit board that is home to many of the most essential parts of the
computer:
• Microprocessor
• Chipset
• Memory sockets and RAM (random access memory) modules
• Cache memory
• IDE (integrated drive electronics) and EIDE (enhanced IDE) connectors
• Expansion bus
• Parallel and serial ports
• Mouse and keyboard connectors
The mainboard, or system board, of the computer is the glue that binds all of the PC's components together.
Motherboard Designs
There are two design approaches for mainboards in a PC: the true motherboard design and the backplane
design.
Figure 1.1 identifies each of the following major parts of the motherboard:
1. Ports
2. Expansion slots
3. AGP (accelerated graphics port) slot
4. CPU (central processing unit) slot and socket
5. Chipset
6. Power connector
7. Memory sockets
8. I/O connectors
9. CMOS (complementary metal oxide semiconductor) battery
10. ROM BIOS
Backplanes
There are actually two types of backplane mainboards: passive and active. A passive backplane mainboard
is only a receiver card with open slots into which a processor card, which contains a CPU and its support
chips, and I/O (input/output) cards, which provide bus and device interfaces, are plugged. These add-in
cards are referred to as daughterboards. The backplane interconnects the system components through a bus
and provides some basic data buffering services. The backplane design is popular with server type
computers and is quickly upgraded or repaired. This design type provides the advantage of getting a server
back online with only the replacement of a single slotted card, instead of replacing the whole mainboard.
An active backplane design, also called an intelligent backplane design, adds some CPU or controller-driven
circuitry to the backplane board that can speed along the processing. The CPU itself is still on its own card,
which provides for easy replacement.
Motherboard Form factors
Style       Width (in)  Length (in)  Introduced  Adapter slots  Case type
IBM PC      8.5         13           1981        onboard        IBM PC
IBM PC XT   8.5         13           1982        onboard        IBM PC XT
AT          12          11-13        1984        onboard        AT Desktop or Tower
Baby AT     8.5         10-13        1983        onboard        Baby AT Desktop or Tower
LPX         9           11-13        1987        riser          Low profile
Micro-AT    8.5         8.5          early 90s   onboard        Baby AT Desktop or Tower
ATX         12          9.6          1996        onboard        ATX Desktop or Tower
Mini-ATX    11.2        8.2          1996        onboard        Smaller ATX Desktops
Mini-LPX    8-9         10-11        199x        riser          Low profile
Micro-ATX   9.6         9.6          1997        onboard        Low profile
NLX         8-9         10-13.6      1997        riser          Low profile
Flex-ATX    9           7.5          1999        onboard        Flexible design
AT and Baby AT
The IBM PC XT had a motherboard that measured 8.5 inches wide by 13 inches deep, which was the
same size as the earlier IBM PC motherboard. However, the XT increased the number of adapter card
slots from 5 to 8, and the cassette tape interface port was replaced by the 5.25-inch floppy drive that would
become the standard for storing and transferring information between computers.
The ATX specification also defines the mini-ATX sub specification, which has a board size of 11.2 inches
by 8.2 inches. Other sub specifications of the ATX form factor you may encounter are the Micro-ATX and
the Flex-ATX.
NLX Form Factor
NLX is a new, standardized, low-profile motherboard form factor. It is designed to support a number of
current and emerging microprocessor technologies along with many newer developments, including
support for AGP video adapters, and tall memory modules and DIMMs. The NLX form provides more
flexibility for the system-level design and for easy removal and replacement of the motherboard, allegedly
without tools. The NLX motherboard measures about 8 inches by 10 to 13.6 inches and uses a plug-in riser board
for its expansion bus support. The riser board attaches to the edge of the mainboard.
Selecting a Motherboard
• Three types of motherboards you can select:
Replacing a Motherboard
• Overview of the replacement process
PC cases come in all sorts of sizes, shapes, colors, and even animals. The variances in size and shape are
driven primarily by the form factor of the case, but increasingly, case designers are adding color, new
plastic and metal materials, and even character faces to case designs in an attempt to make them less
boring and more appealing to a wider audience.
The most common system components found inside the PC's case are:
• Chassis—The chassis is the skeletal framework that provides the structure, rigidity, and strength of the case
and plays a major role in the cooling system of the case.
• Cover—The cover, along with the chassis, plays an important role in the cooling, protection, and structure
of the PC.
• Power supply—rectifies (converts) AC power into DC power for use by the PC's internal
electronics. It also houses and powers the main system cooling fan.
• Front panel—In addition to giving the PC its looks and providing placement of the power and reset
switches, the front panel provides the user with information on the PC's status and a means of physically
securing the PC.
• Switches—The two main switches on most newer systems, the power switch and the reset button,
are on the front panel. If the power switch is not on the front panel, it is very likely either on the
right-rear corner or near a corner on the back of the PC.
• Drive bays—Beginning with the PC XT, disk drives with removable media have been mounted in
the case so that they can be accessed on the front panel. Typically, the drive bays house 5.25-inch
and 3.5-inch disk drives, such as for floppy disks, CD-ROMs (compact disk-read only memory) and
DVDs (digital versatile disks), and removable hard drives.
Types of cases
Toolless Cases
Many name-brand PCs feature a case that has one or two large knobby screws on the back panel of the case. This
case design is called "toolless" because you should be able to remove and replace the screws with your fingers
without the need for screwdriver or other tools. The cover pieces are held firmly in place by spring lips that apply
pressure to chassis points.
Screwless Cases
Screwless case covers have several individual cover pieces, generally one piece to a side. The key to removing this
type of case cover is to remove the locking panel, which is usually the front panel, to unlock the remaining panels of
the case. The front panel is attached by a spring clip; pull up on it and lift off one or more of the panels.
Release-Button Cases
On release-button cases, which are common on Compaq desktop models, the case is removed by pressing
spring-loaded release buttons located on the front or rear of the PC. When you press the release buttons, the
cover, which includes the front, rear, top, and sides, lifts straight off the case.
Front-Screw Cases
On front-screw cases, the screws that hold the cover on the PC are located on the front panel, usually
hidden behind sliding tabs or a snap-on panel. Remove the screws (and possibly some on the rear panel as
well) and pull the case forward and off.
As mentioned previously, the form factor of a PC case defines its style, size, shape, internal organization, and its compatible
components.
The three most popular types of case form factors are the Baby AT, ATX, and NLX
• Baby AT—Although virtually obsolete by today's standards, the Baby AT form factor still has a very
large installed base from its popularity in past years.
• ATX—The ATX form factor is the de facto standard for motherboards, power supplies, and system
cases. Virtually all Pentium-based systems use the ATX form factor.
• NLX—The NLX form factor, which is also called the slimline form factor, is popular for mass-produced
desktop systems.
Some of the other form factors that have been used or are still in use for system cases include:
• PC XT—This form factor was used on the original desktop PCs, the IBM PC and its successor, the
PC XT. The case was made of heavy-gauge steel and was U-shaped.
• AT—The IBM PC AT, although not much different on the outside from its predecessors, was quite
different on the inside. The motherboard and power supply, which were much larger, were
repositioned inside the case.
• LPX—Although never officially accepted as a standard form factor, the LPX is the oldest of the "low
profile" form factors. Over the past 10 years, it has been one of the most popular slimline form
factors sold. Slimline cases are a little shorter than Baby AT or ATX cases. This lower profile is
achieved by moving expansion cards to a riser board that mounts horizontally instead of vertically in
the case, thereby saving inches of height.
• MicroATX and FlexATX—These two ATX-based form factors define specifications for smaller
versions of the ATX motherboard. MicroATX and FlexATX do not define case form factors, but
manufacturers are designing cases to match these smaller boards.
CHAPTER 4
MICROPROCESSORS
Objectives
➢ Discuss the working of microprocessor
➢ Discuss the various interfaces of microprocessor
➢ List the types of microprocessors
➢ Discuss the evolution of microprocessors
➢ List the various microprocessor designs
➢ Install the microprocessor
➢ Configure the microprocessor
➢ Upgrade the microprocessor
➢ Troubleshoot the microprocessor
Microprocessor
➢ Is a chip
➢ Has transistors built into it
➢ Has cache to store information
Parts of microprocessor
➢ Arithmetic and Logic Unit -Performs all arithmetic and logical operations
➢ Control Unit - It supervises/ monitors all the operations carried out in the computer
➢ Prefetch Unit
➢ Bus Unit
➢ Decode Unit
➢ Data and instruction cache
➢ Registers - Holds the data temporarily for processing
Interface of Microprocessor
➢ Steps followed by the microprocessor to interface with a device:
• Checks the status of the device.
• Requests the device for transferring data.
• The device sends the data request to the microprocessor.
• The microprocessor sends the required data to the device.
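The four handshake steps above can be sketched as a toy polled-I/O simulation. The `Device` class, its states, and the `send_to_device` helper are invented for illustration and do not model any real bus protocol:

```python
# Toy simulation of the polled handshake: check status, request a
# transfer, wait for the device's data request, then send the data.
class Device:
    def __init__(self):
        self.state = "idle"   # device not busy
        self.buffer = None

    def status(self):
        return self.state

    def request_transfer(self):
        # Step 2: the microprocessor asks the device to transfer data.
        if self.state == "idle":
            self.state = "ready"

    def wants_data(self):
        # Step 3: the device signals that it is ready for the data.
        return self.state == "ready"

    def receive(self, data):
        # Step 4: the microprocessor sends the required data.
        self.buffer = data
        self.state = "idle"

def send_to_device(device, data):
    if device.status() != "idle":    # Step 1: check the device's status.
        return False
    device.request_transfer()
    if device.wants_data():
        device.receive(data)
        return True
    return False

dev = Device()
ok = send_to_device(dev, b"hello")
print(ok, dev.buffer)
```

The key point the sketch captures is that the processor polls the device's status before transferring, rather than sending blindly.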
Packaging of microprocessor
➢ Types of microprocessor packaging:
• Pin Grid Array (PGA)
• Staggered Pin Grid Array (SPGA)
• Single edge contact (SEC) and single edge processor packaging (SEPP)
Types of microprocessor
➢ Based on the number of instructions built into it, a microprocessor can be classified as:
• Complex Instruction Set Computing (CISC) – many instructions built into it
• Reduced Instruction Set Computing (RISC) – fewer instructions built into the microprocessor
Microprocessors Timeline
Intel Pentium Microprocessor
➢ Designed to work with everyday applications
• Word processors
• Spreadsheets
• Multimedia applications
• Games
➢ Versions
• Pentium I
• Pentium II
• Pentium III
• Pentium IV
Pentium I
➢ Released in 1993
➢ First chip from the fifth generation of microprocessors
➢ Has a 5-stage data pipeline for executing instructions
Pentium II
➢ Released in 1997
➢ Available on a daughter card that has L2 cache
➢ Has a 14-stage data pipeline for executing instructions
Pentium III
➢ Released in 1999
➢ Has a unique Processor Serial Number (PSN) embedded in the chip
➢ Has a 10-stage data pipeline for executing instructions
Pentium IV
➢ Released in 2000
➢ Designed for applications that require a lot of processing
➢ Has a 20-stage data pipeline for executing instructions
➢ Available in the following editions:
• Hyper-Threading (HT)
• HT Extreme
Intel Pentium M
➢ Small in size
➢ Consumes less energy and prolongs battery life
➢ Used in
• Laptops
• notebook computers
Intel Celeron
➢ A cheaper, more economical processor
➢ Used for running applications that do not require a lot of processing
Intel Xeon
➢ Heavy-duty microprocessors
➢ Used to power servers and workstations on a network
➢ Supports multiprocessors
Intel Itanium
➢ Used to power network servers and workstations
➢ Can execute three instructions at a time
➢ Is a Reduced Instruction Set Computing (RISC) based microprocessor and has limited instructions
built into the microprocessor
Microprocessor Design
➢ Specifies the type of the microprocessor that can be installed on the motherboard
➢ Uses the
• Socket
• Slot
Microprocessor Socket
➢ Connects the microprocessor to the motherboard
➢ Available as
• Zero Insertion Force (ZIF) uses a lever to install the microprocessor
• Low Insertion Force (LIF) requires little force to install the microprocessor
Socket 1
➢ 169 pins arranged in three rows
➢ Supplies maximum 5 volts to the microprocessor
➢ Supports the 80486 and 80486 Overdrive microprocessor
Socket 2
➢ 238 pins arranged in four rows
➢ Supplies maximum 5 volts to the microprocessor
➢ Supports the 80486 OverDrive and Pentium OverDrive microprocessors
Socket 3
➢ 237 pins arranged in four rows
➢ Supplies 3.3 to 5 volts to the microprocessor
➢ Voltage can be adjusted using the jumpers on the motherboard
➢ Supports the 80486, AMD, Cyrix and Pentium OverDrive microprocessors
Socket 4
➢ 273 pins arranged in four rows
➢ Supplies maximum 5 volts to the microprocessor
➢ Supports the Pentium and Pentium Overdrive microprocessors
Socket 5
➢ 320 pins arranged in five rows
➢ Supplies maximum 3.3 volts to the microprocessor
➢ Supports the Pentium, Pentium with MMX and Pentium OverDrive microprocessors
Socket 6
➢ 235 pins arranged in four rows
➢ Supplies maximum 3.3 volts to the microprocessor
Socket 7
➢ 321 pins in five rows
➢ Supplies 2.5 to 3.3 volts to the microprocessor
➢ This socket supports the Pentium, Pentium with MMX and Pentium OverDrive microprocessors
Socket 8
➢ 387 pins arranged in five rows
➢ Supplies 3.1 to 3.3 volts to the microprocessor
➢ Supports the Pentium Pro microprocessors
Socket 370
➢ 370 pins arranged in six rows
➢ Has L2 cache built into the microprocessor
➢ Supports Celeron 2 and Pentium III microprocessors
Socket 462
➢ Known as Socket A
➢ Has 462 pins but 9 pins are blocked
➢ Has the L2 cache built into the microprocessor
➢ Supports the Athlon and Duron microprocessors
Socket 478
➢ 478 pins
➢ Has the L2 cache built into the microprocessor
➢ Supports the Intel Pentium 4 microprocessor
Slot 1
➢ Supports microprocessors that have 242 pins
➢ Microprocessor is mounted on a daughter card (single edge contact cartridge)
➢ Supplies 2.8 to 3.3 volts to the microprocessor
➢ Supports the Pentium II, III, and Celeron microprocessors
Slot 2
➢ Supports microprocessors that have 330 pins
➢ Supports the Pentium Xeon microprocessors
➢ Found on server motherboards
Slot A
➢ Created by AMD
➢ Supports the Athlon microprocessors
➢ Uses the EV6 protocol for increased speed
✓ RAM's purpose is to temporarily hold programs and data for processing. In modern computers it also holds the
operating system.
✓ The ROM BIOS is programmed by the manufacturer and holds the instructions to check basic hardware interconnections.
TYPES OF RAM
1. Dynamic Random Access Memory (DRAM)
• Less expensive than static RAM
5. Cache memory
• Small amount of memory typically 256 or 512 kilobytes
7. Virtual Memory
• Uses backing storage, e.g. the hard disk, as a temporary location for programs and data when
insufficient RAM is available.
• Swaps programs and data between the hard-disk and RAM as the CPU requires them for
processing
• Virtual memory is much slower than RAM
Types of ROM
• PROM (Programmable Read-Only Memory) – a type of ROM that can be written to by the user. Data is held
permanently once it is written.
• EPROM (Erasable Programmable Read-Only Memory) – can be programmed by the user and can also be
erased. It must be removed from the computer and erased with an EPROM eraser.
• EAPROM (Electrically Alterable Programmable Read-Only Memory) – can be read, erased, and rewritten
without removing it from the computer. However, erasing and writing are very slow, which
limits the use of this memory.
• EEPROM (Electrically Erasable Programmable Read-Only Memory) – similar to EAPROM
Memory configurations
• The elementary unit of memory is a bit. A group of 4 bits is called a nibble and a group of 8 bits is
called a byte. One byte is the minimum space required to store one character.
• Spindle motor
• Arm
DISK PLATTER
The platter, the flat disk part of the drive, is made of a magnetic material.
Data is stored on the platter.
Each set of magnetic particles forms a unit called a bit.
New hard-drive technology uses thin-film metals and glass platters to increase efficiency and drive storage capacity.
STEPPER MOTOR
1. Use stepper motors for controlling read/write head position.
2. Stepper motors usually use +12V power, but some new low-power drives use +5V power source.
SPINDLE MOTOR
Disk structures
(A) Track - The platter surface is divided into a number of concentric circles called tracks; each circular
path is a track.
(B) Sector - Each track is divided into multiple blocks called sectors. Each
sector can hold 512 bytes of data.
(C) Cylinder - A set of corresponding tracks on all platters of a hard disk is called a cylinder.
(D) Storage capacity - calculated using the formula shown below:
storage capacity = number of cylinders * tracks per cylinder * sectors per track * bytes per sector
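The capacity formula can be applied directly. The geometry below (1024 cylinders, 16 tracks per cylinder, 63 sectors per track) is a common illustrative layout, not a specific drive:

```python
# Storage capacity from disk geometry, per the formula above.
def disk_capacity(cylinders, tracks_per_cylinder, sectors_per_track,
                  bytes_per_sector=512):
    return cylinders * tracks_per_cylinder * sectors_per_track * bytes_per_sector

# Example geometry: 1024 cylinders, 16 heads (tracks per cylinder),
# 63 sectors per track, 512-byte sectors.
capacity = disk_capacity(1024, 16, 63, 512)
print(capacity)               # 528482304 bytes
print(capacity / (1024 ** 2)) # 504.0 MB
```

This particular geometry yields the well-known 504 MB limit of early BIOS CHS addressing.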
Partition for HDD
1.Primary Partition:
• Windows operating systems must be located in a primary partition.
2.Extended Partition:
• A hard disk may contain only one extended partition.
• The extended partition can be subdivided into multiple logical partitions (data other than
the operating system is usually stored in the extended partition).
3.Logical Partition:
• Linux operating systems can be installed into (and run from) logical partitions.
4.Active Partition:
• Only one partition on a computer can be set as an active partition or bootable partition.
• For example, if you are using Microsoft Windows the partition that contains Windows is
the active partition.
File system in HDD
1. FAT (File Allocation Table)
2. NTFS (New Technology File System)
Note: strictly, 1 GB = 1,024 MB, but for easy calculations we often round to 1 GB = 1,000 MB,
ignoring the extra 24 MB. Similarly, 1 MB = 1,000 KB, and so on.
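The gap between the decimal and binary conventions noted above is easy to compute; the "40 GB drive" figure below is just an illustrative example:

```python
# Decimal (drive-maker) vs binary (operating-system) size units.
def decimal_gb_to_bytes(gb):
    return gb * 1000 ** 3   # 1 GB = 1,000,000,000 bytes (decimal)

def binary_gb_to_bytes(gb):
    return gb * 1024 ** 3   # 1 GB = 1,073,741,824 bytes (binary)

print(decimal_gb_to_bytes(1))   # 1000000000
print(binary_gb_to_bytes(1))    # 1073741824
# A "40 GB" drive (decimal) reported in binary units:
print(decimal_gb_to_bytes(40) / binary_gb_to_bytes(1))   # about 37.25
```

This is why a drive sold as 40 GB shows up as roughly 37 GB in the operating system.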
CHAPTER 7:
DISPLAY TECHNOLOGY
Most desktop displays use liquid crystal display (LCD) or cathode ray tube (CRT) technology. Nearly all
portable computing devices such as laptops use LCD technology because of its slimmer design and lower
power consumption. Monitors using LCD technology (also called flat panel or flat screen displays) are replacing
the CRT on most desktops.
MONITOR
The most frequently used output device is the monitor. Also known as display screens or simply as screens,
monitors present visual images of text and graphics. The output from a monitor is often referred to as soft
copy. Monitors vary in size, shape, and cost.
FEATURES
The most important characteristic of a monitor is its clarity. Clarity refers to the quality and sharpness of
the displayed images. It is a function of several monitor features, including:
1. Resolution
2. Dot pitch
3. Refresh rate
4. Size
RESOLUTION
Resolution is one of the most important features. Images are formed on a monitor by a series of dots, or
pixels. Resolution is expressed as a matrix of these dots or pixels. For example, many monitors today have a
resolution of 1,280 pixel columns by 1,024 pixel rows, for a total of 1,310,720 pixels. The higher a monitor's
resolution (the more pixels), the greater the clarity of the image produced.
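The pixel count in the example above can be verified with a one-line calculation:

```python
# Total pixel count for a given resolution, confirming the text's example
# of 1,280 pixel columns by 1,024 pixel rows.
def total_pixels(columns, rows):
    return columns * rows

print(total_pixels(1280, 1024))  # 1310720
```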
DOT PITCH
Dot pitch is the term used to define the diagonal distance between the two closest dots of the same color,
usually expressed in hundredths of millimeters, as shown in the figure below. For example, you might see a
dot pitch of 0.25 mm. Generally speaking, the smaller the pitch, the better. The lower the dot pitch
(the shorter the distance between pixels), the higher the clarity of the images produced.
REFRESH RATE
This indicates how often a displayed image is updated, or redrawn, on the monitor. Most monitors operate at
a rate of 75 hertz, which means the image is redrawn 75 times each second. Images displayed on
monitors with refresh rates lower than 75 hertz appear to flicker and can cause eye strain. The faster the
refresh rate (the more frequently images are redrawn), the better the quality of the images displayed.
SCREEN SIZE
Screen size, or viewable size, is measured by the diagonal length of a monitor's viewing area. Common sizes
are 15, 17, 19, and 21 inches. For a given resolution, the smaller the monitor, the sharper the displayed image.
The wide end of the CRT is the display screen, which has a phosphor coating (a substance that can emit light
when hit with radiation) called the fluorescent screen. When active, the guns beam a stream of charged
electrons onto the fluorescent screen. When the phosphor coating is hit with the right amount of energy, light
is produced in a pattern of very small dots. The same technology is used in X-ray imaging, oscilloscopes, and
other CRT devices. Like those devices, monitors emit a small amount of X-radiation. There is one dot for each
primary color (RGB), and the dots are grouped in patterns close together. The name for the collection of dots
at a specific location is a pixel (short for picture element).
Note
The terms anode and cathode are used in electronics as synonyms for positive and negative terminals. For
example, you could refer to the positive terminal of a battery as the anode and the negative terminal as the
cathode. In a monitor, the image is formed by an electron beam (a negative charge), which is why the device
is called a cathode ray tube, or CRT, monitor.
The persistence rate (how long a given line remains visible) must last long enough to allow formation of a
complete image, but not so long that it blurs the dots painted in the next pass.
These raster passes take place very quickly. The time required to complete a vertical pass is known as the
vertical refresh rate (VRR); the time required to pass once from right to left (the horizontal retrace) is known
as the horizontal refresh rate (HRR). Generally speaking, faster is better. If the vertical refresh rate is too
slow, it can cause flicker, which is not only annoying but can lead to eye strain. The larger the CRT, the
faster the refresh rate must be to cover the entire area within the time needed to avoid flicker. At 640 × 480
resolution, the minimum refresh rate is 60 Hz; at 1600 × 1200, the minimum rate is 85 Hz.
SCREEN RESOLUTION
The term resolution refers to the degree of detail offered in the presentation of an image. The method of
measurement varies with the medium: photographic lenses, films, and papers are measured in lines
per inch, whereas computer monitor manufacturers express resolution in pixels per inch. The greater the
number of pixels per inch, the smaller the detail that can be displayed and, consequently, the sharper the
picture becomes.
Monitor resolution is usually expressed as a × b, where a is the number of horizontal pixels and b is the
number of vertical pixels, as shown in the figure above. For example, 640 × 480 means that the monitor
resolution is 640 pixels horizontally by 480 pixels vertically.
Modern monitors usually offer a variety of resolutions with different refresh rates. Price and quality should
be compared at the maximum of both, along with two other factors: dot pitch and colour depth.
COLOUR DEPTH
Colour depth is a computer graphics term describing the number of bits used to represent the colour of a
single pixel in a bitmapped image, that is, the number of distinct colours that can be represented by a piece
of hardware or software. Colour depth is sometimes referred to as bit depth because it is directly related to
the number of bits used for each pixel. A 24-bit video adapter, for example, can display 2 to the 24th power
(about 16.7 million) colours; one would say that its colour depth is 24 bits.
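The relationship between bit depth and colour count is simply 2 raised to the number of bits, as the sketch below shows:

```python
# Number of distinct colours representable at a given colour (bit) depth: 2 ** bits.
def colour_count(bit_depth):
    return 2 ** bit_depth

print(colour_count(8))   # 256
print(colour_count(24))  # 16777216 (about 16.7 million, as in the text)
```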
OTHER CONSIDERATIONS IN CHOOSING MONITORS
1. COST AND PICTURE AREA.
The CRT is the most expensive part of the monitor. Graphical user interface (GUI) operating systems
have increased the demand for bigger screens, to allow for more working area so that users can have
more applications open at once, or more working room for graphics.
2. BANDWIDTH
Bandwidth, measured in megahertz (MHz), denotes the greatest number of times an electron gun can be
turned on and off in one second. Bandwidth is a key design factor because it determines the maximum
vertical refresh rate of a monitor. Higher numbers are better.
3. INTERLACING
Interlacing refreshes the monitor by painting alternate rows on the screen and then coming back and
sweeping the rows that were skipped the first time around, as shown in the figure below. This
increases the effective refresh rate but can lead to eye strain. Interlacing is found on less expensive
monitors, and it should be avoided unless achieving the very lowest cost is the client's key concern.
The figure below shows a comparison between a non-interlaced image and an interlaced image.
When interlacing is used, only part of the image is formed by the interlaced odd field or the
interlaced even field; the odd and even fields combine to form a complete image.
4. POWER SAVING FEATURES
Monitors constitute a large percentage of the power consumed by a workstation, even when not
actively in use (i.e., during screen blanking). To reduce this power consumption, the Video
Electronics Standards Association (VESA) has defined a Display Power Management Signaling
(DPMS) standard, which can greatly reduce the amount of power used by a monitor during screen
blanking. DPMS can be configured in one of three ways: using hardware, software, or
a combination of both.
• When a monitor fails to operate or produces an improper image, check the following:
I. Check all the cables, including power and display.
II. Check the front panel controls. Make any appropriate minor adjustments that are needed.
III. Check and, if needed, reinstall the display drivers. Make sure all settings are within required limits.
Reinstall by returning to a plain 16-color VGA display mode, then add resolution and increase the refresh rate.
IV. Try another display adapter; then, if the problem is still unresolved, try another computer.
V. If the monitor still shows problems, refer to a specialist for further tests.
FLAT PANEL OR LIQUID CRYSTAL DISPLAY (LCD)
• Flat panel displays (FPDs) are thin, bright display devices that are gaining a foothold on desktops
as a replacement for traditional CRT monitors.
• They are often called Liquid Crystal Displays, or LCD monitors for short.
• The most obvious benefit is the small amount of desk space required, because there is no big case
housing the electron gun, nor a heavy glass front.
• They are thinner and lighter and draw much less power than CRTs.
• FPDs are two to three times brighter than CRTs. Since the screen is flat, there is no
distorted image at the edge of the viewing area, as there is with a curved CRT.
• FPDs are generally easier on the eyes and don’t require a “warm-up” period to reach full color
saturation.
A pair of polarizing filter layers works with the liquid crystals to control the emitted light. As light passes
through the first filter (a), only vertically aligned light waves remain. If the liquid crystals are in their natural
state, they are twisted, which causes the light wave to turn horizontal. If an electric field is applied, the
liquid crystals straighten and the cell doesn’t bend the light. Since the second filter (b) only lets horizontal
light waves through, light that passes through the straightened liquid crystals is blocked by the second filter.
• Several different types of FPDs are available today, varying in cost, image quality, and several other
factors that affect both suitability to different computing applications and user acceptance.
• LCD screens are found in laptop computers, digital clocks and watches, microwave ovens, CD
players, and many other electronic devices.
TYPES OF LCD
1. PASSIVE-MATRIX DISPLAYS (PMDs)
PMDs are the simplest type, and they have been used in calculators and watches since the 1970s. PMDs are
too slow for today’s demanding multimedia PCs.
This type of LCD screen contains a series of wires that criss-cross each other. At each intersection of
the wires is a single LCD element that allows light to pass through. A passive-matrix display does not
provide the same image quality as an active-matrix display.
2. ACTIVE-MATRIX DISPLAYS (AMDs)
AMDs use thin film transistors (TFTs); the term TFT is also used to describe this type of display. Each pixel
is formed by a TFT, and each pixel’s on/off state is controlled to form the image. TFTs make up the majority
of both laptop and desktop FPDs today. The image is formed by an array of LCD cells on a wired grid. The
result is a faster response than the passive array.
3. PLASMA DISPLAY PANELS (PDPs)
PDPs work much like the fluorescent lights found in most offices, by energizing an inert gas. Phosphor
films are used to produce a colour image. This technology is used to manufacture very large FPDs. Like
fluorescent lights, PDPs are relatively inexpensive to produce, but lower contrast and brightness, as well
as higher relative power consumption, have thus far limited their use in PC applications.
FLAT PANEL AND CRT DISPLAY COMPARISON
Feature         FPD                                      CRT
Cost            More expensive; few manufacturers;       Wider range of vendors; lower initial
                less expensive to operate due to         cost; higher cost to operate due to
                lower power consumption                  higher electrical power demands
Compatibility   Limited selection of display adapters;   Wide range of display adapters and
                fewer supported resolutions              drivers for most popular resolutions
Ergonomics      Flicker-free operation at all            No fall-off of image quality at
                resolutions; better brightness and       reasonable viewing angles; wider
                contrast; optimal viewing angles; no     range of resolutions to meet user's
                noticeable distortion at edges           needs and working conditions
Size            Smaller "footprint" on desk; light       Larger for a given screen size; much
                weight                                   heavier construction
Emissions       Low radio and virtually no magnetic      Electron gun and phosphors create both
                emissions                                RF (radio frequency interference) and
                                                         radiation
DISPLAY ADAPTERS
• A CRT monitor or FPD device is only half of a computer’s display system; it must be matched to a
display adapter, also commonly referred to as a graphics adapter, video card, or video controller,
normally found inside the system unit as shown in Figure 10-18, whose function is to generate and
output images to a display.
• The term is usually used to refer to a separate, dedicated expansion card that is plugged into a slot
on computer’s motherboard.
• Some video cards offer added functionality, such as video capture, a TV tuner, MPEG-2
and MPEG-4 decoding, or even FireWire, mouse, light pen, or joystick connectors.
RESOLUTION STANDARDS
UXGA stands for Ultra Extended Graphics Array. UXGA is the newest and highest standard. Although
not as widely used as XGA and SXGA monitors, its popularity is expected to increase dramatically as
21-inch monitors become more widely used. UXGA monitors are primarily used for high-end engineering
design and graphic arts.
STANDARD   PIXELS
SVGA       800 × 600
XGA        1,024 × 768
SXGA       1,280 × 1,024
UXGA       1,600 × 1,200
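The standards listed above can be captured as a small lookup table; the total-pixel column printed here is a convenience added for comparison, not part of the standard names:

```python
# Display resolution standards from the table, with total pixel counts computed.
standards = {
    "SVGA": (800, 600),
    "XGA":  (1024, 768),
    "SXGA": (1280, 1024),
    "UXGA": (1600, 1200),
}

for name, (w, h) in standards.items():
    print(f"{name}: {w} x {h} = {w * h:,} pixels")
```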
Chapter 8:
COMPUTER ASSEMBLY & DISASSEMBLY
LAB 1 – Disassemble a PC
Objectives:
• Disassemble a generic PC
• Verify the correct orientation of the interface cables
Background:
In this procedure you will start a computer to verify that it is operating properly. Then you will disassemble the
computer. You will go through all the steps of disassembly.
Most PCs are capable of working with several different types of disk storage devices. The drives that are normally
included as standard equipment with a PC are a 3 1/2 inch floppy disk drive (FDD), a multi-gigabyte hard disk
drive (HDD), and a CD-ROM drive. These units can typically be found in the front section of the system unit.
Resources:
You will work in teams. The following resources will be required:
1. PC Tool Kit
2. Antistatic Wrist Strap
3. Personal computer (PC)
4. Operating system installed (MS-DOS, MS Windows 95/98/2000/Millennium)
NOTE: This lab assumes an AT-type PC is used. If an ATX-type PC is used some modifications might need to be
made in the step-by-step procedures, i.e. the video and input/output ports may be located on the motherboard
instead of interface cards.
Step 9 – Remove the system board mount and the system board
a) Remove the screws from the system board mount.
b) Pull the system board mount and the system board from the chassis.
c) Remove the system board mounting screws from the system board.
d) Remove the system board.
Table 1-1
Video Card:
Network Card:
Table 1-2
Cable Color/Orientation
Power Switch:
Speaker:
Power LED:
IDE LED:
Reset:
LAB 2 – Reassemble a PC
Objectives:
• Reassemble a generic PC [disassembled in Lab 1].
• Verify the correct orientation of the interface cables.
Background:
In this procedure, you will reassemble the PC that was disassembled in Lab 1. You will go through all the steps of
reassembly. After reassembling the PC, you will start the computer to verify that it is operating properly.
Resources:
You will work in teams. The following resources will be required:
• PC Tool Kit
• Antistatic Wrist Strap
• Disassembled personal computer (PC)
• Operating system installed on hard drive (MS-DOS, MS Windows 95/98/2000/Millennium)
NOTE: As in Lab 1, this lab assumes an AT-type PC is used. If an ATX-type PC is used some modifications might
need to be made in the step-by-step procedures, i.e. the video and input/output ports along with the IDE ports may
be located on the motherboard instead of interface cards.
Step 2 – Attach the system board mount and the system board
a) Place the system board on the system board mounts.
b) Install the system board mounting screws through the system board.
c) Install the system board mount and the system board into the chassis.
d) Install the screws through the system board mount.
NOTE: Assembly is complete. You need to reboot the computer to see if it is still working properly.
Landfills
In the US, end-of-life electronics end up in landfills or are exported to developing countries.
Developing countries are the world’s dumping grounds for electronic waste.
Acid baths used in informal recycling are dangerous and cause water and soil contamination.
Environmental Impacts
Studies conducted in China discovered heavy contamination in e-waste recycling regions
Soil, air, water, and sediments all contained high levels of contamination
Polychlorinated Biphenyls
Dioxins
Methods of disposal
1) Redesign of computer components
2) Home appliance manufacturers must take back and recycle end-use products
3) Consumer Education
The idea behind a dual-boot system is to install two different OSs on the bootable medium.
– This allows the user to run either system, which hopefully allows the user more flexibility
in performing tasks.
– Unfortunately, creating a dual-boot system is a lot more difficult than it should be, and
many times Windows seems to be in the middle of the problems.
– One rule of thumb for installing a dual-boot system that includes Windows is to install the
Windows OS first.
– If you are trying to install two versions of Windows, always install the older version first.
• This is often necessary because new versions of Windows often contain changes to
the file system, and the old versions do not understand these changes.
c) Other Installations
Installing a Solaris/Linux dual-boot Intel system often presents problems for the administrator.
– The Solaris Intel installer limits the number of partitions allowed on a disk where Solaris
will be installed.
• If the disk has more than three partitions, and/or the installation of Solaris
would cause the disk to have more than three partitions, the installation will
fail.
– Linux wants a boot partition, and a kernel partition (minimum).
– Solaris also wants a boot partition and a kernel partition.
– If you wanted to build a dual-boot Linux/Solaris system, you would need four partitions.
However, Solaris will only allow you to install on a disk with three (or fewer) partitions.
– In this case, install Solaris first, and then install Linux on a separate partition later. Better
yet, buy a second disk, and install one OS on each disk!
d) Desktop Installations
Installing the OS on a desktop PC is often a very different problem than installing an OS on a corporate
database server.
– Generally, desktop computers come in two flavors: systems that contain their own copies
of everything and systems that rely on network-based servers for critical applications.
– Standalone Systems
Self-contained systems are often referred to as standalone systems, or “thick clients.”
• These machines typically contain the OS, and local copies of all applications
required by users of the system.
• The installation of a standalone system will require more time than some other
systems, because you have to load the OS, and all of the applications on the local
disk.
• Such installations can easily become multi-day tasks!
Networked Client Systems
– Systems that rely on network-based servers for critical applications/services are typically
referred to as networked client systems, or “thin clients.”
• These machines usually contain a copy of the OS, but very little other software gets
installed on these systems.
• User files typically reside on a network-based file server.
• Applications may reside on yet another network-based server.
• These systems rely on the network to be operational for the system to be useful.
• Such systems are typically very easy installations.
– You load the OS, configure the network connection, and possibly configure
a few other parameters to allow the system to locate the network-based
servers and you are “done”.
e) Server Installations
Installing an OS on a server is often a long, arduous task.
You have to install the OS and configure it to provide services to other computers.
The types of “clients” the server supports will usually complicate this configuration task.
The applications/services provided by the server may provide more complications.
Homogeneous Servers
The homogeneous server is probably the simplest server to install.
This type of server only provides services to clients of the same hardware/kernel architecture.
• This means that only one version of the OS and all applications need be
installed on the system.
Such systems may be used as a boot server, file server, name server, web server, database server, or many
other purposes.
Heterogeneous Servers
Heterogeneous servers are probably the most difficult system you will ever have to install.
These systems may provide boot services, applications, and/or file services for a variety of systems of
different kernel/hardware architectures.
– For example, a Linux system may be set up to provide file service to Linux, Solaris, and
MacOS boxes via NFS, while providing file service to desktop PCs via Common Internet
File Services (CIFS) by running the Samba application.
– Such servers are typically very complicated beasts to install and configure.
• You will have to install copies of multiple OS for the system to function as a boot
server.
• Similarly, you will have to install application binaries for multiple architectures in
order to support application services for client machines.
Planning for an Installation
The “footprint” or size of the OS should be considered to ensure that the system contains enough disk
space.
How that disk space is parceled might play a role in the OS installation.
Similarly the size of the main memory might need to be taken into consideration.
Disk Space Requirements
One of the most important decisions you will need to make before you install an OS is how much space to
allocate to the system software.
– If you allocate too much space to the system software, users may not have enough space. If
you allocate too much space to users, the system may run out of space.
• Calculate how much space the OS and application binaries will occupy.
• Once you have a number in mind, double it. In a few weeks or months you will be
glad you did.
Installation Methods
• Current OS are typically distributed on CD or DVD media.
• Older releases were distributed on tape cartridges or floppy diskettes.
• More often than not, the distribution media is bootable, and therefore all you have to do is place
the media in the appropriate device and turn on the power.
• The magic of the boot process boots the installation media, and an installation program guides you
through the installation process.
Windows Installations
Most Windows installations give the administrator very few options.
When installing from the distribution media, the administrator selects the partition to install the bits on and
answers a few questions about the local environment. The system does the rest without input from the
operator.
Unfortunately, the information required during the installation is not all collected up front; the
information-gathering process is spread across the entire installation process.
• This makes Windows installation more time consuming than it should be, as the
administrator has to sit and wait for the system to ask questions.
• If the questions were all asked up-front, the administrator would be free to attend to other
tasks while the bits moved from the CD to the hard drive.
Windows CD/DVD Installations
Installation of Windows from CD/DVD media is pretty simple.
• You boot the installation media, answer a few simple questions, and the installer program
does the rest for you.
• Unfortunately, because the process is simple, it is not very configurable.
• The media-based installer is geared to the novice administrator’s capabilities; hence, the
number of decision points and allowable options is very minimal.
• One downside to the CD/DVD installation is that the installation process is just interactive
enough that the operator cannot start the installation and leave for an hour or two.
Network Installations
If you want to customize the installation process, and/or make it completely automatic, you need to build a
network-based installation server.
• Such an installation is referred to as an “unattended” installation in Windows parlance.
• The installation server contains on-line copies of the distribution media, a set of “answer” files that
control what parts of the software get installed on the system, and a “boot daemon” that listens for
installation requests on the network.
• You can customize the answer files to install the OS and any required applications without
operator intervention.
• This is a much more suitable installation method if you have to install 100 computers instead of 2
or 3.
This method comes with a price: someone has to build (and hopefully test) the answer files.
; Microsoft Windows 2000 Professional, Server, Advanced Server and Datacenter
; (c) 1994 - 1999 Microsoft Corporation. All rights reserved.
;; Sample Unattended Setup Answer File
; This file contains information about how to automate the installation
; or upgrade of Windows 2000 Professional and Windows 2000 Server so the
; Setup program runs without requiring user input.
[Unattended]
Unattendmode = FullUnattended
OemPreinstall = NO
TargetPath = WINNT
Filesystem = LeaveAlone
[UserData]
FullName = "Your Name Here"
OrgName = "Your Organization Name"
ComputerName = "COMPUTER_NAME"
[GuiUnattended]
; Sets the Timezone to the Pacific Northwest
; Sets the Admin Password to NULL
; Turn AutoLogon ON and login once
TimeZone = "004"
AdminPassword = *
AutoLogon = Yes
AutoLogonCount = 1
[GuiRunOnce]
; List the programs that you want to launch when the machine is logged into for the first time
[Display]
BitsPerPel = 8
XResolution = 800
YResolution = 600
VRefresh = 70
[Networking]
; When set to YES, setup will install default networking components.
; The components to be set up are TCP/IP, File and Print Sharing, and
; the Client for Microsoft Networks.
InstallDefaultComponents = YES
[Identification]
JoinWorkgroup = Workgroup
UNIX Installations
Many flavors of UNIX allow (in fact, insist on) significant operator interaction during the installation
process.
UNIX installers are often much more willing to allow custom installations than their Windows
counterparts.
• This generally implies that the operator needs to be more knowledgeable about the
specifics of the system to successfully complete the installation process.
• It also means that unattended installations are not feasible without plenty of
advance configuration and planning.
CD/DVD Installations
As with Windows distribution media based installations, the installers used by UNIX OS are somewhat
automated.
– A difference between UNIX and Windows installers is that MOST UNIX installers ask all
of the questions up-front, then use the answers to drive the remainder of the install.
• A typical Solaris 8 installation requires about 20 minutes of operator interaction,
then for the next hour (or more) no interaction is required.
• RedHat Linux installations are similar to Solaris in regards to operator interaction.
While MOST UNIX installations often take care of the interactive portion up-front, a few of the installers
“hold the user’s hand” throughout the installation process (much like Windows).
Network Installations
Most versions of UNIX support a network-based installation system of one form or another.
– Like Windows, these installers require a network-based boot server, rules files that dictate how the
installation is performed, and a boot daemon that runs on the server to manage the process.
• The Solaris JumpStart package is one such network-based installer.
• Sun’s WebStart and the Linux KickStart service are other examples of the
automated network-based installer.
• Because there is no official standard for these network-based tools, and each vendor
has one (or more) of these installers, describing all of the current offerings is
difficult, if not impossible.
Linux Kickstart
Linux may be “kickstarted” from a bootable floppy diskette or from a network-based boot server.
• The floppy diskette must contain a configuration file named ks.cfg.
– This file is the Linux equivalent of the Windows “answer file” for an
unattended installation.
• To perform a network installation, you need to have a DHCP server running on
your network.
• The DHCP server instructs the new system how to contact the boot service machine
identified in the ks.cfg file.
The Kickstart process requires a “rules” file to control the installation.
– The ks.cfg file contains several directives that tell the installer how, and what, to install on
the system.
– The format of the ks.cfg file is as follows.
<command section>
<a list of %pre, %post, and/or %packages directives>
<installclass>
The easiest way to create a Kickstart file is to use the Kickstart configurator utility supplied on the
distribution media.
To start a Kickstart install, you use a special boot floppy.
– The boot floppy may contain a CD-ROM or network “boot block.” In either case, you start
the boot with the following command.
Boot: linux ks=floppy                              # ks.cfg resides on the floppy
Boot: linux ks=nfs:<server_name:>/path_to_ks.cfg   # ks.cfg resides on an NFS fileserver
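For reference, a minimal ks.cfg might look like the sketch below, following the command-section / %packages layout described above. The directive names are genuine Red Hat Kickstart directives, but all values are placeholders, and the exact directive set varies by release:

```text
# Minimal ks.cfg sketch (values are placeholders, not from the text)
install
cdrom
lang en_US
keyboard us
rootpw changeme
timezone America/New_York
bootloader --location=mbr
clearpart --all
part / --fstype ext3 --size 4096 --grow
part swap --size 512

%packages
@ Base
```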
Solaris Network Boot Service Daemon
– To provide a Solaris installation server, you must build and configure a system that will
listen for install requests, and know how to deliver the proper files to the host being
installed.
• This requires the sysadmin to complete the following two major tasks.
– Build a server, and install the boot server software and configuration files.
– Install the binaries to be served on the boot server.
Saving Critical Data
In the event your installation involves the upgrade from an old version of an OS to a new version, there is
another critical point to consider.
– What do you do with all of the old files on the system when you get ready to install the new
software?
• If the system is a standalone desktop, and you have a new OS (as well as new
applications to install), you may not need to worry about saving anything from the
old system.
• More often than not, however, before an old OS is shut down for the last time there
are several files you may wish to save.
• You are strongly advised to make a full backup of every file on the system.
Servers often present an even bigger challenge when you are upgrading an OS.
– For instance, because you may not have to change the partitioning to load a new OS, it
would help to know how the disks on the system are currently partitioned.
– Printing out information regarding current disk partitions may be very helpful information
during the upgrade.
– Saving the password file, NIS name, server maps, user files, and other critical information
before you begin the installation procedure is always strongly recommended.
– Again, err on the side of caution, and perform a full file system backup before you begin
the upgrade. Nine out of ten times you will be glad you spent the extra time to do so.
Summary
• Installing an OS is a huge undertaking.
• There are no set formulas for how it should be done.
• Early steps in the planning should include checking that the new software will run on the
hardware.
• Determination of the type of installation (homogeneous server, heterogeneous server, standalone
system, thick client, or thin client) should also be addressed before beginning the installation
process.
• System types to be supported and the intended use of the system must also be factored into
installation planning.
• Application availability should be checked, and rechecked.
• If mission-critical applications are not available for the OS to be installed, you may have to change
your plans.
Chapter 10:
FAULT FINDING AND TROUBLESHOOTING
The focus of this chapter is on generic approaches you can use to troubleshoot and repair a PC, including a
generic troubleshooting process and some suggestions on how to make troubleshooting easier and involve
less guesswork.
The Need for a Troubleshooting Plan
One of the true frustrations about problems on a PC is that in most cases they are not what they seem. The
problem could be software-related—but which software? The problem could be hardware-related—but
which hardware? What exactly was going on when the problem first occurred? Are you sure?
Even with years of experience and training, PC technicians can apply solutions that do not solve the real
problem.
Elements of a Troubleshooting Plan
A troubleshooting plan can be a written checklist that you use for any problem, or just a routine procedure
that you follow each time a problem occurs, with adjustments for the situation. Whatever form your plan
takes, as long as it works and is used, it will be the right plan.
The elements that should be included in any troubleshooting plan are:
• Maintenance journal
• Diagnostic checklist or questions
• Identification of possible causes
• Identification of possible solutions
• Application and testing of solution
• Follow-up
3) Identify possible solutions. You should identify a solution for each of the possible causes you have
identified. A possible cause could have more than one possible solution, in which case you need to
rank the solutions by which will yield the most positive results.
4) Analyze the possible solutions. If two solutions will produce the same result, other considerations
may be involved. Perhaps, one is less expensive or adds value to the PC.
5) Apply a solution. From your analysis of the possible solutions, you should pick the one that looks
most promising and implement it.
6) Test the solution. If it solves the problem and provides the desired result, be sure you update the
maintenance journal and all other pertinent documentation. If it doesn't solve the problem, you may
need to repeat as much of the problem-solving process as necessary to find a better fix.
Not every problem requires that you formally and methodically work through these steps
individually. Some problems are very apparent, and the fix is obvious, but you should practice
applying this technique on every problem for a while.
Another very important step in identifying the problem is the ability to reproduce it. Document in detail
what you think may be an incidental problem that you cannot reproduce. Chances are that you are unable to
reproduce the problem because you are unable for some reason to create the same set of conditions that
caused it in the first place. That doesn't mean it will never happen again, and when it does, you need to be
able to look back and compare the conditions that caused it in each instance. If you are dealing with an
intermittent problem, you should document the answers to the questions in the preceding section and any
other facts you have gathered.
Perhaps the best way to eliminate a possible cause is to remove it from the system and retest. This is true of
hardware and software elements that you believe may be causing a problem. For example, if you think a
conflict exists between two pieces of application software, you should stop one of the software programs to
see if the problem clears up. This same principle applies to hardware problems. If you think a problem is
being caused by conflicts between devices or expansion cards, you should open the case, remove the
suspected component, and restart the system. If the problem disappears, you have found the cause; otherwise,
keep eliminating devices until the problem goes away. If you must remove all of the expansion cards before
a problem clears up, you may also need to begin reinserting the cards in the reverse of the order in which you took
them out to see when the problem reappears.
Another excellent way to isolate a hardware problem on a PC is to use the known-good method. The known-
good method involves replacing the suspected hardware with another of the same make, model, and type
that you absolutely know to be good. If the problem goes away, you have a bad part; otherwise, keep testing.
Applying a Solution
In most situations, the fix to a problem is obvious, especially with software issues. If two applications have
conflicts, not running them together, upgrading one or both, or reinstalling one or both will usually fix the
problem. Always check the software manufacturer's Web site for information relating to a problem. You
may also want to check the operating system manufacturer's Web site for information on this and similar
problems with the software in question. If no information on your problem is available from either the
application or operating system manufacturer, report the software conflict and problem to them, if for no
other reason than to put it on the record.
If the problem is a hardware issue, check to see if the hardware is under warranty and, if so, what restrictions
the warranty imposes before you begin making too many changes. Never make changes that would void a
device's warranty. Contact the manufacturer with all of your information and work with its technical support
people to devise a solution to the problem.
Troubleshooting Tools
A variety of hardware, software, and information resources are available for use during troubleshooting procedures.
Hardware and Software Tools
The tools that you should have available to troubleshoot a PC include the following:
• A good set of screwdrivers, including a Phillips screwdriver
• Antistatic wrist strap, antistatic mat, and antistatic bags (for removing and storing components)
• Software system testing utilities, such as AMIDiag software from American Megatrends, Inc., Symantec's
Norton Utilities, and Eurosoft's Pc-Check, among others
• A digital multimeter for checking power supply voltages
• A supply of spare known-good components for replacement testing
Your senses are probably the most important "tools" when troubleshooting a PC. For example, your eyes or nose
will find a burned-out component faster than any other tool.
Information Resources
The Internet and Web are full of information resources you can use to gain information about a particular device
or application, or to learn how others have dealt with a problem you are having. Chances are pretty good that you
are not the first to have whatever problem you are having.
Chapter 11:
COMPUTER SUPPORT
On-line Support
If you are familiar with the concept of remote controlling a computer, then this should be nothing new to you.
Online support technicians use a special piece of software, chosen according to the kind of PC problem you have,
that you download to your computer. The download is very small and quick. This allows them to connect to your
computer securely as if they were sitting in front of it. There is no need for a technician to come to your
home. Most importantly, you still have the ultimate control over your computer.
You can sit there and watch them work, making sure that your important data stays private. If for any reason you
want to disconnect the session, with one simple click you can disconnect the computer repair technician, and it’s
impossible for them to get back into your PC.
Advantages
1. Getting PC help from an online computer repair service will be faster than from a local computer repair
service.
2. You won't have to unhook your computer and load it up in the car. You won't have to wait on a service
person to visit your home or business. You won't have to work within standard business hours. These are
huge advantages if you need your computer fixed fast.
Online computer repair services might not be able to fix every problem under the sun, but if they can fix
yours, they're often the better, and for sure the fastest, bet.
The first thing you need to do is decide on the backbone that’ll be running your helpdesk. The software that
every member of your support staff relies on needs to be top-notch, reliable and offer the features that your
specific business needs to do a stellar job.
If you’re going to be running a call center, you’ll need to take a different approach — tools from proprietary
companies like Interactive Intelligence will prove useful.
If you’re setting up a call center, you’ll want to have an online ticketing system to complement it — many
customers are averse to making calls for support these days — and there are plenty of options. From the
free and open source to commercial solutions, there’s bound to be a package that suits the size and needs
of your operation.
Remote Assistance Solutions
For web and tech businesses, a helpdesk that offers remote assistance should be a necessity. It allows support
staff to achieve resolutions faster than ever before, removing the variable that is the customer’s ability to
follow instructions over the phone or by email.
Knowledgebase & Canned Replies
As anyone who runs a support helpdesk will tell you, the vast majority of support requests you will receive
will be quite common — meaning that your team can drastically reduce response time by developing a set
of canned replies that can be quickly personalized and sent off for most inquiries.
Most companies will have an idea of the questions that are most likely to pop up before the helpdesk is
launched and can create an initial set of responses for these, but it’s essential to monitor queries and look
for trends. These trends can signal the need for a canned reply or knowledgebase article, or may be cause to
look into having certain features simplified and made more easily apparent for the average user.
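The canned-reply idea above can be sketched as a small template store that agents personalize before sending. The template key, wording, and field names below are invented for illustration, not taken from any real helpdesk tool:

```python
# Minimal sketch of a canned-reply store; template keys, wording, and field
# names are illustrative only.
CANNED_REPLIES = {
    "password_reset": (
        "Hi {name},\n\n"
        "To reset your password, open Account Settings and choose "
        "'Reset password'. Let us know if the problem persists.\n\n"
        "Regards,\n{agent}"
    ),
}

def personalized_reply(key: str, **fields) -> str:
    """Fetch a canned reply and quickly personalize it before sending."""
    return CANNED_REPLIES[key].format(**fields)

msg = personalized_reply("password_reset", name="Sam", agent="Alex")
```

A real helpdesk package usually builds this in as "macros" or "saved replies"; the point is that the common answer is written once and only the personal details change.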
Prioritization
It’s important to have a good prioritization system that support staff can use to judge the order in which tickets need
to be dealt with based on a set of standards.
Tickets that have been in the system for too long should be dealt with first, and issues with payments are generally of
much higher importance than issues with the technology itself. Dealing with people’s money is always a sensitive
issue.
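The prioritization standards described above (tickets that have sat too long first, then payment issues, then everything else by age) can be sketched as a simple sort key. The ticket fields, category labels, and the 24-hour age limit are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ticket:
    id: int
    opened: datetime
    category: str   # e.g. "payment" or "technical" (illustrative labels)

def priority_key(ticket: Ticket, now: datetime,
                 age_limit: timedelta = timedelta(hours=24)):
    """Lower tuples sort first: overdue tickets, then payment issues, then oldest."""
    overdue = (now - ticket.opened) > age_limit
    is_payment = ticket.category == "payment"
    return (not overdue, not is_payment, ticket.opened)

now = datetime(2024, 1, 2, 12, 0)
tickets = [
    Ticket(1, datetime(2024, 1, 2, 11, 0), "technical"),
    Ticket(2, datetime(2024, 1, 1, 9, 0), "technical"),   # in the system too long
    Ticket(3, datetime(2024, 1, 2, 10, 0), "payment"),
]
queue = sorted(tickets, key=lambda t: priority_key(t, now))
# queue order: ticket 2 (too old), then ticket 3 (payment), then ticket 1
```

Encoding the standards as a sort key keeps the rules explicit and lets the whole team apply them consistently.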
Reporting
Setting up a helpdesk with robust reporting helps support managers to identify problems with your helpdesk
and refine the approach. Are customers receiving responses in a timely fashion? Are their problems resolved
in a timely fashion? Such metrics may speak to your management, the work ethic of your staff, or their
effectiveness.
Good reporting will allow you to find problematic staff or issues that take longer to resolve than others
across the board, even when your overall statistics are looking good. Constant iteration towards greater
efficiency and effectiveness in every area of the helpdesk is vital to building a reputation for excellent
customer service.
Customer Feedback
Customer feedback should be combined with reporting in reviews of your helpdesk’s performance. There’s
a human aspect to the support experience that can’t be captured by quantitative reports.
Perhaps your idea of a fast average response time conflicts with that of users. There are always customers
who are unreasonable complainers, but a high volume of feedback of this nature is cause for concern and
may require you to shift your expectations.
If your staff is reporting a resolution for a ticket but feedback indicates that the customer hasn’t been
satisfied, you need to look at that particular employee’s approach to providing support.
What’s most important to look for in customer feedback, however, is how they felt about your staff and
their attitudes. Did they have a pleasant, welcoming experience where they felt staff were happy to solve
their issue? Or did they feel that their support person was condescending — a common problem with those
who have some sort of technological superiority complex?
The attitude of your support staff can sometimes be overlooked in business, and that’s a big mistake. There
are few better ways to develop loyal relationships with your users than by providing them with the best
support experience of their life.
• Tasks.
• Education.
• Knowledge.
• Mindset.
Approaches to training
• On-the-job versus off-the-job training.
• Training methods.
Training Methods
• Lecture demonstration.
• Role playing.
• Seminars.
• Individual assignments.
• Field trips.
• Case studies.
• Panels.
• Programmed instruction.
Location
• On site or off site.
• Off-site training makes it possible for groups of individuals who are working at different sites to assemble at a
common point.
• Use a footrest.
Eyestrain
Eyes can become strained after staring at a computer screen for a long time, particularly if working in bad light or
with a flickering screen.
Solutions to Eyestrain:
• Take regular breaks – do not work for more than 1 hour at a time.
• Windows around you must have blinds to avoid glare from the sun.
Types of Threats
a) Interruption
– An asset of the system is destroyed or becomes unavailable or unusable
– Attack on availability
– Destruction of hardware
– Cutting of a communication line
– Disabling the file management system
b) Interception
– An unauthorized party gains access to an asset
– Attack on confidentiality
– Wiretapping to capture data in a network
– Illicit copying of files or programs
c) Modification
– An unauthorized party not only gains access but tampers with an asset
– Attack on integrity
– Changing values in a data file
– Altering a program so that it performs differently
– Modifying the content of messages being transmitted in a network
d) Fabrication
– An unauthorized party inserts counterfeit objects into the system
– Attack on authenticity
– Insertion of spurious messages in a network
– Addition of records to a file
Protection
No protection
– Sensitive procedures are run at separate times
Isolation
– Each process operates separately from other processes with no sharing or communication
Share all or share nothing
– Owner of an object declares it public or private
Share via access limitation
– Operating system checks the permissibility of each access by a specific user to a specific
object
– Operating system acts as the guard
Share via dynamic capabilities
– Dynamic creation of sharing rights for objects
Limit use of an object
– Limit not only access to an object but also the use to which that object may be put
– Example: a user may be able to derive statistical summaries but not to determine specific
data values
Protection of Memory
• Security
• Ensure correct function of various processes that are active
Access Matrix
Subject: An entity capable of accessing objects
Object: Anything to which access is controlled
Access rights: The way in which an object is accessed by a subject
Access Control List
• Matrix decomposed by columns
• For each object, an access control list gives users and their permitted access rights
Capability Tickets
• Decomposition of access matrix by rows
• Specifies authorized object and operations for a user
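The two decompositions of the access matrix can be illustrated with a toy matrix; the subjects, objects, and rights below are invented for illustration:

```python
# Toy access matrix: keys are (subject, object) pairs, values are access rights.
access_matrix = {
    ("alice", "file1"): {"read", "write"},
    ("bob",   "file1"): {"read"},
    ("bob",   "file2"): {"read", "write"},
}

def access_control_lists(matrix):
    """Decompose by columns: for each object, which users have which rights."""
    acls = {}
    for (subject, obj), rights in matrix.items():
        acls.setdefault(obj, {})[subject] = rights
    return acls

def capability_tickets(matrix):
    """Decompose by rows: for each subject, the authorized objects and operations."""
    caps = {}
    for (subject, obj), rights in matrix.items():
        caps.setdefault(subject, {})[obj] = rights
    return caps
```

The same matrix entries appear in both structures; an ACL answers "who may touch this object?", while a capability ticket answers "what may this user touch?".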
Intrusion Techniques
• The objective of an intruder is to gain access to the system or to increase the range of privileges
accessible on a system
• The protected information that an intruder most often seeks is a password
ID Provides Security
• Determines whether the user is authorized to gain access to a system
• Determines the privileges accorded to the user
– Guest or anonymous accounts have more limited privileges than others
• ID is used for discretionary access control
– A user may grant permission to files to others by ID
Intrusion Detection
• Assume the behavior of the intruder differs from the legitimate user
• Statistical anomaly detection
– Collect data related to the behavior of legitimate users over a period of time
– Statistical tests are used to determine if the behavior is not legitimate behavior
• Rule-based detection
– Rules are developed to detect deviation from previous usage patterns
– Expert system searches for suspicious behavior
• Audit record
– Native audit records
• All operating systems include accounting software that collects information on user
activity
– Detection-specific audit records
• Collection facility can be implemented that generates audit records containing only
that information required by the intrusion detection system
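Statistical anomaly detection as described above can be sketched minimally, assuming a single metric (say, logins per hour) and a hypothetical three-standard-deviation threshold:

```python
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation that deviates from the user's historical mean
    by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Logins per hour recorded for a legitimate user (illustrative data):
history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
is_anomalous(history, 4)    # within the learned profile
is_anomalous(history, 40)   # far outside the learned profile
```

Real intrusion detection systems track many such measures per user and per session, but the core idea is the same: learn normal behavior, then flag statistically unlikely deviations.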
Malicious Programs
• Those that need a host program
– Fragments of programs that cannot exist independently of some application program,
utility, or system program
• Independent
– Self-contained programs that can be scheduled and run by the operating system
Trapdoor
• Entry point into a program that allows someone who is aware of the trapdoor to gain access
• Used by programmers to debug and test programs
– Avoids necessary setup and authentication
– Provides a method to activate the program if something is wrong with the authentication procedure
Logic Bomb
• Code embedded in a legitimate program that is set to “explode” when certain conditions are met
– Presence or absence of certain files
– Particular day of the week
– Particular user running application
Trojan Horse
• Useful program that contains hidden code that when invoked performs some unwanted or harmful
function
• Can be used to accomplish functions indirectly that an unauthorized user could not accomplish
directly
– User may set file permissions so everyone has access
Viruses
• Program that can “infect” other programs by modifying them
– Modification includes copy of virus program
– The infected program can infect other programs
Worms
• Use network connections to spread from system to system
• Electronic mail facility
– A worm mails a copy of itself to other systems
• Remote execution capability
– A worm executes a copy of itself on another system
• Remote log-in capability
– A worm logs on to a remote system as a user and then uses commands to copy itself from
one system to the other
Zombie
• Program that secretly takes over another Internet-attached computer
• It uses that computer to launch attacks that are difficult to trace to the zombie’s creator
Virus Stages
Dormant phase
– Virus is idle
Propagation phase
– Virus places an identical copy of itself into other programs or into certain system areas on
the disk
Triggering phase
– Virus is activated to perform the function for which it was intended
– Caused by a variety of system events
Execution phase
– Function is performed
Types of Viruses
Parasitic
– Attaches itself to executable files and replicates
– When the infected program is executed, it looks for other executables to infect
Memory-resident
– Lodges in main memory as part of a resident system program
– Once in memory, it infects every program that executes
Boot sector
– Infects boot record
– Spreads when system is booted from the disk containing the virus
Stealth
– Designed to hide itself from detection by antivirus software
– May use compression
Polymorphic
– Mutates with every infection, making detection by the “signature” of the virus impossible
– Mutation engine creates a random encryption key to encrypt the remainder of the virus
• The key is stored with the virus
Macro Viruses
• Platform independent
– Most infect Microsoft Word
• Infect document, not executable portions of code
• Easily spread
• A macro is an executable program embedded in a word processing document or other type of file
• Autoexecuting macros in Word
– Autoexecute
• Executes when Word is started
– Automacro
• Executes when defined event occurs such as opening or closing a document
– Command macro
• Executed when user invokes a command (e.g., File Save)
Antivirus Approaches
• Detection
• Identification
• Removal
Generic Decryption
CPU emulator: Instructions in an executable file are interpreted by the emulator rather than the processor
Virus signature scanner: Scans target code looking for known virus signatures
Emulation control module: Controls the execution of the target code
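At its simplest, a signature scanner searches the target bytes for known byte patterns. The signature names and byte patterns below are invented for illustration, not a real signature database:

```python
# Toy signature database; names and byte patterns are illustrative only.
KNOWN_SIGNATURES = {
    "toy-virus-1": b"\xde\xad\xbe\xef",
    "toy-virus-2": b"EVIL_PAYLOAD",
}

def scan(code: bytes) -> list:
    """Return the names of all known signatures found in the target code."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in code]

scan(b"harmless data")                  # no matches
scan(b"prefix \xde\xad\xbe\xef rest")   # matches toy-virus-1
```

This is why generic decryption matters: a polymorphic virus encrypts its body so the fixed byte pattern is never present on disk, and the scanner only finds it after the CPU emulator has let the virus decrypt itself.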
E-mail Virus
• Activated when recipient opens the e-mail attachment
• Activated by opening an e-mail that contains the virus
• Uses Visual Basic scripting language
• Propagates itself to all of the e-mail addresses known to the infected host
Trusted Systems
• Multilevel security
– Information organized into categories
– No read up
• Only read objects of a lesser or equal security level
– No write down
• Only write objects of greater or equal security level
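These two rules are the core of the Bell-LaPadula multilevel security model, and they can be sketched directly. The level names below are illustrative:

```python
# Security levels ordered from lowest to highest; names are illustrative.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """No read up: a subject may only read objects at a lesser or equal level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """No write down: a subject may only write objects at a greater or equal level."""
    return LEVELS[subject_level] <= LEVELS[object_level]

can_read("secret", "confidential")   # reading down is allowed
can_write("secret", "confidential")  # writing down would leak information
```

Together the rules guarantee that information can flow upward in classification but never downward.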
Access Token
Security ID: Identifies a user uniquely across all the machines on the network (logon name)
Group SIDs: List of the groups to which this user belongs
Privileges: List of security-sensitive system services that this user may call
Default owner: If this process creates another object, this field specifies who the owner is
Default ACL: Initial list of protections applied to the objects that the user creates
Security Descriptor
Flags: Defines type and contents of a security descriptor
Owner: Owner of the object can generally perform any action on the security descriptor
System Access Control List (SACL): Specifies what kinds of operations on the object should generate
audit messages
Discretionary Access Control List (DACL): Determines which users and groups can access this object
for which operations
CHAPTER 13
SYSTEM SELECTION & ACQUISITION
Factors for success
It’s good to know and realize how hard it is to succeed in selecting and acquiring the right system. This
helps to put expectations in the right place
There are various reasons why IS projects fail, but what helps information systems projects succeed?
▪ Management support
▪ Commitment and feedback from client and end-user
▪ Motivated and competent people working in the project
▪ Realistic goals
▪ Properly done requirements specifications
▪ Sufficient monitoring and guiding
Introduction
Acquisition & selection of the right system goes through the following four phases
Main phases in the process
▪ Preparation
▪ Choosing
▪ Monitoring
▪ Finishing
Acquisition chart
Preparation
An overview of acquiring an information system
▪ Strategic planning
▪ Planning of strategic goals for information technology (IT), the organization’s business
strategies and the operational activities of the company
▪ In larger companies strategic plans are usually reviewed every 1-2 years, and many
IS-projects are launched from these plans.
Yearly plans
▪ Budgeting for development projects
▪ More specific plans on goals and business activities regarding IT
Changing organizational functions
▪ Every IS-project means a change in the way things have been done before and may require
large organizational changes
What? Solution
▪ Description of system, problem or need
▪ Outlining the boundaries of the acquisition
How? Leading the project through
▪ Scheduling, phases and decision points
▪ Resourcing of project
▪ Buy or make IS? How to choose a vendor?
▪ Project management, who, when, how?
▪ Managing documents from project and project documentation
▪ Problem solving, responsibilities, criteria for ending the project
Acquirement plan
▪ Plan is based on scheduling and phasing of project
▪ The schedule needs flexibility even if it is very detailed; progress of the whole project cannot
be delayed because of one little detail. Alternative routes to finishing the project in
problem situations should also be considered and planned
▪ Testing should be considered already at this point in case testing needs special
arrangements:
Partners and other outer parties involved
Is infrastructure sufficient?
Architecture
The technical architecture of an IS here means, for example:
▪ Operating system
▪ Database system
▪ Programming languages
▪ Etc.
Choosing technical architecture depends on:
▪ existing architecture and information systems
▪ infrastructure
▪ resources and services available
▪ Functional requirements
Software components
Markets for readymade software components that can be reasonably easily integrated into a system
are not yet well developed.
There are problems with intellectual property and copyright issues.
How will future development of a bought component be handled (especially in case of
bankruptcy of the vendor)?
Services required
Responsibilities and work should be divided in a written contract between vendor and customer
Requirements management can be handled by your own employees, the vendor, or an outside consultant. This is
a very important part of the project, and the options should be carefully weighed. Even in the case of a
readymade system, requirements specifications have to be made.
Outside consultant can be very helpful in:
▪ choosing vendor
▪ making requirements specifications
▪ investigating market for readymade software or components
▪ planning testing and approval of final product
▪ helping communication between client and vendor
Acquirement policy
In most cases a call for bids is arranged after the decision to use a vendor. It takes time and resources
but is the only effective way of making sure that the vendor and the price are right
The acquirement plan should state how communication and the process of dealing with vendors are
handled: how bidders are informed and by what means (e-mail, written letter, fax, etc.), what the
timetable is, and how answers are supposed to be sent.
Many companies have ”IT partners” as regular vendors, but these companies should also be
compared to others
Pricing can be by the hour or as a whole sum for the project. Some newer models combine these
two
Contract policy should be stated in the acquirement plan; it’s possible to use IT2000 or other
contract models as a core for the contract
Contract flexibility should also be stated: which things are necessities and which are
optional. These constraints should also be mentioned in the request for proposal
The acquirement plan should also tell which standards are used in the project.
Acquirement organization
The client should be ready to allocate enough resources for vendors who wish to clarify issues
in the request for proposal. These persons need to have enough technical competence and business
understanding to answer the questions.
The acquirement organization should be shown as part of the acquirement plan, and it should show:
▪ who prepares and executes the choosing process
▪ who makes the acquirement decision
▪ who participates in execution, controlling, steering and finishing of the project
The acquirement organization must have the authority to make the decisions it needs
Risks involving human resources
❖ Technology used, operating area, methods and working habits are known poorly
❖ Little experience of project work or project leading
❖ Poorly motivated and committed users
❖ Users are inexperienced with technology and its implementation, testing and training
❖ Management is not committed to the project
❖ Project members do not have enough time
❖ Changes in project personnel are expected during the project
Risks in implemented technology
❖ New and untested technology
❖ Unestablished technology
❖ Scaling of capacity is unsuccessful
❖ Peripherals, computers and software have to be tailored a lot
Risks involving clients, partners and vendors
❖ Vendor or partner is not financially solid or is the wrong size compared to the project
❖ Vendor or partner does not have time for this project
❖ Many partners in one project
❖ Responsibilities and who does what are not clear in the contract
❖ Effects on the client’s functions are not known
Risks in project management
❖ Poor project culture: project management processes and techniques are on a poor level
❖ Project manager or project members are not familiar with and/or do not use the newest
techniques and instruments
❖ During the project lots of demands for changes arise
❖ Management or steering group is too big and inefficient
Risks involving the outcome of the project
❖ Risks of using the project’s outcome have not been analysed properly
❖ Reactions of clients, users, etc. are stronger and more negative than expected
❖ Outcome of the project is too difficult to use
❖ Outcome of the project is not flexible enough
❖ Technology becomes too old-fashioned before the end of the product’s economic life cycle
❖ Availability and continuation of maintenance and support are unsure
❖ Security of information is not adequate
❖ Currency rates, evolution of prices or taxation become uneconomic
Choosing
Choosing solution and vendor
▪ Making request for proposal
▪ Comparing proposals
▪ Decision
▪ Contract
▪ Preliminary project plan
Contract terms
A written contract can be validated by lawyers or by an experienced person who is used to making contracts
It’s possible to use IT2000 model for contracts, or use the model from vendor
Payments should be phased with the project phasing, so that it is in the vendor’s interest to get phases
ready to be paid for them; it’s also reasonable to define a warranty sum which is paid after the warranty
time is over
It’s also wise to determine the costs of maintenance and the ownership of the source code in the contract
Issues of copyright also have to be checked if ready-made software or components are used as part of the
system
Pricing
It’s possible to use different pricing models:
▪ Pricing by the hour
▪ Risk is mainly on client
▪ Pricing by contract
▪ Risk is mainly on vendor
▪ Pricing by sharing risks
▪ The vendor presents an estimate of the hours needed to finish the project and the costs
for it. If those hours are not enough to complete the project, the next hours are
much cheaper. A certain maximum sum can be set. There has to be a way for both
parties to control the working hours needed for the project. If all the hours are not
needed, the difference can be given to project members as a motivational bonus
Pricing by function points
▪ In the call for bids the “size” of the project is represented, and vendors are asked to give a
price per function point
▪ The positive side of this model for the vendor is that changes to the system will be priced
automatically; for the client this reduces the risks because the vendor is committed to a
certain price per point
▪ Also an effective tool for change management, because the price of each change is
easily calculated
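A small worked example of why each change is priced automatically under this model; the point counts and unit price are invented:

```python
def project_price(function_points: int, price_per_point: int) -> int:
    """Total price is simply size in function points times the agreed unit price."""
    return function_points * price_per_point

base = project_price(320, 150)    # initial system sized at 320 function points
change = project_price(12, 150)   # a change request sized at 12 function points
# base == 48000 and change == 1800: the change is priced without renegotiation
```

Because the unit price is fixed in the contract, change management reduces to counting the function points a change adds or removes.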
Comparing bids
Bids are evaluated first separately
▪ the best offers will be taken to the next round (comparing more than three is difficult)
Then bids are evaluated together and compared to each other; this may also mean a new, more
explicit call for bids for the chosen ones, or the winner might be chosen through negotiations and a
more specific evaluation of the bids
Giving points
▪ Each member of the group that chooses the vendor gives points in his/her special knowledge area;
after each part of each bid has been thoroughly evaluated, the group together decides the
points given to a certain answer
▪ Points will be put into a chart where bids can be evaluated simultaneously
After the finalists are clear, they should be met face to face in order to find out the compatibility of habits
and organizations, and how easy or hard it would be to work together
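The point-giving step can be summarized as a weighted score per bid. The evaluation areas, weights, and scores below are illustrative; a real acquisition would define its own criteria in the acquirement plan:

```python
# Illustrative evaluation areas and weights (must sum to 1.0).
WEIGHTS = {"price": 0.4, "functionality": 0.4, "vendor": 0.2}

def total_score(points: dict) -> float:
    """Weighted sum of the points the group agreed on for each area."""
    return sum(WEIGHTS[area] * points[area] for area in WEIGHTS)

bid_a = {"price": 8, "functionality": 6, "vendor": 7}
bid_b = {"price": 5, "functionality": 9, "vendor": 8}
# bid B outscores bid A overall, even though bid A has the better price
```

Putting the agreed points into a chart like this lets all bids be evaluated simultaneously and makes the trade-offs between price and functionality explicit.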
Monitoring
Audits on the progress of the project should be held with the project group and steering group at a steady pace
In planned decision points the results of phases are accepted
In the project group, steering group and management group, changes to the project are discussed and
decided
The management group leads the project
Finishing
In the finishing phase, conclusions on the acquirement and the project are compiled and stored for future
learning processes
The project manager writes a final report on the project; the report is then compared to the original project plan
It’s important to learn from what was done right and what went wrong
Management group verifies the final report
Functions of company should be adjusted to new circumstances as rapidly as possible to fully
benefit from the new IS
CHAPTER 14:
EMERGING TRENDS IN ICT
1. Software-as-a-Service (SaaS) – Delivery model for software in which you pay for software on a pay-per-
use basis instead of buying the software outright.
• Use any device anywhere to do anything
• Pay a small fee and store files on the web
• Access those files later with your “regular” computer
• Makes use of an application service provider
2. Push (not pull) technology – We live in a “pull” environment, i.e., you visit websites and request information
about products and services. The future is a “push” environment in which businesses come to you with products and
services based on your profile. Businesses will know so much about you that they can tailor and customize offerings.
3. F2b2C – Factory-to-business-to-consumer.
A consumer communicates through a business on the Internet and directly provides product specifications to a
factory that makes the customized and personalized product to the consumer's specifications and then ships it
directly to the consumer.
• The business (small b) is only an intermediary between the consumer (capital C) and the factory (capital F)
• A form of disintermediation
• Disintermediation - the use of the Internet as a delivery vehicle, whereby intermediate players in a
distribution channel can be bypassed
4. Physiological Interaction
Today you use keyboards, mice, and the like; these are physical interfaces.
Physiological interfaces will instead capture and use your real body characteristics, such as:
• Voice
• Iris scan
• And the like
5. Immortal Avatars
Imagine speaking to your great, great, great grandchildren about the wonders of the 21st century long after your
physical body has decomposed!!
6. Ubiquitous Computing
Connecting everything to the web: monitoring house pets and plants, and the operation of appliances.
7. CAVEs
• Cave automatic virtual environment (CAVE) - special 3-D virtual reality room that can display images
of people and objects in other CAVEs
• These are holographic devices
• Holographic device - creates, captures, and/or displays images in 3-D form
8. Virtual Reality
Virtual reality - three-dimensional computer simulation in which you actively and physically participate
Glove - input device; captures the movement and strength of your hands and fingers
Headset (head-mounted display) - I/O device; captures your head movement; its screen covers your field of vision
Walker - input device; captures the movement of your feet as you walk or turn
9. Biometrics
Biometrics - the use of physiological characteristics - fingerprint, iris, voice sound, and even breath - to provide
identification
That's the narrow definition
In a broader sense, biometrics can also be used to create custom-fitting clothes
Biometric Security
Security can rest on three factors: what you know (e.g., a PIN), what you have (e.g., a card), and who you are
(biometrics). Today's systems (ATMs, for example) use only the first two - one reason why identity theft is so high.
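Biometric security is often framed as a third authentication factor alongside what you know (a PIN) and what you have (a card). A minimal sketch of that idea — all account names, stored values, and template identifiers below are hypothetical:

```python
# Hypothetical three-factor check: what you know, what you have,
# who you are. Stored values are illustrative only.
import hashlib

def sha256(s: str) -> str:
    """Hash a secret so the PIN itself is never stored in the clear."""
    return hashlib.sha256(s.encode()).hexdigest()

ACCOUNT = {
    "pin_hash": sha256("1234"),         # what you know
    "card_id": "CARD-0001",             # what you have
    "fingerprint_id": "FP-TEMPLATE-9",  # who you are (biometric)
}

def authenticate(pin, card_id, fingerprint_id=None, require_biometric=False):
    """Two-factor check by default (like today's ATMs); the biometric
    third factor makes a stolen card plus PIN insufficient."""
    ok = sha256(pin) == ACCOUNT["pin_hash"] and card_id == ACCOUNT["card_id"]
    if require_biometric:
        ok = ok and fingerprint_id == ACCOUNT["fingerprint_id"]
    return ok
```

A thief holding the card and PIN passes the two-factor check, but fails once the biometric factor is required — which is why biometrics is tied to reducing identity theft.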
Integrating Biometrics with Transaction Processing
TPS - captures events of a transaction
Biometric processing system - captures information about you, perhaps...
• Weight loss
• Pregnancy
• Use of drugs
• Alcohol level
• Vitamin deficiencies
• Is this ethical?
• Can banks use ATMs and determine if you've been drinking?
• How will businesses of the future use biometric information? Ethically? Or otherwise?