Computer Maintenance Module
Introduction to CPU
The CPU is the brain of the computer. Sometimes referred to simply as the processor or central
processor, the CPU is where most calculations take place. In terms of computing power, the CPU
is the most important element of a computer system. On large machines, CPUs require one or more
printed circuit boards. On personal computers and small workstations, the CPU is housed in a
single chip called a microprocessor. Many people wrongly refer to the system case (chassis) as the
CPU; the chassis is merely the housing for devices such as the CPU, RAM, disks, motherboard,
and expansion cards.
A CPU is made up of three main components:
The arithmetic logic unit (ALU), which performs arithmetic (addition, subtraction,
multiplication, and division) and logical (comparison, negation, conjunction, and disjunction)
operations.
The control unit, which extracts instructions from memory, decodes them, and executes them,
calling on the ALU when necessary.
The memory unit, or registers, which store intermediate results from the ALU.
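As a rough illustration of the ALU's role, the operations listed above can be sketched in Python. This is a conceptual model only; the `alu` function and its operation names are illustrative, not a real hardware interface:

```python
# Minimal sketch of the ALU operations described above (names are illustrative).
def alu(op, a, b=None):
    """Perform one basic arithmetic or logical operation on integer operands."""
    ops = {
        "add": lambda: a + b,              # arithmetic
        "sub": lambda: a - b,
        "mul": lambda: a * b,
        "div": lambda: a // b,
        "cmp": lambda: (a > b) - (a < b),  # comparison: -1, 0, or 1
        "not": lambda: ~a,                 # negation (bitwise)
        "and": lambda: a & b,              # conjunction
        "or":  lambda: a | b,              # disjunction
    }
    return ops[op]()
```

In a real CPU the control unit would select the operation and route operands from the registers; here a dictionary lookup stands in for that selection logic.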
Sha
Department of Information Technology, Bule Hora University, Ethiopia. Page 1
CPUs have gone through many changes in the years since Intel came out with the first
one. IBM chose Intel's 8088 processor as the brains of the first PC, and this choice is what
made Intel the perceived leader of the CPU market. Intel remains the perceived leader of
microprocessor development. While newer contenders have developed their own technologies for
their own processors, Intel continues to be more than a viable source of new technology in this
market, with the ever-growing AMD nipping at its heels.
The first four generations of Intel processors took "8" as the series name, which is why
technical types refer to this family of chips as the 8088, 8086, and 80186.
This goes right on up to the 80486, or simply the 486. The following chips are considered the
dinosaurs of the computer world.
Intel 8086 (1978)
This chip was skipped over for the original PC but was used in a few later computers that didn't
amount to much. It was a true 16-bit processor and talked with its cards via a 16-wire data
connection. The chip contained 29,000 transistors and had 20 address lines, giving it the ability to
address up to 1 MB of RAM.
Intel 8088 (1979)
The 8088 is, for all practical purposes, identical to the 8086. The only difference is that it handles
its address lines differently than the 8086. This chip was the one that was chosen for the first IBM
PC, and like the 8086, it is able to work with the 8087 math coprocessor chip.
NEC V20 and V30 (1981)
Clones of the 8088 and 8086, respectively. They are supposed to be about 30% faster than the Intel originals.
Intel 80286 (1982)
Also called the 286 processor, it is a 16-bit, 134,000-transistor processor capable of addressing
up to 16 MB of RAM. In addition to the increased physical memory support, this chip is able to
work with virtual memory, thereby allowing for much more expandability. The 286
was the first "real" processor: it introduced the concept of protected mode,
the ability to multitask, having different programs run separately but at
the same time.
Intel 386 (1985–1990)
The 386 signified a major increase in technology from Intel. The 386 was a 32-bit processor,
meaning its data output was immediately twice that of the 286, and it contained 275,000
transistors. The 80386DX came in 16, 20, 25, and 33 MHz versions.
The 32-bit address bus allowed the chip to work with a full 4 GB of RAM and a
staggering 64 TB of virtual memory. In addition, the 386 was the first chip to use instruction
pipelining, which allows the processor to start working on the next instruction before the previous
one is complete. 386 chips were designed to be user friendly:
all chips in the family were pin-for-pin compatible, and they were binary compatible with
previous x86 chips, meaning that users didn't have to get new software to use them. Also, the 386
offered power-friendly features such as low voltage requirements and System Management Mode
(SMM), which could power down various components to save power. Overall, this chip was a big
step for chip development. It set the standard that many later chips would follow, and it offered a
simple design that developers could easily build on.
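The instruction pipelining introduced with the 386 can be illustrated with a back-of-the-envelope cycle count. This is a simplified model that assumes a fixed number of stages and no stalls; the function names are illustrative:

```python
# Toy model of instruction pipelining with a 3-stage pipe (fetch, decode, execute).
# Without pipelining, each instruction occupies the processor for all stages;
# with pipelining, a new instruction enters the pipe every cycle once it is full,
# so N instructions finish in N + stages - 1 cycles instead of N * stages.

def cycles_without_pipeline(n_instructions, stages=3):
    return n_instructions * stages

def cycles_with_pipeline(n_instructions, stages=3):
    return n_instructions + stages - 1
```

For 10 instructions on 3 stages, the unpipelined model takes 30 cycles while the pipelined one takes 12, which is why overlapping instruction handling was such a large win.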
Intel 486 (1989–1994)
The 80486DX was released in 1989. It was a 32-bit processor containing
1.2 million transistors. It had the same memory capacity as the 386 (both
were 32-bit) but offered twice the speed, at 26.9 million instructions per
second (MIPS) at 33 MHz. There were some improvements beyond just
speed, though: the 486 came in 5-volt and 3-volt versions,
allowing flexibility for desktops and laptops.
AMD K5 (1996)
This chip was designed to go head to head with the Pentium processor. It was designed to fit right
into Socket 7 motherboards, allowing users to drop K5s into motherboards they might already
have had. The chip was fully compatible with all x86 software. In order to rate the speed of the
chips, AMD devised the P-rating system (or PR rating). This number identified the speed as
compared to the true Intel Pentium equivalent. K5s ran from 75 MHz to 166 MHz (in P-ratings).
They contained 24KB of L1 cache and 4.3 million transistors. While the K5s were nice little chips
for what they were, AMD quickly moved on with the release of the K6.
AMD Athlon (1999 – Present)
The original Athlon came at 500MHz. Designed at a 0.25-micron level, the chip boasted a
super-pipelined, superscalar micro-architecture. It contained nine execution pipelines, a
super-pipelined FPU, and an again-enhanced 3DNow! technology. These features, all rolled into
one, gave the Athlon a real performance reputation. One notable feature of the Athlon is the new
slot interface. While Intel could play games by patenting Slot 1, AMD decided to call the bet by
developing a slot of its own, Slot A. Slot A looks just like Slot 1, although the two are not
electrically compatible. But the closeness of the two interfaces allowed motherboard
manufacturers to more easily manufacture mainboard PCBs that could be interchangeable: they
would not have to redesign an entire board to accommodate either Intel or AMD, but could do
both without too much hassle.
Later, in May 2001, AMD released the Athlon "Palomino", also dubbed the Athlon 4.
Pentium Pro (1995)
The Pentium Pro (also called the "P6" or "PPro") is a RISC
chip with a 486 hardware emulator on it, running at 200
MHz or below. Several techniques are used by this chip
to produce more performance than its predecessors.
Three instructions can be decoded in each clock cycle.
Pentium II (1997)
The Pentium II is like the child of a Pentium MMX mother and a
Pentium Pro father. The Pentium II is optimized for 32-bit applications and
uses the Slot 1 interface to the motherboard. It also contains the MMX
instruction set, which was almost a standard by this time. The chip uses
the dynamic execution technology of the Pentium Pro, allowing the
processor to predict coming instructions and accelerate work flow. It actually analyzes program
instructions and re-orders the schedule of instructions into an order that can be run the quickest.
The Pentium II has 32KB of L1 cache (16KB each for data and instructions) and 512KB of L2
cache on the package. The L2 cache runs at half the speed of the processor, not at full speed.
Nonetheless, the fact that the L2 cache is not on the motherboard, but instead in the chip package
itself, boosts performance.
Pentium IV (2000 – Current)
The Pentium IV is a truly new CPU architecture and serves as the beginning of
new technologies that we will see for the next several years. This new
NetBurst architecture is designed with future speed increases in mind.
According to Intel, NetBurst is made up of four new technologies: Hyper-
Pipelined Technology, the Rapid Execution Engine, the Execution Trace Cache,
and a 400MHz system bus.
Cyrix later launched the Media GX (1996). It could support up to 128 MB of EDO RAM in 4
separate memory banks, and its video sub-system could support resolutions of up to 1280x1024x8
or 1024x768x16.
Common sockets have retention clips that apply a constant force, which must be overcome when a
device is inserted. For chips with a large number of pins, either zero-insertion force (ZIF) sockets
or land grid array (LGA) sockets are used instead. These designs apply a compression force once
either a handle (for ZIF type) or a surface plate (LGA type) is put into place. This provides
superior mechanical retention while avoiding the risk of bending pins when inserting the chip into
the socket.
CPU sockets are used in desktop and server computers. As they allow easy swapping of
components, they are also used for prototyping new circuits. Laptops typically use surface mount
CPUs, which need less space than a socketed part.
Intel and AMD have created a set of socket and slot designs for their processors. Each socket or
slot is designed to support a different range of original and upgrade processors.
[Socket specification table: for each socket, the table listed its year of introduction, supported processors (Intel Pentium 4 and Xeon; AMD Sempron, Turion 64, Athlon X2, Phenom, and Phenom II, including the 6000 series), package type, and bus speed. For example: Socket 423, introduced in 2000 for the Intel Pentium 4, is a PGA package with a 400 MT/s (100 MHz) bus.]
Sockets 4, 5, 7, and 8 are Pentium and Pentium Pro processor sockets, as shown below.
SOCKET 1
Socket 1 is a 169-pin PGA socket. Motherboards that have this socket can support any of the
486SX, DX, and DX2 processors, as well as the DX2/OverDrive versions. This type of socket is
found on most 486 systems that were originally designed for OverDrive upgrades. The below
figure shows the pin-out of Socket 1.
The original DX processor draws a maximum of 0.9 amps of 5V power in its 33MHz form (4.5
watts) and a maximum of 1 amp in its 50MHz form (5 watts). The DX2 processor, or OverDrive
processor, draws a maximum of 1.2 amps at 66MHz (6 watts). This minor increase in power
requires only a passive heat-sink consisting of aluminum fins that are glued to the processor with
thermal transfer epoxy. Passive heat-sinks don't have any mechanical components like fans;
heat-sinks with fans or other devices that use power are called active heat-sinks. OverDrive
processors rated at 40MHz or less do not have heat-sinks.
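The wattage figures above follow directly from the relation P = V × I (watts = volts × amps), which can be checked in a few lines of Python (the variable names are illustrative):

```python
# Power draw follows from P = V * I (watts = volts * amps).
def power_watts(volts, amps):
    return volts * amps

dx_33mhz  = power_watts(5.0, 0.9)  # original DX at 33MHz
dx_50mhz  = power_watts(5.0, 1.0)  # original DX at 50MHz
dx2_66mhz = power_watts(5.0, 1.2)  # DX2/OverDrive at 66MHz
```

These reproduce the 4.5 W, 5 W, and 6 W figures quoted in the text, which is why only a passive heat-sink is needed at this power level.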
SOCKET 2
Socket 2 supports the Pentium OverDrive processor, a clock-multiplied chip that runs at 2.5 times
the motherboard speed. The below figure shows the pin-out configuration of the official Socket 2
design.
Also called the Pentium OverDrive, it is not a full-scale (64-bit) Pentium. Intel released the
design of Socket 2 a little prematurely and found that the chip ran too hot for many systems. The
company solved this problem by adding a special active heat-sink to the Pentium OverDrive
processor. This active heat-sink is a combination of a standard heat-sink and a built-in electric fan.
Another problem with this particular upgrade is power consumption: the 5V Pentium OverDrive
processor draws up to 2.5 amps at 5V (including the fan), or 12.5 watts, which is more than double
the 1.2 amps (6 watts) drawn by the DX2 66 processor.
SOCKET 3
Intel created a new socket to support both the DX4 processor, which runs on 3.3V, and the
3.3V Pentium OverDrive processor. In addition to the new 3.3V chips, this new socket supports the
older 5V SX, DX, and DX2 chips, and even the 5V Pentium OverDrive chip. The design, called
Socket 3, is the most flexible upgradeable 486 design. The below figure shows the pin-out
specification of Socket 3.
Notice that Socket 3 has one additional pin and several others plugged in compared with Socket 2.
Socket 3 provides for better keying, which prevents an end user from accidentally installing the
processor in an improper orientation. However, one serious problem exists: This socket can't
automatically determine the type of voltage that is provided to it. You will likely find a jumper on
the motherboard near the socket for selecting 5V or 3.3V operation.
SOCKET 4
Socket 4 is a 273-pin socket designed for the original Pentium processors. The original Pentium
60MHz and 66MHz version processors had 273 pins and plugged into Socket 4. It is a 5V only
socket because all the original Pentium processors run on 5V. This socket accepts the original
Pentium 60MHz or 66MHz processor and the Overdrive processor. The below figure shows the
pin-out specification of Socket 4.
SOCKET 5
The Pentium OverDrive for Pentium processors has an active heatsink (fan) assembly that draws
power directly from the chip socket. The chip requires a maximum 4.33 amps of 3.3V to run the
chip (14.289 watts) and 0.2 amp of 5V power to run the fan (one watt), which results in a total
power consumption of 15.289 watts. This is less power than the original 66MHz Pentium
processor requires, yet it runs a chip that is as much as four times faster.
SOCKET 6
The last 486 socket was designed for the 486 DX4 and the 486 Pentium OverDrive processor.
Socket 6 was intended as a slightly redesigned version of Socket 3 and had an additional 2 pins
plugged for proper chip keying. Socket 6 has 235 pins and accepts only 3.3V 486 or Overdrive
processors.
SOCKET 7 (SUPER 7)
Socket 7 is essentially the same as Socket 5 with one additional key pin in the opposite inside
corner of the existing key pin. Socket 7, therefore, has 321 pins total in a 21x21 SPGA
arrangement. The real difference with Socket 7 is not with the socket itself, but with the
companion voltage regulator module (VRM) circuitry on the motherboard that must accompany it.
The VRM is either a small circuit board or a group of circuitry embedded in the motherboard that
supplies the proper voltage level and regulation of power to the processor.
SOCKET 8
Socket 8 is a special SPGA socket featuring a whopping 387 pins. This was specifically designed
for the Pentium Pro processor with the integrated L2 cache. The additional pins are to enable the
chipset to control the L2 cache integrated in the same package as the processor. The below figure
shows the Socket 8 pin-out.
SOCKET 370 (PGA-370)
In January 1999, Intel introduced a new socket for P6 class processors. The socket was called
Socket 370 or PGA-370 because it has 370 pins and originally was designed for lower-cost PGA
versions of the Celeron and Pentium III processors. The below figure shows the top view of Socket
370.
SOCKET 423
Socket 423 is a ZIF-type socket introduced in November 2000 for the original Pentium 4. The
figure shows pin 1 location.
Socket 423 supports a 400MHz processor bus, which connects the processor to the Memory
Controller Hub (MCH), which is the main part of the motherboard chipset.
Socket 423 uses a unique heatsink mounting method that requires standoffs attached either to the
chassis or to a special plate that mounts underneath the motherboard. This was designed to support
the weight of the larger heatsinks required for the Pentium 4. Because of this, many Socket 423
motherboards require a special chassis that has the necessary additional standoffs installed.
SOCKET A (462)
AMD introduced Socket A, also called Socket 462, in June 2000 to support the PGA versions of
the Athlon and Duron processors. It is designed as a replacement for Slot A used by the original
Athlon processor. Socket A has 462 pins and 11 plugs oriented in an SPGA form. Socket A has the
same physical dimensions and layout as Socket 370; however, the location and placement of the
plugs prevent Socket 370 processors from being inserted. Socket A supports 32 voltage levels from
1.100V to 1.850V in 0.025V increments, controlled by the VID0-VID4 pins on the processor. The
automatic voltage regulator module circuitry typically is embedded on the motherboard.
There are 11 total plugged holes, including 2 of the outside pin holes at A1 and AN1. These are
used to allow for keying to force the proper orientation of the processor in the socket. The pinout
of Socket A is shown in below diagram.
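The five VID pins give 2^5 = 32 binary codes. The decode below assumes a simple linear mapping in 0.025V steps up from the 1.100V floor; this is an illustration only, since the real AMD VID table reserves some codes for special meanings:

```python
# Five VID pins encode the requested core voltage: 2**5 = 32 possible codes.
NUM_VID_CODES = 2 ** 5  # 32

def vid_to_voltage(code, v_min=1.100, step=0.025):
    """Illustrative linear VID decode: code 0 -> 1.100V, each step adds 0.025V."""
    return round(v_min + code * step, 3)
```

Under this simplified mapping, code 0 requests 1.100V and code 30 requests 1.850V, matching the voltage range quoted above; the VRM circuitry on the motherboard reads these pins and supplies the requested level.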
After the introduction of Socket A, AMD moved all Athlon (including all Athlon XP) processors
to this form factor, phasing out Slot A. In addition, for a time AMD also sold a reduced L2 cache
version of the Athlon called the Duron in this form factor. The Athlon 64 uses a different processor
socket called Socket 754.
SOCKET 754
Socket 754 is used with the new AMD Athlon 64 processor, which is AMD's first 64-bit processor
for desktop computers.
CPU SLOTS
Intel designed two types of slots that could be used on motherboards.
Slot 1 is a 242-pin slot designed to accept Pentium II, Pentium III, and most Celeron processors.
Slot 2, on the other hand, is a more sophisticated 330-pin slot designed for the Pentium II Xeon
and Pentium III Xeon processors, which are primarily for workstations and servers. Besides the
extra pins, the biggest difference between Slot 1 and Slot 2 is the fact that Slot 2 was designed to
host up to four-way or more processing in a single board. Slot 1 allows only single or dual
processing functionality.
Note that Slot 2 is also called SC330, which stands for slot connector with 330 pins. Intel later
discovered less-expensive ways to integrate L2 cache into the processor core and no longer
produces Slot 1 or Slot 2 processors. Both Slot 1 and Slot 2 processors are now obsolete, and many
systems using these processors have been retired or upgraded with socket-based motherboards.
Slot 1, also called SC242 (slot connector 242 pins), is used by the SEC design that is used with the
cartridge-type Pentium II/III and Celeron processors. Below figure shows the Slot 1 dimensions
and pin layouts.
Slot 2, also called SC330 (slot connector, 330 pins), is used on high-end motherboards that
support the Pentium II and III Xeon processors. The below table shows Slot 2 dimensions and pin
layout.
RISC (Reduced Instruction Set Computer)
RISC is a type of microprocessor that recognizes a relatively limited number of instructions. Until
the mid-1980s, the tendency among computer manufacturers was to build increasingly complex
CPUs that had ever-larger sets of instructions. At that time, however, a number of computer
manufacturers decided to reverse this trend by building CPUs capable of executing only a very
limited set of instructions. One advantage of reduced instruction set computers is that they can
execute their instructions very fast because the instructions are so simple. Another, perhaps more
important advantage, is that RISC chips require fewer transistors, which makes them cheaper to
design and produce.
There is still considerable controversy among experts about the ultimate value of RISC
architectures. Its proponents argue that RISC machines are both cheaper and faster, and are
therefore the machines of the future. Many of today's RISC chips support as many instructions as
yesterday's CISC chips. And today's CISC chips use many techniques formerly associated with
RISC chips.
Besides performance improvement, some advantages of RISC and related design improvements
are:
A new microprocessor can be developed and tested more quickly if one of its aims is to be
less complicated.
Operating system and application programmers who use the microprocessor's instructions
will find it easier to develop code with a smaller instruction set.
The simplicity of RISC allows more freedom to choose how to use the space on a
microprocessor.
Higher-level language compilers produce more efficient code than formerly because they
have always tended to use the smaller set of instructions to be found in a RISC computer.
Most personal computers use a CISC architecture, in which the CPU supports as many as two
hundred instructions. RISC is the alternative architecture, used by many workstations and also some
personal computers. The PowerPC microprocessor, used in IBM's RISC System/6000 workstation and
Macintosh computers, is a RISC microprocessor. Intel's Pentium microprocessors are CISC
microprocessors. RISC takes each of the longer, more complex instructions from a CISC design and
reduces it to multiple instructions that are shorter and faster to process.
Minimal Instruction Set Computer (MISC) is a processor architecture with a very small number
of basic operations and corresponding opcodes. Such instruction sets are commonly stack-based
rather than register-based, to reduce the size of operand specifiers. A stack-machine
architecture is inherently simpler, since all instructions operate on the top-most stack entries. The
result is a smaller instruction set, a smaller and faster instruction decode unit, and overall
faster operation of individual instructions.
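The stack-based style described above can be sketched as a tiny interpreter. This is a toy model with illustrative instruction names: note that the instructions carry no operand specifiers, they simply act on the top of the stack.

```python
# Tiny stack-machine interpreter illustrating the MISC idea: instructions have
# no operand fields -- literals are pushed, and operations consume the topmost
# stack entries and push their result.
def run(program):
    stack = []
    for instr in program:
        if isinstance(instr, int):        # literal: push it onto the stack
            stack.append(instr)
        elif instr == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif instr == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif instr == "dup":              # duplicate the top of stack
            stack.append(stack[-1])
        else:
            raise ValueError(f"unknown instruction: {instr}")
    return stack
```

For example, the expression (2 + 3) * 4 becomes the operand-free program `[2, 3, "add", 4, "mul"]`, which leaves 20 on the stack.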
The stream concept can be used for describing a machine's structure. A stream means a sequence
of items (data or instructions).
SISD (Single-Instruction stream, Single-Data stream)
SISD corresponds to the traditional mono-processor. A single data stream is processed
by one instruction stream.
SIMD (Single-Instruction stream, Multiple-Data streams)
In SIMD, multiple processing units of the same type process multiple data streams. This
group includes array-processing machines.
MISD (Multiple-Instruction streams, Single-Data stream)
In an MISD computer, multiple processing units operate on one single data stream.
MIMD (Multiple-Instruction streams, Multiple-Data streams)
This machine type builds the group for the traditional multi-processors. In this several processing
units operate on multiple-data streams.
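The SISD/SIMD distinction above can be caricatured in Python. This is a conceptual sketch only; real SIMD hardware applies one instruction to many data items in a single step, which a list comprehension merely imitates:

```python
# Conceptual contrast between SISD and SIMD processing of a data stream.

def sisd_double(data):
    """SISD: one instruction stream handles one data item per step."""
    out = []
    for x in data:
        out.append(x * 2)  # a single data item processed at a time
    return out

def simd_double(data):
    """SIMD: one instruction ('double') applied across the whole data stream."""
    return [x * 2 for x in data]
```

Both produce the same result; the difference in real hardware is that the SIMD machine performs the whole elementwise operation in parallel across its processing units.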
UPGRADING CPU
Upgrading a computer's central processing unit is a fairly easy, although major, endeavor. While
upgrading a CPU, keep in mind concerns such as thermal characteristics, motherboard
compatibility and system performance. Upgrading a CPU should only take a few minutes, but it
does require careful attention to detail.
Remove the motherboard from the computer's case. This process requires removing the
motherboard mounting screws and add-in cards. Removal of the old heat sink and fan is required
as well. Once these tasks are completed, the motherboard CPU socket is ready for the new CPU to
be installed. It is good practice to examine the motherboard's CPU socket for any thermal
compound that might have accidentally been applied while installing the last processor. This
compound is sometimes conductive and should be removed.
Safety of Computers
Many of the components making up the motherboard are susceptible to ESD. As a necessary
precaution, always wear a grounded wrist strap, and if the motherboard is to be worked on outside
the computer chassis, place the board on a grounded rubber mat. These precautions should ensure
that no extraneous static voltages are inadvertently applied to board components.
Ensure that motherboard documentation and layout details are on hand while working on the
board. You should check to make sure that the board layout is the same as the one you are working
on before removing any cables or items from the motherboard. To remove a motherboard, first
remove the mains power cable. Then remove all external cables to board connectors. Remove all
other parts obstructing clear access to the board fixings; such as power supply, expansion cards,
bus extenders and chassis bars. Note down where each item was removed and any specific details;
such as which slot an expansion card was removed from. Before removing any set of cables,
ribbon connector or wire going to the motherboard, note their colour, distinguishing features and
their orientation with board connectors and pins. Be sure to note where the other end of the cable,
wire or ribbon is also attached for reference. This will ensure that all connectors can be returned to
their previous location correctly.
It may be necessary to remove a processor cooling fan. Either disconnect just the fan's power
supply, leaving the fan in place, or determine how the fan is attached to the processor's
heat-conducting fitting. Often there are several screws located in the cooling fins which you will
need to remove before the fan unit can be taken out. Once the complete board can be viewed, note
down any chip switch settings and the configuration of jumpers. Moving the board could dislodge
one or several jumpers or switches, so take your time. Physically, the board is quite strong when
mounted in the computer. Depending on the motherboard design, there will be a mix of small
plastic pillars supporting the board and a number of screws to give a firm fitting and to ensure a
good electrical connection to the chassis metalwork. By raising the motherboard from the chassis
in this way, none of the electrical contacts resulting from the soldered components can short circuit
through the chassis metalwork.
Computer Monitor
The computer monitor is an output device that is part of your computer's display system. A cable
connects the monitor to a video adapter (video card) that is installed in an expansion slot on your
computer’s motherboard. This system converts signals into text and pictures and displays them on
a TV-like screen (the monitor).
The computer sends a signal to the video adapter, telling it what character, image or graphic to
display. The video adapter converts that signal to a set of instructions that tell the display device
(monitor) how to draw the image on the screen.
Inside a CRT monitor, a couple of electromagnets (yokes) around the collar of the picture tube
bend the beam of electrons coming from the electron gun. The beam scans (is bent) across the
monitor from left to right and top to bottom to create, or draw, the image line by line. The number
of times in one second that the electron gun redraws the entire image is called the refresh rate and
is measured in Hertz (Hz). If the scanning beam hits each and every line of pixels, in succession,
on each pass, then the monitor is known as a non-interlaced monitor. A non-interlaced monitor is
preferred over an interlaced monitor. The electron beam on an interlaced monitor scans the
odd-numbered lines on one pass and the even lines on the second pass. This results in an almost
imperceptible flicker that can cause
eye-strain. This type of eye-strain can result in blurred vision, sore eyes, headaches and even
nausea.
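The refresh-rate and interlacing relationship described above reduces to a simple calculation (a simplified model; the function name is illustrative): an interlaced monitor needs two passes to draw every line of a frame, so at the same scan rate it completes half as many full frames per second.

```python
# Full image redraws per second: an interlaced monitor draws only half the
# lines per pass, so a complete frame takes two passes.
def full_frames_per_second(refresh_hz, interlaced):
    return refresh_hz / 2 if interlaced else refresh_hz
```

At a 60 Hz scan rate, a non-interlaced monitor redraws the whole image 60 times per second, while an interlaced one manages only 30 complete redraws, which is the source of the flicker described above.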
Video Technologies
Video technologies differ in many ways. However, the two major differences are resolution
and the number of colors that can be produced at those resolutions.
Resolution
Resolution is the number of pixels used to draw an image on the screen. If you could count
the pixels in one horizontal row across the top of the screen, and the number of pixels in one
vertical column down the side, that would properly describe the resolution that the monitor is
displaying. It is given as two numbers. If there were 800 pixels across and 600 pixels down the
side, then the resolution would be 800 x 600. Multiply 800 by 600 and you get the number of
pixels used to draw the image (480,000 pixels in this example). A monitor must be matched with
the video card in the system: the monitor has to be capable of displaying the resolutions and colors
that the adapter can produce, and it works the other way around too. If your monitor is capable of
displaying a resolution of 1,024 x 768 but your adapter can only produce 640 x 480, then that is
all you are going to get.
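The pixel arithmetic above, plus the related question of how much video memory a given mode needs, can be sketched as follows (the function names are illustrative):

```python
# Pixel count for a given resolution, as in the 800 x 600 example above.
def pixel_count(width, height):
    return width * height

# Approximate framebuffer memory needed at a given color depth, in bytes:
# one pixel needs bits_per_pixel bits, and 8 bits make a byte.
def video_memory_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8
```

So 800 x 600 gives 480,000 pixels, and a 640 x 480 mode at 8 bits per pixel (256 colors) needs 307,200 bytes of video memory, which is why adapter RAM limits the resolutions and color depths a card can produce.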
Monochrome
Monochrome monitors are very basic displays that produce only one color. The basic text mode in
DOS is 80 characters across and 25 down. When graphics were first introduced, they were fairly
rough by today’s standards, and you had to manually type in a command to change from text mode
to graphics mode. A company called Hercules Graphics developed a video adapter that could do
this for you. Not only could it change from text to graphics, but it could do it on the fly whenever
the application required it. Today’s adapters still basically use the same methods.
The Color Graphics Adapter (CGA) introduced color to the personal computer. In APA mode it
can produce a resolution of 320 X 200 and has a palette of 16 colors but can only display 4 at a
time. With the introduction of the IBM Enhanced Graphics Adapter (EGA), the proper monitor
was capable of a resolution of 640 X 350 pixels and could display 16 colors from a palette of 64.
Before VGA, colors were produced digitally: each electron beam could be either on or off. There
are three electron guns, one for each color: red, green, and blue (RGB). This combination could
produce 8 colors. By cutting the intensity of the beam in half, 8 more colors could be obtained, for
a total of 16. IBM came up with the idea of developing an analog display system that could produce
64 different levels of intensity. Their new Video Graphics Array adapter was capable of a
resolution of 640 X 480 pixels and could display up to 256 colors from a palette of over 260,000.
This technology soon became the standard for almost every video card and monitor being
developed.
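The color counts in this passage follow from simple powers, which can be checked quickly in Python (the variable names are illustrative):

```python
# Digital (on/off) color: three guns (R, G, B), each on or off.
digital_colors = 2 ** 3            # 8 colors

# Adding a half-intensity option doubles the count.
with_intensity = digital_colors * 2  # 16 colors

# VGA's analog approach: 64 intensity levels per gun across three guns.
vga_palette = 64 ** 3              # 262,144 -- the "over 260,000" palette
```

This is exactly why the text quotes a palette of "over 260,000" colors for VGA: 64 levels on each of the three guns yields 64^3 = 262,144 combinations, of which 256 can be displayed at once.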
Once again, manufacturers began to develop video adapters that added features and enhancements
to the VGA standard. Super-VGA is based on VGA standards and describes display systems with
several different resolutions and a varied number of colors. When SVGA first came out it could be
defined as having capabilities of 800 X 600 with 256 colors or 1024 X 768 with 16 colors.
However, these cards and monitors are now capable of resolutions up to 1280 X 1024 with a
palette of more than 16 million colors.
Extended Graphics Array was developed by IBM. It improved upon the VGA standard (also
developed by IBM) but was a proprietary adapter for use in Micro Channel Architecture expansion
slots. It had its own coprocessor and bus-mastering ability, which means that it had the ability to
execute instructions independent of the CPU. It was also a 32-bit adapter capable of increased data
transfer speeds. XGA allowed for better performance, could provide higher resolution and more
colors than the VGA and SVGA cards at the time. However, it was only available for IBM
machines. Many of these features were later incorporated by other video card manufacturers.
The first mainstream video card to support color graphics on the PC was IBM's Color Graphics
Adapter (CGA) standard. The CGA supports several different modes; the highest quality text mode
is 80x25 characters in 16 colors. Graphics modes range from monochrome at 640x200 to 16 colors
at 160x200. The card refreshes at 60 Hz.
Note that the maximum resolution of CGA, 640x200, is actually significantly lower than MDA's.
These dots are accessible individually in graphics mode, but in text mode each character is formed
from an 8x8 matrix, instead of the MDA's 9x14, resulting in much poorer text quality.
CGA is obsolete, having been replaced by EGA.
IBM's next standard after CGA was the Enhanced Graphics Adapter or EGA. This standard
offered improved resolutions and more colors than CGA, although the capabilities of EGA are still
quite poor compared to modern devices. EGA allowed graphical output up to 16 colors (chosen
from a palette of 64) at screen resolutions of 640x350, or 80x25 text with 16 colors, all at a refresh
rate of 60 Hz. You will occasionally run into older systems that still use EGA; EGA-level graphics
are the minimum requirement for Windows 3.x and so some very old systems still using Windows
3.0 may be EGA. There is of course no reason to stick with EGA when it is obsolete and VGA
cards are so cheap and provide much more performance and software compatibility.
Video Graphics Adapter (VGA)
The replacement for EGA was IBM's last widely-accepted standard: the Video Graphics Array or
VGA. VGA, supersets of VGA, and extensions of VGA form today the basis of virtually every
video card used in PCs. Introduced in the IBM PS/2 model line, VGA was eventually cloned and
copied by many other manufacturers. When IBM fell from dominance in the market, VGA
continued on and was eventually extended and adapted in many different ways.
Most video cards today support resolutions and color modes far beyond what VGA really is, but
they also support the original VGA modes, for compatibility. Most call themselves "VGA
compatible" for this reason. Many people don't realize just how limited true VGA really is; VGA is
actually pretty much obsolete itself by today's standards, and 99% of people using any variant of
Windows are running at resolutions that exceed the VGA standards. True VGA supports 16 colors at
640x480 resolution, or 256 colors at 320x200 resolution (and not 256 colors at 640x480, even
though many people think it does). VGA colors are chosen from a palette of 262,144 colors (not
16.7 million) because VGA uses 6 bits to specify each color channel, instead of the 8 that is the
standard today.
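The 262,144 figure follows directly from the 6 bits per channel: with three channels (red, green, blue), the palette size is 2 raised to (3 × bits per channel). A quick check:

```python
def palette_size(bits_per_channel):
    # Red, green, and blue each get bits_per_channel bits,
    # so the palette holds 2**(3 * bits_per_channel) distinct colors.
    return 2 ** (3 * bits_per_channel)

print(palette_size(6))  # 262144 -- true VGA (6 bits per channel)
print(palette_size(8))  # 16777216 -- the "16.7 million" of later standards
```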
VGA (and VGA compatibility) is significant in one other way as well: they use output signals that
are totally different from those used by older standards. Older standards sent digital signals to the
monitor, while VGA (and later) send analog signals. This change was necessary to allow for more
color precision. Older monitors that work with EGA and earlier cards use so-called "TTL"
(transistor-transistor logic) signaling and will not work with VGA. Some monitors that were
produced in the late 80s actually have a toggle switch to allow the selection of either digital or
analog inputs.
Keyboard
A computer keyboard is a peripheral modeled after the typewriter keyboard. Keyboards are
designed for the input of text and characters and also to control the operation of a computer.
Physically, computer keyboards are an arrangement of rectangular or near-rectangular buttons, or
"keys". Keyboards typically have characters engraved or printed on the keys; in most cases, each
press of a key corresponds to a single written symbol. However, to produce some symbols requires
pressing and holding several keys simultaneously or in sequence; other keys do not produce any
symbol, but instead affect the operation of the computer or the keyboard itself. Roughly 50% of all
keyboard keys produce letters, numbers or signs (characters). Other keys can produce actions when
pressed, and other actions are available by the simultaneous pressing of more than one action key.
The keys on the keyboard are grouped according to their functions as follows:
Alphanumeric keys: The group of keys that comprises the alphabets, punctuation marks,
and digits. These keys are used to enter text, digit, and punctuation marks.
Function keys: The group of keys found at the top of keyboard labeled from F1 to F12.
These keys execute different commands based on the applications that are running.
Numeric keypad: Found at the rightmost part of the keyboard is the numeric keypad. These
keys work with a special key called NumLock, which is located at the top-left corner of
the numeric keypad. When NumLock is on, the numeric keypad is used to enter digits and
arithmetic operators. However, when NumLock is off, the numeric keypad acts as cursor
movement keys.
Cursor movement keys: The Cursor, also called the insertion point, is the symbol on the
display screen that shows where data may be entered next. The cursor movement keys, or
arrow keys, are used to move the cursor around the text on the screen. These keys move
the cursor left, right, up or down. The keys labeled Page Up and Page Down move the
cursor, the equivalent of one page, up or down on the screen. Similarly, the keys labeled
Home and End move the cursor to the beginning and end of the same line respectively.
Editing keys: Editing keys are the keys that change what has been entered. Editing keys
include: Spacebar, Enter (Return), Delete, Backspace, etc.
Special keys: Special keys are keys that are used to execute some commands. They also
work in combination with other keys to execute commands. These keys include: Shift, Alt,
Ctrl etc.
Mouse
A mouse is a device that is rolled about on a desktop to direct a pointer on the computer’s display
screen. The pointer is a symbol, usually an arrow that is used to select items from lists (menus) on
the screen or to position the cursor. The cursor, also called an insertion point, is the symbol on the
screen that shows where data may be entered next, such as text in a document.
1. Mechanical: Has a rubber or metal ball on its underside that can roll in all directions.
Mechanical sensors within the mouse detect the direction the ball is rolling and move the screen
pointer accordingly.
2. Optomechanical: Same as a mechanical mouse, but uses optical sensors to detect motion of the
ball.
3. Optical: Uses a light source and optical sensor to detect the mouse's movement. Early optical
mice had to be moved along a special mat with a grid so that the optical mechanism had a frame
of reference; modern optical mice work on most surfaces. Optical mice have no mechanical
moving parts. They respond more quickly and precisely than mechanical and optomechanical
mice, but they are also more expensive.
Serial mice connect directly to an RS-232C serial port. This is the simplest type of
connection.
PS/2 mice connect to a PS/2 port.
USB mice connect to a USB port.
Cordless mice aren't physically connected at all. Instead they rely on infrared or radio waves to
communicate with the computer. Cordless mice are more expensive than corded mice, but they
do eliminate the cord.
Computer case
The computer case, which many people wrongly call the CPU, is actually the housing for the
hardware parts of a computer. Generally, computer cases come in two designs: the tower case and
the desktop case. For better expandability, tower cases are more ideal and best fit the end-user's
requirements, because they are easier to upgrade, easier to add new devices to, and easier to
disassemble. The disadvantage is that tower cases waste computer table space, whether placed on
the table or underneath it. One factor that must be considered is that a tower case, regardless of
whether it is a full, medium, or mini case, must have enough hard disk drive bays (inside the
casing) for installing two or more hard disk drives.
Motherboard
It is the main circuit board of a microcomputer. The motherboard contains the connectors for
attaching additional boards. Typically, the motherboard contains the CPU, BIOS, memory, mass
storage interfaces, serial and parallel ports, expansion slots, and all the controllers required to
control standard peripheral devices, such as the display screen, keyboard, and disk drive.
Collectively, all these chips that reside on the motherboard are known as the motherboard's
chipset.
On most PCs, it is possible to add memory chips directly to the motherboard. You may also be
able to upgrade to a faster PC by replacing the CPU chip. To add additional core features, we may
need to replace the motherboard entirely.
Sound Card
An expansion board that enables a computer to manipulate and output sounds. Sound cards are
necessary for nearly all CD-ROMs and have become commonplace on modern personal
computers. Sound cards enable the computer to output sound through speakers connected to the
board, to record sound input from a microphone connected to the computer, and to manipulate
sound stored on a disk. Nearly all sound cards support MIDI, a standard for representing music
electronically. In addition, most sound cards are Sound Blaster-compatible, which means that they
can process commands written for a Sound Blaster card, the de facto standard for PC sound.
Sound cards use two basic methods to translate digital data into analog sounds: FM synthesis,
which imitates instruments with mathematically generated tones, and wavetable synthesis, which
plays back prerecorded samples of real instruments.
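Before the card's digital-to-analog converter turns it into sound, audio exists simply as a stream of sampled amplitudes. A minimal sketch of a tone in digital form (the 44.1 kHz rate and 16-bit range are illustrative assumptions; nothing is actually played):

```python
import math

def sine_samples(freq_hz, duration_s, sample_rate=44100):
    """Digital representation of a tone: the waveform sampled
    sample_rate times per second, as 16-bit signed integers."""
    n = int(sample_rate * duration_s)
    return [int(32767 * math.sin(2 * math.pi * freq_hz * i / sample_rate))
            for i in range(n)]

samples = sine_samples(440, 0.01)  # 10 ms of a 440 Hz tone
print(len(samples))  # 441 samples at CD-quality 44.1 kHz
```

The sound card's job is the reverse direction: converting such a list of numbers into a continuously varying analog voltage for the speakers.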
Printers
A device that prints text or illustrations on paper. There are many different types of printers. In
terms of the technology utilized, printers fall into the following categories:
Daisy-Wheel: Similar to a ball-head typewriter, this type of printer has a plastic or metal
wheel on which the shape of each character stands out in relief. A hammer presses the
wheel against a ribbon, which in turn makes an ink stain in the shape of the character on the
paper. Daisy-wheel printers produce letter-quality print but cannot print graphics.
Dot-Matrix: Creates characters by striking pins against an ink ribbon. Each pin makes a
dot, and combinations of dots form characters and illustrations.
Ink-Jet: Sprays ink at a sheet of paper. Ink-jet printers produce high-quality text and
graphics.
Laser: Uses the same technology as copy machines. Laser printers produce very high
quality text and graphics.
LCD & LED: Similar to a laser printer, but uses liquid crystals or light-emitting diodes
rather than a laser to produce an image on the drum.
Line Printer: Contains a chain of characters or pins that print an entire line at one time.
Line printers are very fast, but produce low-quality print.
Thermal Printer: An inexpensive printer that works by pushing heated pins against heat-
sensitive paper. Thermal printers are widely used in calculators and fax machines.
Quality of type: The output produced by printers is said to be either letter quality (as good
as a typewriter), near letter quality, or draft quality. Only daisy-wheel, ink-jet, and laser
printers produce letter-quality type. Some dot-matrix printers claim letter-quality print, but
if you look closely, you can see the difference.
Speed: Measured in characters per second (cps) or pages per minute (ppm), the speed of
printers varies widely. Daisy-wheel printers tend to be the slowest, printing about 30 cps.
Line printers are fastest (up to 3,000 lines per minute). Dot-matrix printers can print up to
500 cps, and laser printers range from about 4 to 20 text pages per minute.
Impact or non-impact: Impact printers include all printers that work by striking an ink
ribbon. Daisy-wheel, dot-matrix, and line printers are impact printers. Nonimpact printers
include laser printers and ink-jet printers. The important difference between impact and
non-impact printers is that impact printers are much noisier.
Graphics: Some printers (daisy-wheel and line printers) can print only text. Other printers
can print both text and graphics.
Fonts: Some printers, notably dot-matrix printers, are limited to one or a few fonts. In
contrast, laser and ink-jet printers are capable of printing an almost unlimited variety of
fonts. Daisy-wheel printers can also print different fonts, but you need to change the daisy
wheel, making it difficult to mix fonts in the same document.
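The speed figures above can be compared by converting characters per second into time per page. A small sketch (the 2,000 characters-per-page figure is our assumption for a typical text page):

```python
def seconds_per_page(cps, chars_per_page=2000):
    """Time for a character-at-a-time printer to finish one page,
    assuming roughly 2,000 printable characters per page."""
    return chars_per_page / cps

print(round(seconds_per_page(30)))   # daisy-wheel at 30 cps: ~67 s per page
print(round(seconds_per_page(500)))  # fast dot-matrix at 500 cps: 4 s per page
```

This makes clear why daisy-wheel printers, at roughly a page a minute, were considered slow even against dot-matrix printers.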
CHAPTER 3 (SYSTEM MEMORY AND POWER SUPPLY)
Hard Disk Drives
Hard disk drives are almost as amazing as microprocessors in terms of the technology they use and
how much progress they have made in terms of capacity, speed, and price in the last 20 years. The
first PC hard disks had a capacity of 10 megabytes and a cost of over $100 per MB. Modern hard
disks have capacities approaching 100 gigabytes and cost less than 1 cent per MB. This
represents an improvement of 1,000,000% in just under 20 years, or around 67% compound
improvement per year. At the same time, the speeds of the hard disk and its interfaces have
increased dramatically as well.
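The quoted improvement can be checked with simple arithmetic (the 18-year span below is our assumption, chosen to match the "just under 20 years" in the text):

```python
# Price per megabyte then and now, as quoted above.
old_price = 100.0   # dollars per MB for the first 10 MB PC disks
new_price = 0.01    # under one cent per MB today

factor = old_price / new_price
print(f"{factor:,.0f}x cheaper, i.e. {factor * 100:,.0f}% improvement")

# A rough compound annual rate over ~18 years:
years = 18
annual = factor ** (1 / years) - 1
print(f"about {annual:.0%} per year")
```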
The hard disk plays a significant role in the following important aspects of your computer system:
Performance: The hard disk plays a very important role in overall system performance,
probably more than most people recognize (though that is changing now as hard drives get
more of the attention they deserve). The speed at which the PC boots up and programs load
is directly related to hard disk speed. The hard disk's performance is also critical when
multitasking is being used or when processing large amounts of data such as graphics work,
editing sound and video, or working with databases.
Storage Capacity: This is kind of obvious, but a bigger hard disk lets you store more
programs and data.
Software Support: Newer software needs more space and faster hard disks to load it
efficiently. It's easy to remember when 1 GB was a lot of disk space; it's even easy to
remember when 100 MB was a lot of disk space. Now a PC with even 1 GB is considered
by many to be "crippled", since it can barely hold modern (inflated) operating system files
and a complement of standard business software.
Reliability: One way to assess the importance of an item of hardware is to consider how
much grief is caused if it fails. By this standard, the hard disk is the most important
component by a long shot. The hardware can be replaced, but data cannot. A good quality
hard disk, combined with smart maintenance and backup habits, can help ensure that the
nightmare of data loss doesn't become part of your life.
All information stored on a hard disk is recorded in tracks, which are concentric circles placed on
the surface of each platter, much like the annual rings of a tree. The tracks are numbered, starting
from zero, starting at the outside of the platter and increasing as you go in. A modern hard disk has
tens of thousands of tracks on each platter.
Data is accessed by moving the heads from the inner to the outer part of the disk, driven by the
head actuator. This organization of data allows for easy access to any part of the disk, which is
why disks are called random access storage devices. Each track can hold many thousands of bytes
of data. It would be wasteful to make a track the smallest unit of storage on the disk, since this
would mean small files wasted a large amount of space. Therefore, each track is broken into
smaller units called sectors. Each sector holds 512 bytes of user data, plus as many as a few dozen
additional bytes used for internal drive control and for error detection and correction.
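Putting tracks, surfaces, and sectors together gives the classic cylinder/head/sector (CHS) capacity formula. A small sketch (the geometry numbers below are hypothetical, for illustration only):

```python
def disk_capacity_bytes(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    """Classic CHS geometry: every track on every surface holds the
    same number of 512-byte sectors, so capacity is a simple product."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

# Hypothetical geometry for illustration:
cap = disk_capacity_bytes(1024, 16, 63)
print(cap, "bytes =", cap / 2**20, "MB")  # 528482304 bytes = 504.0 MB
```

A drive with 1,024 cylinders, 16 heads, and 63 sectors per track works out to 504 MB, which is why that figure appears as a famous BIOS addressing limit on older PCs.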
Primary Memory
The main memory stores the program instructions and the data in binary machine code. The
Control Unit deals with the instructions and the arithmetic and logic unit handles calculations and
comparisons with the data. Data and instructions are moved by buses.
There are two types of memory in the Immediate Access Store of the computer: RAM and ROM.
RAM is Random Access Memory, which loses its contents when the computer is switched off (it
is volatile). This memory can be written to: instructions and data can be loaded into it. The kind of
memory used for holding programs and data being executed is called RAM. RAM differs from
ROM in that it can be both read and written; it is considered volatile storage because, unlike ROM,
the contents of RAM are lost when the power is turned off. RAM is also sometimes called
Read-Write Memory or RWM.
Common RAM sizes include 8, 32, 64, 128, 256, and 512 MB.
Synchronous DRAM (SDRAM)
- Much faster than the other types (up to 4X faster than conventional DRAM)
- Has become popular nowadays
- Expensive.
ROM (Read Only Memory)
ROM or Read Only Memory is non-volatile and is used to store programs permanently (the
start-up or "boot" instructions, for example); the computer cannot store anything in this type of
memory. When the programs and data files (known as the software) are not in RAM, they are
stored on backing store such as tapes or discs. The tape or disc drives and any input and output
devices connected to the CPU are known collectively as peripherals. ROM is typically a read-only
memory that is most commonly used to store system-level programs that we want to have
available to the PC at all times, the most common being the BIOS program, which is stored in a
ROM. The two main reasons that read-only memory is used are performance and security.
ROM can be of different types:
PROM (Programmable ROM): a blank ROM that can be written once by the user with a special
programmer device; after that, it behaves like ordinary ROM.
EPROM (Erasable Programmable ROM): can be erased by exposing the chip to ultraviolet light
and then reprogrammed.
EEPROM (Electrically Erasable Programmable ROM): can be erased and rewritten electrically
without removing the chip from the system. Because an EEPROM cell survives only a limited
number of erase cycles, as the memory is
in use, the life of the EEPROM can be an important design consideration. A special form of
EEPROM is flash memory, which uses normal PC voltages for erasure and reprogramming.
Secondary Memory
Secondary memory (or secondary storage) is the slowest and cheapest form of memory. It
cannot be processed directly by the CPU. It must first be copied into primary storage (also
known as RAM).
Secondary memory devices include magnetic disks like hard drives and floppy disks; optical
disks such as CDs and CD ROMs ; and magnetic tapes, which were the first forms of
secondary memory.
Floppy Disk
It is a soft magnetic disk. It is called floppy because it flops if we wave it (at least, the 5¼-inch
variety does). Unlike most hard disks, floppy disks (often called floppies or diskettes) are
portable, because we can remove them from a disk drive. Disk drives for floppy disks are
called floppy drives. Floppy disks are slower to access than hard disks and have less storage
capacity, but they are much less expensive. And most importantly, they are portable.
8-inch: The first floppy disk design, invented by IBM in the late 1960s and used in the
early 1970s as first a read-only format and then as a read-write format. The typical
desktop/laptop computer does not use the 8-inch floppy disk.
5¼-inch: The common size for PCs made before 1987 and the successor to the 8-inch
floppy disk. This type of floppy is generally capable of storing between 100K and 1.2MB
(megabytes) of data. The most common sizes are 360K and 1.2MB.
3½-inch: Floppy is something of a misnomer for these disks, as they are encased in a rigid
envelope. Despite their small size, microfloppies have a larger storage capacity than their
cousins -- from 400K to 1.4MB of data. The most common sizes for PCs are 720K (double
density) and 1.44MB (high-density). Macintoshes support disks of 400K, 800K, and
1.2MB.
CD-ROM
A type of optical disk capable of storing large amounts of data up to 1GB, although the most
common size is 650MB (megabytes). A single CD-ROM has the storage capacity of 700
floppy disks, enough memory to store about 300,000 text pages.
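A quick sanity check relates these figures (assuming 1.44 MB high-density floppies): the 700-floppy figure corresponds to the 1 GB upper bound rather than the common 650 MB size.

```python
cd_mb = 650        # common CD-ROM capacity quoted above
floppy_mb = 1.44   # high-density 3.5-inch floppy

print(round(cd_mb / floppy_mb))  # ~451 floppies for a 650 MB disc
print(round(1000 / floppy_mb))   # ~694 floppies for the 1 GB upper figure
```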
CD-ROMs are stamped by the vendor, and once stamped, they cannot be erased and filled with
new data. To read a CD, we need a CD-ROM player. All CD-ROMs conform to a standard size
and format, so we can load any type of CD-ROM into any CD-ROM player. In addition, CD-
ROM players are capable of playing audio CDs, which share the same technology. CD-ROMs
are particularly well-suited to information that requires large storage capacity. This includes
large software applications that support color, graphics, sound, and especially video.
Cache Memory
It is a special high-speed storage mechanism. It can be either a reserved section of main memory or
an independent high-speed storage device. Two types of caching are commonly used in personal
computers: memory caching and disk caching. A memory cache, sometimes called a cache store or
RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the
slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective
because most programs access the same data or instructions over and over. By keeping as much of
this information as possible in SRAM, the computer avoids accessing the slower DRAM.
Some memory caches are built into the architecture of microprocessors. The Intel 80486
microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache.
Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with
external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the
DRAM. Like L1 caches, L2 caches are composed of SRAM but they are much larger. Disk
caching works under the same principle as memory caching, but instead of using high-speed
SRAM, a disk cache uses conventional main memory. The most recently accessed data from the
disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access
data from the disk, it first checks the disk cache to see if the data is there. Disk caching can
dramatically improve the performance of applications, because accessing a byte of data in RAM
can be thousands of times faster than accessing a byte on a hard disk.
When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged
by its hit rate. Many cache systems use a technique known as smart caching, in which the system
can recognize certain types of frequently used data.
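The idea of a hit rate can be illustrated with a toy simulation. This is a minimal sketch, not how hardware caches are actually built; the function and data names are ours:

```python
def run_with_cache(requests):
    """Simulate a simple cache: the first access to an address is a
    miss (fetched from slow storage); repeat accesses are hits."""
    cache, hits, misses = {}, 0, 0
    for addr in requests:
        if addr in cache:
            hits += 1
        else:
            misses += 1
            cache[addr] = f"data@{addr}"  # fetched from "slow" memory
    return hits / (hits + misses)

# Programs access the same data over and over, so the hit rate is high:
rate = run_with_cache([1, 2, 1, 1, 3, 2, 1, 2])
print(f"hit rate: {rate:.0%}")  # 5 hits out of 8 accesses = 62.5%
```

The higher the hit rate, the more often the system avoids the slow storage behind the cache, which is exactly why caching improves performance.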
Upgrading System Memory
System memory provides the working memory of the CPU. Insufficient RAM can cause a system
to slow down and run much more poorly than it otherwise could. Conversely, upgrading the
quantity and quality of RAM can turn a sluggish system into a faster, more robust machine.
Furthermore, applications and operating systems require varying quantities of RAM in which to
load, with the basic rule being that newer programs always need more RAM than older versions
of the same program; Windows 2000 runs poorly on less than 128 MB of RAM, for example,
whereas Windows 95 runs fine with that amount.
Upgrading RAM modules is one of the most common system upgrades you can perform, but
requires you to consider at least the following issues.
Open the computer case and locate the SIMM or DIMM slots on the system board.
Determine how many RAM modules you need to fill a bank. Remember, you should fill an
entire bank or memory errors may occur.
Remove the RAM modules from their antistatic packaging.
SIMM modules must be inserted into the SIMM slots at an angle and snapped upright into
position so they are perpendicular to the motherboard.
DIMM modules must be inserted straight into the slot and pressed down firmly.
PC POWER SUPPLY
The power supply converts electricity received from a wall outlet (120V AC or 230V AC) into the
DC voltages needed by the various components of the system. There are two different types of
power supplies that correspond to two different types of motherboards and, hence, case designs.
AT - This is an older design in which the connector to the system board uses two 6-pin (P8/P9)
connections. It is important that the two connectors are plugged into the system board correctly and
not switched. P8 should be plugged into P1 on the system board and P9 should be connected to P2.
ATX - A newer specification that uses a single 20-pin connection to the system board. These
connectors are keyed to make sure that the connector is plugged in properly. Both models provide
four levels of DC voltage; ATX power supplies add an additional voltage of +3.3V. The wires
coming out of the power supply are color coded, with the black one as the ground wire.
Yellow: +12 V
Blue: -12 V
Red: +5 V
White: -5 V
Circuitry runs on +/- 5 volts; motors run on +/- 12 volts.
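These color codes can be captured in a small lookup table. This is an illustrative sketch; the orange wire shown is the +3.3 V line that the ATX specification adds:

```python
# Power-supply wire colors and their DC voltages (black is ground).
WIRE_VOLTAGE = {
    "yellow": +12.0,
    "blue":   -12.0,
    "red":    +5.0,
    "white":  -5.0,
    "black":   0.0,   # ground
    "orange": +3.3,   # the +3.3 V line added by ATX
}

def identify(color):
    """Describe the DC level carried by a wire of the given color."""
    v = WIRE_VOLTAGE[color.lower()]
    return "ground" if v == 0 else f"{v:+g} V"

print(identify("yellow"))  # +12 V
print(identify("black"))   # ground
```

A table like this is handy when checking supply outputs with a multimeter: probe each colored wire against black and compare the reading to the expected value.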
Laptops and portables utilize an external power supply and a rechargeable battery system. Batteries
were typically nickel-cadmium, but newer technologies have introduced nickel metal-hydride and
lithium-ion batteries that provide extended life and shorter recharge times. A small lithium battery
is also used to power a computer's CMOS memory and real-time clock.
CHAPTER 4 (BUS, CARDS AND PORTS)
Bus Designs
A computer bus is a method of transmitting data from one part of the computer to another part of
the computer. Generally the computer bus will connect all devices to the computer CPU and main
memory. The computer bus consists of two parts: the address bus and the data bus. The data bus
transfers actual data, whereas the address bus transfers information about where the data should go.
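The width of the address bus determines how much memory the CPU can reach: n address lines can name 2^n distinct locations. For example:

```python
def addressable_bytes(address_lines):
    # Each extra address line doubles the number of distinct
    # locations the CPU can name on the address bus.
    return 2 ** address_lines

print(addressable_bytes(20))  # 1048576 bytes   -> 1 MB (8086-class CPUs)
print(addressable_bytes(32))  # 4294967296 bytes -> 4 GB (32-bit buses)
```

Similarly, the width of the data bus (8, 16, or 32 bits in the bus designs below) sets how much data moves per transfer.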
Introduced by IBM, ISA or Industry Standard Architecture was originally an 8-bit bus and later
expanded to a 16-bit bus in 1984. When this bus was originally released it was a proprietary bus,
which allowed only IBM to create peripherals and the actual interface. By the early 1980s,
however, the bus was also being produced by other clone manufacturers.
In 1993, Intel and Microsoft introduced a PnP ISA bus that allowed the computer to automatically
detect and set up ISA peripherals such as a modem or sound card. Using PnP
technology, an end-user can connect a device without having to
configure it using jumpers or DIP switches.
To determine whether an ISA card is an 8-bit or 16-bit card, physically look at the card. An 8-bit
card uses only the first portion of the slot, closest to the back of the card; if both sections of the
edge connector are being utilized, the card is a 16-bit card. Today many
manufacturers are trying to eliminate the ISA slot; however, for backwards
compatibility we may find 1 or 2 ISA slots alongside additional PCI slots, AGP slots, etc., or we
may not have any ISA slots at all. We highly recommend when purchasing any new internal
expansion card that you stay away from ISA, as it has for the most part disappeared.
Short for Micro Channel Architecture, MCA was introduced by IBM in 1987 as a competitor to
the ISA bus. The MCA bus offered several features beyond
ISA, such as a 32-bit bus, automatic card configuration (similar to what Plug and Play is
today), and bus mastering for greater efficiency.
One of the major downfalls of the MCA bus was that it was proprietary; because of
competing bus designs, the MCA bus never became widely used and has since been phased out
of desktop computers.
Short for Extended Industry Standard Architecture, EISA was announced in September 1988.
EISA is a computer bus designed by nine competitors to compete with IBM's MCA bus. These
competitors were AST Research, Compaq, Epson, Hewlett Packard, NEC, Olivetti, Tandy, WYSE,
and Zenith Data Systems.
The EISA bus provided 32-bit slots at an 8.33 MHz cycle rate for use with 386DX or higher
processors. In addition, an EISA slot can accommodate a 16-bit ISA card in the first row.
Unfortunately, although the EISA bus is backwards compatible and is not proprietary, it
never became widely used and is no longer found in computers today.
The VESA (Video Electronics Standards Association), a nonprofit organization founded by NEC, released the VLB, or VESA Local Bus, 1.0 in 1992. The VLB is a 32-bit bus that had direct access to system memory at the speed of the processor, commonly a 486 CPU (33/40 MHz). VLB 2.0, released in 1994, widened the bus to 64 bits and raised the bus speed to 50 MHz.
Sha
Department of Information Technology, Bule Hora University, Ethiopia. Page 43
Unfortunately, because the VLB relied heavily on the 486 processor, manufacturers began switching to PCI when the Pentium processor arrived on the market.
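The bus widths and clock speeds quoted above can be turned into rough peak-bandwidth figures. The sketch below assumes one transfer per clock cycle, which is a simplification; real buses lose cycles to arbitration and wait states, so actual throughput is lower.

```python
# Theoretical peak bandwidth of a bus: bytes per transfer times
# transfers per second, assuming one transfer per clock cycle.
def peak_bandwidth_mb_s(bus_width_bits, clock_mhz):
    bytes_per_transfer = bus_width_bits // 8
    return bytes_per_transfer * clock_mhz  # MB/s, taking 1 MB = 10**6 bytes

# VLB 1.0: 32-bit bus at a 33 MHz 486 clock
print(peak_bandwidth_mb_s(32, 33))   # 132 MB/s
# VLB 2.0: 64-bit bus at 50 MHz
print(peak_bandwidth_mb_s(64, 50))   # 400 MB/s
```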
PCI, short for Peripheral Component Interconnect, was introduced by Intel in 1992, revised to version 2.0 in 1993, and revised again to PCI 2.1 in 1995. It is a 32-bit computer bus that is also available today as a 64-bit bus, and it is the most common bus used for expansion cards in today's computers.
Introduced by Intel in 1997, AGP, or Accelerated Graphics Port, is a 32-bit bus designed for the high demands of 3-D graphics. AGP has a direct line to the computer's memory, which allows 3-D elements to be stored in system memory instead of video memory. For AGP to work, a computer must have an AGP slot, which comes with most Pentium II and Pentium III machines. The computer also needs to be running Windows 95 OSR2.1, Windows 98, Windows 98 SE, Windows 2000, or higher.
USB, or Universal Serial Bus, is an external bus that supports transfer rates of 12 Mbps, can support up to 127 devices, and supports hot plugging.
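The 12 Mbps figure is a signalling rate in megabits per second, so a quick calculation shows what it means for file transfers. The sketch below ignores protocol overhead, which in practice reduces usable throughput.

```python
# Time to move a file over USB 1.1 at its 12 Mbps signalling rate
# (megabits per second; protocol overhead is ignored here).
def usb_transfer_seconds(file_size_bytes, rate_mbps=12):
    bits_to_send = file_size_bytes * 8
    return bits_to_send / (rate_mbps * 1_000_000)

# A 3 MB file: 24,000,000 bits / 12,000,000 bits per second
print(usb_transfer_seconds(3_000_000))  # 2.0 seconds
```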
Video Card
A board that plugs into a personal computer to give it display capabilities. The display capabilities
of a computer, however, depend on both the logical circuitry (provided in the video adapter) and
the display monitor. A monochrome monitor, for example, cannot display colors no matter how
powerful the video adapter. Many different types of video adapters are available for PCs. Most
conform to one of the video standards defined by IBM or VESA. Each adapter offers several
different video modes. The two basic categories of video modes are text and graphics. In text
mode, a monitor can display only ASCII characters. In graphics mode, a monitor can display any
bit-mapped image. Within the text and graphics modes, some monitors also offer a choice of
resolutions. At lower resolutions a monitor can display more colors. Modern video adapters
contain memory, so that the computer's RAM is not used for storing displays. In addition, most
adapters have their own graphics coprocessor for performing graphics calculations. These adapters
are often called graphics accelerators.
Like most parts of the PC, the video card had very humble beginnings; it was only responsible for
taking what the processor produced as output and displaying it on the screen. Early on, this was
simply text, and not even color at that. Video cards today are much more like coprocessors; they
have their own intelligence and do a lot of processing that would otherwise have to be done by the
system processor. This is a necessity due to the enormous increase both in how much data we send
to our monitors today, and the sophisticated calculations that must be done to determine what we
see on the screen. This is particularly so with the rise of graphical operating systems, and 3D
computing.
The video card in your system plays a significant role in the following important aspects of your
computer system:
Performance: The video card is one of the components that have an impact on system
performance. For some people (and some applications) the impact is not that significant;
for others, the video card's quality and efficiency can impact on performance more than any
other component in the PC! For example, many games that depend on a high frame rate
(how many times per second the screen is updated with new information) for smooth
animation, are impacted far more by the choice of video card than even by the choice of
system CPU.
Software Support: Certain programs require support from the video card. The software
that normally depends on the video card the most includes games and graphics programs.
Some programs (for example 3D-enhanced games) will not run at all on a video card that
doesn't support them.
Reliability and Stability: While not a major contributor to system reliability, choosing
the wrong video card can cause problematic system behavior. In particular, some cards or
types of cards are notorious for having unstable drivers, which can cause a lot of
difficulties.
Comfort and Ergonomics: The video card, along with the monitor, determines the
quality of the image you see when you use your PC. This has an important impact on how
comfortable the PC is to use. Poor quality video cards don't allow for sufficiently high
refresh rates, causing eyestrain and fatigue.
Sound Card
An expansion board that enables a computer to manipulate and output sounds. Sound cards are
necessary for nearly all CD-ROMs and have become commonplace on modern personal
computers. Sound cards enable the computer to output sound through speakers connected to the
board, to record sound input from a microphone connected to the computer, and manipulate sound
stored on a disk. Nearly all sound cards support MIDI, a standard for representing music
electronically. In addition, most sound cards are Sound Blaster-compatible, which means that they
can process commands written for a Sound Blaster card, the de facto standard for PC sound.
Sound cards use two basic methods to translate digital data into analog sounds: FM (frequency modulation) synthesis and wavetable synthesis.
Ports (Connectors)
External ports appear at the rear of the PC, through slots cut into the case, which are either part of an expansion card or part of the motherboard, and are male or female in gender.
1. PS/2 Ports (DIN): used to connect the keyboard or the mouse to the PC. Both the PS/2 keyboard and mouse ports are circular 6-pin female ports.
2. Serial Ports: Serial ports transfer data 1 bit at a time and are used to connect mice, external
modems, and other serial devices to the computer. Serial ports can be either 9 pin or 25 pin
male ports. All computers have at least one 9 pin serial port, and many still have a 25 pin serial
port.
3. Parallel Ports: parallel ports are the 25-pin female DB ports on the back of your PC. Parallel communication transfers data 8 bits, or 1 byte, at a time. Parallel ports are used by printers, Zip drives, and scanners. The official maximum length of a standard parallel cable is 5 m.
4. PS/2 Ports: Keyboards and mice are usually connected to a computer system through a round six-pin mini-DIN connector (commonly called a PS/2 connector). They can also be connected through a USB port. Keyboards can also be connected through a round five-pin DIN, and a mouse through a 9-pin or 25-pin serial connector.
5. Video Port: Monitors connect to your PC using a DB video connector. Older CGA (Color/Graphics Adapter) and EGA (Enhanced Graphics Adapter) monitors used 9-pin female DB connectors. Most of the monitors you see today are VGA (Video Graphics Array), SVGA (Super VGA), or XGA (Extended Graphics Array), and connect to the computer using male DB connectors with 15 pins in three rows.
6. Audio Ports: All sound cards have integrated mini-audio ports. Devices such as microphones and speakers connect to the audio ports using mini-audio connectors.
7. Joystick / MIDI Ports: Many sound cards have a female DB-15 port that supports a joystick or a MIDI (Musical Instrument Digital Interface) box for attaching musical instruments to the PC. These devices connect to the port using a male DB connector with 15 pins in two rows.
8. Modem Ports: Modems connect to the telephone line using RJ-11 connectors. All modems have at least one RJ-11 port, and many modems have two RJ-11 ports, one for the telephone line and the other for a telephone, so you can use the line for voice when the modem is not in use.
9. Network Card Ports (RJ-45 Port): Network Interface Cards (NICs) enable you to plug network cables into the PC. Most network cables have either an RJ-45 or a BNC connector that connects to a corresponding port on the NIC.
10. USB: a newer port technology that can daisy-chain up to 127 USB devices. USB ports transfer data at speeds up to 12 megabits per second, making them much faster than traditional parallel or serial communications. USB is supported by all Windows operating systems after Windows 95, but it was not supported by the original release of Windows 95 or by Windows NT.
11. SCSI: a system-level interface that provides a complete additional expansion bus into which you can add peripherals such as hard disk drives, CD-ROMs, tape backup drives, and scanners. SCSI devices have a variety of interfaces, but the 50-pin SCSI-2 port is the most common. You might also see 68-pin or 25-pin ports on some devices or PCs.
SCSI (Small Computer System Interface)
Small Computer Systems Interface (SCSI) is a device interface standard used by PCs, Apple Macintosh computers, and many UNIX systems to attach peripheral devices such as hard disk drives, CD-ROMs, tape backup drives, and scanners. It was introduced by a company called Shugart Systems in 1979 as a system-independent means of mass storage.
SCSI manifests itself through a SCSI chain, which is a series of SCSI devices working together through a SCSI card/bus called a host adapter. The host adapter is the device/card that attaches the SCSI chain to the PC and can be a PCI or ISA design.
A SCSI device can have any SCSI ID, as long as no two devices connected to a single host adapter
share the same ID. Some conventions exist for SCSI IDs; typically, most people set the host adapter to 7. You can set a SCSI ID for a particular device by configuring jumpers or switches on that device. All internal SCSI hard drives, for example, use jumpers to set their SCSI IDs.
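The rule that no two devices on one host adapter may share an ID can be sketched as a simple check. The device names below are illustrative, not from any real configuration.

```python
# Report SCSI IDs that are claimed by more than one device on a chain.
def find_id_conflicts(devices):
    """devices: dict mapping device name -> SCSI ID (0-7)."""
    claimed = {}
    for name, scsi_id in devices.items():
        claimed.setdefault(scsi_id, []).append(name)
    return {i: names for i, names in claimed.items() if len(names) > 1}

chain = {"host adapter": 7, "hard drive": 0, "cd-rom": 3, "scanner": 3}
print(find_id_conflicts(chain))  # {3: ['cd-rom', 'scanner']}
```

Here the CD-ROM and the scanner both claim ID 3, so one of them must be re-jumpered before the chain will work reliably.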
Whenever you send a signal down a wire, some of that signal reflects back up the wire, creating an
echo and causing electronic chaos. SCSI chains use termination to prevent this problem.
Termination simply means putting something on the ends of the wire to prevent this echo.
Terminators are usually pull-down resistors and can manifest themselves in many different ways.
Most of the SCSI devices inside a PC have the appropriate termination built in, but you can
purchase a special plug that you snap on to the end of the SCSI chain.
The rule with SCSI is that you must terminate only the ends of the SCSI chain. A SCSI chain refers to a number of devices, including the host adapter, linked by a cable. You must terminate the ends of the cable, which usually means terminating the two devices into which the ends of the cable plug.
Types of SCSI
SCSI comes in three basic standards of speed.
SCSI – 1
The original standard, which defined an 8-bit bus with transfer rates of up to 5 MB/s.
SCSI – 2
An expansion and enhancement of the original standard that defines support for many of the more advanced SCSI features in use today. The SCSI – 2 standard adopted a set of 18 commands that must be supported by SCSI – 2 compliant devices. This set of commands, called the common command set (CCS), made the job of hooking up devices from various manufacturers easier and allowed devices besides hard drives to be addressed, including CD-ROM drives, tape drives, and scanners.
SCSI – 2 appears in many different data bus widths and data transfer speeds. It defined two optional 16-bit and 32-bit buses called wide SCSI and a new, optional 10 MHz speed called fast SCSI.
SCSI – 3
The newest standard, which extends SCSI – 2 with still faster transfer modes such as Ultra and Ultra2 SCSI.
Figure: the internal and external connectors on a Wide Ultra2 SCSI host adapter. The upper photo shows a 68-pin (wide) high-density connector facing you and a 50-pin (narrow) regular-density connector facing up; the lower photo shows a 68-pin (wide) high-density connector.
SCSI Cabling
Internal SCSI devices use ribbon cables, while external devices use insulated cable.
The most common kind of SCSI cable is type A, which has 50 wires and is used for 8-bit data transfers under SCSI – 1.
SCSI – 2 16-bit data transfers required a different type of cable, known eventually as type B, or later type P, with 68 wires.
Some of the higher-end SCSI – 3 host adapters and drives use an 80-pin cable called SCA 80.
Figure: an assortment of different internal ribbon cables used for connecting SCSI hardware. Some are strictly flat cables, but the one on the far left and the one third from the right are partially flat and partially twisted-pair cable.
If you have any external SCSI devices, ensure that they are connected to the mains supply and turned on, and that internal devices are connected to the power supply unit.
Replace the lid on your PC, and reboot it.
If prompted by Windows, install the drivers for your new SCSI card and devices.
Check that you are able to access the new SCSI device.
Chapter 5 (Monitors and Printers)
Computer Monitor
The computer monitor is an output device that is part of your computer's display system. A cable
connects the monitor to a video adapter (video card) that is installed in an expansion slot on your
computer’s motherboard. This system converts signals into text and pictures and displays them on
a TV-like screen (the monitor).
The computer sends a signal to the video adapter, telling it what character, image or graphic to
display. The video adapter converts that signal to a set of instructions that tell the display device
(monitor) how to draw the image on the screen.
The CRT, or Cathode Ray Tube, is the "picture tube" of your monitor. Although it is a large
vacuum tube, it's shaped more like a bottle. The tube tapers near the back where there's a
negatively charged cathode, or "electron gun". The electron gun shoots electrons at the back of the
positively charged screen, which is coated with a phosphor compound. This excites the phosphors, causing them to glow as individual dots called pixels (picture elements). The image you see on the monitor's screen is made up of thousands of tiny dots (pixels). The distance between the
pixels has a lot to do with the quality of the image. If the distance between pixels on a monitor
screen is too great, the picture will appear "fuzzy", or grainy. The closer together the pixels are, the
sharper the image on screen. The distance between pixels on a computer monitor screen is called
its dot pitch and is measured in millimeters. You should try to get a monitor with a dot pitch of .28
mm or less.
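Dot pitch limits how many distinct dots fit across the screen, and hence how sharp a given resolution can look. The 280 mm visible width below is an assumed figure for a typical 14-inch monitor, not a measured one.

```python
# How many distinct horizontal dots a screen can resolve: visible
# width divided by dot pitch (both in millimeters).
def max_horizontal_dots(visible_width_mm, dot_pitch_mm):
    return round(visible_width_mm / dot_pitch_mm)

# A .28 mm dot pitch across an assumed 280 mm visible width
print(max_horizontal_dots(280, 0.28))  # 1000
```

About 1000 dots across is comfortable for an 800 X 600 image but marginal for 1024 X 768, which is why a finer dot pitch gives a sharper picture at high resolutions.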
There are a couple of electromagnets (yokes) around the collar of the tube that actually bend the
beam of electrons. The beam scans (is bent) across the monitor from left to right and top to bottom
to create, or draw the image, line by line. The number of times in one second that the electron gun
redraws the entire image is called the refresh rate and is measured in Hertz (Hz). If the scanning
beam hits each and every line of pixels, in succession, on each pass, then the monitor is known as a
non-interlaced monitor. A non-interlaced monitor is preferred over an interlaced monitor. The
electron beam on an interlaced monitor scans the odd numbered lines on one pass, and then scans
the even lines on the second pass. This results in an almost unperceivable flicker that can cause
eye-strain. This type of eye-strain can result in blurred vision, sore eyes, headaches and even
nausea.
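The refresh and interlacing figures above can be related with a little arithmetic: the horizontal scan frequency is the number of lines the gun must draw per second. The sketch below ignores blanking and overscan lines, so real monitors need slightly higher figures.

```python
# Lines drawn per second (in kHz) for a given frame height and
# refresh rate; an interlaced monitor covers half the lines per pass.
def horizontal_scan_khz(lines_per_frame, refresh_hz, interlaced=False):
    lines_per_pass = lines_per_frame / (2 if interlaced else 1)
    return lines_per_pass * refresh_hz / 1000.0

print(horizontal_scan_khz(600, 60))                   # 36.0 kHz
print(horizontal_scan_khz(600, 60, interlaced=True))  # 18.0 kHz
```

The halved figure is why interlaced monitors were cheaper to build, at the cost of the flicker described above.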
Note some important points:-
A monitor is the primary output device of the computer and operates very much like your regular television set. The principle is based upon the use of an electronic
screen called a Cathode Ray Tube (CRT), which is the major (& most expensive) part of
the monitor.
The CRT is coated with a phosphor material that glows when it is struck by a stream of electrons. This material is arranged into an array of millions of tiny cells, usually called dots.
At the back of the monitor is a set of electron guns which produce a controlled stream of
electrons, much as the name implies.
Figure: structure of a CRT, showing the cathode, control grid, focusing system, accelerating system, deflection system, connection pins, and the phosphor coating on the inner surface of the screen.
Control Grid – reduces the number of electrons leaving the cathode (brightness control)
Accelerating System – imparts enough energy to the electrons to produce light when they strike the phosphor
Phosphor – emits a visible light for a certain period after the bombardment has ceased.
A beam of electrons (Cathode rays) emitted by the electron gun (Cathode) passes through focusing
and deflection systems that direct the beam towards specified direction on the phosphorous coated
screen. The phosphors then emit a small spot of light at each position contacted by the electron
beam. As light emitted by the phosphor fades very rapidly, a method called refresh is used to keep
the phosphor glowing and to redraw the picture repeatedly by quickly directing the electron beam back over the same points. The maximum number of points that can be displayed without overlap on a CRT is referred to as the resolution.
A CRT monitor displays color pictures using a combination of phosphors that emit
different colored lights. By combining the emitted light from the different phosphors, a
range of color can be generated.
The size of a monitor, measured diagonally, is typically 14, 15, 17, or 25 inches.
Video Technologies
Video technologies differ in many ways; however, the two major differences are resolution and the number of colors that can be produced at those resolutions.
Resolution
Resolution is the number of pixels that are used to draw an image on the screen. If you could count
the pixels in one horizontal row across the top of the screen, and the number of pixels in one
vertical column down the side, that would properly describe the resolution that the monitor is
displaying. It’s given as two numbers. If there were 800 pixels across and 600 pixels down the
side, then the resolution would be 800 X 600. Multiply 800 times 600 and you’ll get the number of
pixels used to draw the image (480,000 pixels in this example). A monitor must be matched with the
video card in the system. The monitor has to be capable of displaying the resolutions and colors
that the adapter can produce. It works the other way around too. If your monitor is capable of
displaying a resolution of 1,024 X 768 but your adapter can only produce 640 X 480, then that’s
all you’re going to get.
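The multiplication above generalizes easily; it also tells you how much video memory a single frame needs at a given color depth. This is a sketch of the arithmetic only, not of any particular card's memory layout.

```python
# Pixel count for a resolution, and the bytes one frame occupies at
# a given color depth (bits per pixel).
def frame_pixels(width, height):
    return width * height

def frame_bytes(width, height, bits_per_pixel):
    return frame_pixels(width, height) * bits_per_pixel // 8

print(frame_pixels(800, 600))       # 480000 pixels
print(frame_bytes(1024, 768, 8))    # 786432 bytes for 256 colors
```

This is why higher resolutions and deeper color modes require a card with more on-board memory.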
Monochrome
Monochrome monitors are very basic displays that produce only one color. The basic text mode in
DOS is 80 characters across and 25 down. When graphics were first introduced, they were fairly
rough by today’s standards, and you had to manually type in a command to change from text mode
to graphics mode. A company called Hercules Graphics developed a video adapter that could do
this for you. Not only could it change from text to graphics, but it could do it on the fly whenever
the application required it. Today’s adapters still basically use the same methods.
In pre-VGA digital displays, colors were produced digitally: each electron beam could be either on or off. There are three electron guns, one for each color, red, green, and blue (RGB). This combination could produce 8 colors. By cutting the intensity of the beam in half, we could get 8 more colors for a total of 16. IBM then came up with the idea of developing an analog display system that could produce
64 different levels of intensity. Their new Video Graphics Array adapter was capable of a
resolution of 640 X 480 pixels and could display up to 256 colors from a palette of over 260,000.
This technology soon became the standard for almost every video card and monitor being
developed.
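The color counts in this passage follow directly from the gun arithmetic, as this small check shows.

```python
# Digital (TTL) color: three guns, each on or off.
print(2 ** 3)      # 8 colors
# Adding a half-intensity option doubles the count.
print(2 ** 3 * 2)  # 16 colors
# Analog VGA: 64 intensity levels on each of the three guns.
print(64 ** 3)     # 262144 palette entries, i.e. "over 260,000"
```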
Once again, manufacturers began to develop video adapters that added features and enhancements
to the VGA standard. Super-VGA is based on VGA standards and describes display systems with
several different resolutions and a varied number of colors. When SVGA first came out it could be
defined as having capabilities of 800 X 600 with 256 colors or 1024 X 768 with 16 colors.
However, these cards and monitors are now capable of resolutions up to 1280 X 1024 with a
palette of more than 16 million colors.
Extended Graphics Array was developed by IBM. It improved upon the VGA standard (also
developed by IBM) but was a proprietary adapter for use in Micro Channel Architecture expansion
slots. It had its own coprocessor and bus-mastering ability, which means that it had the ability to
execute instructions independent of the CPU. It was also a 32-bit adapter capable of increased data
transfer speeds. XGA allowed for better performance, could provide higher resolution and more
colors than the VGA and SVGA cards at the time. However, it was only available for IBM
machines. Many of these features were later incorporated by other video card manufacturers.
The first mainstream video card to support color graphics on the PC was IBM's Color Graphics
Adapter (CGA) standard. The CGA supports several different modes; the highest quality text mode
is 80x25 characters in 16 colors. Graphics modes range from monochrome at 640x200 to 16 colors
at 160x200. The card refreshes at 60 Hz.
Note that the maximum resolution of CGA is actually significantly lower than MDA: 640x200.
These dots are accessible individually in graphics mode, but in text mode each character is formed from an 8x8 matrix, instead of the MDA's 9x14, resulting in much poorer text quality.
CGA is obsolete, having been replaced by EGA.
IBM's next standard after CGA was the Enhanced Graphics Adapter or EGA. This standard
offered improved resolutions and more colors than CGA, although the capabilities of EGA are still
quite poor compared to modern devices. EGA allowed graphical output up to 16 colors (chosen
from a palette of 64) at screen resolutions of 640x350, or 80x25 text with 16 colors, all at a refresh
rate of 60 Hz. You will occasionally run into older systems that still use EGA; EGA-level graphics
are the minimum requirement for Windows 3.x and so some very old systems still using Windows
3.0 may be EGA. There is of course no reason to stick with EGA when it is obsolete and VGA
cards are so cheap and provide much more performance and software compatibility.
Troubleshooting Monitors
Symptom: No picture.
Check that the monitor power switch and the computer power switch are in the on position.
Check that the signal cable is correctly connected to the video card.
Check that the pins of the D-sub connector are not bent.
Check whether the computer is in power-saving mode.
Symptom: Image is not stable.
Check that the signal cable is suited to the video card.
Symptom: Image is not centered.
Adjust the H & V center controls to get a properly positioned image.
Printers
Printers are add-on peripheral output devices that transform text and graphics from your PC into hard-copy output on paper or transparency.
A printer can be black and white or color.
Printers are traditionally connected to the computer system through the 25-pin female parallel (LPT) port.
Depending on the print quality and speed, printers can be basically of the following kinds:
a) Dot Matrix: The oldest kind of printer common in the PC world is the dot matrix, which
was used on PCs primarily in the 1980s. These units use an array of pins and a ribbon to
print text and, in some cases, graphics. Noisy, slow, and low in quality, these have been almost entirely pushed out of the market by inkjet and laser printers. They are still sold for special needs, especially for printing multi-part continuous forms for business purposes (lasers and inkjets can't print through multi-sheet carbon forms).
b) Laser: These printers use technology similar to that of a photocopier, using light to create high-quality printout at high speed. They are expensive, however, and are generally limited to black-and-white output unless you want to spend a lot of money on a color laser printer.
c) Ink Jet: The most popular type of printer sold today. Ink jet printers use a special print head that ejects microscopic dots of colored ink onto paper to create an output image. They are relatively inexpensive to buy, and almost all will print in color.
d) Daisy-Wheel: Similar to a ball-head typewriter, this type of printer has a plastic or metal
wheel on which the shape of each character stands out in relief. A hammer presses the
wheel against a ribbon, which in turn makes an ink stain in the shape of the character on the
paper. Daisy-wheel printers produce letter-quality print but cannot print graphics.
e) LCD & LED: Similar to a laser printer, but uses liquid crystals or light-emitting diodes
rather than a laser to produce an image on the drum.
f) Line Printer: Contains a chain of characters or pins that print an entire line at one time.
Line printers are very fast, but produce low-quality print.
g) Thermal Printer: An inexpensive printer that works by pushing heated pins against heat-sensitive paper. Thermal printers are widely used in calculators and fax machines.
Quality of type: The output produced by printers is said to be either letter quality (as good
as a typewriter), near letter quality, or draft quality. Only daisy-wheel, ink-jet, and laser
printers produce letter-quality type. Some dot-matrix printers claim letter-quality print, but
if you look closely, you can see the difference.
Speed: Measured in characters per second (cps) or pages per minute (ppm), the speed of
printers varies widely. Daisy-wheel printers tend to be the slowest, printing about 30 cps.
Line printers are fastest (up to 3,000 lines per minute). Dot-matrix printers can print up to
500 cps, and laser printers range from about 4 to 20 text pages per minute.
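The cps and ppm figures above convert directly into print-time estimates. The 2,000-character page below is an assumed average for a dense text page, not a standard value.

```python
# Print-time estimates from characters-per-second and pages-per-minute.
def daisy_wheel_seconds(chars, cps=30):
    return chars / cps

def laser_minutes(pages, ppm=10):
    return pages / ppm

print(round(daisy_wheel_seconds(2000), 1))  # 66.7 seconds for one page
print(laser_minutes(20))                    # 2.0 minutes for 20 pages
```

The contrast (over a minute per page for a daisy wheel against a few pages per minute for a laser) shows why speed is a primary buying criterion.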
Impact or non-impact: Impact printers include all printers that work by striking an ink
ribbon. Daisy-wheel, dot-matrix, and line printers are impact printers. Nonimpact printers
include laser printers and ink-jet printers. The important difference between impact and
non-impact printers is that impact printers are much noisier.
Graphics: Some printers (daisy-wheel and line printers) can print only text. Other printers
can print both text and graphics.
Fonts: Some printers, notably dot-matrix printers, are limited to one or a few fonts. In
contrast, laser and ink-jet printers are capable of printing an almost unlimited variety of
fonts. Daisy-wheel printers can also print different fonts, but you need to change the daisy
wheel, making it difficult to mix fonts in the same document.
Installing Printers
Troubleshooting Printers
Printer problems are sometimes the most difficult peripheral problems to diagnose. The common printer problems and their likely troubleshooting methods are described below.
1) Feed and output Problems
When paper feed problems occur, either the paper becomes jammed in the feed mechanism or no paper is output at all, halting the printing process. If there is too much paper in the paper tray, the feed mechanism may pick up several sheets of paper and try to send them through the printer simultaneously, usually resulting in a paper jam.
2) Out of paper Error
You will encounter this error if there is no paper in the printer. Load paper into the printer.
3) Input/output errors
I/O errors occur when the computer is unable to communicate with the printer. To troubleshoot this problem, check the following possibilities:
Is the printer plugged in?
Is the printer turned on?
Are all cables firmly connected?
Is the proper driver for your printer installed?
Are the IRQ & I/O settings for the printer correct?
If all of these check out, try restarting your computer; simply restarting can sometimes fix an I/O error. If the problem persists, connect the printer to another computer.
If the printer does not work on the other computer, you know you have a printer problem; if it works fine on the other computer, you have a configuration problem.
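The swap test in the last step amounts to a one-line decision, sketched here.

```python
# If the printer also fails on a second computer, the printer itself
# is at fault; if it works there, suspect the first PC's configuration.
def diagnose(works_on_other_computer):
    if works_on_other_computer:
        return "configuration problem"
    return "printer problem"

print(diagnose(False))  # printer problem
print(diagnose(True))   # configuration problem
```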
4) Toner Low Error
When the toner cartridge in a laser printer runs low, the printer issues a warning before the cartridge runs out completely. Replace the cartridge as soon as you see the warning to avoid half-finished or delayed print jobs.
BIOS
The BIOS (Basic Input/Output System) is also known as the System BIOS or ROM BIOS. The BIOS software is built into the PC and is the first code run by a PC when powered on ('boot firmware'). When the PC starts up, the first job for the BIOS is the power-on self-test, which
initializes and identifies system devices such as the CPU, RAM, video display card, keyboard and
mouse, hard disk drive, optical disc drive and other hardware. The BIOS then locates boot loader
software held on a peripheral device (designated as a 'boot device'), such as a hard disk or a
CD/DVD, and loads and executes that software, giving it control of the PC. This process is known
as booting, or booting up, which is short for bootstrapping.
The BIOS has a user interface (UI), typically a menu system accessed by pressing a certain key on the keyboard when the PC starts. In the BIOS UI, a user can:
Configure hardware
Set the system clock
Enable or disable system components
Select which devices are eligible to be a potential boot device
Set various password prompts, such as a password for securing access to the BIOS UI itself and for preventing malicious users from booting the system from unauthorized peripheral devices.
The BIOS provides a small library of basic input/output functions used to operate and control the
peripherals (such as the keyboard, text display functions and so forth), and these software library
functions are callable by external software.
The role of the BIOS has changed over time. As of 2011, the BIOS is being replaced by the more complex Extensible Firmware Interface (EFI) in many new machines, but BIOS remains in widespread use. EFI booting is supported only by Windows versions that support GPT, by the Linux kernel 2.6.1 and later, and by Mac OS X on Intel-based Macs. However, the distinction between BIOS and EFI is rarely made in terminology by the average computer user, making BIOS a catch-all term for both systems.
In modern PCs the BIOS is stored in rewritable memory, allowing the contents to be replaced or
'rewritten'. This rewriting of the contents is sometimes termed flashing. This can be done by a
special program, usually provided by the system's manufacturer, or at POST, with a BIOS image in
a hard drive or USB flash drive. A file containing such contents is sometimes termed 'a BIOS
image'. BIOS might be reflashed in order to upgrade to a newer version to fix bugs or provide
improved performance or to support newer hardware, or a reflashing operation might be needed to
fix a damaged BIOS. BIOS may also be "flashed" by putting the file on the root of a USB drive
and booting.
If the expansion ROM wishes to change the way the system boots (such as from a network device
or a SCSI adapter for which the BIOS has no driver code), it can use the BIOS Boot Specification
(BBS) to register its ability to do so. Once the expansion ROM has registered using the BBS, the
user can select among the available boot options from within the BIOS's user interface. This is
why most BBS compliant PC BIOS implementations will not allow the user to enter the BIOS's
user interface until the expansion ROMs have finished executing and registering themselves with
the BBS.
System Resources
A system resource is a collective name for IRQs, I/O addresses and DMA channels used by
devices in a PC. When you install a new device, such as a network card, sound card, or internal
modem into a PC, you have to give it a set of system resources that enable it to communicate with
the processor and system memory, without conflicting with other devices. You may have to
configure one or more of the following resources.
1) Interrupts (IRQ)
2) I/O Address (Base I/O port)
3) Direct Memory Access (DMA)
4) Memory Addresses (RAM)
Common IRQ assignments:
IRQ 1 Keyboard
IRQ 4 COM1, COM3 (default)
IRQ 7 LPT1
I/O Addresses
The CPU talks to the devices inside your computer using distinct bit patterns, known as I/O
addresses. The CPU uses I/O addresses to talk to everything in the PC.
An I/O port is generally referred to by its hexadecimal address, in the range 0000 – FFFF, and
references to I/O ports are usually made using the start address only.
Three basic rules apply to I/O addresses: all devices have I/O addresses, all devices use more
than one address, and two devices cannot have the same I/O address in a single system.
Every device in your computer either has a preset I/O address or you must assign one. Basic
devices in the computer have preset I/O addresses. The primary hard drive controller on a
motherboard, for example, always gets the preset I/O addresses of 01F0 – 01FF. A sound card, in
contrast, has to have I/O addresses configured when you install it into a system.
All devices respond to more than one pattern of 1's and 0's. The CPU uses the different I/O
addresses to give various commands to each device, and each device must also have one or
more I/O addresses with which to respond to the CPU. For example, the hard drive's I/O address
range is 01F0 – 01FF. If the CPU sends the 01F0 pattern, it asks the hard drive controller whether
an error exists anywhere. The command 01F1 is a totally separate command. No device has only
one I/O address.
Common Original IBM I/O Address list.
I/O Address Range Usage
03F8 – 03FF COM1
03F0 – 03F7 Floppy Controller
03E8 – 03EF COM3
0378 – 037F LPT1
02F8 - 02FF COM2
02E8 – 02EF COM4
0278 – 027F LPT2
0210 – 0217 Joystick
01F0 – 01FF Primary hard drive controller
0170 – 0177 Secondary Hard drive controller
0070 – 0071 CMOS clock
0060 – 0063 Keyboard
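The table above amounts to a range lookup: given a port, find the owning device. A small sketch in Python (the `device_at` helper is illustrative, not part of any BIOS API; the ranges are the ones listed above):

```python
# Map of I/O address ranges (start, end inclusive) to device names,
# taken from the original IBM assignments listed in the table above.
IO_MAP = {
    (0x03F8, 0x03FF): "COM1",
    (0x03F0, 0x03F7): "Floppy Controller",
    (0x03E8, 0x03EF): "COM3",
    (0x0378, 0x037F): "LPT1",
    (0x02F8, 0x02FF): "COM2",
    (0x02E8, 0x02EF): "COM4",
    (0x0278, 0x027F): "LPT2",
    (0x0210, 0x0217): "Joystick",
    (0x01F0, 0x01FF): "Primary hard drive controller",
    (0x0170, 0x0177): "Secondary hard drive controller",
    (0x0070, 0x0071): "CMOS clock",
    (0x0060, 0x0063): "Keyboard",
}

def device_at(port: int) -> str:
    """Return the device that owns the given I/O port, if any."""
    for (start, end), name in IO_MAP.items():
        if start <= port <= end:
            return name
    return "unassigned"
```

Note that references such as "COM1 is at 03F8" use the start address only, exactly as described above.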
Direct Memory Access (DMA)
DMA is a method by which a hardware device can transfer data to and from system
memory directly, rather than using the CPU. (It is the process of accessing memory without
using the CPU). DMA enabled devices are designed to carry out this function more
efficiently than the CPU. DMA is typically used by hard disk and floppy controllers, tape
streamers, NIC and sound cards.
Originally, IBM XTs had one DMA controller with four DMA channels capable of 8-bit
transfers. From the IBM AT onwards, a second DMA controller was added, providing four more
channels capable of 16-bit transfers.
DMA Channel Function Transfer Size in Bits
0 8-bit
1 Open for use 8-bit
2 Floppy disk controller 8-bit
3 Open for use 8-bit
4 Internal 16-bit
5 Open for use 16-bit
6 Open for use 16-bit
7 Open for use 16-bit
Plug and Play
Plug and Play (PnP) consists of a series of standards designed to enable devices to self-configure.
PnP makes device installation trivial: you simply install a device and it automatically configures its
I/O address, IRQ, and DMA with no user intervention.
For PnP to work properly, the PC needs two items:
i. A PnP BIOS. If you have a Pentium or later computer, you have a PnP BIOS.
ii. A PnP operating system, such as Windows 9x or Windows 2000. Older operating systems,
such as DOS and Windows 3.x, could use PnP devices with the help of special drivers and utility programs.
Operating Systems
An operating system is the single most important piece of software on a computer: it takes care of
pretty much everything on a computer system. While the majority of computers we see happen to
use one 'type' of operating system performing the same functions, operating systems can be
branched into several different types.
In a batch processing operating system interaction between the user and processor is limited or
there is no interaction at all during the execution of work. Data and programs that need to be
processed are bundled and collected as a ‘batch’ and executed together.
Batch processing operating systems are ideal in situations where:
There are large amounts of data to be processed.
Similar data needs to be processed.
Similar processing is involved when executing the data.
The system is capable of identifying times when the processor is idle, at which time 'batches'
may be processed. Processing is all performed automatically without any user intervention.
A real-time operating system processes inputs simultaneously, fast enough to affect the next input
or process. Real-time systems are usually used to control complex systems that require a lot of
processing like machinery and industrial systems.
A single-user OS, as the name suggests, is designed for one user to effectively use a computer at a
time.
In a multi-tasking OS, several applications may be simultaneously loaded and used in memory.
While the processor handles only one application at a particular time, it is capable of switching
between the applications effectively enough to apparently execute each application simultaneously.
This type of operating system is seen everywhere today and is the most common type of OS; the
Windows operating system would be an example.
A multi-user OS allows multiple users to use the system simultaneously. While here as well the
processor splits its resources and handles one user at a time, the speed and efficiency with which it
does this make it appear that users are using the system simultaneously. Some network systems
utilize this kind of operating system.
In a distributed system, software and data may be distributed around the system: programs and
files may be stored on different storage devices located in different geographical locations and
may be accessed from different computer terminals.
While we are mostly accustomed to seeing multi-tasking and multi-user operating systems, the
other operating systems are usually used in companies and firms to power special systems.
Historically, operating systems have been tightly related to computer architecture, so it is a good
idea to study the history of operating systems alongside the architecture of the computers on which
they run. Operating systems have evolved through a number of distinct phases or generations,
which correspond roughly to the decades.
First Generation
The earliest electronic digital computers had no operating systems. Machines of the time were so
primitive that programs were often entered one bit at time on rows of mechanical switches (plug
boards). Programming languages were unknown (not even assembly languages).
Second Generation
By the early 1950's, the routine had improved somewhat with the introduction of punch cards. The
General Motors Research Laboratories implemented the first operating systems in early 1950's for
their IBM 701. The system of the 50's generally ran one job at a time. These were called single-
stream batch processing systems because programs and data were submitted in groups or batches.
Third Generation
The systems of the 1960's were also batch processing systems, but they were able to take better
advantage of the computer's resources by running several jobs at once. So operating systems
designers developed the concept of multiprogramming in which several jobs are in main memory
at once; a processor is switched from job to job as needed to keep several jobs advancing while
keeping the peripheral devices in use.
Fourth Generation
With the development of LSI (Large Scale Integration) circuits (chips), operating systems entered
the personal computer and workstation age. Microprocessor technology evolved to the point that it
became possible to build desktop computers as powerful as the mainframes of the 1970s. Two
operating systems have dominated the personal computer scene: MS-DOS, written by Microsoft,
Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX,
which is dominant on the large personal computers using the Motorola 68000 CPU family.
The OS allows computer programs to function correctly by allowing them to communicate
with other programs and the computer. Windows is an OS which brings all the programs together
and allows the user to access them. The operating system manages the use of the hardware among
the various application programs for the various users and provides the user a relatively simple
machine to use.
An operating system provides an environment for the execution of programs by providing services
needed by those programs.
The services programs request fall into five categories:
1. Process Control
2. File System Management
3. I/O Operation
4. Interprocess Communication
5. Information Maintenance
The operating system must try to satisfy these requests in a multi-user, multi-process environment
while managing:
Resource allocation
Error Detection
Protection
File Systems
The FAT file system is simple yet robust. It offers reasonable performance even in light-weight
implementations and is therefore widely adopted and supported by virtually all existing operating
systems for personal computers as well as some home computers and embedded systems. This
makes it a useful format for solid-state memory cards and a convenient way to share data between
operating systems.
FAT file systems are the default file system for removable media (with the exception of CDs and
DVDs) and as such are commonly found on floppy disks, super-floppies, memory and flash
memory cards or USB flash drives and are supported by most portable devices such as PDAs,
digital cameras, camcorders, media players, or mobile phones. While FAT12 is omnipresent on
floppy disks, FAT16 and FAT32 are typically found on the larger media.
The first reserved sector (sector 0) is the Boot Sector. It includes an area called the BIOS
Parameter Block (with some basic file system information, in particular its type, and pointers to the
location of the other sections) and usually contains the operating system's boot loader code.
Important information from the Boot Sector is accessible through an operating system structure
called the Drive Parameter Block (DPB) in DOS and OS/2. The total count of reserved sectors is
indicated by a field inside the Boot Sector. For FAT32 file systems, the reserved sectors include a
File System Information Sector at sector 1 and a Backup Boot Sector at sector 6.
The File Allocation Table region typically contains two copies (the number may vary) of the File
Allocation Table for the sake of redundancy checking, although these are rarely used, even by disk
repair utilities. They are maps of the
Data Region, indicating which clusters are used by files and directories. In FAT12 and FAT16 they
immediately follow the reserved sectors.
The Root Directory Region is a Directory Table that stores information about the files and directories
located in the root directory. It is only used with FAT12 and FAT16, and imposes on the root
directory a fixed
maximum size which is pre-allocated at creation of this volume. FAT32 stores the root directory in
the Data Region, along with files and other directories, allowing it to grow without such a
constraint.
The Data Region is where the actual file and directory data is stored; it takes up most of the partition.
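The region layout described above can be computed from a handful of Boot Sector fields. A sketch for FAT12/FAT16-style volumes (the function and parameter names here are illustrative, not the on-disk BPB field names):

```python
def fat16_regions(reserved_sectors, num_fats, sectors_per_fat,
                  root_entries, bytes_per_sector=512):
    """Starting sector of each FAT16 region.

    The FATs immediately follow the reserved sectors, the root
    directory follows the FATs (each directory entry is 32 bytes),
    and the Data Region follows the root directory.
    """
    fat_start = reserved_sectors
    root_start = fat_start + num_fats * sectors_per_fat
    root_sectors = (root_entries * 32 + bytes_per_sector - 1) // bytes_per_sector
    data_start = root_start + root_sectors
    return {"fat": fat_start, "root_dir": root_start, "data": data_start}

# A classic 1.44 MB floppy has 1 reserved sector, 2 FATs of 9 sectors
# each, and 224 root entries: FAT at sector 1, root directory at 19,
# data region at 33.
```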
NTFS, short for New Technology File System, is one of the file systems for the Windows NT
operating system (Windows NT also supports the FAT file system). NTFS has features to improve
reliability, such as transaction logs to help recover from disk failures. To control access to files, we
can set permissions for directories and/or individual files. NTFS files are not accessible from other
operating systems such as DOS.
For large applications, NTFS supports spanning volumes, which means files and directories can be
spread out across several physical disks.
NTFS supersedes the FAT file system as the preferred file system for Microsoft’s Windows
operating systems. NTFS has several improvements over FAT and HPFS (High Performance File
System), such as improved support for metadata, and the use of advanced data structures to
improve performance, reliability, and disk space utilization, plus additional extensions, such as
security access control lists (ACL) and file system journaling.
HPFS or High Performance File System is a file system created specifically for the OS/2
operating system to improve upon the limitations of the FAT file system.
Disk Sector
In computer disk storage, a sector is a subdivision of a track on a magnetic disk or optical disc.
Each sector stores a fixed amount of user data. Traditional formatting of these storage media
provides space for 512 bytes or 2048 bytes of user-accessible data per sector. Newer hard drives
use 4096 byte (4 KB or 4K) Advanced Format sectors.
Mathematically, the word sector means a portion of a disk between a center, two radii and a
corresponding arc, which is shaped like a slice of a pie. Thus, the disk sector refers to the
intersection of a track and mathematical sector.
In disk drives, each physical sector is made up of three basic parts, the sector header, the data area
and the error-correcting code (ECC). The sector header contains information used by the drive and
controller. This information includes synch bytes, address identification, flaw flag and header
parity bytes. The header may also include an alternate address to be used if the data area is
undependable. The address identification is used to ensure that the mechanics of the drive have
positioned the read/write head over the correct location. The data area contains the recorded user
data. The ECC field contains codes based on the data field, which are used to check and possibly
correct errors that may have been introduced into the data.
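Real sector ECC uses codes such as Reed-Solomon, which can correct errors as well as detect them. The basic idea of deriving check data from the data field can still be illustrated with a toy one-byte XOR parity (detection only, and far weaker than real ECC):

```python
from functools import reduce

def parity_byte(data: bytes) -> int:
    """XOR all data bytes together to form a one-byte check code.
    Real sector ECC (e.g. Reed-Solomon) can also *correct* errors,
    which this toy parity cannot."""
    return reduce(lambda a, b: a ^ b, data, 0)

def verify(data: bytes, stored_parity: int) -> bool:
    """Recompute the parity over the data area and compare it with
    the check code stored alongside the sector."""
    return parity_byte(data) == stored_parity
```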
Fragmentation
Basic principle:
When a computer program requests blocks of memory from the computer system, the blocks are
allocated in chunks. When the computer program is finished with a chunk, it can free the chunk
back to the computer. The size and the amount of time a chunk is held by a program vary. During
its lifespan, a computer program can request and free many chunks of memory.
When a program is started, the free memory areas are long and contiguous. Over time and with
use, the long contiguous regions become fragmented into smaller and smaller contiguous areas.
Eventually, it may become impossible for the program to request large chunks of memory.
Overhead
The memory allocator needs to store all the information related to all memory allocations. This
information includes the location, size and ownership of any free blocks, as well as other internal
status details. Overhead comprises all the additional system resources that the programming
algorithm requires. A dynamic memory allocator typically stores this overhead information in the
memory it manages. This leads to wastage of memory; hence, it is considered a part of memory
fragmentation.
Internal fragmentation
When the memory allocated is larger than required, the rest is wasted. Some reasons for excess
allocation are discussed below.
The term "internal" refers to the fact that the unusable storage is inside the allocated region. While
this may seem foolish, it is often accepted in return for increased efficiency or simplicity.
There are some basic memory allocation rules which all memory allocators must adhere to.
According to the "segregated free list" allocator policy, all memory allocations must start at an
address divisible by 4, 8, or 16. The memory allocator may assign blocks of only certain
predefined sizes to clients. It depends upon the processor architecture. Also, extra bytes are
assigned to a program for alignment and metadata.
For example, when a client requests a block of 23 bytes, it may well get 24 or 28 bytes or even
more.
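The rounding involved is simply "round the request up to the next multiple of the allocator's alignment"; the difference between the two numbers is the internal fragmentation. A sketch:

```python
def aligned_size(requested: int, alignment: int = 8) -> int:
    """Round a request up to the allocator's alignment; the
    difference between the result and the request is internal
    fragmentation (wasted bytes inside the allocated block)."""
    return (requested + alignment - 1) // alignment * alignment

# A 23-byte request on an 8-byte-aligned allocator yields a 24-byte
# block, wasting 1 byte; with 16-byte alignment it yields 32 bytes.
```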
Or in many file systems, each file always starts at the beginning of a cluster because this simplifies
organization and makes it easier to grow files. Any space left over between the last byte of the file
and the first byte of the next cluster is a form of internal fragmentation called file slack, slack
space, or cluster overhang. Slack space is a very important source of evidence in computer forensic
investigations.
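File slack can be computed directly: it is the gap between the file's last byte and the end of its final cluster. A sketch (the 4096-byte default cluster size is just an example value):

```python
def slack_space(file_size: int, cluster_size: int = 4096) -> int:
    """Bytes of cluster overhang after the last byte of the file.
    A file that exactly fills its final cluster has no slack."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder
```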
Another common example: English text is often stored with one character in each 8-bit byte even
though in standard ASCII encoding the most significant bit of each byte is always zero. The
unused bits are a form of internal fragmentation. This may be reclaimed by compressing the text if
it is long enough.
External fragmentation
External fragmentation is the inability to use free memory as the free memory is divided into small
blocks of memory and these blocks are interspersed with the allocated memory. It is a weakness of
certain storage allocation algorithms, occurring when an application allocates and deallocates
("frees") regions of storage of varying sizes, and the allocation algorithm responds by leaving the
allocated and deallocated regions interspersed. The result is that although free storage is available,
it is effectively unusable because it is divided into pieces that are too small to satisfy the demands
of the application. The term "external" refers to the fact that the unusable storage is outside the
allocated regions.
For example, fragmentation is 90% when 100 MB free memory is present but largest free block of
memory for allocation is just 10 MB.
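That 90% figure follows from a common metric: the fraction of free memory that lies outside the largest free block. A sketch:

```python
def external_fragmentation(free_blocks):
    """1 - largest_free_block / total_free.
    0.0 means all free memory is one contiguous region; values near
    1.0 mean free memory is badly scattered into small pieces."""
    total = sum(free_blocks)
    if total == 0:
        return 0.0
    return 1 - max(free_blocks) / total

# Ten scattered 10 MB blocks: 100 MB free in total, but the largest
# single allocation possible is 10 MB, so fragmentation is 90%.
```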
Data fragmentation
Data fragmentation occurs when a piece of data in memory is broken up into many pieces that are
not close together. It is typically the result of attempting to insert a large object into storage that
has already suffered external fragmentation.
For example, files in a file system are usually managed in units called blocks or clusters. When a
file system is created, there is free space to store file blocks together contiguously. This allows for
rapid sequential file reads and writes. However, as files are added, removed, and changed in size,
the free space becomes externally fragmented, leaving only small holes in which to place new data.
When a new file is written, or when an existing file is extended, the operating system puts the new
data in new non-contiguous data blocks to fit into the available holes. The new data blocks are
necessarily scattered, slowing access due to seek time and rotational latency of the read/write head,
and incurring additional overhead to manage additional locations. This is called file system
fragmentation.
When writing a new file of a known size, if there are any empty holes that are larger than that file,
the operating system can avoid data fragmentation by putting the file into any one of those holes.
There are a variety of algorithms for selecting into which of those potential holes to put the file; each of
them is a heuristic approximate solution to the bin packing problem. The "best fit" algorithm
chooses the smallest hole that is big enough. The "worst fit" algorithm chooses the largest hole.
The "first fit" algorithm chooses the first hole that is big enough. The "next fit" algorithm keeps
track of where the last file was written and resumes its search from that point.
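The four hole-selection policies can be sketched as follows (holes are represented simply as a list of sizes; a real file system or allocator would track their addresses too):

```python
def first_fit(holes, size):
    """Index of the first hole big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole (if it fits)."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

def next_fit(holes, size, start):
    """Like first fit, but resume scanning where the previous
    search stopped, wrapping around the list at most once."""
    n = len(holes)
    for k in range(n):
        i = (start + k) % n
        if holes[i] >= size:
            return i
    return None
```

All four are heuristics for the same underlying bin-packing problem; none is best in every workload.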
Windows Registry
The Windows Registry is a very powerful and logically designed computer database that controls
virtually every facet of your computer's operation and is found on all Windows installations.
While the computer's microprocessor is the brain of the hardware, the Registry is the brain of the
Operating System. A deleted or corrupted Registry entry could render an application or even the
entire operating system completely unusable.
The regedit command opens the Registry and provides the user with a graphical interface that
allows for editing or inserting keys that can manipulate software throughout the computer's
operating system.
The above figure shows the five sections that comprise the Registry. All sections begin with the
HKEY prefix (abbreviation for Hive Key) and each section controls a specific function within the
Windows operating system.
The Classes_Root hive manages installed applications and their associated file extensions.
The Current_User hive manages the data and preferences of the currently logged in user.
The Local_Machine hive controls settings that affect the entire operating system.
The Users hive holds the settings pertaining to all users who have logged into that
computer.
The Current_Config contains per session data. This data changes the next time a different
user logs in or if the computer is restarted.
The Registry is comprised of folders, or Keys, and within those folders STRING, BINARY,
DWORD, MULTI-STRING, or EXPANDABLE STRING values that can launch, stop, or change
processes that run on your computer.
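For illustration, a Registry path written with the common HKLM/HKCU shorthand can be split into its hive and subkey. This helper is purely illustrative, not a Windows API; the hive names are the five sections described above:

```python
# Common shorthand for the five Registry hives.
HIVE_ABBREVIATIONS = {
    "HKCR": "HKEY_CLASSES_ROOT",
    "HKCU": "HKEY_CURRENT_USER",
    "HKLM": "HKEY_LOCAL_MACHINE",
    "HKU":  "HKEY_USERS",
    "HKCC": "HKEY_CURRENT_CONFIG",
}

def parse_registry_path(path: str):
    """Split a path like 'HKLM\\Software\\Vendor' into the full
    hive name and the subkey beneath it."""
    hive, _, subkey = path.partition("\\")
    hive = HIVE_ABBREVIATIONS.get(hive.upper(), hive.upper())
    return hive, subkey
```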
The Registry can be edited manually by expanding each section and drilling down (expanding)
into the hive, or folder structure, and locating the entry that needs to be modified or inserting a new
entry to control or support a new function.
In many corporate settings, the Network or Computer administrators will disable a user's ability
access the Registry. However, most computers used at home or for recreational purposes
automatically give the main account administrative privileges.
One of the most vulnerable and, at the same time, powerful Keys within the Registry is the
(HKEY_LOCAL_MACHINE). Within this key, authors of spyware, malware, virus purveyors, and
even useful software programs will modify the processes of a user computer from the moment that
user logs in.
Registry Cleaner
Most personal computers sold today contain many applications that are preinstalled by the
manufacturer. Updates or complete reinstallations of applications may result in duplicate or
additional Registry entries requiring more processor and memory overhead as the computer is
required to scan deeper into the Registry’s section.
Removing these applications may delete the application files but that does not necessarily remove
the Registry entries. Having an easy to use, Registry Cleaner can assist the user in determining
what key can be discarded and, more importantly, what keys need to be left untouched for the
computer to operate correctly.
With the right Registry Cleaner, you can safeguard yourself from Internet thieves and other e-
miscreants by removing potential security breaches and the malware installed by their applications.
The right Registry Cleaner is a must for today's computer user. With the right one, a computer has a
fighting chance to reach its full potential.
Architecture of Windows NT
The architecture of Windows NT is a layered design that consists of two main components:
1) User Mode
2) Kernel Mode
It is a preemptive operating system, which has been designed to work with uniprocessor and
symmetrical multi-processor (SMP) based computers. To process input/output (I/O) requests, it
uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting
with Windows 2000, Microsoft began making 64-bit versions, before this, the operating systems
only existed in 32-bit versions.
User Mode:
The user mode is made up of subsystems which can pass I/O requests to the appropriate kernel
mode drivers via the I/O manager (which exists in kernel mode). Two subsystems make up the
user mode layer of Windows NT.
Environment subsystem
Integral subsystem.
The Environment subsystem was designed to run applications written for many different types of
operating systems. None of the environment subsystems can directly access hardware, and must
request access to memory resources through the Virtual Memory Manager that runs in Kernel
mode.
There are three main environment subsystems:
1) Win32
2) OS/2
3) POSIX
The POSIX environment subsystem supports applications that are strictly written to either the
POSIX standard or the related ISO/IEC standards. The POSIX subsystem has been an area of
recent active development and is a major feature of Windows Computer Cluster Server 2003.
The Integral subsystem looks after operating-system specific functions on behalf of the
environment subsystem. It consists of a security subsystem, a workstation service and a server service. The
security subsystem deals with security tokens, grants or denies access to user accounts based on
resource permissions, handles login requests and initiates login authentication, and determines
which system resources need to be audited by Windows NT and also looks after Active Directory.
The workstation service looks after the network redirector, which provides the computer access to the
network. The server service allows the computer to provide network services.
Kernel Mode:
Windows NT kernel mode has full access to the hardware and system resources of the computer
and runs code in a protected memory area. It controls access to scheduling, thread prioritization,
memory management and the interaction with hardware. The kernel mode stops user mode
services and applications from accessing critical areas of the operating system that they should not
have access to; user mode processes must ask the kernel mode to perform such operations on their
behalf.
Kernel mode consists of executive services, which are made up of many modules that do specific
tasks, kernel drivers, and a Hardware Abstraction Layer (HAL). The HAL is a layer between the
physical hardware of the computer and the rest of the operating system.
The executive services deal with I/O, object management, security and process management. They
are divided into several subsystems, among which are the Cache Manager, Configuration Manager,
I/O Manager, Local Procedure Call (LPC), Memory Manager, Object Manager, Process Structure
and Security Reference Monitor (SRM). Grouped together, these components are called the
Executive services.
Object Manager
The Object Manager is an executive subsystem that all other executive subsystems, especially
system calls, must pass through to gain access to Windows NT resources, essentially making it a
resource management infrastructure service. The object manager is used to reduce the duplication
of object resource management functionality in other executive subsystems, which could
potentially lead to bugs and make development of Windows NT harder. To the object manager,
each resource is an object, whether that resource is a physical resource (such as a file system or
peripheral) or a logical resource (such as a file). Object creation is a process in two phases,
creation and insertion.
Cache Manager
It closely coordinates with the Memory Manager, I/O Manager and I/O drivers to provide a
common cache for regular file I/O.
Configuration Manager
It implements the Windows Registry.
I/O Manager
It allows devices to communicate with user-mode subsystems. It translates user-mode read and
write commands into read or write IRPs which it passes to device drivers. It accepts file system I/O
requests and translates them into device specific calls, and can incorporate low-level device drivers
that directly manipulate hardware to either read input or write output. It also includes a cache
manager to improve disk performance by caching read requests and writing to the disk in the
background.
Local Procedure Call (LPC)
It provides inter-process communication ports with connection semantics. LPC ports are used by
user-mode subsystems to communicate with their clients, by Executive subsystems to
communicate with user-mode subsystems.
Memory Manager
It manages virtual memory, controlling memory protection and the paging of memory in and out of
physical memory to secondary storage, and implements a general-purpose allocator of physical
memory.
Process Structure
It handles process and thread creation and termination, and it implements the concept of Job, a
group of processes that can be terminated as a whole, or be placed under shared restrictions (such as a
total maximum of allocated memory, or CPU time). Job objects were introduced in Windows
2000.
PnP Manager
It handles Plug and Play and supports device detection and installation at boot time. It also has the
responsibility to stop and start devices on demand; this can happen when a bus (such as USB)
gains a new device and needs to have a device driver loaded to support it.
Power Manager
It deals with power events (power-off, stand-by, hibernate, etc.) and notifies affected drivers with
special IRPs (Power IRPs).
Security Reference Monitor (SRM)
It is the primary authority for enforcing the security rules of the security integral subsystem. It
determines whether an object or resource can be accessed, via the use of access control lists
(ACLs), which are made up of access control entries (ACEs). ACEs contain a security identifier
(SID) and a list of operations that the ACE gives a trustee (a user account, group account, or
login session) permission (allow, deny, or audit) to perform on that resource.
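The ACE walk can be modeled as a first-match scan: Windows orders deny entries before allow entries, and the first ACE that matches the requested operation decides. This is a simplified sketch, not the Win32 API; audit entries are omitted, and the SIDs are made up:

```python
# Toy model of ACL evaluation. Each ACE is (sid, access, operations),
# with deny ACEs listed before allow ACEs, as Windows orders them.

def check_access(acl, user_sids, operation):
    """True if the user's SIDs are allowed the requested operation.
    The first ACE matching both a SID and the operation decides;
    no matching ACE means implicit deny."""
    for sid, access, operations in acl:
        if sid in user_sids and operation in operations:
            return access == "allow"
    return False

acl = [
    ("S-1-5-32-999", "deny", {"write"}),            # deny a group write
    ("S-1-5-21-1001", "allow", {"read", "write"}),  # allow a user
]
```

Because the deny ACE is evaluated first, a user who is both individually allowed and a member of the denied group is still refused write access.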