
OS2

Magzhan Kairanbay
Hardware
An operating system is intimately tied to the hardware of the computer it runs on. It
extends the computer’s instruction set and manages its resources. To work, it
must know a great deal about the hardware, at least about how the hardware
appears to the programmer.
Hardware
The CPU, memory, and I/O devices are
all connected by a system bus and
communicate with one another over it.
Modern personal computers have a
more complicated structure, involving
multiple buses, which we will look at
later.
Processor
The ‘‘brain’’ of the computer is the CPU. It fetches instructions from memory and executes them.
The basic cycle of every CPU is to fetch the first instruction from memory, decode it to determine
its type and operands, execute it, and then fetch, decode, and execute subsequent instructions.
The cycle is repeated until the program finishes. In this way, programs are carried out. Each CPU
has a specific set of instructions that it can execute. Thus an x86 processor cannot execute ARM
programs and an ARM processor cannot execute x86 programs. Because accessing memory to
get an instruction or data word takes much longer than executing an instruction, all CPUs contain
some registers inside to hold key variables and temporary results. In addition to the general
registers used to hold variables and temporary results, most computers have several special
registers that are visible to the programmer. One of these is the program counter, which contains
the memory address of the next instruction to be fetched. After that instruction has been fetched,
the program counter is updated to point to its successor.
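As a rough sketch (not any real instruction set), the loop below mimics this fetch-decode-execute cycle over a tiny array that stands in for memory; the HALT/LOAD/ADD opcodes and the pc and acc variables are invented for the illustration.

/* A toy illustration of the fetch-decode-execute cycle.
 * The instruction set (HALT/LOAD/ADD) is invented for this sketch
 * and does not correspond to x86, ARM, or any real CPU. */
#include <stdio.h>

enum { HALT = 0, LOAD = 1, ADD = 2 };

int main(void)
{
    int memory[] = { LOAD, 5, ADD, 7, HALT };  /* a tiny "program" in memory */
    int pc = 0;          /* program counter: address of the next instruction */
    int acc = 0;         /* a single general register (accumulator)          */
    int running = 1;

    while (running) {
        int opcode = memory[pc++];               /* fetch, then advance the PC */
        switch (opcode) {                        /* decode                     */
        case LOAD: acc  = memory[pc++]; break;   /* execute                    */
        case ADD:  acc += memory[pc++]; break;
        case HALT: running = 0;         break;
        }
    }
    printf("accumulator = %d\n", acc);           /* prints 12 */
    return 0;
}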
Processor
Another register is the stack pointer, which points to the top of the current stack in memory. The stack contains one
frame for each procedure that has been entered but not yet exited. A procedure’s stack frame holds those input
parameters, local variables, and temporary variables that are not kept in registers.

Yet another register is the PSW (Program Status Word). This register contains the condition code bits, which are set
by comparison instructions, the CPU priority, the mode (user or kernel), and various other control bits. User programs
may normally read the entire PSW but typically may write only some of its fields. The PSW plays an important role in
system calls and I/O.

Every time it stops a running program, the operating system must save all the registers so they can be restored when
the program runs later.

To improve performance, CPU designers have long abandoned the simple model of fetching, decoding, and executing
one instruction at a time. Many modern CPUs have facilities for executing more than one instruction at the same time.
For example, a CPU might have separate fetch, decode, and execute units, so that while it is executing instruction n,
it could also be decoding instruction n + 1 and fetching instruction n + 2. Such an organization is called a pipeline.
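The overlap can be pictured with a toy simulation: on each clock tick, instruction n is executed while instruction n + 1 is decoded and n + 2 is fetched. The C sketch below only prints which instruction number occupies each stage per tick; no real hardware behavior is modeled.

/* A minimal sketch of a 3-stage pipeline (fetch/decode/execute).
 * On tick t, instruction t is fetched, t-1 decoded, and t-2 executed. */
#include <stdio.h>

int main(void)
{
    int n_instructions = 5;
    for (int t = 0; t < n_instructions + 2; t++) {
        int fetch   = t;
        int decode  = t - 1;
        int execute = t - 2;
        printf("tick %d:", t);
        if (fetch < n_instructions)                    printf("  fetch I%d", fetch);
        if (decode >= 0 && decode < n_instructions)    printf("  decode I%d", decode);
        if (execute >= 0 && execute < n_instructions)  printf("  execute I%d", execute);
        printf("\n");
    }
    return 0;
}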
Processor
Even more advanced than a pipeline design
is a superscalar CPU, shown in Fig. 1-7(b).
In this design, multiple execution units are
present, for example, one for integer
arithmetic, one for floating-point arithmetic,
and one for Boolean operations. Two or
more instructions are fetched at once,
decoded, and dumped into a holding buffer
until they can be executed. As soon as an
execution unit becomes available, it looks in
the holding buffer to see if there is an
instruction it can handle, and if so, it
removes the instruction from the buffer and
executes it.
Processor
User programs always run in user mode, which permits only a subset of the
instructions to be executed and a subset of the features to be accessed.
Generally, all instructions involving I/O and memory protection are disallowed in
user mode. Setting the PSW mode bit to enter kernel mode is also forbidden, of
course. To obtain services from the operating system, a user program must make
a system call, which traps into the kernel and invokes the operating system. The
TRAP instruction switches from user mode to kernel mode and starts the
operating system. When the work has been completed, control is returned to the
user program at the instruction following the system call.
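As a concrete, hedged example (assuming a Linux system with glibc), the syscall() wrapper below issues the trap that switches the CPU into kernel mode for a write system call; control returns to user mode at the next statement once the kernel has finished the work.

/* Assumes Linux/glibc: syscall() issues the trap instruction that
 * switches from user mode to kernel mode and invokes the kernel. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";
    /* The kernel runs in kernel mode on our behalf, then returns here. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}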
Multithreaded and Multicore Chips
Many modern CPUs can hold the state of two or more threads at once, a feature known as
multithreading or hyperthreading. If one of the processes needs to read a word from memory
(which takes many clock cycles), a multithreaded CPU can just switch to another thread.
Multithreading does not offer true parallelism. Only one process at a time is running, but
thread-switching time is reduced to the order of a nanosecond.

Multithreading has implications for the operating system because each thread appears to the
operating system as a separate CPU. Consider a system with two actual CPUs, each with two
threads. The operating system will see this as four CPUs.

Beyond multithreading, many CPU chips now have four, eight, or more complete processors or
cores on them. The multicore chips of Fig. 1-8 effectively carry four minichips on them, each with
its own independent CPU. (The caches will be explained below.) Some processors, like Intel
Xeon Phi and the Tilera TilePro, already sport more than 60 cores on a single chip. Making use
of such a multicore chip will definitely require a multiprocessor operating system.
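A small sketch of what this looks like from user space, assuming Linux/glibc (the _SC_NPROCESSORS_ONLN name is a common extension rather than strict POSIX): the count reported is of logical CPUs, so two cores with two hardware threads each show up as four.

/* Assumes Linux/glibc: report how many logical CPUs the OS sees. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cpus = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs online */
    printf("operating system sees %ld logical CPUs\n", cpus);
    return 0;
}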
Multithreaded and Multicore Chips
Incidentally, in terms of sheer numbers,
nothing beats a modern GPU (Graphics
Processing Unit). A GPU is a processor with,
literally, thousands of tiny cores. They are
very good for many small computations
done in parallel, like rendering polygons in
graphics applications. They are not so good
at serial tasks. They are also hard to
program. While GPUs can be useful for
operating systems (e.g., encryption or
processing of network traffic), it is not likely
that much of the operating system itself will
run on the GPUs.
Memory
The second major component in any
computer is the memory. Ideally, a memory
should be extremely fast (faster than
executing an instruction so that the CPU is
not held up by the memory), abundantly large,
and dirt cheap. No current technology
satisfies all of these goals, so a different
approach is taken. The memory system is
constructed as a hierarchy of layers, as
shown in Fig. 1-9. The top layers have higher
speed, smaller capacity, and greater cost per
bit than the lower ones, often by factors of a
billion or more.
Memory
The top layer consists of the registers internal to the CPU. They are made of the same material
as the CPU and are thus just as fast as the CPU. Consequently, there is no delay in accessing
them. The storage capacity available in them is typically 32 × 32 bits on a 32-bit CPU and 64 ×
64 bits on a 64-bit CPU, less than 1 KB in both cases. Programs must manage the registers
(i.e., decide what to keep in them) themselves, in software.

Next comes the cache memory, which is mostly controlled by the hardware. Main memory is
divided up into cache lines, typically 64 bytes, with addresses 0 to 63 in cache line 0, 64 to 127 in
cache line 1, and so on. The most heavily used cache lines are kept in a high-speed cache
located inside or very close to the CPU. When the program needs to read a memory word, the
cache hardware checks to see if the line needed is in the cache. If it is, called a cache hit, the
request is satisfied from the cache and no memory request is sent over the bus to the main
memory.
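The line-selection arithmetic the cache hardware performs can be sketched in a few lines of C; the 64-byte line size matches the text, and the sample addresses are arbitrary.

/* Sketch of the arithmetic behind 64-byte cache lines: the line number
 * is the address divided by 64, the offset is the remainder. */
#include <stdio.h>

#define CACHE_LINE_SIZE 64

int main(void)
{
    unsigned long addresses[] = { 0, 63, 64, 127, 1000 };  /* example addresses */
    for (int i = 0; i < 5; i++) {
        unsigned long a = addresses[i];
        printf("address %5lu -> cache line %lu, offset %lu\n",
               a, a / CACHE_LINE_SIZE, a % CACHE_LINE_SIZE);
    }
    return 0;
}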
Memory
Caches are such a good idea that modern CPUs have two of them. The first level
or L1 cache is always inside the CPU and usually feeds decoded instructions into
the CPU’s execution engine. Most chips have a second L1 cache for very heavily
used data words. The L1 caches are typically 16 KB each. In addition, there is
often a second cache, called the L2 cache, that holds several megabytes of
recently used memory words. The difference between the L1 and L2 caches lies in
the timing. Access to the L1 cache is done without any delay, whereas access to
the L2 cache involves a delay of one or two clock cycles.
Memory
Main memory comes next in the hierarchy of Fig. 1-9. This is the workhorse of the
memory system. Main memory is usually called RAM (Random Access Memory).
Old-timers sometimes call it core memory, because computers in the 1950s and 1960s
used tiny magnetizable ferrite cores for main memory. They have been gone for decades
but the name persists. Currently, memories are hundreds of megabytes to several
gigabytes and growing rapidly. All CPU requests that cannot be satisfied out of the cache
go to main memory.
In addition to the main memory, many computers have a small amount of nonvolatile
random-access memory. Unlike RAM, nonvolatile memory does not lose its contents
when the power is switched off. ROM (Read Only Memory) is programmed at the factory
and cannot be changed afterward. It is fast and inexpensive. On some computers, the
bootstrap loader used to start the computer is contained in ROM.
Memory
Next in the hierarchy is magnetic disk (hard disk). Disk storage
is two orders of magnitude cheaper than RAM per bit and
often two orders of magnitude larger as well. The only problem
is that the time to randomly access data on it is close to three
orders of magnitude slower. The reason is that a disk is a
mechanical device, as shown in Fig. 1-10.

A disk consists of one or more metal platters that rotate at 5400, 7200, 10,800 RPM or more. A
mechanical arm pivots over the platters from the corner, similar to the pickup arm on an old
33-RPM phonograph for playing vinyl records.

Information is written onto the disk in a series of concentric circles. At any given arm position,
each of the heads can read an annular region called a track. Together, all the tracks for a given
arm position form a cylinder.
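The geometry can be illustrated with the classic cylinder/head/sector-to-block calculation; the head and sector counts below are invented for the example, not taken from a real drive.

/* A sketch of classic cylinder/head/sector addressing. Each (cylinder,
 * head) pair selects one track; a cylinder is the set of tracks under
 * all heads at one arm position. Geometry values are made up. */
#include <stdio.h>

#define HEADS_PER_CYLINDER 16
#define SECTORS_PER_TRACK  63

/* Translate a (cylinder, head, sector) triple to a linear block number. */
static long chs_to_lba(long c, long h, long s)
{
    return (c * HEADS_PER_CYLINDER + h) * SECTORS_PER_TRACK + (s - 1);
}

int main(void)
{
    printf("C=0 H=0 S=1  -> LBA %ld\n", chs_to_lba(0, 0, 1));
    printf("C=2 H=5 S=10 -> LBA %ld\n", chs_to_lba(2, 5, 10));
    return 0;
}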
Memory
Sometimes you will hear people talk about disks that are really not disks at all, like
SSDs, (Solid State Disks). SSDs do not have moving parts, do not contain platters
in the shape of disks, and store data in (Flash) memory. The only way in which
they resemble disks is that they also store a lot of data that is not lost when the
power is off.
Input, output
I/O devices also interact heavily with the operating system. As we saw in Fig. 1-6, I/O devices
generally consist of two parts: a controller and the device itself. The controller is a chip or a set of
chips that physically controls the device. It accepts commands from the operating system, for
example, to read data from the device, and carries them out.

SATA is currently the standard type of disk on many computers. Since the actual device interface
is hidden behind the controller, all that the operating system sees is the interface to the controller,
which may be quite different from the interface to the device.

SATA stands for Serial ATA, and ATA in turn stands for AT Attachment. In case you are curious
what AT stands for, this was IBM’s second generation ‘‘Personal Computer Advanced
Technology’’ built around the then-extremely-potent 6-MHz 80286 processor that the company
introduced in 1984. What we learn from this is that the computer industry has a habit of
continuously enhancing existing acronyms with new prefixes and suffixes.
Input, output
Because each type of controller is different, different software is needed to control
each one. The software that talks to a controller, giving it commands and
accepting responses, is called a device driver. Each controller manufacturer has to
supply a driver for each operating system it supports. Thus a scanner may come
with drivers for OS X, Windows 7, Windows 8, and Linux, for example.
Every controller has a small number of registers that are used to communicate
with it. For example, a minimal disk controller might have registers for specifying
the disk address, memory address, sector count, and direction (read or write). To
activate the controller, the driver gets a command from the operating system, then
translates it into the appropriate values to write into the device registers. The
collection of all the device registers forms the I/O port space.
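A minimal sketch of how a driver might fill in such registers is shown below; the register layout, names, and command value are hypothetical, since every real controller defines its own.

/* Hypothetical register layout for the minimal disk controller described
 * above; real controllers each define their own registers and commands. */
#include <stdint.h>

struct disk_controller_regs {
    volatile uint32_t disk_address;   /* which sector on the disk       */
    volatile uint32_t memory_address; /* where in RAM to put the data   */
    volatile uint32_t sector_count;   /* how many sectors to transfer   */
    volatile uint32_t command;        /* e.g., 1 = read, 2 = write      */
};

/* The driver translates an OS request into writes to the device registers. */
static void start_read(struct disk_controller_regs *regs,
                       uint32_t sector, uint32_t mem, uint32_t count)
{
    regs->disk_address   = sector;
    regs->memory_address = mem;
    regs->sector_count   = count;
    regs->command        = 1;   /* writing the command register starts the I/O */
}

int main(void)
{
    struct disk_controller_regs fake;   /* stand-in; real registers are mapped by the hardware */
    start_read(&fake, 1024, 0x100000, 8);
    return 0;
}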
Input, output
Input and output can be done in three different ways. In the simplest method, a user
program issues a system call, which the kernel then translates into a procedure call to
the appropriate driver. The driver then starts the I/O and sits in a tight loop continuously
polling the device to see if it is done (usually there is some bit that indicates that the
device is still busy). When the I/O has completed, the driver puts the data (if any) where
they are needed and returns. The operating system then returns control to the caller. This
method is called busy waiting and has the disadvantage of tying up the CPU polling the
device until it is finished.
The second method is for the driver to start the device and ask it to give an interrupt
when it is finished. At that point the driver returns. The operating system then blocks the
caller if need be and looks for other work to do. When the controller detects the end of
the transfer, it generates an interrupt to signal completion.
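The busy-waiting method boils down to a tight polling loop like the sketch below; the status register and its busy bit are invented for the illustration.

/* Sketch of busy waiting: poll an (invented) status register whose low
 * bit means "device still busy" until the device reports it is done. */
#include <stdint.h>

#define STATUS_BUSY 0x1

static void busy_wait(volatile uint32_t *status_reg)
{
    while (*status_reg & STATUS_BUSY)
        ;   /* the CPU does nothing useful here, which is the drawback */
}

int main(void)
{
    uint32_t fake_status = 0;    /* already idle, so the loop exits at once */
    busy_wait(&fake_status);
    return 0;
}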
Bus
This system has many buses (e.g., cache, memory, PCIe, PCI, USB, SATA, and DMI), each with a different transfer
rate and function. The operating system must be aware of all of them for configuration and management. The main
bus is the PCIe (Peripheral Component Interconnect Express) bus.

The PCIe bus was invented by Intel as a successor to the older PCI bus, which in turn was a replacement for the
original ISA (Industry Standard Architecture) bus. Capable of transferring tens of gigabits per second, PCIe is much
faster than its predecessors. It is also very different in nature. Up to its creation in 2004, most buses were parallel and
shared. A shared bus architecture means that multiple devices use the same wires to transfer data. Thus, when
multiple devices have data to send, you need an arbiter to determine who can use the bus. In contrast, PCIe makes
use of dedicated, point-to-point connections. A parallel bus architecture as used in traditional PCI means that you
send each word of data over multiple wires. For instance, in regular PCI buses, a single 32-bit number is sent over 32
parallel wires. In contrast to this, PCIe uses a serial bus architecture and sends all bits in a message through a single
connection, known as a lane, much like a network packet. This is much simpler, because you do not have to ensure
that all 32 bits arrive at the destination at exactly the same time

The USB (Universal Serial Bus) was invented to attach all the slow I/O devices, such as the keyboard and mouse, to
the computer.
BIOS
Every PC contains a parentboard (formerly called a motherboard before political correctness hit
the computer industry). On the parentboard is a program called the system BIOS (Basic Input
Output System). The BIOS contains low-level I/O software, including procedures to read the
keyboard, write to the screen, and do disk I/O, among other things. Nowadays, it is held in a flash
RAM, which is nonvolatile but which can be updated by the operating system when bugs are
found in the BIOS. When the computer is booted, the BIOS is started. It first checks to see how
much RAM is installed and whether the keyboard and other basic devices are installed and
responding correctly. It starts out by scanning the PCIe and PCI buses to detect all the devices
attached to them. If the devices present are different from when the system was last booted, the
new devices are configured. The BIOS then determines the boot device by trying a list of devices
stored in the CMOS memory. The user can change this list by entering a BIOS configuration
program just after booting. Typically, an attempt is made to boot from a CD-ROM (or sometimes
USB) drive, if one is present. If that fails, the system boots from the hard disk. The first sector
from the boot device is read into memory and executed.
Mainframe OS
At the high end are the operating systems for mainframes, those room-sized
computers still found in major corporate data centers. These computers differ from
personal computers in terms of their I/O capacity. A mainframe with 1000 disks
and millions of gigabytes of data is not unusual; a personal computer with these
specifications would be the envy of its friends. Mainframes are also making
something of a comeback as high-end Web servers, servers for large-scale
electronic commerce sites, and servers for business-to-business transactions.
Server OS
One level down are the server operating systems. They run on servers, which are
either very large personal computers, workstations, or even mainframes. They
serve multiple users at once over a network and allow the users to share hardware
and software resources. Servers can provide print service, file service, or Web
service. Internet providers run many server machines to support their customers
and Websites use servers to store the Web pages and handle the incoming
requests. Typical server operating systems are Solaris, FreeBSD, Linux and
Windows Server 201x.
Multiprocessor OS
An increasingly common way to get major-league computing power is to connect
multiple CPUs into a single system. Depending on precisely how they are
connected and what is shared, these systems are called parallel computers,
multicomputers, or multiprocessors. They need special operating systems, but
often these are variations on the server operating systems, with special features
for communication, connectivity, and consistency.
Personal computer OS
The next category is the personal computer operating system. Modern ones all
support multiprogramming, often with dozens of programs started up at boot time.
Their job is to provide good support to a single user. They are widely used for
word processing, spreadsheets, games, and Internet access. Common examples
are Linux, FreeBSD, Windows 7, Windows 8, and Apple’s OS X. Personal
computer operating systems are so widely known that probably little introduction is
needed. In fact, many people are not even aware that other kinds exist.
Handheld computer OS
Continuing on down to smaller and smaller systems, we come to tablets,
smartphones and other handheld computers. A handheld computer, originally
known as a PDA (Personal Digital Assistant), is a small computer that can be held
in your hand during operation. Smartphones and tablets are the best-known
examples. As we have already seen, this market is currently dominated by
Google’s Android and Apple’s iOS, but they have many competitors. Most of
these devices boast multicore CPUs, GPS, cameras and other sensors, copious
amounts of memory, and sophisticated operating systems. Moreover, all of them
have more third-party applications (‘‘apps’’) than you can shake a (USB) stick at.
Embedded OS
Embedded systems run on the computers that control devices that are not
generally thought of as computers and which do not accept user-installed
software. Typical examples are microwave ovens, TV sets, cars, DVD recorders,
traditional phones, and MP3 players. The main property which distinguishes
embedded systems from handhelds is the certainty that no untrusted software will
ever run on them. You cannot download new applications to your microwave oven—all
the software is in ROM. This means that there is no need for protection between
applications, leading to design simplification. Systems such as Embedded Linux,
QNX and VxWorks are popular in this domain.
Other OS
Sensor Node OS

Real-time OS

Smart card OS
