Operating Systems Chapter 2
Each computer system includes a basic set of programs called the operating system. The most
important program in the set is referred to as the kernel. It is loaded into RAM when the system boots
and contains many critical procedures that are required for the system to operate. The other programs
are less crucial utilities; they can provide a wide variety of interactive experiences for the user, as well
as doing all the jobs the user bought the computer for, but the essential shape and capabilities of the
system are determined by the kernel. The kernel provides key facilities to everything else on the system
and determines many of the characteristics of higher-level software. Therefore, we often use the term
“operating system” as a synonym for “kernel.”
An operating system must fulfil two main objectives:
Interact with the hardware components, servicing all low-level programmable elements
included in the hardware platform.
Provide an execution environment to the applications that run on the computer system (the so-
called user programs).
Some operating systems permit all user programs to directly manipulate the hardware components (a
typical example is MS-DOS). In contrast, a Unix-like operating system hides all low-level details
concerning the physical organization of the computer from applications run by the user. When a
program wants to use a hardware resource, it must issue a request to the operating system. The kernel
evaluates the request and, if it chooses to grant the resource, interacts with the proper hardware
components on behalf of the user program.
To enforce this mechanism, modern operating systems rely on the availability of specific hardware
features that prevent user programs from directly interacting with low-level hardware components
or from accessing arbitrary memory locations. In particular, the hardware provides at least two different
execution modes for the CPU: a nonprivileged mode for user programs and a privileged mode for the
kernel. Unix calls these User Mode and Kernel Mode, respectively. This is also applicable to other
modern operating systems.
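The privilege boundary is visible from user space: a process can only request a privileged operation through a system call, and the kernel, running in Kernel Mode, decides whether to grant it. A minimal sketch on a Unix-like system, using the setuid(2) wrapper as the privileged request (the outcome depends on whether the process already runs as root):

```python
import os

# Request a privileged operation from the kernel: setting our user ID to 0
# (root). The call traps into Kernel Mode; the kernel checks the caller's
# credentials and either performs the operation or refuses it.
try:
    os.setuid(0)
    outcome = "granted"   # only possible if the process already had root privileges
except PermissionError:
    outcome = "refused"   # the kernel denied the request in Kernel Mode
print("privileged request:", outcome)
```

Either way, the user program never touches the hardware or kernel data structures directly; it only sees the kernel's answer.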
Multiuser Systems
A multiuser system is a computer that is able to concurrently and independently execute several
applications belonging to two or more users. Concurrently means that applications can be active at the
same time and contend for the various resources such as CPU, memory, hard disks, and so on.
Independently means that each application can perform its task with no concern for what the
applications of the other users are doing. Switching from one application to another, of course, slows
down each of them and affects the response time seen by the users. Many of the complexities of
modern operating system kernels are present to minimize the delays imposed on each program and to
provide the user with responses that are as fast as possible.
Multiuser operating systems must include several features: an authentication mechanism for verifying
the user's identity, a protection mechanism against buggy user programs that could interfere with other
applications, a protection mechanism against malicious programs, and an accounting mechanism that
limits the amount of resources granted to each user.
To ensure safe protection mechanisms, operating systems must use the hardware protection associated
with the CPU privileged mode. Otherwise, a user program would be able to directly access the system
circuitry and overcome the imposed bounds. Unix, Windows, and Linux are all multiuser systems that
enforce hardware protection of system resources.
In a multiuser system, each user has a private space on the machine; usually, he/she owns some quota
of the disk space to store files, receives private mail messages, and so on. The operating system must
ensure that the private portion of a user space is visible only to its owner. In particular, it must ensure
that no user can exploit a system application for the purpose of violating the private space of another
user.
All users are identified by a unique number called the User ID, or UID. Usually only a restricted
number of persons are allowed to make use of a computer system. When one of these users starts a
working session, the system asks for a login name and a password. If the user does not input a valid
pair, the system denies access. Because the password is assumed to be secret, the user’s privacy is
ensured.
To selectively share material with other users, each user is a member of one or more user groups,
which are identified by a unique number called a user group ID. Each file is associated with exactly
one group. For instance, access can be set so the user owning the file has read and write privileges, the
group has read-only privileges, and other users on the system are denied access to the file.
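These per-owner and per-group permissions can be inspected with the standard chmod/stat system calls. A small sketch (the scratch file is created only for illustration):

```python
import os
import stat
import tempfile

# Create a scratch file and give it mode 0640: owner read+write,
# group read-only, all other users denied access.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

info = os.stat(path)
mode = stat.S_IMODE(info.st_mode)   # just the permission bits

# Every file is owned by exactly one user (UID) and one group (GID).
print(f"owner UID={info.st_uid} group GID={info.st_gid} mode={oct(mode)}")
os.remove(path)
```

The octal value 0640 encodes exactly the policy described above: read and write for the owner, read-only for the group, nothing for everyone else.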
Any Unix-like operating system has a special user called root or superuser. The system administrator
must log in as root to handle user accounts, perform maintenance tasks such as system backups and
program upgrades, and so on. The root user can do almost everything, because the operating system
does not apply the usual protection mechanisms to her. In particular, the root user can access every file
on the system and can manipulate every running user program.
Processes
All operating systems use one fundamental abstraction: the process. A process can be defined either as
“an instance of a program in execution” or as the “execution context” of a running program. In
traditional operating systems, a process executes a single sequence of instructions in an address space;
the address space is the set of memory addresses that the process is allowed to reference. Modern
operating systems allow processes with multiple execution flows - that is, multiple sequences of
instructions executed in the same address space.
Multiuser systems must enforce an execution environment in which several processes can be active
concurrently and contend for system resources, mainly the CPU. Systems that allow concurrent active
processes are said to be multiprogramming or multiprocessing. It is important to distinguish programs
from processes; several processes can execute the same program concurrently, while the same process
can execute several programs sequentially.
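The program/process distinction can be observed directly: two processes can execute the same program concurrently, each with its own process ID. A hedged sketch using the subprocess module, where the shared program is simply the Python interpreter printing its PID:

```python
import subprocess
import sys

# Launch two concurrent processes, both executing the same program.
cmd = [sys.executable, "-c", "import os; print(os.getpid())"]
procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE) for _ in range(2)]

# Each process reports a distinct PID even though the program is identical.
pids = [int(p.communicate()[0]) for p in procs]
print("two processes, one program, distinct PIDs:", pids)
```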
On uniprocessor systems, just one process can hold the CPU, and hence just one execution flow can
progress at a time. In general, the number of CPUs is restricted, and therefore only a few
processes can progress at once. An operating system component called the scheduler chooses the
process that can progress. Some operating systems allow only nonpreemptable processes, which means
that the scheduler is invoked only when a process voluntarily relinquishes the CPU. But processes of a
multiuser system must be preemptable; the operating system tracks how long each process holds the
CPU and periodically activates the scheduler.
Unix is a multiprocessing operating system with preemptable processes. Even when no user is
logged in and no application is running, several system processes monitor the peripheral devices. In
particular, several processes listen at the system terminals waiting for user logins. When a user inputs a
login name, the listening process runs a program that validates the user password. If the user identity is
acknowledged, the process creates another process that runs a shell into which commands are entered.
When a graphical display is activated, one process runs the window manager, and each window on the
display is usually run by a separate process. When a user creates a graphics shell, one process runs the
graphics windows and a second process runs the shell into which the user can enter the commands. For
each user command, the shell process creates another process that executes the corresponding program.
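This per-command process creation is the classic Unix fork-and-execute idiom. A minimal, Unix-only sketch of what a shell does for each command (the command /bin/echo is an assumed example path):

```python
import os

def run_command(argv):
    """What a shell does for each command: fork a child process,
    exec the requested program in it, and wait for it to finish."""
    pid = os.fork()
    if pid == 0:                      # child: replace its image with the command
        os.execvp(argv[0], argv)
    _, status = os.waitpid(pid, 0)    # parent (the shell) waits for the child
    return os.waitstatus_to_exitcode(status)

code = run_command(["/bin/echo", "hello from a child process"])
print("command exited with status", code)
```

Note that `os.waitstatus_to_exitcode` requires Python 3.9 or later; on older versions the raw wait status would need decoding by hand.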
Unix-like operating systems, as well as Windows, adopt a process/kernel model. Each process has the
illusion that it’s the only process on the machine, and it has exclusive access to the operating system
services. Whenever a process makes a system call (i.e., a request to the kernel), the hardware changes
the privilege mode from User Mode to Kernel Mode, and the process starts the execution of a kernel
procedure with a strictly limited purpose. In this way, the operating system acts within the execution
context of the process in order to satisfy its request. Whenever the request is fully satisfied, the kernel
procedure forces the hardware to return to User Mode and the process continues its execution from the
instruction following the system call.
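Every system call makes this round trip: trap into Kernel Mode, run a kernel procedure in the context of the calling process, return to User Mode. Calling getpid(2) through the C library makes the round trip visible (a Unix-only sketch; ctypes loads the process's own C library):

```python
import ctypes
import os

# getpid() traps into the kernel, which runs a short kernel procedure on
# behalf of this process, then returns control to User Mode.
libc = ctypes.CDLL(None)
kernel_view = libc.getpid()   # via the C library's system call wrapper
user_view = os.getpid()       # Python's own wrapper for the same call
print("PID from both paths:", kernel_view, user_view)
```

Both paths end at the same kernel procedure, so they necessarily agree; the process resumes at the instruction after the call each time.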
Kernel Architecture
As stated before, most Unix kernels are monolithic: each kernel layer is integrated into the whole
kernel program and runs in Kernel Mode on behalf of the current process. In contrast, microkernel
operating systems demand a very small set of functions from the kernel, generally including a few
synchronization primitives, a simple scheduler, and an interprocess communication mechanism.
Several system processes that run on top of the microkernel implement other operating system-layer
functions, like memory allocators, device drivers, and system call handlers.
Although academic research on operating systems is oriented toward microkernels, such operating
systems are generally slower than monolithic ones, because the explicit message passing between the
different layers of the operating system has a cost. However, microkernel operating systems do
have some theoretical advantages over monolithic ones. Microkernels force the system programmers to
adopt a modularized approach, because each operating system layer is a relatively independent
program that must interact with the other layers through well-defined and clean software interfaces.
Moreover, an existing microkernel operating system can be ported to other architectures fairly
easily, because all hardware-dependent components are generally encapsulated in the microkernel
code. Finally, microkernel operating systems tend to make better use of random access memory
(RAM) than monolithic ones, because system processes that aren’t implementing needed
functionalities might be swapped out or destroyed.
To achieve many of the theoretical advantages of microkernels without introducing performance
penalties, monolithic kernels such as Linux support modules: object files whose code can be linked to
(and unlinked from) the running kernel at runtime. Modules offer the following advantages:
Modularized approach
Because any module can be linked and unlinked at runtime, system programmers must introduce well-
defined software interfaces to access the data structures handled by modules. This makes it easy to
develop new modules.
Platform independence
Even if it may rely on some specific hardware features, a module doesn’t depend on a fixed hardware
platform. For example, a disk driver module that relies on the SCSI standard works as well on an IBM-
compatible PC as it does on Hewlett-Packard’s Alpha.
Frugal main memory usage
A module can be linked to the running kernel when its functionality is required and unlinked when it is
no longer useful; this is quite useful for small embedded systems.
No performance penalty
Once linked in, the object code of a module is equivalent to the object code of the statically linked
kernel. Therefore, no explicit message passing is required when the functions of the module are
invoked.
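Kernel modules are compiled object files linked by the kernel itself, not Python code, but the runtime-linking idea can be sketched with a userspace analogy: a "module" is loaded on demand and used only through its declared interface. All names here (toy_driver, init) are invented for illustration:

```python
import importlib.util
import pathlib
import tempfile

# A "module" exposing one well-defined entry point, linked in at runtime.
source = "def init():\n    return 'toy driver ready'\n"

with tempfile.TemporaryDirectory() as d:
    path = pathlib.Path(d) / "toy_driver.py"
    path.write_text(source)

    spec = importlib.util.spec_from_file_location("toy_driver", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # "link" the module into the running program
    message = module.init()           # use only the declared interface

print(message)
```

As with kernel modules, the loading program never inspects the module's internals; it relies entirely on the agreed interface, which is what makes modules individually replaceable.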
Computer-System Organization
A modern general-purpose computer system consists of one or more CPUs and a number of device
controllers connected through a common bus that provides access between components and shared
memory as depicted in Figure 1.2 (Silberschatz, Gagne & Galvin, 2018).
Each device controller is in charge of a specific type of device (for example, a disk drive, audio device,
or graphics display). Depending on the controller, more than one device may be attached. For instance,
one system USB port can connect to a USB hub, to which several devices can connect. A device
controller maintains some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices that it controls and its
local buffer storage.
Typically, operating systems have a device driver for each device controller. This device driver
understands the device controller and provides the rest of the operating system with a uniform interface
to the device. The CPU and the device controllers can execute in parallel, competing for memory
cycles. To ensure orderly access to the shared memory, a memory controller synchronizes
access to the memory.
Interrupts
Consider a typical computer operation: a program performing I/O. To start an I/O operation, the device
driver loads the appropriate registers in the device controller. The device controller, in turn, examines
the contents of these registers to determine what action to take (such as “read a character from the
keyboard”). The controller starts the transfer of data from the device to its local buffer. Once the
transfer of data is complete, the device controller informs the device driver that it has finished its
operation.
The device driver then gives control to other parts of the operating system, possibly returning the data
or a pointer to the data if the operation was a read. For other operations, the device driver returns status
information such as “write completed successfully” or “device busy”. But how does the controller
inform the device driver that it has finished its operation? This is accomplished via an interrupt.
Programmed I/O
An alternative is programmed I/O, in which the CPU itself repeatedly examines the device's registers
to see whether data are ready; for this reason, programmed I/O is sometimes known as polled I/O.
Once the CPU detects a “data ready” condition, it acts according to instructions programmed for that
particular register.
The advantage of this approach is that we have programmatic control over the behaviour of each
device. Program changes can adjust to the number and type of devices in the system, as well as their
polling priorities and intervals. However, constant register polling is a problem: the CPU sits in a
continual “busy wait” loop until it receives an I/O request, doing no useful work until there is I/O to
process. Owing to these limitations, programmed I/O is best suited to special-purpose systems such as
automated teller machines and systems that control or monitor environmental events.
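The busy-wait behaviour is easy to simulate. The Device class below is a made-up stand-in for a controller's status register, not a real driver:

```python
class Device:
    """Simulated controller: the status register reads 'ready'
    only after a fixed number of polls."""
    def __init__(self, ready_after):
        self.polls_seen = 0
        self.ready_after = ready_after

    def status_ready(self):
        self.polls_seen += 1
        return self.polls_seen >= self.ready_after

    def read_buffer(self):
        return "data"

dev = Device(ready_after=5)
wasted_polls = 0
while not dev.status_ready():   # busy wait: no useful work happens here
    wasted_polls += 1

print(f"busy-waited through {wasted_polls} polls before 'data ready'")
print("transferred:", dev.read_buffer())
```

Every iteration of the `while` loop is a CPU cycle spent only on checking the register, which is exactly the cost that interrupt-driven I/O eliminates.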
Interrupt-Driven I/O
Interrupt-driven I/O can be regarded as the opposite of programmed I/O. Instead of the CPU
continually asking its attached devices whether they have any input, the devices notify the CPU when
they have data to send. The CPU proceeds with other tasks until a device requesting service interrupts
it. Interrupts are usually signalled with a bit in the CPU flags register called an interrupt flag.
Once the interrupt flag is set, the OS interrupts whatever program is currently executing and saves that
program’s state and variable information. The system then fetches the address of the I/O service
routine. After the CPU has finished servicing the I/O, it restores the information it saved from the
program that was running when the interrupt occurred, and program execution resumes.
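Unix signals are a userspace analogue of this mechanism: execution is suspended, a service routine runs, and the interrupted code resumes where it left off. A Unix-only sketch using a one-second alarm timer as the "device":

```python
import signal

serviced = []

def service_routine(signum, frame):
    # The "interrupt handler": the kernel saved our context, runs this
    # routine, and resumes the interrupted code when it returns.
    serviced.append(signum)

signal.signal(signal.SIGALRM, service_routine)
signal.alarm(1)          # request a timer "interrupt" in one second

while not serviced:
    signal.pause()       # do nothing at all until an interrupt arrives

print("serviced signal number", serviced[0])
```

Note the contrast with polling: `signal.pause()` consumes no CPU while waiting, whereas a busy-wait loop would.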
Interrupt-driven I/O is similar to programmed I/O in that the service routine can be modified to
accommodate hardware changes. Since vectors for the various types of hardware are usually kept in the
same locations on systems running the same type and level of OS, these vectors can easily be changed
to point to vendor-specific code. For instance, if someone invents a new type of hard drive that is not
yet supported by a popular OS, the manufacturer of that drive can update the disk I/O vector to
point to code particular to that drive. Unfortunately, some early DOS-based virus writers
deployed the same idea: they would replace the DOS I/O vectors with pointers to their own
disreputable code, destroying many systems in the process. Fortunately, modern operating systems
have mechanisms in place to safeguard against this kind of vector manipulation.
Storage hierarchy (Memory hierarchy)
The design constraints on a computer’s memory can be summed up by three questions: How much?
How fast? How expensive? The question of how much is rather open-ended: if the capacity is there,
applications will likely be developed to use it. The question of how fast is, in a way, easier to answer:
to attain the best performance, the memory must be able to keep up with the processor. That is, as the
processor executes instructions, we would not want it to pause while waiting for instructions or
operands.
For a practical system, the cost of memory must be realistic in relationship to other components. So
there is the need for a trade-off to be made among the three key characteristics which are: capacity,
access time, and cost. Thus, a mixture of technologies is deployed to implement memory systems, and
across this spectrum of technologies, the following relationships hold: a faster access time means a
greater cost per bit; a greater capacity means a smaller cost per bit; and a greater capacity means a
slower access time.
The dilemma facing the designer is clear: he/she would like to use memory technologies that provide
large-capacity memory, both because the capacity is needed and because the cost per bit is low.
However, to meet performance requirements, the designer has to use expensive, relatively lower-
capacity memories with short access times. The way out of this dilemma is not to rely on a single
memory component or technology, but to deploy a memory hierarchy, as depicted below.
As one goes down the hierarchy, the following occur: (a) decreasing cost per bit; (b) increasing
capacity; (c) increasing access time; and (d) decreasing frequency of access to the memory by the
processor.
Thus, the smaller, more expensive, faster memories are supplemented by larger, cheaper and slower
memories. The key to the success of this organisation is item (d), decreasing frequency of access. The
use of two levels of memory to reduce access time works in theory, but only if conditions (a) through
(d) apply. A variety of technologies makes it possible to satisfy conditions (a) through (c); fortunately,
condition (d) is also generally valid.
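The payoff of condition (d) can be quantified with the standard two-level effective-access-time formula; the access times below are assumed, illustrative figures, not measurements:

```python
# Two-level hierarchy: effective access time as a function of the hit ratio.
T_FAST = 10e-9    # assumed fast-level (cache) access time: 10 ns
T_SLOW = 100e-9   # assumed slow-level (main memory) access time: 100 ns

def effective_access_time(hit_ratio):
    # A hit costs one fast access; a miss costs the fast-level check
    # plus a full slow-level access.
    return hit_ratio * T_FAST + (1 - hit_ratio) * (T_FAST + T_SLOW)

for h in (0.50, 0.90, 0.99):
    print(f"hit ratio {h:.2f}: {effective_access_time(h) * 1e9:.1f} ns")
```

As the hit ratio climbs toward 1, the effective access time approaches that of the small, fast memory even though most of the capacity sits in the slow level; this is why decreasing frequency of access down the hierarchy makes the whole organisation work.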
Windows Architecture
Windows has a highly modular architecture. Each system function is managed by just one component
of the OS. The rest of the OS and all applications access that function through the responsible
component using standard interfaces. Key system data can only be accessed through the appropriate
function. In principle, any module can be removed, upgraded, or replaced without rewriting the entire
system or its standard application programming interfaces (APIs).
The kernel-mode components of Windows include the following:
Executive: Contains the base OS services, such as memory management, process and thread
management, security, I/O, and interprocess communication.
Kernel: Controls execution of the processor(s). The Kernel manages thread scheduling,
process switching, exception and interrupt handling, and multiprocessor synchronization.
Unlike the rest of the Executive and the user level, the Kernel’s own code does not run in
threads.
Hardware abstraction layer (HAL): Maps between generic hardware commands and
responses and those unique to a specific platform. It isolates the OS from platform-specific
hardware differences. The HAL makes each computer’s system bus, direct memory access
(DMA) controller, interrupt controller, system timers, and memory module look the same to
the Executive and Kernel components. It also delivers the support needed for symmetric
multiprocessing (SMP), explained subsequently.
Device drivers: Dynamic libraries that extend the functionality of the Executive. These
include hardware device drivers that translate user I/O function calls into specific hardware
device I/O requests and software components for implementing file systems, network
protocols, and any other system extensions that need to run in kernel mode.
Windowing and graphics system: Implements the graphical user interface (GUI) functions,
such as dealing with windows, user interface controls, and drawing.
The Windows Executive includes components for specific system functions and provides an
API for user-mode software. Following is a brief description of each of the Executive modules:
I/O manager: Provides a framework through which I/O devices are accessible to applications,
and is responsible for dispatching to the appropriate device drivers for further processing. The
I/O manager implements all the Windows I/O APIs and enforces security and naming for
devices, network protocols, and file systems (using the object manager).
Cache manager: Improves the performance of file-based I/O by causing recently referenced
file data to reside in main memory for quick access, and by deferring disk writes by holding
the updates in memory for a short time before sending them to the disk.
Object manager: Creates, manages, and deletes Windows Executive objects and abstract data
types that are used to represent resources such as processes, threads, and synchronization
objects. It enforces uniform rules for retaining, naming, and setting the security of objects. The
object manager also creates object handles, which consist of access control information and a pointer
to the object.
Plug-and-play manager: Determines which drivers are required to support a particular device
and loads those drivers.
Power manager: Coordinates power management among various devices and can be
configured to reduce power consumption by shutting down idle devices, putting the processor
to sleep, and even writing all of memory to disk and shutting off power to the entire system.
Security reference monitor: Enforces access-validation and audit-generation rules. The
Windows object-oriented model allows for a consistent and uniform view of security, right
down to the fundamental entities that make up the Executive. Thus, Windows uses the same
routines for access validation and for audit checks for all protected objects, including files,
processes, address spaces,
and I/O devices.
Virtual memory manager: Manages virtual addresses, physical memory, and the paging files
on disk. Controls the memory management hardware and data structures which map virtual
addresses in the process’s address space to physical pages in the computer’s memory.
Process/thread manager: Creates, manages, and deletes process and thread objects.
Configuration manager: Responsible for implementing and managing the system registry,
which is the repository for both system wide and per-user settings of various parameters.
Advanced local procedure call (ALPC) facility: Implements an efficient cross-process
procedure call mechanism for communication between local processes implementing services
and subsystems. Similar to the remote procedure call (RPC) facility used for distributed
processing.