hardware, and the topmost layer, Layer N, provides an interface to the application programs/users. The figure below depicts the architecture of a layered operating system.
➔ Communication happens only between adjacent layers; bypassing of layers is not allowed. That is, a layer, say layer M, can request services only from the layer immediately below it (layer M-1), and can provide services only to the layer immediately above it (layer M+1). A layer only needs to know what services are offered by the layer below it, not how those services are implemented. Essentially, the implementation of a service is separated from its interface. For example, a layer providing processor scheduling may internally use the FCFS or SJF algorithm to provide the service, but a layer requiring services from this layer is completely unaware of such details. Moreover, the scheduling algorithm can easily be changed to, say, the Round Robin algorithm without disturbing the layers above it. This is known as encapsulation. A great advantage of encapsulation is that the implementation of a layer can be changed without affecting the other layers, provided the interface remains the same; a minimal sketch of this idea appears below. Layering thus makes it easier to build, maintain and enhance an operating system. The OS can be debugged starting at the zeroth layer, adding one layer at a time until the entire system works correctly. In this approach, pinpointing the location of an error is easier.
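The following sketch (not from the text; the function and type names are hypothetical) illustrates the encapsulation idea in C: layers above the scheduler see only a fixed interface, so the policy behind it (FCFS, SJF, Round Robin) can be swapped without touching their code.

```c
#include <stdio.h>

/* Hypothetical interface exposed by the scheduling layer.
 * Callers see only this function type, never the policy behind it. */
typedef int (*pick_next_fn)(const int burst_times[], int n);

/* FCFS: simply pick the first ready process. */
static int pick_fcfs(const int burst_times[], int n) {
    (void)burst_times;
    return n > 0 ? 0 : -1;
}

/* SJF: pick the process with the shortest burst time. */
static int pick_sjf(const int burst_times[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (best < 0 || burst_times[i] < burst_times[best])
            best = i;
    return best;
}

/* A layer above the scheduler calls through the interface only; swapping
 * pick_fcfs for pick_sjf (or a Round Robin variant) does not change this code. */
static void dispatch(pick_next_fn pick) {
    int bursts[] = {8, 3, 5};
    printf("next process index: %d\n", pick(bursts, 3));
}

int main(void) {
    dispatch(pick_fcfs);  /* policy A */
    dispatch(pick_sjf);   /* policy B: same interface, different implementation */
    return 0;
}
```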
➔ On the other hand, this layered approach also makes these operating systems somewhat less efficient compared to monolithic operating systems, as all requests pass through multiple layers of software before they reach the hardware. For example, if a program needs to read data from a file, it will execute a system call. The system-call interface layer will trap this call and invoke the file system management layer with arguments such as the file name and the desired operation. The file system management layer will in turn call the memory management layer, passing a pointer to the memory location corresponding to the file. The memory management layer will use the services of the processor scheduler layer, which will finally allocate the processor to carry out the desired operation. As a result, the overheads incurred at each layer may make the overall operation slower; a brief user-level view of this path follows.
➔ Additionally, at times it may not be possible to decide which functionality should go in a particular layer. There may be overlapping functionalities, or layers that cover too many or too few features. Thus it may happen that some layers do not do much work while others are heavily loaded, creating performance bottlenecks.
➔ Examples of layered operating systems are VAX/VMS and UNIX.
➔ There are many advantages of using the microkernel approach. This approach makes the system very robust: if there is a problem with a particular server, the service can be reconfigured and restarted without having to restart the entire operating system. The system is also very extensible; that is, a new service can be added simply by adding a new server without impacting the other parts of the system. As the kernel and servers are separated from each other, it is also possible to build different operating systems catering to particular environments using a single microkernel. For example, a number of operating systems like Mac OS X, Tru64 UNIX and some flavours of Linux are implemented on top of the Mach microkernel. Finally, as only the microkernel interacts with the underlying hardware, porting the operating system to various platforms is easier.
➔ Despite these features, first-generation microkernels were somewhat inefficient and slow due to the overheads incurred by the extensive message passing between application programs and the various servers. Additionally, despite the name, many microkernels were not very small and occupied a considerable portion of memory. However, subsequent generations of microkernels have addressed these problems in various ways, and as a result, recent microkernels such as L4 and QNX are much smaller in size and yet very efficient.
# Linux
➔ Linux does not use a microkernel approach.
➔ Linux is structured as a collection of modules, a number of which can be automatically loaded
and unloaded on demand. These relatively independent blocks are referred to as loadable
modules. A module is an object file whose code can be linked to and unlinked from the kernel at
runtime. A module implements some specific function, such as a file system, a device driver, or
some other feature of the kernel’s upper layer. A module does not execute as its own process or
thread, although it can create kernel threads for various purposes as necessary. Rather, a
module is executed in kernel mode on behalf of the current process. Thus, although Linux may be
considered monolithic, its modular structure overcomes some of the difficulties in developing and
evolving the kernel.
➔ The Linux loadable modules have two important characteristics:
➔ Dynamic linking: A kernel module can be loaded and linked into the kernel while the kernel is
already in memory and executing. A module can also be unlinked and removed from memory at
any time.
➔ Stackable modules: The modules are arranged in a hierarchy. Individual modules serve as
libraries when they are referenced by client modules higher up in the hierarchy, and as clients
when they reference modules further down.
➔ Dynamic linking facilitates configuration and saves kernel memory. In Linux, a user program or user can explicitly load and unload kernel modules using the insmod and rmmod commands (a minimal module skeleton is sketched after the list below). The kernel itself monitors the need for particular functions and can load and unload modules as needed. With stackable modules, dependencies between modules can be defined. This has two benefits:
➔ 1. Code common to a set of similar modules (e.g., drivers for similar hardware) can be moved
into a single module, reducing replication.
➔ 2. The kernel can make sure that needed modules are present, refraining from unloading a
module on which other running modules depend, and loading any additional required modules
when a new module is loaded.
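As a concrete illustration (a minimal sketch, not taken from the text), a loadable module is ordinary C code built against the kernel headers; module_init and module_exit register the functions run when the module is linked in and unlinked again.

```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a loadable kernel module");

/* Called when the module is linked into the running kernel (insmod). */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello module loaded\n");
    return 0;   /* 0 = success; nonzero would abort loading */
}

/* Called when the module is unlinked and removed (rmmod). */
static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

Once compiled against a kernel build tree, such a module would typically be loaded with insmod hello.ko and removed with rmmod hello; lsmod shows the usage counter and dependency information described below.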
➔ The figure below illustrates the structures used by Linux to manage modules. It shows the list of kernel modules after only two modules have been loaded: FAT and VFAT.
Each module is defined by two tables: the module table and the symbol table (kernel_symbol).
The module table includes the following elements:
➔ *next: Pointer to the following module. All modules are organized into a linked list. The list
begins with a pseudo module.
➔ *name: Pointer to module name
➔ size: Module size in memory pages
➔ usecount: Module usage counter. The counter is incremented when an operation involving the
module’s functions is started and decremented when the operation terminates.
➔ flags: Module flags
➔ nsyms: Number of exported symbols
➔ ndeps: Number of referenced modules
➔ *syms: Pointer to this module’s symbol table
➔ *deps: Pointer to list of modules that are referenced by this module
➔ *refs: Pointer to list of modules that use this module
➔ The symbol table lists symbols that are defined in this module and used elsewhere; a simplified C rendering of these structures is sketched below.
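Rendered as a hypothetical C structure (field names follow the list above; the actual struct module and struct kernel_symbol in the kernel sources differ and are version-dependent), the per-module bookkeeping looks roughly like this:

```c
/* Simplified sketch of the module bookkeeping structures described above. */

/* A symbol exported by a module for use by other modules. */
struct kernel_symbol {
    unsigned long value;   /* address of the exported symbol */
    const char   *name;    /* symbol name */
};

struct module_entry {
    struct module_entry  *next;      /* next module in the kernel's linked list */
    const char           *name;      /* module name */
    unsigned long         size;      /* module size in memory pages */
    long                  usecount;  /* incremented when an operation using the
                                        module starts, decremented when it ends */
    unsigned long         flags;     /* module flags */
    unsigned int          nsyms;     /* number of exported symbols */
    unsigned int          ndeps;     /* number of referenced modules */
    struct kernel_symbol *syms;      /* this module's symbol table */
    struct module_entry  *deps;      /* modules referenced by this module */
    struct module_entry  *refs;      /* modules that use this module */
};
```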
# Kernel Components
➔ The figure below shows the main components of the Linux kernel as implemented on an IA-64
architecture (e.g., Intel Itanium). The figure shows several processes running on top of the
kernel. Each box indicates a separate process, while each squiggly line with an arrowhead
represents a thread of execution. The kernel itself consists of an interacting collection of
components, with arrows indicating the main interactions. The underlying hardware is also
depicted as a set of components with arrows indicating which kernel components use or control
which hardware components. All of the kernel components, of course, execute on the processor
but, for simplicity, these relationships are not shown. Briefly, the principal kernel components
are the following:
➔ Signals: The kernel uses signals to call into a process. For example, signals are used to notify a
process of certain faults, such as division by zero.
➔ System calls: The system call is the means by which a process requests a specific kernel service. There are several hundred system calls, which can be roughly grouped into six categories: file system, process, scheduling, interprocess communication, socket (networking), and miscellaneous. A short example of invoking a system call appears after this list.
➔ Processes and scheduler: Creates, manages, and schedules processes.
➔ Virtual memory: Allocates and manages virtual memory for processes.
➔ File systems: Provide a global, hierarchical namespace for files, directories, and other
file-related objects and provide file system functions.
➔ Network protocols: Support the Sockets interface to users for the TCP/IP protocol suite.
➔ Character device drivers: Manage devices that require the kernel to send or receive data one
byte at a time, such as terminals, modems, and printers.
➔ Block device drivers: Manage devices that read and write data in blocks, such as various forms
of secondary memory (magnetic disks, CD-ROMs, etc.).
➔ Network device drivers: Manage network interface cards and communications ports that
connect to network devices, such as bridges and routers.
➔ Traps and faults: Handle traps and faults generated by the processor, such as a memory fault.
➔ Physical memory: Manages the pool of page frames in real memory and allocates pages for
virtual memory.
➔ Interrupts: Handle interrupts from peripheral devices.
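As an assumed illustration of the system-call component, the sketch below requests a kernel service both through the usual C library wrapper and through the generic syscall() interface, which invokes a system call by number.

```c
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* Most programs reach the kernel through C library wrappers. */
    pid_t pid = getpid();               /* wrapper around the getpid system call */
    printf("pid via wrapper: %d\n", pid);

    /* syscall() invokes a system call directly by its number. */
    long raw = syscall(SYS_getpid);
    printf("pid via syscall(): %ld\n", raw);
    return 0;
}
```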
➔ Scheduling related system calls:
sched_get_priority_max: Return the maximum priority value that can be used with the scheduling algorithm identified by policy.
sched_setscheduler: Set both the scheduling policy (e.g., FIFO) and the associated parameters for the process pid.
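A minimal sketch (assumed example) of these two calls: the process switches itself to the FIFO real-time policy at the maximum priority that policy allows. Doing this typically requires root privileges.

```c
#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param param;

    /* Highest priority value permitted for the FIFO policy. */
    param.sched_priority = sched_get_priority_max(SCHED_FIFO);

    /* pid 0 means "the calling process"; usually needs CAP_SYS_NICE/root. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO, priority %d\n", param.sched_priority);
    return 0;
}
```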
➔ IPC related system calls:
semctl: Perform the control operation specified by cmd on the semaphore set semid.
shmctl: Allow the user to receive information on a shared memory segment; set the owner, group, and permissions of a shared memory segment; or destroy a segment.
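The following sketch (assumed example) shows shmctl in use: a shared memory segment is created, queried with IPC_STAT, and then marked for removal with IPC_RMID; semctl plays the analogous role for semaphore sets.

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    /* Create a private 4 KB shared memory segment. */
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* shmctl with IPC_STAT fills in information about the segment. */
    struct shmid_ds info;
    if (shmctl(shmid, IPC_STAT, &info) == 0)
        printf("segment size: %zu bytes\n", (size_t)info.shm_segsz);

    /* shmctl with IPC_RMID marks the segment for destruction. */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
```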
➔ Socket related system calls:
bind: Assign the local IP address and port for a socket. Return 0 for success and -1 for error.
connect: Establish a connection between the given socket and the remote socket associated with sockaddr.
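As an assumed illustration, the sketch below creates a TCP socket and binds it to a local address and port; a client would instead call connect() with the remote server's address.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* Create a TCP socket. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return 1; }

    /* bind: attach the socket to a local address and port (here 127.0.0.1:8080). */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("bind");   /* returns -1 on error, 0 on success */
        close(fd);
        return 1;
    }
    printf("socket bound to 127.0.0.1:8080\n");

    /* A client would instead call connect() with the remote address. */
    close(fd);
    return 0;
}
```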