Introduction To Operating Systems, System Structures
OPERATING SYSTEMS
(18CS43)
MODULE 1
TEXT BOOK:
1. Operating System Principles – Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, 7th Edition, Wiley-India, 2009.
REFERENCE BOOKS:
1. Operating Systems: A Concept-Based Approach – D. M. Dhamdhere, 2nd Edition, Tata McGraw-Hill, 2002.
2. Operating Systems – P. C. P. Bhatt, 2nd Edition, PHI, 2006.
3. Operating Systems – Harvey M. Deitel, 3rd Edition, Addison-Wesley, 1990.
INTRODUCTION TO OPERATING SYSTEMS, SYSTEM STRUCTURES
System View:
A computer system has many resources that may be used to solve a problem.
• The OS acts as the manager of these resources (resource allocator).
• The OS must decide how to allocate these resources to programs and users.
• The OS needs to control the various I/O devices and user programs.
• An OS is a control program that manages the execution of user programs to prevent errors and improper use of the computer.
Operating System Definition
• The OS is a resource allocator that manages all resources.
• It decides between conflicting requests so that resources are used efficiently and fairly.
• The OS acts as a control program: it controls the execution of programs to prevent errors and
improper use of the computer.
Computer-System Operation
• I/O devices and the CPU can execute concurrently.
• Each device controller is in charge of a particular device type.
• Each device controller has a local buffer.
• CPU moves data from/to main memory to/from local buffers.
• Device controller informs CPU that it has finished its operation by causing an
interrupt.
• The occurrence of an event is usually signalled by an interrupt from either the hardware
or the software. Hardware may trigger an interrupt at any time by sending a signal to the
CPU. Software may trigger an interrupt by executing a special operation
called a system call (also called a monitor call).
Common Functions of Interrupts:
An operating system is interrupt driven. An interrupt transfers control to the interrupt
service routine, generally through the interrupt vector, which contains the addresses of all
the service routines.
Interrupt Handling:
• The operating system preserves the state of the CPU by storing registers
and the program counter.
• Determines which type of interrupt has occurred.
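As a rough illustration (hypothetical handler names, not from the text), the interrupt vector can be pictured as a table of service-routine addresses indexed by the interrupt number; dispatching an interrupt is then a table lookup followed by a call:

#include <stdio.h>

#define NUM_VECTORS 256

typedef void (*interrupt_handler_t)(void);

static void timer_handler(void)    { puts("timer tick serviced"); }
static void keyboard_handler(void) { puts("keyboard input serviced"); }

/* The "interrupt vector": a table of service-routine addresses. */
static interrupt_handler_t interrupt_vector[NUM_VECTORS] = {
    [0x20] = timer_handler,
    [0x21] = keyboard_handler,
};

/* Dispatch: after the CPU state is saved, look up the routine and call it. */
void dispatch_interrupt(int vector_number)
{
    if (vector_number >= 0 && vector_number < NUM_VECTORS &&
        interrupt_vector[vector_number] != NULL)
        interrupt_vector[vector_number]();   /* jump to the service routine */
}

int main(void)
{
    dispatch_interrupt(0x21);   /* simulate a keyboard interrupt */
    return 0;
}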
Interrupt Timeline:
Storage Structure
• Main memory – the only large storage medium that the CPU can access directly.
• Secondary storage – extension of main memory that provides large non-
volatile storage capacity.
• Magnetic disks – rigid metal or glass platters covered with magnetic
recording material.
• Disk surface is logically divided into tracks, which are subdivided into sectors. The
disk controller determines the logical interaction between the device and the computer.
Storage Hierarchy
• Storage systems organized in hierarchy
• Speed
• Cost
• Volatility
Caching – copying information into a faster storage system; main memory can be viewed as
the last cache for secondary storage.
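A toy C sketch of the caching idea (hypothetical names; real caches are managed by hardware or by the OS): look in the faster storage first, and fall back to the slower level only on a miss.

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4

struct cache_entry { int block; char data[64]; int valid; };
static struct cache_entry cache[CACHE_SLOTS];

/* Slow path: pretend to read the block from secondary storage. */
static void read_from_disk(int block, char *out)
{
    snprintf(out, 64, "contents of block %d", block);
}

/* Check the fast cache first; on a miss, fetch from disk and cache it. */
void read_block(int block, char *out)
{
    int slot = block % CACHE_SLOTS;
    if (cache[slot].valid && cache[slot].block == block) {
        strcpy(out, cache[slot].data);          /* cache hit */
        return;
    }
    read_from_disk(block, cache[slot].data);    /* cache miss */
    cache[slot].block = block;
    cache[slot].valid = 1;
    strcpy(out, cache[slot].data);
}

int main(void)
{
    char buf[64];
    read_block(7, buf);   /* miss: fetched from "disk" */
    read_block(7, buf);   /* hit: served from the faster cache */
    printf("%s\n", buf);
    return 0;
}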
Batch Systems:
• To improve processing speed, operators batched together jobs with similar
needs and ran them through the computer as a group. Such systems are called batch systems.
• In batch systems the CPU may be idle for some time because the mechanical
I/O devices are slower than the electronic CPU. Later
improvements in technology and the introduction of disks resulted in faster I/O
devices. The introduction of disks allowed the OS to keep all the jobs on the
disk.
• The OS could then schedule the jobs so as to use the resources and perform the
tasks efficiently.
Disadvantages of Batch Systems:
• Turnaround time can be large from the user's point of view.
• Difficult to debug the program.
• A job can enter into infinite loop.
• A job could corrupt the monitor.
• Due to lack of protection scheme, one job may affect the pending jobs.
Multi programmed System:
• If two or more programs are in memory at the same time sharing the
processor, the system is referred to as a multiprogrammed OS.
• Multiprogramming increases CPU utilization by organizing the jobs so that the CPU always
has one job to execute.
• Jobs entering the systems are kept in memory.
• OS picks the job from memory & it executes it.
• Having several jobs in the memory at the same time requires some form of memory
management.
• A multiprogrammed system monitors the state of all active programs and system
resources and ensures that the CPU is never idle as long as there are jobs to run.
• While executing a particular job, if the job has to wait for a task such as an I/O
operation to complete, the CPU switches to another job and starts
executing it; when the first job finishes waiting, the CPU switches back to
it.
• This keeps both the CPU and the I/O devices busy.
The following figure shows the memory layout of a multiprogrammed OS:
DESKTOP SYSTEMS:
• PCs appeared in the 1970s. At that time they lacked the features needed to protect
an operating system from user programs, and they were neither multiuser nor
multitasking.
• The goals of these OSs changed over time, and newer systems include
Microsoft Windows and the Apple Macintosh.
• The Apple Macintosh OS was ported to more advanced hardware and includes new
features such as virtual memory and multitasking.
• Microcomputers, developed for a single user in the 1970s, can now
accommodate software of large capacity and run at greater speeds.
• MS-DOS is an example of a microcomputer OS; such systems are used by commercial,
educational, and government enterprises.
MS-DOS:
• MS-DOS is an example of a single-tasking system.
• It has a command interpreter that is invoked when the computer is started.
• To run a program, MS-DOS uses a simple method: it does not create a new process.
It loads the program into memory, writing over most of itself, and gives the program
as much memory as possible.
• It lacks general multitasking capabilities.
PROCESS MANAGEMENT:
• A process is a program in execution.
• The process abstraction is a fundamental OS mechanism for the
management of concurrent program execution.
• The OS responds to requests for work by creating processes.
• A process requires certain resources, such as CPU time, memory, and I/O devices. These
resources are allocated to the process either when it is created or while it is running.
• When a process terminates, the OS reclaims all the reusable resources.
• A program by itself is not a process; a program is a passive entity.
The OS is responsible for the following activities of the process management:
• Creating & destroying of the user & system process.
• Allocating H/w resources among the processes.
• Controlling the progress of the process.
• Provides mechanism for process communication.
• Provides mechanism for deadlock handling.
MEMORY MANAGEMENT:
• Main memory is central to the operation of the modern computer.
• Main memory is an array of bytes ranging in size from hundreds of thousands to
billions.
• I/O operations read and write data in main memory.
• Main memory is generally the only large storage device that the CPU can address
and access directly.
• When a program is to be executed, it must be loaded into memory and mapped to
absolute addresses.
• While it is executing, it accesses data and instructions from memory by generating
absolute addresses.
• When the program terminates, all the memory it used becomes available again.
The OS is responsible for the following activities:
• Keeping track of which parts of memory are in use and by whom.
• Deciding which processes are to be loaded into memory.
• Allocating and deallocating memory space as needed.
FILE MANAGEMENT:
• File management is one of the most visible components of an OS.
• Computers store data on different types of physical media, such as magnetic disks,
magnetic tapes, and optical disks.
• For convenient use of the computer system, the OS provides a uniform logical view of
information storage.
• The OS maps files onto physical media and accesses these files via the storage devices.
• A file is a logical collection of related information.
• Files may hold programs or data. Data files may be numeric, alphabetic, or
alphanumeric.
• Files can be organized into directories.
The OS is responsible for the following activities:
• Creating & deleting of files.
• Creating & deleting directories.
• Supporting primitives for manipulating files & directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.
STORAGE MANAGEMENT:
• Storage management is the mechanism by which the computer system stores information so that it
can be retrieved later.
• Secondary storage is used to hold both data and programs.
• Since main memory is too small to hold everything, and is volatile, secondary storage devices are used.
• The magnetic disk is of central importance to the computer system.
The OS is responsible for the following activities:
• Free space management.
• Storage allocation.
• Disk scheduling: the overall speed of the computer system may depend on the speed of the
disk subsystem.
Networking:
• Networking enables users to share resources and speeds up computations.
• Processes communicate with one another through various communication lines,
such as high-speed buses or networks.
• The following parameters are considered while designing a network:
o Topology of N/w.
o Type of N/w.
o Physical media.
o Communication protocol.
o Routing algorithms.
DISTRIBUTED SYSTEMS
• A distributed system is one in which hardware or software components located at
networked computers communicate and coordinate their actions only
by passing messages.
• A distributed system looks to its users like an ordinary centralized system but runs on multiple,
independent CPUs.
• Distributed systems depend on networking for their functionality: networking allows
the systems to communicate, to share computational
tasks, and to provide a rich set of features to users.
• Networks may vary by the protocols used, the distances between nodes, and the transport media.
o Protocols: TCP/IP, ATM etc.
o Network: LAN, MAN, WAN etc.
o Transport Media: copper wires, optical fibres & wireless transmissions
Advantages of Distributed Systems:
1. Resource sharing.
2. Higher reliability.
3. Better price performance ratio.
4. Shorter response time.
5. Higher throughput.
6. Incremental growth.
Client-Server Systems:
Centralized systems today act as server systems that satisfy requests from client systems. Server
systems can be classified as follows:
Compute-Server Systems: Provide an interface to which a client can send a request to
perform some action; in response, the server executes the action and sends the result back to
the client.
File-Server Systems: Provide a file-system interface where clients can create, update,
read, and delete files.
Peer-to-Peer Systems:
• PCs, introduced in the 1970s, were initially considered standalone computers.
• With the widespread use of the Internet, PCs were connected to computer networks.
• With the introduction of the Web in the mid-1990s, network connectivity became an
essential component of a computer system.
• All modern PCs and workstations can run a web browser.
• The OS also includes the system software that enables the computer to access the Web.
• The processors can communicate with one another through various
communication lines, such as high-speed buses or telephone lines.
SPECIAL-PURPOSE SYSTEMS:
Clustered Systems:
• Like parallel systems, clustered systems have multiple CPUs, but they are
composed of two or more individual systems coupled together.
• Clustered systems share storage and are closely linked via a LAN.
• Clustering is usually done to provide high availability.
• Clustered systems are integrated at both the hardware and software levels.
o Hardware clusters share high-performance disks.
o Software clusters provide unified control of the computer systems in a
cluster.
• A layer of cluster software runs on the cluster nodes. Each node can monitor one or more
of the others.
• If the monitored machine fails, the monitoring machine takes ownership of its
storage and restarts the applications that were running on the failed machine.
Clustered systems can be categorized into two groups
1. Asymmetric clustering
• One machine is in hot-standby mode while the other is running the application.
• The hot-standby machine does nothing but monitor the active server.
• If the server fails, the hot-standby machine becomes the active server.
2. Symmetric clustering
• Two or more hosts run the application and monitor each other.
• This mode is more efficient because it uses all the available hardware.
Real-Time Systems
• Real-time systems were originally used to control autonomous systems
such as satellites, robots, and hydroelectric dams.
• A real-time system is one that must react to inputs and respond to them quickly.
• A real-time system must not be late in responding to an event.
• A real-time system has well-defined time constraints.
• Real-time systems are of two types:
1. Hard real-time systems – critical tasks are guaranteed to complete within their deadlines.
2. Soft real-time systems – a critical real-time task gets priority over other tasks, but completion by a deadline is not guaranteed.
COMPUTING ENVIRONMENTS
Different types of computing environments are:
• Traditional Computing.
• Web Based Computing.
• Embedded Computing.
OPERATING SYSTEM SERVICES:
One set of operating-system services provides functions that are helpful to the user.
• User interface: Almost all operating systems have a user interface (UI).
This interface can take several forms. One is a command-line interface (CLI),
which uses text commands and a method for entering them (say, a program to allow
entering and editing of commands). Another is a batch interface, in which
commands and directives to control those commands are entered into files, and
those files are executed. Most commonly a graphical user interface (GUI) is
used. Here, the interface is a window system with a pointing device to direct I/O,
choose from menus, and make selections and a keyboard to enter text. Some
systems provide two or all three of these variations.
• Program execution: The system must be able to load a program into memory
and to run that program. The program must be able to end its execution, either
normally or abnormally (indicating an error).
System Calls:
• System calls provide an interface between a process and the OS.
• The calls are generally available as assembly-language instructions.
• Several higher-level languages have been defined to replace assembly-language programming for systems work.
• A system call instruction generates an interrupt and allows the OS to gain control of the
processor.
• System calls occur in different ways, depending on the computer in use.
• Sometimes more information is needed than simply the identity of the desired system call.
• The exact type and amount of information needed vary according to the
particular OS and call.
Example of a system call sequence: The sequence of system calls used by a simple program that copies
data from one file to another is as follows:
• The first input the program needs is the names of the two files: the input file and the output file.
• One way to obtain the file names is to prompt the user: write a prompting
message on the screen and then read from the keyboard the characters that
define the two files (each of these steps requires a system call).
• Once the two file names are obtained, the program opens the input file and creates the
output file using the open() and create() system calls.
• When the program tries to open the input file, it may find that there is no file of
that name or that the file is protected against access. In these cases, the program
prints a message on the console and then terminates abnormally. If the input file
exists, a new output file is created using the create() system call. If the
output file already exists, the program may abort using the abort() system call,
or it may delete the existing file and create a new output file.
• Now both the files will be set up and we enter a loop that reads from the input file
and writes to the output file. Each read and write must return status information
regarding possible error conditions.
• Finally, after the entire file is copied, the program closes both files, writes a
message on the console, and terminates normally.
The following diagram depicts how the system calls are used:
Figure: The handling of a user application invoking the open() system call
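A minimal C sketch of this copy loop, assuming a POSIX system; the file names input.txt and output.txt are placeholders, and the create step is folded into open() with O_CREAT:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    /* open the input file and create the output file */
    int in  = open("input.txt", O_RDONLY);
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {                /* e.g. no such file, or protected */
        perror("open");
        exit(1);                            /* terminate abnormally */
    }

    /* loop: read from the input file and write to the output file,
       checking the status returned by each system call */
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            exit(1);
        }
    }

    close(in);                              /* close both files and */
    close(out);                             /* terminate normally   */
    return 0;
}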
Process Control
• end, abort
• load, execute
• create process, terminate process
• get process attributes, set process attributes
• wait for time, wait event, signal event
• allocate and free memory
File management
• create file, delete file
• open, close, read, write, reposition
• get file attributes, set file attributes
Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
Information maintenance
• get time or date, set time or date
• get system data, set system data
• get process, file, or device attributes
• set process, file, or device attributes
Communications
• create, delete communication connection
• send, receive messages
• Transfer status information
• attach or detach remote devices
DEVICE MANAGEMENT:
• The system calls are also used for accessing devices.
• Many of the system calls used for files are also used for devices.
• In a multiuser environment, a request must first be made to use a device, to ensure
its exclusive use (the request system call).
• After use, the device must be released with the release system call; the device is then free
to be used by another user.
o These functions are similar to the open and close system calls for files.
• Read, write & reposition system calls may be used with devices.
• Many operating systems, including MS-DOS and UNIX, merge the I/O devices and the files into a
combined file-device structure.
• In the file-device structure, I/O devices are identified by file names.
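Because devices are identified by file names in such systems, the ordinary file system calls work on them. A minimal sketch, assuming a Linux-style system where the device file /dev/urandom exists:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char byte;

    /* "request" the device by opening its file name */
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* the ordinary read system call works on the device too */
    if (read(fd, &byte, 1) == 1)
        printf("random byte from the device: %d\n", byte);

    close(fd);   /* "release" the device */
    return 0;
}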
OPERATING SYSTEM STRUCTURES:
1. Simple Structures
• Simple structure OS are small, simple & limited systems.
• The structure is not well defined.
• MS-DOS is an example of simple structure OS.
2. Layered Approach:
• In this approach the OS is divided into a number of layers, where one layer is built on top
of another layer.
• The bottom layer is the hardware and the highest layer is the user interface.
• Each OS layer is an implementation of an abstract object, i.e. the encapsulation of data and the
operations that manipulate those data.
• The main advantage of layered approach is the modularity i.e. each layer uses the
services & functions provided by the lower layer.
• This approach simplifies the debugging & verification.
• Once first layer is debugged the correct functionality is guaranteed while
debugging the second layer.
• If an error is identified then it is a problem in that layer because the layer below it
is already debugged.
• Each layer is designed with only the operations provided by the lower level layers.
• Each layer tries to hide some data structures, operations & hardware from the
higher level layers.
• A problem with layered implementations is that they tend to be less efficient than
other types.
• The following diagram depicts the layered approach:
3. Micro Kernels:
• A microkernel is a small OS core that provides the foundation for modular extensions.
• The main function of the micro kernels is to provide communication facilities
between the current program and various services that are running in user space.
• This approach was supposed to provide a high degree of flexibility and
modularity.
• The benefit of this approach is as follows:
o Easier to extend a microkernel.
o Easier to port the operating system to new architectures.
o More reliable (less code is running in kernel mode).
o More secure.
• All the new services are added to the user space & do not need the modification
of kernel.
• This approach also provides more security & reliability.
• Most services run as user processes rather than as kernel processes.
• This approach was popularized by its use in the Mach OS; the Mac OS X kernel (Darwin) is based
in part on the Mach microkernel. The structure is shown in the diagram below:
4. Modules:
• Most modern operating systems implement kernel modules.
• Uses object-oriented approach.
• Each core component is separate.
Virtual Machines:
• CPU scheduling can create the appearance that users have their own processor.
• Spooling and a file system can provide virtual card readers and virtual line printers.
VMware Architecture:
SYSTEM BOOT:
• Operating systems are designed to run on any of a class of machines; the system must
be configured for each specific computer site.
• SYSGEN program obtains information concerning the specific configuration of the
hardware system.
• Booting – starting a computer by loading the kernel.
• Bootstrap program – code stored in ROM that is able to locate the kernel, load it into
memory, and start its execution.
Process Concept
Process:
• A process is a program in execution.
• A process is more than the program code, which is sometimes known as the
text section.
• The process includes
o the current activity, as represented by the value of the program counter.
o the process stack, which contains temporary data.
o a data section, which contains global variables.
o a heap, which is memory that is dynamically allocated during process run time.
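As a small illustration (not from the text), the following C program marks where each part of the process image lives:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;               /* data section: global variable */

int main(void)                        /* main() itself lives in the text section */
{
    int local = 42;                   /* stack: temporary data of main() */
    int *dynamic = malloc(sizeof *dynamic);   /* heap: allocated at run time */

    *dynamic = local + global_counter;
    printf("%d\n", *dynamic);         /* the program counter tracks the current instruction */

    free(dynamic);
    return 0;
}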
Threads: A single thread of control allows the process to perform only one task at one time.
Example: The user cannot simultaneously type in characters and run the
spell checker within the same process. Many modern operating systems have
extended the process concept to allow a process to have multiple threads of
execution and thus to perform more than one task at a time.
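A minimal POSIX-threads sketch (an assumption of this note, not from the text; compile with -pthread) of one process performing two tasks at once — one thread standing in for accepting typed input, the main thread standing in for the spell checker:

#include <pthread.h>
#include <stdio.h>

/* Task 1: stands in for "accepting typed characters". */
static void *read_input(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++)
        printf("reading input...\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, read_input, NULL);   /* second thread of control */

    /* Task 2, in the original thread: stands in for the spell checker. */
    for (int i = 0; i < 3; i++)
        printf("checking spelling...\n");

    pthread_join(tid, NULL);   /* wait for the other thread to finish */
    return 0;
}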
Process Scheduling:
• The objective of multiprogramming is to have some process running
at all times, to maximize CPU utilization.
• The objective of time sharing is to switch the CPU among processes
so frequently that users can interact with each program while it is
running.
• The process scheduler selects an available process (possibly from a
set of several available processes) for program execution on the CPU.
Scheduling Queues:
Job queue:
• As processes enter the system, they are put into a job queue.
• This consists of all processes in the system.
Ready queue
• The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue.
• This queue is generally stored as a linked list.
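As a rough C sketch (hypothetical structure and function names, not from the text), a ready queue kept as a linked list of process control blocks might look like this:

#include <stdlib.h>

/* Simplified PCB: just a pid and a link to the next ready process. */
struct pcb {
    int pid;
    struct pcb *next;
};

static struct pcb *ready_head = NULL;   /* front of the ready queue */
static struct pcb *ready_tail = NULL;

/* Enqueue a process that has become ready to run. */
void make_ready(struct pcb *p)
{
    p->next = NULL;
    if (ready_tail) ready_tail->next = p;
    else            ready_head = p;
    ready_tail = p;
}

/* The scheduler dispatches the process at the front of the queue. */
struct pcb *dispatch_next(void)
{
    struct pcb *p = ready_head;
    if (p) {
        ready_head = p->next;
        if (!ready_head) ready_tail = NULL;
    }
    return p;
}

int main(void)
{
    struct pcb a = { 1, NULL }, b = { 2, NULL };
    make_ready(&a);
    make_ready(&b);
    struct pcb *next = dispatch_next();   /* next->pid == 1 */
    return next ? 0 : 1;
}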
Queuing diagram:
Context Switch
• A context switch is the task of switching the CPU to another process by performing a
state save of the current process and a state restore of a different
process.
• Perform a state save of the current state of the CPU, be it in kernel
or user mode, and then a state restore to resume operations.
• When a context switch occurs, the kernel saves the context of the
old process in its PCB and loads the saved context of the new
process scheduled to run.
• Context-switch time is pure overhead, because the system does no
useful work while switching.
• Its speed varies from machine to machine, depending on the
memory speed, the number of registers that must be copied, and
the existence of special instructions.
• Typical speeds are a few milliseconds.
• Context-switch times are highly dependent on hardware support. For instance,
some processors provide multiple sets of registers; a context switch then simply
requires changing the pointer to the current register set.
Operations on Processes:
The processes in most systems can execute concurrently, and they may be
created and deleted dynamically.
• The process at the top of the tree is the sched process, with pid of 0.
• The sched process creates several children processes—including
pageout and fsflush. These processes are responsible for
managing memory and file systems.
• The sched process also creates the init process, which serves as
the root parent process for all user processes.
• Two children of init are inetd and dtlogin. inetd is responsible for networking services
such as telnet and ftp.
• dtlogin is the process representing a user
login screen, which in turn creates the
sdt_shel process.
• Below sdt_shel, a user's command-line shell—the C shell, or csh—is created.
• We also see a csh process with pid of 7778 representing a user who
has logged onto the system using telnet .
• This user has started the Netscape browser (pid of 7785) and the emacs editor (pid of
8105).
• When a process creates a new process, two possibilities exist in terms of execution:
o The parent continues to execute concurrently with its children.
o The parent waits until some or all of its children have terminated.
• There are also two possibilities in terms of the address space of the new process:
o The child process is a duplicate of the parent process (it has
the same program and data as the parent).
o The child process has a new program loaded into it.
The following C program (a sketch completing the fragment from the text) illustrates creating a new process with fork():

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    pid_t pid = fork();                                            /* fork another process */
    if (pid < 0) { fprintf(stderr, "Fork Failed\n"); return 1; }   /* error occurred */
    else if (pid == 0) printf("child process\n");                  /* child branch */
    else { wait(NULL); printf("Child Complete\n"); }               /* parent waits for child */
    return 0;
}
Figure: Communication models (a) Message passing model (b) Shared memory model
Shared-Memory Systems
• Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory.
• A shared-memory region resides in the address space of the process
creating the shared-memory segment.
• Other processes that wish to communicate using this shared-memory
segment must attach it to their address space.
• They can then exchange information by reading and writing data in the shared areas.
• The form of the data and the location are determined by these
processes and are not under the operating system's control.
• The processes are also responsible for ensuring that they are not
writing to the same location simultaneously.
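The classic bounded-buffer illustration from the text keeps a shared circular buffer. The producer half of that example is sketched below in the same fragment style as the Consumer code that follows; BUFFER_SIZE, item, and nextProduced are placeholder names:

#define BUFFER_SIZE 10
typedef int item;                  /* placeholder item type */

item buffer[BUFFER_SIZE];          /* lives in the shared-memory region */
int in = 0;                        /* next free slot   */
int out = 0;                       /* next filled slot */

Producer
while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; // do nothing -- buffer is full
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}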
Consumer
while (true) {
    while (in == out)
        ; // do nothing -- buffer is empty
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
Two types of buffers can be used. They are:
• The unbounded buffer places no practical limit on the size of
the buffer. The consumer may have to wait for new items, but
the producer can always produce new items.
• The bounded buffer assumes a fixed buffer size. In this case,
the consumer must wait if the buffer is empty, and the
producer must wait if the buffer is full.
Message-Passing Systems
• Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space.
• For example, a chat program used on the World Wide Web could be designed so that
chat participants communicate with one another by exchanging messages.
• A message-passing facility provides at least two operations:
o send(message) and receive(message).
• Messages sent by a process can be of either fixed or variable size.
• If only fixed-sized messages can be sent, the system-level implementation is
straightforward, but it makes the task of programming more difficult.
• Variable-sized messages require a more complex system-level implementation, but
programming becomes simpler.
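One concrete realization of send() and receive() between processes that share no memory is an ordinary UNIX pipe. A minimal sketch, assuming a POSIX system (the message text is a placeholder):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char msg[] = "hello";                      /* fixed-size message for simplicity */
    char buf[sizeof msg];

    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                         /* child: the receiver */
        read(fd[0], buf, sizeof buf);          /* receive(message)    */
        printf("received: %s\n", buf);
        return 0;
    }

    write(fd[1], msg, sizeof msg);             /* parent: send(message) */
    wait(NULL);
    return 0;
}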
Naming:
• In direct communication, each process that wants to communicate must
explicitly name the recipient or sender of the communication.
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
• A link is associated with exactly two processes. Between each pair of processes, there
exists exactly one link.
o Symmetry in addressing: both the sender process and
the receiver process must name the other to communicate.
o Asymmetry in addressing: only the sender names the recipient;
the recipient is not required to name the sender.
• In this scheme, the send() and receive() primitives are defined as follows:
o send(P, message)—Send a message to process P.
o receive(id, message)—Receive a message from any process; id is set to the name of the sending process.
• The disadvantage in both of these schemes (symmetric and
asymmetric) is the limited modularity of the resulting process
definitions.
• In indirect communication, the messages are sent to and received from mailboxes, or
ports.
• A mailbox can be viewed abstractly as an object into which messages
can be placed by processes and from which messages can be
removed.
• Each mailbox has a unique identification.
o send(A, message)—Send a message to mailbox A.
o receive(A, message)—Receive a message from mailbox A.
• In this scheme, a communication link has the following properties:
o A link is established between a pair of processes only if both
members of the pair have a shared mailbox.
o A link may be associated with more than two processes.
o Between each pair of communicating processes, there may be
a number of different links, with each link corresponding to
one mailbox.
• A mailbox may be owned either by a process or by the operating system. The OS provides mechanisms that allow a process to:
o Create a new mailbox.
o Send and receive messages through the mailbox.
o Delete a mailbox.
Synchronization