DOS Unit I
Examples of distributed systems include:
o Telecommunication networks
o Cluster computing
o Grid computing
o Data rendering
Types
There are three types of Distributed OS.
System Calls
A system call is a way for a user program to interface with the operating
system. The program requests a service, and the OS satisfies the request by
executing the corresponding system call. A system call can be invoked from
assembly language or from a high-level language such as C or Pascal; in a
high-level language, system calls appear as predefined functions that the
program can invoke directly.
This section covers system calls in the operating system and their types.
A system call is a method for a computer program to request a service from the
kernel of the operating system.
Below are some ways in which a system call differs from an ordinary user
function:
1. A system call function may create and use kernel processes to perform
asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system
call executes with kernel-mode privilege in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that
are not present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.
System calls are required whenever a user program needs a privileged service,
such as creating or deleting a file, performing I/O, or communicating over a
network. A system call works as follows:
Applications run in an area of memory known as user space. A system call
connects to the operating system's kernel, which executes in kernel space.
When an application makes a system call, it must first obtain permission from
the kernel. It achieves this using an interrupt request, which pauses the
current process and transfers control to the kernel.
If the request is permitted, the kernel performs the requested action, such as
creating or deleting a file. When the operation is finished, the kernel copies
the results from kernel space back to user space and returns them to the
application, which then resumes execution.
A simple system call, such as retrieving the system date and time, may take
only a few nanoseconds. A more complicated system call, such as connecting to
a network device, may take a few seconds. To avoid bottlenecks, most operating
systems launch a distinct kernel thread for each system call. Modern operating
systems are multi-threaded, which means they can handle multiple system calls
at the same time.
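The mechanism above can be seen from a user program. As a minimal sketch,
Python's os module wraps the POSIX system calls directly, so each call below
crosses from user space into the kernel and back:

```python
import os

# getpid() is a very fast system call: the kernel simply returns the
# identifier of the calling process.
pid = os.getpid()
print("current pid:", pid)

# write() sends bytes straight to a file descriptor (1 = standard output).
# The kernel copies the buffer from user space and performs the I/O.
n = os.write(1, b"hello from a system call\n")
print("bytes written:", n)
```

Both calls trap into the kernel; the difference in cost comes only from how
much work the kernel must do before returning.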
There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Now, you will learn about all the different types of system calls one-by-one.
Process Control
Process control system calls are used to direct processes. Examples include
create process, terminate process, load, execute, end, abort, and wait.
File Management
File management system calls are used to handle files. Examples include create
file, delete file, open, close, read, and write.
Device Management
Device management system calls are used to deal with devices. Examples include
request device, release device, read, write, and get or set device attributes.
Information Maintenance
Information maintenance system calls transfer information between the user
program and the operating system. Examples include get or set the time and
date, get or set system data, and get or set process, file, or device
attributes.
Communication
Communication system calls are used for interprocess communication. Examples
include create and delete communication connections, send and receive
messages, and transfer status information.
Some typical Windows and Unix system calls for each category are listed below:

Process Control: CreateProcess(), ExitProcess(), WaitForSingleObject()
(Windows); fork(), exit(), wait() (Unix)
File Management: CreateFile(), ReadFile(), WriteFile(), CloseHandle()
(Windows); open(), read(), write(), close() (Unix)
Device Management: SetConsoleMode(), ReadConsole(), WriteConsole() (Windows);
ioctl(), read(), write() (Unix)
Information Maintenance: GetCurrentProcessID(), SetTimer(), Sleep() (Windows);
getpid(), alarm(), sleep() (Unix)
Communication: CreatePipe(), CreateFileMapping(), MapViewOfFile() (Windows);
pipe(), shmget(), mmap() (Unix)
open()
The open() system call allows a process to access a file on a file system. It
allocates resources to the file and provides a handle (file descriptor) that
the process can refer to. Depending on the file system and the file's
structure, a file may be opened by many processes at once or by only a single
process at a time.
read()
It is used to obtain data from a file on the file system. It accepts three arguments
in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file must first be opened with open(); the file descriptor it returns
identifies the file in subsequent read() calls.
wait()
In some systems, a process may have to wait for another process to complete
its execution before proceeding. When a parent process creates a child
process, the parent can suspend its own execution until the child finishes.
The wait() system call suspends the parent process; once the child process has
completed its execution, control returns to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call
is one way for a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes create clones of themselves using the fork() system call. It is one
of the most common ways to create processes in operating systems. After a
fork(), the parent and child run concurrently; the parent typically calls
wait() to suspend itself until the child process completes, at which point
control returns to the parent.
close()
It is used to end file system access. When this system call is invoked, it
signifies that the program no longer requires the file: the buffers are
flushed, the file metadata is updated, and the file's resources are
de-allocated.
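The open()/read()/write()/close() calls described above form a complete file
I/O cycle. A minimal sketch using Python's os module (which wraps these POSIX
calls); the file path is created in a throwaway temporary directory purely for
illustration:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# open() returns a file descriptor; O_CREAT creates the file if it is missing.
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"system calls in action")   # write the buffer to the file
os.close(fd)                              # flush buffers, release resources

# Re-open for reading: read() takes the descriptor and a maximum byte count.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)
print(data)
```

Note the pattern: every read() and write() names the file only through the
descriptor that open() handed back.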
exec()
This system call is invoked when an executable file replaces the executable
image of an already running process. No new process is built, so the old
process identifier stays, but the new program replaces the process's code,
data, stack, and heap.
exit()
The exit() system call is used to end program execution. This call indicates
that execution is complete, which is especially useful in multi-threaded
environments. The operating system reclaims the resources used by the process
after the exit() system call.
OS Structure
An operating system can be implemented using various structures. The structure
of the OS depends mainly on how the various common components of the operating
system are interconnected and melded into the kernel. Depending on this, we
have the following structures of the operating system:
Simple structure:
Such operating systems do not have a well-defined structure; they are small,
simple, and limited systems in which the interfaces and levels of
functionality are not well separated. MS-DOS is an example of such an
operating system: application programs are able to access the basic I/O
routines directly. In these operating systems, the failure of one user program
can crash the entire system.
Diagram of the structure of MS-DOS is shown below.
Advantages of Simple structure:
It delivers better application performance because there are few
interfaces between the application program and the hardware.
It is easy for kernel developers to develop such an operating system.
Disadvantages of Simple structure:
The structure becomes complicated as the system grows, because no clear
boundaries exist between modules.
It does not enforce data hiding in the operating system.
Layered structure:
An OS can be broken into pieces to retain much more control over the system.
In this structure the OS is broken into a number of layers (levels). The
bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the
user interface. The layers are designed so that each layer uses only the
functions of the layers below it. This simplifies debugging: if the
lower-level layers have already been debugged and an error occurs, the error
must lie in the layer currently being debugged.
The main disadvantage of this structure is that at each layer the data needs
to be modified and passed on, which adds overhead to the system. Moreover,
careful planning of the layers is necessary, since a layer can use only
lower-level layers. UNIX is an example of this structure.
Advantages of Layered structure:
Layering is simple to construct and debug, since each layer is built using
only the operations provided by lower layers.
It provides modularity and abstraction: each layer hides certain data
structures, operations, and hardware from higher-level layers.
Properties of Process
Here are the important properties of the process:
Creation of each process requires a separate system call.
It is an isolated execution entity and does not share data and
information with other processes.
Processes use the IPC (Inter-Process Communication) mechanism for
communication, which significantly increases the number of system calls.
Process management takes more system calls.
A process has its own stack, heap memory, and memory map.
Properties of Thread
Here are the important properties of a thread:
A single system call can create more than one thread.
Threads share data and information.
Threads share the code, data, and file segments of their process.
Thread creation and switching are faster and consume fewer resources
than process creation and switching.
Semaphore
A semaphore is a variable that controls the access to a common resource
by multiple processes. The two types of semaphores are binary
semaphores and counting semaphores.
Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the
critical section at a time. This is useful for synchronization and also
prevents race conditions.
Barrier
A barrier does not allow individual processes to proceed until all the
processes reach it. Many parallel languages and collective routines impose
barriers.
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a
loop while checking if the lock is available or not. This is known as busy
waiting because the process is not doing any useful operation even though
it is active.
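The mutual-exclusion idea above can be demonstrated with a short sketch using
Python's threading module: several threads increment a shared counter, and a
lock (a mutex; threading.Semaphore(1) would behave like a binary semaphore
here) serializes entry to the critical section. The thread and iteration
counts are arbitrary example values:

```python
import threading

counter = 0
lock = threading.Lock()            # a mutex: only one thread may hold it

def worker():
    global counter
    for _ in range(100_000):
        with lock:                 # critical section: read-modify-write
            counter += 1           # is not atomic without the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 400000: no increments are lost
```

Without the lock, two threads could read the same old value of counter and
one update would be lost, a classic race condition.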
Approaches to Interprocess Communication
The different approaches to implement interprocess communication are given as
follows −
Pipe
A pipe is a unidirectional data channel used for communication between two
related processes, typically a parent process and a child it has created. One
process writes data into one end of the pipe and the other process reads it
from the other end.
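A pipe connects two related processes through a pair of file descriptors. A
minimal parent/child sketch using Python's os module (which wraps the POSIX
pipe() and fork() calls):

```python
import os

r, w = os.pipe()               # r: read end, w: write end

pid = os.fork()
if pid == 0:
    os.close(r)                # child only writes: close the unused read end
    os.write(w, b"ping")
    os._exit(0)
else:
    os.close(w)                # parent only reads: close the unused write end
    msg = os.read(r, 4)
    os.waitpid(pid, 0)
    print("parent received:", msg)
```

Closing the unused ends is important in real programs: a reader only sees
end-of-file once every write end of the pipe has been closed.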
Socket
The socket is the endpoint for sending or receiving data in a network. This
is true for data sent between processes on the same computer or data sent
between different computers on the same network. Most of the operating
systems use sockets for interprocess communication.
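As a sketch of socket-based IPC within one machine, socketpair() creates two
already-connected sockets; the same send/receive API would be used between
processes on different computers:

```python
import socket

# Two connected endpoints; data written to one can be read from the other.
a, b = socket.socketpair()
a.sendall(b"hello over a socket")
data = b.recv(1024)            # read up to 1024 bytes from the peer
print(data)
a.close()
b.close()
```

For communication across a network, the endpoints would instead be created
with socket.socket() plus bind()/listen()/accept() on one side and connect()
on the other.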
File
A file is a data record that may be stored on a disk or acquired on demand
by a file server. Multiple processes can access a file as required. All
operating systems use files for data storage.
Signal
A signal is a notification sent to a process to inform it that an event has
occurred. Signals are a limited form of interprocess communication: one
process (or the kernel) can send a signal to another process to report events
such as interrupts or termination requests.
Shared Memory
Shared memory is a memory region that can be accessed simultaneously by
multiple processes. One process creates the region and the others attach to
it; because data is not copied between the processes, shared memory is one of
the fastest IPC mechanisms, but access to it must be synchronized, for example
with semaphores.
Message Queue
Multiple processes can read and write data to the message queue without
being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
A diagram that demonstrates message queue and shared memory methods of
interprocess communication is as follows −
Scheduling
Definition
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process
is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling
queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in
this queue.
Device queues − The processes which are blocked due to unavailability
of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready queue and the run queue, which can have only one entry per processor
core on the system; in the above diagram, the run queue has been merged with
the CPU.
Two-State Process Model
Two-state process model refers to running and non-running states which are
described below −
1. Running
When a new process is created, it enters the system in the running state.
2. Not Running
Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process,
and the queue is implemented using a linked list. The dispatcher works as
follows: when a process is interrupted, it is transferred to the waiting
queue; if the process has completed or aborted, it is discarded. In either
case, the dispatcher then selects a process from the queue to execute.
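The dispatcher loop of the two-state model can be sketched with a simple
queue simulation; the process names and their remaining work units are
invented example values:

```python
from collections import deque

# The Not Running queue: each entry pairs a process with its remaining
# units of work (a linked-list queue, modelled here with a deque).
queue = deque([("P1", 2), ("P2", 1), ("P3", 3)])
order = []

while queue:
    name, remaining = queue.popleft()        # dispatcher: Not Running -> Running
    order.append(name)                       # the process runs one time unit
    if remaining > 1:
        queue.append((name, remaining - 1))  # interrupted: back to the queue
    # else: process completed, so it is discarded

print(order)   # ['P1', 'P2', 'P3', 'P1', 'P3', 'P3']
```

Each pass of the loop is one dispatch decision: run the head of the queue,
then either discard it or re-queue it.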
Schedulers
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from
the job queue and loads them into memory for execution, making them available
for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal;
time-sharing operating systems have no long-term scheduler. The long-term
scheduler is invoked when a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with a chosen set of criteria. It performs the
change from the ready state to the running state of a process: the CPU
scheduler selects one process from among the processes that are ready to
execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to
execute next. Short-term schedulers run far more frequently than long-term
schedulers, and are therefore faster.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The medium-term
scheduler is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A
suspended process cannot make any progress towards completion. In this
condition, to remove the process from memory and make space for other
processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.
Comparison among Schedulers
1. The long-term scheduler is a job scheduler; the short-term scheduler is a
CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The short-term scheduler is the fastest of the three; the long-term
scheduler is the slowest; the medium-term scheduler lies in between.
3. The long-term scheduler controls the degree of multiprogramming; the
short-term scheduler provides less control over it; the medium-term scheduler
reduces it by swapping processes out.
4. The long-term scheduler is almost absent or minimal in time-sharing
systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from a pool and loads them into
memory; the short-term scheduler selects among ready processes and allocates
the CPU; the medium-term scheduler re-introduces swapped-out processes into
memory so their execution can continue.
Context Switch
A context switch is the mechanism of storing and restoring the state
(context) of a CPU in the Process Control Block so that process execution can
be resumed from the same point at a later time. Using this technique, a
context switcher enables multiple processes to share a single CPU. Context
switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing
another, the state of the currently running process is stored in its process
control block. Then the state of the process to run next is loaded from its
own PCB and used to set the PC, registers, and so on. At that point, the
second process can start executing.
Context switches are computationally intensive, since register and memory
state must be saved and restored. To reduce context-switching time, some
hardware systems provide two or more sets of processor registers. When a
process is switched out, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
Scheduling algorithms
A process scheduler assigns different processes to the CPU based on
particular scheduling algorithms. Popular algorithms include First Come First
Served (FCFS), Shortest Job First (SJF), priority scheduling, shortest
remaining time, Round Robin (RR), and multilevel queue scheduling. The worked
examples below all use the same set of processes:

Process  Arrival Time  Execution Time
P0       0             5
P1       1             3
P2       2             8
P3       3             6

First Come First Served (FCFS): processes run in arrival order, giving
service times P0 = 0, P1 = 5, P2 = 8, P3 = 16.
Wait time of each process (service time - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
Average wait time: (0 + 4 + 6 + 13) / 4 = 5.75

Shortest Job First (SJF, non-preemptive): at each decision point the shortest
available job runs, giving service times P0 = 0, P1 = 5, P3 = 8, P2 = 14.
Wait time of each process:
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5
Average wait time: (0 + 4 + 12 + 5) / 4 = 5.25

Priority scheduling (non-preemptive, larger number = higher priority):

Process  Arrival Time  Execution Time  Priority  Service Time
P0       0             5               1         0
P1       1             3               2         11
P2       2             8               1         14
P3       3             6               3         5

Wait time of each process:
P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2
Average wait time: (0 + 10 + 12 + 2) / 4 = 6

Round Robin (quantum = 3): each process runs for at most 3 time units before
being moved to the back of the ready queue.
Wait time of each process:
P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11
Average wait time: (9 + 2 + 12 + 11) / 4 = 8.5
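The FCFS wait times from the worked example can be reproduced with a short
script; the process names, arrival times, and burst times are taken from the
example, everything else is illustrative:

```python
# Processes from the worked example: (name, arrival time, burst time)
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

clock = 0
waits = {}
for name, arrival, burst in procs:      # FCFS: run in arrival order
    start = max(clock, arrival)         # CPU may sit idle until the job arrives
    waits[name] = start - arrival       # wait = service time - arrival time
    clock = start + burst               # CPU is busy until the job finishes

print(waits)                            # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
avg = sum(waits.values()) / len(waits)
print("average wait:", avg)             # 5.75
```

The same loop, with the process list re-sorted at each decision point, would
yield the SJF and priority schedules.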
Dining-Philosophers Problem:
In the Dining Philosophers Problem, K philosophers are seated around a
circular table with one chopstick between each pair of philosophers. A
philosopher may eat only if he can pick up the two chopsticks adjacent to
him. A chopstick may be picked up by either of its adjacent philosophers,
but not by both at once. The problem is to allocate these limited
resources to a group of processes in a deadlock-free and starvation-free
manner.
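One classic deadlock-free solution is resource ordering: every philosopher
picks up the lower-numbered chopstick first, so a circular wait can never
form. A minimal sketch with Python threads (the philosopher count K = 5 and
the 10 meals per philosopher are arbitrary example values):

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    # Acquire chopsticks in global index order to break the wait cycle.
    first, second = sorted((i, (i + 1) % K))
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # eating with both chopsticks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)   # every philosopher eats 10 times; no deadlock occurs
```

With naive "left chopstick then right chopstick" acquisition, all K
philosophers could each hold one chopstick and wait forever; the ordering
guarantees at least one philosopher can always proceed.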
Note:-
Refer Classwork note for Algorithms Example of Above Classical IPC Problems