OS Unit 1 Formatted
2 Marks
1. What are the five major activities of an operating system with regard to
process management? ( APR’14 )
The various activities that the operating system performs with regard to process
management are mainly process scheduling and context switching. The five major
activities are:
The creation and deletion of both user and system processes
The suspension and resumption of processes
The provision of mechanisms for process synchronization
The provision of mechanisms for process communication
The provision of mechanisms for deadlock handling
Multiprocessor Operating System refers to the use of two or more central processing
units (CPUs) within a single computer system. These multiple CPUs are in close
communication, sharing the computer bus, memory and other peripheral devices. These
systems are referred to as tightly coupled systems.
Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known as a
Context switch.
When a context switch occurs, the kernel saves the context of the old process
in its PCB and loads the saved context of the new process scheduled to run.
Context-switch time is pure overhead, because the system does no useful work
while switching.
The nodes in distributed systems can be arranged in the form of client/server
systems or peer-to-peer systems.
6. Name any three components of the system (Nov ’15)
Multiprocessing:
Processing of multiple processes at the same time by multiple CPUs.
It utilizes multiple CPUs.
It permits parallel processing.
Less time taken to process the jobs.
It facilitates much more efficient utilization of the devices of the computer system.
Multiprogramming:
Keeps several programs in main memory at the same time and executes them concurrently utilizing a single CPU.
It utilizes a single CPU.
Context switching takes place.
More time taken to process the jobs.
Less efficient than multiprocessing.
10. Differentiate tightly coupled systems and loosely coupled systems? (May 2017)
12. What is the use of job queues, ready queues & device queues? (May 2017)
The classical definition of wait in pseudocode is:
wait(S) {
    while (S <= 0)
        ; // no-op
    S--;
}
The classical definition of signal in pseudocode is:
signal(S) {
    S++;
}
15. Mention any one method for inter process communication [Nov 2018]
1. Pipes –
This allows flow of data in one direction only, analogous to a simplex system
(e.g., keyboard). Data written at one end is buffered until the reading process
receives it; the communicating processes must have a common origin.
2. Named Pipes (FIFO) –
This is a pipe with a specific name; it can be used by processes that do not have a
shared common origin. In a FIFO, data written to the pipe first is read first.
3. Message Queuing –
This allows messages to be passed between processes using either a single queue
or several message queues. The queues are managed by the system kernel, and the
messages are coordinated using an API.
4. Semaphores –
These are used in solving problems associated with synchronization and to avoid race
conditions. They are integer values greater than or equal to 0.
5. Shared memory –
A region of memory is mapped into the address space of two or more processes,
which can then exchange data by reading and writing it directly.
6. Sockets –
This method is mostly used to communicate over a network between a client and a
server. It allows for a standard connection which is computer and OS independent.
The user services are kept in user address space, and kernel services are kept
in kernel address space; this also reduces the size of the kernel and the size of the
operating system as well.
18. Why the operating system is viewed as the resource allocator [Nov 2019]
A computer system has many resources – hardware & software that may be
required to solve a problem, like CPU time, memory space, file-storage space, I/O
devices & so on.
The OS acts as a manager for these resources so it is viewed as a resource
allocator.
The OS is viewed as a control program because it manages the execution of user
programs to prevent errors & improper use of the computer.
19. What is the difference between job and process scheduler [Nov 2019]
Mainframes are a type of computer that generally are known for their large size, amount of
storage, processing power and high level of reliability. They are primarily used by large
organizations for mission-critical applications requiring high volumes of data processing.
Processing Unit
The CPU has various printed circuit boards, memory modules, different processors, and
interfaces for each channel. All channels work as a communication medium between the
input/output terminals and the memory modules. The main objective of using these
channels is to transfer data and to manage the system components.
Controller Unit
The control unit is also called the bus. A mainframe computer system has several buses for
different devices such as tape, disk, etc., which are further linked to its storage unit area.
Storage Unit
The storage unit is used to perform different tasks such as inserting, saving, retrieving,
and accessing data. It contains several devices such as hard drives, tape drives, and
punch cards, all controlled by the CPU. These devices have capacities many times
greater than those of a PC.
Multiprocessors
A mainframe computer system contains a multiprocessor unit, meaning it has multiple
processors for processing massive amounts of data in a small time frame (with error
handling and interrupt handling).
Motherboard
A mainframe's motherboard contains several high-speed processors, main memory
(RAM), and other hardware parts, which perform their functions through its bus
architecture. In this motherboard, a 128-bit bus concept is used.
Input/output Channels
A mainframe computer system uses I/O channel techniques such as IOCDS, ESCON,
FICON, CHPID, and more.
IOCDS – IOCDS stands for I/O Configuration Data Set.
ESCON – ESCON stands for Enterprise Systems Connection.
FICON – FICON stands for Fibre Connection.
Every computer contains a hard disk for storing data long term, but a mainframe
computer saves all of the data within itself in application form. When users try to log in
remotely through its connected terminals, the mainframe computer allows all remote
terminals to access their files as well as programs.
Because all data and program files are stored in one mainframe system, productivity
and efficiency are enhanced. Administrators can install all applications and data on the
mainframe system, and they can decide how many users should have access to them. So,
a mainframe system acts as a strong firewall against harmful intruders' attacks.
A mainframe computer system has a limited amount of processing time to split among
all users who are currently logged in to the system. The mainframe system decides which
priorities should be linked with different types of users, and the administrator has the
power to decide how processor time is assigned.
IBM z15
IBM z14
Tianhe-1A; NUDT YH Cluster
Jaguar; Cray XT5
Nebulae; Dawning TC3600 Blade
Mainframe computers are popular for their long-life performance: a mainframe
can run smoothly for up to 50 years after proper installation.
Mainframe application programs achieve outstanding performance due to
large-scale memory management.
Mainframe computer systems are capable of sharing their workload across multiple
processors and I/O terminals, and this enhances their performance.
Mainframe systems have the ability to run different complicated operating
systems such as UNIX, VMS, and IBM operating systems like z/OS and z/VM.
Mainframe systems have a low probability of errors and bugs during
processing; if any errors do enter the system, they are able to remove them.
Mainframe systems are designed to support "Tightly Coupled Clustering
Technology"; with this feature, approximately 32 systems can be managed under a
single system image. If a system fails due to hardware component damage,
running tasks can be shifted to another live system without any data corruption
in the process.
A mainframe system can support a maximum number of input/output devices.
A mainframe system is able to execute multiple programs concurrently.
In a mainframe system, a virtual storage system can be used.
It can provide I/O bandwidth in large amounts.
It supports fault-tolerant computing.
It is capable of managing several users.
It can also support centralized computing.
Advantages of Mainframe Computer
Virtualization System
Using its virtualization property, a mainframe computer system can be divided into
small logical segments to eliminate memory limitations, and we can get great computing
performance.
Reliability
A mainframe computer system is able to identify its errors and bugs, and it can recover
from them without any other embedded resources. Today, modern mainframe computer
systems are capable of running continuously for 40 to 50 years without errors.
Self-Serviceability
If a mainframe computer system gets errors during processing, it is capable of fixing
them in a short duration without degrading its performance.
Protection
Mainframe computers are mostly used by large-scale organizations because they need to
secure their confidential data. So, a mainframe computer system pays extra attention to
authentication and protection of stored data.
Flexible Customization
A mainframe computer system's customization is performed as per the client's
requirements, and a mainframe computer can support multiple operating systems at the
same time. Mainframe computer systems are popular for their long-lasting
performance (up to 40 years).
Scheduling Queues
All processes, upon entering into the system, are stored in the Job Queue.
Processes in the Ready state are placed in the Ready Queue.
Processes waiting for a device to become available are placed in Device Queues.
There are unique device queues available for each I/O device.
A new process is initially put in the Ready queue. It waits in the ready queue until it is
selected for execution (or dispatched). Once the process is assigned to the CPU and is
executing, one of the following several events can occur:
The process could issue an I/O request, and then be placed in the I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready
state, and is then put back in the ready queue. A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and resources
deallocated.
From the job queue, the Job Scheduler selects processes and loads them into
memory for execution.
Primary aim of the Job Scheduler is to maintain a good degree of
Multiprogramming.
An optimal degree of Multiprogramming means the average rate of process
creation is equal to the average departure rate of processes from the execution
memory.
This problem is called starvation which may arise if the short-term scheduler makes
some mistakes while selecting the job.
This scheduler removes processes from memory (and from active contention for
the CPU), and thus reduces the degree of multiprogramming. At some later time, the
process can be reintroduced into memory and its execution can be continued where it
left off.
This scheme is called swapping. The process is swapped out, and is later swapped in,
by the medium term scheduler.
1.3 Explain Different type of system calls? (APR’15) [Nov 2018] [Nov 2014]
Explain the various types of system calls with an example for each [Nov 2019]
A system call is a mechanism that provides the interface between a process and the
operating system. It is a programmatic method in which a computer program requests a
service from the kernel of the OS.
System call offers the services of the operating system to the user programs via API
(Application Programming Interface). System calls are the only entry points for the kernel
system.
Types of System calls - Here are the five types of system calls used in OS:
Process Control
File Management
Device Management
Information Maintenance
Communications
Process Control - These system calls perform tasks such as process creation, process
termination, etc.
End and Abort
Load and Execute
Create Process and Terminate Process
Wait and Signal Event
Allocate and free memory
File Management - File management system calls handle file manipulation jobs like
creating a file, reading, and writing, etc.
Create a file
Delete file
Open and close file
Read, write, and reposition
Device Management - Device management does the job of device manipulation like
reading from device buffers, writing into device buffers, etc.
Request and release device
Logically attach/ detach devices
Get and Set device attributes
Information Maintenance - It handles information and its transfer between the OS and
the user program.
Get or set time and date
Get process and device attributes
wait()
In some systems, a process needs to wait for another process to complete its
execution. This type of situation occurs when a parent process creates a child
process, and the execution of the parent process remains suspended until its child
process executes.
The suspension of the parent process automatically occurs with a wait() system
call. When the child process ends execution, the control moves back to the parent
process.
fork()
Processes use this system call to create processes that are a copy of themselves.
With the help of this system Call parent process creates a child process, and the
execution of the parent process will be suspended till the child process executes.
exec()
This system call runs an executable file in the context of an already running
process, replacing the older executable file.
However, the original process identifier remains, as a new process is not built; the
stack, heap, data, etc. are replaced by those of the new program.
kill():
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CS T51-OPERATING SYSTEMS Page 16
RAJIV GANDHI COLLEGE OF ENGINEERING AND TECHNOLOGY PUDUCHERRY
The kill() system call is used by OS to send a termination signal to a process that
urges the process to exit.
However, a kill system call does not necessarily mean killing the process and can
have various meanings.
exit():
The exit() system call is used to terminate program execution.
Specially in the multi-threaded environment, this call defines that the thread
execution is complete.
The OS reclaims resources that were used by the process after the use of exit()
system call.
An Operating System supplies different kinds of services to both the users and to the
programs as well. It provides application programs (that run within an Operating
system) an environment in which to execute freely, and it provides users the services to
run various programs in a convenient manner.
Here is a list of common services offered by almost all operating systems:
User Interface
Program Execution
File system manipulation
Input / Output Operations
Communication
Resource Allocation
Error Detection
Accounting
Security and protection
Usually, operating system interfaces come in three forms or types, subdivided
depending on the interface. These are:
Command line interface
Batch based interface
Graphical User Interface
o The command line interface (CLI) usually deals with using text commands
and a technique for entering those commands.
o The batch interface (BI): commands and directives to control those
commands are entered into files, and those files get executed.
o Another type is the graphical user interface (GUI): a window system with
a pointing device (like a mouse or trackball) to point to I/O, a menu-driven
interface for making choices from a number of lists, and a keyboard for
entering text.
The operating system must have the capability to load a program into memory and
execute that program.
Furthermore, the program must be able to end its execution, either normally or
abnormally / forcefully.
Programs need to read and write files and directories.
File handling portion of operating system also allows users to create and delete
files by specific name along with extension, search for a given file and / or list file
information.
Some programs include permissions management for allowing or denying
access to files or directories based on file ownership.
Error Detection
Errors may occur within CPU, memory hardware, I/O devices and in the user
program.
For each type of error, the OS takes adequate action for ensuring correct and
consistent computing.
Accounting
This service of the operating system keeps track of which users are using how
much and what kinds of computer resources have been used for accounting or
simply to accumulate usage statistics.
Cooperating processes are those that can affect or are affected by other processes running
on the system. Cooperating processes may share data with each other.
Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes, leading to faster and
more efficient completion of the required tasks.
Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A mechanism is
required so that the processes can access the files in parallel to each other.
Convenience
There are many tasks that a user needs to do such as compiling, printing, editing
etc. It is convenient if these tasks can be managed by cooperating processes.
Computation Speedup
Subtasks of a single task can be performed parallely using cooperating processes.
This increases the computation speedup as the task can be executed faster.
However, this is only possible if the system has multiple processing elements.
Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages.
Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such as
memory, variables, files, databases etc. Critical section is used to provide data
integrity and writing is mutually exclusive to prevent inconsistent data.
In the above diagram, Process P1 and P2 can cooperate with each other using
shared data such as memory, variables, files, databases etc.
Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This
may lead to deadlock if each process is waiting for a message from the other to
perform an operation. Starvation is also possible if a process never receives a
message. A diagram that demonstrates cooperation by communication is given as
follows −
In the above diagram, Process P1 and P2 can cooperate with each other using messages to
communicate.
A cooperating process is one that can affect or be affected by other processes executing
in the system. Cooperating processes can:
1. Directly share a logical address data space (i.e., code & data)
2. Share data only through files/ messages
Example- producer-consumer problem
A producer can produce one item while the consumer is consuming another item.
The producer and consumer must be synchronized, so that the consumer does not try to
consume an item that has not yet been produced. In this situation, the consumer must
wait until an item is produced.
The buffer may either be provided by the operating system through the use of an
inter process-communication (IPC) or by explicitly coded by the application
programmer with the use of shared memory.
#define BUFFER_SIZE 10
typedef struct {
. . . } item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
The shared buffer is implemented as a circular array with two logical pointers: in and out.
The variable in points to the next free position in the buffer;
The variable out points to the first full position in the buffer.
The buffer is empty when in == out;
The buffer is full when ((in + 1) % BUFFER_SIZE) == out.
Producer process
The producer process has a local variable nextproduced in which the new item to be
produced is stored
while (1) {
    /* produce an item in nextproduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextproduced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer process - The consumer process has a local variable nextconsumed in which
the item to be consumed is stored
while (1) {
    while (in == out)
        ; /* do nothing */
    nextconsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextconsumed */
}
Here,
The in variable is used by the producer to identify the next empty slot in the buffer.
The out variable is used by the consumer to identify where it has to consume the next
item.
The counter is used by the producer and consumer to track the number of filled slots in
the buffer.
Shared Resources
1. buffer
2. counter
When the producer and consumer are executed concurrently without any control,
inconsistency arises: the value of counter, which is used by both the producer and the
consumer, will be wrong. The producer and consumer processes share the variables in
and out, initialized to the value 0. The shared buffer is implemented as a circular array
with two logical pointers, in and out: the in variable points to the next free position in
the buffer and the out variable points to the first full position. When in == out the
buffer is empty, and when (in + 1) mod n == out the buffer is full.
1.6 What are the benefits of Inter-Process communication? Explain about how two
different processes can communicate with each other using shared memory?
(NOV’16)
Shared Memory
Message passing
The shared memory in the shared memory model is the memory that can be
simultaneously accessed by multiple processes. This is done so that the processes can
communicate with each other. All POSIX systems, as well as Windows operating systems
use shared memory.
A diagram that illustrates the shared memory model of process communication is given as
follows −
In the above diagram, the shared memory can be accessed by Process 1 and Process 2.
Message passing model allows multiple processes to read and write data to the message
queue without being connected to each other. Messages are stored on the queue until their
recipient retrieves them. Message queues are quite useful for interprocess communication
and are used by most operating systems.
In the above diagram, both the processes P1 and P2 can access the message queue and
store and retrieve data.
The message passing model has slower communication than the shared memory model
because the connection setup takes time.
One of the ways to manage interprocess communication is by using sockets. They provide
point-to-point, two-way communication between two processes. Sockets are an endpoint
of communication and a name can be bound to them. A socket can be associated with one
or more processes.
Types of Sockets
Sequential Packet Socket: This type of socket provides a reliable connection for
datagrams whose maximum length is fixed. This connection is two-way as well as
sequenced.
Datagram Socket: A two-way flow of messages is supported by the datagram
socket. The receiver in a datagram socket may receive messages in a different
order than that in which they were sent. The operation of datagram sockets is
similar to that of passing letters from the source to the destination through the mail.
Stream Socket: Stream sockets operate like a telephone conversation and provide
a two-way and reliable flow of data with no record boundaries. This data flow is
also sequenced and unduplicated.
Raw Socket: The underlying communication protocols can be accessed using the
raw sockets.
Socket Creation
Sockets can be created in a specific domain and of a specific type using the following
declaration − int socket(int domain, int type, int protocol)
If the protocol is not specified in the above system call, the system uses a default protocol
that supports the socket type. The socket handle is returned. It is a descriptor.
The bind function call is used to bind an internet address or path to a socket. This is shown
as follows − int bind(int s, const struct sockaddr *name, int namelen)
Connecting the stream sockets is not a symmetric process. One of the processes acts as a
server and the other acts as a client.
The server specifies the number of connection requests that can be queued using the
following declaration − int listen(int s, int backlog)
The client initiates a connection to the server's socket by using the following declaration −
int connect(int s, struct sockaddr *name, int namelen)
A new socket descriptor which is valid for that particular connection is returned by the
following declaration − int accept(int s, struct sockaddr *addr, int *addrlen)
Stream Closing
The socket is discarded or closed by calling close().
Although pipe can be accessed like an ordinary file, the system actually manages it
as FIFO queue. A pipe file is created using the pipe system call. A pipe has an input end
and an output end.
One can write into a pipe from input end and read from the output end. A pipe
descriptor, has an array that stores two pointers, one pointer is for its input end and the
other pointer is for its output end.
When a process attempts to write into the pipe, the write request is immediately
executed if the pipe is not full.
However, if the pipe is full the process is blocked until the state of the pipe changes.
Similarly, a reading process is blocked if it attempts to read more bytes than are
currently in the pipe; otherwise the reading process is executed. Only one process can
access a pipe at a time.
Limitations :
As a channel of communication a pipe operates in one direction only.
Pipes cannot support broadcast i.e. sending message to multiple processes at
the same time.
The read end of a pipe can be read from regardless of which process is
connected to the write end of the pipe. Therefore, this is a very insecure mode
of communication.
1.8 Explain the differences between OS for mainframe computers and personal
computers [6] [May 2018]
All the computers are tied together in a network either a Local Area Network
(LAN) or Wide Area Network (WAN), communicating with each other so that different
portions of a Distributed application run on different computers from any geographical
location.
Examples of distributed systems range from simple systems in which a single client talks
to a single server to huge amorphous networks like the Internet as a whole.
Desirable Features
Transparency
Object class level
System call and interprocess communication level
Minimal interference
Can be done by minimizing freezing time
Freezing time: a time for which the execution of the process is stopped for
transferring its information to the destination node
Robustness
The failure of a node other than the one on which a process is currently running
should not affect the execution of that process
Distributed Systems
A distributed system contains multiple nodes that are physically separate but linked
together using the network. All the nodes in this system communicate with each other and
handle processes in tandem. Each of these nodes contains a small part of the distributed
operating system software.
Client/Server Systems
In client server systems, the client requests a resource and the server provides that
resource. A server may serve multiple clients at the same time while a client is in contact
with only one server. Both the client and server usually communicate via a computer
network and so they are a part of distributed systems.
1: Telecommunication networks:
telephone networks and cellular networks
computer networks such as the Internet
wireless sensor networks
routing algorithms
2: Network Applications:
World Wide Web and peer-to-peer networks
massively multiplayer online games and virtual reality communities
Clustered systems
Clustered systems are similar to parallel systems as they both have multiple CPUs.
However, a major difference is that clustered systems are created by two or more
individual computer systems merged together. Basically, they have independent computer
systems with a common storage and the systems work together.
A diagram to better illustrate this is −
The clustered systems are a combination of hardware clusters and software clusters. The
hardware clusters help in sharing of high-performance disks between the systems. The
software clusters make all the systems work together.
Each node in the clustered systems contains the cluster software. This software monitors
the cluster system and makes sure it is working as required. If any one of the nodes in the
clustered system fails, then the rest of the nodes take control of its storage and resources
and try to restart.
Multiprocessor Systems
Most computer systems are single processor systems i.e they only have one processor.
However, multiprocessor or parallel systems are increasing in importance nowadays.
These systems have multiple processors working in parallel that share the computer clock,
memory, bus, peripheral devices etc. An image demonstrating the multiprocessor
architecture is
−
Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric
multiprocessors. Details about them are as follows −
Symmetric Multiprocessors
In these types of systems, each processor contains a similar copy of the operating system
and they all communicate with each other. All the processors are in a peer to peer
relationship i.e. no master - slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of Unix for
the Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master
processor that gives instruction to all the other processors. Asymmetric multiprocessor
system contains a master slave relationship.
Asymmetric multiprocessor was the only type of multiprocessor available before
symmetric multiprocessors were created. Now also, this is the cheaper option.
Enhanced Throughput
If multiple processors are working in tandem, then the throughput of the system increases
i.e. number of processes getting executed per unit of time increase. If there are N
processors then the throughput increases by an amount just under N.
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple
computer systems, still they are quite expensive. It is much cheaper to buy a simple single
processor system than a multiprocessor system.
A single processor system contains only one processor. So only one process can be
executed at a time and then the process is selected from the ready queue. Most general-
purpose computers contain the single processor systems as they are commonly in use.
A single processor system can be further described using the diagram below −
As in the above diagram, there are multiple applications that need to be executed.
However, the system contains a single processor and only one process can be executed at a
time.
Most systems use a single processor. In a single-processor system, there is one main CPU
capable of executing a general-purpose instruction set, including instructions from user
processes.
Almost all systems have other special-purpose processors as well. They may come in the
form of device-specific processors, such as disk, keyboard, and graphics controllers; or, on
mainframes, they may come in the form of more general-purpose processors, such as I/O
processors that move data rapidly among the components of the system.
All of these special-purpose processors run a limited instruction set and do not run user
processes. Sometimes they are managed by the operating system, in that the operating
system sends them information about their next task and monitors their status.
A real-time system is defined as a data processing system in which the time interval
required to process and respond to inputs is so small that it controls the environment. The
time taken by the system to respond to an input and display of required updated
information is termed the response time. In this method, the response time is much
lower than in online processing.
Real-time systems are used when there are rigid time requirements on the operation of a
processor or the flow of data and real-time systems can be used as a control device in a
dedicated application. A real-time operating system must have well-defined, fixed time
constraints, otherwise the system will fail. For example, Scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic control
systems, etc.
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time
systems, secondary storage is limited or missing and the data is stored in ROM. In these
systems, virtual memory is almost never found.
Soft real-time systems are less restrictive. A critical real-time task gets priority over other
tasks and retains that priority until it completes. Soft real-time systems provide weaker
guarantees than hard real-time systems, but they are useful in areas such as multimedia,
virtual reality, and advanced scientific projects like undersea exploration and planetary
rovers.
1.13 Discuss with examples, how the problem of maintaining coherence of cached
data manifests itself in the following processing environments: [May 2019]
1. Single-processor systems
2. Multiprocessor systems
3. Distributed systems
Single-processor systems
In a single-processor system, a copy of a datum A may reside simultaneously in main
memory and in the cache. When A is updated in the cache, the update must also be
reflected in main memory; since only one processor is involved, it suffices to ensure
that the most recently updated copy of A is always the one accessed.
Multiprocessor systems
It is a bit more complicated since each of the CPUs might contain its own local
cache. In such an environment, a copy of A may exist simultaneously in several
caches.
Since the various CPUs can all execute concurrently, we must make sure that an
update to the value of A in one cache is immediately reflected in all other caches
where A resides.
This situation is called cache coherency, and is usually a hardware problem.
Distributed systems
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CS T51-OPERATING SYSTEMS Page 38
RAJIV GANDHI COLLEGE OF ENGINEERING AND TECHNOLOGY, PUDUCHERRY
The situation here is even more complex. In this environment, several copies of the
same file can be kept on different computers that are distributed in space.
Since the various replicas may be accessed and updated concurrently, some
distributed systems ensure that, when a replica is updated in one place, all other
replicas are brought up to date as soon as possible.
Part 2
2.1 List the responsibilities of operating system in connection with process and
memory management (4) [May 2017]
The operating system manages the primary memory or main memory. Main memory is
made up of a large array of bytes or words, each assigned a certain address. Main
memory is fast storage and can be accessed directly by the CPU. For a program to be
executed, it must first be loaded into main memory.
The operating system is responsible for the following activities in connection with
memory management.
Keep track of which parts of memory are currently being used and by whom.
Decide which processes are to be loaded into memory when memory space
becomes available.
In a multiprogramming system, the OS takes a decision about which process
will get Memory and how much.
Allocates memory when a process requests it.
De-allocates memory when a process no longer requires it or has terminated.
Priority Scheduling
Priority scheduling assigns a priority to each process. Processes with higher priority are
carried out first, whereas jobs with equal priorities are carried out on a round-robin or
FCFS basis. Priority can be decided based on memory requirements, time requirements,
etc.
Round-Robin Scheduling
Round robin is one of the oldest and simplest scheduling algorithms. Its name comes
from the round-robin principle, where each person gets an equal share of something in
turn. It is widely used for scheduling in multitasking systems, and it provides
starvation-free execution of processes.
Multilevel Queue Scheduling
This algorithm separates the ready queue into several separate queues. Processes are
assigned to a queue based on a specific property of the process, such as process priority
or memory size.
However, this is not an independent scheduling algorithm; it needs to use other types of
algorithms in order to schedule the jobs within each queue.
Process Concept
A process is an instance of a program in execution.
Batch systems work in terms of "jobs". Many modern process concepts are still
expressed in terms of jobs, ( e.g. job scheduling ), and the two terms are often used
interchangeably.
The Process
Process is the execution of a program that performs the actions specified in that program.
It can be defined as an execution unit where a program runs.
o The text section comprises the compiled program code, read in from non-
volatile storage when the program is launched.
o The data section stores global and static variables, allocated and initialized
prior to executing main.
o The heap is used for dynamic memory allocation, and is managed via calls to
new, delete, malloc, free, etc.
o The stack is used for local variables. Space on the stack is reserved for local
variables when they are declared ( at function entrance or elsewhere, depending
on the language ), and the space is freed up when the variables go out of scope.
o When processes are swapped out of memory and later restored, additional
information must also be stored and restored. Key among them are the program
counter and the value of all program registers.
Process State
Processes may be in one of 5 states, as shown below.
o New - The process is in the stage of being created.
o Ready - The process has all the resources available that it needs to run, but
the CPU is not currently working on this process's instructions.
o Running - The CPU is working on this process's instructions.
o Waiting - The process cannot run at the moment, because it is waiting for
some resource to become available or for some event to occur. For
example, the process may be waiting for keyboard input, disk access
request, inter-process messages, a timer to go off, or a child process to
finish.
o Terminated - The process has completed.
For each process there is a Process Control Block (PCB), which stores the following types
of process-specific information, as illustrated in Figure: the process state, the program
counter, the CPU registers, CPU-scheduling information, memory-management
information, accounting information, and I/O status information.
1. Creation
Once the process is created, it comes into the ready queue (in main memory) and is
ready for execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one
process and starts executing it. Selecting the process that is to be executed next is known
as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. If the
process enters the blocked or wait state during execution, the processor starts executing
other processes.
4. Deletion/killing
Once the purpose of the process is served, the OS kills the process. The context of the
process (its PCB) is deleted and the process is terminated by the operating system.
The system program serves as a part of the operating system. It traditionally lies between
the user interface and the system calls. The user view of the system is actually defined by
system programs and not system calls because that is what they interact with and system
programs are closer to the user interface.
System programs as well as application programs form a bridge between the user
interface and the system calls. So, from the user's view, the operating system observed
is actually the system programs and not the system calls.
Status Information
The status information system programs provide required data on the current or past status
of the system. This may include the system date, system time, available memory in
system, disk space, logged in users etc.
Communications
These system programs are needed for system communications such as web browsers.
Web browsers allow systems to communicate and access information from the network as
required.
File Manipulation
These system programs are used to manipulate system files. This can be done using
various commands like create, delete, copy, rename, print etc. These commands can create
files, delete files, copy the contents of one file into another, rename files, print them etc.
File Modification
System programs that are used for file modification basically change the data in the file or
modify it in some other way. Text editors are a big example of file modification system
programs.
Application Programs
Application programs can perform a wide range of services as per the needs of the users.
These include programs for database systems, word processors, plotting tools,
spreadsheets, games, scientific applications etc.
An operating system provides the environment within which programs are executed. To
construct such an environment, the system is partitioned into small modules with a well-
defined interface.
Process Management
The CPU executes a large number of programs. While its main concern is the execution
of user programs, the CPU is also needed for other system activities. These activities are
called processes. A process is a program in execution.
In general, a process will need certain resources such as CPU time, memory, files, I/O
devices, etc., to accomplish its task. These resources are given to the process when it is
created. In addition to the various physical and logical resources that a process obtains
when it is created, some initialization data (input) may be passed along.
The operating system is responsible for the following activities in connection with
process management:
o The creation and deletion of both user and system processes
o The suspension and resumption of processes
o The provision of mechanisms for process synchronization
o The provision of mechanisms for deadlock handling
Memory Management
Memory is central to the operation of a modern computer system. Memory is a large array
of words or bytes, each with its own address. Interaction is achieved through a sequence of
reads or writes of specific memory address. The CPU fetches from and stores in memory.
In order for a program to be executed, it must be mapped to absolute addresses and
loaded into memory. As the program executes, it accesses program instructions and data
from memory by generating these absolute addresses. Eventually the program
terminates, its memory space is declared available, and the next program may be loaded
and executed.
In order to improve both the utilization of CPU and the speed of the computer's response
to its users, several processes must be kept in memory.
The operating system is responsible for the following activities in connection with
memory management.
o Keep track of which parts of memory are currently being used and by
whom.
o Decide which processes are to be loaded into memory when memory space
becomes available.
o Allocate and deallocate memory space as needed.
Secondary Storage Management [Disk Management]
The main purpose of a computer system is to execute programs. These programs, together
with the data they access, must be in main memory during execution. Since the main
memory is too small to accommodate all data and programs permanently, the computer
system must provide secondary storage to back up main memory.
Most modern computer systems use disks as the primary on-line storage of information,
of both programs and data. Hence the proper management of disk storage is of central
importance to a computer system.
The operating system is responsible for the following activities in connection with
disk management
o Free space management
o Storage allocation
o Disk scheduling.
I/O System
One of the purposes of an operating system is to hide the peculiarities of specific hardware
devices from the user. For example, in Unix, the peculiarities of I/O devices are hidden
from the bulk of the operating system itself by the I/O system. The I/O system consists of:
o A buffer caching system
o A general device driver code
o Drivers for specific hardware devices.
File Management
File management is one of the most visible services of an operating system. Computers
can store information in several different physical forms; magnetic tape, disk, and drum
are the most common forms. Each of these devices has its own characteristics and physical
organization.
For convenient use of the computer system, the operating system provides a uniform
logical view of information storage. The operating system abstracts from the physical
properties of its storage devices to define a logical storage unit, the file. Files are mapped,
by the operating system, onto physical devices.
The operating system implements the abstract concept of the file by managing mass
storage devices, such as tapes and disks. Finally, when multiple users have access to
files, it may be desirable to control by whom and in what ways files may be accessed.
The operating system is responsible for the following activities in connection with file
management:
o The creation and deletion of files
o The creation and deletion of directory
o The support of primitives for manipulating files and directories
o The mapping of files onto disk storage.
o Backup of files on stable (non volatile) storage.
Protection System
The various processes in an operating system must be protected from each other's
activities. For that purpose, various mechanisms are used to ensure that the files,
memory segments, CPU, and other resources can be operated on only by those processes
that have gained proper authorization from the operating system.
For example, memory-addressing hardware ensures that a process can only execute
within its own address space. The timer ensures that no process can gain control of the
CPU without relinquishing it. Finally, no process is allowed to do its own I/O, to protect
the integrity of the various peripheral devices.
Protection can improve reliability by detecting latent errors at the interfaces between
component subsystems. Early detection of interface errors can often prevent contamination
of a healthy subsystem by a subsystem that is malfunctioning. An unprotected resource
cannot defend against use (or misuse) by an unauthorized or incompetent user.
Networking
The processors in the system are connected through a communication network, which can
be configured in the number of different ways. The network may be fully or partially
connected. The communication network design must consider routing and connection
strategies, and the problems of connection and security.
A distributed system provides the user with access to the various resources the system
maintains. Access to a shared resource allows computation speed-up, data availability, and
reliability.
One of the most important components of an operating system is its command interpreter.
The command interpreter is the primary interface between the user and the rest of the
system.
Many commands are given to the operating system by control statements. When a new job
is started in a batch system or when a user logs in to a time-shared system, a program
which reads and interprets control statements is automatically executed. This program is
variously called (1) the control card interpreter, (2) the command line interpreter, (3) the
shell (in Unix), and so on. Its function is quite simple: get the next command statement,
and execute it.
The command statement themselves deal with process management, I/O handling,
secondary storage management, main memory management, file system access,
protection, and networking.
A computer contains various hardware such as the processor, RAM, and monitor. The
OS must ensure that these devices remain protected (not directly accessible by the user).
This protection is divided into three categories:
1) CPU Protection
This means that a process should not hog (hold) the CPU forever; otherwise, other
processes will never get the CPU. For that purpose, a timer is introduced to prevent such
a situation. A process is given a certain time for execution, after which a signal is sent
that makes the process relinquish the CPU. Hence a process cannot hog the CPU.
2) Memory Protection
There may be multiple processes in the memory so it is possible that one process may try
to access other process memory.
To prevent such a situation, we use two registers:
1. Base register
2. Limit register
The base register stores the starting address of the process and the limit register stores
the size of the process. So, whenever a process tries to access an address in memory, the
hardware checks whether the address lies within the range [base, base + limit).
3) I/O protection
To ensure I/O protection, the OS ensures that a process cannot:
View the I/O of another process
Terminate the I/O of another process
Give priority to a particular process's I/O
If an application process wants to access any I/O device, it must do so through a system
call, so that the OS can monitor the task. In the C language, for example, write() and
read() are system calls used to write to and read from a file. Instructions execute in one
of two modes:
1. User mode
The system performs a task on behalf of a user application. In this mode, the
process cannot directly access hardware or reference memory outside its own
address space.
2. Kernel mode
The CPU executes privileged instructions on behalf of the operating system. In
this mode, code has direct access to hardware and memory; a system call
switches the CPU from user mode to kernel mode and back.
2.7 List down various types of storage devices. Illustrate the hierarchy of storage
devices based on the speed and size characteristics. [NOV’16]
A memory element is a set of storage devices that stores binary data in the form of bits.
In general, memory can be classified into two categories: volatile and non-volatile.
Characteristics of Storage
Volatility
Volatile memory needs power to work and loses its data when power is switched
off. However, it is quite fast so it is used as primary memory.
Non - volatile memory retains its data even when power is lost. So, it is used for
secondary memory.
Mutability
Mutable storage is both read and write storage and data can be overwritten as required.
Primary storage typically contains mutable storage and it is also available in secondary
storage nowadays.
Accessibility
Storage access can be random or sequential. In random access, any data in the storage
can be accessed in roughly the same amount of time. In sequential storage, the data
needs to be accessed in sequential order, i.e. one after the other.
Addressability
Each storage location in memory has a particular memory address. The data in a particular
location can be accessed using its address.
Capacity
The capacity of any storage device is the amount of data it can hold. This is usually
represented in the form of bits or bytes.
Performance
Performance can be described in terms of latency or throughput.
Latency is the time required to access the storage. It is specified in the form of
read latency and write latency.
Throughput is the data reading rate for the memory. It can be represented in the form
of megabytes per second.
Primary Memory
The primary memory is also known as internal memory, and it is directly accessible by
the processor. This memory includes main memory, cache, and CPU registers.
Secondary Memory
The secondary memory is also known as external memory, and this is accessible by the
processor through an input/output module. This memory includes an optical disk,
magnetic disk, and magnetic tape.
Performance
Previously, computer systems were designed without a memory hierarchy, and the speed
gap between main memory and the CPU registers kept growing because of the huge
disparity in access time, lowering the performance of the system. An enhancement was
therefore mandatory, and the memory hierarchy model was designed to increase the
system's performance.
Capacity
The capacity of the memory hierarchy is the total amount of data the memory can store.
Whenever we move from top to bottom in the memory hierarchy, the capacity increases.
Access Time
The access time in the memory hierarchy is the interval between a request to read or
write and the availability of the data. Whenever we move from top to bottom in the
memory hierarchy, the access time increases.
2.8 What is the purpose of interrupts? What are the differences between a trap and
an interrupt? Can traps be generated intentionally by a user program? If so, what is
the purpose? (5) (May 2017)
An interrupt is the mechanism by which modules like I/O or memory may interrupt the
normal processing of the CPU. An interrupt may be generated, for example, by clicking
a mouse, dragging a cursor, or printing a document.
Purpose:
External devices are comparatively slower than the CPU. Without interrupts, the CPU
would waste a lot of time waiting for external devices to match its speed, which
decreases the efficiency of the CPU. Hence, interrupts are required to eliminate this
limitation.
With Interrupt:
1. Suppose the CPU instructs the printer to print a certain document.
2. While the printer does its task, the CPU is engaged in executing other tasks.
3. When the printer is done with its work, it tells the CPU that it has finished.
(The word 'tells' here is the interrupt, which sends a message that the printer
has completed its work successfully.)
The purpose of interrupts is to alter the flow of execution in response to some event.
Without interrupts, a process needing attention might have to wait until the CPU became
free; interrupts ensure that the CPU deals with the event immediately. In processors with
a privileged mode of execution, an interrupt also causes a mode switch into kernel mode.
Advantages:
It increases the efficiency of CPU.
It decreases the waiting time of CPU.
Stops the wastage of instruction cycle.
Disadvantages:
The CPU has to do extra work to handle interrupts and then resume its previous
execution of programs (in short, there is overhead required to handle each
interrupt request).
Interrupt: An electronic alerting signal sent to the processor from an external device,
either a part of the computer itself, such as a disk controller, or an external peripheral.
Trap: A software interrupt caused either by an exceptional condition in the processor
itself (e.g. divide by zero or invalid memory access), or by a special instruction in the
instruction set which causes an interrupt when it is executed.
Traps can be generated intentionally by a user program: executing a system-call
instruction deliberately causes a trap, transferring control to the operating system,
which then performs the requested service on the program's behalf.