Important Functions of an Operating System

Q 1. What is an operating system? Describe the main functions of an OS. What is the main advantage of a multiprogramming system?
Answer- An operating system is a program on which application programs are executed and which acts as a communication bridge (interface) between the user and the computer hardware.
The main task an operating system carries out is the allocation of resources and
services, such as the allocation of memory, devices, processors, and information.
The operating system also includes programs to manage these resources, such as
a traffic controller, a scheduler, memory management module, I/O programs, and
a file system.
Important functions of an operating system:
1. Security –
The operating system uses password protection and similar techniques to protect user data. It also prevents unauthorized access to programs and user data.

2. Control over system performance –
The operating system monitors overall system health to help improve performance. It records the response time between service requests and system responses so as to have a complete view of system health. This can help improve performance by providing the important information needed to troubleshoot problems.

3. Job accounting –
The operating system keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users.

4. Error detecting aids –
The operating system constantly monitors the system to detect errors and avoid malfunctioning of the computer system.
5. Coordination between other software and users –
Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software to the various users of the computer system.

6. Memory Management –
The operating system manages the primary memory or main memory. Main memory is made up of a large array of bytes or words, where each byte or word is assigned a certain address. Main memory is fast storage, and it can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory. An operating system performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program: the memory addresses that have already been allocated and those that have not yet been used. In multiprogramming, the OS decides the order in which processes are granted access to memory, and for how long. It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation.
7. Processor Management –
In a multiprogramming environment, the OS decides the order in which processes have access to the processor and how much processing time each process gets. This function of the OS is called process scheduling. An operating system performs the following activities for processor management: it keeps track of the status of processes (the program that performs this task is known as the traffic controller); it allocates the CPU, i.e., the processor, to a process; and it deallocates the processor when a process is no longer required.
8. Device Management –
An OS manages device communication via the devices’ respective drivers. It performs the following activities for device management: it keeps track of all devices connected to the system; it designates a program responsible for every device, known as the Input/Output controller; it decides which process gets access to a certain device and for how long; it allocates devices in an effective and efficient way; and it deallocates devices when they are no longer required.
9. File Management –
A file system is organized into directories for efficient and easy navigation and usage. These directories may contain other directories and files. An operating system carries out the following file management activities: it keeps track of where information is stored, user access settings, the status of every file, and more. These facilities are collectively known as the file system.
Advantages of multiprogramming systems
• The CPU is used most of the time and rarely becomes idle.
• The system appears fast, as the tasks run in parallel.
• Short jobs are completed faster than long jobs.
• Multiprogramming systems support multiple users.
• Resources are used efficiently.
• The total time taken to execute a program/job decreases.
• Response time is shorter.
• In some applications multiple tasks run concurrently, and multiprogramming systems handle these types of applications better.
Q. Give the essential properties of batch and time-sharing operating systems.
ANSWER- Batch operating system
Batch processing is a technique in which an operating system collects programs and data together in a batch before processing starts. An operating system does the following activities related to batch processing −
• The OS defines a job, which has a predefined sequence of commands, programs, and data as a single unit.
• The OS keeps a number of jobs in memory and executes them without any manual intervention.
• Jobs are processed in the order of submission, i.e., in first-come, first-served fashion.
• When a job completes its execution, its memory is released and the output for the job gets copied into an output spool for later printing or processing.
Advantages
• Batch processing shifts much of the operator’s work to the computer.
• Increased performance, as a new job starts as soon as the previous job finishes, without any manual intervention.
Disadvantages
• Programs are difficult to debug.
• A job could enter an infinite loop.
• Due to the lack of a protection scheme, one batch job can affect pending jobs.
Time-sharing operating system
Multiprogrammed batch systems provided an environment where the various system resources were used effectively, but they did not provide for user interaction with the computer system. Time sharing is a logical extension of multiprogramming. The CPU executes multiple jobs by switching among them, and the switches occur so frequently that the user can interact with each program while it is running.
A time-shared operating system allows multiple users to share a computer simultaneously. Each action or command in a time-shared system tends to be short, so only a little CPU time is needed for each user. As the system rapidly switches from one user to another, each user is given the impression that the entire computer system is dedicated to their use, although it is being shared among multiple users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a shared computer at once. Each user has at least one separate program in memory. When a program is loaded into memory and executes, it runs for only a short period of time before it either finishes or needs to perform I/O. This short period of time during which a user gets the attention of the CPU is known as a time slice, time slot, or quantum. It is typically of the order of 10 to 100 milliseconds. Time-shared operating systems are more complex than multiprogrammed operating systems. In both, multiple jobs must be kept in memory simultaneously, so the system must have memory management and protection. To achieve a good response time, jobs may have to be swapped in and out of main memory to disk, which then serves as a backing store for main memory. A common method to achieve this goal is virtual memory, a technique that allows the execution of a job that may not be completely in memory.

In the figure referred to above, user 5 is in the active state, users 1, 2, 3, and 4 are in the waiting state, and user 6 is in the ready state.
1. Active State –
The user’s program is under the control of the CPU. Only one program can be in this state at a time.
2. Ready State –
The user’s program is ready to execute, but it is waiting for its turn to get the CPU. More than one user can be in the ready state at a time.
3. Waiting State –
The user’s program is waiting for some input/output operation. More than one user can be in the waiting state at a time.
Requirements of a time-sharing operating system:
An alarm clock mechanism to send an interrupt signal to the CPU after every time slice, and a memory protection mechanism to prevent one job’s instructions and data from interfering with other jobs.
Advantages:
1. Each task gets an equal opportunity.
2. Less chance of duplication of software.
3. CPU idle time can be reduced.
Disadvantages:
1. Reliability problems.
2. One must take care of the security and integrity of user programs and data.
3. Data communication problems.
Q. Explain scheduling algorithms with suitable examples.
ANSWER- A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms, discussed below −
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin (RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
First Come First Serve (FCFS)
• Jobs are executed on a first-come, first-served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average wait time is high.

Given: table of processes with their arrival time and execution time (the service time is the time at which each process first gets the CPU):

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                8
P3        3              6                16

Wait time of each process is as follows −

Process   Wait Time : Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
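A minimal Python sketch of this calculation, using the process data from the table above; it assumes the process list is already sorted by arrival time:

```python
# FCFS wait-time calculation for the example above.
processes = [  # (name, arrival_time, execution_time)
    ("P0", 0, 5),
    ("P1", 1, 3),
    ("P2", 2, 8),
    ("P3", 3, 6),
]

clock = 0
total_wait = 0
for name, arrival, burst in processes:   # processes sorted by arrival time
    start = max(clock, arrival)          # CPU may sit idle until the job arrives
    wait = start - arrival               # wait time = service time - arrival time
    print(f"{name}: wait = {start} - {arrival} = {wait}")
    total_wait += wait
    clock = start + burst                # job runs to completion (non-preemptive)

print("Average wait time:", total_wait / len(processes))  # 5.75
```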
Shortest Job Next (SJN)
• This is also known as shortest job first, or SJF.
• This is a non-preemptive scheduling algorithm.
• Best approach to minimize waiting time.
• Easy to implement in batch systems where the required CPU time is known in advance.
• Impossible to implement in interactive systems where the required CPU time is not known.
• The processor should know in advance how much time a process will take.
Given: table of processes with their arrival time and execution time:

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

Waiting time of each process is as follows −

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
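A short Python sketch of non-preemptive SJN with the same assumed process data; at each scheduling point it picks the ready process with the smallest execution time:

```python
# Non-preemptive Shortest-Job-Next simulation.
processes = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}  # name: (arrival, burst)

clock = 0
waits = {}
remaining = dict(processes)
while remaining:
    ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
    if not ready:                        # nothing has arrived yet: advance the clock
        clock = min(ab[0] for ab in remaining.values())
        continue
    name = min(ready, key=lambda n: ready[n][1])   # shortest burst first
    arrival, burst = remaining.pop(name)
    waits[name] = clock - arrival        # service time - arrival time
    clock += burst                       # run the chosen job to completion

print(waits)                                         # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12}
print("Average wait time:", sum(waits.values()) / len(waits))  # 5.25
```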
Priority Based Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first-come, first-served basis.
• Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Given: table of processes with their arrival time, execution time, and priority. Here we are considering 1 to be the lowest priority.
Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

Waiting time of each process is as follows −

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
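The same style of simulation works for priority scheduling; only the selection rule changes. A sketch assuming, as above, that 1 is the lowest priority (so the largest number runs first):

```python
# Non-preemptive priority scheduling; tuples are (arrival, burst, priority).
processes = {"P0": (0, 5, 1), "P1": (1, 3, 2), "P2": (2, 8, 1), "P3": (3, 6, 3)}

clock, waits, remaining = 0, {}, dict(processes)
while remaining:
    ready = {n: p for n, p in remaining.items() if p[0] <= clock}
    if not ready:                        # idle until the next arrival
        clock = min(p[0] for p in remaining.values())
        continue
    name = max(ready, key=lambda n: ready[n][2])    # highest priority first
    arrival, burst, _ = remaining.pop(name)
    waits[name] = clock - arrival
    clock += burst

print(waits)   # {'P0': 0, 'P3': 2, 'P1': 10, 'P2': 12} -> average 6.0
```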
Shortest Remaining Time
• Shortest remaining time (SRT) is the preemptive version of the SJN
algorithm.
• The processor is allocated to the job closest to completion but it can be
preempted by a newer ready job with shorter time to completion.
• Impossible to implement in interactive systems where required CPU time is
not known.
• It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted, and another process executes for its given time period.
• Context switching is used to save the states of preempted processes.

Assume the same four processes as above (arrival times 0, 1, 2, 3; execution times 5, 3, 8, 6) and a quantum of 3. Wait time of each process is as follows −

Process   Wait Time : Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
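A Python sketch of the Round Robin timeline; the quantum of 3 is an assumption inferred from the wait times above, and arrivals that occur during a time slice are queued ahead of the preempted process:

```python
from collections import deque

quantum = 3
processes = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

clock = 0
queue = deque()
remaining = {name: burst for name, _, burst in processes}
arrivals = list(processes)          # sorted by arrival time
finish = {}

while remaining:
    while arrivals and arrivals[0][1] <= clock:     # admit newly arrived processes
        queue.append(arrivals.pop(0)[0])
    if not queue:                                   # idle until the next arrival
        clock = arrivals[0][1]
        continue
    name = queue.popleft()
    run = min(quantum, remaining[name])             # run for one quantum at most
    clock += run
    remaining[name] -= run
    while arrivals and arrivals[0][1] <= clock:     # arrivals during this slice
        queue.append(arrivals.pop(0)[0])
    if remaining[name] == 0:
        del remaining[name]
        finish[name] = clock
    else:
        queue.append(name)                          # requeue the preempted process

for name, arrival, burst in processes:
    print(name, "wait =", finish[name] - arrival - burst)   # 9, 2, 12, 11
```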
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make
use of other existing algorithms to group and schedule jobs with common
characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another queue. The Process Scheduler then alternately selects jobs from
each queue and assigns them to the CPU based on the algorithm assigned to the
queue.
Q. Is it possible to have a deadlock involving only one process? What are the conditions necessary for deadlock to occur? What are the various methods for handling deadlock?
ANSWER- It is not possible to have a deadlock involving only one process. Deadlock involves a circular hold-and-wait condition between two or more processes, so a single process cannot hold a resource while waiting for another resource that it itself holds.
Four Necessary and Sufficient Conditions for Deadlock
• Mutual exclusion: the resources involved must be unshareable; otherwise, the processes would not be prevented from using the resource when necessary.
• Hold and wait, or partial allocation.
• No preemption.
• Resource waiting, or circular wait.
Deadlock detection, deadlock prevention, and deadlock avoidance are the main methods for handling deadlocks. Details about these are given as follows −
Deadlock Detection
Deadlock can be detected by the resource scheduler, as it keeps track of all the resources that are allocated to different processes. After a deadlock is detected, it can be handled using the following methods −
• All the processes that are involved in the deadlock are terminated. This approach is not very useful, as all the progress made by the processes is destroyed.
• Resources can be preempted from some processes and given to others until the deadlock situation is resolved.
Deadlock Prevention
It is important to prevent a deadlock before it can occur. So, the system checks
each transaction before it is executed to make sure it does not lead to deadlock. If
there is even a slight possibility that a transaction may lead to deadlock, it is never
allowed to execute.
Some deadlock prevention schemes that use timestamps in order to make sure
that a deadlock does not occur are given as follows −
• Wait-Die Scheme
In the wait-die scheme, if a transaction T1 requests a resource that is held by transaction T2, one of the following two scenarios may occur −
o TS(T1) < TS(T2) - If T1 is older than T2, i.e., T1 came into the system earlier than T2, then it is allowed to wait for the resource, which will be free when T2 has completed its execution.
o TS(T1) > TS(T2) - If T1 is younger than T2, i.e., T1 came into the system after T2, then T1 is killed. It is restarted later with the same timestamp.
• Wound-Wait Scheme
In the wound-wait scheme, if a transaction T1 requests a resource that is held by transaction T2, one of the following two possibilities may occur −
o TS(T1) < TS(T2) - If T1 is older than T2, i.e., T1 came into the system earlier than T2, then it is allowed to roll back (wound) T2. T1 then takes the resource and completes its execution. T2 is later restarted with the same timestamp.
o TS(T1) > TS(T2) - If T1 is younger than T2, i.e., T1 came into the system after T2, then it is allowed to wait for the resource, which will be free when T2 has completed its execution.
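Both schemes reduce to a simple timestamp comparison. A minimal sketch (the function names and the integer timestamps are illustrative, not a real DBMS API):

```python
# Smaller timestamp = older transaction.
def wait_die(requester_ts, holder_ts):
    """Older requesters wait; younger requesters die (abort and restart)."""
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    """Older requesters wound (abort) the holder; younger requesters wait."""
    return "wound holder" if requester_ts < holder_ts else "wait"

print(wait_die(1, 2))     # T1 older than T2  -> 'wait'
print(wait_die(2, 1))     # T1 younger than T2 -> 'die' (restarted, same timestamp)
print(wound_wait(1, 2))   # T1 older than T2  -> 'wound holder' (T2 rolled back)
print(wound_wait(2, 1))   # T1 younger than T2 -> 'wait'
```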
Deadlock Avoidance
It is better to avoid a deadlock than to take measures after it has occurred. The wait-for graph can be used for deadlock avoidance. This is, however, only useful for smaller databases, as it can get quite complex in larger databases.
Wait-for graph
The wait-for graph shows the relationship between resources and transactions. If a transaction requests a resource, or if it already holds a resource, this is visible as an edge on the wait-for graph. If the wait-for graph contains a cycle, then there may be a deadlock in the system; otherwise not.
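Checking the graph then amounts to finding a cycle. A small depth-first-search sketch, where the graph is an assumed adjacency mapping from each transaction to the transactions it waits on:

```python
def has_cycle(graph):
    """Return True if the wait-for graph contains a cycle (possible deadlock)."""
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)                 # nodes on the current DFS path
        for nxt in graph.get(node, []):
            if nxt in in_stack or (nxt not in visited and dfs(nxt)):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# T1 waits on T2, T2 waits on T3, T3 waits on T1 -> cycle, possible deadlock
print(has_cycle({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # True
print(has_cycle({"T1": ["T2"], "T2": ["T3"]}))                # False
```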

Ostrich Algorithm
The ostrich algorithm means that the deadlock is simply ignored and it is assumed that it will never occur. This is done because in some systems the cost of handling the deadlock is much higher than the cost of simply ignoring it, since deadlock occurs very rarely. So it is simply assumed that deadlock will never occur, and the system is rebooted if it does occur by any chance.
Q. What is virtual memory? Explain demand paging.
ANSWER- Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps
memory addresses used by a program, called virtual addresses, into physical
addresses in computer memory.
All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be
swapped in and out of the main memory such that it occupies different places in
the main memory at different times during the course of execution.
A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
If these characteristics are present, then it is not necessary that all the pages or segments be present in main memory during execution. This means that the required pages are loaded into memory only when required. Virtual memory is implemented using demand paging or demand segmentation.

Demand Paging:
The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
The process includes the following steps:
1. If the CPU tries to refer to a page that is currently not available in main memory, it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocked state. For the execution to proceed, the OS must bring the required page into memory.
3. The OS will search for the required page in the logical address space (kept on secondary storage).
4. The required page will be brought from the logical address space into the physical address space. Page replacement algorithms are used to decide which page to replace in the physical address space when memory is full.
5. The page table will be updated accordingly.
6. A signal will be sent to the CPU to continue the program execution, and the process is placed back into the ready state.
Hence, whenever a page fault occurs, these steps are followed by the operating system, and the required page is brought into memory.
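A toy Python walk-through of these steps; every structure here (the page table, the free-frame list, the backing store) is a simplified stand-in rather than a real OS interface:

```python
page_table = {}             # page -> frame, for resident pages only
free_frames = [0, 1, 2]
state = "running"

def reference(page, backing_store):
    global state
    if page in page_table:
        return page_table[page]          # page resident: no fault (step 1 not triggered)
    state = "blocked"                    # step 2: faulting process is blocked
    data = backing_store[page]           # steps 3-4: page located and fetched
    frame = free_frames.pop(0)           # assumes a free frame (else replacement runs)
    page_table[page] = frame             # step 5: page table updated
    state = "ready"                      # step 6: process may resume
    return frame

store = {7: "page-7 data", 3: "page-3 data"}
print(reference(7, store), state)        # 0 ready  (fault was serviced)
print(reference(7, store), state)        # 0 ready  (hit, no fault)
```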
Q. What do you understand by page replacement? Explain the FIFO and LRU page replacement algorithms.
ANSWER- In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory
page that is mapped into the virtual address space, but not loaded in physical
memory.
Since actual physical memory is much smaller than virtual memory, page faults
happen. In case of page fault, Operating System might have to replace one of the
existing pages with the newly needed page. Different page replacement
algorithms suggest different ways to decide which page to replace. The target for
all algorithms is to reduce the number of page faults.
Page Replacement Algorithms:
1. First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
Initially all slots are empty, so when 1, 3, and 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory, so —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e., 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e., 3 —> 1 page fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 page fault.
Total: 6 page faults.
Belady’s anomaly – Belady’s anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 we get 9 total page faults with 3 slots, but if we increase the number of slots to 4, we get 10 page faults.
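A short FIFO fault counter in Python; running it on the two reference strings above reproduces both example results, including Belady's anomaly:

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given number of frames."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)            # evict the oldest resident page
            memory.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))    # 6 page faults

# Belady's anomaly: more frames can mean more faults under FIFO.
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3))                   # 9
print(fifo_faults(belady, 4))                   # 10
```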
2. Least Recently Used (LRU) –
In this algorithm, the page that has been least recently used is replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, and 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there, so —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is the least recently used page —> 1 page fault.
0 is already in memory, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string there are —> 0 page faults, because the pages are already available in memory.
Total: 6 page faults.
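A matching LRU fault counter; here the list is kept ordered from least to most recently used:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement with the given number of frames."""
    memory, faults = [], 0               # memory[0] is the least recently used page
    for page in refs:
        if page in memory:
            memory.remove(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)            # evict the least recently used page
        memory.append(page)              # page is now the most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))               # 6 page faults
```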
Q. Explain disk scheduling and its algorithms.
ANSWER- Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
• Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. Thus other I/O requests need to wait in a queue and need to be scheduled.
• Two or more requests may be far from each other, which can result in greater disk arm movement.
• Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.
There are many disk scheduling algorithms, but before discussing them let’s have a quick look at some of the important terms:
• Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. The disk scheduling algorithm that gives the minimum average seek time is better.
• Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into a position where it can be accessed by the read/write heads. The disk scheduling algorithm that gives the minimum rotational latency is better.
• Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and the number of bytes to be transferred.
• Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time

• Disk Response Time: Response time is the average time a request spends waiting to perform its I/O operation. Average response time is the response time of all requests. Variance response time is a measure of how individual requests are serviced with respect to the average response time. The disk scheduling algorithm that gives the minimum variance response time is better.
Disk Scheduling Algorithms
1. FCFS: FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue. Let us understand this with the help of an example.

Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the read/write head is 50.
So, total seek time:
= (82 - 50) + (170 - 82) + (170 - 43) + (140 - 43) + (140 - 24) + (24 - 16) + (190 - 16)
= 642
2. SSTF: In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. The seek time of every request is calculated in advance in the queue, and the requests are then scheduled according to their calculated seek time. As a result, the request nearest the disk arm gets executed first. SSTF is certainly an improvement over FCFS, as it decreases the average response time and increases the throughput of the system. Let us understand this with the help of an example.

Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of the read/write head is 50.
So, total seek time:
= (50 - 43) + (43 - 24) + (24 - 16) + (82 - 16) + (140 - 82) + (170 - 140) + (190 - 170)
= 208
3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in its path; after reaching the end of the disk, it reverses direction and again services the requests arriving in its path. This algorithm works like an elevator, and hence it is also known as the elevator algorithm. As a result, the requests in the mid-range are serviced more, while those arriving behind the disk arm have to wait longer.

Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the read/write arm is at 50, and it is given that the disk arm should move toward the larger value.
Therefore, the seek time is calculated as:
= (199 - 50) + (199 - 16)
= 332
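A Python sketch that reproduces the three seek-time totals for this request queue. The SCAN helper is simplified for this specific case, where the arm first moves toward track 199 and some requests lie below the head:

```python
requests, head = [82, 170, 43, 140, 24, 16, 190], 50

def fcfs(reqs, pos):
    """Serve requests in arrival order; sum the arm movement."""
    total = 0
    for r in reqs:
        total += abs(r - pos)
        pos = r
    return total

def sstf(reqs, pos):
    """Always serve the pending request nearest the current head position."""
    pending, total = list(reqs), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

def scan(reqs, pos, disk_end=199):
    """Arm sweeps up to the disk end, then back down to the lowest request."""
    return (disk_end - pos) + (disk_end - min(reqs))

print(fcfs(requests, head))   # 642
print(sstf(requests, head))   # 208
print(scan(requests, head))   # 332
```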
Q. What is the difference between direct and sequential access? Explain the access methods.
Answer- When a file is used, its information is read into computer memory, and there are several ways to access this information. Some systems provide only one access method for files. Other systems, such as those of IBM, support many access methods, and choosing the right one for a particular application is a major design problem.
There are three ways to access a file in a computer system: sequential access, direct access, and the indexed sequential method.
Sequential Access –
It is the simplest access method. Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation (read next) reads the next portion of the file and automatically advances the file pointer, which keeps track of the I/O location. Similarly, a write operation (write next) appends to the end of the file and advances the pointer to the end of the newly written material.
Key points:
o Data is accessed one record right after another, in order.
o When we use the read command, the pointer moves ahead by one record.
o When we use the write command, memory is allocated and the pointer moves to the end of the file.
o Such a method is reasonable for tape.
Direct Access –
Another method is the direct access method, also known as the relative access method. A file is made up of fixed-length logical records that allow programs to read and write records rapidly, in no particular order. Direct access is based on the disk model of a file, since a disk allows random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then write block 17; there is no restriction on the order of reading and writing for a direct access file.
A block number provided by the user to the operating system is normally a relative block number: the first relative block of the file is 0, then 1, and so on.
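The difference is easy to see with an ordinary binary file of fixed-length records; the record size and file name below are made up for the demo:

```python
RECORD = 16                                  # fixed-length records, in bytes

with open("demo.dat", "wb") as f:            # create a small file of 10 records
    for i in range(10):
        f.write(f"record {i:02d}".ljust(RECORD).encode())

with open("demo.dat", "rb") as f:
    # Sequential access: each read advances the file pointer to the next record.
    first = f.read(RECORD)
    second = f.read(RECORD)

    # Direct (relative) access: seek straight to block n, in any order.
    f.seek(7 * RECORD)                       # jump to relative block 7
    seventh = f.read(RECORD)
    f.seek(2 * RECORD)                       # then back to relative block 2
    third = f.read(RECORD)

print(first, seventh, third)
```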

Indexed sequential method –
This is another method of accessing a file, built on top of the sequential access method. This method constructs an index for the file. The index, like an index in the back of a book, contains pointers to the various blocks. To find a record in the file, we first search the index and then, with the help of the pointer, access the file directly.
Key points:
o It is built on top of sequential access.
o It controls the pointer by using the index.
