Important Functions of An Operating System: Security
3. Job accounting –
The operating system keeps track of the time and resources used by various tasks and users. This information can be used to track resource usage for a particular user or group of users.
5. Memory Management –
The operating system manages the primary memory or main memory. Main memory is made up of a large array of bytes or words, where each byte or word is assigned a certain address. Main memory is fast storage and can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory. An operating system performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which memory addresses have already been allocated, and which have not yet been used. In multiprogramming, the OS decides the order in which processes are granted access to memory, and for how long. It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation.
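As a rough illustration of the bookkeeping described above, here is a minimal sketch, assuming a toy model where memory is a fixed array of blocks and each block records its owning process (the class and method names are hypothetical, not from any real OS):

```python
# Toy model of OS memory bookkeeping: which blocks are free,
# which belong to which process, and allocate/deallocate operations.

class MemoryManager:
    def __init__(self, total_blocks):
        self.owner = [None] * total_blocks  # None means the block is free

    def allocate(self, pid, n_blocks):
        """Find n_blocks free blocks and assign them to process pid."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < n_blocks:
            return None  # not enough free memory
        grant = free[:n_blocks]
        for i in grant:
            self.owner[i] = pid
        return grant

    def deallocate(self, pid):
        """Release every block owned by pid (e.g. on termination)."""
        for i, o in enumerate(self.owner):
            if o == pid:
                self.owner[i] = None

mm = MemoryManager(8)
print(mm.allocate("P1", 3))   # [0, 1, 2]
print(mm.allocate("P2", 2))   # [3, 4]
mm.deallocate("P1")           # P1 terminates; its blocks become free
print(mm.allocate("P3", 4))   # [0, 1, 2, 5]
```

Note how the blocks freed by P1 are reused for P3, which is exactly the track/allocate/deallocate cycle listed above.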
6. Processor Management –
In a multiprogramming environment, the OS decides the order in which processes have access to the processor, and how much processing time each process has. This function of the OS is called process scheduling. An operating system performs the following activities for processor management: it keeps track of the status of processes (the program which performs this task is known as the traffic controller), allocates the CPU, that is, a processor, to a process, and deallocates the processor when a process no longer requires it.
7. Device Management – An OS manages device communication via their respective drivers. It performs the following activities for device management: it keeps track of all devices connected to the system, designates a program responsible for every device (known as the Input/Output controller), decides which process gets access to a certain device and for how long, allocates devices in an effective and efficient way, and deallocates devices when they are no longer required.
8. File Management –
A file system is organized into directories for efficient and easy navigation and usage. These directories may contain other directories and other files. An operating system carries out the following file management activities: it keeps track of where information is stored, user access settings, the status of every file, and more. These facilities are collectively known as the file system.
Advantages of multiprogramming systems
• The CPU is used most of the time and never becomes idle.
• The system appears fast, as all the tasks run in parallel.
• Short jobs are completed faster than long jobs.
• Multiprogramming systems support multiple users.
• Resources are used efficiently.
• The total time taken to execute a program/job decreases.
• Response time is shorter.
• Some applications run multiple tasks, and multiprogramming systems handle these types of applications better.
Q. GIVE THE ESSENTIAL PROPERTIES OF BATCH AND TIME SHARING OPERATING SYSTEMS
ANSWER- BATCH OPERATING SYSTEM
Batch processing is a technique in which an operating system collects the programs and data together in a batch before processing starts. An operating system does the following activities related to batch processing −
• The OS defines a job, which has a predefined sequence of commands, programs, and data as a single unit.
• The OS keeps a number of jobs in memory and executes them without any manual intervention.
• Jobs are processed in the order of submission, i.e., in first come, first served fashion.
• When a job completes its execution, its memory is released and the output for the job gets copied into an output spool for later printing or processing.
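The batch workflow above can be sketched with a FIFO queue and an output spool (a minimal illustration; the job names and outputs are invented):

```python
from collections import deque

# Toy batch system: jobs run first come, first served, with no manual
# intervention, and each job's output is copied to an output spool.

jobs = deque([("job1", "report A"), ("job2", "report B")])
output_spool = []

while jobs:                              # runs until the batch is empty
    name, output = jobs.popleft()        # first come, first served
    output_spool.append((name, output))  # copy output to the spool

print(output_spool)   # [('job1', 'report A'), ('job2', 'report B')]
```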
Advantages
• Batch processing shifts much of the work of the operator to the computer.
• Increased performance, as a new job gets started as soon as the previous job is finished, without any manual intervention.
Disadvantages
• Programs are difficult to debug.
• A job could enter an infinite loop.
• Due to the lack of a protection scheme, one batch job can affect pending jobs.
Time sharing operating system
Multiprogrammed, batched systems provided an environment where various system resources were used effectively, but they did not provide for user interaction with computer systems. Time sharing is a logical extension of multiprogramming. The CPU executes multiple tasks by switching among them, but the switches occur so frequently that the user can interact with each program while it is running.
A time-shared operating system allows multiple users to share a computer simultaneously. Each action or command in a time-shared system takes a very small fraction of time, so only a little CPU time is required for each user. As the system rapidly switches from one user to another, each user is given the impression that the entire computer system is dedicated to their use, although it is being shared among multiple users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a shared computer at once. Each user has at least one separate program in memory. When a program is loaded into memory and executes, it runs for a short period of time before either completing or needing to perform I/O. This short period of time during which a user gets the attention of the CPU is known as a time slice, time slot, or quantum. It is typically on the order of 10 to 100 milliseconds. Time-shared operating systems are more complex than multiprogrammed operating systems. In both, multiple jobs must be kept in memory simultaneously, so the system must have memory management and protection. To achieve a good response time, jobs may have to be swapped in and out of main memory to disk, which then serves as a backing store for main memory. A common method to achieve this goal is virtual memory, a technique that allows the execution of a job that may not be completely in memory.
As an illustration of user states, suppose user 5 is in the active state while users 1, 2, 3, and 4 are in the waiting state, and user 6 is in the ready state.
6. Active State –
The user’s program is under the control of the CPU. Only one program can be in this state at a time.
7. Ready State –
The user program is ready to execute but is waiting for its turn to get the CPU. More than one user can be in the ready state at a time.
8. Waiting State –
The user’s program is waiting for some input/output operation. More than one
user can be in a waiting state at a time.
Requirements of a Time Sharing Operating System :
• An alarm clock mechanism to send an interrupt signal to the CPU after every time slice.
• A memory protection mechanism to prevent one job’s instructions and data from interfering with other jobs.
Advantages :
9. Each task gets an equal opportunity.
10. Less chances of duplication of software.
11. CPU idle time can be reduced.
Disadvantages :
12. Reliability problems.
13. One must take care of the security and integrity of user programs and data.
14. Data communication problems.
Q. Explain scheduling algorithms with suitable examples
ANSWER- A Process Scheduler schedules different processes to be assigned to the
CPU based on particular scheduling algorithms. There are six popular process
scheduling algorithms which we are going to discuss in this chapter −
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state, it cannot
be preempted until it completes its allotted time, whereas the preemptive
scheduling is based on priority where a scheduler may preempt a low priority
running process anytime when a high priority process enters into a ready state.
First Come First Serve (FCFS)
• Jobs are executed on a first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average wait time is high.
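A minimal sketch of FCFS CPU scheduling, assuming all processes arrive at time 0 (the burst times below are a common textbook example, not taken from this document):

```python
# FCFS scheduling: processes run to completion in arrival order,
# so each process waits for the total burst time of all earlier ones.

def fcfs(burst_times):
    """Return per-process waiting times under FCFS (all arrive at t=0)."""
    waiting, elapsed = [], 0
    for burst in burst_times:
        waiting.append(elapsed)   # waits for everything scheduled before it
        elapsed += burst
    return waiting

bursts = [24, 3, 3]                 # illustrative burst times
waits = fcfs(bursts)
print(waits)                        # [0, 24, 27]
print(sum(waits) / len(waits))      # average waiting time: 17.0
```

With bursts [24, 3, 3] the average waiting time is (0 + 24 + 27) / 3 = 17, which illustrates why FCFS performs poorly when a long job arrives first.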
Ostrich Algorithm
The ostrich algorithm means that the deadlock is simply ignored and it is assumed
that it will never occur. This is done because in some systems the cost of handling
the deadlock is much higher than simply ignoring it as it occurs very rarely. So, it is
simply assumed that the deadlock will never occur and the system is rebooted if it
occurs by any chance.
Q. WHAT IS VIRTUAL MEMORY? EXPLAIN DEMAND PAGING
ANSWER- Virtual Memory is a storage allocation scheme in which secondary
memory can be addressed as though it were part of the main memory. The
addresses a program may use to reference memory are distinguished from the
addresses the memory system uses to identify physical storage sites, and
program-generated addresses are translated automatically to the corresponding
machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps
memory addresses used by a program, called virtual addresses, into physical
addresses in computer memory.
All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be
swapped in and out of the main memory such that it occupies different places in
the main memory at different times during the course of execution.
A process may be broken into a number of pieces, and these pieces need not be contiguously located in the main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or
segments are present in the main memory during execution. This means that the
required pages need to be loaded into memory whenever required. Virtual
memory is implemented using Demand Paging or Demand Segmentation.
Demand Paging :
The process of loading the page into memory on demand (whenever page fault
occurs) is known as demand paging.
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory, it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocked state. For the execution to proceed, the OS must bring the required page into memory.
3. The OS will search for the required page on the backing store (secondary memory).
4. The required page will be brought into the physical address space. Page replacement algorithms are used to decide which page in the physical address space to replace.
5. The page table will be updated accordingly.
6. A signal will be sent to the CPU to continue the program execution, and the process will be placed back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating
system and the required page is brought into memory.
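The steps above can be sketched as a simplified page-fault handler (a toy model with hypothetical names; it ignores frame limits and page replacement, which are covered in the next answer):

```python
# Toy demand-paging model: a page table with present pages only,
# a backing store on "disk", and a handler that faults pages in.

page_table = {}                               # page number -> frame number
backing_store = {0: "A", 1: "B", 2: "C"}      # page contents on disk
frames = {}                                   # frame number -> contents
next_frame = 0

def access(page):
    """Return the contents of `page`, faulting it in if absent."""
    global next_frame
    if page not in page_table:         # step 1: memory-access fault
        data = backing_store[page]     # steps 3-4: fetch page from disk
        frames[next_frame] = data
        page_table[page] = next_frame  # step 5: update the page table
        next_frame += 1                # step 6: execution can resume
    return frames[page_table[page]]

print(access(1))   # page fault: loads "B" into a frame, prints B
print(access(1))   # hit: no fault this time, prints B
```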
Q. WHAT DO YOU UNDERSTAND BY PAGE REPLACEMENT? EXPLAIN THE FIFO AND LRU
page replacement algorithms.
ANSWER- In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory
page that is mapped into the virtual address space, but not loaded in physical
memory.
Since actual physical memory is much smaller than virtual memory, page faults
happen. In case of page fault, Operating System might have to replace one of the
existing pages with the newly needed page. Different page replacement
algorithms suggest different ways to decide which page to replace. The target for
all algorithms is to reduce the number of page faults.
Page Replacement Algorithms:
1. First in First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e., 1 —> 1 Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e., 3 —> 1 Page Fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 Page Fault.
Total: 6 page faults.
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page
faults when increasing the number of page frames while using the First in First
Out (FIFO) page replacement algorithm. For example, if we consider reference
string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we
increase the number of slots to 4, we get 10 page faults.
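The Belady's anomaly figures above can be verified with a short FIFO simulation (a minimal sketch; the function name is illustrative):

```python
from collections import deque

# FIFO page replacement: count faults, evicting the oldest page
# when all frames are full.

def fifo_faults(reference, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:        # all frames full:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(ref, 3))   # 9
print(fifo_faults(ref, 4))   # 10  (more frames, yet more faults)
```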
2. Least Recently Used (LRU) –
In this algorithm, the page that has been least recently used is replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there, so —> 0 Page Fault.
When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 Page Fault.
0 is already in memory, so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page Faults, because the pages are already available in memory.
Total: 6 page faults.
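The LRU walkthrough above can be checked with a short simulation (a minimal sketch; the function name is illustrative):

```python
# LRU page replacement: keep frames ordered from least recently used
# to most recently used, and evict from the front when full.

def lru_faults(reference, n_frames):
    frames, faults = [], 0        # list ordered from LRU to MRU
    for page in reference:
        if page in frames:
            frames.remove(page)   # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)       # page is now the most recently used
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(ref, 4))   # 6
```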
Q. EXPLAIN DISK SCHEDULING AND ITS ALGORITHMS
ANSWER- Disk scheduling is done by operating systems to schedule I/O requests
arriving for the disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
• Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to be scheduled.
• Two or more requests may be far from each other, which can result in greater disk arm movement.
• Hard drives are one of the slowest parts of the computer system and thus
need to be accessed in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have
a quick look at some of the important terms:
• Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. The disk scheduling algorithm that gives the minimum average seek time is better.
• Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to rotate into position under the read/write head. The disk scheduling algorithm that gives the minimum rotational latency is better.
• Transfer Time: Transfer time is the time to transfer the data. It depends on
the rotating speed of the disk and number of bytes to be transferred.
• Disk Access Time: Disk Access Time is:
Disk Access Time = Seek Time +
Rotational Latency + Transfer Time
1. FCFS: In FCFS (First Come First Serve), requests are addressed in the order they arrive in the disk queue. Let us understand this with the help of an example.
Example:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is : 50
So, total seek time:
=(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16)
=642
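The seek-time arithmetic above can be reproduced with a few lines (a minimal sketch of FCFS disk scheduling; the function name is illustrative):

```python
# FCFS disk scheduling: the head services requests in arrival order,
# so total seek time is the sum of absolute head movements.

def fcfs_seek(requests, head):
    total = 0
    for track in requests:
        total += abs(track - head)  # distance moved for this request
        head = track                # head is now at the serviced track
    return total

print(fcfs_seek([82, 170, 43, 140, 24, 16, 190], 50))   # 642
```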
2.SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are
executed first. So, the seek time of every request is calculated in advance in the
queue and then they are scheduled according to their calculated seek time. As a
result, the request near the disk arm will get executed first. SSTF is certainly an
improvement over FCFS as it decreases the average response time and increases
the throughput of system.Let us understand this with the help of an example.
Example:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is : 50
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, and the Read/Write arm is at 50. It is also given that the disk arm should move “towards the larger value”, as in the SCAN algorithm.
Direct Access –
Another method is the direct access method, also known as the relative access method. A fixed-length logical record allows the program to read and write records rapidly in no particular order. Direct access is based on the disk model of a file, since disks allow random access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then write block 17. There is no restriction on the order of reading and writing for a direct access file.
A block number provided by the user to the operating system is normally a relative block number; the first relative block of the file is 0, then 1, and so on.
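Since relative block numbers start at 0, translating a block number to a byte offset is a single multiplication (a minimal sketch; the 4096-byte block size is an assumed value for illustration):

```python
# Direct access: a relative block number maps straight to a byte offset,
# so any block can be read without touching the ones before it.

BLOCK_SIZE = 4096   # assumed block size, for illustration only

def block_offset(relative_block):
    """Byte offset of a relative block (block 0 is the start of the file)."""
    return relative_block * BLOCK_SIZE

print(block_offset(0))    # 0     (first relative block)
print(block_offset(14))   # 57344 (e.g. reading block 14 directly)
```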