
OS Notes

Operating System notes

Uploaded by

Suyash Thorat

An operating system acts as an interface between the user of a computer system and the computer hardware. The purpose of an operating system is to provide an operating environment in which a user can execute programs in a convenient and efficient manner.

The purpose of fork() is to create a new process, which becomes the child process of the caller. The child receives a copy of the parent's address space; fork() returns 0 in the child and the child's PID in the parent.
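As a sketch, Python's os.fork() wraps the same POSIX system call (so this runs only on Unix-like systems); the exit status 7 is an arbitrary example value:

```python
import os

# fork() duplicates the calling process. It returns 0 in the child
# and the child's PID in the parent, so both branches below run,
# but in different processes.
pid = os.fork()
if pid == 0:
    # Child process: do some work, then exit with a status code.
    os._exit(7)
else:
    # Parent process: wait for the child and read its exit status.
    _, status = os.waitpid(pid, 0)
    child_status = os.WEXITSTATUS(status)
```

The parent observes the child's exit status (7 here) via waitpid(), mirroring the usual fork/wait pattern in C.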

Mutual exclusion in an operating system is a technique that ensures that only one thread or
process can access a shared resource at a time.
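A minimal sketch of mutual exclusion using a lock (Python's threading.Lock stands in here for an OS-level mutex); the thread count and increment total are arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    """Increment the shared counter; the lock makes each update atomic."""
    global counter
    for _ in range(times):
        with lock:            # only one thread may hold the lock at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 40_000; without the lock, concurrent read-modify-write
# sequences could interleave and lose updates
```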

An I/O Bound Process is a process that spends more time on I/O operations than on computation (time spent with the CPU).

The bootstrap loader locates the kernel, loads it into main memory, and starts its execution. Some computer systems, such as PCs, use a two-step process in which a simple bootstrap loader fetches a more complex boot program from disk, which in turn loads the kernel.

The kernel is the core component of an operating system (OS) that acts as the interface between the hardware and the software.

A semaphore is a variable used to control access to a common resource by multiple processes and to avoid the critical-section problem. A semaphore is an integer variable that can be accessed only through two atomic operations, Wait() and Signal().
Types - Binary & Counting
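The Wait()/Signal() pair can be sketched with Python's threading.Semaphore, whose acquire() and release() play the roles of Wait() and Signal(); the worker function and item values are illustrative:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore, initial value 1
results = []

def wait(s):
    s.acquire()    # Wait(): decrement the value; block while it is 0

def signal(s):
    s.release()    # Signal(): increment the value; wake a blocked process

def worker(item):
    wait(sem)              # enter the critical section
    results.append(item)   # shared resource accessed by one thread at a time
    signal(sem)            # leave the critical section

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With an initial value of 1 this behaves as a binary semaphore (a mutex); an initial value of N would make it a counting semaphore admitting N processes at once.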

Multilevel Queue (MLQ) Scheduling partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, depending on properties such as memory size, process type, or process priority, and each queue follows its own scheduling algorithm. In this algorithm processes are classified into different groups such as system processes, interactive processes, interactive editing processes, batch processes, user processes, etc.
Advantages of MLQ: 1. Processes are permanently assigned to their respective queues and do not move between queues, which results in low scheduling overhead. 2. Different scheduling algorithms can be applied to different queues. 3. Processes that cannot all be placed in one single queue can now be placed in different queues.
Disadvantages of MLQ: 1. Processes in lower-priority queues may starve for the CPU if processes keep arriving in higher-priority queues. 2. A process cannot move from one queue to another.
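The "serve the highest-priority non-empty queue first" rule can be sketched as follows; the three queue levels and process names are hypothetical, and each queue is served FCFS for simplicity (a real MLQ could use a different algorithm per queue):

```python
from collections import deque

# Hypothetical three-level ready queue: index 0 = highest priority.
queues = [
    deque(["sys_logger"]),        # system processes
    deque(["editor", "shell"]),   # interactive processes
    deque(["backup"]),            # batch processes
]

def pick_next(queues):
    """Dispatch from the highest-priority non-empty queue (FCFS within it)."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # no runnable process

order = [pick_next(queues) for _ in range(4)]
# system first, then interactive in arrival order, then batch
```

Note how the batch process runs only after every higher queue is empty, which is exactly why lower-priority queues can starve.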

Paging is a storage mechanism used in an OS to retrieve processes from secondary storage into main memory as pages. The primary concept behind paging is to break each process into individual pages; main memory is correspondingly divided into frames. Each page of a process is stored in one of the available memory frames. Because pages can be placed in any free frames, there is no need to find contiguous frames or holes. Process pages are usually brought into main memory only when they are needed; otherwise, they remain in secondary storage.
The frame size may differ from one OS to another, but within a system every frame is the same size. Since pages are mapped onto frames, the page size must equal the frame size.
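Splitting a logical address into a page number and an offset is simple arithmetic; this sketch assumes 4 KiB pages:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def split_address(logical_addr):
    """Split a logical address into (page number, offset within the page)."""
    return divmod(logical_addr, PAGE_SIZE)

page, offset = split_address(8195)   # 8195 = 2 * 4096 + 3
```

Address 8195 lands on page 2 at offset 3; the OS then looks up which frame holds page 2 to form the physical address.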

A process is an instance of an executing program; that is, a program in execution. A process is also called a job, task, or unit of work, and is defined as an entity that represents the basic unit of work to be implemented in the system.
Types of Process State
New State - When a program in secondary memory is started for execution, the process is said
to be in a new state.
Ready State - After being loaded into the main memory and ready for execution, a process
transitions from a new to a ready state. The process will now be in the ready state, waiting for
the processor to execute it. Many processes may be in the ready stage in a multiprogramming
environment.
Run State - After being allotted the CPU for execution, a process passes from the ready state to
the run state.
Terminate State - When a process's execution is finished, it goes from the run state to the terminate state. The operating system deletes the process control block (PCB) after the process enters the terminate state.
Block or Wait State - If a process requires an Input/Output operation or a blocked resource
during execution, it changes from run to block or the wait state. The process advances to the
ready state after the I/O operation is completed or the resource becomes available.
Suspend Ready State - If a process with a higher priority needs to be executed while the main memory is full, a lower-priority process goes from the ready state to the suspend ready state. Moving a lower-priority process from the ready state to the suspend ready state frees main memory for the higher-priority process. The process stays in the suspend-ready state until main memory becomes available, at which point it is brought back to the ready state.
Suspend Wait State - Similarly, if a process with a higher priority needs to be executed while the main memory is full, a lower-priority process in the wait state goes to the suspend wait state, freeing main memory for the higher-priority process.

Fragmentation refers to a condition in which the system's memory space is used inefficiently, reducing capacity, performance, or both. The impact of fragmentation depends on the storage-allocation scheme in use and on the particular type of fragmentation. In some instances, fragmentation leads to storage capacity that is allocated but unusable.
The same concept applies to secondary storage: file systems (such as the FAT file system) fragment in the same way, to a degree ranging from none to extreme.

Internal Fragmentation occurs whenever a memory block is allocated to a process and the process needs less memory than the block provides; free space is left inside the block. Because this space within the allocated block goes unused, it is lost to internal fragmentation.
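The wasted space can be computed directly; this sketch assumes a fixed 4 KiB allocation block, and the request sizes are arbitrary examples:

```python
import math

BLOCK_SIZE = 4096  # assumed fixed allocation block of 4 KiB

def internal_fragmentation(request_sizes, block_size=BLOCK_SIZE):
    """Bytes wasted inside blocks when each request is rounded up to whole blocks."""
    waste = 0
    for req in request_sizes:
        blocks = math.ceil(req / block_size)   # whole blocks allocated
        waste += blocks * block_size - req     # unused tail of the last block
    return waste

wasted = internal_fragmentation([3000, 4096, 5000])
# 3000 -> 1 block, 1096 wasted; 4096 -> 1 block, 0 wasted;
# 5000 -> 2 blocks, 3192 wasted
```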

External Fragmentation occurs when a method of dynamic memory allocation leaves small, unusable gaps between allocated blocks. If there is too much external fragmentation, the total quantity of usable memory is substantially reduced: enough free memory may exist to satisfy a request, but it is not contiguous.
Schedulers are special system software that handle process scheduling in various ways. The main task of a scheduler is to select the jobs to be submitted into the system and to decide which process to run. In other words, process scheduling is done by a software routine (module) called the scheduler.

Long-term Scheduler -

The long-term scheduler (job scheduler) selects jobs or processes from the job pool on a secondary storage device and loads them into memory for execution. It executes less frequently and is invoked when a process leaves the system; because of the longer interval between executions, it can afford to take more time deciding which process to select. The long-term scheduler thus determines which programs are admitted to the system for processing. Its primary objective is to provide a balanced mix of jobs, such as I/O-bound and processor-bound.

Short-term Scheduler -

The short-term scheduler selects a job from the ready queue and submits it to the CPU. Because it selects only one job at a time, it is invoked very frequently. Its main objective is to increase system performance in accordance with the chosen set of criteria; it effects the change from the ready state to the running state of a process. The CPU scheduler selects a process from among those that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, decide which process to execute next, and they are faster than long-term schedulers. For I/O-bound jobs the ready queue is almost empty, so the short-term scheduler has very little work to do.

Medium-term Scheduler -

Some OSs, such as time-sharing systems, may introduce an additional, intermediate level of scheduling (the medium-term scheduler). Medium-term scheduling is part of swapping, so the medium-term scheduler is also known as the swapper. Swapping is the mechanism by which the medium-term scheduler temporarily removes processes from main memory and places them on secondary memory (such as a disk drive), or vice versa, later returning them to the ready queue. This is commonly referred to as "swapping out" or "swapping in" (Refer Fig. 2.8). Medium-term scheduling removes processes from memory and thus reduces the degree of multiprogramming.

MMU stands for memory management unit, also known as PMMU (paged memory management unit). Every computer system has a memory management unit: a hardware component whose main purpose is to convert virtual addresses created by the CPU into physical addresses in the computer's memory. In simple words, it is responsible for memory management in a device, acting as a bridge between the CPU and the RAM and ensuring that programs can run smoothly and access the required data without clashes or unauthorized access. It is usually integrated into the processor, but in some cases it is constructed as a separate integrated circuit (IC).
Functions of the Memory Management Unit (MMU): 1. Address translation - the MMU converts virtual addresses created by running programs into the corresponding physical addresses in the computer's memory. 2. Memory protection - MMUs play a crucial role in implementing memory protection; by enforcing access-control rules, they stop illegal use of particular memory locations. 3. Virtual memory - MMUs take part in implementing virtual memory, which allows programs larger than the physical RAM to be executed; the system extends RAM by using a portion of disk storage and dynamically swaps data between RAM and disk as needed. 4. Segmentation - memory segmentation is a feature found in certain MMUs; it splits the computer's memory into sections with different permissions and features, providing more granular control over memory access and helping optimize memory utilization.
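The translation and protection functions above can be sketched together; the page size, page-table contents, and fault messages here are all hypothetical:

```python
PAGE_SIZE = 1024  # assumed 1 KiB pages for this sketch

# Hypothetical page table: page number -> (frame number, writable flag).
page_table = {
    0: (5, True),
    1: (2, False),   # a read-only page
    2: (7, True),
}

def translate(vaddr, write=False):
    """Translate a virtual address, enforcing residency and write protection."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not resident")
    frame, writable = page_table[page]
    if write and not writable:
        raise PermissionError(f"protection fault: page {page} is read-only")
    return frame * PAGE_SIZE + offset

paddr = translate(1 * PAGE_SIZE + 100)   # page 1, offset 100 -> frame 2
```

A missing page-table entry models a page fault (the OS would then swap the page in), and a write to a read-only page models a protection fault.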
