
Case Study 1

Aim: - Prepare case study on process state transition.

A process passes through various states from its creation to completion. The minimum number of states is five. The names of the states are not standardized, but a process may be in one of the following states during execution.

 The various states of a process are:

1.  New. A process that has just been created but has not yet been admitted to the pool of executable processes by the operating system.
2.  Ready. A process that is prepared to execute when given the opportunity. A ready process has all the resources needed for its execution except the processor. Processes usually enter the ready state immediately upon creation. All ready processes are waiting for the processor so that they can run.
3.  Running. A process whose instructions are being executed is called a running process. A running process possesses all the resources needed for its execution, including the processor.
4.  Waiting/Blocked. A process that is waiting for some event to occur, or is blocked until some event occurs, such as the completion of an I/O operation. Such a process cannot execute even if a CPU is available.
5. Terminated. The process has finished its execution; all the tasks in the process are completed.

 Process State Transition


▪ A state transition is a change from one state to another. A state transition is caused by the occurrence of some event in the system.
▪ A process has to go through various states to perform its task.
▪ The transition of a process from one state to another depends on the flow of execution of the process. It is not necessary for a process to undergo all the states.
▪ A new process is added to a data structure called the ready queue, also known as the ready pool or pool of executable processes. This queue stores all processes in a first-in first-out (FIFO) manner. A new process is added to the ready queue at its rear end, and the process at the front of the ready queue is sent for execution.
▪ Each process is assigned a time slice for its execution. A time slice is a very short period of time, and its duration varies across systems.
▪ If the process does not voluntarily release the CPU before the time slice expires, the interrupting clock generates an interrupt, causing the operating system to regain control. An interrupt is a request to the processor, usually activated by a task needing attention.
▪ The CPU executes the process at the front of the ready queue, and that process makes a state transition from the ready state to the running state. The assignment of the CPU to the first process on the ready queue is called dispatching.
This state transition is indicated as:

dispatch (process name) : ready → running


▪ The operating system then adds the previously running process to the rear end of the ready queue and allocates the CPU to the first process on the ready queue.

These state transitions are indicated as:

timerrunout (process name) : running → ready and dispatch (process name) : ready → running
▪ If a running process initiates an input/output operation before its time slice expires, the running process voluntarily releases the CPU. It is sent to the waiting queue and its state is marked as waiting/blocked.

This state transition is indicated as:

block (process name) : running → blocked


After the completion of the I/O task, the blocked or waiting process is restored and placed back in the ready queue, and its state is marked as ready. When the execution of a process ends, its state is marked as terminated and the operating system reclaims all the resources allocated to it. A process that terminates after its successful completion is said to make a normal exit. In certain cases, a process terminates or stops its execution prematurely.
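The transitions described above can be sketched as a small state machine. This is only an illustrative model in Python: the Process class and method names (admit, wakeup, etc.) are hypothetical labels for the transitions named in the text, not real OS structures.

```python
# Sketch of the five-state process model: new, ready, running,
# waiting, terminated, with the transitions described in the text.

class Process:
    def __init__(self, name):
        self.name = name
        self.state = "new"

    def admit(self):          # new -> ready (admitted to ready pool)
        assert self.state == "new"
        self.state = "ready"

    def dispatch(self):       # ready -> running (CPU assigned)
        assert self.state == "ready"
        self.state = "running"

    def timer_runout(self):   # running -> ready (time slice expired)
        assert self.state == "running"
        self.state = "ready"

    def block(self):          # running -> waiting (e.g. starts I/O)
        assert self.state == "running"
        self.state = "waiting"

    def wakeup(self):         # waiting -> ready (I/O completed)
        assert self.state == "waiting"
        self.state = "ready"

    def exit(self):           # running -> terminated (normal exit)
        assert self.state == "running"
        self.state = "terminated"

p = Process("P1")
p.admit(); p.dispatch(); p.block(); p.wakeup(); p.dispatch(); p.exit()
print(p.state)  # terminated
```

The assertions enforce that a process cannot skip states, matching the point above that transitions depend on the flow of execution.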

Conclusion: Thus, we studied the basic idea of process state transitions and how a process is managed by the operating system as it executes.
Case Study 2

Aim: - Prepare case study on the role of an operating system.

An operating system is the most important software that runs on a computer. It manages the computer's memory and processes, as well as all of its software and hardware. It also allows you to communicate with the computer without knowing how to speak the computer's language.

Views of operating system: -

User view –

The user view depends on the system interface used by the users. Some systems are designed for a single user to monopolize the resources, to maximize that user's work. In these cases, the OS is designed primarily for ease of use, with some attention to performance and little to resource utilization. The user viewpoint focuses on how the user interacts with the operating system through the various application programs.

▪ Single User View Point: - Most computer users use a monitor, keyboard, mouse, printer, and other accessories to operate their computer system. In some cases, the system is designed to maximize the output of a single user. As a result, more attention is paid to ease of use, and resource utilization is less important.
▪ Multiple User View Point: - Another user view, in which both user experience and performance matter, arises when one mainframe computer serves many users working at their own terminals or workstations. In such circumstances, CPU time and memory must be allocated effectively to give every user a good experience.
▪ Embedded System User View Point: - Some systems, like embedded systems, lack a user point of view. The remote control used to turn a TV on or off is part of an embedded system in which the electronic device communicates with another program; the user viewpoint is limited to the narrow ways the user can engage with the device.

Hardware View of Operating System:

▪ The operating system manages the resources efficiently in order to offer services to the user programs; it acts as a resource manager. The operating system is mainly used to control the hardware and coordinate its use among the various application programs for the different users.
▪ The computer hardware contains a central processing unit (CPU), the memory, and the input/output (I/O) devices, and it provides the basic computing resources for the system.

 Controls the execution of programs
 Controls the operations of I/O devices
 Protects the resources
 Monitors the data

System View of Operating System:

▪ The OS may also be viewed as just a resource allocator. A computer system comprises various resources, such as hardware and software, which must be managed effectively. The operating system manages the resources, decides between competing demands, controls the program execution, etc.

▪ According to this point of view, the operating system's purpose is to maximize performance. The
operating system is responsible for managing hardware resources and allocating them to programs
and users to ensure maximum performance.

▪ From the user point of view, we've discussed the numerous applications that require varying degrees
of user participation. However, we are more concerned with how the hardware interacts with the
operating system than with the user from a system viewpoint.

 Hardware upgrades
 New services
 Fixes the issues of resources
 Controls the user and hardware operations.

Conclusion: Thus, we studied the role of an operating system and how it works with all the essential parts of a computer.
 
Case Study 3

Aim: - Prepare case study on Multiprogramming, Multitasking and Multiprocessing with diagram.

Multiprogramming: -
It is the ability of an operating system to execute more than one program on a single-processor machine. More than one task or program can reside in the main memory at one point of time. In this scheme the CPU executes some part of one program, then continues with a part of another program, and so on. Because of this, the CPU never goes idle unless there is no process ready to execute at the time of context switching.
The diagram given below depicts multiprogramming −

Advantages
The advantages of multiprogramming are as follows −
 Very high CPU utilization.
 Less waiting time for the processes.
 Multiprogramming decreases the total execution time needed to complete a job.
 Allows multiple users.
 Increased resource utilization.
 Increased throughput.
 Improved memory utilization.

Multi-processing: -
Multiprocessing is the ability of an operating system to execute more than one process simultaneously on a multiprocessor machine. In this, a computer uses more than one CPU at a time. A system with two or more processors in the same computer, sharing all the resources like the system bus, memory, and other I/O devices, is called a multiprocessing system.
Advantages
The advantages of multiprocessing are as follows −
 As the workload is distributed evenly between the different processors, the results become more accurate and the reliability increases.
 This is an example of true parallel processing, meaning more than one process executes at the same time.
 By increasing the number of processors, more work can be completed in less time, which increases the throughput.
 Cost saving.

Multi-tasking: -

Multitasking is the ability of an operating system, and it is a logical extension of multiprogramming. It is the ability of an operating system to execute more than one task simultaneously on a single-processor machine.
Actually, no two tasks on a single-processor machine can execute at exactly the same time; the CPU switches from one task to the next so quickly that it appears all the tasks are executing at the same time. Multitasking is based on time sharing alongside the concept of context switching.

Advantages
The advantages of multi-tasking are as follows −
 It will reduce starvation because each process is given a particular time quantum for
execution.
 Saves time.
 Increases productivity.
 Prevents procrastination.
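The rapid switching described above can be sketched as a toy simulation. Everything here is hypothetical for illustration: the task names, the work units, and the one-unit quantum.

```python
# Toy simulation of time-sliced multitasking on one CPU: the scheduler
# rotates through the tasks, running each for one quantum, so the tasks
# appear to make progress "simultaneously".
from collections import deque

def run_time_sliced(tasks, quantum=1):
    """tasks: dict of name -> remaining work units. Returns the CPU trace."""
    queue = deque(tasks.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        trace.append(name)                    # CPU runs this task for a slice
        remaining -= min(quantum, remaining)
        if remaining > 0:
            queue.append((name, remaining))   # back to the rear of the queue
    return trace

print(run_time_sliced({"A": 2, "B": 1, "C": 2}))
# ['A', 'B', 'C', 'A', 'C']
```

The trace interleaves the tasks, which is exactly the context-switching behaviour that makes them appear simultaneous to the user.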

Conclusion: Thus, we studied the basic difference between multitasking, multiprogramming and multiprocessing systems. A multiprogramming system executes more than one program on a single-processor machine; a multiprocessing system executes more than one process simultaneously on a multiprocessor machine; and a multitasking system appears to execute more than one task simultaneously on a single-processor machine by switching rapidly between them.

Case Study 4

Aim: - Prepare case study on Deadlock Detection with diagram.

Deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track, and there is only one track: neither train can move once they are in front of each other.
A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by others.
For example, in the below diagram, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.

Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):
 Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a time).
 Hold and Wait: A process is holding at least one resource and waiting for additional resources held by other processes.
 No Preemption: A resource cannot be taken from a process unless the process releases it.
 Circular Wait: A set of processes are waiting for each other in a circular chain.
Deadlock Detection and Recovery:
In this approach, the OS does not apply any mechanism to avoid or prevent deadlocks; the system assumes that a deadlock will eventually occur. To get rid of deadlocks, the OS periodically checks the system for any deadlock. If it finds a deadlock, the OS recovers the system using some recovery technique.
The main task of the OS is detecting deadlocks. The OS can detect deadlocks with the help of a resource allocation graph.
For single-instance resource types, if a cycle forms in the graph then there is definitely a deadlock. On the other hand, for multiple-instance resource types, detecting a cycle is not enough: we have to apply the safety algorithm to the system by converting the resource allocation graph into an allocation matrix and a request matrix.
To recover the system from a deadlock, the OS either preempts resources or terminates processes.
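For single-instance resources, the detection step above reduces to finding a cycle in a wait-for graph (an edge from each process to the process whose resource it is waiting on). The sketch below encodes the two-process example from the text; the graph representation and function are illustrative, not an OS data structure.

```python
# Deadlock detection sketch for single-instance resources: build a
# wait-for graph and check it for a cycle with a depth-first search.

def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:        # back edge -> cycle
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 waits for a resource held by P2; P2 waits for one held by P1.
wait_for = {"P1": ["P2"], "P2": ["P1"]}
print(has_cycle(wait_for))  # True -> deadlock detected
```

If P2 were not waiting on P1, the graph would be acyclic and no deadlock would be reported, matching the single-instance rule stated above.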

Conclusion: - A deadlock in an OS is a situation in which two or more processes are blocked because each is holding a resource and also requires a resource that is held by another process. The four necessary conditions for a deadlock are mutual exclusion, no preemption, hold and wait, and circular wait.
Case Study 8

Aim: - Explain the concept of Preemptive vs non-preemptive scheduling.

What is Preemptive Scheduling?

Preemptive Scheduling is a scheduling method in which tasks are usually assigned priorities. Sometimes it is necessary to run a task with a higher priority before a lower-priority task, even if the lower-priority task is still running.

What is Non-Preemptive Scheduling?


In this type of scheduling method, once the CPU has been allocated to a specific process, that process keeps the CPU until it releases it, either by switching context or by terminating. It is the only method that can be used on all hardware platforms, because it does not need specialized hardware (for example, a timer) like preemptive scheduling does.
Preemptive vs Non-Preemptive Scheduling: Comparison Table

Preemptive Scheduling vs Non-preemptive Scheduling:

 Preemptive: A processor can be preempted to execute a different process in the middle of any current process execution. Non-preemptive: Once the processor starts executing a process, it must finish it before executing another; it can't be paused in the middle.
 Preemptive: CPU utilization is more efficient. Non-preemptive: CPU utilization is less efficient.
 Preemptive: Waiting and response times are lower. Non-preemptive: Waiting and response times are higher.
 Preemptive: Scheduling is priority-driven; the currently running process is the highest-priority ready process. Non-preemptive: Once a process enters the running state, it is not removed from the scheduler until it finishes its job.
 Preemptive: Flexible. Non-preemptive: Rigid.
 Preemptive: Examples – Shortest Remaining Time First, Round Robin, etc. Non-preemptive: Examples – First Come First Serve, Shortest Job First, Priority Scheduling, etc.
 Preemptive: A running process can be preempted and re-scheduled later. Non-preemptive: A running process cannot be re-scheduled until it completes.
 Preemptive: The CPU is allocated to a process for a specific time period. Non-preemptive: The CPU is allocated to a process until it terminates or switches to the waiting state.
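The waiting-time difference between the two families can be seen in a small simulation comparing non-preemptive FCFS with preemptive Round Robin. The burst times [5, 3, 1] and the quantum of 1 are hypothetical, and all jobs are assumed to arrive at time 0.

```python
# Toy comparison of non-preemptive FCFS vs preemptive Round Robin
# on the same workload (all jobs arrive at time 0).
from collections import deque

def fcfs_wait(bursts):
    """Average waiting time under First Come First Serve."""
    t, total_wait = 0, 0
    for b in bursts:
        total_wait += t     # each job waits until all earlier jobs finish
        t += b
    return total_wait / len(bursts)

def rr_wait(bursts, quantum):
    """Average waiting time under Round Robin with the given quantum."""
    n = len(bursts)
    queue = deque((i, b) for i, b in enumerate(bursts))
    t, finish = 0, [0] * n
    while queue:
        i, rem = queue.popleft()
        run = min(quantum, rem)
        t += run
        rem -= run
        if rem:
            queue.append((i, rem))    # preempted: back to the rear
        else:
            finish[i] = t
    # waiting time = finish time - burst time (arrival is 0 for every job)
    return sum(finish[i] - bursts[i] for i in range(n)) / n

bursts = [5, 3, 1]
print(fcfs_wait(bursts))   # ~4.33
print(rr_wait(bursts, 1))  # ~3.33
```

On this workload Round Robin yields a lower average waiting time than FCFS, consistent with the comparison table; with different bursts the outcome can differ.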

Advantages of Preemptive Scheduling –

 The preemptive scheduling method is more robust: one process cannot monopolize the CPU.
 The choice of running task is reconsidered after each interruption.
 Each event causes an interruption of the running task.

Advantages of Non-Preemptive Scheduling –

 Offers low scheduling overhead
 Tends to offer high throughput
 It is a conceptually very simple method

Disadvantages of Preemptive Scheduling –

 Needs extra computational resources for scheduling.
 The scheduler takes longer to suspend the running task, switch the context, and dispatch the new incoming task.

Disadvantages of Non-Preemptive Scheduling –

 It can lead to starvation, especially for real-time tasks.
 Bugs can cause a machine to freeze up.

Conclusion – Hence, we studied how processes are managed under preemptive vs non-preemptive scheduling.

Case Study 5

Aim: - Prepare case study on Memory Fragmentation.

The user of a computer continuously loads and unloads processes from the main memory. Processes are stored in blocks of the main memory. When there are some free memory blocks but still not enough to load a process, this condition is called fragmentation. The degree of fragmentation depends on the memory allocation scheme. In most cases, memory space is wasted; this is known as memory fragmentation.

Reason for memory fragmentation –

User processes and resources are loaded into and released from the main memory, and the processes are stored in blocks of main memory. During the loading and swapping of processes, many free spaces arise that cannot hold any other process because of their size. Because main memory is allocated dynamically, memory may be available, but not enough in one place to load another process.

Types of memory fragmentation: There are two types of memory fragmentation.


Internal Fragmentation

At the point when a process is assigned to a memory block, if that process is smaller than the requested memory space, it leaves a vacant space in the assigned memory block. The difference between the assigned and requested memory space is called internal fragmentation. Internal fragmentation commonly occurs when memory is divided into fixed-sized blocks.

The reason for internal fragmentation – It occurs when the memory assigned to a process is larger than the memory requested by the process; for example, when the memory space is divided into fixed-size blocks of 50 bytes.

Let us consider a process P1 that requests only 40 bytes, and a fixed-size block of 50 bytes is assigned to it. Now the difference between the allocated and requested memory is 10 bytes. This unutilized space is known as internal fragmentation.
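The arithmetic in this example can be captured in a one-line helper. This is an illustrative calculation only; the 50-byte block size and 40-byte request come from the example above.

```python
# Internal fragmentation for fixed-size block allocation: the wasted
# space inside the blocks handed to a process.
import math

def internal_fragmentation(request, block_size):
    """Bytes wasted when a request is rounded up to whole blocks."""
    blocks = math.ceil(request / block_size)   # whole blocks allocated
    return blocks * block_size - request

print(internal_fragmentation(40, 50))  # 10 bytes wasted, as in the example
```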

External Fragmentation

Typically, external fragmentation occurs in the case of dynamic or variable-size segmentation. The total space available in memory is sufficient to execute the process; however, this memory space is not contiguous, which prevents process execution.

The reason for external fragmentation – It occurs when the portions of free memory are not contiguous and each is too small to hold the waiting process.

For example, suppose RAM has a total of 20 KB of free space, but the space is fragmented rather than contiguous. If a process with a size of 20 KB wants to load into RAM, it cannot, because the free space is not contiguous.
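The 20 KB example can be checked mechanically. The hole sizes below are hypothetical: they total 20 KB of free memory, yet no single hole fits the process.

```python
# External fragmentation sketch: total free memory is sufficient,
# but no single contiguous hole is large enough for the process.

def can_load(process_size, free_holes):
    """A process needs one contiguous hole at least as big as itself."""
    return any(hole >= process_size for hole in free_holes)

free_holes = [8, 6, 4, 2]          # free hole sizes in KB
print(sum(free_holes))             # 20 -> enough memory in total
print(can_load(20, free_holes))    # False -> cannot load: not contiguous
```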
Solution to avoid internal fragmentation –

In basic terms, internal fragmentation can be decreased by allocating the smallest partition that is still sufficient for the process. The issue will not be solved completely, but it can be reduced to some extent.

Measures to avoid external fragmentation –

The solution to external fragmentation is compaction, i.e. rearrangement of memory contents. In this technique, all the contents of the memory are shifted and all the free memory is put together in one big block.
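Compaction can be sketched as sliding the allocated blocks together. The memory layout below (named blocks with None marking free slots) is a hypothetical illustration, not a real allocator's representation.

```python
# Compaction sketch: move all allocated blocks to one end so the free
# space merges into a single contiguous hole.

def compact(memory):
    """memory: list of block names, None = free slot. Returns compacted list."""
    used = [b for b in memory if b is not None]       # allocated blocks
    return used + [None] * (len(memory) - len(used))  # free space merged

print(compact(["A", None, "B", None, None, "C"]))
# ['A', 'B', 'C', None, None, None]
```

After compaction the three scattered free slots form one contiguous hole, which is exactly what lets a previously unloadable process fit.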

Conclusion – Hence, we studied how fragmentation occurs in the memory of a computer.

Case Study 6

Aim: - Prepare case study on Paging concept.

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving processes in the form of pages from secondary storage into the main memory is known as paging.

The basic purpose of paging is to divide each process into pages. Additionally, the main memory is split into frames. This scheme permits the physical address space of a process to be non-contiguous.
For example, if the main memory size is 16 KB and the frame size is 1 KB, the main memory will be divided into a collection of 16 frames of 1 KB each.

There are 4 separate processes in the system, A1, A2, A3, and A4, of 4 KB each. All the processes are divided into pages of 1 KB each, so that the operating system can store one page in one frame.

At the beginning, all the frames are empty, so all the pages of the processes are stored in a contiguous way.

In this example, A2 and A4 move to the waiting state after some time. Therefore, eight frames become empty, and other pages can be loaded into those empty blocks. A process A5 of 8 pages (8 KB) is waiting in the ready queue.
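The page-to-frame mapping described above can be sketched with a tiny address translator. The 1 KB page size matches the example; the particular page table contents are hypothetical.

```python
# Paging sketch: a page table maps each page of a process to a frame,
# and a logical address splits into (page number, offset).

PAGE_SIZE = 1024  # 1 KB pages and frames, as in the example

def translate(logical_addr, page_table):
    """Map a logical address to a physical address via the page table."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]            # which frame holds this page
    return frame * PAGE_SIZE + offset

# Hypothetical page table for a 4 KB (4-page) process: page -> frame.
# The frames need not be contiguous - that is the point of paging.
page_table = {0: 3, 1: 7, 2: 0, 3: 5}
print(translate(2100, page_table))  # page 2, offset 52 -> frame 0 -> 52
```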

Advantages of Paging

 Simple memory management algorithm
 No external fragmentation
 Swapping is easy between equal-sized pages and page frames.

Disadvantages of Paging

 May cause internal fragmentation
 Page tables consume additional memory.
 Multi-level paging may lead to memory reference overhead.


Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into the main memory in the form of pages.

The paging process can be protected by inserting an additional bit, called the valid/invalid bit, into the page table.

The biggest advantage of paging is that it is a simple memory management algorithm.

Paging may cause internal fragmentation.

Segmentation works almost like paging; the only difference between the two is that segments are of variable length, whereas in paging, pages are always of fixed size.

Sharing can be achieved by having segments referenced by multiple processes.

Segmentation is a costlier memory management algorithm.
