
Midterm Coverage

Process management is a critical function of operating systems that involves creating, scheduling, monitoring, and terminating processes while ensuring efficient resource allocation and system performance. It utilizes various techniques such as scheduling algorithms, synchronization, and inter-process communication, and differs between operating systems like Windows and Linux. Additionally, processes and threads are distinct entities, with processes being independent units of execution and threads being lightweight components within a process.


PROCESS AND THREAD

What is process management?

Process management is a fundamental aspect of operating systems that involves creating,
scheduling, monitoring, and terminating processes. It is responsible for allocating and sharing
resources among processes, managing their execution, and handling process communication and
synchronization. Its goals are efficient computation, system reliability and robustness, and user
responsiveness. Process management also provides control and visibility of system activities for
users and administrators.

How does the OS schedule processes?

The OS schedules processes through a component known as the scheduler. The scheduler determines
which processes run, when, and for how long, with a focus on efficient use of the CPU and overall
system performance. It uses techniques like preemptive or non-preemptive scheduling and priority
scheduling to manage the order and time allocation of processes. The specific scheduling policy
can differ between operating systems and can include First-Come, First-Served (FCFS), Shortest Job
Next (SJN), and Round Robin, amongst others.

What are the techniques used for process management in operating systems?

Process management in operating systems utilizes several techniques including process scheduling,
process synchronization, process communication, and deadlock handling. Other techniques also
encompass priority assignment, interruption handling, process switching, and address mapping. These
techniques are used to operate, organize, and manage processes effectively, ensuring optimal use of the
CPU and other resources. All these techniques together constitute the process management strategy of
an operating system.

How does process management differ in Windows and Linux?

Process management in Windows is typically handled through a graphical user interface, which
provides a user-friendly approach for individual users. Windows uses threads extensively for better
responsiveness and code separation. Linux process management, on the other hand, is largely
performed through the command-line interface, providing more control and flexibility to users with
technical knowledge. Linux traditionally treats threads as lightweight processes, although modern
versions have a more specialized implementation similar to that of Windows.

What algorithms are used for CPU scheduling of processes?

Several algorithms are used for CPU scheduling of processes in operating systems, including
First-Come, First-Served (FCFS), Shortest-Job-Next (SJN), Priority Scheduling, Round Robin (RR),
and Multilevel Queue Scheduling. Each algorithm has its own strategy for managing and allocating
CPU time to the processes that need it. The choice of which one to use depends on the specific
requirements and circumstances of the system.

What is the Difference Between Process and Thread?

What is Thread?

A thread is the smallest unit of execution within a process. It keeps track of its own position in
the program so it can pick up where it left off the next time it runs. A thread is often called a
lightweight process because it performs work much like a process does, but with less overhead.
Example of Thread

Consider a program that works with large amounts of data spread across many files. Opening the
files one after another may take a long time. Threads allow you to split the data into pieces and
process them in parallel, so the program can complete much faster.
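The scenario above can be sketched with Python's threading module; the dataset and the per-chunk work (summing numbers) are illustrative placeholders. In CPython, threads help most with I/O-bound work because of the global interpreter lock, but the structure is the same:

```python
import threading

# Split a large dataset into chunks and process each chunk in its own thread.
data = list(range(1_000_000))
num_threads = 4
chunk_size = len(data) // num_threads
results = [0] * num_threads

def process_chunk(index, chunk):
    # Each thread writes its partial result to its own slot,
    # so no lock is needed for this write.
    results[index] = sum(chunk)

threads = []
for i in range(num_threads):
    chunk = data[i * chunk_size:(i + 1) * chunk_size]
    t = threading.Thread(target=process_chunk, args=(i, chunk))
    threads.append(t)
    t.start()

for t in threads:
    t.join()  # wait for every worker to finish

total = sum(results)
print(total)
```

Combining the per-thread partial sums at the end gives the same answer as summing the whole list at once.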

Application of Thread

 While a movie plays on a device, separate threads control the audio and video in the
background.

 Downloading a video while playing a game at the same time is an example of multi-
threading.

 When processing data in a database, threads can cooperate: some threads write data to the
database in parallel while others read it back.

 Downloading files and viewing them at the same time is possible because threads are working
behind the scenes.

What is a Process?

A process is essentially an instance of a computer program that is currently being executed. It comprises
the program code and its current activity. It is worth noting that each process has its own separate
memory address space, meaning that it operates independently and remains isolated from other
processes. Processes can communicate with each other through inter-process communication (IPC)
mechanisms.

Key Characteristics of Process

 Independence: Each process operates independently and has a dedicated set of resources.

 Memory Allocation: Every process has its separate memory space, which is not shared with any
other process.

 Communication: Processes communicate with each other using IPC mechanisms such as pipes (a
temporary software connection between two programs or commands), sockets (one endpoint of a
two-way communication link between two programs running on a network), and shared memory (an
operating-system feature that allows processes and threads to share data through a common pool
of memory).

 Overhead: Creating a new process is resource-intensive since it requires allocating memory and
other resources.

Applications of Process

 Launching your browser to search for something on the web starts a process.

 Launching a music player to listen to some music of your choice starts a process.

 A running antivirus program is a process.

 When you compile a C or C++ program, the compiler produces binary code. Both the source code
and the binary are programs; when you actually run the binary, it becomes a process.
Difference between PROCESS and THREAD

 Definition – Process: an independent unit of execution containing its own memory space.
Thread: a lightweight, executable unit within a process.

 Memory Space – Process: has its own separate memory space. Thread: shares memory space with
other threads within the same process.

 Overhead – Process: higher overhead due to separate memory and resource allocation. Thread:
lower overhead, more efficient in resource usage.

 Execution – Process: operates independently. Thread: depends on the process; multiple threads
can run in parallel within one process.

 Communication – Process: communicates through IPC mechanisms. Thread: communicates directly
through shared memory.

 Control – Process: can be started, stopped, and controlled independently. Thread: controlled
within the context of a process.

 Resource Allocation – Process: each process has its own resources (files, variables, etc.).
Thread: shares the resources of the process it belongs to.

 Isolation – Process: processes are isolated from each other. Thread: threads can directly
affect each other within the same process.

 Failure Impact – Process: a failure in one process does not affect other processes. Thread: a
failure in one thread can affect all threads of the process.

 Creation Time – Process: longer creation time due to resource allocation. Thread: shorter
creation time since less resource allocation is needed.

 Use Case – Process: suitable for applications needing isolated execution. Thread: ideal for
tasks requiring frequent communication and shared resources.

A process is an active entity, unlike a program, which is considered a “passive” entity. A thread
is a sequence of instructions that executes within the context of its process. Threads consume
less memory than processes because they share the process's address space rather than owning a
separate one. A process can contain multiple threads, all drawing on the same memory and
resources.

States of a Process in Operating Systems

In an operating system, a process is a program that is being executed. During its execution, a process
goes through different states. Understanding these states helps us see how the operating system
manages processes, ensuring that the computer runs efficiently.

A process passes through at least five states during its execution, although the names of these
states are not standardized across operating systems. Each process goes through several stages
throughout its life cycle.

Process States in Operating System

The states of a process are as follows:

 New State: In this step, the process is about to be created but does not yet exist. The
program resides in secondary memory and will be picked up by the OS to create the process.

 Ready State: After creation, the process is loaded into main memory and enters the ready
state. The process is ready to run and waits to get CPU time for its execution. Processes that
are ready for execution by the CPU are maintained in a queue called the ready queue.

 Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available CPU cores.

 Wait State: Whenever a process requests I/O, needs input from the user, or needs access to a
critical region whose lock is already acquired, it enters the blocked or wait state. The process
continues to wait in main memory and does not require the CPU. Once the I/O operation completes,
the process returns to the ready state.

 Terminated or Completed State: The process is killed and its PCB is deleted. The resources
allocated to the process are released or deallocated.

How Does a Process Move From One State to Another?


A process can move between different states in an operating system based on its execution status and
resource availability. Here are some examples of how a process can move between different states:

 New to Ready: When a process is created, it is in a new state. It moves to the ready state when
the operating system has allocated resources to it and it is ready to be executed.

 Ready to Running: When the CPU becomes available, the operating system selects a process
from the ready queue depending on various scheduling algorithms and moves it to the running
state.

 Running to Wait: When a process needs to wait for an event to occur (I/O operation or system
call), it moves to the blocked state. For example, if a process needs to wait for user input, it
moves to the blocked state until the user provides the input.

 Running to Ready: When a running process is preempted by the operating system, it moves to
the ready state. For example, if a higher-priority process becomes ready, the operating system
may preempt the running process and move it to the ready state.

 Wait to Ready: When the event a blocked process was waiting for occurs, the process moves to
the ready state. For example, if a process was waiting for user input and the input is provided, it
moves to the ready state.

 Running to Terminated: When a process completes its execution or is terminated by the
operating system, it moves to the terminated state.
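The transitions listed above can be modeled as a small table; the state names mirror the text, while the transition descriptions and the `move` helper are illustrative:

```python
# A toy model of the process state transitions described above.
# Real kernels use far richer structures; this only encodes the legal moves.
TRANSITIONS = {
    ("new", "ready"): "admitted: resources allocated",
    ("ready", "running"): "dispatched by the scheduler",
    ("running", "waiting"): "blocked on I/O or an event",
    ("running", "ready"): "preempted by a higher-priority process",
    ("waiting", "ready"): "awaited event occurred",
    ("running", "terminated"): "finished or killed",
}

def move(state, new_state):
    # Refuse transitions the model does not allow (e.g. waiting -> running).
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# Walk one process through a typical life cycle.
state = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, nxt)
print(state)
```

Note that a waiting process can never jump straight back to running: it must first re-enter the ready queue, exactly as the "Wait to Ready" transition above describes.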

Types of Schedulers

 Long-Term Scheduler: Decides how many processes should be made to stay in the ready state.
This decides the degree of multiprogramming. Once a decision is taken it lasts for a long time
which also indicates that it runs infrequently. Hence it is called a long-term scheduler.

 Short-Term Scheduler: The short-term scheduler decides which process is to be executed next
and then calls the dispatcher. The dispatcher is the software component that moves a process from
ready to running and vice versa; in other words, it performs context switching. The short-term
scheduler runs frequently and is also called the CPU scheduler.

 Medium-Term Scheduler: The suspension decision is taken by the medium-term scheduler. It
handles swapping, which moves a process from main memory to secondary memory and vice versa.
Swapping is done to reduce the degree of multiprogramming.

Multiprogramming

We have many processes ready to run. There are two types of multiprogramming:

 Preemption – Process is forcefully removed from CPU. Pre-emption is also called time sharing or
multitasking.

 Non-Preemption – A process is not removed until it completes its execution. Once the CPU is
allocated to a process, control cannot be taken back forcibly; the process must release the CPU
by itself.

Degree of Multiprogramming

The maximum number of processes that can reside in the ready state determines the degree of
multiprogramming. For example, if the degree of multiprogramming is 100, then at most 100
processes can reside in the ready state.
Operation on The Process

 Creation: Once created, the process enters the ready queue in main memory, prepared for
execution.

 Scheduling: The operating system picks one process to begin executing from among the numerous
processes currently in the ready queue. Scheduling is the act of choosing the next process to
run.

 Execution: The processor begins running the process as soon as it is scheduled. During
execution, a process may become blocked or wait, at which point the processor switches to
executing other processes.

 Killing or Deletion: The OS terminates the process once its purpose has been fulfilled, and
the process's context is destroyed.

 Blocking: When a process is waiting for an event or resource, it is blocked. The operating system
will place it in a blocked state, and it will not be able to execute until the event or resource
becomes available.

 Resumption: When the event or resource that caused a process to block becomes available, the
process is removed from the blocked state and added back to the ready queue.

 Context Switching: When the operating system switches from executing one process to another,
it must save the current process’s context and load the context of the next process to execute.
This is known as context switching.

 Inter-Process Communication: Processes may need to communicate with each other to share
data or coordinate actions. The operating system provides mechanisms for inter-process
communication, such as shared memory, message passing, and synchronization primitives.

 Process Synchronization: Multiple processes may need to access a shared resource or critical
section of code simultaneously. The operating system provides synchronization mechanisms to
ensure that only one process can access the resource or critical section at a time.

 Process States: Processes may be in one of several states, including ready, running, waiting, and
terminated. The operating system manages the process states and transitions between them.

PROCESS CONTROL BLOCK (PCB)

A process control block (PCB), also sometimes called a process descriptor, is a data structure
used by an operating system to store all the information about a process. The operating system
maintains a PCB for each process, which contains information about the process state, priority,
scheduling information, and other process-related data.
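As a rough sketch, a PCB can be pictured as a record like the following; the field names are illustrative, and real PCBs (such as Linux's task_struct) hold far more:

```python
from dataclasses import dataclass, field

# A simplified, illustrative sketch of the kind of fields a PCB holds.
@dataclass
class PCB:
    pid: int
    state: str = "new"          # new / ready / running / waiting / terminated
    priority: int = 0
    program_counter: int = 0    # where to resume execution after a context switch
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

# The OS would create one PCB per process and update it as the process
# moves through its life cycle.
pcb = PCB(pid=42, priority=5)
pcb.state = "ready"
print(pcb.pid, pcb.state)
```

During a context switch, the OS saves the running process's register contents and program counter into its PCB, then loads those of the next process, which is exactly what makes resuming a process possible.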

In conclusion, understanding the states of a process in an operating system is essential for
comprehending how the system efficiently manages multiple processes. These states (new, ready,
running, waiting, and terminated) represent different stages in a process's life cycle. By transitioning
through these states, the operating system ensures that processes are executed smoothly, resources are
allocated effectively, and the overall performance of the computer is optimized. This knowledge helps us
appreciate the complexity and efficiency behind the scenes of modern computing.
CPU Scheduling in Operating Systems

Scheduling of processes is done to finish work on time. CPU scheduling allows one process to use
the CPU while another process is delayed (for example, waiting for an I/O resource), thus making
full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster,
and fairer.

What is The Need For CPU Scheduling Algorithm?

CPU scheduling is the process of deciding which process will own the CPU while another process is
suspended. Its main function is to ensure that whenever the CPU becomes idle, the OS selects one
of the processes waiting in the ready queue.

Objectives of Process Scheduling Algorithm

 Utilization of CPU at maximum level. Keep CPU as busy as possible.

 Allocation of CPU should be fair.

 Throughput should be maximum, i.e., the number of processes that complete their execution per
time unit should be maximized.

 Minimum turnaround time, i.e., the time taken by a process to finish execution should be the least.

 There should be a minimum waiting time and the process should not starve in the ready queue.

 Minimum response time, i.e., the time at which a process produces its first response should
be as short as possible.

Terminologies Used in CPU Scheduling

 Arrival Time: Time at which the process arrives in the ready queue.

 Completion Time: Time at which process completes its execution.

 Burst Time: Time required by a process for CPU execution.

 Turn Around Time: Total time from arrival to completion (Completion Time - Arrival Time).

 Waiting Time (W.T.): Time a process has spent waiting in the ready queue (Turnaround Time -
Burst Time).
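These terminologies are related by two identities: Turnaround Time = Completion Time - Arrival Time, and Waiting Time = Turnaround Time - Burst Time. A quick sketch with illustrative numbers:

```python
# Turnaround time: how long the process spent in the system overall.
def turnaround(arrival, completion):
    return completion - arrival

# Waiting time: turnaround time minus the time actually spent executing.
def waiting(arrival, completion, burst):
    return turnaround(arrival, completion) - burst

# A process that arrives at t=0, needs 4 units of CPU, and finishes at t=7
# has spent 7 units in the system and 3 of them waiting.
tat = turnaround(arrival=0, completion=7)
wt = waiting(arrival=0, completion=7, burst=4)
print(tat, wt)
```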

What Are The Different Types of CPU Scheduling Algorithms?

There are mainly two types of scheduling methods:

 Preemptive Scheduling: Preemptive scheduling is used when a process switches from running
state to ready state or from the waiting state to the ready state.

 Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or
when a process switches from the running state to the waiting state.

Let us now learn about these CPU scheduling algorithms in operating systems one by one:

1. First Come First Serve (Non-Pre-emptive)

FCFS is considered the simplest of all operating system scheduling algorithms. The first come,
first serve scheduling algorithm states that the process that requests the CPU first is allocated
the CPU first, and it is implemented using a FIFO queue.

Characteristics of FCFS

 FCFS is a non-preemptive CPU scheduling algorithm: once a process gets the CPU, it runs to completion.

 Tasks are always executed on a First-come, First-serve concept.

 FCFS is easy to implement and use.

 This algorithm is not very efficient in performance, and the wait time is quite high.

Advantages of FCFS

 Easy to implement

 First come, first serve method

Disadvantages of FCFS

 FCFS suffers from the convoy effect: short processes get stuck behind long ones.

 The average waiting time is much higher than with other algorithms.

 Its simplicity comes at the cost of efficiency.

EXAMPLE:
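A minimal FCFS simulation can be sketched as follows; the process set of (pid, arrival time, burst time) tuples is illustrative:

```python
# FCFS: run processes in arrival order, each to completion (non-preemptive).
def fcfs(processes):
    # processes: list of (pid, arrival_time, burst_time)
    time = 0
    schedule = {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)      # CPU may sit idle until the process arrives
        time += burst                  # run the whole burst without preemption
        turnaround = time - arrival
        waiting = turnaround - burst
        schedule[pid] = (time, turnaround, waiting)
    return schedule

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
for pid, (completion, tat, wt) in fcfs(procs).items():
    print(pid, completion, tat, wt)
```

With these numbers P2 arrives at t=1 but must wait until P1 finishes at t=5, which is the convoy effect in miniature: every later process inherits the delay of the long job in front.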

2. Shortest Job First(SJF)

Shortest Job First (SJF) is a scheduling method that selects the waiting process with the smallest
execution time to execute next. It may or may not be preemptive, and it significantly reduces the
average waiting time of the processes waiting to be executed.

Characteristics of SJF

 Shortest Job first has the advantage of having a minimum average waiting time among
all operating system scheduling algorithms.

 Each job has an associated unit of time to complete (its burst time).

 It may cause starvation if shorter processes keep coming. This problem can be solved using the
concept of ageing.

Advantages of SJF

 As SJF reduces the average waiting time thus, it is better than the first come first serve
scheduling algorithm.

 SJF is generally used for long-term scheduling.

Disadvantages of SJF

 One of the demerits of SJF is starvation.

 It is often complicated to predict the length of the upcoming CPU burst.

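Non-preemptive SJF can be sketched as follows: among the processes that have already arrived, always pick the one with the smallest burst time. The process set is illustrative:

```python
# Non-preemptive SJF: of the arrived processes, run the shortest burst next.
def sjf(processes):
    # processes: list of (pid, arrival_time, burst_time)
    remaining = sorted(processes, key=lambda p: p[1])
    time = 0
    order = []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = remaining[0][1]   # CPU idle: jump to the next arrival
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])
        remaining.remove((pid, arrival, burst))
        time += burst                # run to completion, no preemption
        order.append((pid, time))    # record (pid, completion time)
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

Notice the starvation risk the text mentions: while P1 runs, the short job P3 jumps ahead of P2 and P4; a steady stream of short jobs could delay a long one indefinitely.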

3. Round Robin

Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed
time slot (quantum). It is the preemptive version of the First Come First Serve CPU scheduling
algorithm and generally focuses on the time-sharing technique.

Characteristics of Round robin

 It’s simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.

 It is one of the most widely used methods in CPU scheduling.

 It is considered preemptive, as each process is given the CPU for only a limited time slice.

Advantages of Round robin

 Round robin seems to be fair as every process gets an equal share of CPU.

 The newly created process is added to the end of the ready queue.

EXAMPLE:
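Round Robin can be sketched with a simple queue; to keep the example short, all processes are assumed to arrive at time 0, and the quantum and burst times are illustrative:

```python
from collections import deque

# Round Robin: each process runs for at most one quantum, then goes
# to the back of the queue if it still has work left.
def round_robin(processes, quantum):
    # processes: list of (pid, burst_time), all assumed to arrive at time 0
    queue = deque(processes)
    time = 0
    completion = {}
    while queue:
        pid, burst = queue.popleft()
        if burst <= quantum:
            time += burst            # finishes within this time slice
            completion[pid] = time
        else:
            time += quantum          # preempted: back to the end of the queue
            queue.append((pid, burst - quantum))
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
```

The quantum is the key design choice: too small and the CPU spends its time context switching; too large and Round Robin degenerates into FCFS.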

4. Priority Scheduling

Preemptive priority scheduling is a preemptive CPU scheduling algorithm that works based on the
priority of a process: the scheduler always runs the most important (highest-priority) process
first. In the case of a conflict, that is, when there is more than one process with equal
priority, the algorithm falls back to FCFS (First Come First Serve) order.

Characteristics of Priority Scheduling

 Schedules tasks based on priority.

 When a higher-priority process arrives while a lower-priority process is executing, the
higher-priority process preempts it, and the lower-priority process is suspended until the
higher-priority one completes its execution.

 The lower the number assigned, the higher the priority level of the process.

Advantages of Priority Scheduling

 The average waiting time is less than FCFS

 Less complex

Disadvantages of Priority Scheduling

 One of the most common demerits of the preemptive priority CPU scheduling algorithm is the
starvation problem, in which a low-priority process may have to wait a very long time before it
is scheduled onto the CPU.

EXAMPLE:
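Preemptive priority scheduling can be sketched as a tick-by-tick simulation; a lower number means higher priority, matching the text, and ties fall back to arrival order. The process set is illustrative:

```python
# Preemptive priority scheduling, simulated one time unit at a time.
def priority_preemptive(processes):
    # processes: list of (pid, arrival_time, burst_time, priority)
    remaining = {pid: burst for pid, _, burst, _ in processes}
    info = {pid: (arrival, prio) for pid, arrival, _, prio in processes}
    time = 0
    completion = {}
    while remaining:
        ready = [pid for pid in remaining if info[pid][0] <= time]
        if not ready:
            time += 1                # CPU idle until the next arrival
            continue
        # Re-evaluating every tick is what makes this preemptive: a newly
        # arrived higher-priority process wins the CPU immediately.
        # Smallest priority number wins; earlier arrival breaks ties (FCFS).
        pid = min(ready, key=lambda p: (info[p][1], info[p][0]))
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = time
    return completion

print(priority_preemptive([("P1", 0, 4, 2), ("P2", 1, 2, 1), ("P3", 2, 3, 3)]))
```

Here P2 (priority 1) preempts P1 at t=1 and runs to completion first, while P3 (priority 3) must wait for both: with a steady supply of higher-priority arrivals, P3 would starve, which is exactly the demerit described above.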
