
OS UNIT II

Q1. Explain Process control Block.

Introduction:

A process control block (PCB) is a data structure that holds information about a process, such as its registers, time quantum, and priority.

The process table is an array of PCBs; logically, it contains one PCB for every current process in the system.

The PCB is used to track a process's execution status. Each PCB records the process state, program
counter, stack pointer, and status of opened files.
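The fields described above can be sketched as a simple data structure. This is a minimal Python illustration, not the layout of any real kernel's PCB; the field names are assumptions chosen to match the text:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal sketch of a process control block."""
    pid: int
    state: str = "new"            # new, ready, running, waiting, terminated
    program_counter: int = 0      # address of the next instruction to execute
    stack_pointer: int = 0        # top of the process's stack
    registers: dict = field(default_factory=dict)
    priority: int = 0
    open_files: list = field(default_factory=list)  # status of opened files

# the process table is logically an array of PCBs, one per current process
process_table = [PCB(pid=1), PCB(pid=2, state="ready", priority=5)]
```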

Following are the advantages of the Process Control Block:

Efficient process management: The process table and PCB provide an efficient way to manage processes in an operating system. The process table contains all the information about each process, while the PCB contains the current state of the process, such as the program counter and CPU registers.

Resource management: The process table and PCB allow the operating
system to manage system resources, such as memory and CPU time,
efficiently. By keeping track of each process’s resource usage, the operating
system can ensure that all processes have access to the resources they
need.

Process synchronization: The process table and PCB can be used to synchronize processes in an operating system. The PCB contains information about each process's synchronization state, such as its waiting status and the resources it is waiting for.

Process scheduling: The process table and PCB can be used to schedule
processes for execution. By keeping track of each process’s state and
resource usage, the operating system can determine which processes should
be executed next.
However, apart from these advantages, there are certain disadvantages:

Overhead: The process table and PCB can introduce overhead and reduce
system performance. The operating system must maintain the process table
and PCB for each process, which can consume system resources.

Complexity: The process table and PCB can increase system complexity
and make it more challenging to develop and maintain operating systems.
The need to manage and synchronize multiple processes can make it more
difficult to design and implement system features and ensure system
stability.

Scalability: The process table and PCB may not scale well for large-scale
systems with many processes. As the number of processes increases, the
process table and PCB can become larger and more difficult to manage
efficiently.

Security: The process table and PCB can introduce security risks if they are
not implemented correctly. Malicious programs can potentially access or
modify the process table and PCB to gain unauthorized access to system
resources or cause system instability.
Q2. State the difference between program and process.

Following are the differences between a program and a process:

1. Definition: A program is a set of instructions designed to complete a specific task. A process is an instance of an executing program.

2. Entity type: A program is a passive entity residing in secondary memory. A process is an active entity created during execution and loaded into main memory.

3. Lifespan: A program exists in a single place and continues to exist until it is deleted. A process exists for a limited span of time and gets terminated after completing its task.

4. Nature: A program is a static entity. A process is a dynamic entity.

5. Resource requirement: A program requires only memory space for storing its instructions. A process requires resources such as CPU, memory addresses, and I/O during its lifetime.

6. Control block: A program does not have any control block. A process has its own control block, called the Process Control Block (PCB).

7. Components: A program contains two logical components: code and data. A process, in addition to program data, requires additional information for its management and execution.

8. Mutability: A program does not change itself. A process can involve multiple instances of a single program, where the program code may be the same but the program data may differ.

9. Instructions: A program contains instructions. A process is a sequence of instruction execution.

Q3. What are schedulers? Explain their types.

Introduction:
Schedulers in an operating system are specialized components responsible
for managing the order in which processes are executed by the CPU. They
play a crucial role in optimizing the performance and efficiency of the system
by determining the sequence of process execution. There are three main
types of schedulers, each with distinct functions and operational frequencies:
long-term, short-term, and medium-term schedulers.

Long-term Scheduler:
The long-term scheduler, also known as the job scheduler, controls the
admission of processes into the system. It decides which processes from the
job queue should be moved to the ready queue, thereby determining the
degree of multiprogramming (the number of processes in memory at one
time). This scheduler runs infrequently because it makes decisions that have
long-lasting effects on the system's load and performance. Its primary goal is
to maintain a balanced mix of I/O-bound and CPU-bound processes, ensuring
efficient utilization of system resources.

Short-term Scheduler:
The short-term scheduler, or CPU scheduler, is responsible for selecting
which process from the ready queue should be executed next by the CPU.
This scheduler runs very frequently, as it needs to make quick decisions to
manage context switching—the process of saving and loading the state of
processes. The short-term scheduler's efficiency directly affects the
responsiveness and throughput of the system, as it determines the order in
which processes access the CPU.

Medium-term Scheduler:
The medium-term scheduler manages the swapping of processes between
the main memory and secondary storage. Its primary function is to control
the level of multiprogramming by temporarily suspending (swapping out)
and resuming (swapping in) processes. This helps in managing the memory
more efficiently and ensuring that active processes have enough resources
to execute. By reducing the number of processes in memory, it prevents
overloading and allows the system to maintain better performance.

In summary: schedulers are essential for effective process management in an operating system. The long-term scheduler determines which processes enter the system, the short-term scheduler decides which process runs next, and the medium-term scheduler manages the suspension and resumption of processes to optimize memory use and system performance.
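The short-term scheduler's role can be sketched as picking the next process from a ready queue. This toy example uses plain FIFO order; real CPU schedulers also weigh priority and time slices, and the process names here are hypothetical:

```python
from collections import deque

# toy ready queue of process names (FIFO order for illustration only)
ready_queue = deque(["P1", "P2", "P3"])

def dispatch(queue):
    """Select the next ready process; a context switch would happen here."""
    return queue.popleft() if queue else None

running = dispatch(ready_queue)   # P1 is given the CPU
ready_queue.append(running)       # after its time slice, P1 rejoins the ready queue
```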

Q4. What are Threads? Explain need and components of threads.

Introduction:

Threads in the context of operating systems refer to individual sequences of execution within a process. They are sometimes called lightweight processes because they share some properties with processes but are more lightweight in terms of resource consumption. Each thread belongs to exactly one process, but a process can consist of multiple threads.

Definition of Threads:

A thread is a single sequence stream within a process. It represents a single sequential activity being executed within a process. Threads are also known as "threads of execution" or "threads of control."

Need for Threads:

1. Parallel Execution: Threads allow for parallel execution within an application, which can improve performance by utilizing multiple CPUs or CPU cores efficiently.

2. Resource Sharing: Threads within the same process share the same
address space and resources, which allows them to communicate and share
data more efficiently than separate processes.
3. Synchronization: Threads can synchronize their activities through
mechanisms such as locks, semaphores, and monitors, enabling coordinated
execution and data sharing while avoiding conflicts.

4. Efficiency: Threads are more lightweight than processes, as they share resources such as memory and file descriptors. Creating and managing threads is generally faster and consumes fewer resources than creating processes.

Components of Threads:

1. Stack Space: Each thread has its own stack space where local variables
and function call information are stored. This allows threads to have
independent function call hierarchies.

2. Register Set: Threads have their own set of CPU registers, including the
program counter (PC) and other general-purpose registers. These registers
hold the current execution state of the thread.

3. Program Counter (PC): The program counter keeps track of the address
of the next instruction to be executed by the thread.

4. Thread Control Block (TCB): Like processes, each thread has its own
thread control block that contains information about the thread's state,
priority, register contents, stack pointer, and other relevant information.
During a context switch, the contents of the CPU registers are saved into the
thread's TCB, allowing the thread to be resumed later.

Threads provide a way to achieve concurrency and parallelism within a single process, facilitating efficient utilization of system resources and improved application performance.
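The resource-sharing and synchronization points above can be illustrated with Python's threading module: the threads share one address space (the counter variable), so a lock is needed to coordinate updates. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()   # synchronization primitive (see point 3 above)

def worker():
    """Increment the shared counter 1000 times under the lock."""
    global counter
    for _ in range(1000):
        with lock:        # threads share the same address space,
            counter += 1  # so updates to shared data must be synchronized

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```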
Q5. What is CPU scheduling?

Introduction:

CPU scheduling orchestrates the allocation of the central processing unit (CPU) among multiple processes in an operating system. It aims to optimize
CPU utilization, ensure fairness among processes, and enhance system
efficiency and responsiveness. The need for CPU scheduling arises from the
inherent asymmetry between CPU speed and the speed of input/output (I/O)
devices, leading to periods of CPU idle time during which processes are
waiting for I/O operations to complete. To address this, CPU scheduling
selects the next process to execute when the current process finishes its CPU
burst or is blocked by an I/O operation, maximizing overall system efficiency
by keeping the CPU busy.

Key points regarding CPU scheduling:


1. Maximizing CPU utilization: Ensures that the CPU is utilized efficiently,
minimizing idle time and maximizing the throughput of the system.

2. Fairness among processes: Allocates CPU time fairly among processes to prevent any single process from monopolizing the CPU and ensures equitable access to computing resources.

3. Criteria for evaluating scheduling algorithms: Include CPU utilization, throughput, turnaround time, waiting time, completion time, priority, and predictability.

4. CPU scheduling algorithms: Designed to balance competing objectives such as maximizing CPU utilization while minimizing response time and ensuring fairness.

5. Importance in multitasking environments: Enables the operating system to manage the execution of multiple processes concurrently, enhancing system performance and responsiveness.

In summary: CPU scheduling is a vital aspect of operating system management, employing algorithms to optimize CPU utilization, ensure fairness among processes, and meet specific performance criteria. By efficiently allocating CPU resources, CPU scheduling plays a crucial role in enhancing system performance and responsiveness in multitasking environments.
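The evaluation criteria above (turnaround time, waiting time) follow directly from a process's arrival, burst, and completion times. A small worked example with illustrative numbers:

```python
# illustrative times for a single process (all in the same time units)
arrival, burst, completion = 0, 5, 12

turnaround = completion - arrival   # total time spent in the system
waiting = turnaround - burst        # time spent ready but not running
```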

Q6. Explain: a) FCFS b) SJN

Introduction:

FCFS Scheduling: Prioritizes processes based on their arrival time, executing them in the order they arrive. It is simple but may lead to inefficiency and longer waiting times.

a) First-Come, First-Served (FCFS) Scheduling:


1. FCFS is a non-preemptive scheduling algorithm where processes are executed in the order they arrive in the ready queue.

2. It is simple to understand and implement, as it operates on the principle of a FIFO (First In, First Out) queue.

3. When a process reaches the head of the ready queue, it is assigned the CPU and continues to execute until it completes its CPU burst.

4. Since it does not consider the burst times of processes, it may lead to poor performance in terms of average waiting time, especially if long processes arrive first, causing shorter processes to wait excessively.

5. The average waiting time is calculated by subtracting each process's arrival time from the time it begins service and then averaging these values across all processes.

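The average-waiting-time calculation for FCFS can be sketched as a short simulation. The process set here is made up for illustration:

```python
def fcfs_waiting_times(processes):
    """processes: list of (arrival_time, burst_time), sorted by arrival."""
    clock, waits = 0, []
    for arrival, burst in processes:
        clock = max(clock, arrival)    # CPU may sit idle until the process arrives
        waits.append(clock - arrival)  # time spent waiting in the ready queue
        clock += burst                 # non-preemptive: run to completion
    return waits

waits = fcfs_waiting_times([(0, 8), (1, 4), (2, 2)])
average_wait = sum(waits) / len(waits)  # (0 + 7 + 10) / 3
```

Note how the long first job (burst 8) forces both shorter jobs to wait, illustrating why FCFS can perform poorly on average waiting time.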

Introduction:

SJN Scheduling: Selects the process with the shortest burst time for
execution to minimize average waiting time. Optimal for batch systems but
may not work well for unpredictable execution times in interactive systems.
b) Shortest Job Next (SJN) Scheduling:

1. SJN, also known as Shortest Job First (SJF), can be implemented as a non-preemptive or a preemptive scheduling algorithm.

2. In SJN, the scheduler selects the process with the shortest burst time from the ready queue for execution.

3. It aims to minimize the average waiting time, making it an optimal algorithm in terms of average waiting time.

4. SJN is suitable for batch systems where the required CPU time of each process is known in advance.

5. However, it is not suitable for interactive systems, where the required CPU time is not predictable, because it requires prior knowledge of process execution times.

6. The average waiting time is calculated similarly to FCFS, by subtracting each process's arrival time from the time it begins service and then averaging these values across all processes.

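Non-preemptive SJN can be sketched as follows: whenever the CPU becomes free, run the shortest job among those that have already arrived. The process set is made up for illustration:

```python
def sjn_order(processes):
    """Non-preemptive SJN. processes: list of (name, arrival, burst)."""
    pending = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    clock, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                  # CPU idle: jump ahead to the next arrival
            clock = pending[0][1]
            ready = [p for p in pending if p[1] <= clock]
        job = min(ready, key=lambda p: p[2])   # shortest burst time first
        pending.remove(job)
        order.append(job[0])
        clock += job[2]                # non-preemptive: run to completion
    return order

# P1 arrives first and runs; then shorter jobs P3 and P2 overtake P4
order = sjn_order([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1), ("P4", 3, 6)])
```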

These points provide an overview of the FCFS and SJN scheduling algorithms, their characteristics, and how they operate in terms of process execution.

Q7. Explain types of threads with their advantages and disadvantages.
Introduction:

Threads in the context of operating systems refer to individual sequences of execution within a process. They are sometimes called lightweight processes because they share some properties with processes but are more lightweight in terms of resource consumption. Each thread belongs to exactly one process, but a process can consist of multiple threads.

There are two types of threads, which are described below:

 User-Level Threads

 Kernel-Level Threads

1. User Level Threads

A user-level thread is a type of thread that is not created using system calls. The kernel plays no part in the management of user-level threads; they are implemented and managed entirely in user space by a thread library. From the kernel's point of view, a process containing user-level threads is treated as a single-threaded process. The advantages and disadvantages of user-level threads are as follows.

Advantages of User-Level Threads:

1. Implementation of user-level threads is easier than kernel-level threads.

2. Context switch time is less for user-level threads.

Disadvantages of User-Level Threads:

1. There is a lack of coordination between the threads and the kernel.

2. In case of a page fault, the whole process can be blocked.


2. Kernel Level Threads

A kernel-level thread is a type of thread that the operating system recognizes and manages directly. The kernel maintains its own thread table, where it keeps track of all threads in the system. Kernel-level threads have somewhat longer context-switch times, since the kernel is involved in every switch.

Advantages of Kernel-Level Threads:

1. The kernel has up-to-date information on all threads.

2. Applications that block frequently are handled well by kernel-level threads, since the kernel can schedule another thread when one blocks.

Disadvantages of Kernel-Level threads:

1. Kernel-level threads are slower than user-level threads.

2. Implementation of this type of thread is a little more complex than a user-level thread.
Q8. State difference between pre-emptive and non-pre-emptive
scheduling.


Introduction
1. Pre-emption – A process is forcefully removed from the CPU. Pre-emption is also called time sharing or multitasking.

2. Non-pre-emption – Processes are not removed from the CPU until they complete their execution.

Following are the differences between preemptive and non-preemptive scheduling:

1. Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.

2. Interrupt: In preemptive scheduling, a process can be interrupted in between execution. In non-preemptive scheduling, a process cannot be interrupted until it terminates itself or its time is up.

3. Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later-arriving process with a shorter burst time may starve.

4. Overhead: Preemptive scheduling has higher overhead due to frequent context switching. Non-preemptive scheduling has lower overhead, since context switching is less frequent.

5. Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.

6. Cost: Preemptive scheduling has an associated cost; non-preemptive scheduling has no associated cost.

7. CPU Utilization: CPU utilization is high in preemptive scheduling and low in non-preemptive scheduling.

8. Waiting Time: Waiting time is less in preemptive scheduling and high in non-preemptive scheduling.

9. Response Time: Response time is less in preemptive scheduling and high in non-preemptive scheduling.

10. Decision Making: In preemptive scheduling, decisions are made by the scheduler and are based on priority and time-slice allocation. In non-preemptive scheduling, decisions are made by the process itself, and the OS just follows the process's instructions.

11. Process Control: The OS has greater control over the scheduling of processes in preemptive scheduling and less control in non-preemptive scheduling.

12. Examples: Examples of preemptive scheduling are Round Robin and Shortest Remaining Time First. Examples of non-preemptive scheduling are First Come First Serve and Shortest Job First.
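The contrast above can be illustrated with round robin, a preemptive algorithm: each process is forcibly removed from the CPU after a fixed time quantum. A minimal sketch with made-up burst times:

```python
from collections import deque

def round_robin(processes, quantum):
    """Preemptive round robin: each process runs at most `quantum` units,
    then is forcibly moved to the back of the ready queue.
    processes: list of (name, burst_time); all arrive at time 0."""
    queue = deque(processes)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))
        if remaining > run:                 # pre-empted before finishing:
            queue.append((name, remaining - run))  # back of the ready queue
    return timeline

# P1 and P2 alternate on the CPU instead of P1 monopolizing it
timeline = round_robin([("P1", 5), ("P2", 3)], quantum=2)
```

Under FCFS (non-preemptive), P2 would wait the full 5 units for P1 to finish; here it gets the CPU after only 2.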

