ETE OS Sol
1- a) FIFO replacement
b) LRU replacement
c) Optimal Page Replacement. -10
1. First In First Out (FIFO): This is the simplest page replacement algorithm. In
this algorithm, the operating system keeps track of all pages in memory in a queue,
with the oldest page at the front of the queue. When a page needs to be replaced,
the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page
frames. Find the number of page faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty
slots —> 3 Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not
available in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6
comes; it is also not available in memory, so it replaces the oldest page, i.e. 3
—> 1 Page Fault. Finally, when 3 comes, it is not available, so it replaces 0 —> 1
Page Fault.
Total: 6 Page Faults.
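As a quick check, here is a short Python sketch of the same procedure (illustrative only; the function name is made up for this sketch). It reproduces the 6 faults counted above:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:                 # page fault
            faults += 1
            if len(frames) == num_frames:      # memory full: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)                 # newest page goes to the back
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # prints 6
```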
2. Optimal Page Replacement: In this algorithm, the page that will not be used for
the longest duration of time in the future is replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4
page frames. Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty
slots —> 4 Page Faults.
0 is already there, so —> 0 Page Fault. When 3 comes, it takes the place of 7
because 7 is not used for the longest duration of time in the future —> 1 Page
Fault. 0 is already there, so —> 0 Page Fault. 4 takes the place of 1 —> 1 Page
Fault.
For the remaining page references —> 0 Page Faults, because those pages are already
available in memory. Total: 6 Page Faults.
Optimal page replacement is perfect, but not possible in practice as the operating
system cannot know future requests. The use of Optimal Page replacement is to set
up a benchmark so that other replacement algorithms can be analyzed against it.
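A short Python sketch of the idea (illustrative only): on a fault with all frames full, evict the page whose next use lies farthest in the future, treating pages that are never used again as farthest of all. It reproduces the 6 faults of Example-2:

```python
def optimal_page_faults(ref, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(ref):
        if page in frames:                     # hit: nothing to do
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)                # a free frame is available
        else:
            rest = ref[i + 1:]
            # distance to the next use; pages never used again rank highest
            def next_use(p):
                return rest.index(p) if p in rest else float('inf')
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # prints 6
```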
3. Least Recently Used (LRU): In this algorithm, the page that has been least
recently used is replaced.
Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty
slots —> 4 Page Faults.
0 is already there, so —> 0 Page Fault. When 3 comes, it takes the place of 7
because 7 is the least recently used —> 1 Page Fault.
0 is already in memory, so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the remaining page references —> 0 Page Faults, because those pages are already
available in memory. Total: 6 Page Faults.
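A short Python sketch of LRU (illustrative only), keeping the frames ordered from least to most recently used; it reproduces the 6 faults of Example-3:

```python
def lru_page_faults(ref, num_frames):
    frames, faults = [], 0                     # ordered from LRU to MRU
    for page in ref:
        if page in frames:
            frames.remove(page)                # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)                  # evict the least recently used page
        frames.append(page)                    # page is now the most recently used
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # prints 6
```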
Worked CPU scheduling example: processes P0-P3 arrive at times 0, 1, 2, 3 with CPU
burst times 5, 3, 8, 6 respectively.
FCFS waiting times (start time - arrival time):
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13
SJF (non-preemptive) schedule (Process | Arrival | Burst | Start time):
P0 | 0 | 5 | 0
P1 | 1 | 3 | 5
P2 | 2 | 8 | 14
P3 | 3 | 6 | 8
SJF waiting times:
P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 14 - 2 = 12
P3: 8 - 3 = 5
Priority scheduling (Process | Arrival | Burst | Priority | Start time; a larger
number means higher priority):
P0 | 0 | 5 | 1 | 0
P1 | 1 | 3 | 2 | 11
P2 | 2 | 8 | 1 | 14
P3 | 3 | 6 | 3 | 5
Priority waiting times:
P0: 0 - 0 = 0
P1: 11 - 1 = 10
P2: 14 - 2 = 12
P3: 5 - 3 = 2
Round Robin (time quantum 3) waiting times:
P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11
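As a quick check, a short Python sketch (illustrative; it assumes the arrival and burst times read off the tables above) computes the FCFS waiting times:

```python
def fcfs_waiting_times(arrival, burst):
    waits, t = [], 0
    for a, b in zip(arrival, burst):   # processes are served in arrival order
        t = max(t, a)                  # the CPU may sit idle until the process arrives
        waits.append(t - a)            # waiting time = start time - arrival time
        t += b
    return waits

print(fcfs_waiting_times([0, 1, 2, 3], [5, 3, 8, 6]))  # prints [0, 4, 6, 13]
```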
SSTF example: For the requests 82, 170, 43, 140, 24, 16, 190 with the Read/Write
head initially at 50, the total overhead movement (total distance covered by the disk arm)
= (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170) = 208
Advantages:
Average Response Time decreases
Throughput increases
Disadvantages:
Overhead to calculate seek time in advance
Can cause Starvation for a request if it has a higher seek time as compared
to incoming requests
High variance of response time as SSTF favors only some requests
SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services
the requests coming in its path; after reaching the end of the disk, it reverses its
direction and services the requests arriving in its path again. The algorithm thus
works like an elevator and is hence also known as the elevator algorithm. As a result,
requests in the mid-range are serviced more often, while requests arriving just behind
the disk arm have to wait.
Example:
1. Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The
Read/Write arm is at 50, and it is given that the disk arm should move
“towards the larger value”.
Therefore, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (199-50) + (199-16) = 332
Advantages:
High throughput
Low variance of response time
Low average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
C-SCAN: In the SCAN algorithm, the disk arm re-scans the path it has already
scanned after reversing its direction. So it may happen that too many requests are
waiting at the other end, or that there are zero or few requests pending in the
area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead
of reversing its direction, goes to the other end of the disk and starts servicing the
requests from there. So the disk arm moves in a circular fashion; the algorithm is
otherwise similar to SCAN, and hence it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The
Read/Write arm is at 50, and it is given that the disk arm should move “towards
the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (199-50) + (199-0) + (43-0) = 391
Advantages:
Provides more uniform wait time compared to SCAN
LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk
arm, instead of going to the end of the disk, goes only to the last request to be
serviced in front of the head and then reverses its direction from there. Thus it
prevents the extra delay caused by unnecessary traversal to the end of the disk.
Example:
1. Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The
Read/Write arm is at 50, and it is given that the disk arm should move
“towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (190-50) + (190-16) = 314
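The three totals above can be checked with a short Python sketch (illustrative only; it assumes cylinders 0-199 and an initial sweep towards larger values, as in the examples):

```python
def scan_seek(requests, head, disk_end=199):
    low = [r for r in requests if r < head]
    # sweep to the physical end of the disk, then back to the lowest request
    return (disk_end - head) + ((disk_end - min(low)) if low else 0)

def cscan_seek(requests, head, disk_end=199):
    low = [r for r in requests if r < head]
    # sweep to the end, jump to cylinder 0, then sweep up to the largest low request
    return (disk_end - head) + disk_end + (max(low) if low else 0)

def look_seek(requests, head):
    low = [r for r in requests if r < head]
    high = [r for r in requests if r >= head]
    # reverse at the last pending request instead of the disk edge
    return (max(high) - head) + ((max(high) - min(low)) if low else 0)

reqs = [82, 170, 43, 140, 24, 16, 190]
print(scan_seek(reqs, 50), cscan_seek(reqs, 50), look_seek(reqs, 50))  # 332 391 314
```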
The release_lock() function is called by the thread or process when it wants to exit the critical
section. It simply sets the flag to False, releasing the lock for other threads or processes to
acquire.
The test_and_set() function implements the test-and-set instruction. It atomically sets the
target variable to True and returns its previous value. This ensures that only one thread or
process can set the flag to True at a time, allowing mutual exclusion.
It's worth noting that this implementation assumes that the test-and-set instruction itself is
atomic. In practice, atomic instructions are typically provided by the underlying hardware or
operating system. The above code demonstrates the basic idea, but the actual implementation
may vary depending on the specific programming language or platform being used.
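The code listing this passage refers to is not reproduced above. A minimal Python sketch consistent with the description might look as follows (the threading.Lock merely stands in for the hardware atomicity the text mentions; the function names mirror those described):

```python
import threading

flag = False                       # the shared lock flag described in the text
_atomic = threading.Lock()         # stand-in for the hardware atomicity guarantee

def test_and_set():
    """Atomically set the flag to True and return its previous value."""
    global flag
    with _atomic:
        old = flag
        flag = True
        return old

def acquire_lock():
    while test_and_set():          # spin until the previous value was False,
        pass                       # i.e., until this thread acquired the lock

def release_lock():
    global flag
    flag = False                   # open the critical section to other threads
```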
5- Critical Section problem. Illustrate the software-based solution to the Critical Section
problem. -15
The critical section is a code segment where the shared variables can be accessed. An atomic
action is required in a critical section i.e. only one process can execute in its critical section
at a time. All the other processes have to wait to execute in their critical sections.
A diagram that demonstrates the critical section is as follows −
In the above diagram, the entry section handles the entry into the critical section. It acquires
the resources needed for execution by the process. The exit section handles the exit from the
critical section. It releases the resources and also informs the other processes that the critical
section is free.
Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical
section at any time. If any other processes require the critical section, they
must wait until it is free.
Progress
Progress means that if a process is not using the critical section, then it should
not stop any other process from accessing it. In other words, any process can
enter a critical section if it is free.
Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It
should not wait endlessly to access the critical section. A classic software solution
that satisfies all three requirements is sketched below.
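The classic software-based solution for two processes is Peterson's algorithm. A minimal Python sketch (illustrative; on modern hardware a real implementation also needs memory barriers to prevent instruction reordering):

```python
flag = [False, False]   # flag[i] is True when process i wants the critical section
turn = 0                # index of the process whose turn it is to yield

def enter_region(i):
    global turn
    other = 1 - i
    flag[i] = True      # announce interest
    turn = other        # politely give the other process priority
    while flag[other] and turn == other:
        pass            # busy-wait while the other is interested and favored

def leave_region(i):
    flag[i] = False     # no longer interested; the other process may proceed
```

Because each process sets turn to the other's index before waiting, at most one process can be inside the critical section at a time, and a waiting process enters after at most one pass by its peer, which gives bounded waiting.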
6- The concept of Thrashing. What is the cause of Thrashing? How does the system detect
Thrashing? -15
Thrashing is a phenomenon that occurs in computer systems when the system's performance
drastically decreases due to excessive and inefficient use of system resources, particularly
virtual memory. It typically happens when a system is overwhelmed with a high demand for
memory but cannot efficiently allocate it to meet the demand. Instead, the system spends a
significant amount of time and resources swapping data between physical memory (RAM)
and the disk, resulting in a severe degradation of performance.
The primary cause of thrashing is an excessive number of page faults. A page fault happens
when a program requests data that is not currently in physical memory and must be loaded
from the disk. If a system is constantly experiencing page faults and continuously swapping
pages in and out of the disk, it leads to thrashing. Contributing factors include:
Insufficient memory: When a system does not have enough physical memory to hold all the
actively used data and programs, it relies heavily on swapping pages between the disk and
RAM, leading to thrashing.
Overloaded system: If the system is running too many processes simultaneously, each
requiring a significant amount of memory, it can cause excessive demand for memory
resources and result in thrashing.
The system can detect thrashing by monitoring the following indicators:
High page fault rate: Monitoring the number of page faults per unit of time can provide
insights into thrashing. A significant increase in page faults might indicate that the system is
struggling to keep up with memory demands.
Low CPU utilization: In a thrashing system, the CPU may be mostly idle because it spends
most of its time waiting for page swaps to complete. If the CPU utilization remains
consistently low despite the system being busy, it can be a sign of thrashing.
Excessive disk activity: Thrashing involves frequent swapping of pages between RAM and
disk. Therefore, monitoring disk activity can reveal high disk usage due to constant page
swapping.
Slow response time: If the system becomes unresponsive or experiences significant delays in
executing tasks, it can be an indication of thrashing. This slowdown occurs because the
system spends excessive time on disk I/O operations rather than processing tasks.
Changing Directories:
cd: Changes the current working directory to the specified directory.
cd ..: Moves up one level in the directory hierarchy.
cd ~: Moves to the user's home directory.
Removing Directories:
rmdir: Deletes an empty directory.
rm -r: Removes a directory and its contents recursively.
Navigating Directories:
pwd: Prints the current working directory.
Copying Directories:
cp -r: Copies a directory and its contents recursively to a new location.
Process vs. Thread:
Context switching time: Processes require more time for context switching as they are heavier. Threads require less time for context switching as they are lighter than processes.
Memory sharing: Processes are totally independent and don’t share memory. A thread may share some memory with its peer threads.
Blocking: If a process gets blocked, the remaining processes can continue execution. If a user-level thread gets blocked, all of its peer threads also get blocked.
Resource consumption: Processes require more resources than threads. Threads generally need fewer resources than processes.
Dependency: Individual processes are independent of each other. Threads are parts of a process and so are dependent.
Data and code sharing: Processes have independent data and code segments. A thread shares the data segment, code segment, files, etc. with its peer threads.
Treatment by OS: All the different processes are treated separately by the operating system. All user-level peer threads are treated as a single task by the operating system.
Time for creation: Processes require more time for creation. Threads require less time for creation.
Time for termination: Processes require more time for termination. Threads require less time for termination.
Kernel Threads:
Advantages:
Kernel threads provide strong isolation between threads, ensuring that one thread's issues do not
affect others.
They can take advantage of the full range of operating system features and services.
Disadvantages:
Creation and management of kernel threads typically involve system calls, which can be more time-
consuming than user-level threads.
Switching between kernel threads often incurs higher overhead due to the involvement of the
operating system.
The number of kernel threads that can be created is typically limited by the operating system.
User-Level Threads:
Advantages:
User-level threads are managed by a user-level thread library or runtime system, which provides
flexibility and control over scheduling and thread management.
Context switching between user-level threads is typically faster than kernel threads since it doesn't
involve the operating system.
They can run on any operating system that supports the execution of a single thread.
Disadvantages:
User-level threads are typically not well-suited for parallel execution on multiple processors or
processor cores since the operating system schedules them on a single kernel thread.
Blocking system calls made by one user-level thread can block the entire process and all its threads.
User-level threads do not provide strong isolation, and issues in one thread can affect the entire
process.
Hybrid Threads:
Advantages:
Hybrid threads combine the advantages of both kernel threads and user-level threads.
They allow multiple user-level threads to be associated with a single kernel thread, enabling parallel
execution on multiple processors or cores.
Hybrid threads provide a balance between control and efficiency by utilizing both user-level and
kernel-level mechanisms.
Disadvantages:
Hybrid thread implementations may require more complex programming models and management
strategies.
Context switching between hybrid threads can involve both user-level and kernel-level operations,
which may introduce additional overhead.
The effectiveness of hybrid threads depends on the underlying thread library and the level of
integration with the operating system.
Multiprogramming vs. Multitasking:
Multiprogramming uses a single CPU to execute the program, whereas multitasking uses
multiple tasks for the task allocation.
Under multiprogramming, execution of a process takes more time; under multitasking, it
takes less time.
12- If there are 100 units of resource R in the system and each process in the system requires 4
units of resource R, then test how many processes can be present at maximum so that no
deadlock will occur.-10
A deadlock over resource R requires every process to be holding some units of R while
waiting for more, with no free units left. In the worst case each process holds one unit less
than it needs, i.e. 3 units; as long as at least one unit remains free, some process can obtain
its 4th unit, finish, and release all its units, so no deadlock is possible. The system is
therefore deadlock-free as long as
n × (4 − 1) + 1 ≤ 100, i.e. 3n ≤ 99, i.e. n ≤ 33
Therefore, a maximum of 33 processes can be present in the system without any possibility of
deadlock, given the available 100 units of resource R and the requirement of 4 units per
process. (Dividing 100 / 4 = 25 instead gives the number of processes that can hold their full
allocation simultaneously, which is a stricter condition than deadlock freedom.)
a) File attributes: File attributes are pieces of metadata kept for each file. Common
attributes include:
File name: The name of the file, which identifies it within the file system.
File extension: A part of the file name that indicates the file's type or format.
File size: The size of the file in bytes or another appropriate unit.
File permissions: Access permissions that determine who can read, write, or
execute the file.
File timestamps: Creation, modification, and access timestamps that record
when the file was created, last modified, and last accessed.
File ownership: The user and group ownership of the file.
File attributes may also include additional information specific to the file
system or operating system, such as file flags, encryption status, and file
versioning.
b) File operations: File operations refer to the actions or tasks that can be performed
on files within a file system. Common file operations include creating, opening,
reading, writing, repositioning within (seeking), truncating, closing, and deleting a file.
c) File types: File types refer to the categorization or classification of files based on
their content, format, or purpose. Some common file types include:
Text files: Files containing plain text without any specific formatting or binary
data.
Binary files: Files containing non-textual data, such as executable programs or
multimedia files.
Document files: Files created and used by word processors or document
editing software.
Spreadsheet files: Files containing tabular data used by spreadsheet
applications.
Image files: Files storing graphical images in various formats (e.g., JPEG, PNG,
GIF).
Audio files: Files containing audio data (e.g., MP3, WAV).
Video files: Files containing video data (e.g., MP4, AVI).
Archive files: Files that contain compressed or multiple files bundled together
(e.g., ZIP, TAR).
Configuration files: Files containing settings and configurations for
applications or systems (e.g., XML, INI).
Database files: Files used to store structured data in a database format (e.g.,
SQLite, MySQL).
d) Internal file structure: The internal file structure refers to how the contents of a file
are organized and stored within the file system. The internal structure can vary based
on the file system implementation and file type. Some common internal file
structures include an unstructured byte sequence (a plain stream of bytes), a sequence
of fixed- or variable-length records, and tree-structured organizations of records
keyed for fast lookup.
A Process Control Block (PCB) is a data structure maintained by the operating system for
every process. It is very important for process management, as the data structuring for
processes is done in terms of the PCB. It also defines the current state of the operating system.
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the process.
Registers
This specifies the registers that are used by the process. They may include accumulators,
index registers, stack pointers, general purpose registers etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of
the PCB accounting information.
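As a toy illustration (the field names are made up for this sketch and do not reflect any actual OS structure), the fields above can be gathered into a Python dataclass:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    process_number: int                            # identifies the process
    process_state: str = "new"                     # new/ready/running/waiting/terminated
    program_counter: int = 0                       # address of the next instruction
    registers: dict = field(default_factory=dict)  # accumulators, index registers, SP, ...
    cpu_time_used: float = 0.0                     # accounting information
    account_number: int = 0                        # accounting information

pcb = PCB(process_number=1)
pcb.process_state = "ready"                        # the OS updates this on state transitions
```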
There are various components of an Operating System that perform well-defined tasks. Though
most operating systems differ in structure, logically they have similar components. Each
component must be a well-defined portion of the system that appropriately describes its
functions, inputs, and outputs.
An Operating System has the following eight components:
1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System
The following sections explain all the above components in more detail:
Process Management
A process is a program, or a fraction of a program, that is loaded in main memory. A process
needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its
task. The process management component manages the multiple processes running
simultaneously on the Operating System.
A program in running state is called a process.
The operating system is responsible for the following activities in connection with process
management:
Creation and deletion of user and system processes
Suspension and resumption of processes
Provision of mechanisms for process synchronization
Provision of mechanisms for process communication
Provision of mechanisms for deadlock handling
File Management
File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms; magnetic tape, disk, and drum are the most common
forms.
A file is defined as a set of correlated information and it is defined by the creator of the file. Mostly
files represent data, source and object forms, and programs. Data files can be of any type like
alphabetic, numeric, and alphanumeric.
A file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and
user. The operating system implements the abstract concept of the file by managing mass
storage devices, such as tapes and disks. Files are normally organized into directories to ease
their use. These directories may contain files and other directories, and so on.
The operating system is responsible for the following activities in connection with file
management:
Creation and deletion of files
Creation and deletion of directories
Support of primitives for manipulating files and directories
Mapping of files onto secondary storage
Backup of files on stable (non-volatile) storage media
Network Management
The definition of network management is often broad, as network management involves several
different components. Network management is the process of managing and administering a
computer network. A computer network is a collection of various types of computers connected
with each other.
Network management comprises fault analysis, maintaining the quality of service, provisioning of
networks, and performance management.
Network management is the process of keeping your network healthy for an efficient
communication between different computers.
Following are the features of network management:
Network administration
Network maintenance
Network operation
Network provisioning
Network security
Main Memory Management
The main memory management component is responsible for the following activities:
Keep track of which parts of memory are currently being used and by whom.
Decide which processes to load when memory space becomes available.
Allocate and deallocate memory space as needed.
Secondary Storage Management
Because main memory is too small to accommodate all data and programs permanently, the
secondary storage management component manages disk storage, including activities such as
free-space management, storage allocation, and disk scheduling.
Security Management
The operating system is primarily responsible for all tasks and activities that happen in the
computer system. The various processes in an operating system must be protected from each
other’s activities. For that purpose, various mechanisms can be used to ensure that the files,
memory segments, CPU, and other resources can be operated on only by those processes that
have gained proper authorization from the operating system.
Security Management refers to a mechanism for controlling the access of programs, processes,
or users to the resources defined by a computer system. This mechanism must provide a means
of specifying the controls to be imposed, together with some means of enforcement.
For example, memory addressing hardware ensures that a process can only execute within its
own address space. The timer ensures that no process can gain control of the CPU without
relinquishing it. Finally, no process is allowed to do its own I/O, to protect the integrity of the
various peripheral devices.
Command Interpreter System
One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
The Command Interpreter System executes a user command by calling one or more
underlying system programs or system calls.
Command Interpreter System allows human users to interact with the Operating System and
provides convenient programming environment to the users.
Many commands are given to the operating system by control statements. A program which
reads and interprets control statements is automatically executed. This program is called the
shell; a few examples are the Windows DOS command window, Bash on Unix/Linux, and the
C shell on Unix/Linux.
The state transition diagram for the process states defined above is omitted here.
Logically, the first two states are similar. In both cases the process is willing to run, but in
the ready state there is temporarily no CPU available for it.
(I) Running to ready state:
A process in the running state has all of the resources that it needs for further
execution, including a processor. It moves back to the ready state when it is
preempted, for example because its time slice expires.
(II) New to ready state:
The long-term scheduler picks a new process from secondary memory and loads it
into main memory when there are sufficient resources available.
The process is then in the ready state, waiting for its execution.
In the fixed partitioning scheme, memory is divided into fixed-sized blocks. This division
takes place at the time of installation.
Addresses can be bound at compile time.
The degree of multiprogramming is not flexible, because the number of blocks is fixed, and
memory is wasted due to fragmentation.
Advantages
Some of the advantages of the time-sharing system are as follows:
It provides the advantage of quick response.
It avoids duplication of software.
It reduces CPU idle time.
Program vs. Process:
A program resides in the secondary memory of the system. A process is created when a program is in execution and is loaded into the main memory.
A program exists in a single place and continues to exist until it has been explicitly deleted. A process exists for a limited amount of time and gets terminated once the task has been completed.
A program requires memory space to store instructions. A process requires resources such as CPU, memory address space, and I/O during its working.
A program doesn’t have a control block. A process has its own control block, which is known as the Process Control Block.
1. First, context switching saves the state of process P1, which is in the running
state, in the form of the program counter and the registers, to its PCB
(Process Control Block).
2. Then it updates PCB1 of process P1 and moves the process to the appropriate
queue, such as the ready queue, I/O queue, or waiting queue.
3. After that, another process is brought into the running state: a new process is
selected from the ready state, for example one with a high priority.
4. Now, the PCB (Process Control Block) of the selected process P2 is updated.
This includes switching its process state from ready to running, or from
another state such as blocked, exit, or suspend.
5. If the CPU has executed process P2 before, the saved state of process P2 is
restored so that it resumes execution at the exact point where it was
interrupted. (See the sketch below.)
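Reusing the illustrative PCB dataclass from the earlier sketch, the save/restore in steps 1 and 5 can be pictured like this (a simplification only; real context switches are performed by kernel assembly code):

```python
from dataclasses import dataclass, field

# (PCB dataclass as defined in the earlier sketch)

@dataclass
class CPUState:
    pc: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, current_pcb, next_pcb):
    # Step 1: save the running process's hardware state into its PCB.
    current_pcb.program_counter = cpu.pc
    current_pcb.registers = dict(cpu.registers)
    current_pcb.process_state = "ready"            # step 2: it joins a queue
    # Steps 4-5: restore the selected process's saved state onto the CPU.
    next_pcb.process_state = "running"
    cpu.pc = next_pcb.program_counter
    cpu.registers = dict(next_pcb.registers)
```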
Disk scheduling is an important process in operating systems that determines the order in
which disk access requests are serviced. The objective of disk scheduling is to minimize
the time it takes to access data on the disk and to minimize the time it takes to complete a
disk access request. Disk access time is determined by two factors: seek time and
rotational latency. Seek time is the time it takes for the disk head to move to the desired
location on the disk, while rotational latency is the time taken by the disk to rotate the
desired data sector under the disk head. Disk scheduling algorithms are an essential
component of modern operating systems and are responsible for determining the order in
which disk access requests are serviced. The primary goal of these algorithms is to
minimize disk access time and improve overall system performance.
First-Come-First-Serve
The First-Come-First-Served (FCFS) disk scheduling algorithm is one of the simplest and
most straightforward disk scheduling algorithms used in modern operating systems. It
operates on the principle of servicing disk access requests in the order in which they are
received. In the FCFS algorithm, the disk head is positioned at the first request in the
queue and the request is serviced. The disk head then moves to the next request in the
queue and services that request. This process continues until all requests have been
serviced.
Example
Suppose we have disk access requests in the order 20, 150, 90, 70, 30, 60, with the disk
head initially at position 50 (the head position assumed in the examples that follow).
Servicing the requests in arrival order, the total seek time = (50-20) + (150-20) +
(150-90) + (90-70) + (70-30) + (60-30) = 310.
Shortest-Seek-Time-First
Shortest Seek Time First (SSTF) is a disk scheduling algorithm used in operating systems
to efficiently manage disk I/O operations. The goal of SSTF is to minimize the total seek
time required to service all the disk access requests. In SSTF, the disk head moves to the
request with the shortest seek time from its current position, services it, and then repeats
this process until all requests have been serviced. The algorithm prioritizes disk access
requests based on their proximity to the current position of the disk head, ensuring that
the disk head moves the shortest possible distance to service each request.
Example
In this case, for the same request sequence, the total seek time = (60-50) + (70-60)
+ (90-70) + (90-30) + (30-20) + (150-20) = 240
SCAN
SCAN (Scanning) is a disk scheduling algorithm used in operating systems to manage
disk I/O operations. The SCAN algorithm moves the disk head in a single direction and
services all requests until it reaches the end of the disk, and then it reverses direction and
services all the remaining requests. In SCAN, the disk head starts at one end of the disk,
moves toward the other end, and services all requests that lie in its path. Once the disk
head reaches the other end, it reverses direction and services all requests that it missed on
the way. This continues until all requests have been serviced.
Example
If we consider that the head direction is left in the case of SCAN, the total seek time =
(50-30) + (30-20) + (20-0) + (60-0) + (70-60) + (90-70) + (150-90) = 200
C-SCAN
The C-SCAN (Circular SCAN) algorithm operates similarly to the SCAN algorithm, but
it does not reverse direction at the end of the disk. Instead, the disk head wraps around to
the other end of the disk and continues to service requests. This algorithm can reduce the
total distance the disk head must travel, improving disk access time. However, this
algorithm can lead to long wait times for requests that are made near the end of the disk,
as they must wait for the disk head to wrap around to the other end of the disk before they
can be serviced. The C-SCAN algorithm is often used in modern operating systems due
to its ability to reduce disk access time and improve overall system performance.
Example
For C-SCAN, the total seek time = (60-50) + (70-60) + (90-70) + (150-90) + (199-150) +
(199-0) + (20-0) + (30-20) = 378
LOOK
The LOOK algorithm is similar to the SCAN algorithm, but instead of traveling to the
physical end of the disk, the disk head reverses direction at the last pending request in
each direction. This reduces the total distance the disk head must travel, improving disk
access time. However, like SCAN, it can lead to long wait times for requests at locations
the head has just visited, since they must wait for the head to sweep back. The LOOK
algorithm is often used in modern operating systems due to its ability to reduce disk
access time and improve overall system performance.
Example
Considering the head direction is right, in this case, the total seek time = (60-50) + (70-
60) + (90-70) + (150-90) + (150-30) + (30-20) = 230
C-LOOK
C-LOOK is similar to the C-SCAN disk scheduling algorithm. In this algorithm, the disk
arm goes only to the last request to be serviced in front of the head, instead of going all
the way to the end of the disk; from there it jumps to the last request at the other end.
Thus, it also prevents the extra delay which might occur due to unnecessary traversal to
the end of the disk.
Example
For the C-LOOK algorithm, the total seek time = (60-50) + (70-60) + (90-70) + (150-90)
+ (150-20) + (30-20) = 240
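The FCFS and SSTF totals above can be checked with a short Python sketch (illustrative only; SSTF ties are broken towards the lower cylinder, matching the worked example):

```python
def fcfs_seek(requests, head):
    total, pos = 0, head
    for r in requests:                 # service strictly in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(requests, head):
    pending, total, pos = list(requests), 0, head
    while pending:
        # closest pending request; ties go to the lower cylinder number
        nxt = min(pending, key=lambda r: (abs(r - pos), r))
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

reqs = [20, 150, 90, 70, 30, 60]
print(fcfs_seek(reqs, 50))   # prints 310
print(sstf_seek(reqs, 50))   # prints 240
```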