a) Process life cycle:
Process life cycle is a manner of looking at processes in the context of their initial, maturing and final stages of evolution and growth. Understanding and analyzing processes in this manner helps to understand how they fit into a "system" of processes as well as how they change as a process matures within an organization. The figure shows the process life cycle.
[Figure: Process life cycle — states New, Ready, Running, Waiting and Exit; transitions: admit, scheduler dispatch, I/O or event wait, I/O or event completion, exit.]
i) Orphan process and Zombie process:
An orphan process is a computer process whose parent process has finished or terminated.
In Unix, any orphan process will be immediately adopted by the special "init" system process. This operation is called re-parenting and occurs automatically. Even though technically the process has the "init" process as its parent, it is still called an orphan process since the process that originally created it no longer exists.
A zombie is created when a parent process does not issue the wait system call after a child dies to read its exit status. The zombie is not occupying any significant memory or resources; it is only an exit status waiting to be delivered. Programs routinely create short-lived zombies by spawning child processes; the zombie disappears once the parent waits for the child to complete.
j) Aging Technique:
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time. For example, if priorities of processes range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would be executed. In fact, it would take no more than 32 hours for a priority-127 process to age to a priority-0 process.
OPERATING SYSTEM
OR,
Define critical section. Mention the mechanism to control access to critical section. [WBUT 2022]
Answer:
1st Part:
The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. That part of the program where the shared memory is accessed is called the critical section.
2nd Part:
To solve the critical section problem, three criteria must be satisfied:
• Mutual exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
• Progress: If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes
that will enter the critical section next cannot be postponed indefinitely.
• Bounded waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
2. What is Dining Philosophers' problem? Derive an algorithm to solve the problem using semaphores. [WBUT 2012(CS)]
OR,
Discuss dining philosophers problem with the solution. [WBUT 2014(CS)]
Answer:
In computer science, the dining philosophers problem is an illustrative example of a common computing problem in concurrency. It is a classic multiprocess synchronization problem. It is summarized as five philosophers sitting at a table with a large bowl of spaghetti in the center. A fork is placed between each pair of adjacent philosophers and, as such, each philosopher has one fork to his left and one fork to his right. A dangerous possibility of deadlock occurs when every philosopher holds a left fork and waits perpetually for a right fork, or vice versa.
Algorithm to solve dining philosophers problem using semaphores:
/* program dining philosophers */
semaphore fork[5] = {1};
semaphore room = {4};

void philosopher(int i)
{
    while (true) {
        think();
        wait(room);
        wait(fork[i]);
        wait(fork[(i+1) mod 5]);
        eat();
        signal(fork[(i+1) mod 5]);
        signal(fork[i]);
        signal(room);
    }
}

void main()
{
    parbegin(philosopher(0), philosopher(1), philosopher(2),
             philosopher(3), philosopher(4));
}
b) Write Peterson's algorithm for the two-process critical section problem.
[WBUT 2014(IT)]
Answer:
Peterson's algorithm is a simple algorithm that can be run by two processes to ensure mutual exclusion for one resource. It does not require any special hardware and it uses busy waiting (a spinlock). Shared variables are created and initialized before either process starts. The shared variables flag[0] and flag[1] are initialized to FALSE because neither process is yet interested in the critical section. The shared variable turn is set to either 0 or 1 randomly.
var flag: array[0..1] of boolean;
    turn: 0..1;
flag[0] := FALSE;
flag[1] := FALSE;
turn := random(0..1);
After initialization, each process, which is called process i in the code (the other process is process j), runs the following code:
repeat
    flag[i] := TRUE;
    turn := j;
    while (flag[j] and turn = j) do no-op;
    Critical section
    flag[i] := FALSE;
    Remainder section
until FALSE;
Information common to both processes:
turn = 0
flag[0] = FALSE
flag[1] = FALSE
9. What is dynamic loading? What is dynamic linking? [WBUT 2012(CS), 2013(IT)]
How are they related? [WBUT 2013(IT)]
Answer:
Dynamic loading means loading a library into memory during load or run time. It can be imagined to be similar to plugins, i.e., the executable can actually start executing before the dynamic loading happens. The program retrieves the addresses of functions and variables contained in the library, executes those functions or accesses those variables, and can unload the library from memory when done.
Dynamic linking refers to linking that is done during load or run time and not when the executable is created. In the case of dynamic linking the linker does minimal work while creating the executable. For the dynamic linker to work it actually has to load the libraries too, so it is also called a linking loader.
They are related as follows: dynamic loading can be done at any point in program execution, but dynamic linking is performed at the time of program loading only.
20. Compare best fit and first fit algorithm for memory allocation. [WBUT 2019(CS)]
Answer:
First Fit:
The first fit approach is to allocate the first free partition or hole large enough to accommodate the process. It finishes after finding the first suitable free partition.
Advantages:
• Fastest algorithm because it searches as little as possible.
Disadvantages:
• The remaining unused memory areas left after allocation become waste if they are too small. Thus a request for a larger memory requirement cannot be accomplished.
Best Fit:
Best fit deals with allocating the smallest free partition which meets the requirement of the requesting process. This algorithm first searches the entire list of free partitions and considers the smallest hole that is adequate. It then tries to find a hole which is close to the actual process size needed.
Advantages:
• Memory utilization is much better than first fit as it allocates the smallest adequate free partition.
Disadvantages:
• It is slower and may even tend to fill up memory with tiny useless holes.
1. Differentiate between Blocking vs. Non-Blocking Input-output. [WBUT 2013(CS)]
OR,
Differentiate between Blocking and Non-Blocking Input-output. [WBUT 2017(IT)]
Answer:
Blocking: Here, the process is suspended until the I/O completes.
• It is easy to use and understand, but it is insufficient for some needs.
Non-blocking: Here, the I/O call returns as much data as is available.
• It is implemented via multithreading.
• It returns quickly with a count of bytes read or written.
2. What are the differences between a trap and an interrupt? What is the use of each function? [WBUT 2016(CS)]
Answer:
1st Part:
An interrupt is generally initiated by an I/O device, and causes the CPU to stop what it is doing, save its context, jump to the appropriate interrupt service routine, complete it, restore the context, and continue execution.
A trap is usually initiated by the CPU hardware. Whenever the trap condition occurs, the CPU stops what it is doing, saves the context, jumps to the appropriate trap routine, completes it, restores the context and continues execution.
2nd Part:
Interrupts are used by external devices to signal the processor to execute some service code.
Traps are used both for control transfer between the user code and the operating system via system calls and for error handling.
For example, when a serial device interrupts, the CPU uses the interrupt vector to locate the serial device's interrupt service routine, which is then executed.
POPULAR PUBLICATIONS
Interrupts are hardware interrupts, while traps are software-invoked interrupts.
In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program.
1. Write a short note on Blocking and Non-blocking I/O. [WBUT 2013(IT)]
Answer:
With blocking I/O, a process is moved to the wait queue when an I/O request is made and moved back to the ready queue when the request completes, allowing other processes to run on the machine.
With non-blocking I/O, the I/O request returns immediately, whether the requested I/O operation has completed or not. This allows the process to check for available data without getting hung completely if it is not there. One approach for programmers to implement non-blocking I/O is to have a multithreaded application, in which one thread makes blocking I/O calls, while other threads continue to update the screen or perform other tasks. The two I/O methods are shown below.
[Figure: (a) Synchronous I/O — the requesting process waits in the kernel while the device driver, interrupt handler and hardware complete the data transfer; (b) Asynchronous I/O — the requesting process continues running in user mode while the transfer proceeds.]
2. Write a short note on data transfer methods of I/O devices. [WBUT 2019(CS)]
Answer:
Data transfer between the CPU and the I/O devices may be done in different modes. The steps involved are:
• Buffer the byte into the buffer.
• Inform the CPU that the device has one byte to transfer (i.e. bus grant request).
• Transfer the byte (at system bus speed).
• Return control of the bus back to the CPU. The figure shows the signals involved:
[Figure: DMA signal lines — Bus Request (BR) and Bus Grant (BG) between device and CPU; AB = Address Bus, DB = Data Bus, RD = Read, WR = Write.]
1. a) What are seek time and latency time? [WBUT 2018(IT)]
Answer:
Seek time:
It is defined as the time required to move the disk arm to the required track,
i.e. Ts = m × n + s
where Ts = estimated seek time
n = number of tracks traversed
m = constant that depends on the disk drive
s = startup time
Latency time:
It is defined as the time required to reach the desired sector by the read/write head. Average latency can be computed by dividing the number of revolutions per minute, R, into 30,
i.e. latency = 30/R seconds.
b) Describe physical and logical formatting of disk. [WBUT 2011(IT)]
Answer:
A hard disk must be physically formatted before it can be logically formatted. A hard disk's physical formatting is usually performed by the manufacturer. Physical formatting divides the hard disk platters into their basic physical elements: tracks, sectors and cylinders. These elements define the way in which data is physically recorded on and read from the disk. Tracks are concentric circular paths written on each side of a platter, identified by numbers starting with track zero.
Tracks are divided into smaller areas called sectors, which are used to store a fixed amount of data. A sector is usually formatted to contain 512 bytes of data.
A cylinder is composed of the set of tracks that lie at the same distance from the spindle on all sides of all the platters. For example, track three on every side of every platter is located at the same distance from the spindle.
After a hard disk has been physically formatted, it must also be logically formatted. Logical formatting places a file system on the disk, allowing an operating system (such as DOS, Windows or Linux) to use the available disk space to store and retrieve files. Different operating systems use different file systems, so the logical format depends on the OS. Formatting the entire hard disk with one file system necessarily limits the number and types of operating systems that can be installed. There is a solution to this problem: before a disk is logically formatted, it can be divided into partitions. Each partition can then be formatted with a different file system, allowing one to install multiple operating systems.
What is a thread? Draw and explain the thread life cycle.
In computer science, a thread refers to the smallest unit of execution within a process.
A process can have multiple threads, each running independently and sharing the
same resources, such as memory space, but having its own program counter, register
set, and stack. Threads within a process can communicate with each other more easily
than separate processes since they share the same memory space.
Thread Life Cycle:
The life cycle of a thread typically consists of several states, and transitions between
these states. Here's a simplified representation:
1. New: In this state, a thread is created, but it is not yet scheduled for execution. Resources, such as memory, may be allocated during this phase.
2. Runnable/Ready: Once the thread is ready to run, it moves to the runnable state. It may be waiting for CPU time, as the scheduler decides when to execute it.
3. Running: The thread is actively being executed by the CPU.
4. Blocked/Waiting: A thread might move to this state if it is waiting for some external event, like user input or the availability of a resource. It doesn't consume CPU time during this period.
5. Terminated/Dead: The thread has completed its execution or was explicitly terminated. Resources associated with the thread are released.
Kernel-level threads (KLT) and user-level threads (ULT) are two different approaches
to implementing threads in an operating system.
Kernel-Level Threads (KLT):
• Managed by the Operating System Kernel: In this approach, the operating system is aware of
and manages the threads. The kernel is responsible for thread creation, scheduling, and
synchronization.
• Efficient Multithreading: Since the kernel has full control over threads, it can take advantage
of multiple processors efficiently. Each thread can be scheduled independently to run on
different processors.
• Blocking One Thread Doesn't Affect Others: If one kernel-level thread is blocked due to 1/0
operations or other reasons, other threads in the same process can still run, as the kernel can
schedule another thread.
• Example: POSIX threads (pthreads) are an example of kernel-level threads.
User-Level Threads (ULT):
• Managed by User-Level Library: In this approach, thread management is handled by a user-level library or application rather than the operating system. The kernel remains unaware of the existence of threads within the process.
• Lightweight: User-level threads are generally lighter in terms of resource overhead because
they don't require kernel involvement for thread management.
• Blocking One Thread Affects All: If one user-level thread is blocked, it may block the entire
process since the kernel is unaware of the threads, and the scheduling decisions are made
within the user-level library.
• Example: Green threads or fibers are an example of user-level threads. Libraries like Java's
Green Threads or Go's goroutines implement user-level threads.
File allocation methods determine how files are stored on a storage device. Three
common file allocation methods are:
1. Contiguous Allocation:
• Description: Files are stored as contiguous blocks on the disk. Each file occupies a set of consecutive blocks.
• Advantages: Simple, and efficient for both sequential and direct access.
• Disadvantages: Inflexible, leads to fragmentation (external and internal), difficult to allocate space for growing files.
2. Linked Allocation:
• Description: Each file is a linked list of blocks scattered across the disk. Each block
contains a pointer to the next block.
• Advantages: No external fragmentation, dynamic size allocation.
• Disadvantages: Inefficient for direct access, extra space for pointers, disk I/O for each block access.
3. Indexed Allocation:
• Description: An index block is used to store pointers to all the blocks of a file. The index
block is located in a fixed position.
• Advantages: Efficient random access, no external fragmentation.
• Disadvantages: Requires additional space for the index block, potential for internal
fragmentation.