
2022_2 SECOND SEMESTER USE ONLY. NOTE: SCANNING OF MATERIAL BEFORE USE IS REQUIRED.

JTECH EDUCATIONAL CONSULTS


WHATSAPP: 09032650760, 07064298170
[email protected]
MOTTO: Embracing Education

CIT315 OPERATING SYSTEM SUMMARY

 Explain what a process is


What is a process?
A process is a program in execution. A process is more than the program code, which is sometimes
known as the text section. It also includes the current activity, as represented by the value of the program
counter and the contents of the processor's registers.

Process Control Block (PCB)


To represent a process, we need a data structure, which we simply call a process control block (PCB). It either contains or refers to all the per-process information mentioned above, including the address space. To achieve efficient sharing of these resources, the O/S needs to keep track of all processes at all times:

 Identifier: A unique identifier associated with this process, to distinguish it from all other
processes.
 State: If the process is currently executing, it is in the running state.
 Priority: Priority level relative to other processes.
 Program counter: The address of the next instruction to be executed.
 Memory pointers: Includes pointers to the program code and data associated with this process,
plus any memory blocks shared with other processes.
 Context data: These are data that are present in registers in the processor while the process is executing.
 I/O status information: Includes outstanding I/O requests, I/O devices assigned to this
process, a list of files in use by the process, and so on.
 Accounting information: May include the amount of processor time and clock time used,
time limits, account numbers, and so on

 Identify the states of a process

State Process Model and Diagrams


As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Each process may be in one of the following states:
 New. The process is being created.
 Running. Instructions are being executed.
 Waiting/Blocked. The process is waiting for some event to occur (such as an I/O completion or reception of a signal) and cannot execute until that event occurs.
 Ready. The process is waiting to be assigned to a processor.

 Terminated. The process has finished execution.

A Two-State Process Model


The first step in designing an OS to control processes is to describe the behavior that we would
like the processes to exhibit. We can construct the simplest possible model by observing that at any time, a process is either being executed by a processor or not. In this model, a process may be in one of two states: Running or Not Running.

Figure: A two-state process model

A Five-State Process Model


For a five-state model, the two-state implementation above is inadequate: some processes in the Not Running state are ready to execute, while others are blocked, waiting for an I/O operation to complete. Thus, with a single queue, the dispatcher could not simply select the process at the oldest end of the queue; it would have to scan the list for the process that is not blocked and has been in the queue the longest.

Figure: A five-state process model


In fig. 1.4, the possible transitions are as follows:
• Null → New: A new process is created to execute a program.
• New → Ready: The OS will move a process from the New state to the Ready state when it is prepared to take on an additional process.
• Ready → Running: When it is time to select a process to run, the OS chooses one of the processes in the Ready state. This is the job of the scheduler or dispatcher.

• Running → Exit: The currently running process is terminated by the OS if the process indicates that it has completed, or if it aborts.
• Running → Ready: The most common reason for this transition is that the running process has reached the maximum allowable time for uninterrupted execution.
• Running → Blocked: A process is put in the Blocked state if it requests something for which it must wait. For example, a process may request a service from the OS that the OS is not prepared to perform immediately, or it may initiate an action, such as an I/O operation, that must be completed before the process can continue. A process may also be blocked when it is waiting for another process to provide data or for a message from another process.

What is process creation, and what are the principal events that can cause a process to be created?
Answer
Process creation is the triggering of an event that gives rise to a new, previously non-existent process. When a new process is to be added to those currently being managed, the operating system builds the data structures that are used to manage the process and allocates address space in main memory to the process.
There are four principal events that can trigger the creation of a process:
1. System initialization.
2. Execution of a process-creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.

Under what circumstances may a child process be terminated by its parent process?
Answer
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
 The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.

 Explain the concepts of context switching

Context/Process Switching
Changing the execution environment of one process to that of another is called process switching,
which is the basic mechanism of multitasking.
Switching the CPU to another process requires saving the state of the old process and loading
the saved state of the new process. The context of a process is represented in the PCB of the
process; it includes the value of the CPU registers, the process state, and memory-management information.

Switching can take place when the O/S has control of the system. An O/S can acquire control by:

 Interrupt: an external event that is independent of the current instruction.


 Trap: an event associated with the execution of the current instruction.
 Supervisor call/system call: an explicit call to the O/S.

Context Switching Steps


The steps involved in context switching are as follows:
 Save the context of the process that is currently running on the CPU. Update the process
control block and other important fields.
 Move the process control block of the above process into the relevant queue such as the
ready queue, I/O queue etc.
 Select a new process for execution.
 Update the process control block of the selected process. This includes updating the process
state to running.
 Update the memory management data structures as required.
 Restore the context of the process that was previously running when it is loaded again on the processor. This is done by loading the previous values of the process control block and registers.

Context switch adds an overhead. Why?


Answer
1. In a multitasking computer environment, overhead is any time not spent executing tasks. It is overhead because it is always there, even if nothing productive is going on; context switching is a part of the overhead, but not the only part.
2. Because it takes time away from processing the task(s) at hand. For example, if there is only one task running, the processor can give it 100% of its processing time, and there is no context-switching overhead.

What is a context switch?


Answer

A context switch is the act of saving the state of the currently running process and restoring the saved state of another so that the CPU can switch between them. In a multitasking operating system, a context switch occurs when: a process terminates; the timer elapses, indicating that the CPU should switch to another process; the current process suspends itself; the current process needs time-consuming I/O; or an interrupt arises from some source other than the timer.

 Explain the concept of interrupt


An interrupt is an event that requires the operating system's attention. The computer designer
associates an interrupt with each event, whose sole purpose is to report the occurrence of the event to the
operating system and enable it to perform appropriate event handling actions. When an I/O device has
finished the work given to it, it causes an interrupt (assuming that interrupts have been enabled by the
operating system). It does this by asserting a signal on a bus line that it has been assigned.

Examples of interrupts
Here are some examples of the causes of interrupts. Note that not all need any intervention from
the user.
 Hardware issue, such as a printer paper jam
 Key press by the user, e.g. CTRL ALT DEL
 Software error
 Phone call (mobile device)
 Disk drive indicating it is ready for more data
Hardware Interrupts
A hardware interrupt is an electronic signal sent from an external device to the processor, indicating that the device requires immediate attention. For example, keystrokes from a keyboard or an action from a mouse invoke hardware interrupts, causing the CPU to read and process them. A hardware interrupt arrives asynchronously, at any point during the execution of an instruction.

 Demonstrate masking and unmasking of interrupt requests


Hardware interrupts are classified into two types

 Maskable Interrupts – those which can be disabled or ignored by the microprocessor.


These interrupts are either edge-triggered or level-triggered, so they can be disabled. INTR, RST 7.5, RST 6.5, RST 5.5 are maskable interrupts in the 8085 microprocessor. Processors have an interrupt mask register that allows enabling and disabling of hardware interrupts. Every interrupt signal has a bit in the mask register: if the bit is set the interrupt is enabled, and if it is clear the interrupt is disabled (or vice versa, depending on the processor). Interrupts blocked in this way are referred to as masked interrupts. (A bit-level sketch of masking and unmasking follows after this list.)
 Non-maskable Interrupts (NMI) – those which cannot be disabled or ignored by the microprocessor. TRAP is a non-maskable interrupt in the 8085. It is both level- and edge-triggered and is used in critical power-failure conditions.

Software Interrupts
The processor itself requests a software interrupt after executing certain instructions or when particular conditions are met. A software interrupt can be a specific instruction that triggers an interrupt, such as a subroutine call, or it can be triggered unexpectedly by program execution errors, known as exceptions or traps.

In the 8085, the software interrupts are RST 0, RST 1, RST 2, RST 3, RST 4, RST 5, RST 6, RST 7.

 Demonstrate handling of interrupts

Interrupt Handler
The job of the interrupt handler is to service the device and stop it from interrupting. Once the handler returns, the CPU resumes what it was doing before the interrupt occurred. When the microprocessor receives multiple interrupt requests simultaneously, it executes the interrupt service routines (ISRs) according to the priority of the interrupts.
Instructions for Interrupts (8085) –
1. Enable Interrupt (EI) – The interrupt enable flip-flop is set and all interrupts are enabled following the execution of the next instruction after EI. No flags are affected. After a system reset, the interrupt enable flip-flop is reset, thus disabling the interrupts; this instruction is necessary to enable the interrupts again (except TRAP).
2. Disable Interrupt (DI) – This instruction is used to reset the interrupt enable flip-flop, hence disabling all the interrupts. No flags are affected by this instruction.
3. Set Interrupt Mask (SIM) – It is used to mask the hardware interrupts (RST 7.5, RST 6.5, RST 5.5) by setting various bits to form masks, or to generate output data via the Serial Output Data (SOD) line. First the required value is loaded into the accumulator; SIM then takes the bit pattern from it.
4. Read Interrupt Mask (RIM) – This instruction is used to read the status of the
hardware interrupts (RST 7.5, RST 6.5, RST 5.5) by loading into the A register a byte
which defines the condition of the mask bits for the interrupts. It also reads the
condition of SID (Serial Input Data) bit on the microprocessor.

Three main classes of interrupts:


I/O interrupts
An I/O device requires attention; the corresponding interrupt handler must query the device
to determine the proper course of action. We cover this type of interrupt in the later section
"I/O Interrupt Handling.”
Timer interrupts
Some timer, either a local APIC timer or an external timer, has issued an interrupt; this
kind of interrupt tells the kernel that a fixed-time interval has elapsed. These interrupts are
handled mostly as I/O interrupts; we discuss the peculiar characteristics of timer interrupts
in Chapter 6.
Interprocessor interrupts
A CPU issued an interrupt to another CPU of a multiprocessor system.

I/O Interrupt Handling


In general, an I/O interrupt handler must be flexible enough to service several devices at the same time.
In the PCI bus architecture, for instance, several devices may share the same IRQ line. This means that
the interrupt vector alone does not tell the whole story. In the example shown in Table 4-3, the same
vector 43 is assigned to the USB port and to the sound card. However, some hardware devices found in
older PC architectures (such as ISA) do not reliably operate if their IRQ line is shared with other devices.
Interrupt handler flexibility is achieved in two distinct ways, as discussed in the following list.
IRQ sharing
The interrupt handler executes several interrupt service routines (ISRs). Each ISR is a function related to a single device sharing the IRQ line. Because it is not possible to know
in advance which particular device issued the IRQ, each ISR is executed to verify whether
its device needs attention; if so, the ISR performs all the operations that need to be executed
when the device raises an interrupt.
IRQ dynamic allocation
An IRQ line is associated with a device driver at the last possible moment; for instance,
the IRQ line of the floppy device is allocated only when a user accesses the floppy disk
device.

Interrupt Handler Responsibilities


The interrupt handler has a set of responsibilities to perform. Some are required by the framework,
and some are required by the device. All interrupt handlers are required to do the following:
 Determine if the device is interrupting and possibly reject the interrupt.
The interrupt handler must first examine the device and determine if it has issued the
interrupt. If it has not, the handler must return DDI_INTR_UNCLAIMED. This step
allows the implementation of device polling: it tells the system whether this device, among
a number of devices at the given interrupt priority level, has issued the interrupt.
 Inform the device that it is being serviced.
This is a device-specific operation, but it is required for the majority of devices. For
example, SBus devices are required to interrupt until the driver tells them to stop. This
guarantees that all SBus devices interrupting at the same priority level will be serviced.
 Perform any I/O request-related processing.
Devices interrupt for different reasons, such as transfer done or transfer error. This step
may involve using data access functions to read the device's data buffer, examine the
device's error register, and set the status field in a data structure accordingly. Interrupt
dispatching and processing are relatively time consuming.
 Do any additional processing that could prevent another interrupt.
For example, read the next item of data from the device.
 Return DDI_INTR_CLAIMED.

1) Why are interrupts used?
They are used to get the attention of the CPU to perform services requested by either hardware or software.
2) What is NMI?
NMI is a non-maskable interrupt, one that cannot be ignored or disabled by the processor.
3) What is the function of the interrupt acknowledge line?
The processor sends a signal on it to the devices indicating that it is ready to receive interrupts.
4) Describe hardware interrupts. Give examples.
A hardware interrupt is generated by an external device; for example, keyboard key presses or mouse movement invoke hardware interrupts.
5) Describe software interrupts.
A software interrupt is a special instruction that invokes an interrupt, such as a subroutine call. Software interrupts can also be triggered unexpectedly by program execution errors.
6) Which interrupt has the highest priority?
TRAP, which is non-maskable and both edge- and level-triggered, has the highest priority.
7) Give some uses of interrupts:

 Respond quickly to time-sensitive or real-time events
 Transfer data to and from peripheral devices
 Respond to high-priority tasks such as power-down signals, traps, and watchdog timers
 Indicate abnormal CPU events

 Explain the concept of a thread

A thread of execution is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler. A thread is a lightweight process.

Process switching overhead has two components that impose challenges on multitasking of the processor:

 Execution related overhead: The CPU state of the running process has to be saved and the
CPU state of the new process has to be loaded in the CPU. This overhead is unavoidable.
 Resource-use related overhead: The process context also has to be switched. It involves
switching of the information about resources allocated to the process, such as memory and
files, and interaction of the process with other processes. The large size of this information
adds to the process switching overhead

POSIX Thread
The ANSI/IEEE Portable Operating System Interface (POSIX) standard defines the pthreads application program interface for use by C language programs; the threads package it defines is popularly called Pthreads. Most UNIX systems support it. The standard defines over 60 function calls.
All Pthreads threads have certain properties. Each one has an identifier, a set of registers
(including the program counter), and a set of attributes, which are stored in a structure.

 Write simple thread creation programming code in C


 Explain the types of threads


Multithreading
Multithreading refers to the ability of an operating system to support multiple, concurrent paths of
execution within a single process. The traditional approach of a single thread of execution per
process, in which the concept of a thread is not recognized, is referred to as a single-threaded
approach.

A process is divided into a number of smaller tasks; each task is called a thread. Multiple threads within a process executing at the same time is called multithreading. Based on functionality, threads are divided into four categories:
1) One thread per process (One to one)
2) Many threads per process (One to Many)
3) Many single-threaded processes (Many to one)
4) Many kernel threads (Many to many)
1. One thread per process: A simple single-threaded application has one sequence of
instructions, executing from beginning to end. The operating system kernel runs those
instructions in user mode to restrict access to privileged operations or system memory. The
process performs system calls to ask the kernel to perform privileged operations on its behalf.
2. Many threads per process. Alternately, a program may be structured as several concurrent
threads, each executing within the restricted rights of the process. At any given time, a subset
of the process’s threads may be running, while the rest are suspended. Any thread running in
a process can make system calls into the kernel, blocking that thread until the call returns but
allowing other threads to continue to run. Likewise, when the processor gets an I/O interrupt,
it preempts one of the running threads so the kernel can run the interrupt handler; when the
handler finishes, the kernel resumes that thread.
3. Many single-threaded processes. As recently as twenty years ago, many operating systems
supported multiple processes but only one thread per process. To the kernel, however, each
process looks like a thread: a separate sequence of instructions, executing sometimes in the
kernel and sometimes at user level. For example, on a multiprocessor, if multiple processes
perform system calls at the same time, the kernel, in effect, has multiple threads executing
concurrently in kernel mode.
4. Many kernel threads. To manage complexity, shift work to the background, exploit
parallelism, and hide I/O latency, the operating system kernel itself can benefit from using
multiple threads. In this case, each kernel thread runs with the privileges of the kernel

Threads Creation
A process is always created with one thread, called the initial thread. The initial thread provides
compatibility with previous single-threaded processes. The initial thread's stack is the process stack.

Threads Termination
Terminating threads has its share of subtle issues as well. Our threads return values: which threads
receive these values and how do they do it? Clearly a thread that expects to receive another’s return value
must wait until that thread produces it, and this happens only when the other thread terminates.

1) Why might the “Hello” message from thread 2 print after the “Hello” message for
thread 5, even though thread 2 was created before thread 5?
Answer
Creating and scheduling threads are separate operations.
Although threads are usually scheduled in the order that they are created, there is no guarantee.
Further, even if thread 2 started running before thread 5, it might be preempted before it reaches
the printf call. Rather, the only assumption the programmer can make is that each of the threads
runs on its own virtual processor with unpredictable speed. Any interleaving is possible.
2) Why must the “Thread returned” message from thread 2 print before the “Thread returned” message from thread 5?
Answer
Since the threads run on virtual processors with unpredictable speeds, the order in which the
threads finish is indeterminate. However, the main thread checks for thread completion in the order they
were created. It calls thread_join for thread i +1 only after thread_join for thread i has
returned.
3) What is the minimum and maximum number of threads that could exist when thread
5 prints “Hello?”
Answer
When the program starts, a main thread begins running main. That thread creates NTHREADS =
10 threads. All of those could run and complete before thread 5 prints “Hello.” Thus, the minimum is two threads: the main thread and thread 5. On the other hand, all 10 threads could have been created while thread 5 was the first to run; thus, the maximum is 11 threads.

 Implement a thread at user, kernel and hybrid level


User-level threads
User-level threads are implemented by a thread library. The library sets up the thread implementation without involving the kernel and interleaves the operation of the threads within the process. Thus, the kernel is not aware of the presence of user-level threads in a process; it sees only the process.

Implementing Threads in User Space


The first method is to put the threads package entirely in user space. The kernel knows nothing
about them. As far as the kernel is concerned, it is managing ordinary, single-threaded processes.
The first, and most obvious, advantage is that a user-level threads package can be implemented on an
operating system that does not support threads.



Advantages and Disadvantages of User-level Threads
Advantages:
Since thread scheduling is implemented by the thread library, thread-switching overhead is smaller than for kernel-level threads. This arrangement also enables each process to use the scheduling policy that is best suited to it; a process implementing a multi-threaded server may, for example, perform round-robin scheduling of its threads.
Disadvantages:
Managing threads without the involvement of the kernel has a few drawbacks:
1. The kernel does not know the distinction between a thread and a process, so if a thread blocks, the kernel blocks its parent process. As a result, all the threads of the process are blocked until the cause of blocking is removed.
2. Since the kernel schedules a process and the thread library schedules the threads within a process, at most one thread of a process is in operation at any time. Thus, ULTs cannot provide the parallelism and concurrency provided by KLTs, which is a serious impairment if a thread makes a system call that leads to blocking.

Kernel-level Threads
A KLT is implemented by the kernel; hence, the creation and termination of KLTs, and checking of their status, are performed through system calls.
When a process makes a create-thread system call, the kernel assigns an ID to the new thread and allocates a thread control block (TCB), which contains a pointer to the PCB of the process. When an event occurs, the kernel saves the CPU state of the interrupted thread in its TCB.

Scheduling of Kernel-level Threads

The specifics of the implementation vary depending on the context:


Kernel threads: The simplest case is implementing threads inside the operating system
kernel, sharing one or more physical processors. A kernel thread executes kernel code and
modifies kernel data structures. Almost all commercial operating systems today support
kernel threads.
Kernel threads and single-threaded processes. An operating system with kernel threads
might also run some single-threaded user processes. These processes can invoke system
calls that run concurrently with kernel threads inside the kernel.

Multithreaded processes using kernel threads: Most operating systems provide a set of
library routines and system calls to allow applications to use multiple threads within a
single user-level process.

Advantages and Disadvantages of Kernel-level threads


Advantages:
A KLT is like a process, except that it has a smaller amount of state information. The similarity between processes and threads is convenient for programmers, as programming for threads is the same as programming for processes.
In a multiprocessor system, KLTs provide parallelism, i.e., several threads belonging to the same process can be scheduled simultaneously, which is not possible using ULTs.
Disadvantages:
However, handling threads like processes has its disadvantages too. Switching between threads is performed by the kernel as a result of event handling. Hence, it incurs the overhead of event handling even if the interrupted thread and the selected thread belong to the same process. This limits the savings in switching overhead.

Hybrid Threads
A hybrid thread model has both ULTs and KLTs and a method of associating ULTs with KLTs. Different methods of association provide different combinations of the low switching overhead of ULTs and the high concurrency and parallelism of KLTs.

Many-to-one association method: In this method, a single KLT is created in each process by the kernel, and all ULTs created by the thread library are associated with this single KLT. ULTs can be concurrent without being parallel; thread switching has low overhead, but blocking of a ULT leads to blocking of all threads in the process.
One-to-one association method: In this method, each ULT is permanently mapped to a KLT. This provides an effect similar to pure KLTs: threads can operate in parallel on different CPUs of a multiprocessor system; however, switching between threads is performed at kernel level and results in high overhead. Blocking of a user-level thread does not block other threads.
Many-to-many association method: In this method, a ULT can be mapped to any KLT. Parallelism is possible, switching is performed at kernel level with low overhead, and blocking of a ULT does not block other threads, as they can be mapped to different KLTs. Overall, this method requires a complex implementation, e.g., the Sun Solaris operating system.

1. Name three ways to switch between user mode and kernel mode in a general-purpose
operating system.
Answer
The three ways to switch between user mode and kernel mode in a general-purpose operating system are in response to a system call, an interrupt, or a signal. A system call
occurs when a user program in user-space explicitly calls a kernel-defined "function" so
the CPU must switch into kernel-mode. An interrupt occurs when an I/O device on a
machine raises an interrupt to notify the CPU of an event. In this case kernel-mode is
necessary to allow the OS to handle the interrupt. Finally, a signal occurs when one process
wants to notify another process that some event has happened, such as that a segmentation
fault has occurred or to kill a child process. When this happens the OS executes the default
signal handler for this type of signal.
 Demonstrate the Thread Control Block diagram

Thread Control Block


A Thread Control Block (TCB) is a data structure in the operating system kernel which contains thread-specific information needed to manage the thread. The TCB is "the manifestation of a thread in an operating system". It contains information such as the thread ID, stack pointer, program counter and other CPU information, thread priority, and pointers:

 Thread ID: It is a unique identifier assigned by the Operating System to the thread when
it is being created.
 Thread states: These are the states of the thread, which change as the thread progresses through the system.
 CPU information: It includes everything that the OS needs to know about, such as how
far the thread has progressed and what data is being used.
 Thread Priority: It indicates the weight (or priority) of the thread over other threads which
helps the thread scheduler to determine which thread should be selected next from the
READY queue.
 A pointer which points to the process which triggered the creation of this thread.
 A pointer which points to the thread(s) created by this thread.

 Show thread states and transitions


Thread States
As with processes, the key states for a thread are Running, Ready, and Blocked. Generally, it does not make sense to associate suspend states with threads, because such states are process-level concepts. There are four basic thread operations associated with a change in thread state:

• Spawn: Typically, when a new process is spawned, a thread for that process is also spawned.
Subsequently, a thread within a process may spawn another thread within the same process,
providing an instruction pointer and arguments for the new thread. The new thread is provided
with its own register context and stack space and placed on the ready queue.
• Block: When a thread needs to wait for an event, it will block (saving its user registers, program
counter, and stack pointers). The processor may now turn to the execution of another ready thread in the
same or a different process.
• Unblock: When the event for which a thread is blocked occurs, the thread is moved to the Ready queue.
• Finish: When a thread completes, its register context and stacks are deallocated.

Thread lifecycle
It is useful to consider the progression of states as a thread goes from being created, to being
scheduled and de-scheduled onto and off of a processor, and then to exiting. Figure 2.9 shows the states of a thread during its lifetime.

INIT: Thread creation puts a thread into its INIT state and allocates and initializes per-thread data structures.
READY: A thread in the READY state is available to be run but is not currently running.
Its TCB is on the ready list, and the values of its registers are stored in its TCB. At any
time, the scheduler can cause a thread to transition from READY to RUNNING by copying
its register values from its TCB to a processor’s registers.
RUNNING: A thread in the RUNNING state is running on a processor. At this time, its
register values are stored on the processor rather than in the TCB. A RUNNING thread can
transition to the READY state in two ways: The scheduler can preempt a running thread
and move it to the READY state by:
1. saving the thread’s registers to its TCB and
2. switching the processor to run the next thread on the ready list.
A running thread can voluntarily relinquish the processor and go from RUNNING to READY by
calling yield (e.g., thread_yield in the thread library).
Notice that a thread can transition from READY to RUNNING and back many times. Since the
operating system saves and restores the thread’s registers exactly, only the speed of the thread’s
execution is affected by these transitions.

WAITING: A thread in the WAITING state is waiting for some event. Whereas the scheduler can
move a thread in the READY state to the RUNNING state, a thread in the WAITING state cannot
run until some action by another thread moves it from WAITING to READY.
FINISHED: A thread in the FINISHED state never runs again. The system can free some or all
of its state for other uses, though it may keep some remnants of the thread in the FINISHED state
for a time by putting the TCB on a finished list. For example, the thread_exit call lets a thread pass
its exit value to its parent thread via thread_join. Eventually, when a thread’s state is no longer
needed (e.g., after its exit value has been read by the join call), the system can delete and reclaim
the thread’s state
perthread data structures.

1. What is the primary difference between a kernel-level context switch between processes
(address spaces) and a user-level context switch?
Answer
The primary difference is that kernel-level context switches involve execution of OS code. As such
it requires crossing the boundary between user- and kernel-land two times. When the kernel is
switching between two different address spaces it must store the registers as well as the address
space. Saving the address space involves saving pointers to the page tables, segment tables, and
whatever other data structures are used by the CPU to describe an address space. When switching
between two user-level threads only the user-visible registers need to be saved and the kernel need
not be entered. The overhead observed on a kernel-level context switch is much higher than that
of a user-level context switch.
2. Does spawning two user-level threads in the same address space guarantee that the
threads will run in parallel on a 2-CPU multiprocessor? If not, why?
Answer
No, the two user-level threads may run on top of the same kernel thread. There are, in fact, many
reasons why two user-level threads may not run in parallel on a 2-CPU MP. First is that there may
be many other processes running on the MP, so there is no other CPU available to execute the
threads in parallel. Second is that both threads may be executed on the same CPU because the OS
does not provide an efficient load balancer to move either thread to a vacant CPU. Third is that the
programmer may limit the CPUs on which each thread may execute.

1. Can a mutual exclusion algorithm be based on assumptions about the relative speed of processes, i.e., that some processes may be "faster" than others in executing the same section of code?
Answer
No, mutual exclusion algorithms cannot be based on assumptions about the relative speed of
processes. There are MANY factors that determine the execution time of a given section of code,
all of which would affect the relative speed of processes. A process that is 10x faster through a
section of code one time, may be 10x slower the next time.
2. Can semaphores be implemented with condition variables?
Answer
Yes. Semaphores can be implemented with condition variables, provided that there is also a primitive to protect a critical section (lock/unlock), so that both the semaphore value and the condition are checked atomically. In Nachos, for example, this is done by disabling interrupts.
3. Define briefly the lost wakeup problem.
Answer

The lost wakeup problem occurs when two threads are using CVs to synchronize their execution.
If thread 1 reaches the case where the necessary condition will be false, such as when a consumer
sees that the buffer is empty, it will go to sleep. It is possible that the OS will interrupt thread 1
just before it goes to sleep and schedule thread 2 which could make the condition for thread 1 true
again, for example by adding something to the buffer. If this happens thread 2 will signal thread 1
to wake up, but since thread 1 is not asleep yet, the wakeup signal is lost. At some point thread 1
will continue executing and immediately go back to sleep. Eventually, thread 2 will find its
condition to be false, for example if the buffer becomes full, and it will go to sleep. Now both
threads are asleep and neither can be woken up.

1. Define external and internal fragmentation and identify the differences between them.
Answer
Internal fragmentation occurs when the memory manager allocates more for each allocation than is actually requested; it is the wasted (unused) space within a page. For example, if I need 1K of memory but the page size is 4K, then there is 3K of
wasted space due to internal fragmentation. External fragmentation is the inability to use
memory because free memory is divided into many small blocks. If live objects are
scattered, the free blocks cannot be coalesced, and hence no large blocks can be allocated.
External fragmentation is the wasted space outside of any group of allocated pages that is
too small to be used to satisfy another request. For example if best-fit memory
management is used, then very small areas of memory are likely to remain, which may
not be usable by any future request. Both types of fragmentation result in free memory
that is unusable by the system.

2. Given memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in order),
how would each of the first-fit, best-fit and worst-fit algorithms place processes of 212
KB, 417 KB, 112 KB and 426 KB (in that order) ? Which algorithm makes the most
efficient use of memory?
Answer
First-fit:
212K is put in the 500K partition (leaving a 288K hole).
417K is put in the 600K partition.
112K is put in the 288K hole (288K = 500K − 212K).
426K must wait.
Best-fit:
212K is put in the 300K partition.
417K is put in the 500K partition.
112K is put in the 200K partition.
426K is put in the 600K partition.
Worst-fit:
212K is put in the 600K partition (leaving a 388K hole).
417K is put in the 500K partition.
112K is put in the 388K hole.
426K must wait.
In this example, best-fit makes the most efficient use of memory: it is the only algorithm that satisfies all four requests.

Thrashing
Thrashing is the poor performance of a virtual memory (or paging) system that occurs when the same pages are loaded repeatedly because there is not enough main memory to keep them resident. Depending on the
configuration and algorithm, the actual throughput of a system can degrade by multiple orders of
magnitude. Thrashing occurs when a computer's virtual memory resources are overused, leading
to a constant state of paging and page faults, inhibiting most application-level processing. It causes the performance of the computer to degrade or collapse.

1. Global Page Replacement


Since global page replacement can bring in any page, it tries to bring in more pages whenever thrashing is found. But what actually happens is that no process gets enough frames, and as a result, the thrashing increases more and more. Therefore, the global page replacement algorithm is not suitable when thrashing happens.
2. Local Page Replacement
Unlike the global page replacement algorithm, local page replacement selects only pages that belong to the faulting process, so there is a chance of reducing thrashing. However, local page replacement has many disadvantages of its own, so it is only an alternative to global page replacement in a thrashing scenario.
Causes of Thrashing
Programs or workloads may cause thrashing, and it results in severe performance problems, such
as:
o If CPU utilization is too low, the OS increases the degree of multiprogramming by introducing a new process. A global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming further.
o CPU utilization is plotted against the degree of multiprogramming.
o As the degree of multiprogramming increases, CPU utilization also increases.
o If the degree of multiprogramming is increased further, thrashing sets in, and CPU
utilization drops sharply.
o So, at this point, to increase CPU utilization and to stop thrashing, we must decrease the
degree of multiprogramming.
How to Eliminate Thrashing
Thrashing has some negative impacts on hard drive health and system performance. Therefore, it
is necessary to take some actions to avoid it. To resolve the problem of thrashing, here are the
following methods, such as:
o Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can also happen.
o Increase the amount of RAM: As insufficient memory can cause disk thrashing, one solution is to add more RAM. With more memory, the computer can handle tasks easily and does not have to work excessively. Generally, this is the best long-term solution.
o Decrease the number of applications running on the computer: If too many applications are running in the background, they consume a large share of system resources, and the little that remains can result in thrashing. Closing some applications releases resources, so thrashing can be avoided to some extent.
o Replace programs: Replace memory-heavy programs with equivalents that use less memory.

Replacement Policies
Policies also vary depending on the setting: hardware caches use a different replacement policy
than the operating system does in managing main memory as a cache for disk. A hardware cache
will often have a limited number of replacement choices, constrained by the set associativity of
the cache, and it must make its decisions very rapidly. Even within the operating system, the
replacement policy for the file buffer cache is often different than the one used for demand paged
virtual memory, depending on what information is easily available about the access pattern.
Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In case
of page fault, Operating System might have to replace one of the existing pages with the newly
needed page. Different page replacement algorithms suggest different ways to decide which page
to replace. The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms
1. First In First Out(FIFO)
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in the memory in a queue, the oldest page is in the front of the queue. When a
page needs to be replaced page in the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e., 1 —> 1 page fault.
6 comes; it is also not in memory, so it replaces the oldest page, i.e., 3 —> 1 page fault.
Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 page fault.
Total: 6 page faults.
2. Optimal Page replacement
In this algorithm, pages are replaced which would not be used for the longest duration of time in
the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Initially all four frames are empty, so 7, 0, 1, 2 each cause a fault —> 4 page faults. 0 is already in memory —> 0 page faults. When 3 comes, it replaces 7, the page not needed for the longest time in the future —> 1 page fault. 0 is in memory —> 0 page faults. When 4 comes, it replaces 1, which is never used again —> 1 page fault. The remaining references 2, 3, 0, 3, 2 are all in memory —> 0 page faults.
Total: 6 page faults.
