
Assignment 2

Question 1

The states of the five-state process model are New, Ready, Running, Blocked/Waiting, and Exit.
The “New” state refers to a process that has just been created but has not yet been admitted to
the pool of executable processes by the OS. Typically, for most operating systems, a new
process has not yet been loaded into main memory. However, its process control block has
been created. The reason the newly created process is not immediately loaded into main
memory is that modern programs are large, and main memory does not have enough space
to hold all processes at once. The “Ready” state refers to a process that is prepared to
execute when given the opportunity. The process has been loaded into main memory and is
ready to run. The process waits to be assigned the processor, and once it is dispatched, it
moves to the processor for execution. The “Running” state refers to the process
which is currently being executed. All of the processes which are currently executing on the
CPU are in a running state. The “Blocked”, or sometimes called Waiting, state refers to a
process that cannot execute until some event occurs. For example, a process may be blocked
because it is waiting for the completion of an I/O operation. The “Exit” state refers to a process
that has been released from the pool of executable processes by the OS, either because the
process has halted or has aborted for any reason. It has been terminated from the CPU and the
main memory.

One possible sequence of transitions is as follows: the process goes into a New state (it has just
been created), the process goes from New to Ready, the process goes from Ready to Running,
the process goes from Running to Blocked, the process goes from Blocked to Ready, the
process goes from Ready to Running, the process goes from Running to Exit.
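The transitions above can be sketched as a small validity check. This is only an illustrative sketch; the state and function names are our own, not part of any real OS API.

```c
#include <stdbool.h>

/* The five states of the model described above. */
typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT } proc_state;

/* Returns true if the model permits a direct transition from 'from' to 'to'. */
static bool valid_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;      /* admitted by the OS */
    case READY:   return to == RUNNING;    /* dispatched */
    case RUNNING: return to == READY       /* timeout / preempted */
                      || to == BLOCKED     /* waits on an event */
                      || to == EXIT;       /* halts or aborts */
    case BLOCKED: return to == READY;      /* awaited event occurs */
    case EXIT:    return false;            /* terminal state */
    }
    return false;
}
```

Note that there is no BLOCKED to RUNNING edge: a blocked process must first return to Ready and be dispatched again, exactly as in the sequence above.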

Question 2

Two modes of execution (user and kernel mode) are needed to protect the OS and key
operating system tables, such as the process control blocks, from interference by user
programs. The user mode refers to a less-privileged mode as user programs would typically
execute in this mode. In contrast, the kernel mode (or system/control mode) refers to a
more-privileged mode. In kernel mode, the software has complete control of the processor,
including its instructions, registers, and memory. Instructions such as reading or altering a
control register (such as the PSW), primitive I/O instructions, and those that relate to memory
management should all be executed while in the kernel mode. We don’t want or need to give
this complete level of control to user-level programs, in part due to safety reasons. User
programs shouldn’t typically be able to make significant changes to the registers or portions of
memory, so we should not grant it this ability in the first place. This leads to the need for two
separate modes with varying levels of privilege: the user mode and kernel mode. In addition, the
distinction between these two modes allows modern operating systems to continue operating,
even if one of its running applications misbehaves. If these two modes did not exist, or there
was no separation between modes, processes running on the system may interfere with each
other, such as by overwriting each other’s memory and causing the whole system to halt.
Overall, there is a need for two separate modes of execution in order to separate system-critical
functions from non-critical functions, including the level of privilege granted to each.
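The privilege check can be modeled with a toy sketch. This is not how real hardware works internally (a real CPU raises a protection fault rather than returning a flag), and all names here are invented for illustration.

```c
#include <stdbool.h>

typedef enum { USER_MODE, KERNEL_MODE } cpu_mode;

/* Toy model: a privileged operation, such as altering a control register
 * like the PSW, is permitted only in kernel mode. In user mode it is
 * refused, where real hardware would instead raise a protection fault. */
static bool write_control_register(cpu_mode mode, unsigned new_psw) {
    if (mode != KERNEL_MODE)
        return false;   /* user programs are denied this level of control */
    (void)new_psw;      /* a real kernel would update the PSW here */
    return true;
}
```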

Question 3

The first step for creating a new process is to assign a unique process identifier to the new
process. For this step, a new entry is added to the primary process table. This table contains
one entry per process. The second step is to allocate space for the process. This includes
making sure there is space for all elements of the process image. This means that the OS
needs to know how much space is needed for the private user address space, including the
programs and data, and the user stack. Based on the type of process being created, these
values can be assigned by default or set based on the user request at the time of job creation.
In addition, if the process is spawned by a parent process, the parent can pass any
needed values to the OS with the process creation request. Necessary linkages need to be set
up if any existing address space is to be shared by this new process. Another component of
allocating space for the new process is that space for the process control block must be
allocated. The third step for process creation is to initialize the process control block. The
process identification portion will have the ID of this process, as well as other appropriate IDs
(such as the parent process ID). The portion containing the processor state information will
usually be initialized with most entries as zero, except for the program counter which is set to
the program entry point and the system stack pointer which is set to define the process stack
boundaries. The process control information portion of the PCB is initialized based on standard
default values and attributes that have been requested for this process. For example, we may
have a process state that is initialized to Ready or Ready/Suspend. The process priority may be
set to the lowest priority as a default unless there has been a request explicitly made for higher
priority. Furthermore, the process may not have any resources it owns unless an explicit request
has been made for them or the parent allows them to be inherited. The fourth step in process
creation is to set the appropriate linkages. An example of this is when the OS maintains the
scheduling queues as linked lists and processes need to be placed in the appropriate Ready or
Ready/Suspend lists. The fifth step in process creation is creating or expanding other data
structures. An example of this is where the OS may need to maintain an accounting file on each
process for performance assessment.
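Steps one and three above can be sketched with a minimal PCB structure. The fields and names below are a hypothetical simplification; a real PCB holds far more information.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical minimal PCB; the fields mirror the steps described above. */
typedef struct {
    int      pid;              /* step 1: unique process identifier */
    int      parent_pid;       /* other appropriate IDs */
    uint32_t program_counter;  /* set to the program entry point */
    uint32_t stack_pointer;    /* defines the process stack boundaries */
    int      state;            /* e.g. 0 = Ready */
    int      priority;         /* lowest priority (0) by default */
} pcb;

/* Step 3: initialize the PCB; most entries start out as zero. */
static pcb pcb_init(int pid, int parent_pid,
                    uint32_t entry_point, uint32_t stack_top) {
    pcb p;
    memset(&p, 0, sizeof p);        /* zero everything by default */
    p.pid = pid;
    p.parent_pid = parent_pid;
    p.program_counter = entry_point;
    p.stack_pointer = stack_top;
    return p;                        /* state and priority stay at 0 */
}
```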

Question 4

One of the key differences between a process switch and a mode switch is that a mode switch
may occur without changing the state of the process that is currently in the Running state. In
contrast, a process switch involves changing a running process into another state (such as
Ready, Blocked, etc) and the OS needs to make several changes in its environment. Mode
switching changes the process privilege between modes, such as user and kernel, while
process switching changes the process state between different states. The steps involved in a
full process switch require much more effort than those for a mode switch. The first step in
process switching is saving the context of the processor (including the program counter and
registers). The second step is updating the process control block of the process that is currently
in the running state. The state of the currently running process needs to be changed to another
state (such as Ready, Blocked, etc). Other relevant fields also need to be updated, such as
accounting information and the reason for leaving the Running state. The third step in process
switching is moving the process control block of this process to the appropriate queue. The
fourth step is selecting another process for execution. The fifth step is updating the process
control block of the selected process. The sixth step is updating memory management data
structures. The seventh step is restoring the context of the processor to the previously selected
process. In contrast, mode switching is quite distinct from process switching and involves much
less overhead. A mode switch starts with the processor checking whether any interrupts
are pending; if none are pending, it proceeds to the fetch stage and fetches the next
instruction of the current program in the current process. If an interrupt is pending, the processor
sets the program counter to the starting address of the interrupt-handler program. Then it
switches from user mode to kernel mode so the interrupt processing code is able to execute
privileged instructions. The processor then proceeds to the fetch stage to fetch the first
instruction of the interrupt-handler program. The context of the process that has been
interrupted is saved to the PCB of the interrupted program. As explained above, process
switching and mode switching are two distinct concepts and involve a different sequence of
steps. A mode switch can occur without changing the state of the process currently in the
Running state, and thus its context saving and restoration have little overhead compared to that of
process switching. Overall, process switching requires more effort than mode switching.

Question 5

We may want to add two new states to our original five-state model: Ready/Suspend and
Blocked/Suspend. Ready/Suspend means the process is in secondary memory but is available
for execution as soon as it is loaded into main memory. Blocked/Suspend means the process is
in secondary memory and is awaiting an event. One main reason we may want to
add these two states is that, if our system does not employ virtual memory, each process to be
executed must be loaded fully into main memory. Thus, all the processes in all of the queues
(such as Ready queue, Blocked queue, etc) must be in main memory. In a multiprogramming
system, where most processes are waiting for I/O, it is likely for a processor to be idle most of
the time. A solution to this problem is swapping, in which we move part or all of
the process from main memory to disk. A suspended process is not immediately
available for execution, may or may not be waiting on an event, was
placed in the suspended state by an agent (itself, a parent process, or the OS), and may not be removed from the
suspended state until the agent explicitly orders its removal. When considering suspending a
process, there are two independent concepts to be considered. The first is whether the process
is waiting on an event (blocked or not), and the second is whether the process has been
swapped out of main memory or not (suspended or not). To accommodate the 2 x 2
possible combinations, we need four separate states:
Ready, Blocked, Ready/Suspend, and Blocked/Suspend. Overall, the
Ready/Suspend and Blocked/Suspend states are introduced because we may want to store the
process on secondary storage in order to free up memory to run more processes, since a
blocked process occupies main memory it cannot make use of for some time.
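The 2 x 2 combination described above maps directly onto the four states. A minimal sketch, with invented names:

```c
#include <stdbool.h>

typedef enum {
    S_READY, S_BLOCKED, S_READY_SUSPEND, S_BLOCKED_SUSPEND
} sched_state;

/* Maps the two independent questions (is the process waiting on an event?
 * has it been swapped out of main memory?) onto the four states above. */
static sched_state classify(bool waiting_on_event, bool swapped_out) {
    if (waiting_on_event)
        return swapped_out ? S_BLOCKED_SUSPEND : S_BLOCKED;
    return swapped_out ? S_READY_SUSPEND : S_READY;
}
```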

Question 6

Time 1
P0: NEW

Time 2
P0: NEW
P1: NEW

Time 3
P0: READY
P1: READY

Time 4
P0: RUNNING
P1: READY

Time 5
P0: RUNNING
P1: READY
P2: NEW

Time 6
P0: RUNNING
P1: READY
P2: READY

Time 7
P0: BLOCKED
P1: READY
P2: READY

Time 8
P0: BLOCKED
P1: RUNNING
P2: READY

Time 9
P0: BLOCKED
P1: RUNNING
P2: READY
P3: NEW

Time 10
P0: BLOCKED
P1: READY
P2: READY
P3: NEW

Time 11
P0: BLOCKED
P1: READY
P2: RUNNING
P3: NEW

Time 12
P0: BLOCKED
P1: READY
P2: RUNNING
P3: READY

Process Transition:

P0: NEW -> (admitted) -> READY -> (dispatched) -> RUNNING -> (block on I/O) -> BLOCKED

P1: NEW -> (admitted) -> READY -> (dispatched) -> RUNNING -> (times out) -> READY

P2: NEW -> (admitted) -> READY -> (dispatched) -> RUNNING

P3: NEW -> (admitted) -> READY

Question 7

Storing register values in fixed locations associated with the
given interrupt signal is practical under the assumption that an interrupted process A will
continue to run after the response to the interrupt, with its registers immediately restored
by the hardware. In earlier processors, interrupts were conceptually much simpler. When it
was time to interrupt, between instructions, the current PSW (Program Status Word) would be
stored at a fixed location dependent on the interrupt category. It was then replaced by the
contents of another fixed location which depended likewise. The few miscellaneous system
states chosen by the application program were included in the PSW. This was a technique
which was useful when multiprogramming was just starting to become practical and we were
thinking about microprogramming. Specifically, this technique was practical when attempting to
make a series of compatible computers of greatly different performance that appeared uniform,
even to the OS. This technique relied on the PSW, which held the address of the next
instruction along with the other state bits such as the privileged bit, memory mapping effect, and
which of the five categories of interrupts were permitted.

However, this technique of storing register values in fixed locations associated with a given
interrupt signal is inconvenient in general. In general, an interrupt may cause the basic monitor
to preempt a process A in favor of another process B. Therefore, it is now necessary to copy the
execution state of process A from the location associated with the interrupt to the process
description associated with A. If this is the common case, it would have made more sense for the system to
store the state in the process description in the first place. In addition, as the number of possible interrupts
increases, the OS is required to allocate more memory or locations for each interrupt. This is not
feasible as memory space is limited. Also, when these fixed locations are set aside for the
interrupts, they are not able to be used to store other values. Overall, issues associated with this
approach include limited flexibility, scalability issues, interrupt nesting, and context switching.
Specifically for context switching, storing register values in fixed locations is not able to support
all of the requirements associated with storing register values, as well as other
memory-management information. For these reasons, it is inconvenient to store register values
in fixed locations associated with the interrupt signals.

Question 8

There are three possible outputs for running this C program:

The first is a negative value, which indicates that the creation of a child process was
unsuccessful. The child process was not created.

The second is a positive value, which is returned to the parent or caller process. This value
contains the process ID of the newly created child process. A positive value indicates the child
process was created successfully.

The third is a zero value, which is returned to the newly created child process. A zero value
being displayed means we are currently working with the child process.
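The C program itself is not reproduced above; a small sketch of the three cases follows, using a helper name of our own choosing (the classification logic matches fork()'s documented return values).

```c
#include <sys/types.h>

/* Hypothetical helper classifying fork()'s return value into the three
 * cases above. In a real program it would be used as:
 *     pid_t pid = fork();
 *     const char *who = fork_result(pid);
 */
static const char *fork_result(pid_t pid) {
    if (pid < 0)  return "error";   /* negative: no child was created */
    if (pid == 0) return "child";   /* zero: we are the new child process */
    return "parent";                /* positive: the pid of the new child */
}
```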

Question 9

A process is an entity which represents the basic unit of work to be implemented in the system.
Processes embody two characteristics: resource ownership and scheduling/execution.
Resource ownership refers to a process including a virtual address space to hold the process
image. A process may be allocated control of ownership of resources, such as the main memory
or I/O channels. Scheduling/execution refers to a process having an execution state (such as
Running, Ready, etc) and a dispatching priority. A process is the actual entity which is
scheduled and dispatched by the OS. Entities which are associated with processes include a
virtual address space which holds the process image and protected access to processors, other
processes, files, and I/O resources. Meanwhile, a thread is a single sequential flow of activities
which is being executed in a process. A thread has associated entities such as a thread execution
state (Running, Ready, etc), a saved thread context, an execution stack, per-thread static
storage for local variables, and shared access to the memory and resources of its process. If
the process model does not have the concept of multiple threads, the representation of a
process includes its process control block and user address space, as well as user and kernel
stacks. In contrast, in a multithreaded environment, there is a single process control block and
user address space for the process, but also separate stacks for each thread, as well as a
separate control block for each thread.

Both processes and threads are independent sequences of execution. One similarity between
a process and a thread is that, on a single processor, only one of each is active at a time. Also, both can
create children and are executed sequentially. A basic difference between a process and a thread
is that processes run in separate memory spaces, while the threads of a process execute in the same
memory space. There is a difference in how the two handle memory sharing. Processes are
largely independent and do not share memory by default, while threads share memory with their
peer threads. Another difference is in how the two act when they are blocked. If a process gets
blocked, the remaining processes can continue execution. However, if a user level thread gets
blocked, all of its peer threads also get blocked. In addition, a process has various states
(Ready, Running, etc) associated with it, while multiple threads within a process share process
state as well as memory and resources.

Question 10

Resources which are typically shared by all threads of a process include access to the memory
and resources of its process (such as open files). Threads also share the state of that process.
They also share the same address space and processor, and have access to the same
data. Specifically, resources such as code, data, and files (file descriptors) can be shared
among all threads within a process. Shared resources among threads for one process include
instructions (including the text and code), static and global data, uninitialized data (BSS), open
file descriptors, signals, current working directory, and user and group IDs. Three major areas
where threads share resources include the text area, data area, and the heap. The text area
contains the machine code which is executed. The data area is used for uninitialized and
initialized static variables. The heap is reserved for the dynamically allocated variables and is
located at the opposite end of the stack in the process’s virtual address space. These shared
resources are in contrast with other resources which are kept private for each thread such as
registers and the stack pointer.
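The sharing of static and global data can be demonstrated with POSIX threads. The function names below are ours; the global counter lives in the shared data segment, so every thread updates the same variable (the mutex makes the result deterministic).

```c
#include <pthread.h>

/* The global lives in the data segment, so every thread of the process
 * sees the same variable; there is no per-thread copy. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        pthread_mutex_lock(&lock);   /* shared data needs synchronization */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Runs n threads (n <= 16) against the same counter; returns the total. */
static long run_workers(int n) {
    pthread_t tid[16];
    counter = 0;
    for (int i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(tid[i], NULL);
    return counter;
}
```

In contrast, each thread's loop index `i` lives on that thread's private stack, illustrating the split between shared and per-thread resources described above.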

Question 11

One advantage ULTs (User Level Threads) have over KLTs (Kernel Level Threads) is that
thread switching does not require kernel-mode privileges as all of the thread management data
structures are within the user address space of a single process. Therefore, there is no need for
the process to switch to kernel mode to do thread management. This saves the overhead of
making the mode switch from user to kernel, and from kernel to user. The second advantage of
ULTs over KLTs is that scheduling can be application specific. One application might benefit
from a round-robin scheduling algorithm, while another application may benefit from a
priority-based scheduling algorithm. This can be done without disturbing the underlying OS
scheduler. The third advantage of ULTs over KLTs is that ULTs can run on any OS. There are no
specific changes that are required to the underlying kernel to support ULTs. Instead, the threads
library is a set of application-level functions which is shared by all applications.

Question 12

The first advantage of KLTs over ULTs is that the kernel can simultaneously schedule multiple
threads from the same process on multiple processors (multiprocessing). The second
advantage is that if one thread in a process is blocked, the kernel can schedule another thread
of the same process. These advantages are possible because scheduling by the kernel is done
on a thread basis. Also, the kernel maintains context information for the process as a whole and
for the individual threads within the process.

Question 13

Jacketing allows a threads library to convert a blocking system call into a non-blocking system call. For
example, instead of directly calling a system I/O routine, a thread will call an application-level I/O
jacket routine. This jacket routine contains code that checks to determine if the I/O device is
busy. If the I/O device is busy, the thread enters the Blocked state and passes control to another
thread through the threads library. When this thread is later given control again, the jacket
routine will again check the I/O device.
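A jacket routine can be sketched as below. All names here are invented stand-ins (not a real threads-library API), and the yield hook simply completes the I/O so the example terminates.

```c
#include <stdbool.h>

/* Hypothetical device flag and library hooks (assumed names). */
static bool device_busy = true;

static bool io_device_is_busy(void) { return device_busy; }

/* Stand-in for the threads library passing control to another ULT; here
 * the "other thread" simply finishes the I/O so the loop can exit. */
static void threads_yield(void) { device_busy = false; }

/* Jacket routine: checks the device instead of calling the blocking
 * system I/O routine directly, yielding to another thread while the
 * device is busy. Returns how many times it had to yield. */
static int jacketed_io(void) {
    int yields = 0;
    while (io_device_is_busy()) {
        yields++;           /* this ULT enters the Blocked state here */
        threads_yield();    /* control passes to another thread */
    }
    /* device free: now perform the real, no-longer-blocking call */
    return yields;
}
```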

Question 14

When a ULT blocks for I/O, all ULTs are blocked. This occurs because from the perspective of
the kernel, only one thread exists, which is the process. All of the activity for ULTs take place in
the user space and within a single process. The kernel is unaware of the threads’ specific
activity. ULTs do not have direct kernel-level support. Instead, they are scheduled and managed
by the application’s runtime or threading library. The kernel continues to schedule the process
as a single unit and it assigns a single execution state to that process, such as Ready, Running,
Blocked, etc. Therefore, when one of the ULTs is Blocked for I/O, from the perspective of the
kernel, the entire process is in a Blocked state as control is transferred to the kernel. Therefore,
all of the other ULTs for that process will be blocked as well and not able to do any work. The
kernel does not provide support for switching between ULTs. An example of this is if we have a
web browser with multiple tabs open, with each tab running as a ULT. If a video is buffering in
one tab, so that the tab is blocked for I/O, all of the other tabs will be frozen as well. Another example of
this is when we have a web client which has multiple ULTs for handling client requests, such as
authenticating login credentials and retrieving personal information from a database. When one
ULT blocks for I/O, such as when we are waiting for the user to input their login credentials so
we can validate them, all other ULTs will be blocked and other client requests will see some
delay in getting fulfilled. Another example is when a ULT may block for I/O because it is waiting
for data to be read from a file. In this scenario, the other ULTs will also be blocked and not able
to interact with the file system.

Question 15

If a process exits and there are still threads belonging to that process running, these threads will
not continue to run. These threads will cease to exist because they are part of the process.
Therefore, when the process ends through its exit, its resources will be deallocated, including its
threads. When a process exits, it takes everything with it, including the process structure and
memory space, including its threads. All threads in a process share the same address space;
therefore, when the memory for the process is deallocated, its threads are terminated and
deallocated as well.

Question 16

This model can make multithreaded programs run faster than their single-threaded counterparts
on a uniprocessor computer because only the thread that executes the system call is blocked
while the other threads can continue to run. A single-threaded process would be blocked if a
blocking system call is executed. However, in a multithreaded program, a thread of the process
can execute the blocking system call while the other threads will be able to continue running.
This applies even on the uniprocessor computer where the remaining threads of the process will
be able to continue their execution even while one or more of its threads are blocked. In this
situation where there is a one-to-one mapping between user-level threads and kernel-level
threads, a blocking system call cannot block the entire process, as there is a kernel
thread present for every user thread. If one kernel thread gets blocked, the others will be able to
continue running, allowing for the multithreaded program to run faster. This concurrency is not
present in single-threaded systems, as the system will have to wait for the blocking event to
occur before anything else can be run, so it is not able to run as fast.
