
Process Management

The microprocessor (or central processing unit (CPU), or just processor) is the central
component of the computer, and is in one way or another involved in everything the computer
does. A computer program consists of a series of machine code instructions which the
processor executes one at a time. This means that, even in a multi-tasking environment, a
computer system can, at any given moment, only execute as many program instructions as
there are processors. In a single-processor system, therefore, only one program can be
running at any one time. The fact that a modern desktop computer can be downloading files
from the Internet, playing music files, and running various applications all at (apparently) the
same time, is due to the fact that the processor can execute many millions of program
instructions per second, allowing the operating system to allocate some processor time to
each program in a transparent manner. In recent years, the emphasis in processor
manufacture has been on producing multi-core processors that enable the computer to
execute multiple processes or process threads at the same time in order to increase speed
and performance.


The Intel Core 2 Quad processor


What is a process?
Essentially, a process is what a program becomes when it is loaded into memory from a
secondary storage medium like a hard disk drive or an optical drive. Each process has its
own address space, which typically contains both program instructions and data. Despite the
fact that an individual processor or processor core can only execute one program instruction
at a time, a large number of processes can be executed over a relatively short period of time
by briefly assigning each process to the processor in turn. While a process is executing it has
complete control of the processor, but at some point the operating system needs to regain
control, such as when it must assign the processor to the next process. Execution of a
particular process will be suspended if that process requests an I/O operation, if an interrupt
occurs, or if the process times out.
When a user starts an application program, the operating system's high-level
scheduler (HLS) loads all or part of the program code from secondary storage into memory. It
then creates a data structure in memory called a process control block (PCB) that will be
used to hold information about the process, such as its current status and where in memory it
is located. The operating system also maintains a separate process table in memory that lists
all the user processes currently loaded. When a new process is created, it is given a unique
process identification number (PID) and a new record is created for it in the process table
which includes the address of the process control block in memory. As well as allocating
memory space, loading the process, and creating the necessary data structures, the
operating system must also allocate resources such as access to I/O devices and disk space
if the process requires them. Information about the resources allocated to a process is also
held within the process control block. The operating system's low-level scheduler (LLS) is
responsible for allocating CPU time to each process in turn.
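The creation steps above can be sketched in Python. This is a simplified model, not a real operating system API: the names `ProcessControlBlock`, `ProcessTable`, and `create_process`, and the particular fields chosen, are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    """Illustrative PCB holding a subset of the fields an OS might keep."""
    pid: int
    state: str = "ready"        # current process status
    program_counter: int = 0    # number of the next instruction to execute
    memory_base: int = 0        # starting address of the process in memory
    resources: list = field(default_factory=list)  # allocated I/O devices, disk space, etc.

class ProcessTable:
    """Maps each PID to the PCB created for that process."""
    def __init__(self):
        self._next_pid = 1
        self._table = {}

    def create_process(self, memory_base, resources=()):
        # Assign a unique PID, build the PCB, and record it in the table.
        pid = self._next_pid
        self._next_pid += 1
        self._table[pid] = ProcessControlBlock(
            pid, memory_base=memory_base, resources=list(resources))
        return pid

    def lookup(self, pid):
        return self._table[pid]

table = ProcessTable()
pid = table.create_process(memory_base=0x4000, resources=["disk"])
```

A real process table entry would hold the memory address of the PCB rather than the PCB itself, but the lookup relationship is the same.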

Process states
The simple process state diagram below shows three possible states for a process. They are
shown as ready (the process is ready to execute when a processor becomes
available), running (the process is currently being executed by a processor) and blocked (the
process is waiting for a specific event to occur before it can proceed). The lines connecting
the states represent possible transitions from one state to another. At any instant, a process
will exist in one of these three states. On a single-processor computer, only one process can
be in the running state at any one time. The remaining processes will either
be ready or blocked, and for each of these states there will be a queue of processes waiting
for some event.


A simple three-state process state diagram

Note that certain rules apply here. Processes entering the system must initially go into
the ready state. A process can only enter the running state from the ready state. A process
can normally only leave the system from the running state, although a process in
the ready or blocked state may be aborted by the system (in the event of an error, for
example), or by the user. Although the three-state model shown above is sufficient to
describe the behaviour of processes generally, the model must be extended to allow for other
possibilities, such as the suspension and resumption of a process. For example, the process
may be swapped out of working memory by the operating system's memory manager in order
to free up memory for another process. When a process is suspended, it essentially becomes
dormant until resumed by the system (or by a user). Because a process can be suspended
while it is either ready or blocked, it may also exist in one of two further states - ready
suspended and blocked suspended (a running process may also be suspended, in which
case it becomes ready suspended).


A five-state process state diagram
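The transition rules above can be expressed as a table of legal moves. The exact transition sets vary between textbooks, so the following is one plausible encoding of the five-state model described here, not a definitive one.

```python
# Legal transitions in the five-state model. A process enters the system
# in the ready state and (normally) leaves it from the running state.
TRANSITIONS = {
    "ready":             {"running", "ready suspended"},
    "running":           {"ready", "blocked", "ready suspended"},
    "blocked":           {"ready", "blocked suspended"},
    "ready suspended":   {"ready"},
    "blocked suspended": {"blocked", "ready suspended"},
}

def transition(current, target):
    """Return the new state, or raise if the move is not permitted."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

For example, `transition("blocked", "running")` raises an error, because a blocked process must first become ready before it can be dispatched.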

The queue of ready processes is maintained in priority order, so the next process to execute
will be the one at the head of the ready queue. The queue of blocked processes is typically
unordered, since there is no sure way to tell which of these processes will become unblocked
first (although if several processes are blocked awaiting the same event, they may be
prioritised within that context). To prevent one process from monopolising the processor, a
system timer is started each time a new process starts executing. The process will be
allowed to run for a set period of time, after which the timer generates an interrupt that
causes the operating system to regain control of the processor. The operating system sends
the previously running process to the end of the ready queue, changing its status
from running to ready, and assigns the first process in the ready queue to the processor,
changing its status from ready to running.
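This timer-driven behaviour is the classic round-robin discipline, and can be sketched as a small simulation. The function name and the representation of processes as name/remaining-time pairs are assumptions made for illustration.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate timer-driven dispatch from a ready queue.

    `processes` maps a process name to its remaining execution time.
    The timer interrupt fires after `quantum` time units, and the OS
    moves the timed-out process to the end of the ready queue.
    Returns the order in which processes finish.
    """
    ready = deque(processes.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()    # head of the ready queue runs
        if remaining > quantum:
            # Timed out: running -> ready, back of the queue.
            ready.append((name, remaining - quantum))
        else:
            # Process completes and leaves the system.
            finished.append(name)
    return finished

order = round_robin({"A": 3, "B": 1, "C": 5}, quantum=2)
```

With a quantum of 2, process B (needing only 1 unit) finishes first, then A, then C.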

Process control blocks
The process control block (PCB) maintains information that the operating system needs in
order to manage a process. PCBs typically include information such as the process ID, the
current state of the process (e.g. running, ready, blocked, etc.), the number of the next
program instruction to be executed, and the starting address of the process in memory. The
PCB also stores the contents of various processor registers (the execution context), which
are saved when a process leaves the running state and which are restored to the processor
when the process returns to the running state. When a process makes the transition from one
state to another, the operating system updates the information in its PCB. When the process
is terminated, the operating system removes it from the process table and frees the memory
and any other resources allocated to the process so that they become available to other
processes. The diagram below illustrates the relationship between the process table and the
various process control blocks.


The process table and process control blocks

The changeover from one process to the next is called a context switch. During a context
switch, the processor obviously cannot perform any useful computation, and because of the
frequency with which context switches occur, operating systems must minimise the context-
switching time in order to reduce system overhead. Many processors contain a register that
holds the address of the current PCB, and also provide special purpose instructions for
saving the execution context to the PCB when the process leaves the running state, and
loading it from the PCB into the processor registers when the process returns to
the running state.
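The save-and-restore mechanics of a context switch can be modelled with ordinary dictionaries standing in for the PCB and the processor registers. On real hardware this is done by special-purpose instructions, as noted above; the function names here are illustrative.

```python
def save_context(pcb, registers):
    # Copy the processor registers into the PCB as the process
    # leaves the running state.
    pcb["context"] = dict(registers)
    pcb["state"] = "ready"

def restore_context(pcb, registers):
    # Load the saved execution context back into the processor
    # registers as the process returns to the running state.
    registers.clear()
    registers.update(pcb["context"])
    pcb["state"] = "running"

cpu = {"pc": 120, "sp": 0xFF00, "acc": 7}       # current register contents
pcb_a = {"pid": 1, "state": "running", "context": {}}

save_context(pcb_a, cpu)                         # process A is switched out
cpu.update({"pc": 0, "sp": 0, "acc": 0})         # another process uses the CPU
restore_context(pcb_a, cpu)                      # process A resumes where it left off
```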

Process scheduling
Process scheduling is a major element in process management, since the efficiency with
which processes are assigned to the processor will affect the overall performance of the
system. It is essentially a matter of managing queues, with the aim of minimising delay while
making the most effective use of the processor's time. The operating system carries out four
types of process scheduling:
Long-term (high-level) scheduling
Medium-term scheduling
Short-term (low-level) scheduling
I/O scheduling
The long-term scheduler determines which programs are admitted to the system for
processing, and as such controls the degree of multiprogramming. Before accepting a new
program, the long-term scheduler must first decide whether the processor is able to cope
effectively with another process. The more active processes there are, the smaller the
percentage of the processor's time that can be allocated to each process. The long-term
scheduler may limit the total number of active processes on the system in order to ensure
that each process receives adequate processor time. New processes may subsequently be
created as existing processes are terminated or suspended. If several programs are waiting
for the long-term scheduler, the decision as to which job to admit first might be made on a
first-come-first-served basis, or by using other criteria such as priority, expected
execution time, or I/O requirements.
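A first-come-first-served admission policy with a cap on the degree of multiprogramming might look like the following sketch; the function name and the fixed `max_active` limit are assumptions for illustration.

```python
def admit_jobs(waiting, active, max_active):
    """Long-term (FCFS) scheduling: admit queued jobs until the limit
    on active processes is reached, so each process keeps an adequate
    share of processor time."""
    admitted = []
    while waiting and len(active) + len(admitted) < max_active:
        admitted.append(waiting.pop(0))   # oldest waiting job first
    return admitted

new = admit_jobs(waiting=["job4", "job5", "job6"],
                 active=["job1", "job2"],
                 max_active=4)
```

Here only two of the three waiting jobs are admitted; `job6` must wait until an active process terminates or is suspended.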
Medium-term scheduling is part of the swapping function. The term "swapping" refers to
transferring a process out of main memory and into virtual memory (secondary storage) or
vice-versa. This may occur when the operating system needs to make space for a new
process, or in order to restore a process to main memory that has previously been swapped
out. Any process that is inactive or blocked may be swapped into virtual memory and placed
in a suspend queue until it is needed again, or until space becomes available. The swapped-
out process is replaced in memory either by a new process or by one of the previously
suspended processes.
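The swapping function can be modelled as moving process names between a fixed-size memory and a suspend queue. The slot count and the helper names `swap_out`/`swap_in` are illustrative, not a real memory-manager interface.

```python
from collections import deque

suspend_queue = deque()          # swapped-out (suspended) processes
memory = ["procA", "procB"]      # processes resident in main memory
MEM_SLOTS = 2                    # assumed capacity of main memory

def swap_out(proc):
    # Medium-term scheduler moves an inactive or blocked process
    # out to secondary storage and onto the suspend queue.
    memory.remove(proc)
    suspend_queue.append(proc)

def swap_in():
    # Restore the longest-suspended process when space allows.
    if len(memory) >= MEM_SLOTS:
        raise RuntimeError("no free memory slot")
    proc = suspend_queue.popleft()
    memory.append(proc)
    return proc

swap_out("procA")                # make room...
memory.append("procC")           # ...for a newly admitted process
```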
The task of the short-term scheduler (sometimes referred to as the dispatcher) is to
determine which process to execute next. This will occur each time the currently running
process is halted. A process may cease execution because it requests an I/O operation, or
because it times out, or because a hardware interrupt has occurred. The objectives of short-
term scheduling are to ensure efficient utilisation of the processor and to provide an
acceptable response time to users. Note that these objectives are not always completely
compatible with one another. On most systems, a good user response time is more important
than efficient processor utilisation, and may necessitate switching between processes
frequently, which will increase system overhead and reduce overall processor throughput.


Queuing diagram for scheduling


Threads
A thread is a sub-process that executes independently of the parent process. A process may
spawn several threads, which although they execute independently of each other, are
managed by the parent process and share the same memory space. Most modern operating
systems support threads, which if implemented become the basic unit for scheduling and
execution. If the operating system does not support threads, they must be managed by the
application itself. Threads will be discussed in more detail elsewhere.
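Python's standard `threading` module illustrates the key property described above: threads spawned by one process share its address space, so both workers below can append to the same list. The lock serialises access to the shared structure.

```python
import threading

results = []                 # shared memory: visible to every thread in the process
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:           # serialise updates to the shared list
            results.append((name, i))

# The parent process spawns two threads that execute independently.
t1 = threading.Thread(target=worker, args=("t1", 2))
t2 = threading.Thread(target=worker, args=("t2", 2))
t1.start(); t2.start()
t1.join(); t2.join()         # wait for both threads to finish
```

After both joins, `results` holds four entries, two from each thread, although their interleaving order is not guaranteed.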
