Chapter - 2
OPERATING SYSTEMS
BY
K. Venkateswara Rao
Process Concept:
• A question that arises in discussing operating systems involves what to call all the
CPU activities. A batch system executes jobs, whereas a time-shared system has user
programs, or tasks. Even on a single-user system, a user may be able to run several
programs at one time: a word processor, a Web browser, and an e-mail package.
And even if a user can execute only one program at a time, such as on an embedded
device that does not support multitasking, the operating system may need to
support its own internal programmed activities, such as memory management. In
many respects, all these activities are similar, so we call all of them processes.
• The terms job and process are used almost interchangeably in this text. Although
we personally prefer the term process, much of operating-system theory and
terminology was developed during a time when the major activity of operating
systems was job processing. It would be misleading to avoid the use of commonly
accepted terms that include the word job (such as job scheduling) simply because
process has superseded job.
PROCESS:
• Informally, as mentioned earlier, a process is a program in execution. A process is more
than the program code, which is sometimes known as the text section. It also includes
the current activity, as represented by the value of the program counter and the
contents of the processor’s registers. A process generally also includes the process
stack, which contains temporary data (such as function parameters, return addresses,
and local variables), and a data section, which contains global variables. A process may
also include a heap, which is memory that is dynamically allocated during process run
time.
• We emphasize that a program by itself is not a process. A program is a passive entity,
such as a file containing a list of instructions stored on disk (often called an executable
file). In contrast, a process is an active entity, with a program counter specifying the
next instruction to execute and a set of associated resources. A program becomes a
process when an executable file is loaded into memory. Two common techniques for
loading executable files are double-clicking an icon representing the executable file
and entering the name of the executable file on the command line.
Process State
• As a process executes, it changes state. The state of a process is
defined in part by the current activity of that process. A process may
be in one of the following states:
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
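• To make these five states concrete, the sketch below encodes them as a C enumeration. The identifiers are illustrative only; real kernels use their own names and often additional states.

    /* Illustrative encoding of the five process states (names are hypothetical). */
    enum proc_state {
        STATE_NEW,        /* the process is being created                */
        STATE_READY,      /* waiting to be assigned to a processor       */
        STATE_RUNNING,    /* instructions are being executed             */
        STATE_WAITING,    /* waiting for an event such as I/O completion */
        STATE_TERMINATED  /* the process has finished execution          */
    };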
Process Control Block
• Each process is represented in the operating system by a process control block (PCB)—also
called a task control block.
• It contains many pieces of information associated with a specific process, including these:
Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be executed for
this process.
CPU registers: The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus
any condition-code information. Along with the program counter, this state information must be
saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
Memory-management information: This information may include such items as the values of the
base and limit registers and the page tables, or the segment tables, depending on the memory
system used by the operating system.
Accounting information: This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
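• As a rough picture of how these fields fit together, the following is a minimal, hypothetical PCB declared as a C structure (it reuses the enum proc_state sketched above). Real kernels, such as Linux with its task_struct, keep far more information and use architecture-specific register layouts.

    /* Simplified, hypothetical process control block; real PCBs are far larger. */
    struct pcb {
        int              pid;              /* process identifier                  */
        enum proc_state  state;            /* new, ready, running, waiting, ...   */
        void            *program_counter;  /* address of the next instruction     */
        unsigned long    registers[16];    /* saved CPU registers (arch-specific) */
        int              priority;         /* CPU-scheduling information          */
        void            *page_table;       /* memory-management information       */
        unsigned long    cpu_time_used;    /* accounting information              */
        int              open_files[16];   /* I/O status: open file descriptors   */
        struct pcb      *next;             /* link to the next PCB in a queue     */
    };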
THREADS:
• The process model discussed so far has implied that a process is a program that
performs a single thread of execution.
• For example, when a process is running a word-processor program, a single
thread of instructions is being executed. This single thread of control allows the
process to perform only one task at a time.
• The user cannot simultaneously type in characters and run the spell checker
within the same process, for example. Most modern operating systems have
extended the process concept to allow a process to have multiple threads of
execution and thus to perform more than one task at a time. This feature is
especially beneficial on multicore systems, where multiple threads can run in
parallel.
• On a system that supports threads, the PCB is expanded to include information
for each thread. Other changes throughout the system are also needed to
support threads.
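• As a minimal illustration of one process running two threads of execution, the POSIX-threads sketch below starts a second thread alongside main(); the worker name spell_check is made up for the example (compile with -lpthread).

    /* Minimal POSIX-threads sketch: one process, two threads of execution. */
    #include <pthread.h>
    #include <stdio.h>

    static void *spell_check(void *arg)        /* runs concurrently with main() */
    {
        printf("spell checker running in its own thread\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, spell_check, NULL);  /* create second thread */
        printf("main thread keeps handling user input\n");
        pthread_join(tid, NULL);                        /* wait for the thread  */
        return 0;
    }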
Process scheduling:
• The objective of multiprogramming is to have some process running
at all times, to maximize CPU utilization. The objective of time sharing
is to switch the CPU among processes so frequently that users can
interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available
process (possibly from a set of several available processes) for
program execution on the CPU. For a single-processor system, there
will never be more than one running process. If there are more
processes, the rest will have to wait until the CPU is free and can be
rescheduled.
Scheduling Queues
• As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue. This queue
is generally stored as a linked list. A ready-queue header contains pointers to the
first and final PCBs in the list.
• Each PCB includes a pointer field that points to the next PCB in the ready queue.
The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the
occurrence of a particular event, such as the completion of an I/O request.
Suppose the process makes an I/O request to a shared device, such as a disk. Since
there are many processes in the system, the disk may be busy with the I/O request
of some other process. The process therefore may have to wait for the disk. The
list of processes waiting for a particular I/O device is called a device queue.
• A common representation of process scheduling is a queueing diagram. In such a
diagram, each rectangular box represents a queue. Two types of queues are present:
the ready queue and a set of device queues. The circles
represent the resources that serve the queues, and the arrows indicate the flow
of processes in the system. A new process is initially put in the ready queue. It
waits there until it is selected for execution, or dispatched. Once the process is
allocated the CPU and is executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.
• In the first two cases, the process eventually switches from the waiting state to the
ready state and is then put back in the ready queue. A process continues this cycle
until it terminates, at which time it is removed from all queues and has its PCB and
resources deallocated.
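• Since the ready queue is described as a linked list of PCBs whose header points to the first and final entries, a minimal sketch of the enqueue and dequeue operations (reusing the hypothetical struct pcb above) might look like this:

    /* Minimal linked-list ready queue built from the hypothetical struct pcb above. */
    struct ready_queue {
        struct pcb *head;   /* first PCB in the ready queue */
        struct pcb *tail;   /* final PCB in the ready queue */
    };

    /* Append a PCB at the tail: the process becomes ready. */
    static void enqueue(struct ready_queue *q, struct pcb *p)
    {
        p->next = NULL;
        if (q->tail)
            q->tail->next = p;
        else
            q->head = p;            /* the queue was empty */
        q->tail = p;
    }

    /* Remove the PCB at the head: the process is dispatched to the CPU. */
    static struct pcb *dequeue(struct ready_queue *q)
    {
        struct pcb *p = q->head;
        if (p) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;     /* the queue is now empty */
        }
        return p;
    }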
Schedulers:
• A process migrates among the various scheduling queues throughout
its lifetime. The operating system must select, for scheduling
purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.
• In a batch system, more processes are often submitted than can be executed
immediately; such processes are spooled to a mass-storage device (typically a disk),
where they are kept for later execution. The long-term scheduler, or job scheduler,
selects processes from this pool and loads them into memory for execution. The
short-term scheduler, or CPU scheduler, selects from among the processes that
are ready to execute and allocates the CPU to one of them.
Context Switch
• Interrupts cause the operating system to change a CPU from its current task and to run a
kernel routine. Such operations happen frequently on general-purpose systems.
• When an interrupt occurs, the system needs to save the current context of the process
running on the CPU so that it can restore that context when its processing is done,
essentially suspending the process and then resuming it.
• The context is represented in the PCB of the process. It includes the value of the CPU
registers, the process state, and memory-management information.
Generically, we perform a state save of the current state of the CPU, be it in kernel or user
mode, and then a state restore to resume operations.
• Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch.
When a context switch occurs, the kernel saves the context of the old process in its PCB and
loads the saved context of the new process scheduled to run.
• Context-switch time is pure overhead, because the system does no useful work while
switching. Switching speed varies from machine to machine, depending on the memory
speed, the number of registers that must be copied, and the existence of special instructions
(such as a single instruction to load or store all registers).
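• The register save and restore of a context switch is architecture-specific and is normally written in assembly, but its control flow can be sketched in C. The helpers save_cpu_state() and load_cpu_state() below are hypothetical stand-ins, not a real kernel API.

    /* Conceptual context-switch sketch using the hypothetical struct pcb above.
       In a real kernel the two helpers are architecture-specific assembly. */
    static void save_cpu_state(struct pcb *p) { /* save registers and PC into p   */ }
    static void load_cpu_state(struct pcb *p) { /* reload registers and PC from p */ }

    void context_switch(struct pcb *old_proc, struct pcb *new_proc)
    {
        save_cpu_state(old_proc);          /* state save of the current process  */
        old_proc->state = STATE_READY;     /* or STATE_WAITING, depending on why */

        new_proc->state = STATE_RUNNING;
        load_cpu_state(new_proc);          /* state restore of the next process  */
    }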
Operations on Processes
• The processes in most systems can execute concurrently, and they
may be created and deleted dynamically. Thus, these systems must
provide a mechanism for process creation and termination.
Process Creation
• During the course of execution, a process may create several new
processes. As mentioned earlier, the creating process is called a
parent process, and the new processes are called the children of that
process. Each of these new processes may in turn create other
processes, forming a tree of processes.
• Most operating systems (including UNIX, Linux, and Windows) identify
processes according to a unique process identifier (or pid), which is
typically an integer number. The pid provides a unique value for each
process in the system, and it can be used as an index to access various
attributes of a process within the kernel.
• When a process creates a new process, two possibilities for execution
exist:
• 1. The parent continues to execute concurrently with its children.
• 2. The parent waits until some or all of its children have terminated.
• There are also two address-space possibilities for the new process:
• 1. The child process is a duplicate of the parent process (it has the
same program and data as the parent).
• 2. The child process has a new program loaded into it.
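• On UNIX-like systems these possibilities map onto the fork(), exec(), and wait() system calls. The sketch below creates a child that loads a new program (/bin/ls is used purely as an example) while the parent waits for it to terminate.

    /* UNIX process creation: fork() duplicates the parent, the child loads a
       new program with execlp(), and the parent waits for the child to finish. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                /* create a child process */

        if (pid < 0) {                     /* fork failed */
            perror("fork");
            return 1;
        } else if (pid == 0) {             /* child: load a new program */
            execlp("/bin/ls", "ls", NULL);
            perror("execlp");              /* reached only if exec fails */
            exit(1);
        } else {                           /* parent: wait for the child */
            wait(NULL);
            printf("child %d has terminated\n", (int)pid);
        }
        return 0;
    }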
Process Termination
• A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call. At that point, the
process may return a status value (typically an integer) to its parent process (via
the wait() system call).
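• A minimal sketch of how a parent retrieves that status value: the child passes an integer to exit(), and the parent decodes it with wait() and the WEXITSTATUS macro.

    /* Parent retrieving a child's exit status via wait(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            exit(7);                        /* child: return status 7 to the parent */

        int status;
        wait(&status);                      /* parent: collect the status value */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }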
• All the resources of the process—including physical and virtual memory, open
files, and I/O buffers—are deallocated by the operating system. Termination can
occur in other circumstances as well. A process can cause the termination of
another process via an appropriate system call (for example, TerminateProcess()
in Windows).
• Usually, such a system call can be invoked only by the parent of the process that
is to be terminated. Otherwise, users could arbitrarily kill each other’s jobs. Note
that a parent needs to know the identities of its children if it is to terminate
them. Thus, when one process creates a new process, the identity of the newly
created process is passed to the parent.
• A parent may terminate the execution of one of its children for a
variety of reasons, such as these:
• The child has exceeded its usage of some of the resources that it has
been allocated. (To determine whether this has occurred, the parent
must have a mechanism to inspect the state of its children.)
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child
to continue if its parent terminates.
QUESTIONS
• 1. Describe process concepts and operations on processes in detail.
• 2. Write about interprocess communication in detail.
• 3. Write about thread scheduling in detail.
• 4. Explain process scheduling in detail.
• 5. Write about any two scheduling algorithms in detail.