PROCESS MANAGEMENT-Process-Part1

The document provides an overview of process management, detailing the definition of a process, its memory layout, and the various sections (text, data, heap, stack) involved in a program's execution. It explains the states of a process (new, running, waiting, ready, terminated) and the role of the Process Control Block (PCB) in managing process information. Additionally, it discusses multithreading, process scheduling, and the organization of scheduling queues for efficient CPU utilization.

PROCESS MANAGEMENT

Process
• A process is a program in execution. A process will need certain
resources—such as CPU time, memory, files, and I/O devices— to
accomplish its task. These resources are typically allocated to the
process while it is executing.
• A process is the unit of work in most systems. Systems consist of a
collection of processes: operating-system processes execute system
code, and user processes execute user code. All these processes may
execute concurrently.
The Process
• A process is a program in execution.
• The status of the current activity of a process is represented by the
value of the program counter and the contents of the processor’s
registers.
• The memory layout of a process is typically divided into multiple sections:
Figure 3.1 Layout of a process in memory
• Text section— the executable
code
• Data section—global variables
• Heap section—memory that is
dynamically allocated during
program run time
• Stack section— temporary data
storage when invoking functions
(such as function parameters,
return addresses, and local
variables)
MEMORY LAYOUT OF A C PROGRAM
• The figure shown below illustrates the layout of a C program in
memory, highlighting how the different sections of a process relate to
an actual C program. This figure is similar to the general concept of a
process in memory.
MEMORY LAYOUT OF A C PROGRAM
• The global data section is divided into different sections for (a) initialized
data and (b) uninitialized data.
• A separate section is provided for the argc and argv parameters passed to
the main() function.
• The GNU size command can be used to determine the size (in bytes) of
some of these sections. Assuming the name of the executable file of the
above C program is memory, the following is the output generated by
entering the command size memory:

• The data field refers to initialized data, and bss refers to uninitialized data.
(bss is a historical term referring to block started by symbol.) The dec and
hex values are the sum of the three sections represented in decimal and
hexadecimal, respectively.
Consider the following program.
Text Section
• What it contains:
• The machine code for the program's executable instructions (main,
demoFunction, and printf).
• Example from the program: The compiled binary code of printf, main,
and demoFunction resides in the Text Section.
Data Section
• What it contains:
• All global and static variables that are initialized explicitly.
• Examples from the program:
• int globalVar = 42; resides in the Initialized Data Segment.
• static int staticVar; is in the BSS Segment because it's uninitialized (default
value: 0).
• Divided into:
• Initialized Data Segment: For variables like globalVar.
• BSS Segment: For variables like staticVar
Heap Section
• What it contains:
• Dynamically allocated memory created during runtime using functions like
malloc or calloc.
• Example from the program:
• int* dynamicMemory = (int*)malloc(sizeof(int) * 5);
This memory is allocated on the Heap Section, and its size depends on the
malloc call during runtime.
• The programmer manages allocation and deallocation (e.g., free(dynamicMemory) in the
program).
• Grows upward in memory.
• Improper management can lead to memory leaks.
Stack Section
• What it contains:
• Function parameters, return addresses, and local variables.
• Examples from the program:
• int param and int localVar in demoFunction are stored on the Stack Section.
• Automatically managed (memory is allocated when a function is called and deallocated when the
function returns).
• Grows downward in memory.
• Limited in size; excessive usage (e.g., deep recursion) can cause a stack overflow.
Memory Summary with Program Example
Section | Contents in the Program                          | Purpose
Text    | main, demoFunction, and printf                   | Stores executable code of functions.
Data    | globalVar (Initialized Data) and staticVar (BSS) | Stores global/static variables (initialized and uninitialized).
Heap    | malloc(sizeof(int) * 5)                          | Stores dynamically allocated memory during runtime.
Stack   | param, localVar in demoFunction                  | Stores temporary data for function calls (parameters, locals).
The Process
• The sizes of the text and data sections are fixed, as their sizes do not
change during program run time.
• However, the stack and heap sections can shrink and grow dynamically
during program execution.
• Each time a function is called, an activation record containing function
parameters, local variables, and the return address is pushed onto the
stack; when control is returned from the function, the activation record is
popped from the stack.
• Similarly, the heap will grow as memory is dynamically allocated, and will
shrink when memory is returned to the system.
• Although the stack and heap sections grow toward one another, the
operating system must ensure they do not overlap one another.
Program and Process
• A program by itself is not a process. A program is a passive entity,
such as a file containing a list of instructions stored on disk (often
called an executable file ).
• A process is an active entity, with a program counter specifying the
next instruction to execute and a set of associated resources.
• A program becomes a process when an executable file is loaded into
memory.
• Two common techniques for loading executable files are double-clicking an icon representing the
executable file and entering the name of the executable file on the command line (as in prog.exe or
a.out).
Process
• Even if two processes are running the same program (using the same set of
instructions or code), they are treated as separate processes. Each process
has its own independent execution, memory, and resources.
• For instance, different users running the same program: Imagine several
people checking their email. Each person uses the same email program,
but they have their own separate sessions and email accounts. These are
independent processes, even though the program (email software) is the
same.
• Same user running multiple instances of a program: Think of a person
opening several browser windows or tabs. Each tab or window is a
separate process, even though they all use the same browser program.
• Each of these is a separate process; and although the text sections are
equivalent, the data, heap, and stack sections vary.
Process State
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
• These names are arbitrary, and they vary across operating systems. The
states that they represent are found on all systems, however. Certain
operating systems also more finely delineate process states. It is important
to realize that only one process can be running on any processor core at
any instant. Many processes may be ready and waiting.
Memory Sections in This Program
• Text Section:
• Contains the executable machine code for functions like main, printf, scanf, and malloc.
• Example: The compiled binary of printf and main.
• Data Section:
• Stores global variables.
• Example: int globalSum; resides here (being uninitialized, it sits in the BSS part of the data section).
• Heap Section:
• Dynamically allocated memory during runtime.
• Example: int* result = (int*)malloc(sizeof(int));
• The memory allocated by malloc resides in the heap section.
• Stack Section:
• Stores local variables, function parameters, and return addresses.
• Example: int num1 and int num2 in main reside in the stack section.
Process States with Example
• New:
• When the program is executed (./a.out), the process is created but not yet ready for execution.
• Ready:
• The operating system loads the program into memory, and it waits in the ready queue for CPU time.
• Running:
• When the CPU starts executing the program:
• It executes printf("Enter two numbers: ");.
• Local variables num1 and num2 are created on the stack.
• Waiting:
• During scanf("%d %d", &num1, &num2);, the process moves to the waiting state for user input.
• Running:
• After input is provided, the process resumes and calculates the sum (*result = num1 + num2;), storing it in globalSum.
• The printf("Sum = %d\n", globalSum); statement displays the result.
• Terminated:
• The program completes execution (return 0;), and the process is terminated.
Summary of Process States
State Description
New Process created when ./a.out is executed.
Ready Loaded into memory and waiting for CPU time.
Running Executes printf to display "Enter two numbers: ".
Waiting Stalls during scanf while waiting for user input (10 and 20).
Running Resumes execution, calculates the sum, and displays "Sum = 30".
Terminated Completes execution and exits with return 0;.
Process Control Block
• Each process is represented in the operating system by a process control
block (PCB)—also called a task control block. It contains many pieces of
information associated with a specific process, including:
• Process state. The state may be new, ready, running, waiting, halted, and
so on.
• Program counter. The counter indicates the address of the next instruction
to be executed for this process.
• CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers, and general-purpose registers, plus any condition-code
information. Along with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward when it is rescheduled to run.
Process Control Block
• CPU-scheduling information. This information
includes a process priority, pointers to
scheduling queues, and any other scheduling
parameters.
• Memory-management information. This
information may include such items as the
value of the base and limit registers and the
page tables, or the segment tables,
depending on the memory system used by
the operating system.
• Accounting information. This information
includes the amount of CPU and real time
used, time limits, account numbers, job or
process numbers, and so on.
• I/O status information. This information
includes the list of I/O devices allocated to
the process, a list of open files, and so on.
Process Control Block
• The PCB simply serves as the repository for all the data needed to
start, or restart, a process, along with some accounting data.
Consider the following program.
Process Control Block
• When this program runs, the operating system (OS) creates a process for it.
The process gets a Process Control Block (PCB), which is like a notebook
where the OS keeps track of everything the program is doing. Let’s map the
C program to the components of the PCB:
• Process ID (PID):
• Every process is assigned a unique ID. When you run the program, the OS assigns a
PID to this process, say PID = 1234.
• Program Counter (PC):
• The PC stores the address of the next instruction to be executed in the program.
• For example, when the program is at scanf("%d", &num1);, the PC points to that
instruction. After executing it, the PC moves to the next instruction (scanf("%d", &num2);).
Process Control Block
• Process State:
• The process can be in one of the following states:
• Running: Executing the current instruction (e.g., reading input).
• Ready: Waiting for CPU time to execute the next instruction.
• Waiting: Waiting for an event like user input (e.g., waiting for scanf to get the number).
• Registers:
• The CPU registers temporarily hold data during execution. For example:
• When num1 is entered, it is first stored in a register before being saved in memory.
• During the addition (sum = num1 + num2), the CPU uses registers to perform the calculation.
Process Control Block
• Memory Information:
• The PCB contains details about memory allocation for the process. For this
program:
• num1, num2, and sum are stored in the stack segment.
• The program’s instructions (code) are in the code segment.
• The OS tracks which memory locations belong to this process.
• I/O Information:
• The PCB keeps track of the I/O being used by the process.
• In this case:
• Input: Keyboard (for scanf).
• Output: Screen (for printf).
Process Control Block
• Other Information (optional):
• Priority: If multiple processes are running, the OS might prioritize one process
over others.
• Parent Process ID: The process that started this program (likely the terminal
or IDE).
Let’s walk through the program step by step
and see what the PCB does:
Step What PCB Tracks

Program starts Assigns a PID (e.g., 1234), sets state to "Running," initializes PC to point to the first instruction.

Input num1 State switches to "Waiting" while waiting for user input. PC points to scanf("%d", &num1);.

Input num2 Similar to above; state switches to "Waiting," then back to "Running" after input is entered.

Calculate sum Registers hold num1 and num2, and the calculation result is stored in memory. PC moves to the next step.

Print result PCB updates I/O details (output to the screen).

Program ends PCB is removed, and the OS frees all resources used by the process.
Threads
• The process model discussed so far has implied that a process is a program that performs
a single thread of execution.
• For example, when a process is running a word-processor program, a single thread of
instructions is being executed. This single thread of control allows the process to perform
only one task at a time. Thus, the user cannot simultaneously type in characters and run
the spell checker.
• Most modern operating systems have extended the process concept to allow a process
to have multiple threads of execution and thus to perform more than one task at a time.
• This feature is especially beneficial on multicore systems, where multiple threads can run
in parallel.
• A multithreaded word processor could, for example, assign one thread to manage user
input while another thread runs the spell checker.
• On systems that support threads, the PCB is expanded to include information for each
thread. Other changes throughout the system are also needed to support threads.
Process Scheduling
• The objective of multiprogramming is to have some process running
at all times so as to maximize CPU utilization.
• The objective of time sharing is to switch a CPU core among processes
so frequently that users can interact with each program while it is
running.
• To meet these objectives, the process scheduler selects an available
process (possibly from a set of several available processes) for
program execution on a core. Each CPU core can run one process at a
time
Process Scheduling
• For a system with a single CPU core, there will never be more than one process
running at a time, whereas a multicore system can run multiple processes at one
time.
• If there are more processes than cores, excess processes will have to wait until a
core is free and can be rescheduled.
• The number of processes currently in memory is known as the degree of
multiprogramming. Balancing the objectives of multiprogramming and time
sharing also requires taking the general behavior of a process into account.
• In general, most processes can be described as either I/O bound or CPU bound.
An I/O-bound process is one that spends more of its time doing I/O than it spends
doing computations. A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
Scheduling Queues
• As processes enter the system, they are put into a job queue, which consists of all processes in the system.
• The processes residing in main memory that are ready and waiting to execute are kept on a list called the ready queue.
• This queue is generally stored as a linked list. A ready-queue header
contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready
queue.
• When a process is allocated the CPU, it executes for a while and
eventually quits, is interrupted, or waits for the occurrence of a
particular event, such as completion of an I/O request.
Ready Queue And Various I/O Device Queues
Scheduling Queues
• The list of processes waiting for a particular I/O device is called a device
queue.
• A common representation of process scheduling is a queuing diagram. Two
types of queues are present: the ready queue and a set of wait queues.
• The circles represent the resources that serve the queues, and the arrows
indicate the flow of processes in the system.
• A new process is initially put in the ready queue. It waits there until it is
selected for execution, or dispatched.
• Once the process is allocated a CPU core and is executing, one of several
events could occur:
Representation of Process Scheduling
Queuing diagram represents queues, resources, flows
Scheduling Queues
1. The process could issue an I/O request and then be placed in an I/O wait
queue.
2. The process could create a new child process and then be placed in a
wait queue while it awaits the child’s termination.
3. The process could be removed forcibly from the core, as a result of an
interrupt or having its time slice expire, and be put back in the ready
queue.
• In the first two cases, the process eventually switches from the waiting
state to the ready state and is then put back in the ready queue. A process
continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.
Summary of Process Scheduling
• Maximize CPU use, quickly switch processes onto CPU for time
sharing
• Process scheduler selects among available processes for next
execution on CPU
• Maintains scheduling queues of processes
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and
waiting to execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
Schedulers
• A process migrates among the various scheduling queues throughout
its lifetime. The operating system must select, for scheduling
purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.
• In a batch system, more processes are submitted than can be
executed immediately. These processes are spooled to a mass-storage
device (typically a disk), where they are kept for later execution.
• The long-term scheduler, or job scheduler, selects processes from
this pool and loads them into memory for execution. The short-term
scheduler, or CPU scheduler, selects from among the processes that
are ready to execute and allocates the CPU to one of them.
Schedulers
• The primary distinction between these two schedulers lies in
frequency of execution.
• The short-term scheduler must select a new process for the CPU
frequently.
• A process may execute for only a few milliseconds before waiting for
an I/O request. Often, the short-term scheduler executes at least once
every 100 milliseconds. Because of the short time between
executions, the short-term scheduler must be fast. If it takes 10
milliseconds to decide to execute a process for 100 milliseconds, then
10 / (100 + 10) ≈ 9 percent of the CPU is being used (wasted) simply
for scheduling the work.
Schedulers
• The long-term scheduler executes much less frequently; minutes may
separate the creation of one new process and the next. The long-term
scheduler controls the degree of multiprogramming, that is, the number of processes in memory.
• If the degree of multiprogramming is stable, then the average rate of
process creation must be equal to the average departure rate of
processes leaving the system. Thus, the long-term scheduler may
need to be invoked only when a process leaves the system. Because
of the longer interval between executions, the long-term scheduler
can afford to take more time to decide which process should be
selected for execution.
Schedulers
• It is important that the long-term scheduler make a careful selection.
• In general, most processes can be described as either I/O bound or CPU bound.
• An I/O-bound process is one that spends more of its time doing I/O than it spends
doing computations. A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
• It is important that the long-term scheduler select a good process mix of I/O-bound
and CPU-bound processes.
• If all processes are I/O bound, the ready queue will almost always be empty, and
the short-term scheduler will have little to do. If all processes are CPU bound, the
I/O waiting queue will almost always be empty, devices will go unused, and again
the system will be unbalanced. The system with the best performance will thus
have a combination of CPU-bound and I/O-bound processes.
Schedulers
• On some systems, the long-term scheduler may be absent or minimal. For example,
time-sharing systems such as UNIX and Microsoft Windows systems often have no long-
term scheduler but simply put every new process in memory for the short-term
scheduler.
• Some operating systems, such as time-sharing systems, may introduce an additional,
intermediate level of scheduling: the medium-term scheduler.
• The key idea behind a medium-term scheduler is that sometimes it can be advantageous
to remove processes from memory and thus reduce the degree of multiprogramming.
• Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
• The process is swapped out, and is later swapped in, by the medium-term scheduler.
• Swapping may be necessary to improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring memory to be freed up.
Addition of Medium Term Scheduling
• Medium-term scheduler can be added if degree of multiple
programming needs to decrease
• Remove process from memory, store on disk, bring back in from disk to
continue execution: swapping
Summary Schedulers
• Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU
• Sometimes the only scheduler in a system
• Short-term scheduler is invoked frequently (milliseconds), so it must be fast
• Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
• Long-term scheduler is invoked infrequently (seconds, minutes), so it may be slow
• The long-term scheduler controls the degree of multiprogramming
• Processes can be described as either:
• I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
• CPU-bound process – spends more time doing computations; few very long CPU bursts
• Long-term scheduler strives for good process mix
• Medium-term scheduler can be added if degree of multiple programming needs
to decrease
• Remove process from memory, store on disk, bring back in from disk to continue execution:
swapping
Context Switch
• Interrupts cause the operating system to change a CPU core from its
current task and to run a kernel routine.
• Such operations happen frequently on general-purpose systems. When an
interrupt occurs, the system needs to save the current context of the
process running on the CPU core so that it can restore that context when
its processing is done, essentially suspending the process and then
resuming it.
• The context is represented in the PCB of the process. It includes the value
of the CPU registers, the process state, and memory-management
information.
• Generically, we perform a state save of the current state of the CPU core,
be it in kernel or user mode, and then a state restore to resume
operations.
Context Switch
• Switching the CPU core to another process requires performing a
state save of the current process and a state restore of a different
process. This task is known as a context switch.
• When a context switch occurs, the kernel saves the context of the old
process in its PCB and loads the saved context of the new process
scheduled to run.
• Context switch time is pure overhead, because the system does no
useful work while switching.
CPU Switch From Process to Process
Context Switch
• Switching speed varies from machine to machine, depending on the
memory speed, the number of registers that must be copied, and the
existence of special instructions (such as a single instruction to load or
store all registers). A typical speed is several microseconds.
• The more complex the operating system, the greater the amount of
work that must be done during a context switch.
Summary Context Switch
• When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process
via a context switch
• Context of a process represented in the PCB
• Context-switch time is overhead; the system does no useful work
while switching
• The more complex the OS and the PCB, the longer the context switch
• Time dependent on hardware support
• Some hardware provides multiple sets of registers per CPU, allowing multiple
contexts to be loaded at once
Multitasking in Mobile Systems
• Because of the constraints imposed on mobile devices, early versions
of iOS did not provide user-application multitasking; only one
application ran in the foreground while all other user applications
were suspended.
• Operating system tasks were multitasked because they were written
by Apple and well behaved.
• However, beginning with iOS 4, Apple provided a limited form of
multitasking for user applications, thus allowing a single foreground
application to run concurrently with multiple background
applications.
Multitasking in Mobile Systems
• On a mobile device, the foreground application is the application currently
open and appearing on the display. The background application remains in
memory, but does not occupy the display screen.
• The iOS 4 programming API provided support for multitasking, thus
allowing a process to run in the background without being suspended.
However, it was limited and only available for a few application types.
• As hardware for mobile devices began to offer larger memory capacities,
multiple processing cores, and greater battery life, subsequent versions of
iOS began to support richer functionality for multitasking with fewer
restrictions.
• For example, the larger screen on iPad tablets allowed running two
foreground apps at the same time, a technique known as split-screen.
Multitasking in Mobile Systems
• Since its origins, Android has supported multitasking and does not place
constraints on the types of applications that can run in the background.
• If an application requires processing while in the background, the
application must use a service, a separate application component that runs
on behalf of the background process.
• Consider a streaming audio application: if the application moves to the
background, the service continues to send audio data to the audio device
driver on behalf of the background application.
• In fact, the service will continue to run even if the background application
is suspended.
• Services do not have a user interface and have a small memory footprint,
thus providing an efficient technique for multitasking in a mobile environment.
Summary Multitasking in Mobile Systems
• Some mobile systems (e.g., early version of iOS) allow only one process to
run, others suspended
• Due to screen real estate and user-interface limits, iOS provides for a
• Single foreground process – controlled via user interface
• Multiple background processes– in memory, running, but not on the display, and
with limits
• Limits include single, short task, receiving notification of events, specific long-running
tasks like audio playback
• Android runs foreground and background, with fewer limits
• Background process uses a service to perform tasks
• Service can keep running even if background process is suspended
• Service has no user interface, small memory use
