Operating System

Topic 2: PROCESS MANAGEMENT

A process is a program in execution. A process is not an executable program, but rather a running instance of a program. A process contains: an address space, a program, an execution engine (program counter), data, resources, and a process control block (PCB) with a process identifier (PID). Each process views its memory as a contiguous set of logical memory addresses.
1. Processes
1.1. Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process
may be in one of the following states:
- New. The process is being created.
- Running. Instructions are being executed.
- Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
- Ready. The process is waiting to be assigned to a processor.
- Terminated. The process has finished execution.
1.2. Process Control Block
Each process is represented in the operating system by a process control block (PCB). It contains many pieces of information associated with a specific process, including these:
- Process state. The state may be new, ready, running, waiting, halted, and so on.
- Program counter. The counter indicates the address of the next instruction to be executed for this process.
- CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.
- CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
- Memory-management information. This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
- Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
- I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
In brief, the PCB simply serves as the repository for any information that may vary from process to process.
Figure 2: Diagram of process state.
Figure 3: Process control block (PCB).
1.3. Process Creation: the case of Windows and Linux
At system boot time, one user-level process is created. In Unix, this process is called init. In Windows, it is the System Idle Process. This process is the parent or grandparent of all other processes. New child processes are created by another process (the parent process).
- Windows process creation specifies a new program to run when the process is created.
- In UNIX, process creation and management use multiple, fairly simple system calls. This provides extra flexibility. If needed, the parent process may contain the code for the child process to run, so that exec() is not always needed. The child may also set up inter-process communication with the parent, such as with a pipe, before running another program.

By MONTCHIO TABELA

2. Thread
The simple model of a process containing only one thread is referred to
as a classic process. Modern operating systems also support processes
with multiple threads. Each thread of a single process can use the process's global data, together with synchronization resources, to communicate easily with other threads. Multithreaded programs run more efficiently and use fewer resources than a program that creates multiple processes to accomplish the same task. Threads share global data and other resources, but each thread has its own execution engine and a stack for data that is local to each function in the program.
2.1. Thread Libraries
A thread library provides the programmer with an API for creating and
managing threads. There are two primary ways of implementing a thread
library. The first approach is to provide a library entirely in user space with no kernel support.
The second approach is to implement a kernel-level library supported directly by the operating system. In this case, code and data
structures for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call to the
kernel.
Three main thread libraries are in use today: POSIX Pthreads, Windows, and Java. The Windows thread library is a kernel-level
library available on Windows systems. The Java thread API allows threads to be created and managed directly in Java programs.
Because in most instances the JVM is running on top of a host operating system, the Java thread API is generally implemented
using a thread library available on the host system. This means that on Windows systems, Java threads are typically implemented
using the Windows API; UNIX and Linux systems often use Pthreads.
- Pthreads
The C program below demonstrates the basic Pthreads API for constructing a multithreaded program that calculates the summation of a nonnegative integer in a separate thread. In a Pthreads program, separate threads begin execution in a specified function; here it is the runner() function.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum; /* this data is shared by the thread(s) */
void *runner(void *param); /* threads call this function */

int main(int argc, char *argv[])
{
    pthread_t tid;       /* the thread identifier */
    pthread_attr_t attr; /* set of thread attributes */

    if (argc != 2) {
        fprintf(stderr, "usage: a.out <integer value>\n");
        return -1;
    }
    if (atoi(argv[1]) < 0) {
        fprintf(stderr, "%d must be >= 0\n", atoi(argv[1]));
        return -1;
    }
    /* get the default attributes */
    pthread_attr_init(&attr);
    /* create the thread */
    pthread_create(&tid, &attr, runner, argv[1]);
    /* wait for the thread to exit */
    pthread_join(tid, NULL);
    printf("sum = %d\n", sum);
    return 0;
}

/* the thread will begin control in this function */
void *runner(void *param)
{
    int i, upper = atoi(param);
    sum = 0;
    for (i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}
x Windows Threads
The technique for creating threads using the Windows thread library is similar to the Pthreads technique in several ways. We illustrate the Windows thread API in the C program below. Data shared by the separate threads (in this case, Sum) is declared globally (the DWORD data type is an unsigned 32-bit integer). We also define the Summation() function that is to be performed in a separate thread. This function is passed a pointer to void, which Windows defines as LPVOID. The thread performing this function sets the global data Sum to the value of the summation from 0 to the parameter passed to Summation().
#include <windows.h>
#include <stdio.h>

DWORD Sum; /* data is shared by the thread(s) */

/* the thread runs in this separate function */
DWORD WINAPI Summation(LPVOID Param)
{
    DWORD Upper = *(DWORD *)Param;
    for (DWORD i = 0; i <= Upper; i++)
        Sum += i;
    return 0;
}

int main(int argc, char *argv[])
{
    DWORD ThreadId;
    HANDLE ThreadHandle;
    int Param;

    if (argc != 2) {
        fprintf(stderr, "An integer parameter is required\n");
        return -1;
    }
    Param = atoi(argv[1]);
    if (Param < 0) {
        fprintf(stderr, "An integer >= 0 is required\n");
        return -1;
    }
    /* create the thread */
    ThreadHandle = CreateThread(
        NULL,       /* default security attributes */
        0,          /* default stack size */
        Summation,  /* thread function */
        &Param,     /* parameter to thread function */
        0,          /* default creation flags */
        &ThreadId); /* returns the thread identifier */
    if (ThreadHandle != NULL) {
        /* now wait for the thread to finish */
        WaitForSingleObject(ThreadHandle, INFINITE);
        /* close the thread handle */
        CloseHandle(ThreadHandle);
        printf("sum = %d\n", Sum);
    }
}
- Java
The Java program below performs the same summation using the Java thread API.

class Sum {
    private int sum;
    public int getSum() { return sum; }
    public void setSum(int sum) { this.sum = sum; }
}

class Summation implements Runnable {
    private int upper;
    private Sum sumValue;

    public Summation(int upper, Sum sumValue) {
        this.upper = upper;
        this.sumValue = sumValue;
    }

    public void run() {
        int sum = 0;
        for (int i = 0; i <= upper; i++)
            sum += i;
        sumValue.setSum(sum);
    }
}

public class Driver {
    public static void main(String[] args) {
        if (args.length > 0) {
            if (Integer.parseInt(args[0]) < 0)
                System.err.println(args[0] + " must be >= 0.");
            else {
                Sum sumObject = new Sum();
                int upper = Integer.parseInt(args[0]);
                Thread thrd = new Thread(new Summation(upper, sumObject));
                thrd.start();
                try {
                    thrd.join();
                    System.out.println("The sum of " + upper + " is " + sumObject.getSum());
                } catch (InterruptedException ie) { }
            }
        }
        else
            System.err.println("Usage: Summation <integer value>");
    }
}

3. The process scheduler
We define process scheduling as the act of determining which process in the ready state should be moved to the running state; that is, deciding which process should run on the CPU next. The goal of the scheduler is to implement the virtual machine in such a way that the user perceives each process as running on its own computer. Processes can be categorized according to what is limiting their completion.
- I/O-bound processes: these are processes that are mostly waiting for the completion of input or output (I/O). They should be given high priority by the scheduler.
- CPU-bound processes: these are processes that are implementing algorithms with a large number of calculations. They can be expected to hold the CPU for as long as the scheduler will allow. They should be given a lower priority by the scheduler.
Schedulers fall into one of two general categories:
- Non-preemptive scheduling: the currently executing process gives up the CPU voluntarily.
- Preemptive scheduling: the operating system decides to favor another process, preempting the currently executing process.

3.1. Parts of the scheduler

The main functional components of the scheduler are:
- Enqueuer: adds a pointer or reference to the process's process control block (PCB) to the ready queue, which is usually a collection of linked lists.
- Dispatcher: implements the scheduling algorithm to pick the next process to run.
- Context switcher: loads the selected process onto the CPU as the running process.
3.2. Some commonly used scheduling algorithms
Let us consider the following processes with their corresponding CPU times.

- First Come, First Served (FCFS)

The average waiting time is: (0 + 140 + 215 + 535 + 815) / 5 = 341.

- Shortest Job Next Scheduler (SJNS)

- Round-robin scheduler (with a time quantum of 50)
- Multi-level priority scheduler
  - New processes begin in the highest priority queue.
  - A process is rewarded by being moved to a higher priority queue if it voluntarily blocks for I/O before its time quantum expires. Similarly, processes that are preemptively removed are penalized by being put in a lower priority queue.
  - Queues with a lower priority use a longer time quantum.
  - Higher priority queues must be empty before processes from lower priority queues are allowed to run. Thus, it is possible for starvation to occur.

Assignment: Give the advantages and disadvantages of each scheduling algorithm.


4. Interprocess Communication
Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A
process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not
share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes is a cooperating process. If special care is not
taken to correctly coordinate or synchronize access to shared resources, a number of problems can potentially arise.
- Starvation: starvation can occur when multiple processes or threads compete for access to a shared resource. One process may monopolize the resource while others are denied access.
- Deadlock: a deadlock condition can occur when two processes need multiple shared resources at the same time in order to continue. Computer example of deadlock: thread A is waiting to receive data from thread B, and thread B is waiting to receive data from thread A. The two threads are in deadlock because each is waiting for the other and neither continues to execute.
- Data consistency: when shared resources are modified at the same time by multiple processes, data errors or inconsistencies may occur. Sections of a program that might cause these problems are called critical sections. Failure to coordinate access to a critical section is called a race condition, because success or failure depends on the ability of one process to exit the critical section before another process enters it.
The communication between processes can be direct or indirect.
- In direct communication, the exchange of information is from one process to another; it can be unidirectional (one process produces and another consumes) or bidirectional. Each process that wants to communicate must explicitly name the recipient or sender of the communication.
- With indirect communication, messages are sent to and received from mailboxes, ports, or pipes. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed.
5. Synchronization
As processes interact, we need to control the way they interact to avoid interference, deadlock, and starvation problems. Most of the earlier solutions were based on busy waiting, where waiting processes still consume CPU time even though they are waiting.
5.1. The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section.
A solution to the critical-section problem must satisfy the following three requirements:
- Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
- Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
5.2. Mutex Locks
There are two types of locks used to solve the critical-section problem: hardware locks and software locks. The hardware-based solutions are complicated as well as generally inaccessible to application programmers. Instead, operating-system designers build software tools to solve the critical-section problem. The simplest of these tools is the mutex lock. (In fact, the term mutex is short for mutual exclusion.) We use two functions to implement a mutex: acquire() and release().

acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

release() {
    available = true;
}
5.3. Semaphores
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations:
wait() and signal(). The wait() operation was originally termed P (from the Dutch proberen, “to test”); signal() was originally
called V (from verhogen, "to increment"). The definitions of wait() and signal() are as follows:

wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}

signal(S) {
    S++;
}
Topic 3: Memory MANAGEMENT

The task of the memory manager is to ensure that all processes are always able to access their memory. To accomplish this task
requires careful integration between the computer’s hardware and the operating system. When several processes with dynamic
memory needs run on the computer at the same time, it is necessary to reference data with both a logical address and a physical
address. The hardware is responsible for translating the logical addresses into physical addresses in real time, while the operating system is responsible for:
- ensuring that the requested data is in physical memory when needed
- programming the hardware to perform the address translations
1. Memory Management Unit
The memory addresses used by a running program to reference data are logical addresses. The real-time translation to the physical address is performed in hardware by the CPU's Memory Management Unit (MMU). The MMU has two special registers that are accessed by the CPU's control unit. Data to be sent to main memory or retrieved from memory is stored in the Memory Data Register (MDR). The desired logical memory address is stored in the Memory Address Register (MAR). The address translation is also called address binding and uses a memory map that is programmed by the operating system.

Before memory addresses are loaded on to the system bus, they are translated to physical addresses by the MMU.
2. Memory Allocation
The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We
usually want several user processes to reside in memory at the same time. We therefore need to consider how to allocate available
memory to the processes that are in the input queue waiting to be brought into memory.

2.1. Logical Versus Physical Address Space


An address generated by the CPU is commonly referred to as a logical address or virtual address, whereas an address seen by the memory unit (that is, the one loaded into the MAR of the memory) is commonly referred to as a physical address. The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
