MGM’s COLLEGE OF ENGINEERING & TECHNOLOGY, NOIDA


DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING/
INFORMATION TECHNOLOGY
Operating System Lab (RCS-451)
LIST OF PROGRAMS

Program No. NAME OF THE EXPERIMENT


Prog.-1 Simulation of the CPU scheduling algorithm Non-Preemptive FCFS
Prog.-2 Simulation of the CPU scheduling algorithm Non-Preemptive SJF
Prog.-3 Simulation of the CPU scheduling algorithm Non-Preemptive Priority
Prog.-4 Simulation of the CPU scheduling algorithm Non-Preemptive Round-Robin
Prog.-5 Implementation of Process Synchronization (Producer/Consumer Problem)
Prog.-6 Simulation of Critical Section Problem using Dekker's Algorithm
Prog.-7 Simulation of Deadlock Prevention and Avoidance Problem through Banker's Algorithm
Prog.-8 Simulation of FIFO Page Replacement Algorithm
Prog.-9 Simulation of LRU Page Replacement Algorithm
Prog-10 Simulation of Optimal Page Replacement Algorithm
BEYOND SYLLABUS
Prog.-11 Synchronization of POSIX thread using MUTEX Variable
Prog-12 Synchronization of POSIX thread using CONDITION Variables

Faculty Incharge


Program No: 1
TITLE: Simulation of the CPU scheduling algorithm
Non-Preemptive FCFS

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the CPU scheduling algorithm Non-Preemptive FCFS.

1.2 THEORY & LOGIC:


The most important aspect of job scheduling is the ability to create a multi-tasking
environment. A single user cannot keep either the CPU or the I/O devices busy at all
times. Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has something to execute. To have several jobs ready to run, the system must
keep all of them in memory at the same time for their selection one-by-one.

FCFS: With this scheme, the process that requests the CPU first is allocated the CPU
first. The implementation of the FCFS policy is easily managed with a FIFO queue.

1.3 CODING:

/* FIRST COME FIRST SERVED SCHEDULING ALGORITHM*/


//PREPROCESSOR DIRECTIVES
#include<stdio.h>
#include<conio.h>
#include<string.h>
//GLOBAL VARIABLES - DECLARATION
int n,Bu[20],Twt,Ttt,A[10],Wt[10],w;
float Awt,Att;
char pname[20][20],c[20][20];
//FUNCTION DECLARATIONS
void Getdata();
void Gantt_chart();
void Calculate();
void fcfs();
//GETTING THE NUMBER OF PROCESSES AND THE BURST TIME AND ARRIVAL TIME FOR EACH PROCESS
void Getdata()
{
int i;
printf("\n Enter the number of processes: ");

scanf("%d",&n);
for(i=1;i<=n;i++)
{
fflush(stdin);
printf("\n\n Enter the process name: ");
scanf("%s",&pname[i]);
printf("\n Enter The BurstTime for Process %s =",pname[i]);
scanf("%d",&Bu[i]);
printf("\n Enter the Arrival Time for Process %s =",pname[i]);
scanf("%d",&A[i]);
}
}
//DISPLAYING THE GANTT CHART
void Gantt_chart()
{
int i;
printf("\n\n\t\t\tGANTT CHART\n");
printf("\n-----------------------------------------------------------\n");
for(i=1;i<=n;i++)
printf("|\t%s\t",pname[i]);
printf("|\t\n");
printf("\n-----------------------------------------------------------\n");
printf("\n");
for(i=1;i<=n;i++)
printf("%d\t\t",Wt[i]);
printf("%d",Wt[n]+Bu[n]);
printf("\n-----------------------------------------------------------\n");
printf("\n");
}
//CALCULATING AVERAGE WAITING TIME AND AVERAGE TURN AROUND TIME
void Calculate()
{
int i;
//For the 1st process
Wt[1]=0;
for(i=2;i<=n;i++)
{
Wt[i]=Bu[i-1]+Wt[i-1];
}
for(i=1;i<=n;i++)

{
Twt=Twt+(Wt[i]-A[i]);
Ttt=Ttt+((Wt[i]+Bu[i])-A[i]);
}
Att=(float)Ttt/n;
Awt=(float)Twt/n;
printf("\n\n Average Turn around time=%3.2f ms ",Att);
printf("\n\n AverageWaiting Time=%3.2f ms",Awt);
}
//FCFS Algorithm
void fcfs()
{
int i,j,temp, temp1;
Twt=0;
Ttt=0;
printf("\n\n FIRST COME FIRST SERVED ALGORITHM\n\n");
for(i=1;i<=n;i++)
{
for(j=i+1;j<=n;j++)
{
if(A[i]>A[j])
{
temp=Bu[i];
temp1=A[i];
Bu[i]=Bu[j];
A[i]=A[j];
Bu[j]=temp;
A[j]=temp1;
strcpy(c[i],pname[i]);
strcpy(pname[i],pname[j]);
strcpy(pname[j],c[i]);
}
}
}
Calculate();
Gantt_chart();
}
void main()
{
int ch;
clrscr();
Getdata();

fcfs();
getch();
}

1.4 OUTPUT:

Enter the number of processes: 3


Enter the process name: u
Enter The BurstTime for Process u = 14
Enter the Arrival Time for Process u = 0
Enter the process name: y
Enter The BurstTime for Process y = 8
Enter the Arrival Time for Process y = 1
Enter the process name: r
Enter The BurstTime for Process r = 13
Enter the Arrival Time for Process r = 2

FIRST COME FIRST SERVED ALGORITHM

Average Turn around time=22.67 ms


AverageWaiting Time=11.00 ms

GANTT CHART

-----------------------------------------------------------
| u | y | r |

-----------------------------------------------------------

0 14 22 35
-----------------------------------------------------------
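For the sample run above this works out as follows: u arrives at 0 and runs from 0 to 14 (waiting time 0 ms, turnaround 14 ms); y arrives at 1 and runs from 14 to 22 (waiting 13 ms, turnaround 21 ms); r arrives at 2 and runs from 22 to 35 (waiting 20 ms, turnaround 33 ms). The averages are (0 + 13 + 20)/3 = 11.00 ms and (14 + 21 + 33)/3 = 22.67 ms, which matches the printed output.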

1.5 QUESTIONS:

(a) What would be two major advantages and two disadvantages of FCFS scheduling?

(b) Define the difference between preemptive and non-preemptive scheduling.

(c) Define turnaround time and waiting time.


Program No: 2
TITLE: Simulation of the CPU scheduling algorithm
Non-Preemptive SJF

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the CPU scheduling algorithm Non-Preemptive SJF.

1.2 THEORY & LOGIC:


The most important aspect of job scheduling is the ability to create a multi-tasking
environment. A single user cannot keep either the CPU or the I/O devices busy at all
times. Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has something to execute. To have several jobs ready to run, the system must
keep all of them in memory at the same time for their selection one-by-one.

SJF: The Shortest Job First algorithm associates with each process the length of its
next CPU burst. When the CPU is available, it is assigned to the process that has the
smallest next CPU burst.

1.3 CODING:

// PROGRAM FOR SJF - NON PREEMPTIVE SCHEDULING ALGORITHM


//PREPROCESSOR DIRECTIVES
#include<stdio.h>
#include<conio.h>
#include<string.h>
//GLOBAL VARIABLES - DECLARATION
int Twt,Ttt,A[20],Wt[20],n,Bu[20];
float Att,Awt;
char pname[20][20];
//FUNCTION DECLARATIONS
void Getdata();
void Gantt_chart();
void Sjf();
//GETTING THE NUMBER OF PROCESSES AND THE BURST TIME AND ARRIVAL TIME FOR EACH PROCESS
void Getdata()

{
int i;
printf("\n Enter the number of processes: ");
scanf("%d",&n);
for(i=1;i<=n;i++)
{
fflush(stdin);
printf("\n\n Enter the process name: ");
scanf("%s",&pname[i]);
printf("\n Enter The BurstTime for Process %s = ",pname[i]);
scanf("%d",&Bu[i]);
printf("\n Enter the Arrival Time for Process %s = ",pname[i]);
scanf("%d",&A[i]);
}
}
//DISPLAYING THE GANTT CHART
void Gantt_chart()
{
int i;
printf("\n\nGANTT CHART");
printf("\n--------------------------------------------------------------------\n");
for(i=1;i<=n;i++)
printf("|\t%s\t",pname[i]);
printf("|\t\n");
printf("\n-----------------------------------------------------------\n");
printf("\n");
for(i=1;i<=n;i++)
printf("%d\t\t",Wt[i]);
printf("%d",Wt[n]+Bu[n]);
printf("\n--------------------------------------------------------------------\n");
printf("\n");
}
//Shortest job First Algorithm with NonPreemption
void Sjf()
{
int w,t,i,B[10],Tt=0,temp,j;
char S[10],c[20][20];
int temp1;
printf("\n\n SHORTEST JOB FIRST SCHEDULING ALGORITHM \n\n");
Twt=Ttt=0;
w=0;
for(i=1;i<=n;i++)

{
B[i]=Bu[i];
S[i]='T';
Tt=Tt+B[i];
}
for(i=1;i<=n;i++)
{
for(j=3;j<=n;j++)
{
if(B[j-1]>B[j])
{
temp=B[j-1];
temp1=A[j-1];
B[j-1]=B[j];
A[j-1]=A[j];
B[j]=temp;
A[j]=temp1;
strcpy(c[j-1],pname[j-1]);
strcpy(pname[j-1],pname[j]);
strcpy(pname[j],c[j-1]);
}
}
}
//For the 1st process
Wt[1]=0;
w=w+B[1];
t=w;
S[1]='F';
while(w<Tt)
{
i=2;
while(i<=n)
{
if(S[i]=='T'&&A[i]<=t)
{
Wt[i]=w;
S[i]='F';
w=w+B[i];
t=w;
i=2;
}
else

i++;
}
}
//CALCULATING AVERAGE WAITING TIME AND AVERAGE TURN AROUND TIME
for(i=1;i<=n;i++)
{
Twt=Twt+(Wt[i]-A[i]);
Ttt=Ttt+((Wt[i]+Bu[i])-A[i]);
}
Att=(float)Ttt/n;
Awt=(float)Twt/n;
printf("\n\n Average Turn around time=%3.2f ms ",Att);
printf("\n\n AverageWaiting Time=%3.2f ms",Awt);
Gantt_chart();
}

void main()
{
clrscr();
Getdata();
Sjf();
getch();
}

1.4 OUTPUT:

Enter the number of processes: 3

Enter the process name: u

Enter The BurstTime for Process u = 14

Enter the Arrival Time for Process u = 0

Enter the process name: y

Enter The BurstTime for Process y = 8

Enter the Arrival Time for Process y = 1

Enter the process name: t

Enter The BurstTime for Process t = 6

Enter the Arrival Time for Process t = 2

SHORTEST JOB FIRST SCHEDULING ALGORITHM

Average Turn around time=19.67 ms

AverageWaiting Time=10.33 ms

GANTT CHART
--------------------------------------------------------------------
| u | t | y |

-----------------------------------------------------------

0 14 20 26
--------------------------------------------------------------------

1.5 QUESTIONS:

(a) What would be two major advantages and two disadvantages of SJF scheduling?

(b) Define the difference between preemptive and non-preemptive scheduling.

(c) Define turnaround time and waiting time.


Program No: 3
TITLE: Simulation of the CPU scheduling algorithm
Non-Preemptive Priority

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the CPU scheduling algorithm Non-Preemptive Priority.

1.2 THEORY & LOGIC:


The most important aspect of job scheduling is the ability to create a multi-tasking
environment. A single user cannot keep either the CPU or the I/O devices busy at all
times. Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has something to execute. To have several jobs ready to run, the system must
keep all of them in memory at the same time for their selection one-by-one.

PRIORITY: A priority is associated with each process; the CPU is allocated to the
process with the highest priority.

1.3 CODING:

#include<stdio.h>
#include<conio.h>
#include<iostream.h>
void main()
{
clrscr();
int x,n,p[10],pp[10],pt[10],w[10],t[10],awt,atat,i;
printf("Enter the number of process : ");
scanf("%d",&n);
printf("\n Enter process : time priorities \n");
for(i=0;i<n;i++)
{
printf("\nProcess no %d : ",i+1);
scanf("%d %d",&pt[i],&pp[i]);
p[i]=i+1;
}
for(i=0;i<n-1;i++)
{

for(int j=i+1;j<n;j++)
{
if(pp[i]<pp[j])
{
x=pp[i];
pp[i]=pp[j];
pp[j]=x;
x=pt[i];
pt[i]=pt[j];
pt[j]=x;
x=p[i];
p[i]=p[j];
p[j]=x;
}
}
}
w[0]=0;
awt=0;
t[0]=pt[0];
atat=t[0];
for(i=1;i<n;i++)
{
w[i]=t[i-1];
awt+=w[i];
t[i]=w[i]+pt[i];
atat+=t[i];
}
printf("\n\n Job \t Burst Time \t Wait Time \t Turn Around Time Priority \n");
for(i=0;i<n;i++)
printf("\n %d \t\t %d \t\t %d \t\t %d \t\t %d \n",p[i],pt[i],w[i],t[i],pp[i]);
awt/=n;
atat/=n;
printf("\n Average Wait Time : %d \n",awt);
printf("\n Average Turn Around Time : %d \n",atat);
getch();
}

1.4 OUTPUT:

Enter the number of process : 3

Enter process : time priorities

Process no 1 : 3
5

Process no 2 : 1
6

Process no 3 : 2
7

Job Burst Time Wait Time Turn Around Time Priority

3 2 0 2 7

2 1 2 3 6

1 3 3 6 5

Average Wait Time : 1

Average Turn Around Time : 3
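For the sample run above, the highest-priority process (process 3, priority 7, burst 2) runs first, followed by process 2 (priority 6, burst 1) and process 1 (priority 5, burst 3). The waiting times are therefore 0, 2 and 3 and the turnaround times 2, 3 and 6. Because the program stores the averages in integer variables, 5/3 is truncated to 1 and 11/3 to 3; the exact averages are 1.67 and 3.67.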

1.5 QUESTIONS:

(a) What is priority scheduling?

(b) Define the difference between preemptive and non-preemptive scheduling.

(c) Define average turnaround time and average waiting time.

Program No: 4
TITLE: Simulation of the CPU scheduling algorithm
Non-Preemptive Round-Robin

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the CPU scheduling algorithm Non-Preemptive Round-Robin.

1.2 THEORY & LOGIC:


The most important aspect of job scheduling is the ability to create a multi-tasking
environment. A single user cannot keep either the CPU or the I/O devices busy at all
times. Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has something to execute. To have several jobs ready to run, the system must
keep all of them in memory at the same time for their selection one-by-one.

ROUND ROBIN: The round robin scheduling algorithm is designed especially for
time-sharing systems. A small unit of time, called a time quantum or time slice, is
defined. The ready queue is treated as a circular queue. The CPU scheduler goes around
the ready queue, allocating the CPU to each process for a time interval of up to one time
quantum.

1.3 CODING:

#include<stdio.h>
#include<conio.h>
void main()
{
int st[10],bt[10],wt[10],tat[10],n,tq;
int i,count=0,swt=0,stat=0,temp,sq=0;
float awt=0.0,atat=0.0;
clrscr();
printf("Enter number of processes:");
scanf("%d",&n);
printf("Enter burst time for sequences:");
for(i=0;i<n;i++)
{
scanf("%d",&bt[i]);
st[i]=bt[i];
}
printf("Enter time quantum:");
scanf("%d",&tq);
while(1)
{
for(i=0,count=0;i<n;i++)
{
temp=tq;
if(st[i]==0)
{
count++;
continue;
}
if(st[i]>tq)
st[i]=st[i]-tq;
else
if(st[i]>=0)
{
temp=st[i];
st[i]=0;
}
sq=sq+temp;
tat[i]=sq;
}
if(n==count)
break;
}
for(i=0;i<n;i++)
{
wt[i]=tat[i]-bt[i];
swt=swt+wt[i];
stat=stat+tat[i];
}
awt=(float)swt/n;
atat=(float)stat/n;
printf("\nProcess_no\tBurst_time\tWait_time\tTurn_around_t
ime");
for(i=0;i<n;i++)
printf("\n%d\t\t\t%d\t\t%d\t\t%d",i+1,bt[i],wt[i],tat[i]);
printf("\nAvg wait time is %f\nAvg turn around time is
%f",awt,atat);
getch();
}

1.4 OUTPUT:

Enter number of processes:3


Enter burst time for sequences:8
12
13
Enter time quantum:4

Process_no Burst_time Wait_time Turn_around_time


1 8 8 16
2 12 16 28
3 13 20 33
Avg wait time is 14.666667
Avg turn around time is 25.666666
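For the sample run above with a time quantum of 4, the CPU is shared as follows: process 1 runs during 0-4 and 12-16 (finishing at 16), process 2 during 4-8, 16-20 and 24-28 (finishing at 28), and process 3 during 8-12, 20-24, 28-32 and 32-33 (finishing at 33). The turnaround times are therefore 16, 28 and 33 and the waiting times 8, 16 and 20, which gives the printed averages of 14.67 and 25.67.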

1.5 QUESTIONS:

(a) What would be two major advantages and two disadvantages of Round-Robin scheduling?

(b) Define the difference between Round-Robin scheduling and other scheduling algorithms.

(c) Define the scheduling criteria.


Program No: 5
TITLE – Implementation of Process Synchronization
(Producer / Consumer Problem)

1.1 Objective 1.2 Theory and Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Implementation of Process Synchronization (Producer/Consumer Problem).

1.2 THEORY and LOGIC:

Two condition variables control access to the buffer. One condition variable is used to
tell if the buffer is full, and the other is used to tell if the buffer is empty. When the
producer wants to add an item to the buffer, it checks to see if the buffer is full; if it is
full the producer blocks on the cond_wait() call, waiting for an item to be removed from
the buffer. When the consumer removes an item from the buffer, the buffer is no longer
full, so the producer is awakened from the cond_wait() call. The producer is then
allowed to add another item to the buffer.

The consumer works, in many ways, the same as the producer. The consumer uses the
other condition variable to determine if the buffer is empty. When the consumer wants
to remove an item from the buffer, it checks to see if it is empty. If the buffer is empty,
the consumer then blocks on the cond_wait() call, waiting for an item to be added to the
buffer. When the producer adds an item to the buffer, the consumer's condition is
satisfied, so it can then remove an item from the buffer.

The example copies a file by reading data into a shared buffer (producer) and then
writing data out to the new file (consumer). The Buf data structure is used to hold both
the buffered data and the condition variables that control the flow of the data.

The main thread opens both files, initializes the Buf data structure, creates the consumer
thread, and then assumes the role of the producer. The producer reads data from the
input file, then places the data into an open buffer position. If no buffer positions are
available, then the producer waits via the cond_wait () call. After the producer has read
all the data from the input file, it closes the file and waits for (joins) the consumer
thread.

The consumer thread reads from a shared buffer and then writes the data to the output
file. If no buffer positions are available, then the consumer waits for the producer to fill
a buffer position. After the consumer has read all the data, it closes the output file and
exits.

If the input file and the output file were residing on different physical disks, then this
example could execute the reads and writes in parallel. This parallelism would
significantly increase the throughput of the example through the use of threads.
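The program below is written against the older Solaris threads interface (thread.h, thr_create, mutex_lock, cond_wait). As a point of comparison, a minimal sketch of the same bounded-buffer flow using the POSIX pthreads interface is shown here; the buffer size, item count and all names in this sketch are illustrative and are not part of the lab program:

#include <pthread.h>
#include <stdio.h>

#define NBUF 4

/* Shared circular buffer protected by one mutex and two condition variables. */
static int buf[NBUF];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    int i;
    (void)arg;                              /* unused */
    for (i = 1; i <= 10; i++) {
        pthread_mutex_lock(&lock);
        while (count == NBUF)               /* buffer full: wait for a free slot */
            pthread_cond_wait(&not_full, &lock);
        buf[in] = i;
        in = (in + 1) % NBUF;
        count++;
        pthread_cond_signal(&not_empty);    /* wake a waiting consumer */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg)
{
    int i, item;
    (void)arg;                              /* unused */
    for (i = 1; i <= 10; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                  /* buffer empty: wait for data */
            pthread_cond_wait(&not_empty, &lock);
        item = buf[out];
        out = (out + 1) % NBUF;
        count--;
        pthread_cond_signal(&not_full);     /* wake a waiting producer */
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Compile with -lpthread. The structure mirrors the description above: each side waits on its own condition variable while the buffer is full or empty and signals the other side after changing the buffer.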

1.3 CODING:

#define _REENTRANT
#include <stdio.h>
#include <thread.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/uio.h>

#define BUFSIZE 512


#define BUFCNT 4

/* this is the data structure that is used between the producer and consumer threads */

struct {
char buffer[BUFCNT][BUFSIZE];
int byteinbuf[BUFCNT];
mutex_t buflock;
mutex_t donelock;
cond_t adddata;
cond_t remdata;
int nextadd, nextrem, occ, done;
} Buf;

/* function prototype */
void *consumer(void *);

main(int argc, char **argv)


{
int ifd, ofd;
thread_t cons_thr;

/* check the command line arguments */


if (argc != 3)
printf("Usage: %s <infile> <outfile>\n", argv[0]), exit(0);

/* open the input file for the producer to use */


if ((ifd = open(argv[1], O_RDONLY)) == -1)
{
fprintf(stderr, "Can't open file %s\n", argv[1]);
exit(1);
}

/* open the output file for the consumer to use */


if ((ofd = open(argv[2], O_WRONLY|O_CREAT, 0666)) == -1)
{
fprintf(stderr, "Can't open file %s\n", argv[2]);
exit(1);
}

/* zero the counters */


Buf.nextadd = Buf.nextrem = Buf.occ = Buf.done = 0;

/* set the thread concurrency to 2 so the producer and consumer can


run concurrently */

thr_setconcurrency(2);

/* create the consumer thread */


thr_create(NULL, 0, consumer, (void *)ofd, NULL, &cons_thr);

/* the producer ! */
while (1) {

/* lock the mutex */


mutex_lock(&Buf.buflock);

/* check to see if any buffers are empty */


/* If not then wait for that condition to become true */

while (Buf.occ == BUFCNT)


cond_wait(&Buf.remdata, &Buf.buflock);

/* read from the file and put data into a buffer */
Buf.byteinbuf[Buf.nextadd] = read(ifd,Buf.buffer[Buf.nextadd],BUFSIZE);

/* check to see if done reading */


if (Buf.byteinbuf[Buf.nextadd] == 0) {

/* lock the done lock */


mutex_lock(&Buf.donelock);

/* set the done flag and release the mutex lock */


Buf.done = 1;

mutex_unlock(&Buf.donelock);

/* signal the consumer to start consuming */


cond_signal(&Buf.adddata);

/* release the buffer mutex */


mutex_unlock(&Buf.buflock);

/* leave the while loop */


break;
}

/* set the next buffer to fill */


Buf.nextadd = (Buf.nextadd + 1) % BUFCNT;

/* increment the number of buffers that are filled */


Buf.occ++;

/* signal the consumer to start consuming */


cond_signal(&Buf.adddata);

/* release the mutex */


mutex_unlock(&Buf.buflock);
}

close(ifd);

/* wait for the consumer to finish */


thr_join(cons_thr, 0, NULL);

/* exit the program */
return(0);
}

/* The consumer thread */


void *consumer(void *arg)
{
int fd = (int) arg;

/* check to see if any buffers are filled or if the done flag is set */
while (1) {

/* lock the mutex */


mutex_lock(&Buf.buflock);

if (!Buf.occ && Buf.done) {


mutex_unlock(&Buf.buflock);
break;
}

/* check to see if any buffers are filled */


/* if not then wait for the condition to become true */

while (Buf.occ == 0 && !Buf.done)


cond_wait(&Buf.adddata, &Buf.buflock);

/* write the data from the buffer to the file */


write(fd, Buf.buffer[Buf.nextrem], Buf.byteinbuf[Buf.nextrem]);

/* set the next buffer to write from */


Buf.nextrem = (Buf.nextrem + 1) % BUFCNT;

/* decrement the number of buffers that are full */


Buf.occ--;

/* signal the producer that a buffer is empty */


cond_signal(&Buf.remdata);

/* release the mutex */


mutex_unlock(&Buf.buflock);
}


/* exit the thread */


thr_exit((void *)0);
}

1.4 OUTPUT:

1.5 QUESTIONS:

(a) Can you achieve process synchronization in the producer-consumer problem
using counting semaphores?

(b) Which algorithm gives a deadlock-free result in the producer-consumer problem?


Program No: 6
TITLE – Simulation of Critical section problem using Dekker’s Algorithm

1.1 Objective 1.2 Theory 1.3 Algorithm 1.4 Coding 1.5 Output 1.6 Questions

1.1 Objective: Write a program for simulating the critical section problem using Dekker's Algorithm.

1.2 Theory: In concurrent programming, a critical section is a piece of code that
accesses a shared resource (data structure or device) that must not be concurrently
accessed by more than one thread of execution. A critical section will usually terminate
in fixed time, and a thread, task or process will only have to wait a fixed time to enter it
(i.e. bounded waiting). Some synchronization mechanism is required at the entry and
exit of the critical section to ensure exclusive use, for example a semaphore.

1.3 Algorithm:

var turn: integer; (is either 0 or 1)

flag: array [0..1] of boolean;

initialize flag [0] = flag [1] = false,

turn is either initialized to 0 or 1.

{ for p1 }

repeat

flag [1]:= true

while flag [0] do

if turn = 0 then
begin

flag [1]:= false

while turn = 0 do ;

flag [1]:= true

end

CS

turn:=0

flag [1]:= false

non-CS

until false
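For reference, the same entry and exit protocol can also be written for two POSIX threads sharing a counter. The function and variable names below are illustrative and are not part of the menu-driven simulation that follows; note also that on modern processors a pure software solution like this additionally needs memory barriers, so the sketch only shows the structure of the algorithm:

#include <pthread.h>
#include <stdio.h>

static volatile int flag[2] = {0, 0};  /* flag[i] = 1: process i wants to enter */
static volatile int turn = 0;          /* whose turn it is to back off          */
static int counter = 0;                /* shared data guarded by the protocol   */

static void enter(int i)
{
    int other = 1 - i;
    flag[i] = 1;
    while (flag[other]) {
        if (turn == other) {
            flag[i] = 0;               /* back off while it is the other's turn */
            while (turn == other)
                ;                      /* busy wait                             */
            flag[i] = 1;
        }
    }
}

static void leave(int i)
{
    turn = 1 - i;                      /* hand the turn to the other process    */
    flag[i] = 0;
}

static void *worker(void *arg)
{
    int id = *(int *)arg;
    int k;
    for (k = 0; k < 100000; k++) {
        enter(id);
        counter++;                     /* critical section */
        leave(id);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d\n", counter); /* 200000 expected if mutual exclusion holds */
    return 0;
}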

1.4 Coding:

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
void main()
{
int choice;
int c1=1,c2=1,turn=1;
clrscr();
do
{
printf("\n1.Process 1 Enter");
printf("\n2.Process 2 Enter");
printf("\n3.Both Process Enter");
printf("\n4.Exit");
scanf("%d",&choice);
if (choice==1)
{
c1=0;
while (c2==0)
{
if (turn==2)
{
c1=1;
while (turn==2);
c1=0;
}
else printf("\nIt is the turn of Process P2");
}
printf("\nProcess P1 enters the critical section");
c1=1;
turn=2;
}
if (choice==2)
{
c2=0;
while (c1==0)
{
if (turn==1)
{
c2=1;
while (turn==1);
c2=0;
}
else printf("\nIt is the turn of Process P1");
}
printf("\nProcess P2 enters the critical section");
c2=1;
turn=1;
}

}while (choice!=4);
}

1.5 Output:

Enter the choice: 1

Process 1 enters critical section

Enter choice 3

Both process in critical section

1.6 Questions:


a. Write Dekker's algorithm.

b. What do you understand by the critical section?

Program No: 7
TITLE – Simulation of Deadlock Prevention and Avoidance Problem through Banker's
Algorithm

1.1 Objective 1.2 Theory 1.3 Algorithm 1.4 Coding 1.5 Output 1.6 Questions

1.1 Objective: Simulation of the deadlock prevention and avoidance problem through the Banker's Algorithm.

1.2 Theory: The Banker's algorithm is run by the operating system whenever a
process requests resources. The algorithm prevents deadlock by denying or postponing
the request if it determines that accepting the request could put the system in an unsafe
state (one where deadlock could occur).

Resources

For the Banker's algorithm to work, it needs to know three things:

 How much of each resource each process could possibly request


 How much of each resource each process is currently holding
 How much of each resource the system has available

1.3 Algorithm:

n process and m resources


Max[n * m]
Allocated[n * m]
Still_Needs[n * m]
Available[m]
Temp[m]
Done[n]

Temp[j] = Available[j] for all j
Done[i] = FALSE for all i
while (a candidate can still be found) {
    Find an i such that
        a) Done[i] = FALSE
        b) Still_Needs[i,j] <= Temp[j] for all j
    if such an i exists {
        Temp[j] += Allocated[i,j] for all j
        Done[i] = TRUE
    }
    else break
}
if Done[i] = TRUE for all i then the state is safe
else the state is unsafe
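A compact C sketch of this safety check is given below; the array sizes NPROC and NRES and the function name is_safe are illustrative assumptions, not part of the manual's code:

#include <stdio.h>

#define NPROC 5   /* n processes (illustrative) */
#define NRES  3   /* m resources (illustrative) */

/* Returns 1 if the state described by Available, Allocated and Still_Needs is safe. */
int is_safe(int Available[NRES], int Allocated[NPROC][NRES], int Still_Needs[NPROC][NRES])
{
    int Temp[NRES], Done[NPROC];
    int i, j, ok, progress;

    for (j = 0; j < NRES; j++) Temp[j] = Available[j];
    for (i = 0; i < NPROC; i++) Done[i] = 0;

    do {
        progress = 0;
        for (i = 0; i < NPROC; i++) {
            if (Done[i]) continue;
            ok = 1;                               /* can process i finish with what is free? */
            for (j = 0; j < NRES; j++)
                if (Still_Needs[i][j] > Temp[j]) { ok = 0; break; }
            if (ok) {
                for (j = 0; j < NRES; j++)
                    Temp[j] += Allocated[i][j];   /* pretend it finishes and releases all */
                Done[i] = 1;
                progress = 1;
            }
        }
    } while (progress);

    for (i = 0; i < NPROC; i++)
        if (!Done[i]) return 0;                   /* some process can never finish: unsafe */
    return 1;                                     /* every process can finish: safe */
}

A request is then granted only if, after tentatively adding it to the allocation matrix and subtracting it from the available vector, the check above still reports a safe state; otherwise the tentative allocation is rolled back, exactly as in the ValidateRequest() pseudocode below.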

1.4 Coding:

ValidateRequest()

r = 0 to Maximum Processes
c = 0 to Maximum Resources

1). if ( Alloc[r][c] + Request[c] is greater than Claim[r][c] )


then
signal Claim Error

2).else if( Request[c] is greater than Available[c] )


then
signal Available resource Error

3).if any of the (1 & 2 )above exist return the specified error
and Show the message box indicating the error.

4).Now Allocate the Resources to the processes. So we can check the deadlock
Alloc[r][c] += Request[c];
Available[c]-=Request[c];

Note: The SafeState() algorithm is described below.


5). Check the system is in safe or unsafe state.

if system is in SafeState()
then
Signal to the system that system is in safe state

and return to calling functions.
else
Take the allocated resources back.

Alloc[r][c] -= Request[c];
Available[c]+=Request[c];
and signal to the system that unsafe state can occur, so the
request cannot be fulfilled and return to the calling functions.

--------------------------------------------------------------------------------------

Safestate()
r = 0 to Maximum Processes
c = 0 to Maximum Resources

1). Store the Available Resources in a temporary Matrix


tmpAvail[r] = Available[r];

2). Take an array Process[] that holds one entry per process.
Assign every process a positive number:
Process[r] = r;
3). Find a process in the list that satisfies the following condition:
( Claim[r][c] - Alloc[r][c] ) <= tmpAvail[c]
If such a process exists, mark it with some indicator so that it
will not take part in the search again.
Point 3 is implemented in the following code.

//////
while( Loop )
{
//////
for( i = 0; i < MAXPRO; i++ )
{
if( Process[i] != -1 )
{
Count = 0;
//////
for( int j = 0; j < MAXRES; j++ )
{

if( ( Claim[i][j] - Alloc[i][j] ) <= tmpAvail[j] )


{

Count++;
}

else
{
Check = false;
break;
}
}
//////
if( Count == MAXRES )
{
Check = true;
break;
}

}
//////
if( Check == true )
{
//////
for( int n=0; n < MAXRES; n++ )
{
tmpAvail[n]+=Alloc[i][n];
}
////// // Process Marked
Process[i] = -1;
Check = false;
}
4). Otherwise simply go out of the search loop.

else
{
Loop = false;
}

}
//////
Count = 0;

for( i = 0; i < MAXPRO; i++ )
{
if( Process[i] == -1 )
Count++;
}
if( Count == MAXPRO )
Check = true;
else
Check = false;

5). Returns the indicator of safe state to the main functions

return (Check);
}

1.5 Output:

Input:
Total Resources: Inputs the total resources.
Max Demand: Input the Maximum demand.
Current Need: Input the Current need of the processes.
Output:
Allocated: Shows the Allocated resources by the processes.
Available: Shows the available resources in the system.
Claimed: Shows the Total resources claimed by processes.
Still Needed: Shows the resources that are still needed by processes.
Summary:
Summary: Shows the summary of functions that are already performed.

1.6 Questions:
(a) List three examples of deadlock that are not related to a computer system
environment.

(b) Is it possible to have a deadlock involving only a single process? Explain your
answer.


Program No: 8
TITLE: Simulation of FIFO page replacement algorithm

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the FIFO page replacement algorithm.

1.2 THEORY & LOGIC:


Page replacement algorithms decide which page to swap out from memory to allocate a
new page to memory. Page fault happens when a page is not in memory and needs to be
placed into memory. The idea is to find an algorithm that reduces the number of page
faults which limits the time it takes to swap in and swap out a page.
FIFO:
A simple queue structure where the page at the head of the queue gets swapped out
when a new page needs to be allocated. The new page gets added to the end of the
queue.

1.3 CODING:

#include<stdio.h>
#include<conio.h>
int fr[3];
void main()
{
void display();
int i,j,page[12]={2,3,2,1,5,2,4,5,3,2,5,2};
int flag1=0,flag2=0,pf=0,frsize=3,top=0;
clrscr();
for(i=0;i<3;i++)
{
fr[i]=-1;
}
for(j=0;j<12;j++)
{
flag1=0;
flag2=0;
for(i=0;i<frsize;i++)
{
if(fr[i]==page[j])
{
flag1=1;
flag2=1;
break;
}
}
if(flag1==0)
{
for(i=0;i<frsize;i++)
{
if(fr[i]==-1)
{
fr[i]=page[j];
flag2=1;
break;
}
}
}
if(flag2==0)
{
fr[top]=page[j];
top++;
pf++;
if(top>=frsize)
top=0;
}
display();
}
printf("Number of page faults : %d ",pf);
getch();
}
void display()
{
int i;
printf("\n");
for(i=0;i<3;i++)
printf("%d\t",fr[i]);
}

1.4 OUTPUT:

2 -1 -1
2 3 -1
2 3 -1
2 3 1
5 3 1
5 2 1
5 2 4
5 2 4
3 2 4
3 2 4
3 5 4
3 5 2

Number of page faults : 6
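Note that the program counts a fault only when a page already in a frame has to be replaced; the three initial loads into empty frames are not counted. Counting those as well, this reference string produces 9 page faults in total.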

1.5 QUESTIONS:

(a) Why do we need page replacement algorithms?

(b) Define the FIFO page replacement algorithm.


Program No: 9
TITLE: Simulation of LRU page replacement algorithm

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the LRU page replacement algorithm.

1.2 THEORY & LOGIC:


Page replacement algorithms decide which page to swap out from memory to allocate a
new page to memory. Page fault happens when a page is not in memory and needs to be
placed into memory. The idea is to find an algorithm that reduces the number of page
faults which limits the time it takes to swap in and swap out a page.
LRU:
LRU keeps track of how recently each frame has been referenced. When a page needs to
be swapped out, the frame that has been used least recently is selected, and the new page
from the reference string is placed into that frame.

1.3 CODING:

#include<stdio.h>
#include<conio.h>
int fr[3];
void main()
{
void display();
int p[12]={2,3,2,1,5,2,4,5,3,2,5,2},i,j,fs[3];
int index,k,l,flag1=0,flag2=0,pf=0,frsize=3;
clrscr();
for(i=0;i<3;i++)
{
fr[i]=-1;
}
for(j=0;j<12;j++)
{
flag1=0,flag2=0;
for(i=0;i<3;i++)
{
if(fr[i]==p[j])
{
flag1=1;
flag2=1;
break;
}
}
if(flag1==0)
{
for(i=0;i<3;i++)
{
if(fr[i]==-1)
{
fr[i]=p[j];
flag2=1;
break;
}
}
}
if(flag2==0)
{
for(i=0;i<3;i++)
fs[i]=0;
for(k=j-1,l=1;l<=frsize-1;l++,k--)
{
for(i=0;i<3;i++)
{
if(fr[i]==p[k])
fs[i]=1;
}
}
for(i=0;i<3;i++)
{
if(fs[i]==0)
index=i;
}
fr[index]=p[j];
pf++;
}
display();
}
printf("\n no of page faults :%d",pf);
getch();
}
void display()
{
int i;
printf("\n");
for(i=0;i<3;i++)
printf("\t%d",fr[i]);
}

1.4 OUTPUT:

2 -1 -1
2 3 -1
2 3 -1
2 3 1
2 5 1
2 5 1
2 5 4
2 5 4
3 5 4
3 5 2
3 5 2
3 5 2
no of page faults : 4

1.5 QUESTIONS:

(a) Why do we need page replacement algorithms?

(b) Define the difference between FIFO and LRU.


Program No: 10
TITLE: Simulation of Optimal page replacement algorithm

1.1 Objective 1.2 Theory & Logic 1.3 Coding 1.4 Output 1.5 Questions

1.1 OBJECTIVE: Simulation of the Optimal page replacement algorithm.

1.2 THEORY & LOGIC:


Page replacement algorithms decide which page to swap out from memory to allocate a
new page to memory. Page fault happens when a page is not in memory and needs to be
placed into memory. The idea is to find an algorithm that reduces the number of page
faults which limits the time it takes to swap in and swap out a page.
OPTIMAL:
This is, in theory, the best algorithm to use for page replacement. The algorithm looks
ahead in the reference string to see how soon each page currently in a frame will be used
again. The page whose next use is farthest away (or that is never used again) is swapped
out when a new page needs to be allocated.

1.3 CODING:

#include<stdio.h>
#include<conio.h>
int fr[3];
void main()
{
void display();
int p[12]={2,3,2,1,5,2,4,5,3,2,5,2},i,j,fs[3];

int max,found=0,lg[3],index,k,l,flag1=0,flag2=0,pf=0,frsize=3;
clrscr();
for(i=0;i<3;i++)
{
fr[i]=-1;
}
for(j=0;j<12;j++)
{
flag1=0;
flag2=0;
for(i=0;i<3;i++)
{
if(fr[i]==p[j])
{
flag1=1;
flag2=1;
break;
}
}
if(flag1==0)
{
for(i=0;i<3;i++)
{
if(fr[i]==-1)
{
fr[i]=p[j];
flag2=1;
break;
}
}
}

if(flag2==0)
{
for(i=0;i<3;i++)
lg[i]=0;
for(i=0;i<frsize;i++)
{
for(k=j+1;k<12;k++)
{
if(fr[i]==p[k])
{
lg[i]=k-j;
break;
}
}
}
found=0;
for(i=0;i<frsize;i++)
{
if(lg[i]==0)
{
index=i;
found=1;
break;
}
}
if(found==0)
{
max=lg[0];
index=0;
for(i=1;i<frsize;i++)
{
if(max<lg[i])
{
max=lg[i];
index=i;
}
}
}
fr[index]=p[j];
pf++;
}
display();
}
printf("\n no of page faults:%d",pf);
getch();
}
void display()
{
int i;
printf("\n");
for(i=0;i<3;i++)
printf("\t%d",fr[i]);
}

1.4 OUTPUT:

2 -1 -1
2 3 -1
2 3 -1
2 3 1
2 3 5
2 3 5
4 3 5
4 3 5
4 3 5
2 3 5
2 3 5
2 3 5

no of page faults : 3

1.5 QUESTIONS:

(a) Why do we need page replacement algorithms?

(b) Which is the best page replacement algorithm and why?


Program No: 11
TITLE – Synchronization of POSIX thread using mutex variable.

1.1 Objective 1.2 Theory 1.3 Logic 1.4 Coding 1.5 Output 1.6 Questions

1.1 OBJECTIVE: Write a program for synchronizing POSIX threads using a mutex variable.

1.2 THEORY :

What is a Thread?

 Technically, a thread is defined as an independent stream of instructions that can


be scheduled to run as such by the operating system. But what does this mean?
 To the software developer, the concept of a “procedure” that runs independently
from its main program may best describe a thread.

What are Pthreads?

 Historically, hardware vendors have implemented their own proprietary versions


of threads. These implementations differed substantially from each other making
it difficult for programmers to develop portable threaded applications.
 In order to take full advantage of the capabilities provided by threads, a
standardized programming interface was required. For UNIX systems, this
interface has been specified by the IEEE POSIX 1003.1c standard (1995).

Implementations which adhere to this standard are referred to as POSIX threads,
or Pthreads. Most hardware vendors now offer Pthreads in addition to their
proprietary API's.
 Pthreads are defined as a set of C language programming types and procedure
calls, implemented with a pthread.h header/include file and a thread library -
though this library may be part of another library, such as libc. (A minimal creation/join example is sketched below.)
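As a minimal illustration of the Pthreads interface described above (this small example is not one of the lab programs), a thread is created with pthread_create and waited for with pthread_join:

#include <pthread.h>
#include <stdio.h>

static void *hello(void *arg)
{
    printf("Hello from thread %ld\n", (long)arg);   /* arg carries a small integer id */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, hello, (void *)1L);  /* start the thread        */
    pthread_join(tid, NULL);                        /* wait for it to finish   */
    return 0;
}

Such programs are compiled and linked with the pthread library, e.g. cc prog.c -lpthread.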

Synchronization

Synchronization is a problem in timekeeping which requires the coordination of events


to operate a system in unison. The familiar conductor of an orchestra serves to keep the
orchestra in time. Systems operating with all their parts in synchrony are said to be
synchronous or in sync. Some systems may be only approximately synchronized, or
plesiochronous. For some applications relative offsets between events need to be
determined, for others only the order of the event is important.

Mutex variable:

A mutex lock is the simplest form of lock, providing mutually exclusive access (hence
the term “mutex”) to a shared resource. Threads wanting to access a resource protected
by a mutex lock must first acquire the lock. Only one thread at a time may acquire the
lock. Any other threads attempting to acquire the lock are blocked until the owner
relinquishes it.

Mutex is an abbreviation for "mutual exclusion". Mutex variables are one of the
primary means of implementing thread synchronization and for protecting shared data
when multiple writes occur.

A mutex variable acts like a "lock" protecting access to a shared data resource. The
basic concept of a mutex as used in Pthreads is that only one thread can lock (or own) a
mutex variable at any given time. Thus, even if several threads try to lock a mutex only
one thread will be successful. No other thread can own that mutex until the owning
thread unlocks that mutex. Threads must "take turns" accessing protected data.
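The basic lock/unlock discipline is easy to sketch; the counter and function name here are illustrative only and are separate from the dot-product program that follows:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

void add_to_counter(long value)
{
    pthread_mutex_lock(&lock);    /* only one thread at a time gets past this point */
    shared_counter += value;      /* protected update of the shared data            */
    pthread_mutex_unlock(&lock);  /* release the lock so other threads may proceed  */
}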

1.3 LOGIC:

This program illustrates the use of mutex variables in a threads program that performs a
dot product. The main data is made available to all threads through a globally accessible
structure. Each thread works on a different part of the data. The main thread waits for
all the threads to complete their computations, and then it prints the resulting sum.

The function dotprod is activated when the thread is created. All input to this routine
is obtained from a structure of type DOTDATA and all output from this function is
written into this structure. The benefit of this approach is apparent for the multi-threaded
program: when a thread is created we pass a single argument to the activated function -
typically this argument is a thread number. All the other information required by the
function is accessed from the globally accessible structure.

The main program creates the threads, which do all the work, and then prints out the result
upon completion. Before creating the threads, the input data is created. Since all threads
update a shared structure, we need a mutex for mutual exclusion. The main thread needs
to wait for all threads to complete, so it waits for each one of the threads. We specify a
thread attribute value that allows the main thread to join with the threads it creates. Note
also that we free up handles when they are no longer needed.

1.4 CODING:

#include <pthread.h>
#include <stdio.h>
#include <malloc.h>

/* The following structure contains the necessary information to allow the function
"dot prod" to access its input data and place its output into the structure. */

typedef struct

{
double *a;
double *b;
double sum;
int veclen;

} DOTDATA; /* Define globally accessible variables and a mutex */

#define NUMTHRDS 4

#define VECLEN 100

DOTDATA dotstr;
pthread_t callThd[NUMTHRDS];

pthread_mutex_t mutexsum;

void *dotprod (void *arg)


{

/* Define and use local variables for convenience */


int i, start, end, offset, len ;
double mysum, *x, *y;
offset = (int)arg;

len = dotstr.veclen;
start = offset*len;
end = start + len;
x = dotstr.a;
y = dotstr.b;

/* Perform the dot product and assign result to the appropriate variable in the
structure. */

mysum = 0;
for (i=start; i<end ; i++)
{
mysum += (x[i] * y[i]);
}

/* Lock a mutex prior to updating the value in the shared structure, and unlock it
upon updating. */

pthread_mutex_lock (&mutexsum);
dotstr.sum += mysum;
pthread_mutex_unlock (&mutexsum);

pthread_exit((void*) 0);
}

int main (int argc, char *argv[])


{
int i;
double *a, *b;
void *status;

pthread_attr_t attr;

/* Assign storage and initialize values */

a = (double*) malloc (NUMTHRDS*VECLEN*sizeof(double));


b = (double*) malloc (NUMTHRDS*VECLEN*sizeof(double));

for (i=0; i<VECLEN*NUMTHRDS; i++)


{
a[i]=1.0;
b[i]=a[i];
}

dotstr.veclen = VECLEN;
dotstr.a = a;
dotstr.b = b;
dotstr.sum=0;

pthread_mutex_init(&mutexsum, NULL);

/* Create threads to perform the dotproduct */

pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

for(i=0; i<NUMTHRDS; i++)


{
pthread_create( &callThd[i], &attr, dotprod, (void *)i);
}

pthread_attr_destroy(&attr);

for(i=0; i<NUMTHRDS; i++)


{
pthread_join( callThd[i], &status);
}

/* After joining, print out the results and cleanup */


printf ("Sum = %f \n", dotstr.sum);
free (a);
free (b);
pthread_mutex_destroy(&mutexsum);

pthread_exit(NULL);
}

1.5 OUTPUT:

Sum = 400.000000

(Each of the 4 threads sums 100 products of 1.0 * 1.0, so the combined dot product is 400.)

1.6 QUESTIONS:

(a) What are mutex variables?

(b) How are mutex variables used for synchronization?

(c) How are mutex variables and condition variables different?

Program No: 12
TITLE – Synchronization of POSIX thread using Condition variables

1.1 Objective 1.2 Theory 1.3 Logic 1.4. Coding 1.5 Output 1.6 Questions

1.1 OBJECTIVE: Write a program for synchronizing POSIX threads using condition
variables.

1.2 THEORY:
What is a Thread?

 Technically, a thread is defined as an independent stream of instructions that can


be scheduled to run as such by the operating system. But what does this mean?
 To the software developer, the concept of a “procedure” that runs independently
from its main program may best describe a thread.

What are Pthreads?

 Historically, hardware vendors have implemented their own proprietary versions


of threads. These implementations differed substantially from each other making
it difficult for programmers to develop portable threaded applications.
 In order to take full advantage of the capabilities provided by threads, a
standardized programming interface was required. For UNIX systems, this
interface has been specified by the IEEE POSIX 1003.1c standard (1995).
Implementations which adhere to this standard are referred to as POSIX threads,
or Pthreads. Most hardware vendors now offer Pthreads in addition to their
proprietary API's.
 Pthreads are defined as a set of C language programming types and procedure
calls, implemented with a pthread.h header/include file and a thread library -
though this library may be part of another library, such as libc.

Synchronization

Synchronization is a problem in timekeeping which requires the coordination of events


to operate a system in unison. The familiar conductor of an orchestra serves to keep the
orchestra in time. Systems operating with all their parts in synchrony are said to be
synchronous or in sync. Some systems may be only approximately synchronized, or
plesiochronous. For some applications relative offsets between events need to be
determined, for others only the order of the event is important.

Synchronization using condition variables

 Condition variables provide yet another way for threads to synchronize. While
mutexes implement synchronization by controlling thread access to data,
condition variables allow threads to synchronize based upon the actual value of
data.

 Without condition variables, the programmer would need to have threads


continually polling (possibly in a critical section), to check if the condition is
met. This can be very resource consuming since the thread would be
continuously busy in this activity. A condition variable is a way to achieve the
same goal without polling.
 A condition variable is always used in conjunction with a mutex lock.

We might use a semaphore to manage access to a pool of indistinct resources. Threads


wanting to use the resources in this pool wait on a given semaphore. The object that
manages the pool then sends out a signal for each available resource in the pool. The
same number of waiting threads would then be able to acquire a resource and do some
work with it. As each thread finished with the resource, it would put it back in the pool,
at which point the manager would send out another signal.
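The canonical wait/signal pattern implied by these points can be sketched as follows (all names here are illustrative); the waiting side re-checks its predicate in a while loop because pthread_cond_wait can return spuriously:

#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int data_ready = 0;

/* Waiting side: sleep until another thread sets data_ready. */
void wait_for_data(void)
{
    pthread_mutex_lock(&m);
    while (!data_ready)              /* re-check the condition after every wake-up */
        pthread_cond_wait(&cv, &m);  /* atomically releases m while it waits       */
    /* ... use the data while still holding the mutex ... */
    pthread_mutex_unlock(&m);
}

/* Signalling side: change the data and wake the waiter. */
void publish_data(void)
{
    pthread_mutex_lock(&m);
    data_ready = 1;
    pthread_cond_signal(&cv);        /* wake one thread blocked in wait_for_data   */
    pthread_mutex_unlock(&m);
}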
1.3 LOGIC:

This simple example code demonstrates the use of several Pthread condition variable
routines. The main routine creates three threads. Two of the threads perform work and
update a "count" variable. The third thread waits until the count variable reaches a
specified value.
The watching thread locks the mutex and waits for the signal. Note that the pthread_cond_wait
routine will automatically and atomically unlock the mutex while it waits. Also, note that if
COUNT_LIMIT is reached before this routine is run by the waiting thread, the wait loop is
skipped so that pthread_cond_wait is never left waiting for a signal that will not come.

1.4 CODING:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 3
#define TCOUNT 10
#define COUNT_LIMIT 12

int count = 0;
int thread_ids[3] = {0,1,2};

pthread_mutex_t count_mutex;
pthread_cond_t count_threshold_cv;

void *inc_count(void *idp)


{
int j,i;
double result=0.0;
int *my_id = idp;

for (i=0; i<TCOUNT; i++)


{
pthread_mutex_lock(&count_mutex);
count++;

/* Check the value of count and signal waiting thread when condition is reached.
Note that this occurs while mutex is locked. */

if (count == COUNT_LIMIT)
{
pthread_cond_signal(&count_threshold_cv);
printf("inc_count(): thread %d, count = %d Threshold reached.\n", *my_id, count);
}
printf("inc_count(): thread %d, count = %d, unlocking mutex\n", *my_id, count);
pthread_mutex_unlock(&count_mutex);

/* Do some work so threads can alternate on mutex lock */


for (j=0; j<1000; j++)
result = result + (double)random();
}
pthread_exit(NULL);
}

void *watch_count(void *idp)


{
int *my_id = idp;

printf("Starting watch_count(): thread %d\n", *my_id);

pthread_mutex_lock(&count_mutex);

while (count<COUNT_LIMIT) {
pthread_cond_wait(&count_threshold_cv, &count_mutex);
printf("watch_count(): thread %d Condition signal
received.\n", *my_id);
}

pthread_mutex_unlock(&count_mutex);
pthread_exit(NULL);

}

int main (int argc, char *argv[])


{
int i, rc;
pthread_t threads[3];
pthread_attr_t attr;

/* Initialize mutex and condition variable objects */

pthread_mutex_init(&count_mutex, NULL);
pthread_cond_init (&count_threshold_cv, NULL);

/* For portability, explicitly create threads in a joinable state */

pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
pthread_create(&threads[0], &attr, inc_count, (void *)&thread_ids[0]);
pthread_create(&threads[1], &attr, inc_count, (void *)&thread_ids[1]);
pthread_create(&threads[2], &attr, watch_count, (void *)&thread_ids[2]);

/* Wait for all threads to complete */

for (i=0; i<NUM_THREADS; i++)


{
pthread_join(threads[i], NULL);
}
printf ("Main(): Waited on %d threads. Done.\n", NUM_THREADS);

/* Clean up and exit */


pthread_attr_destroy(&attr);
pthread_mutex_destroy(&count_mutex);
pthread_cond_destroy(&count_threshold_cv);
pthread_exit(NULL);
}

1.5 OUTPUT:

Waited on threads 1. Done

Waited on threads 2. Done


Waited on threads 3. Done

Thread 1, count = 3 Threshold reached.

Starting watch_count(): thread 0

Starting watch_count(): thread 1

Starting watch_count(): thread 2

1.6 QUESTIONS:

Q1: What are condition variables?

Q2: What is the significance of condition variables in synchronization?

Q3: Why is synchronization necessary in Pthreads?
