OS Report Final
INTRODUCTION
A process defines a framework in which a program
executes. In a traditional operating system, the basic unit of CPU utilization is the process.
Each process has its own program counter, its own register state, its own stack, and its
own address space. In modern operating systems, the basic unit of CPU utilization is the
thread, and a process may consist of one or more threads.
This information is stored in a data structure, typically called a
process control block, that is created and managed by the OS. The process control block
contains sufficient information that it is possible to interrupt a running process
and later resume execution as if the interruption had not occurred. The process control
block is the key tool that enables the OS to support multiple processes and to provide for
multiprocessing.
Process State
The behavior of an individual process can be characterized by listing the sequence of
instructions that execute for that process. Such a listing is referred to as a trace of the
process. A small program, called the dispatcher, switches the processor from one process
to another.
A process can be in any of the five states while it is on the system:
RUNNING: The process that is currently being executed. At most one process at a time
can be in this state.
READY: A process that is prepared to execute when given the opportunity.
BLOCKED/WAITING: A process that cannot execute until some event occurs, such as
the completion of an I/O operation.
NEW: A process that has just been created but has not yet been admitted to the pool of
executable processes by the OS. Typically, a new process has not yet been loaded into
main memory, although its process control block has been created.
EXIT: A process that has been released from the pool of executable processes by the OS,
either because it halted or because it aborted for some reason.
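The five states and the transitions between them can be sketched as a small Java enum. This is an illustration only: the state names follow the list above, and the transition table is a simplification of the five-state model.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public class ProcessStates {
    enum State { NEW, READY, RUNNING, BLOCKED, EXIT }

    // Legal transitions in the five-state model described above.
    static final Map<State, EnumSet<State>> TRANSITIONS = new EnumMap<>(State.class);
    static {
        TRANSITIONS.put(State.NEW,     EnumSet.of(State.READY));
        TRANSITIONS.put(State.READY,   EnumSet.of(State.RUNNING));
        TRANSITIONS.put(State.RUNNING, EnumSet.of(State.READY, State.BLOCKED, State.EXIT));
        TRANSITIONS.put(State.BLOCKED, EnumSet.of(State.READY));
        TRANSITIONS.put(State.EXIT,    EnumSet.noneOf(State.class));
    }

    static boolean canMove(State from, State to) {
        return TRANSITIONS.get(from).contains(to);
    }

    public static void main(String[] args) {
        // A blocked process must become ready before it can run again.
        System.out.println("BLOCKED -> RUNNING allowed: " + canMove(State.BLOCKED, State.RUNNING));
        System.out.println("BLOCKED -> READY allowed: " + canMove(State.BLOCKED, State.READY));
    }
}
```

Note that a blocked process never moves directly to the running state; the event it waits on makes it ready, and the dispatcher later selects it to run.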
THREADS
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack. It shares with other threads belonging to the same
process its code section, data section, and other operating-system resources, such as open
files and signals. A process may be single-threaded or multithreaded. A traditional
process has a single thread of control. If a process has multiple threads of control, it can
perform more than one task at a time. The difference between a single-threaded process
and a multithreaded process is shown in the figure below:
[Figure: a single-threaded process and a multithreaded process. Code, data, and open
files are shared among the threads of a process, while each thread has its own registers
and stack.]
In certain situations, a single application may be required to perform several similar tasks.
For example, a web server accepts client requests for web pages, images, sound and so
forth. A busy web server may have several clients concurrently accessing it. If the web
server ran as a traditional single-threaded process, it would be able to service only one
client at a time. One solution is to have the server run as a single process that accepts
requests. When the server receives a request, it creates a separate process to service that
request. In fact, this process-creation method was in common use before threads became
popular.
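The thread-per-request alternative discussed next can be sketched in Java. As a minimal illustration, the incoming requests are simulated here with plain strings rather than real network connections, and the handler logic is a placeholder:

```java
public class ThreadPerRequest {
    // Placeholder for the work done to service one client request.
    static void service(String request) {
        System.out.println("Serviced " + request + " on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        String[] requests = { "page.html", "logo.png", "sound.wav" };
        Thread[] workers = new Thread[requests.length];
        // Instead of servicing requests one at a time, hand each one to its own thread.
        for (int i = 0; i < requests.length; i++) {
            final String req = requests[i];
            workers[i] = new Thread(() -> service(req));
            workers[i].start();
        }
        for (Thread t : workers) t.join();   // wait for all requests to finish
        System.out.println("All " + requests.length + " requests serviced");
    }
}
```

Creating a thread per request is far cheaper than creating a process per request, which is why it displaced the process-creation method described above.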
Threads also play a vital role in remote procedure call (RPC) systems. RPCs provide an
interprocess communication mechanism similar to ordinary function or procedure calls.
RPC servers are typically multithreaded. Finally, many operating system kernels are now
multithreaded; several threads operate in the kernel, and each thread performs a specific
task, such as managing devices or interrupt handling. For example, Solaris creates a set of
threads in the kernel specifically for interrupt handling.
Benefits
The benefits of multithreaded programming can be broken down into four major
categories:
1. Responsiveness
2. Resource sharing
3. Economy
4. Utilization of multiprocessor architectures
Responsiveness : Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation, thereby
increasing responsiveness to the user.
Resource sharing: Threads share the memory and the resources of the process to which
they belong. The benefit of sharing code and data is that it allows an application to have
several different threads of activity within the same address space.
Economy: Allocating memory and resources for process creation is costly. Because
threads share resources of the process to which they belong, it is more economical to
create and context-switch threads. In general, it is much more time-consuming to create
and manage processes than threads.
Utilization of multiprocessor architectures: The benefits of multithreading can be
greatly increased in a multiprocessor architecture, where threads may be running in
parallel on different processors.
Multithreading Models
Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel
threads. User threads are supported above the kernel and are managed without kernel
support, whereas kernel threads are supported and managed directly by the operating
system.
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient, but the
entire process will block if a thread makes a blocking system call. The figure shows the
many-to-one model.
One-to-One Model
The one-to-one model maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a thread
makes a blocking system call. It also allows multiple threads to run in parallel on
multiprocessors. The only drawback to this model is that creating a user thread requires
creating the corresponding kernel thread. The figure shows the one-to-one model.
Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The number of kernel threads may be specific to either a
particular application or a particular machine. Whereas the many-to-one model allows the
developer to create as many user threads as she wishes, true concurrency is not gained
because the kernel can schedule only one thread at a time. The one-to-one model allows
greater concurrency, but the developer must be careful not to create too many threads
within an application. The many-to-many model suffers from neither of these
shortcomings: developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.
Java Threads
Threads are the fundamental model of program execution in a Java program, and the Java
language and its API provide a rich set of features for the creation and management of
threads. All Java programs comprise at least a single thread of control. There are two
techniques for creating threads in a Java program. One approach is to create a new class
that is derived from the Thread class and to override its run() method. An alternative and
more commonly used technique is to define a class that implements the Runnable
interface.
Source Code:
import java.util.Random;
import java.io.*;
class CreateThreads
{
    public static void main(String[] args)
    {
        int noOfThreads;
        try
        {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            System.out.println("Enter the Required Number Of Threads");
            noOfThreads = Integer.parseInt(br.readLine());
            for (int index = 0; index < noOfThreads; index++)
            {
                // PrimeNumThread (not included in this report) is assumed to
                // extend Thread, start itself in its constructor, and print
                // the primes below a random upper limit, as in the output below.
                PrimeNumThread P = new PrimeNumThread();
            }
        }
        catch (Exception e)
        {
            System.out.println(e);
        }
    } //end of main method
} //end of class CreateThreads
Output
Enter the Required Number Of Threads
1
The upperlimit of Thread[Thread-0,5,main] is 18
ThreadName: Thread-0 And primeNumber within:18---> 2
ThreadName: Thread-0 And primeNumber within:18---> 3
ThreadName: Thread-0 And primeNumber within:18---> 5
ThreadName: Thread-0 And primeNumber within:18---> 7
ThreadName: Thread-0 And primeNumber within:18---> 11
ThreadName: Thread-0 And primeNumber within:18---> 13
ThreadName: Thread-0 And primeNumber within:18---> 17
2. INTRODUCTION
Process is a program in execution. Process is a passive entity with a
program counter specifying the next instruction to execute and a set of
associated resources. Each process is represented in the operating system by a
process control block (PCB) also called a task control block. It contains
information such as process state, program counter, cpu registers.
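A drastically simplified PCB holding just the fields mentioned above (state, program counter, registers) might look like the following Java sketch. Real PCBs hold far more; the field choice and the tiny register file here are illustrative assumptions.

```java
import java.util.Arrays;

public class Pcb {
    enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    int pid;
    State state;
    long programCounter;
    long[] registers;               // saved CPU register values

    Pcb(int pid) {
        this.pid = pid;
        this.state = State.NEW;
        this.registers = new long[4];   // tiny illustrative register file
    }

    // On an interrupt the OS saves the CPU context into the PCB ...
    void saveContext(long pc, long[] regs) {
        this.programCounter = pc;
        this.registers = regs.clone();
        this.state = State.READY;
    }

    public static void main(String[] args) {
        Pcb pcb = new Pcb(42);
        pcb.saveContext(0x1000, new long[] { 1, 2, 3, 4 });
        // ... so that the process can later resume exactly where it left off.
        System.out.println("pid=" + pcb.pid + " state=" + pcb.state
                + " pc=" + pcb.programCounter
                + " regs=" + Arrays.toString(pcb.registers));
    }
}
```

Saving and restoring this structure is exactly what makes it possible to interrupt a running process and resume it later as if nothing had happened.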
System calls:
System calls provide the interface between a process and the operating system. A
system call is a routine built into the kernel to perform a function that requires
communication with the system's hardware.
All activities related to file handling, process and memory management and
maintenance of user and system information are handled by the kernel using
these system calls.
For example: when we execute a C program, the CPU runs in user mode until a
system call is invoked. In this mode, the process has access to a limited section
of the computer’s memory and can execute a restricted set of machine
instructions.
However, when the process invokes a system call, the CPU switches
from user mode to a more privileged mode i.e., the kernel mode. In this mode,
it is the kernel that runs on behalf of the user. After the system call has returned, the
CPU switches back to user mode for further execution.
The Unix and Linux operating systems utilize the same system calls. The Portable
Operating System Interface for Computer Environments (POSIX.1) specifies the C API
that contains the built-in system calls.
Processor time
The amount of time a process takes to run, given that it has exclusive and uninterrupted
use of the CPU. Note that in a modern computer, this would be very unusual, and so the
processor time calculation for most processes involves adding up all the small amounts of
time the CPU actually spends on the process. Some systems break processor time down
into user time and system time.
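The distinction between real (wall-clock) time and processor time can be observed from Java through the ThreadMXBean API, where the JVM supports per-thread CPU timing. The sketch below busy-loops briefly and compares the two; the loop bound is an arbitrary choice.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsWallTime {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime();   // user + system time, in ns

        long sink = 0;
        for (long i = 0; i < 50_000_000L; i++) sink += i; // burn some CPU time

        long cpu = bean.getCurrentThreadCpuTime() - cpuStart;
        long wall = System.nanoTime() - wallStart;
        long user = bean.getCurrentThreadUserTime();      // user-mode portion only

        System.out.println("sink=" + sink);
        System.out.println("cpu time (ms):  " + cpu / 1_000_000);
        System.out.println("wall time (ms): " + wall / 1_000_000);
        // User time can never exceed total CPU time (cpu = user + system).
        System.out.println("user <= cpu: " + (user <= bean.getCurrentThreadCpuTime()));
    }
}
```

On a loaded machine the wall time exceeds the CPU time, because the thread spends part of the interval not scheduled on any processor.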
Kernel space
It is that portion of memory in which the kernel executes and provides its services.
Kernel-space time is the amount of time taken by the kernel to execute system calls and other
interrupt services in kernel mode. The division of system memory in Linux-like operating
systems into user space and kernel space plays an important role in maintaining system
stability and security.
DESIGN
Algorithm to create the child processes and to find the real time, processor time,
user space time, and kernel space time for each process after creation. Also display
the real time, user space time, and kernel space time of the parent process.
Algorithm: Child_process(process_no)
// Purpose: To count the prime numbers below a random limit and to find the real
//          time, processor time, user space time, and kernel space time of each
//          child process
// Input:  process_no
// Output: processor time when the child is created;
//         random number and total prime count of each child process;
//         real time, user space time, and kernel space time of each child process

t1 <- processor time              // find processor time & store in a variable
Rand <- rand() % const_value      // generate a random number using rand()
total_prime_no <- 0
for n <- 0 to Rand - 1 do
    prime <- 1                    // initialize prime to 1
    for i <- 2 to n - 1 do
        if (n mod i = 0) then
            prime <- 0            // n is not prime
        end if
    end for
    if (prime) then
        increment total_prime_no
    end if
end for
// After generating the total prime count, the respective times of each child
// process and the total prime count are shown
print total prime number
print real time of child process
print user space time of child process
print kernel space time of child process
// When the two child processes terminate, the following respective times of the parent are shown
print real time of parent process
print user space time of parent process
print kernel space time of parent process
// End of algorithm
IMPLEMENTATION
Let us consider the Linux operating system for the implementation. Here each
process is identified by its process identifier, which is a unique integer. A new process
(the child process) is created by the fork system call. The child process consists of a
copy of the address space of the original process.
This mechanism allows the parent process to communicate easily with its child
process. Both processes (the parent and the child) continue execution at the instruction
after the fork system call, with one difference: The value of pid for the child process is
zero but for the parent is an integer value greater than zero.
This program uses the times() function, declared in the <sys/times.h> header file. It
stores the current process times in a struct tms, which is defined in <sys/times.h> as
follows:
struct tms {
    clock_t tms_utime;  /* user CPU time */
    clock_t tms_stime;  /* system (kernel) CPU time */
    clock_t tms_cutime; /* user CPU time of terminated child processes */
    clock_t tms_cstime; /* system CPU time of terminated child processes */
};
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/times.h>
#include <sys/wait.h>
void child_process(int process_no)
{
int i, n, prime, TotPrimeNo=0;
struct tms chpr_t1, chpr_t2;
long rchprt1, rchprt2;
int Rand;
rchprt1 = times(&chpr_t1); //start time of child process
printf("\n Processor time for child process (%d) = %ld", (process_no + 1), rchprt1);
printf("\n Created child process %d ", process_no + 1);
srand(process_no);
Rand = rand() % 10000;
printf(" \n Random number = %d\t", Rand);
// total prime no generation
for(n=0; n<Rand; n++)
{
prime = 1; // initialize
for(i=2; i<n; i++)
{
if(n % i == 0)
{
prime = 0;
break;
}
}
if(prime)
{
TotPrimeNo++;
}
}
printf("\nTotal Prime Nos=%d\n", TotPrimeNo);
rchprt2 = times(&chpr_t2); // end time of child process
printf("------------- The real time, user space and kernel space for child process are: ---------------");
printf("\n Real time for child process (%d) = %ld", (process_no + 1),
       (rchprt2 - rchprt1));
/* tms_utime/tms_stime give the child's own user and kernel time
   (tms_cutime/tms_cstime would count the child's terminated children) */
printf("\n User space time for child process(%d) = %ld", (process_no + 1),
       (chpr_t2.tms_utime - chpr_t1.tms_utime));
printf("\n kernel space time for child process(%d) = %ld\n\n", (process_no + 1),
       (chpr_t2.tms_stime - chpr_t1.tms_stime));
exit(0);
}
void CreateProcess()
{
int i;
for (i=0; i<2;i++)
{
if(fork( ) == 0)
child_process(i);
}
for (i = 0; i < 2; i++)
{
    wait(NULL); /* wait for both child processes to terminate */
}
}
int main()
{
    long p_mt1, p_mt2;
    struct tms p_mst1, p_mst2;
    p_mt1 = times(&p_mst1); //start time of a parent process
    CreateProcess();
    p_mt2 = times(&p_mst2); // end time of a parent process
    printf("------------- The real time, user space and kernel space for parent process are: -------\n");
    printf("\n Real time for parent process = %ld", p_mt2 - p_mt1);
    printf("\n User space time for parent process = %ld", (p_mst2.tms_utime -
           p_mst1.tms_utime));
    printf("\n Kernel space time for parent process = %ld\n\n", (p_mst2.tms_stime -
           p_mst1.tms_stime));
    return 0;
}
Output:
Processor time for child process (1) = 18847897
Created child process 1
Processor time for child process (2) = 18847897
Created child process 2
Random number = 289383
Total Prime No=25180
------------- The real time, user space and kernel space for child process are:
---------------
Real time for child process (2) = 1402
User space time for child process(2) = 0
kernel space time for child process(2) = 0
Random number = 289383
Total Prime No=25180
------------- The real time, user space and kernel space for child process are:
---------------
Real time for child process (1) = 1422
User space time for child process(1) = 0
kernel space time for child process(1) = 0
------------- The real time, user space and kernel space for parent process are:
-------
Real time for parent process = 1424
User space time for parent process = 0
Kernel space time for parent process = 0
INTRODUCTION
Interprocess communication (IPC) is a mechanism for processes to communicate and to
synchronize their actions. It provides a set of methods for the exchange of data among
multiple threads in one or more processes.
This project considers the producer-consumer problem: two processes, the producer and
the consumer, share a common, fixed-size buffer. The producer generates data and puts it
into the buffer, while the consumer removes it one piece at a time. The problem is to
make sure that the producer will not try to add data to a full buffer and that the consumer
will not try to remove data from an empty buffer.
DESIGN
This project can be divided mainly into four parts
1. Producer.
2. Consumer.
3. Bounded-Buffer.
4. Integration of all three.
Producer: The producer produces an item and stores it in the buffer. After storing the
item in the buffer, it increments the counter and notifies the consumer that an item is
available for consumption.
Consumer: The consumer consumes an item available in the buffer. After consuming an
item from the buffer, it decrements the counter and notifies the producer that a slot in
the buffer is free.
Bounded-Buffer: It works as the interface between the Producer and the Consumer. It
allows the producer to store the item in the buffer (shared space), also allows the
consumer to consume the item from the buffer (shared space).
Integration of all three: Finally, we integrate these three parts of the problem using the
synchronization techniques of Java.
Producer-Consumer Problem
Producer-consumer problem (also known as the bounded-buffer problem) is a classical
example of a multi-process synchronization problem. The problem describes two
processes, the producer and the consumer, who share a common, fixed-size buffer. The
producer's job is to generate a piece of data, put it into the buffer and start again. At the
same time the consumer is consuming the data (i.e., removing it from the buffer) one
piece at a time. The problem is to make sure that the producer won't try to add data into
the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
The solution for the producer is to either go to sleep or discard data if the buffer is full.
The next time the consumer removes an item from the buffer, it notifies the producer who
starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the
buffer to be empty. The next time the producer puts data into the buffer, it wakes up the
sleeping consumer. The solution can be reached by means of inter-process
communication, typically using semaphores. An inadequate solution could result in a
deadlock where both processes are waiting to be awakened. The problem can also be
generalized to have multiple producers and consumers.
The producer-consumer problem is a paradigm for cooperating processes: a producer
process produces information that is consumed by a consumer process.
An unbounded buffer places no practical limit on the size of the buffer.
A bounded buffer assumes a fixed buffer size.
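Since only fragments of the project's source appear in the implementation section, here is a compact, self-contained sketch of the bounded-buffer scheme just described, using Java's wait()/notify(); the buffer capacity and item count are arbitrary choices.

```java
import java.util.ArrayList;
import java.util.List;

public class BoundedBufferDemo {
    static final int CAPACITY = 2;
    static final List<Integer> buffer = new ArrayList<>();
    static final Object lock = new Object();

    static void put(int item) throws InterruptedException {
        synchronized (lock) {
            while (buffer.size() == CAPACITY) lock.wait(); // buffer full: sleep
            buffer.add(item);
            lock.notifyAll();                              // wake a sleeping consumer
        }
    }

    static int get() throws InterruptedException {
        synchronized (lock) {
            while (buffer.isEmpty()) lock.wait();          // buffer empty: sleep
            int item = buffer.remove(0);
            lock.notifyAll();                              // wake a sleeping producer
            return item;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) put(i); }
            catch (InterruptedException ignored) {}
        });
        final int[] sum = { 0 };
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) sum[0] += get(); }
            catch (InterruptedException ignored) {}
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
        System.out.println("sum of consumed items = " + sum[0]); // 1+2+3+4+5
    }
}
```

The while-loop around wait() is essential: a woken thread must re-check the buffer condition, since another thread may have changed it first.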
IMPLEMENTATION
Algorithm
Producer process:
    // item nextProduced;
    while (buffer_size == max_size) {
        ; /* do nothing */
        wait( );
    }
    add item into buffer;
    notify( );

Consumer process:
    // item to be consumed;
    while (buffer_size == 0) {
        ; /* do nothing */
        wait( );
    }
    remove item from buffer;
    notify( );
After the new thread is created, it will not start running until you call its start() method,
which is declared within Thread. In essence, start() executes a call to run().
The start() method is shown here:
void start()
Example:
// Create a second thread.
class NewThread implements Runnable {
Thread t;
NewThread( ) { // Create a new, second thread
t = new Thread(this, "Demo Thread");
System.out.println("Child thread: " + t);
t.start(); // Start the thread
}
// This is the entry point for the second thread.
public void run() {
try {
for(int i = 5; i > 0; i--) {
System.out.println("Child Thread: " + i);
Thread.sleep(500);
}
} catch (InterruptedException e) {
System.out.println("Child interrupted.");
}
System.out.println("Exiting child thread.");
}
}
class ThreadDemo {
public static void main(String args[]) {
new NewThread( ); // create a new thread
try {
for(int i = 5; i > 0; i--) {
System.out.println("Main Thread: " + i);
Thread.sleep(1000);
}
} catch (InterruptedException e) {
System.out.println("Main thread interrupted.");
}
System.out.println("Main thread exiting.");
}
}
Source Code:
import java.util.ArrayList;
import java.util.List;
class Producer_Consumer
{
public static final int BUFFER_SIZE = 4;
public static final int TOTAL_VALUES = 10;
class SharedBuffer
{
private List<Integer> buffer =
new ArrayList<Integer>(Producer_Consumer.BUFFER_SIZE);
public synchronized int get()
{
while (buffer.size() == 0)
{
try
{
wait();
}
catch (InterruptedException e) {}
}
notify();
return buffer.remove(0);
}
public synchronized void put(int message)
{
    while (buffer.size() == Producer_Consumer.BUFFER_SIZE)
    {
        try
        {
            wait();
        }
        catch (InterruptedException e) {}
    }
    buffer.add(message);
    notify();
}
} //end of class SharedBuffer
class Producer extends Thread
{
    private SharedBuffer shbuf;
    public Producer(SharedBuffer m)
    {
        shbuf = m;
    }
    // run() repeatedly calls shbuf.put(); the Consumer class, which calls
    // shbuf.get(), and the main method are not included in this report.
}
} //end of class Producer_Consumer
Introduction
A linear equation system is a set of linear equations to be solved simultaneously. A
system of n linear equations in n unknowns takes the form

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + ... + ann xn = bn

This system consists of n linear equations, each with n coefficients, and has n
unknowns, which have to fulfil the set of equations simultaneously. To simplify notation,
it is possible to rewrite the above equations in matrix notation:

    Ax = b

In this notation, each line forms a linear equation. The elements of the matrix A are the
coefficients of the equations, and the vectors x and b have the elements xi and bi
respectively.
There are many methods to solve a linear system of equations. Diagonal and triangular
systems and Gauss-Jordan elimination give exact solutions of linear systems, while the
Gauss-Seidel method, the successive over-relaxation method, and the Jacobi method give
approximate (iterative) solutions. In numerical linear algebra, the method of successive
over-relaxation (SOR) is a variant of the Gauss-Seidel method for solving a linear system
of equations, resulting in faster convergence. A similar method can be used for any
slowly converging iterative process.
SOR Technique:
Given a square system of n linear equations with unknown x:

    Ax = b

Then A can be decomposed into a diagonal component D, and strictly lower and upper
triangular components L and U:

    A = D + L + U

The method of successive over-relaxation is an iterative technique that solves the left
hand side of this expression for x, using the previous value for x on the right hand side.
Analytically, this may be written as:

    (D + wL) x(k+1) = w b - [w U + (w - 1) D] x(k)

where w is the relaxation factor. However, by taking advantage of the triangular form of
(D + wL), the elements of x(k+1) can be computed sequentially using forward
substitution:

    xi(k+1) = (1 - w) xi(k) + (w / aii) ( bi - sum over j<i of aij xj(k+1) - sum over j>i of aij xj(k) ),   i = 1, 2, ..., n
The choice of relaxation factor is not necessarily easy, and depends upon the properties of
the coefficient matrix. For symmetric, positive-definite matrices it can be proven that
0 < ω < 2 will lead to convergence, but we are generally interested in faster convergence
rather than just convergence.
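To make the iteration concrete, the following Java sketch applies SOR to the 2x2 system used in the sample run below (A = [[2, 3], [4, 9]], b = [6, 15], starting guess (1, 1)). With w = 1 the method reduces to Gauss-Seidel, and for this particular system it reaches the exact solution x = (1.5, 1) in a single sweep:

```java
public class SorDemo {
    // One SOR sweep: updates x in place using relaxation factor w.
    static void sweep(double[][] a, double[] b, double[] x, double w) {
        int n = b.length;
        for (int i = 0; i < n; i++) {
            double sum = 0;
            for (int j = 0; j < n; j++)
                if (j != i) sum += a[i][j] * x[j]; // uses already-updated x[j] for j < i
            x[i] = (1 - w) * x[i] + (w / a[i][i]) * (b[i] - sum);
        }
    }

    // Largest residual |b - Ax|, used as the convergence test.
    static double residual(double[][] a, double[] b, double[] x) {
        double worst = 0;
        for (int i = 0; i < b.length; i++) {
            double s = 0;
            for (int j = 0; j < b.length; j++) s += a[i][j] * x[j];
            worst = Math.max(worst, Math.abs(b[i] - s));
        }
        return worst;
    }

    public static void main(String[] args) {
        double[][] a = { { 2, 3 }, { 4, 9 } };
        double[] b = { 6, 15 };
        double[] x = { 1, 1 };          // starting guess, as in the sample run
        double w = 1.0;                 // w = 1: plain Gauss-Seidel
        int sweeps = 0;
        while (residual(a, b, x) > 0.0001) {
            sweep(a, b, x, w);
            sweeps++;
        }
        System.out.printf("x = (%.4f, %.4f) after %d sweep(s)%n", x[0], x[1], sweeps);
    }
}
```

Checking by hand: x1 = (6 - 3*1)/2 = 1.5, then x2 = (15 - 4*1.5)/9 = 1, which satisfies both equations exactly.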
Implementation
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#define A(x,y) A[(u+1)*(x)+y]
#define PRECISION 0.0001
int u;
int is_converged(int n, float *A, float *B, float *p)
{
    float *sum;
    int i, j, converged = 1;
    sum = (float *)malloc(n * sizeof(float));
    for(i = 0; i < n; ++i)
        sum[i] = 0;
    for(i = 0; i < n; ++i)
    {
        for(j = 0; j < n; ++j)
            sum[i] += A(i,j) * p[j];
        sum[i] = fabs(B[i] - sum[i]);
    }
    for(i = 0; i < n; ++i)
    {
        /* not converged if any residual is off by more than PRECISION */
        if(sum[i] > PRECISION)
            converged = 0;
    }
    free(sum); /* avoid leaking on every convergence check */
    return converged;
}
/* One SOR sweep, updating p in place (the signature was missing in the
   source; the name sor_iteration is used here) */
void sor_iteration(int n, float *A, float *B, float *p, float w)
{
float sum;
int i,j;
for(i=0;i < n; ++i)
{
sum=0;
for(j=0;j < i;++j)
{
sum += A(i,j)* p[j];
}
for (j = i+1;j < n; ++j)
{
sum += A(i,j) * p[j];
}
p[i] += w * ( ((B[i]-sum)/A(i,i)) - p[i]);
}
}
//Main//
int main(int argc, char* argv[])
{
float *A,*p,*B;
float w;
int shmid;
key_t key;
int i,j;
printf("\nPlease input the number of lines:");
scanf("%d",&u);
A=(float *)calloc(u*(u+1),sizeof(float));
B=(float *)malloc(u*sizeof(float));
printf("Please input the matrix A(%dx%d):\n", u, u); //n*n
for(i = 1; i <= u; ++i)
{
printf("\nPlease input line %d:",i);
for(j=1;j<=u;++j)
scanf("%f",&A(i-1,j-1));
}
printf("\nPlease input matrix B(1x%d):",u);
for(i=0;i<u;++i)
scanf("%f",&B[i]);
/* shared memory key */
key = 5678;
/* Create the shared memory segment for storing the results of SOR iterations,
   so that updates made by the child processes are visible to the parent. */
shmid = shmget(key, u * sizeof(float), IPC_CREAT | 0666);
p = (float *) shmat(shmid, NULL, 0);
/* Read the starting guess values into the shared memory. This will be used by
   the child process computing the first SOR iteration. */
printf("\nPlease input the start guess (1x%d): ", u);
for(i = 0; i < u; ++i)
    scanf("%f", &p[i]);
printf("\nPlease input the relax VAR(w): ");
scanf("%f", &w);
while (!is_converged(u, A, B, p))
{
    if (fork() == 0)
    {
        /* Child: perform one SOR sweep on the shared vector, then exit. */
        sor_iteration(u, A, B, p, w);
        exit(0);
    }
    wait(NULL); /* parent waits for the sweep to finish */
}
printf("\nSolution:");
for(i = 0; i < u; ++i)
    printf(" %f", p[i]);
printf("\n");
shmdt(p);
shmctl(shmid, IPC_RMID, NULL);
return 0;
}
Output:
Please input the number of lines:2
Please input the matrix A(2x2):
Please input line 1:2 3
Please input line 2:4 9
Please input matrix B(1x2):6 15
Please input the start guess (1x2):1 1
5. INTRODUCTION
Remote Procedure Call (RPC) is a powerful technology for creating distributed
client/server programs. The RPC run-time stubs and libraries manage most of the
processes related to network protocols and communication. This enables you to focus on
the details of the application rather than the details of the network.
Examples: File service, Database service, and Authentication service.
Need of RPC: The client needs an easy way to call the procedures of the server to get
some services.
RPC enables clients to communicate with servers by calling procedures in a
similar way to the conventional use of procedure calls in high-level languages.
RPC is modeled on the local procedure call, but the called procedure is executed
in a different process and usually on a different computer.
Different RPC Systems:
Sun RPC
DCE RPC
DCOM
CORBA
Java RMI
XML RPC, SOAP/.NET, AJAX, REST
Sun RPC: An alternative to sockets is RPC. RPC enables an application to call
procedures that exist on another machine. RPC makes a network connection to a remote
machine using sockets. After the connection is made with the server, one can invoke a
function on the remote machine. The programmer has the illusion of calling a local
procedure, but actually the arguments are packaged and passed to the remote machine.
RPC is easier to use than sockets. However, there are some drawbacks to using RPC:
You can pass only simple data types, not objects, as arguments.
The programmer must learn a special Interface Definition Language (IDL).
Remote Interface
A remote interface is an interface that declares a set of methods that may be invoked
remotely (from a different JVM). All interactions with the remote object will be
performed through this interface. A remote interface must directly or indirectly extend the
java.rmi.Remote interface.
RMI Layers
The RMI system is organized as a four-layer model. Each layer performs specific
functions, such as establishing the connection, marshalling of parameters, etc. Each layer
insulates the layers above it from some of the details.
The four layers are:
Application layer
Proxy layer
Remote reference layer
Transport layer
RMI API
Implementation
//**SERVER PROGRAM**//
import org.apache.xmlrpc.server.PropertyHandlerMapping;
import org.apache.xmlrpc.webserver.WebServer;

public class RpcServer {
    // Handler method exposed to clients as "math.sum"; the method name is
    // inferred from the client output shown below.
    public Integer sum(int x, int y) {
        return x + y;
    }
    public static void main(String[] args) {
        try {
            WebServer server = new WebServer(8080);
            PropertyHandlerMapping phm = new PropertyHandlerMapping();
            phm.addHandler("math", RpcServer.class);
            server.getXmlRpcServer().setHandlerMapping(phm);
            server.start();
            System.out.println("Started successfully.");
            System.out.println("Accepting requests. (Halt program to stop.)");
        }
        catch (Exception exception) {
            System.err.println("JavaServer: " + exception);
        }
    }
}
//**CLIENT PROGRAM**//
import java.net.URL;
import java.util.Vector;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class RpcClient {
    public static void main(String[] args) {
        try {
            XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
            config.setServerURL(new URL("http://" + args[0] + "/"));
            XmlRpcClient client = new XmlRpcClient();
            client.setConfig(config);
            Vector<Integer> params = new Vector<Integer>();
            params.add(Integer.parseInt(args[1]));
            params.add(Integer.parseInt(args[2]));
            Object result = client.execute("math.sum", params);
            System.out.println("The sum is: " + result);
        }
        catch (Exception exception) {
            System.err.println("JavaClient: " + exception);
        }
    }
}
Client:
-------
$ java RpcClient localhost:8080 17 13
The sum is: 30
$ java RpcClient localhost:8080 54 23
The sum is: 77