Project Report of Operating System
&TECHNOLOGY BHOPAL
SESSION 2010-2011
SEMINAR/GROUP DISCUSSION
On
“OPERATING SYSTEM”
SUBMITTED TO: PROF. SHAHEEN AYYUB
SUBMITTED BY: INDRAJEET GOUR
The 1960s definition of an operating system is "the software that controls the
hardware". Today, however, due to microcode we need a better definition. We see an
operating system as the programs that make the hardware usable. In brief, an
operating system is the set of programs that controls a computer. Some examples of
operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago,
OS/2, MacOS, VMS, MVS, and VM.
Operating Systems are resource managers. The main resource is computer hardware in
the form of processors, storage, input/output devices, communication devices, and
data. Some of the operating system functions are: implementing the user interface,
sharing hardware among users, allowing users to share data among themselves,
preventing users from interfering with one another, scheduling resources among users,
facilitating input/output, recovering from errors, accounting for resource usage,
facilitating parallel operations, organizing data for secure and rapid access, and
handling network communications.
Modern operating systems generally have three major goals: to hide the details of the
hardware by creating abstractions, to allocate resources to processes, and to provide a
pleasant and effective user interface. Operating systems generally accomplish these
goals by running processes in a low-privilege state and providing service calls that
invoke the operating system kernel in a high-privilege state.
The notion of process is central to the understanding of operating systems. There are
quite a few definitions presented in the literature, but no "perfect" definition has yet
appeared.
Definition
The term "process" was first used by the designers of MULTICS in the 1960s. Since
then, the term process has been used somewhat interchangeably with 'task' or 'job'. The
process has been given many definitions, for instance:
A program in Execution.
An asynchronous activity.
The 'animated spirit' of a procedure in execution.
The entity to which processors are assigned.
The 'dispatchable' unit.
and many more definitions have been given. As we can see from the above, there is no
universally agreed-upon definition, but the definition "Program in Execution" seems to
be the most frequently used, and this is the concept we will use in the present study of
operating systems.
Now that we have agreed upon the definition of process, the question is: what is the
relation between process and program? Is it the same beast with a different name, such
that when this beast is sleeping (not executing) it is called a program, and when it is
executing it becomes a process? Well, to be very precise, a process is not the same as a
program. In the following discussion we point out some of the differences between
process and program, as we have mentioned earlier.
A process is not the same as a program: a process is more than the program code. A
process is an 'active' entity, as opposed to a program, which is considered a 'passive'
entity. As we all know, a program is an algorithm expressed in some suitable notation
(e.g., a programming language). Being passive, a program is only a part of a process. A
process, on the other hand, also includes the execution state described next.
Process State
The process state consists of everything necessary to resume the process's execution if
it is somehow put aside temporarily. The process state consists of at least the following:
The current state of the process i.e., whether it is ready, running, waiting, or
whatever.
Unique identification of the process in order to track "which is which"
information.
A pointer to parent process.
Similarly, a pointer to child process (if it exists).
The priority of process (a part of CPU scheduling information).
Pointers to locate the memory of the process.
A register save area.
The processor it is running on.
The process control block (PCB) is a store that allows the operating system to locate
key information about a process. Thus, the PCB is the data structure that defines a
process to the operating system.
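The fields listed above can be sketched as a simple data structure. This is only an illustrative model, not the layout of any real kernel's PCB; all field names here are made up for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PCB:
    """Simplified process control block; fields mirror the list above."""
    pid: int                            # unique identification of the process
    state: str = "ready"                # ready, running, waiting, ...
    parent: Optional["PCB"] = None      # pointer to the parent process
    children: List["PCB"] = field(default_factory=list)  # child processes
    priority: int = 0                   # CPU-scheduling information
    memory_base: int = 0                # locates the process's memory
    memory_limit: int = 0
    registers: dict = field(default_factory=dict)        # register save area
    cpu: Optional[int] = None           # processor it is running on

# a parent process spawning a child
init = PCB(pid=1, state="running")
child = PCB(pid=2, parent=init, priority=5)
init.children.append(child)
```

With such a structure, the operating system can locate everything it needs to suspend and later resume a process.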
Threads
Processes Vs Threads
Why Threads?
User-Level Threads
Kernel-Level Threads
Advantages of Threads over Multiple Processes
Disadvantages of Threads over Multiprocesses
Application that Benefits from Threads
Application that cannot benefit from Threads
Resources used in Thread creation and Process Creation
Context Switch
Major Steps of Context Switching
Action of Kernel to Context switch among threads
Action of kernel to Context switch among processes
Threads
Despite the fact that a thread must execute within a process, the process and its
associated threads are different concepts. Processes are used to group resources
together; threads are the entities scheduled for execution on the CPU.
A thread is a single sequential stream of execution within a process. Because threads
have some of the properties of processes, they are sometimes called lightweight
processes. Threads allow multiple streams of execution within one process. In many
respects, threads are a popular way to improve applications through parallelism. The
CPU switches rapidly back and forth among the threads, giving the illusion that the
threads are running in parallel. Like a traditional process, i.e., a process with one
thread, a thread can be in any of several states (Running, Blocked, Ready, or
Terminated). Each thread has its own stack, since each thread will generally call
different procedures and thus have a different execution history. In an operating system
with a thread facility, the basic unit of CPU utilization is a thread. A thread consists of
a program counter (PC), a register set, and a stack space. Threads are not independent
of one another the way processes are; a thread shares with the other threads of its
process (also known as a task) the code section, data section, and OS resources such as
open files and signals.
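The point that threads share the data of their process while each runs its own stream of execution can be illustrated with a short sketch, here using Python's `threading` module as a stand-in for the OS thread facility:

```python
import threading

counter = 0                  # data shared by all threads of the process
lock = threading.Lock()      # shared data must be protected against races

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # only one thread updates the counter at a time
            counter += 1

# four threads in one process, all touching the same global data
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:            # each thread has its own stack and execution
    t.join()                 # history, yet all see the same `counter`
print(counter)               # 40000
```

No interprocess communication is needed here precisely because the threads live in one address space; that is both their appeal and, as noted later, their danger.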
Processes Vs Threads
As we mentioned earlier, in many respects threads operate in the same way as
processes. Some of the similarities and differences are:
Similarities
Like processes, threads share the CPU, and only one thread is active (running) at a
time.
Like processes, threads within a process execute sequentially.
Like processes, a thread can create children.
And like a process, if one thread is blocked, another thread can run.
Differences
Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, which may or may not assist one another, threads are designed to
assist one another.
Why Threads?
Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a print server.
2. Because threads can share common data, they do not need to use interprocess
communication.
3. By their very nature, threads can take advantage of multiprocessors.
Threads are cheap to create and switch, but this cheapness does not come free: the
biggest drawback is that there is no protection between threads.
User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system calls,
so thread switching does not need to call the operating system or cause an interrupt to
the kernel. In fact, the kernel knows nothing about user-level threads and manages them
as if they were single-threaded processes.
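As an illustrative sketch (not how any real thread library is built), user-level threading can be mimicked with Python generators: each "thread" yields control voluntarily, and a library-level round-robin loop resumes them, with the kernel playing no part in the switching.

```python
from collections import deque

def run(tasks):
    """A tiny user-level round-robin scheduler: resumes each generator in
    turn. The kernel never sees these 'threads' as separate entities."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run the task until it yields
            ready.append(task)         # still runnable: back of the queue
        except StopIteration:
            pass                       # task finished
    return trace

def thread(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # yield = voluntary context switch

trace = run([thread("A", 2), thread("B", 3)])
print(trace)   # ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Notice that the switching logic is ordinary user code; this is why user-level thread switches are so cheap, and also why a single blocking call would stall the whole scheduler.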
Advantages:
The most obvious advantage of this technique is that a user-level threads package can
be implemented on an operating system that does not support threads. Thread
switching is also fast, since it requires no trap into the kernel.
Disadvantages:
There is a lack of coordination between the threads and the kernel: if one user-level
thread makes a blocking system call, the entire process blocks, taking all of its other
threads with it.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime system is
needed in this case. Instead of a thread table in each process, the kernel has a thread
table that keeps track of all threads in the system. In addition, the kernel also
maintains the traditional process table to keep track of processes. The kernel provides
system calls to create and manage threads.
Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to
give more time to a process having a large number of threads than to a process
having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
Kernel-level threads are slow and inefficient: thread operations are hundreds of
times slower than their user-level counterparts.
Since the kernel must manage and schedule threads as well as processes, it
requires a full thread control block (TCB) for each thread to maintain
information about threads. As a result, there is significant overhead and
increased kernel complexity.
CPU/Process Scheduling
In this section we try to answer the following question: what does the scheduler try to achieve?
General Goals
Fairness
Fairness is important under all circumstances. A scheduler makes sure that each
process gets its fair share of the CPU and that no process suffers indefinite
postponement. Note that giving equivalent or equal time is not fair; think of safety
control and payroll at a nuclear plant.
Policy Enforcement
The scheduler has to make sure that the system's policy is enforced. For example, if
the local policy is safety, then the safety-control processes must be able to run
whenever they want to, even if it means a delay in payroll processes.
Efficiency
The scheduler should keep the system (or in particular the CPU) busy one hundred
percent of the time when possible. If the CPU and all the input/output devices can be
kept running all the time, more work gets done per second than if some components
are idle.
Response Time
A scheduler should minimize the response time for interactive users.
Turnaround
A scheduler should minimize the time batch users must wait for an output.
Throughput
A scheduler should maximize the number of jobs processed per unit time.
A little thought will show that some of these goals are contradictory. It can be shown
that any scheduling algorithm that favors some class of jobs hurts another class of
jobs. The amount of CPU time available is finite, after all.
The Scheduling algorithms can be divided into two categories with respect to how
they deal with clock interrupts.
Nonpreemptive Scheduling
A scheduling discipline is nonpreemptive if, once a process has been given the CPU,
the CPU cannot be taken away from that process.
1. In a nonpreemptive system, short jobs are made to wait by longer jobs, but the
overall treatment of all processes is fair.
2. In a nonpreemptive system, response times are more predictable because
incoming high-priority jobs cannot displace waiting jobs.
3. In nonpreemptive scheduling, a scheduler executes jobs in the following two
situations:
a. When a process switches from running state to the waiting state.
b. When a process terminates.
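A nonpreemptive discipline such as first-come-first-served (FCFS) can be sketched in a few lines; the jobs and burst times below are made-up example values:

```python
def fcfs(burst_times):
    """First-come-first-served: nonpreemptive, so each job keeps the CPU
    until it finishes. Returns the per-job waiting times."""
    waits, clock = [], 0
    for burst in burst_times:   # jobs are served strictly in arrival order
        waits.append(clock)     # a job waits while all earlier jobs run
        clock += burst          # the CPU cannot be taken away mid-burst
    return waits

# three jobs arriving at time 0 with CPU bursts 24, 3, and 3: the short
# jobs wait behind the long one
waits = fcfs([24, 3, 3])
print(waits)                     # [0, 24, 27]
print(sum(waits) / len(waits))   # average waiting time: 17.0
```

Had the short jobs run first, the average waiting time would drop sharply, which is exactly the fairness-versus-efficiency tension discussed above.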
Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the
CPU can be taken away from it.
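For contrast with FCFS, preemptive round-robin scheduling can be simulated as follows; the quantum and burst times are arbitrary example values:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Preemptive round robin: the clock interrupt takes the CPU away after
    `quantum` time units and the job rejoins the back of the ready queue.
    Returns each job's completion time, in job order."""
    ready = deque(enumerate(burst_times))
    finish, clock = {}, 0
    while ready:
        job, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((job, remaining - run))  # preempted, not finished
        else:
            finish[job] = clock                   # job completed
    return [finish[j] for j in sorted(finish)]

# the same three jobs as the FCFS example, with a quantum of 4
print(round_robin([24, 3, 3], 4))   # [30, 7, 10]
```

The short jobs now finish at times 7 and 10 instead of 27 and 30: preemption trades some overhead for much better response to short and interactive jobs.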
Deadlock
A set of processes is in a deadlock state if each process in the set is waiting for an
event that can be caused only by another process in the set. In other words, each
member of the set of deadlocked processes is waiting for a resource that can be
released only by a deadlocked process. None of the processes can run, none of them
can release any resources, and none of them can be awakened. It is important to note
that the number of processes and the number and kind of resources possessed and
requested are unimportant.
The resources may be either physical or logical. Examples of physical resources are
Printers, Tape Drives, Memory Space, and CPU Cycles. Examples of logical
resources are Files, Semaphores, and Monitors.
The simplest example of deadlock is where process 1 has been allocated a non-
shareable resource A, say, a tape drive, and process 2 has been allocated a non-
shareable resource B, say, a printer. Now, if it turns out that process 1 needs resource
B (the printer) to proceed and process 2 needs resource A (the tape drive) to proceed,
and these are the only two processes in the system, each blocks the other and all
useful work in the system stops. This situation is termed deadlock. The system is in a
deadlock state because each process holds a resource being requested by the other
process, and neither process is willing to release the resource it holds.
Deadlock can arise only if the following four conditions hold simultaneously:
1. Mutual Exclusion Condition
The resources involved are non-shareable: only one process at a time can use a
resource.
2. Hold and Wait Condition
A process holding at least one resource is waiting to acquire additional resources
held by other processes.
3. No-Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be forcibly taken from a process; they are released
only when the process has used them to completion or gives them up voluntarily.
4. Circular Wait Condition
The processes in the system form a circular list or chain where each process in
the list is waiting for a resource held by the next process in the list.
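One common way to break the circular-wait condition is to impose a single global order on resource acquisition. The sketch below revisits the tape-drive/printer example, using Python locks as stand-in resources and threads as stand-in processes; because both "processes" acquire the tape before the printer, no cycle of waiting can form:

```python
import threading

tape = threading.Lock()      # resource A of the example
printer = threading.Lock()   # resource B of the example

def job(name, results):
    # Every process acquires resources in the same global order
    # (tape first, then printer), so a circular wait is impossible.
    with tape:
        with printer:
            results.append(name)

results = []
workers = [threading.Thread(target=job, args=(n, results)) for n in ("p1", "p2")]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(results))   # ['p1', 'p2'] -- both jobs complete, no deadlock
```

Had one job taken the printer first and then waited for the tape while the other did the reverse, the two could block each other forever, exactly as in the prose example.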
As an analogy, consider traffic at an intersection: the simple rule to avoid traffic
deadlock is that a vehicle should only enter the intersection if it is assured that it will
not have to stop inside it.
It is not possible to have a deadlock involving only one single process. Deadlock
involves a circular "hold-and-wait" condition among two or more processes, so one
process cannot hold a resource yet be waiting for another resource that it itself holds.
In addition, in this resource model deadlock is not possible between two threads in a
process, because it is the process, not the thread, that holds resources; that is, each
thread has access to the resources held by the process.