Module - II Software
Definition of Software
What is Software: Computer software, or simply software, is a set of programs that enables a user to perform specific tasks or to operate a computer. It directs all the peripheral devices of the computer system – what to do and how to perform a task. Software plays the role of mediator between the user and the computer hardware. Without software, a user cannot perform any task on a digital computer.
A computer system can be divided into three components: the hardware, the software, and the users. Software can be further divided into two main parts: application software and system software. Bare hardware is not easy to use directly, so software was created to make it easier.
Types of Software
Software is mainly divided into two categories: application software and system software.
Application Software
System Software
Generally, the user does not interact with the system software directly. The user interacts with the GUI created by the system software, and through this GUI the user interacts with the applications installed on the system.
Some examples of system software are operating systems, compilers, interpreters, and assemblers.
Hardware
Hardware represents the physical and tangible components of the computer, i.e. the components that can be seen and touched.
Definition of Multiprogramming Operating System
To overcome the problem of under-utilization of the CPU and main memory, multi-programming was introduced. Multi-programming is the interleaved execution of multiple jobs by the same computer.
In a multi-programming system, when one program is waiting for an I/O transfer, another program is ready to utilize the CPU, so it is possible for several jobs to share the CPU's time. It is important to note that multi-programming does not mean the execution of jobs at the same instant of time. Rather, it means that a number of jobs are available to the CPU (placed in main memory), and a portion of one is executed, then a segment of another, and so on. A simple process of multi-programming is shown in the figure.
As shown in the figure, at a particular instant job 'A' is not utilizing the CPU because it is busy with I/O operations, so the CPU executes job 'B'. Another job, 'C', is waiting for the CPU to get its execution time. In this way the CPU is never idle and is utilized to the maximum.
A time-sharing system provides direct access to a large number of users, with CPU time divided among all the users on a scheduled basis. The OS allocates a slice of time to each user; when this time expires, it passes control to the next user on the system. The time allowed is extremely small, so the users get the impression that they each have their own CPU and are its sole owner. This short period of time during which a user gets the attention of the CPU is known as a time slice or a quantum. For example, with the six users shown in the figure and a quantum of, say, 100 ms, each user would get the CPU roughly once every 600 ms. The concept of a time-sharing system is shown in the figure.
In the above figure, user 5 is active while users 1, 2, 3, and 4 are in the waiting state and user 6 is in the ready state.
As soon as the time slice of user 5 is completed, control moves on to the next ready user, i.e. user 6. In this state users 2, 3, 4, and 5 are in the waiting state and user 1 is in the ready state. The process continues in the same way, and so on.
Time-shared systems are more complex than multi-programming systems. In time-shared systems multiple processes are managed simultaneously, which requires adequate management of main memory so that processes can be swapped in or swapped out within a short time.
Locking system: In order to provide safe access to resources shared among multiple processors, they need to be protected by a locking scheme. The purpose of locking is to serialize accesses to the protected resource by multiple processors. Undisciplined use of locking, however, can severely degrade the performance of the system. This form of contention can be reduced by using a more fine-grained locking scheme, avoiding long critical sections, replacing locks with lock-free algorithms, or, whenever possible, avoiding sharing altogether.
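As a minimal illustrative sketch (not tied to any particular operating system), the following C fragment uses a POSIX mutex to serialize access to a shared counter; keeping the critical section short limits how long each thread holds the lock:
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                        /* shared resource            */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* serialize access           */
        counter++;                              /* short critical section     */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);         /* always 200000              */
    return 0;
}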
Shared data: Concurrent accesses to shared data items by multiple processors (with at least one of them writing) are serialized by the cache coherence protocol. Even in a moderate-scale system, serialization delays can have a significant impact on system performance. In addition, bursts of cache coherence traffic can saturate the memory bus or the interconnection network, which also slows down the entire system. This form of contention can be eliminated either by avoiding sharing or, when this is not possible, by using replication techniques to reduce the rate of write accesses to the shared data.
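As a hedged sketch of the replication idea, each thread below updates its own counter, so frequent writes stay local to one processor's cache, and the values are combined only when the total is read; the names and sizes are illustrative, not part of any standard API:
#define MAX_THREADS 8

/* One counter per thread: frequent writes do not contend on a single word. */
static long per_thread_count[MAX_THREADS];

void count_event(int thread_id) {
    per_thread_count[thread_id]++;        /* local write, no write sharing    */
}

long read_total(void) {
    long total = 0;
    for (int i = 0; i < MAX_THREADS; i++)
        total += per_thread_count[i];     /* sharing happens only on the read */
    return total;
}
Note that because these counters sit next to each other in one array, they may still end up on the same cache line; that is exactly the false sharing discussed next, and is why practical implementations pad each counter.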
False sharing: This form of contention arises when unrelated data items used by different processors are located next to each other in memory and therefore share a single cache line. The effect of false sharing is the same as that of regular sharing: bouncing of the cache line among several processors. Fortunately, once it is identified, false sharing can be easily eliminated by adjusting the memory layout of the non-shared data.
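A common fix, sketched below in C11, is to pad or align each processor's private data to the size of a cache line (64 bytes is assumed here; the real line size is machine-dependent) so that unrelated items no longer share a line:
#include <stdalign.h>                    /* C11 alignas                       */

#define CACHE_LINE 64                    /* assumed cache-line size           */

/* Each per-processor counter occupies its own cache line, so a write by one
 * processor does not invalidate the line holding another processor's counter. */
struct padded_counter {
    alignas(CACHE_LINE) long value;
    char pad[CACHE_LINE - sizeof(long)];
};

static struct padded_counter counters[8];   /* one slot per processor        */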
Apart from eliminating bottlenecks in the system, a multiprocessor operating system developer should provide support for efficiently running user applications on the multiprocessor. Some aspects of such support include mechanisms for task placement and migration across processors, physical memory placement that ensures most of the memory pages used by an application are located in local memory, and scalable multiprocessor synchronization primitives.
Multitasking OS vs Multi-programming OS
• Multitasking OS: the program is divided into fixed-size pages. Multi-programming OS: the whole program is loaded into memory.
• Multitasking OS: context switching takes place after a fixed interval of time. Multi-programming OS: no fixed time interval is considered.
Process
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and it becomes a process, it can be divided into four sections – stack, heap, text, and data. The following image shows a simplified layout of a process inside main memory:
1. Stack – The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
2. Heap – This is memory that is dynamically allocated to the process during its run time.
3. Text – This includes the compiled program code, together with the current activity represented by the value of the program counter and the contents of the processor's registers.
4. Data – This section contains the global and static variables.
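As an illustrative sketch (the exact placement is compiler- and OS-dependent), the following C fragment notes where each kind of object typically resides in this layout:
#include <stdlib.h>

int global_count = 0;               /* data section: global variable          */
static int config_flag = 1;         /* data section: static variable          */

int add(int a, int b) {             /* machine code of add() lives in text    */
    int sum = a + b;                /* stack: parameters and local variable   */
    return sum;
}

int main(void) {
    int local = add(2, 3);          /* stack: local variable                  */
    int *buffer = malloc(100 * sizeof(int));   /* heap: run-time allocation   */
    buffer[0] = local + global_count + config_flag;
    free(buffer);
    return 0;
}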
Program
A program is a piece of code which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language:
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when executed by a
computer. When we compare a program with a process, we can conclude that a process is a dynamic
instance of a computer program.
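To make the distinction concrete, the sketch below (assuming a POSIX system, since fork() is not part of standard C) shows one program giving rise to two processes, each a separate dynamic instance of the same code:
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();             /* duplicate the current process          */
    if (pid == 0) {
        printf("child  process, pid = %d\n", getpid());
    } else {
        printf("parent process, pid = %d\n", getpid());
        wait(NULL);                 /* wait for the child to finish           */
    }
    return 0;                       /* both processes ran the same program    */
}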
A part of a computer program that performs a well-defined task is known as an algorithm. A collection of computer programs, libraries, and related data is referred to as software.
Process Life Cycle
When a process executes, it passes through different states:
1. Start – This is the initial state when a process is first started/created.
2. Ready – The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.
3. Running – Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting – The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input or for a file to become available.
5. Terminated or Exit – Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every process. It keeps all the information needed to keep track of a process, including the following:
1. Process State – The current state of the process, i.e. whether it is ready, running, waiting, etc.
2. Process Privileges – This is required to allow/disallow access to system resources.
3. Process ID – Unique identification for each process in the operating system.
4. Pointer – A pointer to the parent process.
5. Program Counter – The program counter is a pointer to the address of the next instruction to be executed for this process.
6. CPU Registers – The contents of the various CPU registers, which must be saved when the process leaves the running state so that it can resume execution correctly.
7. CPU Scheduling Information – Process priority and other scheduling information required to schedule the process.
8. Memory Management Information – This includes information such as the page table, memory limits, and segment table, depending on the memory system used by the operating system.
9. Accounting Information – This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O Status Information – This includes the list of I/O devices allocated to the process.
The architecture of a PCB is entirely dependent on the operating system and may contain different information in different operating systems. Here is a simplified diagram of a PCB:
The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
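As a rough, hypothetical sketch of what such a structure might look like in C (real kernels use far larger structures, e.g. Linux's task_struct; every field name below is illustrative only):
typedef enum { START, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Hypothetical, simplified PCB; field names are illustrative only. */
struct pcb {
    int            pid;             /* unique process ID                      */
    proc_state_t   state;           /* current state of the process           */
    struct pcb    *parent;          /* pointer to the parent process          */
    unsigned long  program_counter; /* address of the next instruction        */
    unsigned long  registers[16];   /* saved CPU register contents            */
    int            priority;        /* CPU scheduling information             */
    void          *page_table;      /* memory management information          */
    unsigned long  cpu_time_used;   /* accounting information                 */
    int            open_devices[8]; /* I/O status information                 */
};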