Operating System Unit 1
A computer system comprises hardware and software. The hardware can only understand machine code (sequences of 0s and 1s), which makes no sense to an ordinary user.
In a batch operating system, access is given to more than one user; the users submit their respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then executes them one by one. The users collect their respective output once all the jobs have been executed.
The main purpose of this operating system was to transfer control from one job to the next as soon as a job completed. It contained a small set of programs, called the resident monitor, that always resided in one part of main memory; the remaining part was used for servicing jobs.
Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept
busy. Each process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process performs its I/O, the CPU can start executing other processes. Multiprogramming therefore improves the efficiency of the system.
Advantages of Multiprogramming OS
o Throughput of the system increases, as the CPU always has some program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various system resources are used efficiently, but they do not provide any user interaction with the computer system.
In Multiprocessing, Parallel computing is achieved. More than one processor present in the
system can execute more than one process simultaneously, which will increase the
throughput of the system.
When the first electronic computers were developed in the 1940s, they were created without any operating system. In those early days, users had full access to the machine and wrote a program for each task in absolute machine language. Programmers could perform and solve only simple mathematical calculations in that generation, and such calculations did not require an operating system.
The first operating system (OS) was created in the early 1950s and was known as GMOS. General Motors developed this OS for an IBM computer. Second-generation operating systems were based on single-stream batch processing: similar jobs were collected into groups or batches, submitted to the machine on punched cards, and completed one after another. On each job's completion (whether normal or abnormal), control transferred back to the operating system, which cleaned up after the finished job and then read and initiated the next job from the punched cards. The new machines of this era were called mainframes; they were very large and were used by professional operators.
During the late 1960s, operating system designers developed systems that could perform multiple tasks on a single computer simultaneously, a technique called multiprogramming. The introduction of multiprogramming played a very important role in the development of operating systems, as it allows the CPU to be kept busy at all times by switching among different tasks. The third generation also saw the phenomenal growth of minicomputers, starting in 1961 with the DEC PDP-1. These PDPs led to the creation of personal computers in the fourth generation.
Each CPU in a distributed-memory multiprocessor has its own private memory and uses local data to accomplish its computational tasks. A processor may use the bus to communicate with other processors or to access main memory when remote data is required.
Advantages and disadvantages of Multiprocessor System
There are various advantages and disadvantages of the multiprocessor system. Some
advantages and disadvantages of the multiprocessor system are as follows:
Advantages
There are various advantages of the multiprocessor system. Some advantages of the
multiprocessor system are as follows:
1. It is a very reliable system, because multiple processors can share the work between them and complete it collaboratively.
2. Parallel processing is achieved via multiprocessing.
3. When multiple processors work at the same time, the throughput may increase.
4. Multiple processes can make progress at the same time on different processors.
Disadvantages
There are various disadvantages of the multiprocessor system. Some disadvantages of the
multiprocessor system are as follows:
1. It requires a complex configuration and a more sophisticated operating system.
2. It is more expensive than a single-processor system because of the additional CPUs.
3. The processors may contend for shared resources such as memory and the bus.
The software techniques used to exploit the cores in a multicore system are largely responsible for the system's performance. Extra focus has been put on developing software that can execute in parallel, because parallel execution is exactly what the many cores are there to provide.
Advantages
There are various advantages of the multicore system. Some advantages of the multicore
system are as follows:
1. Cores on the same chip can communicate faster than separate processors on a shared bus.
2. A multicore chip consumes less power and board space than the equivalent number of single-core chips.
3. Multiple threads of a process can run truly in parallel on different cores.
Disadvantages
There are various disadvantages of the multicore system. Some disadvantages of the
multicore system are as follows:
1. Software must be written to run in parallel in order to benefit from the additional cores.
2. The cores share chip resources such as caches, which can cause contention.
3. If the chip fails, all of the cores fail together.
Reliability: A multiprocessor system is more reliable than a multicore system; if one of the processors fails, the other processors are not affected. A multicore system is less reliable than a multiprocessor system, because all of its cores sit on a single chip.
Traffic: A multiprocessor system has higher traffic than a multicore system, because the processors communicate over an external bus. A multicore system has lower traffic, because its cores are on the same chip.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned
in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─ stack, heap, text and data ─ described below.
1. Stack
The process stack contains temporary data such as method/function parameters, return addresses and local variables.
2. Heap
This is memory that is dynamically allocated to the process during its run time.
3. Text
This section contains the compiled program code. The current activity is represented by the value of the program counter and the contents of the processor's registers.
4. Data
This section contains the global and static variables.
Program
A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in C programming language
−
#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
1. Start
This is the initial state, when a process is first started/created.
2. Ready
The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3. Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting
A process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
5. Terminated or Exit
Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
1. Process State
The current state of the process, i.e. whether it is ready, running, waiting, etc.
2. Process privileges
These are required to allow or disallow access to system resources.
3. Process ID
Unique identification for each process in the operating system.
4. Pointer
A pointer to the parent process.
5. Program Counter
The program counter is a pointer to the address of the next instruction to be executed for this process.
6. CPU registers
The various CPU registers whose contents must be saved for the process when it leaves the running state, so that it can resume execution later.
7. CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.
8. Memory management information
This includes information such as the page table, memory limits and segment table, depending on the memory scheme used by the operating system.
9. Accounting information
This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
10. IO status information
This includes the list of I/O devices allocated to the process.
A process can be split into many threads. For example, in a browser, each tab can be viewed as a thread. MS Word uses many threads: formatting text in one thread, processing input in another, and so on.
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new process.
o Threads can share common data, so they do not need to use inter-process communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
Types of Threads
There are two types of threads in the operating system: user-level threads and kernel-level threads.
User-level thread
The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes. Example: Java threads.
1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack and a small thread control block are stored in the address space of the user-level process.
7. It is simple to create, switch and synchronize threads without the intervention of the kernel.
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads,
and each thread is treated as a job, the number of jobs done in the unit time increases.
That is why the throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one
thread in one process, you can schedule more than one thread in more than one
processor.
o Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the
CPU.
o Responsiveness: When a process is split into several threads, the process can respond as soon as any one thread completes its work, rather than waiting for everything to finish.
o Communication: Communication between multiple threads is simple because the threads share the same address space, while for processes we must adopt special inter-process communication strategies.
o Resource sharing: Resources can be shared between all threads within a process, such
as code, data, and files. Note: The stack and register cannot be shared between threads.
There is a stack and register for each thread.
Process Scheduling Queues
1. Job queue – It stores all the processes in the system.
2. Ready queue – This queue holds every process residing in main memory that is ready and waiting to execute.
3. Device queues – These queues hold the processes that are blocked due to the unavailability of an I/O device.
1. Every new process is first put in the ready queue, where it waits until it is selected for execution (dispatched).
2. One of the processes is allocated the CPU and executes.
3. The process may issue an I/O request.
4. It is then placed in the I/O queue.
5. The process may create a new subprocess.
6. The process may then wait for that subprocess's termination.
7. The process may be removed forcibly from the CPU as the result of an interrupt; once the interrupt is handled, it is put back in the ready queue.
• Running State
• Not Running State
Running
A process is in the running state when it is currently being executed by the CPU.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a specific process.
Scheduling Objectives
Here are the important objectives of process scheduling:
The main goal of this type of scheduler is to offer a balanced mix of jobs, such as processor-bound and I/O-bound jobs, which allows multiprogramming to be managed effectively.
Process Creation and Process termination are used to create and terminate processes
respectively. Details about these are given as follows −
Process Creation
A process may be created in the system for different operations. Some of the events
that lead to process creation are as follows −
• System initialization, when the operating system starts up.
• A running process executing a process-creation system call (such as fork()).
• A user request to create a new process, for example by starting a program.
• Initiation of a batch job.
Process Termination
Process termination occurs when a process finishes its execution or is killed. The exit() system call is used by most operating systems for process termination.
Some of the causes of process termination are as follows −
• A process may be terminated after its execution is naturally completed. This process
leaves the processor and releases all its resources.
• A child process may be terminated if its parent process requests for its termination.
• A process can be terminated if it tries to use a resource that it is not allowed to. For
example - A process can be terminated for trying to write into a read only file.
• If an I/O failure occurs for a process, it can be terminated. For example - If a process
requires the printer and it is not working, then the process will be terminated.
• In most cases, if a parent process is terminated then its child processes are also
terminated. This is done because the child process cannot exist without the parent
process.
• If a process requires more memory than is currently available in the system, then it is
terminated because of memory scarcity.
Message Passing:
Message passing is a mechanism for processes to communicate and synchronize. Using message passing, processes communicate with each other without resorting to shared variables.
Message Queues:
A message queue is a linked list of messages stored within the kernel. It is
identified by a message queue identifier. This method offers communication
between single or multiple processes with full-duplex capacity.
Direct Communication:
In this type of inter-process communication, processes must name each other explicitly. In this method, a link is established between one pair of communicating processes, and between each pair only one link exists.
Indirect Communication:
Indirect communication is established only when processes share a common mailbox. Each pair of processes may share several communication links, and a link can be associated with many processes. The link may be unidirectional or bi-directional.
Shared Memory:
Shared memory is a region of memory shared between two or more processes, established so that all of the processes can access it. The processes must be protected from each other by synchronizing access to the shared region.
FIFO:
A FIFO (named pipe) allows communication between two unrelated processes. It is a full-duplex method, which means that the first process can communicate with the second process, and the opposite can also happen.
• The entry to the critical section is handled by the wait() function, and it is
represented as P().
• The exit from a critical section is controlled by the signal() function,
represented as V().
In the critical section, only a single process can be executed. Other processes,
waiting to execute their critical section, need to wait until the current process
completes its execution.