DOS-Unit 1 Complete Notes
A computer system can be divided roughly into four components: the
hardware, the operating system, the application programs, and the
users.
The hardware (the central processing unit (CPU), the memory, and the
input/output devices) provides the basic computing resources for the system.
The operating system controls the hardware and coordinates its use among
the various application programs for the various users. The operating system
provides the means for proper use of these resources in the operation of the
computer system.
Control Over System Performance
The operating system monitors the overall health of the system in order to
optimize performance.
To get a thorough picture of the system's health, it keeps track of the time
between service requests and system responses.
This can aid performance by providing critical information for
troubleshooting issues.
Job Accounting
The operating system maintains track of how much time and resources are
consumed by different tasks and users, and this data can be used to measure
resource utilization for a specific user or group of users.
Memory Management
The main memory consists of a vast array of bytes or words, each of which
is allocated an address.
Main memory is rapid storage that the CPU can access directly.
A program must first be loaded into the main memory before it can be
executed.
For memory management, the OS performs the following tasks:
The OS keeps track of primary memory – meaning, which user program can
use which bytes of memory, memory addresses that have already been
assigned, as well as memory addresses yet to be used.
It allocates memory to the process when the process asks for it and
deallocates memory when the process exits or performs an I/O activity.
Process Management
There are several types of Operating Systems which are mentioned below
2. Multi-Programming System
3. Multi-Processing System
This type of operating system does not interact with the computer directly.
Advantages of Batch Operating System
Processors of batch systems know how long a job will take while it is in the
queue.
Disadvantages of Batch Operating System
It is very difficult to guess or know the time required for any job to
complete.
It is sometimes costly.
The other jobs will have to wait for an unknown time if any job fails.
Advantages of Multi-Programming Operating System
It increases CPU utilization, since the CPU is kept busy whenever any job in
memory is ready to run.
Advantages of Multi-Processing Operating System
Since it has several processors, if one processor fails, the system can
proceed with another processor.
Disadvantages of Multi-Tasking Operating System
Time-Sharing Operating System
Each task is given some time to execute so that all the tasks work
smoothly.
Each user gets a share of CPU time, as they all use a single system.
After this time interval is over, the OS switches to the next task.
Advantages of Time-Sharing OS
Resource Sharing: Time-sharing systems allow multiple users to share
hardware resources such as the CPU, memory, and peripherals, increasing
efficiency.
Disadvantages of Time-Sharing OS
Reliability problem.
Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of
user access, authentication, and authorization to ensure the security of data
and software.
Distributed Operating System
These are referred to as loosely coupled systems or distributed systems.
The major benefit of working with these types of the operating system is
that it is always possible that one user can access the files or software which
are not actually present on his system but some other system connected
within this network i.e., remote access is enabled within the devices
connected in that network.
Failure of one will not affect the other network communication, as all
systems are independent of each other.
Since resources are being shared, computation is highly fast and durable.
These systems are easily scalable as many systems can be easily added to
the network.
Delays in data processing are reduced.
These types of systems are not readily available as they are very expensive.
Network Operating System
These systems run on a server and provide the capability to manage data,
users, applications, and other networking functions.
One more important aspect of Network Operating Systems is that all the
users are well aware of the underlying configuration, of all other users
within the network, their individual connections, etc. and that’s why these
computers are popularly known as tightly coupled systems.
Advantages of Network Operating System
New technologies and hardware up-gradation are easily integrated into the
system.
Real-Time Operating System
These types of OSs serve real-time systems, in which the time interval
required to process and respond to inputs is very small. This time interval
is called the response time.
Real-time systems are used when there are very strict time requirements, as
in missile systems, air traffic control systems, robots, etc.
Soft Real-Time Systems
These OSs are for applications where time-constraint is less strict.
Advantages of RTOS
Task Shifting: The time taken to shift from one task to another in these
systems is very short. For example, older systems take about 10
microseconds to shift between tasks, while the latest systems take about 3
microseconds.
Disadvantages of RTOS
Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
Thread Priority: Setting thread priorities is of limited use, as these
systems rarely switch between tasks.
Memory Management
Process Management
Device Management
File Management
User Interface or Command Interpreter
Memory Management
Main memory is fast storage that can be accessed directly by the CPU.
The OS keeps track of primary memory, i.e., which bytes of memory are used
by which user program, which memory addresses have already been allocated,
and which have not yet been used.
Process Management
Device Management
Decides which process gets access to a certain device and for how long.
File Management
User Interface and Command Interpreter
The user interacts with the computer system through the operating system.
Hence the OS acts as an interface between the user and the computer
hardware.
Through this interface, the user interacts with the applications and the
machine hardware.
Booting the Computer
CHARACTERISTICS OF OPERATING SYSTEMS
Virtualization
Networking
Scheduling
Interprocess Communication
Performance Monitoring
Debugging
Program Execution
The Operating System utilizes various resources available for the efficient
running of all types of functionalities.
The Operating System is responsible for handling all sorts of inputs and
outputs, i.e., from the keyboard, mouse, desktop, etc.
For example, peripheral devices such as mice and keyboards differ from one
another, yet the Operating System is responsible for handling data from all
of them.
The Operating System decides how the data should be manipulated and
stored.
The Operating System is responsible for detecting any type of error or bug
that can occur while any task is running.
Resource Allocation
The Operating System ensures the proper use of all the resources
available by deciding which resource to be used by whom for how much
time.
Accounting
All the details such as the types of errors that occurred are recorded by the
Operating System.
SYSTEM CALLS
System calls are the only entry points into the kernel.
A system call is initiated by the program executing a specific instruction,
which triggers a switch to kernel mode, allowing the program to request a
service from the OS.
The OS then handles the request, performs the necessary operations, and
returns the result back to the program.
System calls are essential for the proper functioning of an operating system,
as they provide a standardized way for programs to access system resources.
Without system calls, each program would need to implement its own
methods for accessing hardware and system services, leading to inconsistent
and error-prone behavior.
When an application creates a system call, it must first obtain permission
from the kernel.
It achieves this using an interrupt request, which pauses the current process
and transfers control to the kernel.
When the operation is finished, the kernel moves the data from kernel space
to user space in memory and returns the results to the application.
A simple system call, such as retrieving the system date and time, may take
only a few nanoseconds to provide its result.
Most operating systems launch a distinct kernel thread for each system call
to avoid bottlenecks.
Modern operating systems are multi-threaded, which means they can handle
various system calls at the same time.
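As a small illustration, Python's os and time modules expose thin wrappers over such system calls; the sketch below is illustrative (exact kernel behavior is platform-specific), and each call traps into the kernel, which does the work and returns the result:

```python
import os
import time

# Illustrative sketch: os.getpid() and time.time() are thin wrappers over
# system calls such as getpid() and clock_gettime(). Each call below
# switches to kernel mode, the kernel performs the work, and the result
# comes back to user mode.
pid = os.getpid()   # ask the kernel for this process's ID
now = time.time()   # ask the kernel for the current time

print(pid > 0, now > 0)
```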
Types of System Calls
Process Control
File Management
Device Management
Information Maintenance
Communication
Process Control
Process control system calls perform tasks such as process creation and
process termination.
File Management
Creation of a file
Deletion of a file
Device Management
Information Maintenance
Getting or setting time and date
Communication
Functions of communication:
Interface
Protection
The operating system uses this privilege to protect the system from
malicious or unauthorized access.
Kernel Mode
When a system call is made, the program is switched from user mode to
kernel mode.
Context Switching
A system call requires a context switch, which involves saving the state of
the current process and switching to the kernel mode to execute the
requested service.
Error Handling
System calls can return error codes to indicate problems with the
requested service.
Programs must check for these errors and handle them appropriately.
Synchronization
System calls allow programs to access hardware resources such as disk
drives, printers, and network devices.
Memory management
Process management
Standardization
open()
Accessing a file on a file system is possible with the open() system call.
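A hedged sketch of this sequence using Python's os-module wrappers over the open()/write()/read()/close() system calls (the file name is just an illustration; the raw C interface differs slightly):

```python
import os
import tempfile

# Sketch of the open()/write()/read()/close() system-call sequence using
# the os module's thin wrappers. The file name is illustrative.
path = os.path.join(tempfile.gettempdir(), "open_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open() call
os.write(fd, b"hello")                                     # write() call
os.close(fd)                                               # close() call

fd = os.open(path, os.O_RDONLY)                            # open() again
data = os.read(fd, 100)                                    # read() call
os.close(fd)
print(data)  # b'hello'
```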
wait()
In some systems, a process might need to hold off until another process
has finished running before continuing.
When a parent process creates a child process, the execution of the parent
process is halted until the child process is complete.
The parent process regains control once the child process has finished
running.
fork()
exit()
In environments with multiple threads, this call indicates that the thread
execution is finished.
After using the exit() system function, the operating system recovers the
resources used by the process.
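On POSIX systems the parent/child interaction described above can be sketched with Python's os.fork() and os.wait() wrappers (POSIX-only; os.waitstatus_to_exitcode requires Python 3.9+):

```python
import os

# POSIX-only sketch of fork()/wait()/exit(): the parent blocks in os.wait()
# until the child terminates, then recovers the child's exit status.
pid = os.fork()
if pid == 0:
    # Child process: do its work, then terminate with status 7.
    os._exit(7)
else:
    child_pid, status = os.wait()        # parent halts until the child exits
    exit_code = os.waitstatus_to_exitcode(status)
    print(child_pid == pid, exit_code)   # True 7
```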
OS STRUCTURE
The structure of the OS depends mainly on how the various common
components of the operating system are interconnected and melded into
the kernel.
Simple structure
Layered structure
Micro-kernel
Simple structure
Such operating systems do not have a well-defined structure and are small,
simple, and limited systems.
In MS-DOS, application programs are able to access the basic I/O routines.
In these types of operating systems, if one of the user programs fails, the
entire system crashes.
The following figure illustrates the layering in a simple structure.
There are four layers that make up the MS-DOS operating system, and
each has its own set of features.
These layers include ROM BIOS device drivers, MS-DOS device drivers,
application programs, and system programs.
The MS-DOS operating system benefits from layering because each level
can be defined independently and, when necessary, can interact with one
another.
Because MS-DOS systems have a low level of abstraction, programs and I/O
procedures are visible to end users, giving them the potential for unwanted
access.
Layered structure
An OS can be broken into pieces while retaining much more control over the
system.
The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is
the user interface.
These layers are so designed that each layer uses the functions of the
lower level layers only.
This simplifies the debugging process: since the lower-level layers have
already been debugged, if an error occurs during debugging, the error must
be in the layer currently being tested.
The main disadvantage of this structure is that at each layer, the data
needs to be modified and passed on which adds overhead to the system.
Moreover, careful planning of the layers is necessary, as a layer can use
the functionality of the lower-level layers only.
Micro-kernel
An advantage of this structure is that new services are added in user space
and do not require the kernel to be modified.
Thus it is more secure and reliable: if a service fails, the rest of the
operating system remains untouched.
Advantages of Micro-kernel structure
Modular Structure
The kernel has only a set of core components; other services are added to
the kernel as dynamically loadable modules, either at boot time or at run
time.
It resembles the layered structure in that the kernel has defined and
protected interfaces, but it is more flexible than the layered structure,
as a module can call any other module.
PROCESS MANAGEMENT
The original code and binary code are both programs. When we actually
run the binary code, it becomes a process.
A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple
instances begin (multiple processes are created).
Process Management
If the operating system supports multiple users, then it has to keep track
of all the processes, schedule them, and dispatch them one after another.
Terminate a process
Dispatch a process
Suspend a process
Resume a process
Delay a process
Fork a process
Explanation of Process
Text Section
Data Section
Heap Section
Stack
Process Control Block
A PCB keeps track of processes by storing information about them, including
their state, I/O status, and CPU scheduling data.
1. Process ID
2. Process State
3. Program Counter
4. CPU Registers
1. Process Id
2. Process State
i) New State
A process that is being created is in the new state.
ii) Ready State
In the ready state, a process waits for the CPU to be assigned.
The operating system pulls new processes from secondary memory and
places them all in main memory.
The term "ready state processes" refers to processes that are in main
memory and are prepared for execution.
The Operating System selects one of the processes from the ready state
based on the scheduling mechanism.
iii) Running State
As a result, if our system has only one CPU, there will only ever be one
process running at any given moment.
iv) Block or Wait State
While a process waits for a specific resource to be allocated or for user
input, the OS switches it to the block or wait state and allots the CPU to
other processes.
v) Terminated State
The operating system will end the process and erase the whole context of the
process.
3) Program Counter
An instruction counter, instruction pointer, instruction addresses register, or
sequence control register are other names for a program counter.
4) CPU Registers
When the process is in a running state, here is where the contents of the
processor registers are kept.
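As a rough sketch, a PCB can be pictured as a record holding the attributes discussed above; the class below is hypothetical, not any real kernel's layout:

```python
from dataclasses import dataclass, field

# Hypothetical PCB sketch: the fields mirror the attributes listed in these
# notes (process ID, process state, program counter, CPU registers).
@dataclass
class PCB:
    pid: int
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)

pcb = PCB(pid=42)
pcb.state = "ready"               # the OS moves the process to the ready state
print(pcb.pid, pcb.state)  # 42 ready
```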
The Input Output Status Information section consists of input- and
output-related information, including the status of the process's I/O
requests.
THREADS
What is Thread?
The process can be easily broken down into numerous different threads.
When working with threads, context switching is faster.
Types of Threads
1. User-Level Thread
2. Kernel-Level Thread
1. User-Level Thread
User threads are simple to implement, and the implementation is done by the
user.
The representation of user-level threads is relatively straightforward. The
user-level process's address space contains the registers, stack, PC, and
mini thread control blocks.
Threads may be easily created, switched, and synchronized without the need
for process interaction.
Threads at the user level are not coordinated with the kernel.
User-Level Thread
2. Kernel-Level Thread
Each thread in the kernel-level thread has its own thread control block
in the system.
Kernel-Level Thread
If a thread in the kernel is blocked, it does not block all other threads in
the same process.
Several threads of the same process might be scheduled on different CPUs
in kernel-level threading.
The scheduler may decide to allocate extra CPU time to threads with higher
priority values.
Threading Models
The user threads must be mapped to kernel threads by one of the following
strategies:
In the many to one model, many user-level threads are all mapped onto a
single kernel thread.
One to One Model
The one to one model creates a separate kernel thread to handle each
and every user thread.
This model provides more concurrency than that of many to one Model.
The many to many model multiplexes any number of user threads onto
an equal or smaller number of kernel threads.
Benefits of Threads
The number of jobs completed per unit time increases when the process
is divided into numerous threads, and each thread is viewed as a job.
You can schedule multiple threads in multiple processors when you have
many threads in a single process.
The thread context switching time is shorter than the process context
switching time.
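The benefits above can be sketched with Python's threading module; the worker function and squared values here are just an illustration of one process split into threads that share data:

```python
import threading

# Minimal sketch: one process split into four threads that share the same
# data (the results list); the shared list is guarded with a lock.
results = []
lock = threading.Lock()

def worker(n):
    with lock:                    # threads share data, so guard the list
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for every thread to finish

print(sorted(results))  # [0, 1, 4, 9]
```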
Communication
Resource sharing
Code, data, and files, for example, can be shared among all threads in a
process. Note, however, that threads cannot share their stacks or registers.
Process vs Thread
Creation: a process requires more time for creation, while a thread requires
comparatively less time.
Weight: a process is a heavyweight unit, while a thread is known as a
lightweight process.
Termination: a process takes more time to terminate, while a thread takes
less time.
Context switching: a process takes more time for context switching, while a
thread takes less time.
Blocking: if a process gets blocked, the remaining processes can continue
their execution, but if a user-level thread gets blocked, all of its peer
threads also get blocked.
INTERPROCESS COMMUNICATION
Independent process
Co-operating process
Shared Memory
Message passing
An operating system can implement both methods of communication.
Suppose process1 and process2 are executing simultaneously, and they share
some resources or use some information from another process.
When process2 needs to use the shared information, it will check the record
stored in shared memory, take note of the information generated by
process1, and act accordingly.
The producer produces some items and the Consumer consumes that item.
The two processes share a common space or memory location known as
a buffer where the item produced by the Producer is stored and from which
the Consumer consumes the item if needed.
The first is known as the unbounded buffer problem, in which the Producer
can keep producing items and there is no limit on the size of the buffer.
The second is known as the bounded buffer problem, in which the Producer
can produce up to a certain number of items before it starts waiting for
the Consumer to consume them.
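The bounded-buffer variant can be sketched with Python's queue.Queue, whose maxsize parameter gives exactly this blocking behavior (the item values are illustrative):

```python
import queue
import threading

# Bounded-buffer sketch: queue.Queue(maxsize=3) is the shared buffer.
# put() blocks when the buffer is full; get() blocks when it is empty.
buffer = queue.Queue(maxsize=3)
consumed = []

def producer():
    for item in range(5):
        buffer.put(item)               # waits whenever the buffer holds 3 items

def consumer():
    for _ in range(5):
        consumed.append(buffer.get())  # waits whenever the buffer is empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```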
The message size can be of fixed size or of variable size.
The header part is used for storing message type, destination id, source id.
A link has some capacity that determines the number of messages that can
reside in it temporarily. For this purpose, every link has a queue
associated with it, which can be of zero capacity.
In zero capacity, the sender waits until the receiver informs the sender that
it has received the message.
The process which wants to communicate must explicitly name the recipient
or sender of the communication.
The standard primitives used are: send(A, message), which means send the
message to mailbox A. The primitive for receiving a message works in the
same way, e.g., receive(A, message).
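A minimal sketch of mailbox-style message passing, using a Python queue as the mailbox; the send/receive helpers are illustrative, not a real OS API:

```python
import queue

# Sketch of indirect (mailbox) communication: mailbox "A" is a shared FIFO
# queue, and send/receive mirror the primitives described in the text.
mailboxes = {"A": queue.Queue()}

def send(box, message):
    mailboxes[box].put(message)

def receive(box):
    return mailboxes[box].get()

send("A", "ping")
msg = receive("A")
print(msg)  # ping
```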
Suppose there are more than two processes sharing the same mailbox
and suppose the process p1 sends a message to the mailbox, which process
will be the receiver?
This can be solved by either enforcing that only two processes can share a
single mailbox or enforcing that only one process is allowed to execute the
receive at a given time.
A mailbox can be made private to a single sender/receiver pair and can also
be shared between multiple senders and one receiver.
This allows a sender to continue doing other things as soon as the message
has been sent.
SCHEDULING
There are various algorithms which are used by the Operating System to
schedule the processes on the processor in an efficient way.
Allocation of the CPU should be fair.
There should be a minimum waiting time and the process should not starve
in the ready queue.
Minimum response time. The time at which a process produces its first
response should be as low as possible.
Preemptive Scheduling
Non-Preemptive Scheduling
Round Robin
Priority Scheduling
The first come first serve scheduling algorithm states that the process
that requests the CPU first is allocated the CPU first; it is implemented
using a FIFO queue.
Characteristics of FCFS
This algorithm is not very efficient in performance, and the wait time is
quite high.
Advantages of FCFS
Easy to implement
Disadvantages of FCFS
The average waiting time is much higher than with the other algorithms.
FCFS is very simple and easy to implement, but as a result it is not very
efficient.
Example
S.No  Process ID  Process Name  Arrival Time  Burst Time
1     P1          A             0             9
2     P2          B             1             3
3     P3          C             1             2
4     P4          D             1             4
5     P5          E             2             3
6     P6          F             3             2
Solution
The Average Completion Time is:
Average CT = ( 9 + 12 + 14 + 18 + 21 + 23 ) / 6
Average CT = 97 / 6
Average CT = 16.16667
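The worked example can be checked with a short FCFS sketch (process data taken from the table above):

```python
# FCFS sketch for the example above: processes run to completion in arrival
# order; the computed values reproduce the worked average.
procs = [("P1", 0, 9), ("P2", 1, 3), ("P3", 1, 2),
         ("P4", 1, 4), ("P5", 2, 3), ("P6", 3, 2)]   # (id, arrival, burst)

clock = 0
completion = {}
for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
    clock = max(clock, arrival) + burst   # idle until arrival, then run
    completion[pid] = clock

avg_ct = sum(completion.values()) / len(completion)
print(completion)        # P1: 9, P2: 12, P3: 14, P4: 18, P5: 21, P6: 23
print(round(avg_ct, 5))  # 16.16667
```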
Shortest job first (SJF) is a scheduling process that selects the waiting
process with the smallest execution time to execute next.
Significantly reduces the average waiting time for other processes waiting
to be executed.
Characteristics of SJF
Advantages of Shortest Job first
As SJF reduces the average waiting time, it is better than the first come
first serve scheduling algorithm.
Disadvantages of SJF
Example
Process ID  Arrival Time  Burst Time  Completion Time
P2          3             3           13
P3          6             2           10
P4          7             10          31
P5          9             8           21
Since no process arrives at time 0, there will be an empty slot in the
Gantt chart from time 0 to 1 (the time at which the first process arrives).
Until then we have only one process in the ready queue, so the scheduler
will schedule it regardless of its burst time.
It will be executed until 8 units of time. By then, three more processes
have arrived in the ready queue, so the scheduler will choose the process
with the lowest burst time.
Among the processes given in the table, P3 will be executed next since it is
having the lowest burst time among all the available processes.
=84/6 = 14
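A hedged sketch of non-preemptive SJF reproducing the completion times above; it assumes the first process (whose table row is not shown here) arrives at time 1 with burst 7, as the walkthrough describes, and breaks ties by earlier arrival:

```python
# Non-preemptive SJF sketch using the four rows above plus the first
# process the text describes (arrival 1, burst 7, assumed from the
# walkthrough); ties go to the earlier arrival.
procs = {"P1": (1, 7), "P2": (3, 3), "P3": (6, 2),
         "P4": (7, 10), "P5": (9, 8)}               # pid: (arrival, burst)

clock, completion = 0, {}
remaining = dict(procs)
while remaining:
    ready = {p: ab for p, ab in remaining.items() if ab[0] <= clock}
    if not ready:                                   # CPU idles until an arrival
        clock = min(ab[0] for ab in remaining.values())
        continue
    pid = min(ready, key=lambda p: (ready[p][1], ready[p][0]))  # shortest burst
    clock += remaining.pop(pid)[1]
    completion[pid] = clock

print(completion)  # P1: 8, P3: 10, P2: 13, P5: 21, P4: 31
```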
3. ROUND ROBIN SCHEDULING
It is simple, easy to use, and starvation-free, as all processes get a
balanced CPU allocation.
Each process gets a chance to be rescheduled after a particular quantum of
time in this scheduling.
The Gantt chart can become very large if the time quantum is small (for
example, 1 ms for long schedules).
Example
S.No  Process ID  Arrival Time  Burst Time
1     P1          0             7
2     P2          1             4
3     P3          2             15
4     P4          3             11
5     P5          4             20
6     P6          4             9
Assume Time Quantum TQ=5
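The schedule with TQ = 5 can be sketched as follows; it assumes the common convention that newly arrived processes enter the ready queue before a preempted one re-enters it:

```python
from collections import deque

# Round-robin sketch for the table above with time quantum TQ = 5. A
# preempted process re-enters the ready queue behind any newly arrived ones.
procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 15),
         ("P4", 3, 11), ("P5", 4, 20), ("P6", 4, 9)]  # (id, arrival, burst)
TQ = 5

arrivals = sorted(procs, key=lambda p: p[1])
remaining = {pid: burst for pid, _, burst in procs}
ready, completion, clock, i = deque(), {}, 0, 0

while len(completion) < len(procs):
    while i < len(arrivals) and arrivals[i][1] <= clock:
        ready.append(arrivals[i][0]); i += 1          # admit new arrivals
    if not ready:
        clock = arrivals[i][1]                        # idle until next arrival
        continue
    pid = ready.popleft()
    run = min(TQ, remaining[pid])
    clock += run
    remaining[pid] -= run
    while i < len(arrivals) and arrivals[i][1] <= clock:
        ready.append(arrivals[i][0]); i += 1          # arrivals queue first
    if remaining[pid] == 0:
        completion[pid] = clock
    else:
        ready.append(pid)                             # preempted: requeue

print(completion)  # P2: 9, P1: 31, P6: 50, P3: 55, P4: 56, P5: 66
```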
4. PRIORITY SCHEDULING
In the case of a conflict, that is, when more than one process has the same
priority value, the tie is broken using the FCFS (First Come First Serve)
algorithm.
When higher-priority work arrives while a task with lower priority is
executing, the higher-priority work takes the place of the lower-priority
one, and the latter is suspended until the former's execution is complete.
Less complex
This is the problem in which a process has to wait for a long time to get
scheduled onto the CPU. This condition is called the starvation problem.
In Priority scheduling, there is a priority number assigned to each
process.
In some systems, the lower the number, the higher the priority, while in
others, the higher the number, the higher the priority.
The Process with the higher priority among the available processes is
given the CPU.
Non Preemptive Priority Scheduling
Once a process gets scheduled, it runs until completion.
Generally, the lower the priority number, the higher is the priority of the
process.
Example
Process ID  Priority  Arrival Time  Burst Time
P1          2         0             3
P2          6         2             5
P3          3         1             4
P4          5         4             2
P5          7         6             9
P6          4         5             4
P7          10        7             10
The process P1 arrives at time 0 with a burst time of 3 units and priority
number 2. Since no other process has arrived yet, the OS schedules it
immediately.
Meanwhile, all the processes become available in the ready queue. The
process with the lowest priority number will be given the CPU first.
Since all the jobs are available in the ready queue, all the jobs will get
executed according to their priorities.
If two jobs have the same priority number assigned to them, the one with
the earlier arrival time will be executed first.
Process ID  Priority  Arrival Time  Burst Time  Completion Time  Turn Around Time  Waiting Time
P1          2         0             3           3                3                 0
P2          6         2             5           18               16                11
P3          3         1             4           7                6                 2
P4          5         4             2           13               9                 7
P5          7         6             9           27               21                12
P6          4         5             4           11               6                 2
P7          10        7             10          37               30                18
Average Completion Time = 116 / 7 = 16.57
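The table above can be reproduced with a short non-preemptive priority sketch (lower number means higher priority; ties go to the earlier arrival):

```python
# Non-preemptive priority sketch for the worked example (lower number means
# higher priority); ties go to the earlier arrival.
procs = {"P1": (2, 0, 3), "P2": (6, 2, 5), "P3": (3, 1, 4), "P4": (5, 4, 2),
         "P5": (7, 6, 9), "P6": (4, 5, 4), "P7": (10, 7, 10)}
# pid: (priority, arrival, burst)

clock, completion = 0, {}
remaining = dict(procs)
while remaining:
    ready = {p: v for p, v in remaining.items() if v[1] <= clock}
    if not ready:                                   # idle until an arrival
        clock = min(v[1] for v in remaining.values())
        continue
    pid = min(ready, key=lambda p: (ready[p][0], ready[p][1]))
    clock += remaining.pop(pid)[2]
    completion[pid] = clock

print(completion)                              # P1: 3, P3: 7, P6: 11, ...
print(round(sum(completion.values()) / 7, 2))  # 16.57
```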
The Producer-Consumer Problem
Producers must wait if the buffer is full, and consumers must wait if the
buffer is empty.
The Dining Philosophers Problem
The Readers-Writers Problem
Readers can access the resource concurrently, but writers require exclusive
access.
Writers must have exclusive access to the resource and block any other
readers or writers from accessing it.
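A minimal first-readers-preference sketch in Python (one of several possible policies; a complete solution must also guard against writer starvation):

```python
import threading

# First-readers-preference sketch: the first reader locks writers out, the
# last reader lets them back in; a writer holds resource_lock exclusively.
resource_lock = threading.Lock()   # held by a writer, or by the reader group
count_lock = threading.Lock()
reader_count = 0
log = []

def reader(i):
    global reader_count
    with count_lock:
        reader_count += 1
        if reader_count == 1:
            resource_lock.acquire()     # first reader blocks writers
    log.append(f"read-{i}")             # concurrent read section
    with count_lock:
        reader_count -= 1
        if reader_count == 0:
            resource_lock.release()     # last reader admits writers

def writer(i):
    with resource_lock:                 # exclusive write section
        log.append(f"write-{i}")

threads = [threading.Thread(target=reader, args=(1,)),
           threading.Thread(target=writer, args=(1,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))  # ['read-1', 'write-1']
```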
The Sleeping Barber Problem
Customers arrive and either wait in a queue or leave if the queue is full.
Customers must wait if the barber is busy cutting someone else's hair or
leave if the waiting area is full.
The Cigarette Smokers Problem
The agent places two different ingredients on the table, and the smoker who
has the third, complementary ingredient can pick them up and roll a
cigarette.
The challenge is to ensure that only one smoker can pick up the
ingredients at a time and that all smokers get a fair chance.