Operating System Notes

1. An operating system manages tasks like accessing devices, memory, and processing without needing separate programs for each task.
2. It provides functions like process management, memory management, security, and allowing access to devices through system calls.
3. Common types of operating systems include batch, multi-programming, multi-tasking, real-time, distributed, and embedded systems.

Uploaded by

Dilawaiz


Without an operating system, every user (user1, user2, user3) would have to write a separate program to interact with each device: one program to access RAM, another to invoke the printer, and more for the CPU, I/O devices, and hard disk. With no operating system, there is no authoritative component that can manage these tasks.

What an operating system provides:
1. Convenience (Windows)
2. Throughput, the number of tasks executed per unit of time (Linux, Apple Macintosh)
3. Resource governor (a server with multiple clients): controls the speed, memory, and parallel processing given to each task, as shown in a task manager
4. Process management (CPU scheduling algorithms)
5. Storage management (hard disk, file systems, NFS, CNFS): how data is stored in tracks and sectors on the hard disk, i.e., the architecture of the device
6. Memory management (RAM and its size limitation, multitasking, multiprogramming, allocation and deallocation, swapping)
7. Security and privacy (passwords; Windows uses the Kerberos security protocol; isolation between processes in RAM)
8. System calls: we access the operating system (the kernel) through system calls, for example from the command prompt (in Linux, the terminal): the open system call, the read/write system call.
Types of Operating system
(Main categories)
1. Batch operating system
2. Multiprogramming Operating System
3. Multitasking Operating System
4. Real Time operating system
5. Distributed operating system
6. Clustered OS
7. Embedded OS

Batch Operating system

A batch groups similar kinds of jobs. In the 1950s only some big companies had CPUs and computers. Jobs and data were stored offline on punch cards, magnetic tape, or paper tape; the user went to that company's computer and gave the jobs to an operator.
Processing was non-preemptive, which left the CPU idle: the CPU could not get another batch until the first batch finished, and if a batch needed an I/O operation the CPU had to wait. All batches contained similar kinds of jobs.
After the job was processed, the user had to go back to the company and take the offline result (punch cards or tape) from the operator. This could take 1, 2, 30, or 300 days depending on the job.
Around 1960 came a refinement: resident monitors, with punch cards fed directly by the user to the monitor, removing the operator (IBM's IBSYS for the 709x, with Fortran).

Multi Programmed OS
Several processes (P1 ... P6) are kept in RAM at once, up to the maximum number of processes that fit in RAM. Scheduling is non-preemptive: the CPU completes P1 first, and only moves on to P2 if P1 requires an I/O operation. This reduces CPU idleness.

Multitasking / Time Sharing OS
Scheduling is preemptive, e.g., the round robin algorithm: each process gets a time quantum. This improves response time.

Real time operating system:

Non-real-time example: a recorded YouTube video.
Real-time example: live streaming.
Hard real time OS: no delay can be tolerated (critical situations such as missile systems and flight systems).
Soft real time OS: a little delay can be borne (gaming, live streaming).

Distributed OS
Geographically separated (network oriented) machines that work independently, each with its own resources, but that can share files.

Clustered OS
Multiple (up to thousands of) devices connected as one network to increase computing power, like a server cluster.

Embedded OS
Fixed functionality, as in an oven or a washing machine.
PROCESS STATES
The process-state diagram is just a model to help users understand; it does not correspond to a physical component in the computer. The states are New, Ready, Running, Wait/Block, and Terminated, plus the additional Suspend Ready and Suspend Wait states.

New: the program is stored in secondary storage, e.g., an application like Chrome on a laptop; when the OS mounts, background processes pop up and are created.
Ready state: the process is in RAM, queued (e.g., FCFS). In multiprogramming, some processes come into RAM via the long-term scheduler.
Running state: the process is dispatched to the CPU. On a uniprocessor one process runs at a time; with multiprocessing or parallel processing (as in grid computing), more than one process can be executing instructions at once. If a higher-priority process arrives, the previous process is sent back to RAM (the ready state). In multitasking (preemptive) systems each process gets a time quantum; the short-term scheduler moves processes from ready to running.
Wait/Block state: the process makes an I/O request, e.g., a file read or hardware access on secondary storage, and waits; after the I/O completes it goes back to the ready state.
Terminated: deallocation; the process's RAM space and all its resources are taken back.

Additional states:
Suspend Wait/Block state: when RAM is full, a process that wants an I/O operation can be swapped out to secondary memory; from suspend wait the process later goes back to the wait/block state. This is performed by the medium-term scheduler.
Suspend Ready state: if the ready queue is full and a high-priority ("VIP") process arrives, we move processes to the suspend ready state.

In Unix, the ps command shows process states (similar to Task Manager).

SYSTEM CALLS IN OPERATING SYSTEM

A system call takes us from user mode to kernel mode to access the functionality of the operating system. To access devices we go through the kernel, and we reach the kernel via system calls: read, write, open, close. Windows has around 700 system calls. In some OSes we issue system calls directly; in others we go through APIs or local libraries, such as printf(), a function that accesses a system call. A system call invokes the kernel to perform some work.
File-related system calls: open(), read(), write(), create. File access is privileged: without a system call, a program cannot access a file directly.
Device-related system calls: device-access privileges are obtained through system calls (for devices like printers and scanners): read, write, reposition, ioctl, fcntl.
Information-related system calls: getpid(), getppid(); information about a process (metadata), process attributes, system time and date.
Process-control system calls (loading into RAM): fork(), which creates child processes; load, execute, wait, signal, allocate; multithreading.
Communication (inter-process communication): pipe(), create/delete, shmget().
Security:

CIA:
Confidentiality: nobody unauthorized should be able to read the data; this can be ensured with encryption. Cryptography turns plain text into cipher text. Symmetric encryption uses a single shared key; asymmetric encryption uses a public and a private key.
Integrity: no unauthorized modification.
Availability: data available 24/7.
Theft of service (theft of private information).
DoS (denial of service): prevention of legitimate users from being served, e.g., the ping of death, which fills the server's buffer so it cannot take more requests.
Domain: a set of access rights.
A threat can convert into an attack.
Phishing: junk (spam) emails.
A Trojan horse is not actually a virus; it is a destructive program that looks like an application program. It does not replicate; it is just destructive, and it can open a backdoor entry.
A virus attaches itself to executable files and lies dormant until you open them.
A worm is a subclass of virus; it can spread by itself over network protocols.
A firewall is the front door of the network.
Windows uses the Kerberos security protocol.
HBAs (host bus adapter)
A host bus adapter (HBA) is a circuit board or integrated circuit adapter that connects a host system,
such as a server, to a storage or network device.

Virtualization (running multiple operating systems on a single computer)

A hypervisor (VirtualBox, VMware, etc.) loads other operating systems inside it, for efficient use of resources. This benefits servers (companies) the most: two or more operating systems share the same hardware.

VMS: virtual-memory based.

History of operating system:

During the 1940s to early 1950s, electronic computers were used without operating systems.
1st generation (vacuum tubes): all programming was in absolute machine language. UNIVAC and ENIAC.
2nd generation (transistors): during 1955 to 1965, operating systems were first developed to manage tape storage. The first OS was General Motors' operating system, made for the IBM 701 in 1955. Second-generation computers moved from machine language to symbolic languages: COBOL and FORTRAN were developed at that time. Single-stream batch processing systems (jobs grouped into batches).
The 3rd generation (1965 to 1980, ICs): multiprogramming. In 1969 the first version of the Unix OS was developed, originally called Unics.
Fourth generation (1980s, personal computing): in 1981 Microsoft's OS, MS-DOS, was built. In 1984 the Apple Macintosh OS was released with a graphical user interface. In 1985 a GUI version paired with MS-DOS became the MS Windows OS.
Looking ahead: artificial intelligence and natural language processing.

2. Multiuser operating system:


A multi-user operating system is an operating system that permits several users to access a single system
running a single operating system. Users usually sit at terminals or computers connected to the
system via a network, along with other machines such as printers.

Unix, Ubuntu, macOS, Windows, and all Linux-based OSes are examples of multi-user OSes.

Multi-user operating systems were originally used for time-sharing and batch processing
on mainframe computers. These types of systems are still in use today by large
companies, universities, and government agencies, and are usually used in servers, such
as the Ubuntu Server edition (21.04 LTS) or Windows Server 2019.
These servers allow several users to access the operating system, kernel, and hardware at the same time.

Components of Multi-User Operating System

Memory, kernel, processor, user interface, device handler, spooler.

Kernel
A multi-user operating system makes use of the Kernel component, which is built in a low-level
language. This component is embedded in the computer system's main memory and may
interact directly with the system's H/W.
Device Handler
Each input and output device needs its own device handler. The device handler's primary goal is to
serve all requests in the device request queue pool. The device handler operates in a continuous
cycle, first removing an I/O request block from the queue.

Spooler
Spooler stands for 'Simultaneous Peripheral Operations On-Line'. The spooler lets the computer
continue running processes while their output is produced at the same time. Spooling is used by a
variety of output devices, including printers.


Characteristics of Multi-User Operating System


Resource Sharing
Several devices, like printers, fax machines, plotters, and hard drives, can be shared in a multi-
user operating system. Users can share their own documents using this functionality. All users
are given a small slice of CPU time under this system.

Multi-Tasking
Multi-user operating systems may execute several tasks simultaneously, and several programs
may also execute at the same time.
Background Processing
Background processing refers to commands that are not processed interactively but are instead
executed "in the background", while other programs interact with the system in real time.

Time-Sharing
A strategy used by multi-user operating systems to operate on several user requests at the
same time by switching between jobs at very short periods of time.

System
The operating system must handle a computer's combination of hardware and software
resources.

Invisibility
Various functions of the multi-user operating system are hidden from users, either because the OS
behaves intuitively or because the work happens at a lower level, such as disk formatting.

Types of Multi-User Operating System


There are various types of multi-user operating systems. Some of them are as follows:

Distributed System
A distributed system is also known as distributed computing. It is a collection of multiple
components distributed over multiple computers that interact, coordinate, and seem like a
single coherent system to the end-user. With the aid of the network, the end-user would be
able to interact with or operate them.

Time-Sliced Systems
It's a system in which each user's job gets a specific amount of CPU time. In other words, each
work is assigned to a specific time period. These time slices look too small to the user's eyes. An
internal component known as the 'Scheduler' decides to run the next job. This scheduler
determines and executes the job that must perform based on the priority cycle.

Multiprocessor System
Multiple processors are used in this system, which helps to improve overall performance. If one
of the processors in this system fails, the other processor is responsible for completing its
assigned task.
Advantages of Multi-User Operating System

1. A multi-user operating system can be used in the printing process to allow multiple users
to access the same printer, which a normal operating system may not do.
2. On a single computer system, several users can access the same copy of a document. For
instance, if a PPT file is kept on one computer, other users can see it on other systems.
3. Multi-user operating systems are very useful in offices and libraries because they can
efficiently handle printing jobs.
4. If one computer fails in its own network system, the entire system does not come to a
halt.
5. The ticket reservation system uses a multi-user operating system.
6. Each user can access the same document on their own computer.

Disadvantages of Multi-User Operating System


1. Virus attacks occur simultaneously on all of them as the computers are shared. As a
result, if one machine is affected, the others will be as well.
2. If a virus hits one computer, it spreads to the entire network system simultaneously, and
finally, all computer systems fail.
3. All computer information is shared publicly, and your personal information is accessible
to everyone on the network.
4. Multiple accounts on a single computer may not be suitable for all users. Thus, it is better
to have multiple PCs for each user.

CPU and process management


A process is basically a program in execution. The execution of a process must progress in a sequential
fashion. When a program is loaded into the memory and it becomes a process, it can be divided into
four sections ─ stack, heap, text and data.
Stack
The process Stack contains the temporary data such as method/function parameters, return
address and local variables.
Heap
This is dynamically allocated memory to a process during its run time.
Text
This contains the compiled program code. The current activity is represented by the value of the
Program Counter and the contents of the processor's registers.
Data
This section contains the global and static variables.

Program
A program is a piece of code which may be a single line or millions of lines. A computer
program is usually written by a computer programmer in a programming language. For
example, here is a simple program written in C programming language −
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can conclude that a process
is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries and related data are referred to as a software.

Process Life Cycle


When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have
the processor allocated to them by the operating system so that they can run. A process may
come into this state after the Start state, or while running if it is interrupted by the scheduler
so that the CPU can be assigned to some other process.

3 Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process as listed below in the table −

S.N. Information & Description

1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each of the process in the operating system.

4 Pointer
A pointer to parent process.

5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this
process.

6 CPU registers
The various CPU registers whose contents must be saved when the process leaves the running
state, so that its execution can resume later.

7 CPU Scheduling Information


Process priority and other scheduling information which is required to schedule the
process.

8 Memory management information


This includes the information of page table, memory limits, Segment table depending on
memory used by the operating system.

9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID etc.

10 IO status information
This includes a list of I/O devices allocated to the process.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
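The PCB fields listed above can be pictured as a C structure. This is a purely hypothetical sketch; real kernels (for instance Linux's task_struct) differ greatly, and every field name here is invented for illustration.

```c
/* Hypothetical sketch of a Process Control Block holding the ten
 * kinds of information listed in the table. Real kernels are far
 * more elaborate; every name here is illustrative. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    proc_state state;         /* 1. process state */
    uint32_t   privileges;    /* 2. allowed system resources (bit flags) */
    int        pid;           /* 3. unique process ID */
    struct pcb *parent;       /* 4. pointer to parent process */
    uintptr_t  pc;            /* 5. program counter: next instruction */
    uintptr_t  regs[16];      /* 6. saved CPU registers */
    int        priority;      /* 7. CPU-scheduling information */
    uintptr_t  page_table;    /* 8. memory-management information */
    uint64_t   cpu_time_used; /* 9. accounting information */
    int        open_fds[16];  /* 10. I/O status: allocated devices/files */
};
```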
Process scheduling is an essential part of a Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the executable memory at a time and the loaded
process shares the CPU using time multiplexing.
Time-division multiplexing (TDM) is a method of putting multiple data streams into a single signal by
separating the signal into many segments, each having a very short duration. Multiple data streams are
transmitted over a single communication path: the data from the different input channels is divided into
fixed-length segments, combined in round-robin fashion into a single output stream, sent over a
single-channel transmission system, and demultiplexed at the receiving end.
Statistical (intelligent) multiplexing avoids allocating slots to idle input streams.

Process Scheduling Queues


 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.)

Two-State Process Model

S.N. State & Description

1 Running
When a new process is created, it enters the system in the running state.

2 Not Running
Processes that are not running are kept in queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a particular process. Queue is implemented by using
linked list. Use of dispatcher is as follows. When a process is interrupted, that process is
transferred in the waiting queue. If the process has completed or aborted, the process is
discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers
Schedulers are special system software. Their main task is to select the jobs to be submitted
into the system and to decide which process to run.

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. It selects processes from the queue and loads them into
memory for execution. The primary objective of the job scheduler is to provide a balanced mix
of jobs, such as I/O-bound and processor-bound. It also controls the degree of
multiprogramming: the average rate of process creation must be equal to the average
departure rate of processes leaving the system. Time-sharing operating systems have no long-term
scheduler.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
The short-term scheduler performs the change from the ready state to the running state of the process.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory (RAM),
reducing the degree of multiprogramming. The medium-term scheduler is in charge of handling
the swapped-out processes.
A running process may become suspended if it makes an I/O request. To remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This procedure is called swapping, and the process is said to be swapped out or rolled
out.

Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler's speed is lesser than the short-term scheduler's; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed is in between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides lesser control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects processes that are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Multithreading. A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach to improving
operating-system performance by reducing overhead; in many respects a thread behaves like a
classical process. Each thread belongs to exactly one process and no thread can exist outside a
process. Each thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web servers. They also provide a suitable foundation for parallel
execution of applications on shared-memory multiprocessors.

Difference between Process and Thread

S.N. Process Thread

1 Process is heavy weight or resource intensive.

Thread is light weight, taking lesser resources than a process.

2 Process switching needs interaction with operating system.


Thread switching does not need to interact with operating system.

3 In multiple processing environments, each process executes the same code but has its own memory and
file resources.

All threads can share same set of open files, child processes.

4 If one process is blocked, then no other process can execute until the first process is unblocked.

While one thread is blocked and waiting, a second thread in the same task can run.

5 Multiple processes without using threads use more resources.

Multiple threaded processes use fewer resources.

6 In multiple processes each process operates independently of the others.

One thread can read, write or change another thread's data.

Advantages of Thread
 Threads minimize the context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context switch threads.
 Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

 User Level Threads − User managed threads.


 Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.

User Level Threads

In this case, the thread management kernel is not aware of the existence of threads. The thread library contains
code for creating and destroying threads, for passing message and data between threads, for scheduling thread
execution and for saving and restoring thread contexts. The application starts with a single thread.

Advantages
 User level threads are fast to create and manage.
 User level threads can run on any operating system.
Disadvantages
 In a typical operating system, most system calls are blocking, so one blocked thread blocks the
entire process.
 A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application
can be programmed to be multithreaded. All of the threads within an application are supported
within a single process.

The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than user threads.
Advantages
 Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a mode switch to
the Kernel.

Multithreading Models
 Many to many relationship.
 Many to one relationship.
 One to one relationship.

What is Cooperating Process?


Processes are either independent or cooperating.

A process is considered independent when no other process running on the system can affect it.
Independent processes do not share any data with other processes. A cooperating process, on the
other hand, can be affected by other processes executing on the system, because a cooperating
process shares data with them.

Advantages of Cooperating Process in Operating System


1. Information Sharing
Cooperating processes can be used to share information between various processes. It
could involve having access to the same files. A technique is necessary so that the
processes may access the files concurrently.
2. Modularity

Modularity refers to the division of complex tasks into smaller subtasks. Different cooperating
processes can complete these smaller subtasks. As a result, the required tasks are completed
more quickly and efficiently.

3. Computation Speedup

Cooperating processes can be used to accomplish subtasks of a single task simultaneously. This
improves computation speed by allowing the task to be accomplished faster, although it is only
possible if the system contains several processing elements.

4. Convenience

There are multiple tasks that a user requires to perform, such as printing, compiling, editing, etc.
It is more convenient if these activities can be managed through cooperating processes.
Concurrent execution of cooperating processes needs systems that enable processes to
communicate and synchronize their actions.

Methods of Cooperating Process


1. Cooperation by sharing
The processes may cooperate by sharing data, including variables, memory,
databases, etc. The critical section provides data integrity, and writing is
mutually exclusive to avoid inconsistent data.

2. Cooperation by Communication
The cooperating processes may cooperate by using messages. If every process waits for a message from
another process to execute a task, it may cause a deadlock. If a process does not receive any messages, it
may cause starvation.

Producer-Consumer Problem

Producer Process

It generates information that the consumer would consume.


Consumer Process

It consumes the information that the producer produces.

Both processes run simultaneously. The consumer waits if there is nothing to consume.

There is a producer and a consumer; the producer creates the item and stores it in a buffer
while the consumer consumes it. For example, print software generates characters that the
printer driver consumes. A compiler can generate assembly code, which an assembler can use.
In addition, the assembler may produce object modules that are used by the loader.

What is Concurrency?
Concurrency refers to the execution of multiple instruction sequences at the same time. It occurs in an
operating system when multiple process threads are executing concurrently. These threads can
interact with one another via shared memory or message passing. Concurrency involves resource
sharing, which causes issues like deadlocks and resource starvation. Managing it requires
techniques such as process coordination, memory allocation, and execution scheduling to
maximize throughput.

What is a File System?


A file system is the method of managing how and where data is stored on a storage disk; it is also
referred to as file management or FS. It is a logical disk component that comprises files separated
into groups, known as directories. It is abstracted from the human user and tied to the computer;
hence, it manages a disk's internal operations. Directories can contain files and additional
directories. Although Windows supports various file systems, NTFS is the most common in modern
times. Without file management, two files with the same name could not coexist, installed programs
could not be removed, and specific files could not be recovered, and files would have no
organization without a file structure. The file system enables you to view a file in the current
directory, as files are often managed in a hierarchy.

A disk (e.g., a hard disk drive) has a file system, regardless of its type and usage. The file system
contains information about file size, file name, file location and fragment information, and where disk
data is stored, and it also describes how a user or application may access the data. Operations such as
metadata handling, file naming, storage management, and directories/folders are all managed by the
file system.

On a storage device, files are stored in sectors, and data is written in groups of sectors called
blocks. The size and location of the files are identified by the file system, and it also helps to
recognize which sectors are ready to be used. Besides Windows, which uses the FAT and NTFS file
systems, Apple products (like iOS and macOS) use HFS+, as the operating-system landscape is home
to many different kinds of file systems.

Sometimes the term "file system" is used in reference to partitions. For instance, saying "two
file systems are available on the hard drive" does not necessarily mean the drive is divided
between two file system types such as NTFS and FAT; it can simply mean that two separate
partitions exist on the same physical disk.
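The metadata a file system tracks can be inspected from user space. The following is a minimal Python sketch (the sample file contents are made up for illustration) that asks the file system for a file's size and the volume's free space:

```python
import os
import shutil
import tempfile

# Create a sample file; the file system decides where its blocks live on disk.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello file system")   # 17 bytes of data
    path = f.name

info = os.stat(path)                          # metadata kept by the file system
total, used, free = shutil.disk_usage(os.path.dirname(path))

print("name:", os.path.basename(path))
print("size in bytes:", info.st_size)         # 17
print("bytes free on this volume:", free)

os.remove(path)                               # the file system reclaims the blocks
```

The file system, not the program, decides which sectors hold the data; the program only ever sees names, sizes, and offsets.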

Protection
Protection and security require that computer resources such as the CPU, software, memory etc.
are protected. This extends to the operating system as well as the data in the system. This can
be done by ensuring integrity, confidentiality and availability in the operating system. The
system must be protected against unauthorized access, viruses, worms etc.

Threats to Protection and Security


A threat is a program that is malicious in nature and leads to harmful effects for the system.
Some of the common threats that occur in a system are −
Virus
Viruses are generally small snippets of code embedded in a system. They are very dangerous
and can corrupt files, destroy data, crash systems etc. They can also spread further by
replicating themselves as required.
Trojan Horse
A trojan horse can secretly access the login details of a system. A malicious user can then use
these to enter the system disguised as a harmless user and wreak havoc.
Trap Door
A trap door is a security breach that may be present in a system without the knowledge of the
users. It can be exploited to harm the data or files in a system by malicious people.
Worm
A worm can destroy a system by using its resources to extreme levels. It can generate multiple
copies which claim all the resources and don't allow any other processes to access them. A
worm can shut down a whole network in this way.
Denial of Service
This type of attack prevents legitimate users from accessing a system. It floods the
system with requests so that it becomes overwhelmed and cannot work properly for other users.

Protection and Security Methods


The different methods that may provide protection and security for different computer systems are

Authentication
This deals with identifying each user in the system and making sure they are who they claim to
be. The operating system makes sure that all the users are authenticated before they access
the system. The different ways to make sure that the users are authentic are:

 Username/ Password

Each user has a distinct username and password combination and they need to enter it
correctly before they can access the system.

 User Key/ User Card

The users need to punch a card into the card slot or use their individual key on a keypad
to access the system.

 User Attribute Identification

Different user attribute identifications that can be used are fingerprints, retina scans, etc.
These are unique for each user and are compared with the existing samples in the
database. The user can only access the system if there is a match.
One Time Password
These passwords provide a lot of security for authentication purposes. A one time password
can be generated exclusively for a login every time a user wants to enter the system. It cannot
be used more than once. The various ways a one time password can be implemented are −

 Random Numbers

The system can ask for numbers that correspond to pre-arranged letters.
This combination can be changed each time a login is required.

 Secret Key

A hardware device can create a secret key related to the user id for login. This key can
change each time.
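A one-time password scheme can be sketched in a few lines. This is an illustrative Python toy, not a real protocol: the `issued` store and the 6-digit format are assumptions. The code is random, and a replay of the same code is rejected, which is the defining property of a one-time password.

```python
import secrets

issued = set()  # hypothetical store of codes that were already accepted

def generate_otp():
    """Generate a random 6-digit one-time password."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_otp(submitted, expected):
    """Accept a code only if it matches and has never been used before."""
    if submitted != expected or submitted in issued:
        return False
    issued.add(submitted)   # mark it used: it cannot be used more than once
    return True

code = generate_otp()
print(verify_otp(code, code))   # True: first use succeeds
print(verify_otp(code, code))   # False: the replay is rejected
```

A hardware token or server would derive the code from a secret key instead of pure randomness, but the single-use check is the same.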

Process Synchronization

The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization.

Race Condition
A race condition typically occurs when two or more threads read, write and possibly
make decisions based on memory that they are accessing concurrently.
Critical Section
The regions of a program that access shared resources and may cause race conditions are
called critical sections. To avoid race conditions among the processes, we need to ensure that
only one process at a time can execute within the critical section.

The critical section refers to the segment of code where processes access shared
resources, such as common variables and files, and perform write operations on them.
Since processes execute concurrently, any process can be interrupted mid-execution.

AFTER ACADEMY
In the Operating System, there are a number of processes present in particular
states. At the same time, we have a limited number of resources, so those
resources need to be shared among various processes. But we must make sure
that no two processes use the same resource at the same time, because this
may lead to data inconsistency. So, process synchronization is needed in
the Operating System. Processes that share resources with each
other are called Cooperative Processes, and processes whose execution does
not affect the execution of other processes are called Independent Processes.

Race Condition and Critical Section

Race Condition
In an Operating System, we have a number of processes and these processes
require a number of resources. Now, think of a situation where we have two
processes and these processes are using the same variable "a". They are reading the
variable and then updating the value of the variable and finally writing the data in
the memory.

SomeProcess() {
    ...
    read(a)      // instruction 1
    a = a + 5    // instruction 2
    write(a)     // instruction 3
    ...
}
In the above, you can see that the process, after doing some operations, reads the
value of "a", then increments it by 5, and at last writes the new value of "a" back to
memory. Now, suppose we have two processes P1 and P2 that need to be executed.
Let's take the following two cases, and assume that the value of "a" is 10 initially.

1. In this case, process P1 is executed fully (i.e. all three instructions), and after
that, process P2 is executed. So, process P1 first reads the value of "a" as 10,
then increments it by 5 to make it 15, and lastly writes this value to memory. So,
the current value of "a" is 15. Now, process P2 reads the value, i.e. 15, increments
it by 5 (15 + 5 = 20), and finally writes it to memory, i.e. the new value of "a" is
20. So, in this case, the final value of "a" is 20.
2. In this case, let's assume that process P1 starts executing. So, it reads the
value of "a" from memory, and that value is 10 (the initial value of "a"). Now, at
this time, context switching happens between processes P1 and P2. P2 moves to the
running state, P1 moves to the waiting state, and the context of the P1 process is
saved. As process P1 didn't change the value of "a", P2 will also read the value of
"a" as 10. It will then increment the value of "a" by 5, making it 15, and save it
to memory. After the execution of process P2, process P1 is resumed and its saved
context is restored. So, process P1 still has the value of "a" as 10 (because P1
has already executed instruction 1). It will then increment the value of "a" by 5
and write the final value to memory, i.e. a = 15. Here, the final value of "a" is 15.
In the above two cases, after the execution of the two processes P1 and P2, the
final value of "a" is different i.e. in 1st case it is 20 and in 2nd case, it is 15. What's
the reason behind this?

The processes are using the same resource here, i.e. the variable "a". In the first
case, process P1 executes first and then process P2 starts executing. But in the
second case, process P1 was stopped after executing one instruction, and after that
process P2 started executing. Here, both processes are operating on the same
resource, i.e. the variable "a", at the same time.

Here, the order of execution of the processes changes the output. All these processes
are in a race to claim that their output is correct. This is called a race condition.
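The lost-update scenario in case 2 above can be reproduced with Python threads. This is a deliberately rigged sketch: the `time.sleep` between the read and the write widens the window so that the context switch reliably happens there.

```python
import threading
import time

a = 10  # shared variable, initial value 10 as in the example

def add_five():
    global a
    local = a           # instruction 1: read(a)
    time.sleep(0.1)     # context switch happens here; both threads have read 10
    local = local + 5   # instruction 2: a = a + 5
    a = local           # instruction 3: write(a)

p1 = threading.Thread(target=add_five)
p2 = threading.Thread(target=add_five)
p1.start(); p2.start()
p1.join(); p2.join()

print(a)  # 15, not 20: one of the two updates was lost
```

Removing the sleep would make the wrong answer rare instead of reliable, which is exactly what makes race conditions hard to debug in practice.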

Critical Section
The code in the above part is accessed by all the processes, and this can lead to data
inconsistency. So, this code should be placed in the critical section. The critical
section code can be accessed by only one process at a time; no other process can
access it while that process runs. All the shared variables or resources that can
lead to data inconsistency are placed in the critical section.

Every solution to the Critical Section problem needs to satisfy the following three
conditions:

 Mutual Exclusion: If a process is in the critical section, then other processes
shouldn't be allowed to enter the critical section at that time, i.e. there must
be mutual exclusion between processes.
 Progress: If no process is executing in the critical section, then other
processes that need to enter the critical section should be able to enter it
within a finite time.
 Bounded Waiting: There must be a limit on the number of times a process can
enter the critical section while others are waiting, i.e. there must be some
upper bound. If there is no upper bound, then the same process will be allowed to
go into the critical section again and again and the other processes will never
get a chance to enter the critical section.
So, in order to remove the problem of race conditions, there must be
synchronization between the various processes present in the system;
otherwise, it may lead to data inconsistency, i.e. a proper order should be defined in
which the processes can execute.
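Enforcing mutual exclusion fixes the lost update described in case 2. A minimal sketch using Python's `threading.Lock` as the entry/exit protocol around the critical section:

```python
import threading
import time

a = 10
lock = threading.Lock()   # guards the critical section

def add_five():
    global a
    with lock:            # only one thread may execute this block at a time
        local = a
        time.sleep(0.1)   # even if a context switch happens here, no other
        a = local + 5     # thread can enter the critical section meanwhile

threads = [threading.Thread(target=add_five) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(a)  # 20: the sequential result is restored
```

The `with lock:` block is exactly the critical section: the read, the update and the write now execute as one indivisible unit with respect to the other thread.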

What is Context Switching in an Operating System?
What is Context Switching?
Context switching is the process of switching the CPU from one process or task to
another. In this phenomenon, the execution of the process that is in the running
state is suspended by the kernel, and another process that is in the ready state
is executed by the CPU.

It is one of the essential features of a multitasking operating system. The
processes are switched so fast that it gives the user the illusion that all the
processes are being executed at the same time.

A context is the contents of a CPU's registers and program counter at any point in
time. Context switching can happen due to the following reasons:

 When a process of high priority comes in the ready state. In this case, the
execution of the running process should be stopped and the higher priority
process should be given the CPU for execution.
 When an interrupt occurs, the process in the running state should be
stopped and the CPU should handle the interrupt before doing anything
else.
 When a transition between the user mode and kernel mode is required then
you have to perform the context switching.
The following steps are performed during a context switch:

 Firstly, the context of the process P1, i.e. the process in the running
state, is saved in the Process Control Block of process P1, i.e. PCB1.
 Now, you have to move the PCB1 to the relevant queue i.e. ready queue, I/O
queue, waiting queue, etc.
 From the ready state, select the new process that is to be executed i.e. the
process P2.
 Now, update the Process Control Block of process P2 i.e. PCB2 by setting
the process state to running. If the process P2 was earlier executed by the
CPU, then you can get the position of last executed instruction so that you
can resume the execution of P2.
 Similarly, if you want to execute the process P1 again, then you have to
follow the same steps as mentioned above(from step 1 to 4).
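The steps above can be sketched with a toy Process Control Block. The field names here (`pc`, `acc`, `registers`, `state`) are illustrative only, not any real kernel's layout:

```python
def context_switch(running_pcb, next_pcb, cpu):
    """Toy version of the context-switch steps described above."""
    running_pcb["registers"] = dict(cpu)   # step 1: save P1's context in PCB1
    running_pcb["state"] = "ready"         # step 2: move PCB1 to a queue
    next_pcb["state"] = "running"          # step 4: mark P2 as running
    cpu.clear()
    cpu.update(next_pcb["registers"])      # restore P2 from where it left off

cpu = {"pc": 120, "acc": 7}                               # P1 is on the CPU
pcb1 = {"pid": 1, "state": "running", "registers": {}}
pcb2 = {"pid": 2, "state": "ready", "registers": {"pc": 40, "acc": 0}}

context_switch(pcb1, pcb2, cpu)   # step 3: P2 was selected from the ready queue
print(cpu["pc"])                  # 40: the CPU resumes P2 at its saved position
print(pcb1["registers"])          # P1's context is preserved for a later switch
```

Running the same function again with the roles reversed would restore P1 exactly where it was suspended, which is why the saved context must be complete.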
For context switching to happen, at least two processes are required in general; in
the case of the round-robin algorithm, context switching can occur with the help of
only one process.
The time involved in the context switching of one process by other is called the
Context Switching Time.

Advantage of Context Switching


Context switching is used to achieve multitasking, i.e. multiprogramming with
time-sharing. Multitasking gives users the illusion that more than one process is
being executed at the same time. But in reality, only one task is being executed at
a particular instant of time by a processor. The context switching is so fast that
the user feels that the CPU is executing more than one task at the same time.

Disadvantage of Context Switching


The disadvantage of context switching is that it takes time, i.e. the context
switching time. Time is required to save the context of the process that is in the
running state and then load the context of the process that is about to enter the
running state. During that time, no useful work is done by the CPU from the user's
perspective. So, context switching is pure overhead in this sense.

What is Deadlock?
Deadlock is a situation where two or more processes are waiting for each other.
For example, let us assume we have two processes P1 and P2. Process P1 is
holding the resource R1 and is waiting for the resource R2. At the same time,
process P2 is holding the resource R2 and is waiting for the resource R1. So,
process P1 is waiting for process P2 to release its resource and, at the same time,
process P2 is waiting for process P1 to release its resource. No one is releasing
any resource, so both are waiting for each other. This leads to infinite waiting,
and no work is done. This is called Deadlock.
Coffman conditions

 Mutual Exclusion: A resource can be held by only one process at a time. In
other words, if a process P1 is using some resource R at a particular instant
of time, then some other process P2 can't hold or use the same resource R at
that particular instant of time. The process P2 can make a request for that
resource R, but it can't use the resource simultaneously with process P1.

 Hold and Wait: A process can hold a number of resources at a time and at
the same time, it can request for other resources that are being held by some
other process. For example, a process P1 can hold two resources R1 and R2
and at the same time, it can request some resource R3 that is currently held
by process P2.
 No preemption: A resource can't be forcefully preempted from a process by
another process. For example, if a process P1 is using some resource R,
then some other process P2 can't forcefully take that resource (otherwise,
what would be the need for the various scheduling algorithms?). Process P2
can request the resource R and wait for it to be freed by process P1.
 Circular Wait: Circular wait is a condition when the first process is waiting
for the resource held by the second process, the second process is waiting for
the resource held by the third process, and so on. At last, the last process is
waiting for the resource held by the first process. So, every process is
waiting for each other to release the resource and no one is releasing their
own resource. Everyone is waiting here for getting the resource. This is
called a circular wait.

Deadlock will happen only if all the above four conditions hold simultaneously.
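Since all four conditions must hold, breaking any one of them prevents deadlock. The sketch below attacks Circular Wait: every thread acquires the two locks in one fixed global order (ordering by `id()` here is just an illustrative convention), so the P1/P2 cycle from the example cannot form.

```python
import threading

r1 = threading.Lock()   # resource R1
r2 = threading.Lock()   # resource R2

def worker(name, first, second, log):
    # Acquire in a fixed global order, regardless of the requested order.
    a, b = sorted([first, second], key=id)
    with a:
        with b:
            log.append(name)   # both resources held: do the work

log = []
# P1 asks for (R1, R2) and P2 asks for (R2, R1): the classic deadlock setup.
p1 = threading.Thread(target=worker, args=("P1", r1, r2, log))
p2 = threading.Thread(target=worker, args=("P2", r2, r1, log))
p1.start(); p2.start()
p1.join(); p2.join()

print(sorted(log))  # ['P1', 'P2']: both finish instead of waiting forever
```

If each worker instead acquired its locks in the order it asked for them, the two threads could each grab one lock and wait forever for the other, which is exactly the circular wait described above.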

Difference between Deadlock and Starvation
There is a difference between Deadlock and Starvation, and you shouldn't confuse
the two. In the case of Deadlock, each and every process is waiting for another to
release a resource. But in the case of starvation, the high-priority processes keep
on executing and the lower-priority processes keep on waiting for their execution.
So, every deadlock is starvation, but not every starvation is a deadlock. Deadlock
is infinite waiting, but starvation is not infinite waiting; starvation is long
waiting. If the higher-priority processes stop arriving, then the lower-priority
process will eventually get a chance to execute in the case of starvation. So, in
the case of starvation, we have long waiting, not infinite waiting.

Process Address Space


The process address space is the set of logical addresses that a process references in its
code. For example, when 32-bit addressing is in use, addresses can range from 0 to 0x7fffffff;
that is, 2^31 possible numbers, for a total theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses at
the time of memory allocation to the program. There are three types of addresses used in a
program before and after memory is allocated −

1. Symbolic addresses: The addresses used in source code. The variable names,
constants, and instruction labels are the basic elements of the symbolic address
space.

2. Relative addresses: At the time of compilation, a compiler converts symbolic
addresses into relative addresses.

3. Physical addresses: The loader generates these addresses at the time when a
program is loaded into main memory.

Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred to
as a physical address space.
The runtime mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device.

Memory management: allocation and deallocation of processes between RAM and the disk drive.

The MMU uses the following mechanism to convert a virtual address to a physical address.


 The value in the base register is added to every address generated by a user process,
which is treated as an offset at the time it is sent to memory. For example, if the base
register value is 10000, then an attempt by the user to use address location 100 will be
dynamically relocated to location 10100.
 The user program deals with virtual addresses; it never sees the real physical
addresses.
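The base-register relocation described above is easy to sketch. The limit value below is an assumption, added to show the bounds check a real MMU performs alongside relocation:

```python
BASE_REGISTER = 10000   # set by the OS when the process is loaded into memory
LIMIT = 4096            # size of the process's logical address space (assumed)

def translate(logical_address):
    """Relocate a logical (virtual) address to a physical one, MMU-style."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("trap: logical address out of bounds")
    return BASE_REGISTER + logical_address   # physical = base + offset

print(translate(100))   # 10100, matching the example above
```

The user program only ever generates the logical addresses 0..LIMIT-1; relocating the program elsewhere in RAM only requires changing BASE_REGISTER, not the program.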

Static vs Dynamic Loading
