Os QB

1. Explain critical section problem with example

The critical section problem is to design a protocol that a group of processes can follow, so that when one process has entered its critical section, no other process is allowed to execute in its critical section.

The critical section refers to the segment of code where processes access shared
resources, such as common variables and files, and perform write operations on them.

Since processes execute concurrently, any process can be interrupted mid-execution. In the case of shared resources, partial execution of processes can lead to data inconsistencies.

Solutions to the critical section problem

Any solution to the critical section problem must satisfy the following requirements:

 Mutual exclusion: When one process is executing in its critical section, no other process is allowed to execute in its critical section.
 Progress: When no process is executing in its critical section and there exists a process that wishes to enter its critical section, the decision of which process enters next cannot be postponed indefinitely.
 Bounded waiting: There must be a bound on the number of times other processes are allowed to enter their critical sections after a process has requested to enter its critical section and before that request is granted.

Example:
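A minimal sketch in Python (the counter and thread counts are illustrative): two threads increment a shared counter, and the increment is the critical section. Without the lock, interleaved read-modify-write steps would lose updates; with it, mutual exclusion holds.

```python
import threading

counter = 0                # shared resource
lock = threading.Lock()    # guards the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:         # entry section: acquire the lock
            counter += 1   # critical section: read-modify-write
        # exit section: lock released automatically by `with`

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)             # 200000: no updates lost
```

Removing the `with lock:` line makes the final count nondeterministic, which is exactly the data inconsistency described above.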
2. Explain methods of handling deadlock

Deadlock is a situation where a process or a set of processes is blocked, each waiting for some resource that is held by some other waiting process. It is an undesirable state of the system.

Deadlock happens only when mutual exclusion, hold and wait, no preemption, and circular wait hold simultaneously.

Methods of handling deadlocks :
There are three approaches to deal with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance
3. Deadlock detection
1. Deadlock Prevention :
The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of the first three necessary conditions of deadlock, i.e., mutual exclusion, no preemption, and hold and wait. The direct method prevents the occurrence of circular wait.
The idea behind the approach is simple: make one of the four conditions fail. The difficulty lies in implementing this practically in the system.
Prevention techniques –
Mutual exclusion – is supported by the OS for non-sharable resources, so it generally cannot be denied.
Hold and Wait – this condition can be prevented by requiring that a process request all its required resources at one time, blocking the process until all of its requests can be granted simultaneously. But this prevention does not yield good results because:
long waiting times are required
allocated resources are used inefficiently
a process may not know all its required resources in advance
No preemption – techniques for no preemption are:
 If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released and, if necessary, requested again together with the additional resource.
 If a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources. This works only if the two processes do not have the same priority.

Circular wait – One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration.
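The resource-ordering rule above can be sketched in Python (the two locks and thread names are illustrative): every thread acquires locks in the same fixed rank order, so a cycle of waits, and hence circular wait, can never form.

```python
import threading

# Assign each resource a fixed rank; every thread must acquire
# resources in increasing rank order.
lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2
completed = []

def transfer():
    with lock_a:            # rank 1 first...
        with lock_b:        # ...then rank 2
            completed.append("transfer")

def audit():
    with lock_a:            # same order in every thread, even if this
        with lock_b:        # thread logically "needs" b before a
            completed.append("audit")

t1 = threading.Thread(target=transfer)
t2 = threading.Thread(target=audit)
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(completed))    # both threads finish; no deadlock is possible
```

Had `audit` taken `lock_b` before `lock_a`, the two threads could each hold one lock and wait forever for the other, which is exactly the circular wait being prevented.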

2. Deadlock Avoidance :
This approach allows the first three necessary conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached. It allows more concurrency than deadlock prevention.
A decision is made dynamically whether the current resource allocation request will, if
granted, potentially lead to deadlock. It requires the knowledge of future process
requests. Two techniques to avoid deadlock :
1. Process initiation denial
2. Resource allocation denial
Advantages of deadlock avoidance techniques :
 It is not necessary to preempt and roll back processes
 It is less restrictive than deadlock prevention
Disadvantages :
 Future resource requirements must be known in advance
 Processes can be blocked for long periods
 There must be a fixed number of resources available for allocation
3. Deadlock Detection :
Deadlock detection works by employing an algorithm that tracks circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held.
 This technique does not limit resource access or restrict process actions.
 Requested resources are granted to processes whenever possible.
 It never delays process initiation and facilitates online handling.
 The disadvantage is the inherent preemption losses.

3. Explain deadlock prevention method
Deadlock is a situation where a process or a set of processes is blocked, waiting for
some other resource that is held by some other waiting process. It is an undesirable
state of the system.

To prevent a deadlock the OS must eliminate one of the four necessary conditions: 1. Mutual exclusion, 2. Hold and wait, 3. No preemption, 4. Circular wait.

1. Mutual exclusion: It is necessary in any computer system because some resources (memory, CPU) must be exclusively allocated to one user at a time. No other process can use a resource while it is allocated to a process.

2. Hold and wait : If a process holding certain resources is denied a further request, it must release its original resources and, if required, request them again.

3. No preemption : It can be bypassed by allowing the operating system to deallocate resources from a process.

4. Circular wait : Circular wait can be bypassed if the operating system prevents the
formation of a circle.

• A deadlock is possible only if all four of these conditions simultaneously hold in the
system.

• Prevention strategies ensure that at least one of the conditions is always false.
4. Explain banker's algorithm for deadlock avoidance
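The banker's algorithm grants a resource request only if the resulting state is safe. A minimal sketch of its safety check is given below; the Allocation/Max/Available values are the classic illustrative textbook state, not anything specified in this document.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n = len(allocation)                      # number of processes
    # Need = Max - Allocation, per process and resource type.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    work = list(available)                   # resources currently free
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                      # no process can finish: unsafe
    return sequence

# Illustrative state: 5 processes, 3 resource types.
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))   # a safe sequence exists
```

A request is then granted only if pretending to allocate it still leaves `is_safe` returning a sequence; otherwise the process must wait.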
5. Explain bounded buffer problem
Bounded buffer problem, also called the producer-consumer problem, is
one of the classic problems of synchronization. Let's start by understanding the
problem before moving on to the solution and program code.

What is the Problem Statement?

There is a buffer of n slots and each slot is capable of storing one unit of data. There are
two processes running, namely, producer and consumer, which are operating on the
buffer.

A producer tries to insert data into an empty slot of the buffer. A consumer tries to
remove data from a filled slot in the buffer. As you might have guessed by now, those
two processes won't produce the expected output if they are being executed
concurrently.

There needs to be a way to make the producer and consumer work in a coordinated, synchronized manner.

Here's a Solution

One solution of this problem is to use semaphores. The semaphores which will be used
here are:

 mutex, a binary semaphore which is used to acquire and release the lock.

 empty, a counting semaphore whose initial value is the number of slots in the
buffer, since initially all slots are empty.

 full, a counting semaphore whose initial value is 0.

At any instant, the current value of empty represents the number of empty slots in the
buffer and full represents the number of occupied slots in the buffer.

The Producer Operation

do
{
    wait(empty);
    wait(mutex);
    /* add data to a buffer slot */
    signal(mutex);
    signal(full);
}
while(TRUE);

 Looking at the above code for a producer, we can see that a producer first waits
until there is at least one empty slot.

 Then it decrements the empty semaphore because there will now be one less
empty slot, since the producer is going to insert data in one of those slots.

 Then, it acquires lock on the buffer, so that the consumer cannot access the
buffer until producer completes its operation.

 After performing the insert operation, the lock is released and the value of full is
incremented because the producer has just filled a slot in the buffer.

The Consumer Operation

do
{
    wait(full);
    wait(mutex);
    /* remove data from a buffer slot */
    signal(mutex);
    signal(empty);
}
while(TRUE);

 The consumer waits until there is at least one full slot in the buffer.

 Then it decrements the full semaphore because the number of occupied slots will
be decreased by one, after the consumer completes its operation.

 After that, the consumer acquires lock on the buffer.

 Following that, the consumer completes the removal operation so that the data
from one of the full slots is removed.

 Then, the consumer releases the lock.

 Finally, the empty semaphore is incremented by 1, because the consumer has
just removed data from an occupied slot, thus making it empty.

6. Explain reader-writer problem

The Problem Statement

There is a shared resource which should be accessed by multiple processes. There are
two types of processes in this context: readers and writers. Any number
of readers can read from the shared resource simultaneously, but only one writer can
write to the shared resource. When a writer is writing data to the resource, no other
process can access the resource.

The readers-writers problem relates to an object such as a file that is shared between
multiple processes. Some of these processes are readers i.e. they only want to read the
data from the object and some of the processes are writers i.e. they want to write into
the object.
The readers-writers problem is used to manage synchronization so that there are no
problems with the object data. For example, if two readers access the object at the same
time there is no problem. However, if two writers or a reader and a writer access the object
at the same time, there may be problems.

Reader Process
The code that defines the reader process is given below −

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);

/* READ THE OBJECT */

wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);

In the above code, mutex and wrt are semaphores that are initialized to 1. Also, rc is a
variable that is initialized to 0. The mutex semaphore ensures mutual exclusion and wrt
handles the writing mechanism and is common to the reader and writer process code.

Writer Process
The code that defines the writer process is given below:

wait(wrt);

/* WRITE INTO THE OBJECT */

signal(wrt);

If a writer wants to access the object, a wait operation is performed on wrt. After that no
other writer can access the object. When a writer is done writing into the object, a signal
operation is performed on wrt.
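The reader and writer pseudocode above translates directly to Python; a minimal sketch (the shared object and thread counts are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # protects the reader count rc
wrt = threading.Semaphore(1)     # held by a writer, or by the reader group
rc = 0                           # number of active readers
data = {"value": 0}              # the shared object
observed = []

def reader():
    global rc
    mutex.acquire()
    rc += 1
    if rc == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    observed.append(data["value"])   # READ THE OBJECT
    mutex.acquire()
    rc -= 1
    if rc == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    data["value"] += 1           # WRITE INTO THE OBJECT
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])             # 3: every write ran exclusively
```

Because readers only take wrt once per group, any number of them can read concurrently, while each writer has the object to itself.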

7. Explain dining philosopher problem

Dining Philosophers Problem

The dining philosophers problem is another classic synchronization problem which is
used to evaluate situations where there is a need of allocating multiple resources to
multiple processes.

What is the Problem Statement?

Consider there are five philosophers sitting around a circular dining table. The dining
table has five chopsticks and a bowl of rice in the middle as shown in the below figure.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to
eat, he uses two chopsticks - one from his left and one from his right. When a
philosopher wants to think, he puts both chopsticks down at their original place.

Here's the Solution

From the problem statement, it is clear that a philosopher can think for an indefinite
amount of time. But when a philosopher starts eating, he has to stop at some point of
time. The philosopher is in an endless cycle of thinking and eating.

An array of five semaphores, stick[5], is used - one for each of the five chopsticks.

The code for each philosopher i looks like:

while(TRUE)
{
    wait(stick[i]);
    wait(stick[(i+1) % 5]);
    /* eat */
    signal(stick[i]);
    signal(stick[(i+1) % 5]);
    /* think */
}

When a philosopher wants to eat the rice, he will wait for the chopstick at his left and
picks up that chopstick. Then he waits for the right chopstick to be available, and then
picks it too. After eating, he puts both the chopsticks down.

But if all five philosophers are hungry simultaneously, and each of them picks up one
chopstick, then a deadlock situation occurs because they will be waiting for another
chopstick forever. The possible solutions for this are:
 A philosopher must be allowed to pick up the chopsticks only if both the left and
right chopsticks are available.

 Allow only four philosophers to sit at the table. That way, if all the four
philosophers pick up four chopsticks, there will be one chopstick left on the table.
So, one philosopher can start eating and eventually, two chopsticks will be
available. In this way, deadlocks can be avoided.
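The second solution above (at most four philosophers at the table) can be sketched in Python; the number of rounds each philosopher eats is illustrative:

```python
import threading

N = 5
stick = [threading.Semaphore(1) for _ in range(N)]   # one per chopstick
table = threading.Semaphore(N - 1)    # at most four philosophers seated
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        table.acquire()               # sit down only if a seat is free
        stick[i].acquire()            # wait(stick[i])
        stick[(i + 1) % N].acquire()  # wait(stick[(i+1) % 5])
        meals[i] += 1                 # eat
        stick[(i + 1) % N].release()  # signal both chopsticks
        stick[i].release()
        table.release()               # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                          # every philosopher ate all 10 rounds
```

With only four seats contending for five chopsticks, at least one seated philosopher can always obtain both chopsticks, so the all-hold-one deadlock cannot occur.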

8. Explain mutual exclusion

A mutual exclusion (mutex) is a program object that prevents simultaneous
access to a shared resource. This concept is used in concurrent programming
with a critical section, a piece of code in which processes or threads access a
shared resource. Only one thread can own the mutex at a time; a mutex with a
unique name is created when a program starts. When a thread needs a resource,
it locks the mutex to prevent concurrent access to the resource by other
threads. Upon releasing the resource, the thread unlocks the mutex.
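The lock/unlock behaviour can be sketched with Python's threading.Lock (a minimal sketch; the non-blocking acquire is used here only to make the deny-while-held behaviour visible):

```python
import threading

mutex = threading.Lock()

got_first = mutex.acquire()                 # lock the mutex: succeeds
denied = mutex.acquire(blocking=False)      # already held, access denied
mutex.release()                             # unlock on releasing the resource
got_again = mutex.acquire(blocking=False)   # the mutex is free again
mutex.release()
print(got_first, denied, got_again)         # True False True
```

In normal use the second thread would call a blocking `acquire()` and simply wait until the holder releases the mutex.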

9. Explain concept of monitor

Monitors are used for process synchronization. With the help of programming
languages, we can use a monitor to achieve mutual exclusion among the
processes.
The Monitor is a module or package which encapsulates shared data structure,
procedures, and the synchronization between the concurrent procedure
invocations
Characteristics of Monitors:
1. Inside a monitor, only one process can execute at a time.
2. Monitors offer a high level of synchronization.
3. Monitors were devised to simplify the complexity of synchronization problems.
4. Only one process can be active at a time inside the monitor.
Components of Monitor
There are four main components of the monitor:
1. Initialization
2. Private data
3. Monitor procedure
4. Monitor entry queue
Initialization: Initialization comprises the code that is executed exactly once, when the monitor is created.
Private Data: Private data is another component of the monitor. It comprises all the private data, including private procedures, that can only be used within the monitor. So, outside the monitor, private data is not visible.
Monitor Procedure: Monitor procedures are those procedures that can be called from outside the monitor.
Monitor Entry Queue: The monitor entry queue is another essential component of the monitor. It includes all the threads that are waiting to call a monitor procedure.
Advantages:
1. We can use condition variables only in monitors.
2. Monitors are comprised of shared variables and the procedures which operate on those shared variables.
3. Condition variables are present in the monitor.
4. In monitors, wait always blocks the caller.
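Python has no built-in monitor construct, but the components above can be sketched as a class whose procedures all enter through one lock, with a condition variable for waiting (the BoundedCounter class and its limit are hypothetical examples):

```python
import threading

class BoundedCounter:
    """A tiny monitor: private data + procedures + one-at-a-time entry."""
    def __init__(self, limit):                # initialization: runs once
        self._limit = limit                   # private data
        self._count = 0
        self._lock = threading.Lock()         # monitor entry lock
        self._not_full = threading.Condition(self._lock)  # condition variable

    def increment(self):                      # monitor procedure
        with self._lock:                      # only one caller active inside
            while self._count >= self._limit:
                self._not_full.wait()         # wait blocks the caller
            self._count += 1

    def reset(self):                          # monitor procedure
        with self._lock:
            self._count = 0
            self._not_full.notify_all()       # wake threads waiting on the condition

    def value(self):                          # monitor procedure
        with self._lock:
            return self._count

c = BoundedCounter(limit=3)
for _ in range(3):
    c.increment()
print(c.value())   # 3
```

A caller that invokes `increment()` on a full counter blocks inside `wait()` until some other thread calls `reset()`, mirroring the wait/signal discipline of a classical monitor.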

10. What is a semaphore? Explain it


Semaphore is simply a variable that is non-negative and shared between threads. A
semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can
be signaled by another thread. It uses two atomic operations, 1) Wait and 2) Signal, for
process synchronization.

A semaphore either allows or disallows access to the resource, which depends on how it
is set up.

Characteristics of Semaphore
Here are the characteristics of a semaphore:

 It is a mechanism that can be used to provide synchronization of tasks.
 It is a low-level synchronization mechanism.
 A semaphore always holds a non-negative integer value.
 A semaphore can be implemented using atomic test operations or by disabling interrupts.

Types of Semaphores
The two common kinds of semaphores are

 Counting semaphores
 Binary semaphores.

Counting Semaphores
This type of semaphore uses a count that allows a resource to be acquired or released
numerous times. If the initial count = 0, the counting semaphore is created in
the unavailable state.

However, if the count is > 0, the semaphore is created in the available state, and the
number of tokens it has equals its count.

Binary Semaphores
Binary semaphores are quite similar to counting semaphores, but their value is
restricted to 0 and 1. In this type of semaphore, the wait operation succeeds only if
semaphore = 1, and the signal operation succeeds when semaphore = 0. Binary
semaphores are easier to implement than counting semaphores.

Example of Semaphore
The program given below is a step-by-step sketch, which involves the usage and
declaration of a semaphore.

shared var mutex: semaphore = 1;

Process i
begin
    .
    .
    P(mutex);
    execute CS;
    V(mutex);
    .
    .
end;
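A counting semaphore can also be demonstrated concretely in Python; the sketch below (resource pool size and thread count are illustrative) limits how many threads hold a resource at once:

```python
import threading

pool = threading.Semaphore(3)       # counting semaphore: 3 identical resources
active = 0
peak = 0
guard = threading.Lock()            # protects the bookkeeping counters

def task():
    global active, peak
    pool.acquire()                  # P / wait: block if all 3 are taken
    with guard:
        active += 1
        peak = max(peak, active)    # record max simultaneous holders
    with guard:
        active -= 1
    pool.release()                  # V / signal: return the resource

threads = [threading.Thread(target=task) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 3)                    # True: never more than 3 holders at once
```

With an initial count of 1 instead of 3, the same code behaves as the binary semaphore (mutex) of the P/V example above.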

11. List services provided by OS explain it

An Operating System provides services to both the users and to the programs.

 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −

 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection
Program Execution

The OS loads a program into memory and then executes that program. It also makes
sure that once started that program can end its execution, either normally or forcefully.
The major steps during program management are:

 Loading a program into memory.
 Executing the program.
 Making sure the program completes its execution.
 Providing a mechanism for:
1. process synchronization.
2. process communication.
3. deadlock handling.

I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.

 I/O operation means a read or write operation with any file or any specific I/O
device.
 The operating system provides access to the required I/O device when needed.

File System manipulation

Programs need to read data from files and write data to files and directories. The file
handling portion of the operating system also allows users to create and delete files by
specific name along with extension, search for a given file and/or list file information. It
also includes permissions management for allowing or denying access to files or
directories based on file ownership.

Communication

Processes need to exchange information with other processes. Processes executing on the
same computer system or on different computer systems can communicate using
operating system support. Communication between two processes can be done using
shared memory or via message passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in
the memory hardware. Following are the major activities of an operating system with
respect to error handling −

 The OS constantly checks for possible errors.
 The OS takes an appropriate action to ensure correct and consistent computing.

Resource allocation
When multiple jobs are running concurrently, resources must be allocated to each
of them. Resources can be CPU cycles, main memory storage, file storage and I/O
devices. CPU scheduling routines are used here to establish how best the CPU can be
used.
Protection and Security

This is to ensure the safety of the system. Thus, user authentication is required to access
a system. It is also necessary to protect a process from another when multiple processes
are running on a system at the same time.
The OS controls the access to the resources, protects the I/O devices from invalid
access, and provides authentication through passwords.

12. Functions of OS

Functions of Operation System

An operating system is a program that acts as an interface between the user and the
computer hardware. It controls the execution of all types of applications.

The operating system performs the following functions in a device.

1. Instruction
2. Input/output Management
3. Memory Management
4. File Management
5. Processor Management
6. Job Priority
7. Special Control Program
8. Scheduling of resources and jobs
9. Security
10. Monitoring activities
11. Job accounting

Instruction: The operating system interprets and coordinates the various instructions
given by the user.

Input/output Management: The operating system determines what output results from the
input given by the user. This management involves coordinating the various input and
output devices and assigning their functions while one or more applications are executed.

Memory Management: The operating system handles the responsibility of storing any
data, system programs, and user programs in memory. This function of the operating
system is called memory management.
File Management: The operating system is helpful in making changes in the stored files
and in replacing them. It also plays an important role in transferring various files to a
device.

Processor Management: The processor is the execution unit where a program runs and
accomplishes its specified work. The operating system manages the allocation of the
processor among programs.

Job Priority: Job priority involves creating and promoting priorities; it determines what
action should be done first in a computer system.

Special Control Program: The operating system makes automatic changes to tasks
through specific control programs, which are called special control programs.

Security
The OS keeps the system and programs safe and secure through authentication. A user id
and password decide the authenticity of the user.

13. Explain Time sharing OS

A time sharing operating system is a type of operating system. An operating system is
basically a program that acts as an interface between the system hardware and the user.
Moreover, it handles all the interactions between the software and the hardware.

It allows the user to perform more than one task at a time, each task getting the same
amount of time to execute. Hence the name time sharing OS. Moreover, it is an extension
of multiprogramming systems. In multiprogramming systems, the aim is to make
maximum use of the CPU. Here, on the other hand, the aim is to minimize the response
time of the CPU.

 It is the division of CPU time among the processes when more than one task is given by
the user.
 A short duration of time is chosen for each process. This time duration is very small,
on the order of 10-100 milliseconds. It is known as the time slot, time slice, or
quantum.
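The time-slice idea above can be sketched as a simple round-robin simulation (the burst times and quantum are illustrative): each process runs for at most one quantum, then goes to the back of the ready queue.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate time sharing: bursts[i] is the CPU time process i needs.
    Returns the schedule as (pid, time_run) slices."""
    queue = deque(enumerate(bursts))   # ready queue of (pid, remaining time)
    schedule = []
    while queue:
        pid, remaining = queue.popleft()
        slice_ = min(quantum, remaining)   # run for at most one quantum
        schedule.append((pid, slice_))
        if remaining > slice_:
            queue.append((pid, remaining - slice_))  # back of the queue
    return schedule

print(round_robin([3, 5, 2], quantum=2))
# [(0, 2), (1, 2), (2, 2), (0, 1), (1, 2), (1, 1)]
```

Every waiting process gets CPU time within a few quanta, which is why response time stays low even when a long job (process 1 here) is present.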

Advantages of time sharing operating systems are −

 It provides the advantage of quick response.
 This type of operating system avoids duplication of software.
 It reduces CPU idle time.
Disadvantages of time sharing operating systems are −

 Time sharing has a problem of reliability.
 Questions of security and integrity of user programs and data can be raised.
 Problems of data communication can occur.

14. Explain batch operating system


15. Difference between multiprogramming and multitasking
16. Explain process state diagram

The process state diagram illustrates the states a process can be in, and it also
defines the flow in which a particular state can be reached by the process. Let us first
take a look at the process state diagram, which is as follows:

1. New

A program which is about to be picked up by the OS and brought into the main memory is
called a new process.

2. Ready

Whenever a process is created, it directly enters the ready state, in which it waits for
the CPU to be assigned. The OS picks new processes from the secondary memory
and puts all of them in the main memory.

The processes which are ready for the execution and reside in the main memory are
called ready state processes.

3. Running

One of the processes from the ready state will be chosen by the OS, depending upon the
scheduling algorithm, and given to the CPU for execution.

4. Block or wait

From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.
5. Completion or termination

When a process finishes its execution, it comes to the termination state. All the context
of the process (Process Control Block) will be deleted, and the process will be terminated
by the operating system.

Operations on the Process

1. Creation

Once the process is created, it enters the ready queue (in main memory) and is ready
for execution.

2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses
one process and starts executing it. Selecting the process which is to be executed next is
known as scheduling.

3. Execution

Once the process is scheduled for execution, the processor starts executing it. A
process may come to the blocked or wait state during execution; in that case the
processor starts executing other processes.

4. Deletion/killing

Once the purpose of the process is over, the OS will kill the process. The context
of the process (PCB) will be deleted and the process will be terminated by the operating
system.

17. Explain program threads


18. Explain memory swapping

Swapping is a memory management technique used to temporarily remove inactive
programs from the main memory of the computer system. Any process must be in
memory for its execution, but it can be swapped temporarily out of memory to a
backing store and then brought back into memory to complete its execution.
Swapping is done so that other processes get memory for their execution.

Due to the swapping technique, performance usually gets affected, but it also helps in
running multiple big processes in parallel. The swapping process is also known as a
technique for memory compaction. Basically, low priority processes may be swapped
out so that processes with a higher priority may be loaded and executed.

The swapping of processes by the memory manager is fast enough that some processes
will be in memory, ready to execute, when the CPU scheduler wants to reschedule the
CPU.

A variant of the swapping technique is used with priority-based scheduling algorithms.
If a higher-priority process arrives and wants service, the memory manager swaps out
lower priority processes, then loads and executes the higher priority process.

Advantages of Swapping

1. The swapping technique mainly helps the CPU to manage multiple processes
within a single main memory.
2. This technique helps to create and use virtual memory.
3. With the help of this technique, the CPU can perform several tasks
simultaneously. Thus, processes need not wait too long before their execution.
4. This technique is economical.
5. This technique can be easily applied to priority-based scheduling in order to
improve its performance.

Disadvantages of Swapping

1. Inefficiency may arise when a resource or a variable is commonly
used by the processes that are participating in the swapping process.
2. If the algorithm used for swapping is not good, then the overall method can
increase the number of page faults and thus decline the overall performance of
processing.
3. If the computer system loses power at the time of high swapping activity then
the user might lose all the information related to the program.
19. Explain segmentation in OS

In operating systems, segmentation is a memory management technique in which the
memory is divided into variable size parts. Each part is known as a segment, which can
be allocated to a process.

The details about each segment are stored in a table called a segment table. Segment
table is stored in one (or many) of the segments.

The segment table contains mainly two pieces of information about each segment:

1. Base: the base address of the segment.
2. Limit: the length of the segment.

Basically, a process is divided into segments. Like paging, segmentation divides the
memory. But there is a difference: while paging divides the memory into fixed-size
parts, segmentation divides the memory into variable-size segments, which are then
loaded into the logical memory space.
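The base/limit lookup above amounts to a simple address translation; a minimal sketch (the segment table values are illustrative, in the style of common textbook examples):

```python
def translate(segment_table, seg, offset):
    """Map a logical address (segment, offset) to a physical address.
    The offset must be within the segment's limit, else it is a
    segmentation violation (modeled here as an exception)."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset           # physical address = base + offset

# Illustrative segment table: segment number -> (base, limit)
table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

print(translate(table, 2, 53))     # 4300 + 53 = 4353
```

A reference such as (segment 1, offset 500) would trap, since 500 exceeds segment 1's limit of 400; this is how the hardware catches out-of-segment accesses.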

Types of Segmentation

Given below are the types of Segmentation:

 Virtual Memory Segmentation – With this type of segmentation, each process is
segmented into n divisions, and, most importantly, they are not all segmented at once.
 Simple Segmentation – With this type, each process is segmented into n divisions
and they are all segmented together at once, at runtime, but can be non-contiguous
(that is, they may be scattered in memory).

Characteristics of Segmentation

Some characteristics of the segmentation technique are as follows:

 The segmentation partitioning scheme is variable-size.
 Partitions of the secondary memory are commonly known as segments.
 Partition size mainly depends upon the length of modules.
 Thus, with the help of this technique, secondary memory and main memory are
divided into unequal-sized partitions.

20. Explain cryptography in OS

Cryptography is a technique of securing information and communications through the
use of codes, so that only those persons for whom the information is intended can
understand and process it, thus preventing unauthorized access to information. The
prefix "crypt" means "hidden" and the suffix "graphy" means "writing".
In cryptography, the techniques used to protect information are obtained from
mathematical concepts and a set of rule-based calculations known as algorithms, which
convert messages in ways that make them hard to decode.
Features of cryptography are as follows:
1. Confidentiality:
Information can only be accessed by the person for whom it is intended; no
other person can access it.
2. Integrity:
Information cannot be modified in storage or in transit between sender and
intended receiver without the modification being detected.
3. Non-repudiation:
The creator/sender of information cannot deny his or her intention to send the
information at a later stage.
4. Authentication:
The identities of sender and receiver are confirmed, as well as the destination/origin
of the information.
Types of cryptography:
In general there are three types of cryptography:
1. Symmetric Key Cryptography:
An encryption system where the sender and receiver of a message use a single
common key to encrypt and decrypt messages. Symmetric key systems are faster
and simpler, but the problem is that the sender and receiver have to somehow
exchange the key in a secure manner. The most popular symmetric key cryptography
system is the Data Encryption Standard (DES).
2. Hash Functions:
No key is used in this algorithm. A hash value of fixed length is calculated from
the plain text, which makes it impossible for the contents of the plain text to be
recovered. Many operating systems use hash functions to protect passwords.
3. Asymmetric Key Cryptography:
Under this system a pair of keys is used to encrypt and decrypt information. A
public key is used for encryption and a private key is used for decryption. The public
key and private key are different. Even if the public key is known by everyone, only
the intended receiver can decode the message, because he alone knows the private key.
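The password use of hash functions mentioned above can be sketched with Python's hashlib (a minimal illustration: SHA-256 with a fixed salt; real systems use a random per-user salt and a dedicated slow password hash):

```python
import hashlib

def hash_password(password, salt):
    """One-way hash: the stored digest cannot be reversed to the password."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt = b"illustrative-per-user-salt"        # in practice: os.urandom(16)
stored = hash_password("S3cure!pass", salt) # only the digest is stored on disk

# Login verification recomputes the hash and compares digests:
print(hash_password("S3cure!pass", salt) == stored)   # True
print(hash_password("wrong-guess", salt) == stored)   # False
```

Because the hash is one-way, an attacker who steals the stored digests still cannot recover the original passwords from them.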

21. explain user authentication

The user authentication process is used to identify who the user is. On a personal
computer, authentication is generally performed using a password. When a user
wants to log into a computer system, the installed operating system (OS) generally
wants to determine or check who the user is. This process is called user
authentication.

Sometimes it is very important to authenticate the user, because the computer system
may hold important documents of the owner.

User can be authenticated through one of the following way:

 User authentication using password
 User authentication using physical object
 User authentication using biometric
 User authentication using countermeasures

User Authentication using Password

User authentication using password is the most widely used form of authenticating the
user.

In this method, the user to be authenticated has to type their login name (or id)
and login password. Authenticating the user by password is an easy method and
also easy to implement.

Keeping a central list of (login, password) pairs is the simplest implementation of
the password method.

If both login and password match an entry in the list, the login is allowed: the user
is successfully authenticated and approved to log into the system.
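The central-list scheme can be sketched as follows. Storing salted hashes rather than plain passwords is an addition beyond the text (the very simplest form would store the passwords directly), and the helper names are hypothetical:

```python
import hashlib
import os

# Central list: login -> (salt, hashed password).
users = {}

def register(login: str, password: str) -> None:
    salt = os.urandom(16)  # random salt so equal passwords hash differently
    users[login] = (salt, hashlib.sha256(salt + password.encode()).digest())

def authenticate(login: str, password: str) -> bool:
    if login not in users:
        return False
    salt, stored = users[login]
    # Both login and password must match for the login to be allowed.
    return hashlib.sha256(salt + password.encode()).digest() == stored

register("alice", "S3cure!pass")
assert authenticate("alice", "S3cure!pass")   # correct pair: approved
assert not authenticate("alice", "wrong")     # wrong password: rejected
```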

How to Improve Password Security?

 Password should be a minimum of eight characters
 Password should contain both uppercase and lowercase letters
 Password should contain at least one digit and one special character
 Don't use dictionary words or well-known names such as stick, mouth, sun, albert, etc.
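The rules above (apart from the dictionary-word check, which needs a word list) can be expressed as a small checker; the function name `is_strong` is hypothetical:

```python
import string

def is_strong(password: str) -> bool:
    """Check length, case mix, digit, and special character, per the rules above."""
    return (len(password) >= 8
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

assert is_strong("Albert#42x")
assert not is_strong("sun")   # too short, no uppercase, digit, or special character
```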

User Authentication using Physical Object

User authentication using a physical object is the second way to authenticate a user.

Here, physical object may refer to Bank's Automated Teller Machine (ATM) card or any
other plastic card that is used to authenticate. To authenticate the user, plastic card is
inserted by the user into a reader associated with the terminal or computer system.

Generally, the user must not only insert the card that serves as the physical object
but also type in a password, to prevent someone from using a lost or stolen card.

User Authentication using Biometric

User authentication using biometrics is the third authentication method.

This method measures physical characteristics of the user that are very hard to
forge. These are called biometrics.

For example, a fingerprint, voiceprint, or retina-scan reader at the terminal could
verify the identity of the user.
User Authentication using Countermeasures

The countermeasures method is used to make unauthorized access much harder.

For example, a company could have a policy that employees working in the
Computer Science (CS) department are allowed to log in only from 10 A.M. to 4 P.M.,
Monday to Saturday, and then only from a machine in the CS department connected
to the company's Local Area Network (LAN).

Any attempt by a CS department employee to log in at the wrong time or from the
wrong place would then be treated as an attempted break-in and a login failure.
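The example policy can be sketched as a simple check; the subnet prefix and the helper name `login_permitted` are assumptions made for illustration:

```python
from datetime import datetime

ALLOWED_HOURS = range(10, 16)     # 10 A.M. up to (not including) 4 P.M.
ALLOWED_DAYS = range(0, 6)        # Monday (0) through Saturday (5)
CS_LAN_PREFIX = "10.1."           # hypothetical CS-department subnet

def login_permitted(when: datetime, source_ip: str) -> bool:
    # All three conditions must hold; otherwise the attempt is a break-in.
    return (when.hour in ALLOWED_HOURS
            and when.weekday() in ALLOWED_DAYS
            and source_ip.startswith(CS_LAN_PREFIX))

assert login_permitted(datetime(2024, 1, 1, 11, 0), "10.1.2.3")      # Monday 11 A.M.
assert not login_permitted(datetime(2024, 1, 7, 11, 0), "10.1.2.3")  # Sunday
assert not login_permitted(datetime(2024, 1, 1, 11, 0), "8.8.8.8")   # outside the LAN
```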

22. Principle of Protection

Processes in the system must be protected from one another's activities; otherwise
the normal working of the system may be disrupted. Protection refers to a
mechanism for controlling the access of programs, processes, or users to the
resources defined by the computer system.

Goals of protection-

Provides a means to distinguish between authorized and unauthorized usage.

To prevent mischievous, intentional violation of an access restriction by a user.

To ensure that each program component which is active in a system uses system
resources only in ways consistent with stated policies.

To detect latent errors at the interfaces between the component subsystems. Early
detection helps in preventing malfunctioning of subsystems.

To enforce policies governing resource usage.

Principles of protection-

The time-tested guiding principle used for protection is called the principle of least
privilege.
An OS following this principle implements its features, programs, system calls, and data
structures so that failure or compromise of a component does the minimum damage.

It provides mechanisms to enable privileges when they are needed and to disable them
when not needed.

Privileged function accesses have audit trails that enable a programmer, systems
administrator, or law-enforcement officer to trace all protection and security activities
of the system.

We can create separate accounts for each user with just the privileges that the user
needs.

23. Explain different methods of implementing access matrix

As discussed earlier, the access matrix is likely to be very sparse and takes up a large
chunk of memory. Therefore, direct implementation of the access matrix for access
control is storage-inefficient.
The inefficiency can be removed by decomposing the access matrix into rows or
columns: rows and columns can be collapsed by deleting null entries. From these
decomposition approaches, the following widely used implementations of the
access matrix can be formed:
1. Global Table

 The simplest approach is one big global table with <domain, object, rights> entries.
 Unfortunately this table is very large (even if sparse) and so cannot be kept in memory
without invoking virtual memory techniques.
 There is also no good way to specify groupings: if everyone has access to some
resource, the table still needs a separate entry for every domain.

2. Access Lists for Objects

 Each column of the table can be kept as a list of the access rights for that particular
object, discarding blank entries.
 For efficiency a separate list of default access rights can also be kept, and checked first.
3. Capability Lists for Domains

 In a similar fashion, each row of the table can be kept as a list of the capabilities of that
domain.
 Capability lists are associated with each domain, but not directly accessible by the
domain or any user process.
 Capability lists are themselves protected resources, distinguished from other data in
one of two ways:
 A tag, possibly hardware implemented, distinguishing this special type of data. ( other
types may be floats, pointers, booleans, etc. )
 The address space for a program may be split into multiple segments, at least one of
which is inaccessible by the program itself, and used by the operating system for
maintaining the process's access right capability list.
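The row/column decompositions can be illustrated by deriving both representations from one sparse matrix; the domains, objects, and rights shown are illustrative:

```python
# Sparse access matrix as (domain, object) -> set of rights.
matrix = {("D1", "F1"): {"read"}, ("D2", "printer"): {"print"}}

# Access lists: one list per object (a column of the matrix), kept with the object.
access_lists = {}
for (dom, obj), rights in matrix.items():
    access_lists.setdefault(obj, {})[dom] = rights

# Capability lists: one list per domain (a row of the matrix), kept with the domain.
capability_lists = {}
for (dom, obj), rights in matrix.items():
    capability_lists.setdefault(dom, {})[obj] = rights

assert access_lists["F1"]["D1"] == {"read"}            # lookup by object
assert capability_lists["D2"]["printer"] == {"print"}  # lookup by domain
```

Both structures store exactly the non-null entries, which is why they are more storage-efficient than the full matrix.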

4. A Lock-Key Mechanism

 Each resource has a list of unique bit patterns, termed locks.
 Each domain has its own list of unique bit patterns, termed keys.
 Access is granted if one of the domain's keys fits one of the resource's locks.
 Again, a process is not allowed to modify its own keys.
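A minimal sketch of the lock-key check, with made-up bit patterns:

```python
# Each resource carries locks; each domain carries keys (both are bit patterns).
resource_locks = {"printer": {0b1010, 0b0110}}
domain_keys = {"D2": {0b0110}, "D1": {0b0001}}

def can_access(domain: str, resource: str) -> bool:
    # Access is granted if any key of the domain matches any lock of the resource.
    return bool(domain_keys.get(domain, set()) & resource_locks.get(resource, set()))

assert can_access("D2", "printer")       # key 0b0110 fits a lock
assert not can_access("D1", "printer")   # no matching key
```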

5. Comparison

 Each of the methods here has certain advantages or disadvantages, depending on the
particular situation and task at hand.
 Many systems employ some combination of the listed methods.

24. Explain access matrix with example


The Access Matrix is a security model of the protection state in a computer system,
represented as a matrix. It defines the rights of each process executing in a domain
with respect to each object. The rows of the matrix represent domains and the
columns represent objects. Each cell of the matrix holds the set of access rights
granted to processes of that domain; that is, each entry (i, j) defines the set of
operations that a process executing in domain Di can invoke on object Oj.
Consider a matrix with four domains and four objects: three files (F1, F2, F3) and one
printer. A process executing in D1 can read files F1 and F3. A process executing in
domain D4 has the same rights as D1, but it can also write to those files. The printer
can be accessed only by a process executing in domain D2. The access-matrix
mechanism involves many policies and semantic properties. Specifically, we must
ensure that a process executing in domain Di can access only those objects that are
specified in row i.
The association between a domain and its processes can be either static or dynamic.
The access matrix provides a mechanism for defining the control of this association
between domains and processes.
When we switch a process from one domain to another, we execute a switch
operation on an object (the domain). We can control domain switching by including
domains among the objects of the access matrix. A process should be able to switch
from one domain (Di) to another domain (Dj) if and only if the switch right is present
in access(i, j).
For example, a process executing in domain D2 can switch to domains D3 and D4, a
process executing in domain D4 can switch to domain D1, and a process executing
in domain D1 can switch to domain D2.
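The matrix described above can be modeled and queried as follows; D3's file rights are assumed for illustration (the description does not specify them), following the usual textbook example:

```python
# Access matrix: domain -> {object: set of rights}. Domains appear as objects
# too, so the "switch" right controls domain switching.
matrix = {
    "D1": {"F1": {"read"}, "F3": {"read"}, "D2": {"switch"}},
    "D2": {"printer": {"print"}, "D3": {"switch"}, "D4": {"switch"}},
    "D3": {"F2": {"read"}},                         # assumed for illustration
    "D4": {"F1": {"read", "write"}, "F3": {"read", "write"}, "D1": {"switch"}},
}

def allowed(domain: str, obj: str, op: str) -> bool:
    # A process in `domain` may invoke `op` on `obj` only if entry (domain, obj)
    # contains that right.
    return op in matrix.get(domain, {}).get(obj, set())

assert allowed("D1", "F1", "read")
assert allowed("D4", "F1", "write")     # D4 has D1's rights plus write
assert allowed("D2", "D3", "switch")    # D2 may switch to D3
assert not allowed("D2", "F1", "read")  # entry (D2, F1) is empty
```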

25. CONTIGUOUS ALLOCATION


• In this allocation, each file occupies a set of contiguous blocks on the disk.
• If the blocks are allocated to the file in such a way that all the logical blocks of the
file get contiguous physical blocks on the hard disk, the allocation scheme is known
as contiguous allocation.
In the accompanying figure (not reproduced here), there are three files in the
directory. The starting block and the length of each file are recorded in a table;
contiguous blocks are assigned to each file as per its need.
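A sketch of the directory table and the logical-to-physical block mapping; the file names and numbers are illustrative, since the figure is not reproduced:

```python
# Directory table for contiguous allocation: file -> (start block, length).
directory = {"mail": (19, 6), "list": (28, 4), "count": (2, 3)}  # sample values

def blocks_of(name: str) -> list[int]:
    start, length = directory[name]
    # Contiguous allocation: logical block i maps to physical block start + i.
    return list(range(start, start + length))

assert blocks_of("count") == [2, 3, 4]   # 3 contiguous blocks starting at 2
```

This direct start + offset mapping is why contiguous allocation supports fast random access, at the cost of external fragmentation when files grow.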

Contiguous memory allocation is a memory management technique in which,
whenever a user process requests memory, a single section of contiguous memory
is given to that process according to its requirement.

It is achieved by dividing the memory into fixed-sized partitions or variable-sized
partitions.

Fixed-sized partition scheme:


It is also known as static partitioning.
The memory is divided into fixed-sized partitions.

In this scheme, each partition may contain exactly one process. This limits
the extent of multiprogramming, as the number of partitions decides the number
of processes.

Advantages of Fixed-sized partition scheme:

The advantages of fixed-sized partition scheme are as follows:


1. This scheme is easy to implement.
2. It makes management easier.
3. It supports multiprogramming.

Disadvantages of Fixed-sized partition scheme:


The disadvantages of fixed-sized partition scheme are as follows:
1. It limits the extent of multiprogramming.
2. The unused portion of each partition cannot be used to load other programs.
3. The size of the process cannot be larger than the size of the partition, hence limiting the
size of the processes.

Variable-sized partition scheme:

It is also known as dynamic partitioning. In this scheme, allocation is done
dynamically.
The size of each partition is not declared initially; it is fixed only once the size
of the process is known. The size of the partition equals the size of the process,
hence preventing internal fragmentation.
(When a process is smaller than its partition, part of the partition is wasted;
this is known as internal fragmentation. It is a concern in static partitioning,
which dynamic partitioning aims to solve.)

Advantages of Variable-sized partition scheme:

The advantages of a variable-sized partition scheme are as follows:


1. There is no internal fragmentation.
2. The degree of multiprogramming is dynamic.
3. There is no limitation on the size of the processes.

Disadvantages of Variable-sized partition scheme:

The disadvantages of a variable-sized partition scheme are as follows:


1. It is challenging to implement.
2. It is prone to external fragmentation.
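As an illustration of dynamic partitioning, a first-fit allocator over variable-sized free holes might look like this (first fit is one common placement policy; the text does not name a specific one):

```python
def first_fit(holes: list, size: int):
    """holes: list of (start, length) free regions. Returns the start address
    of the allocated region, or None if no hole is big enough."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            # Carve the process out of the front of the hole; the remainder
            # stays free (such leftovers are where external fragmentation
            # accumulates).
            holes[i] = (start + size, length - size)
            return start
    return None

holes = [(0, 100), (200, 50)]
assert first_fit(holes, 120) is None   # 150 bytes free, but no single hole fits
assert first_fit(holes, 40) == 0       # allocated at the front of the first hole
assert holes[0] == (40, 60)            # remainder of that hole stays free
```

The failed 120-unit request despite 150 free units illustrates external fragmentation: total free space suffices, but it is not contiguous.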
26. Explain types of page replacement algo
When a page fault occurs and no free page frame exists in memory, a page
replacement algorithm is used to decide which resident page to evict so that the
needed page can be loaded.

• Page fault occurs if a running process references a nonresident page.

• The goal of a replacement strategy is to minimize the fault rate. To evaluate a
replacement algorithm, the following parameters are used:

1. The size of a page
2. A set of reference strings
3. The number of page frames


A page replacement algorithm deals with how the kernel decides which page to
reclaim. Operating systems select either a local or a global page replacement policy.
A local replacement policy allocates a certain number of frames to each process; if a
process needs a new page, it must replace one of its own pages.
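As a concrete example of one such algorithm, the sketch below simulates FIFO page replacement and counts faults for a given reference string and frame count (FIFO is one common policy; the text does not single it out):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement with a fixed number of frames."""
    memory, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in memory:            # page fault: page is nonresident
            faults += 1
            if len(memory) == frames:     # no free frame: evict the oldest page
                memory.remove(order.popleft())
            memory.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10   # more frames, more faults: Belady's anomaly
```

This reference string is the classic demonstration of Belady's anomaly: under FIFO, adding a frame can increase the fault count.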
