Operating System
System software consists of the system-side computer programs that are required for the working of the computer itself, although the user of the computer does not need to know about the functionality of system software while using the computer. Examples of system software include: operating systems, device drivers, language processors, etc.

Operating systems :- These are the core software programs that manage a computer's resources, including the CPU, memory and storage devices. Popular operating systems include Windows, macOS, Linux, and Android.

Device Drivers :- These programs enable a computer to communicate with hardware devices such as printers, scanners, and graphics cards. Device drivers act as a translator between the hardware and the operating system, allowing the two to work together seamlessly.

Utility programs :- These programs perform system-level tasks such as managing memory, scheduling tasks, and controlling hardware devices. Examples of utility programs include antivirus software, disk defragmenters, and system backup tools.

What is Application Software?

A computer software which is developed to perform a specific function is known as application software. Examples include:

➢ Word processing programs such as Microsoft Word and Google Docs

➢ Spreadsheet programs such as Microsoft Excel and Google Sheets

➢ Database programs such as Oracle and Microsoft Access

An operating system (OS) is a software program that manages and controls the hardware and software resources of a computer system. It acts as an interface between the user and the computer's hardware and provides services and tools for application programs to interact with the computer.

The operating system is responsible for managing the computer's memory, processing power, input/output devices and other resources. It also provides a platform for running other software applications, manages the file system and provides a user interface for interacting with the computer.

Some common examples of operating systems include :-

Microsoft Windows :- This is the most widely used operating system for personal computers

macOS :- This is the operating system used on Apple's Mac computers
Linux :- This is an open-source operating system that is widely used in servers and other systems.

Android :- This is an operating system designed for mobile devices such as smartphones and tablets.

Types of operating systems

Single-User systems

A single-user operating system is one that is designed to be used by one user at a time. It is typically found on personal computers, workstations and other systems that are used by individual users rather than shared among multiple users. The goals of such systems are maximizing user convenience and responsiveness, instead of maximizing the utilization of the CPU and peripheral devices.

Single-user operating systems are designed to provide a simple, user-friendly interface that allows users to perform basic tasks such as file management, document creation and editing, and web browsing. They typically provide a graphical user interface (GUI) that allows users to interact with the computer system using icons, menus, and windows.

Single-user operating systems are generally easier to use and manage than multi-user operating systems, which are designed to support multiple users simultaneously. They are also less complex, as they do not need to manage access to system resources by multiple users.

Batch systems

In operating systems, batch systems refer to a type of operating system that is designed to process a large number of similar jobs or tasks in batch mode, without any user interaction during the processing of each job.

Batch systems were popular in the early days of computing, when computers were large and expensive and users had to share the system resources. In this environment, users would submit their jobs or tasks to a system operator, who would then queue them up for processing in batch mode.

Batch systems typically use a queue to manage the jobs or tasks that are waiting to be processed. Each job is executed in turn, without any user interaction, and the system may print out a report or summary of the results once the job has finished.

One advantage of batch systems is that they can efficiently process large volumes of similar jobs, such as printing reports or processing payroll. They can also help to maximize the utilization of system resources by allowing multiple jobs to be processed simultaneously.

Multi-programmed systems

In operating systems, a multi-programmed system is a type of operating system that allows multiple programs to run on a computer system simultaneously, with the goal of maximizing the utilization of system resources and improving overall system performance.

In a multi-programmed system, multiple programs are loaded into the computer's memory and are executed concurrently, with the operating system controlling the execution of each program. The operating system uses techniques such as time-sharing, in which each program is allocated a small slice of time to execute, and priority scheduling, in which higher-priority programs are given more time to execute.

One advantage of multi-programmed systems is that they can improve overall system performance by allowing multiple programs to run simultaneously and share system resources such as CPU time and memory. This can help to reduce the idle time of the CPU and improve overall system throughput.

Another advantage of multi-programmed systems is that they can improve the response time of the system for interactive applications, by allowing the operating system to switch quickly between different programs as needed.

Time-sharing or Multi-tasking systems

Time-sharing or multi-tasking systems in operating systems refer to a type of system that allows multiple users or tasks to run concurrently on a single computer system. These systems enable users to interact with the computer in real
time and provide a seamless experience for running multiple applications simultaneously.

In a time-sharing or multi-tasking system, the CPU time is divided into multiple time slots or time slices, and each user or task is allocated a small slice of CPU time to execute its instructions. The operating system manages the allocation of CPU time, and switches between tasks or users rapidly to give the impression of simultaneous execution.

Parallel or Multiprocessor systems

These blocks represent a process in an operating system.

Process number (process id) :- Shows a unique id of a particular process; every process has to be represented by a unique id.

Process state :- Tells us which of the possible states the process is in at that particular moment.

Program counter :- It indicates the address of the next instruction that has to be executed for that process.

Hard real time systems :- These systems have strict timing requirements and must meet their deadlines, even in the face of resource contention or other system constraints.

Soft real time systems :- These systems have less strict timing requirements and can tolerate occasional missed deadlines, as long as the system can recover and continue to function properly.

CHAPTER TWO

PROCESS AND PROCESS MANAGEMENT

Process Scheduling :-

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. For a uni-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

The process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.
SCHEDULING QUEUES
In operating systems, scheduling queues refer to
the data structures used to organize and manage
the execution of processes or threads. These
queues store the processes or threads in different
states, allowing the scheduler to determine which
one should be executed next.
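The ready queue described above can be sketched with a simple FIFO queue; this is a minimal illustration, not a real scheduler, and the process names are invented:

```python
from collections import deque

# The ready queue holds processes waiting for the CPU; the scheduler
# repeatedly picks the process at the front of the queue.
ready_queue = deque(["P1", "P2", "P3"])

def dispatch(queue):
    """Pick the next process to run (FIFO order)."""
    return queue.popleft()

running = dispatch(ready_queue)   # P1 gets the CPU
ready_queue.append(running)       # its time slice ends; it goes to the rear
print(running, list(ready_queue)) # P1 ['P2', 'P3', 'P1']
```

Moving a finished time slice back to the rear of the queue is exactly the round-robin behavior of a time-sharing system.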
Schedulers in operating systems are responsible for managing the execution of processes and deciding which processes should be allocated system resources such as the CPU and memory.

3) Medium Term Scheduler: The medium term scheduler is responsible for handling the swapping function, which involves moving processes out of main memory (swapping out) to create space for other processes. This occurs when a running process makes an I/O request and becomes suspended, unable to make progress. To free up memory, the suspended process is moved to secondary storage, which is usually a hard disk. This swapping reduces the degree of multiprogramming, helping to improve the overall process mix in the system.

This scheduler helps manage the computer's memory, which is like its working space. Sometimes, when a task is waiting for something else, like when it needs to read from a hard disk, it gets put aside temporarily to make room for other tasks. The medium term scheduler helps with this, making sure that the right tasks are moved out of the working space to make room for new tasks. It helps keep things organized and running well.

In simpler terms, the long term scheduler decides which programs should be allowed to run, the short term scheduler chooses which program should run next on the CPU, and the medium term scheduler handles the swapping of processes between memory and secondary storage. Each scheduler has its own role in managing the execution of processes and optimizing system performance.

Operation On Processes

Processes in an operating system are like tasks or jobs that the computer needs to do. The operating system has to manage the creation and deletion of processes. Here's a simplified explanation of how this works:

Creating a Process: When the operating system creates a new process, it gives it a name and some characteristics. It's like giving a new task a name and some instructions. A process can also create other processes, like a parent giving birth to children. Each new process can create more processes, forming a tree-like structure. When a process is created, the operating system prepares a special block of information called the Process Control Block (PCB), which holds important details about the process. The PCB is then added to a list of ready processes, meaning it's ready to run.

Imagine you have a big task to do, like cleaning your room. But instead of doing it all by yourself, you decide to divide the work among your siblings. You become the parent, and your siblings become the children.

So, you start by assigning different parts of the room to each sibling. Each sibling becomes responsible for their assigned area. Now, here's where it gets interesting. Each sibling can also ask for help from their own friends or siblings. They become parents themselves, and their friends or siblings become their children. This way, the work of cleaning the room is divided and shared among everyone.

In the same way, in a computer operating system, a process (like a program or task) can create child processes under it. The original process is the parent process, and the newly created processes are its children. Just like you assigned tasks to your siblings, the parent process can assign specific tasks or share the workload among its child processes.

And similar to how your siblings can also create their own children to help with the work, child processes can create their own child processes as well. This creates a tree-like structure where the parent process can have children, and those children can have their own children, and so on.

By creating these child processes, the computer can perform multiple tasks simultaneously, making the most efficient use of its resources and getting work done more quickly.

So, process creation is like dividing a big task into smaller tasks, assigning them to different processes, and allowing those processes to create more processes if needed, forming a tree-like structure of tasks being executed by the computer.

When a process creates child processes, there are two possibilities for how they can execute:

The parent and children execute concurrently: This means that the parent process and its child processes can execute at the same time. It's like a parent working on their task while the children
work on their own tasks simultaneously. They can all make progress together, independently, without waiting for each other.

The parent waits for its children to complete: In this case, the parent process pauses its own execution and waits for its child processes to finish their tasks. It's like a parent waiting for their children to complete their work before continuing with their own task. Once all the child processes have finished, the parent process resumes its execution.

1. Normal Termination: A process terminates when it has finished its work and wants to be removed from the system.

2. Returning a Status Value: Upon termination, a process may provide a status value or result back to its parent process. This status value is typically an integer that indicates the outcome or relevant information about the process's execution. It helps the parent process understand the result or status of its child process.
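Both possibilities, plus the status value returned to the parent, can be sketched with Python's multiprocessing module. This is a minimal illustration with invented task names; `start()` lets parent and children run concurrently, and `join()` is the parent waiting for its children:

```python
import multiprocessing as mp

def clean(area):
    # child process does its share of the work;
    # returning normally gives it an exit status of 0
    print(f"child cleaning the {area}")

if __name__ == "__main__":
    # the parent creates two child processes
    children = [mp.Process(target=clean, args=(a,)) for a in ("desk", "shelf")]
    for c in children:
        c.start()   # parent and children now execute concurrently
    for c in children:
        c.join()    # parent pauses and waits for its children to complete
    # each child's status value is available to the parent
    print([c.exitcode for c in children])   # [0, 0]
```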
In a computer system, a critical region refers to a part of a program where multiple processes or parts of the program may access shared resources, like memory or files. To prevent problems, we want to ensure that only one process can use the shared resource at a time.

To avoid issues like race conditions, we need a way to provide mutual exclusion, which means making sure that only one process can use the shared resource while others are excluded. It's like having a rule that only one person can play with the toy at a time.

The part of the program where the shared resource is accessed is called the critical region or critical section. Think of it as the special area where you have to take turns to use the toy. If we can ensure that no two processes are inside their critical regions simultaneously, we can prevent race conditions.

To make sure processes cooperate correctly and efficiently using shared resources, we need to follow four conditions:

1. No two processes can be in their critical regions at the same time. It's like only one person playing with the toy at a time.

2. We can't assume anything about the speed or number of processors in the computer system.

When it says, "We can't assume anything about the speed or number of processors in the computer system," it means that we should not rely on specific assumptions about how fast the processors in the system are or how many processors there are.

• In a computer system, the number of processors (also known as CPUs) can vary, and their speeds can differ as well. Some systems may have multiple processors, while others may have a single processor. Additionally, the speed of processors can vary from system to system.

• To design a robust and efficient solution, it's important to create algorithms or mechanisms that work reliably, regardless of the number of processors or their speeds. We should not make assumptions about the underlying hardware's specific characteristics, to ensure compatibility and effectiveness across different systems.

• By not assuming anything about the speed or number of processors, we create solutions that are flexible and can work optimally in various computer environments. It allows the operating system or program to adapt to different hardware configurations without relying on fixed assumptions.

3. A process that is not in its critical region should not block or prevent other processes from doing their work.

4. No process should have to wait indefinitely to enter its critical region. We want everyone to get a fair chance to play with the toy.

By following these conditions, we create a system where processes take turns using shared resources, like playing with the toy, ensuring that everyone gets a chance without conflicts or waiting forever.

So, in simpler terms, a critical region is like a special area where processes or parts of a program need to take turns using shared resources. We want to avoid conflicts and make sure everyone gets a fair chance, just like sharing a toy with a friend.

CPU SCHEDULER

In an operating system, scheduling algorithms help manage the allocation of resources, such as the CPU (Central Processing Unit), to different processes or tasks that are running on a computer. Preemptive and non-preemptive scheduling are two types of algorithms used for this purpose.
Non-preemptive Scheduling: Non-preemptive scheduling is like taking turns. Imagine you and your friends are playing a game, and you decide that each person will take a turn to play for a certain amount of time before the next person gets a chance. In non-preemptive scheduling, a process is given control of the CPU and is allowed to run until it completes or voluntarily gives up the CPU.

For example, if you're watching a video on your computer, the non-preemptive scheduling algorithm will let the video play until it finishes or until you decide to pause or stop it. Other processes will have to wait for the video to finish before they can use the CPU.

Preemptive Scheduling: Preemptive scheduling is like interrupting or cutting in line. Imagine you and your friends are waiting in line for a ride, and suddenly someone jumps in front of you without waiting for their turn. Preemptive scheduling allows a process to be interrupted and temporarily paused so that another process can use the CPU.

For example, if you're working on a document and suddenly a high-priority task comes up, the preemptive scheduling algorithm will pause your work and allow the high-priority task to use the CPU. After the high-priority task is done, your work will resume. This way, time-critical or important tasks can be handled promptly.

SCHEDULING CRITERIA

When it comes to choosing a CPU scheduling algorithm, we consider different criteria to compare them. These criteria help us determine which algorithm is the best fit for a particular situation. Here are the criteria:

CPU Utilization: We want to keep the CPU busy and productive as much as possible. CPU utilization refers to how much of the CPU's time is being used for executing processes. We aim for high CPU utilization, ideally close to 100 percent, to ensure efficient utilization of system resources.

Throughput: Throughput measures how much work is being done by the CPU. For example, if the CPU can complete 10 processes in a second, the throughput is 10 processes per second. Higher throughput indicates a higher rate of work being accomplished.

Turnaround Time: Turnaround time focuses on the perspective of an individual process. It is the time taken for a process to complete from the moment it is submitted. Turnaround time includes waiting to enter memory, waiting in the ready queue, executing on the CPU, and performing I/O operations. We aim to minimize turnaround time to ensure processes are completed quickly.

Waiting Time: Waiting time refers to the time spent by a process waiting in the ready queue, waiting for its turn to be executed by the CPU. It does not include the time spent executing or performing I/O operations. Minimizing waiting time helps in improving overall process efficiency.

Response Time: In interactive systems, response time is crucial. It measures the time it takes from submitting a request until the first response is produced. For example, in a web application, response time is the time it takes for the system to start showing results to the user. Lower response time leads to better user experience and interactivity.

The goal is to maximize CPU utilization and throughput while minimizing turnaround time, waiting time, and response time. By optimizing these criteria, we can enhance the efficiency and performance of the system.

In simpler terms, the criteria for comparing scheduling algorithms include keeping the CPU busy, completing tasks efficiently, minimizing the time processes take to finish, reducing waiting time, and providing fast response to user requests. The aim is to make the best use of the CPU's time and resources.
FCFS scheduling is a non-preemptive scheduling algorithm. Once a process starts running, it is not interrupted until it finishes or voluntarily releases the CPU. This means that if a process with a longer execution time arrives before a process with a shorter execution time, the longer process will continue to run, and the shorter process will have to wait.

The components of the turnaround-time formula (turnaround time = completion time − arrival time) are defined as follows:

• Completion Time: The time at which a process finishes execution.
• Arrival Time: The time at which a process arrives in the system.

The following table shows the burst time, arrival time, start time, waiting time, finish time and turnaround time of the given processes.

Process   Burst time   Arrival time   Start time   Waiting time   Finish time   Turnaround time
p2            3             0              0             0              3               3
p3            3             0              3             3              6               6
p1           24             0              6             6             30              30
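The table's arithmetic can be reproduced with a short sketch, assuming the first numeric column is the CPU burst time and the processes run in the order p2, p3, p1 (all arriving at time 0):

```python
# (name, burst time, arrival time), in the order the processes run
processes = [
    ("p2", 3, 0),
    ("p3", 3, 0),
    ("p1", 24, 0),
]

clock = 0
for name, burst, arrival in processes:
    start = max(clock, arrival)        # next process starts when the CPU frees up
    waiting = start - arrival          # time spent waiting in the ready queue
    finish = start + burst             # completion time
    turnaround = finish - arrival      # turnaround = completion - arrival
    print(name, burst, arrival, start, waiting, finish, turnaround)
    clock = finish
# p2 3 0 0 0 3 3
# p3 3 0 3 3 6 6
# p1 24 0 6 6 30 30
```

Note how the long 24-unit burst of p1 would have made p2 and p3 wait much longer had p1 run first; this is the effect described in the FCFS paragraph above.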
The resource-allocation graph helps in identifying deadlock situations based on its structure. Here, a cycle in the graph suggests that a deadlock may exist, but it's not definite. More investigation is needed to confirm if deadlock has actually happened.
The purpose of binding is to enable efficient and reliable access to data and instructions during program execution. It helps the computer system accurately map and manage the memory resources required by programs, ensuring that the right data is retrieved from the right place in memory.

In simpler terms, binding is like creating a connection or a link between a name (symbolic representation) and the place (memory address) where something is stored. It helps the computer system know where to find and store information, making it easier to access and manage data and instructions.

When a program is being prepared to run on a computer, its instructions and data need to be assigned specific memory addresses. The process of assigning these addresses is called binding. There are three common stages where binding can take place:

Compile Time: If the computer knows exactly where the program will be stored in memory before it even runs, the compiler (a special program) can generate instructions that directly reference those memory addresses. It's like creating a road map with fixed destinations.

Load Time: If the memory addresses are not known at compile time, the compiler generates instructions that can be adjusted later. These instructions are called relocatable code because they can be moved around. During the loading process, the final binding of memory addresses takes place. It's like waiting until you know the exact location of your destination before getting directions.

Let's simplify it:

When a program is being prepared to run on a computer, it needs to know where to find its instructions and data in memory. Sometimes, the program doesn't know the exact memory addresses in advance, so it uses special relocatable addresses that can be adjusted later.

Imagine you're going on a trip, but you don't know the exact address of your destination yet. You can still pack your bag with things you might need and get ready to go. Similarly, the program prepares its instructions and data without knowing the exact memory addresses.

When it's time to start the program, the computer finds a suitable place in memory to load it. During this loading process, the computer figures out the final memory addresses where the program will be stored. It's like finding the exact address of your destination before you start your journey.

Once the program's instructions and data are loaded into memory and their final memory addresses are determined, the program is ready to run. It can access its instructions and data at the correct memory locations and perform its tasks.

In simple terms, load time binding is like packing your bag without knowing the exact address, but when you're about to start your trip, you find out the exact address and can start moving. Similarly, the program prepares its instructions without knowing the exact memory addresses, but when it's time to run, the computer figures out where to put the program in memory.

Execution Time: In some cases, a program might need to move around in memory while it's running. This could be because the program is too big to fit entirely in one memory segment or because the computer needs to optimize the memory usage. In these situations, binding is delayed until runtime, when the program is actually being executed. It's like deciding where to go while you're on the road.

Let's simplify it:

Imagine you're playing with building blocks, and you have a big tower you're building. But as you keep adding more blocks, the tower gets too tall to fit on just one table. So, you decide to move
parts of the tower to other tables to make room for everything.

In most operating systems, when we finish writing our program, we combine it with the library code to make a complete program. This is called static linking. It's like putting all the tools we need inside our project, even if other people are using the same tools for their projects.

But dynamic linking works differently. Instead of including the library code in our program, we wait until the program is actually being used to link it with the library code. It's like borrowing tools from a shared toolbox when we need them.

Here's how dynamic linking works :-

1. When we finish writing our program, we tell the computer that we want to use certain functions from the library, like asking for specific tools.

2. When the program is run, the computer loads it into memory, just like opening a book to read.

3. Now, our program needs to use the functions from the library. Instead of having its own copy of the library code, the program uses a reference to find the functions it needs.

4. The computer then finds the actual library code and connects it with our program, like getting the right tools from the toolbox and putting them in our hands.

5. Finally, our program can use the library functions as if they were part of the program itself, just like using the borrowed tools to do our work.

Let's simplify it

Imagine you have a big box full of toys, and your friends also have the same box. Each toy is like a function that helps you do something special, like a magic wand or a super-fast race car.

Usually, when you want to play with a toy, you take it out of your box and keep it with you while you play. That's like static linking, where everything is copied and kept with the program at once. Instead, we keep parts of the program on the disk until they are needed.

But with dynamic linking, it's different. Instead of taking the toy out of the box, you keep it in the box and tell your friends that you want to use a specific toy. When you want to play with it, you borrow it from the box and use it, and then put it back when you're done.

This way, you and your friends can share the toys in the box without needing to have multiple copies. It saves space and makes it easier to find the toy you want. That's how dynamic linking works with programs and libraries. Programs can borrow functions from a shared library when they need them and return them when they're done.

So, dynamic linking is like sharing a big box of toys with your friends. You borrow the toys (functions) you need from the box when you want to play with them and put them back when you're finished. It helps save space and makes it easier for everyone to find and use the toys they need.

The cool thing about dynamic linking is that many programs can share the same library code, like sharing tools with friends. This saves memory because we don't need to copy the entire library for every program. Without dynamic linking, each program would have its own copy of the library code, and that would be wasteful.

Dynamic linking is often used with libraries that have common functions that many programs use. Instead of making a copy of the library for each program, we share it among them. When a program needs a specific function, it gets it from the shared library, just like borrowing a tool from a shared toolbox and returning it when we're done.

OVERLAYS

Imagine you have a very big puzzle with many pieces, but your table is too small to fit all the pieces at once. However, you still want to solve the puzzle and see the complete picture. To tackle this challenge, you divide the puzzle into smaller sections, and you place one section on
the table at a time. As you work on one section and need a different section, you swap the current section with the new one. This way, you can solve the entire puzzle using the limited space on your table.

Now, let's relate this to computer programs. In older computer systems with limited memory, programs could be larger than what could fit in the available memory at once. This is where overlays come into play. Overlays are a technique used to divide a program into smaller logical sections or overlays.

Here's how overlays work in simpler terms:

1. Dividing the Program: Just like dividing a puzzle, a program is divided into smaller logical sections based on its functionality. Each section contains a specific part of the program's code.

2. Swapping Sections: The computer can only load one section into memory at a time due to limited memory capacity. As the program execution progresses and reaches a point where a different section is needed, the current section is swapped out of memory, and the new section is loaded.

3. Manual Management: Overlays require manual programming and management. The programmer needs to determine which sections are needed at different points in the program and handle the swapping of sections in and out of memory.

4. Limited Memory Utilization: Overlays are used when the entire program cannot fit into the available memory all at once. By dividing the program into smaller sections and swapping them as needed, memory can be effectively utilized.

So, overlays are like dividing a large puzzle into smaller sections and swapping them on and off the table as you solve it. Similarly, in computer programs, overlays divide a program into smaller sections, load one section at a time into memory, and swap them as the program progresses. It's a way to manage memory limitations and work with programs that are larger than the available memory capacity.

LOGICAL VERSUS PHYSICAL ADDRESS SPACE

Logical Address: Imagine you have a virtual map that helps you find your favorite toys in a large toy store. The map has numbers and symbols that guide you to where each toy is located. This virtual map is like a logical address.

In computer systems, a logical address is a virtual address generated by the CPU while a program is running. It doesn't physically exist in the computer's memory. It's like a reference or code that helps the computer know where to find specific information. The set of all logical addresses generated by a program is called the logical address space.

Physical Address: Now, imagine you're in a real toy store where the toys are physically placed on different shelves. Each toy has its own specific location on a shelf. These physical locations represent the physical addresses.

In computer systems, a physical address is the actual location of data in the computer's memory. It's like the specific shelf and spot where a toy is placed in the toy store. While the program generates logical addresses, the physical addresses are computed by a hardware device called the Memory Management Unit (MMU). The physical address space is the collection of all physical addresses corresponding to the logical addresses.

To summarize :-

• Logical Address: It's like a virtual map that helps you find toys in a toy store. It's a reference used by the computer to locate information while a program is running.

• Physical Address: It's like the actual location of toys on shelves in a toy store. It represents the physical memory locations where data is stored. The MMU computes the physical address corresponding to a logical address.
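One common way an MMU performs this mapping is base-register (relocation) translation: physical address = base + logical address, with a limit check. This is a sketch; the base and limit values are invented for illustration:

```python
BASE = 14000    # where the process is loaded in physical memory (relocation register)
LIMIT = 3000    # size of the process's logical address space (limit register)

def translate(logical):
    """Map a logical address to a physical address, trapping on out-of-range access."""
    if not (0 <= logical < LIMIT):
        raise MemoryError("trap: logical address out of range")
    return BASE + logical   # physical address = base + logical address

print(translate(100))    # 14100
print(translate(2999))   # 16999
```

Logical address 3000 or above would raise an error, modeling the hardware trap that protects other processes' memory.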
In simpler terms, a logical address is like a map guiding the computer to find information, and a physical address is the actual location where the information is stored in the computer's memory.

SWAPPING

When a computer is running many programs at the same time, it needs to manage the memory efficiently. Swapping is a technique used to temporarily move a program out of memory and store it on a special place called a "backing store," like a fast disk. The program can then be brought back into memory later to continue running.

Here's how swapping works:

1. Imagine a computer that can run multiple programs simultaneously, like a round-robin game where everyone gets a turn.

2. Each program gets a limited amount of time called a "quantum" to run on the CPU. When the quantum is up, the program is temporarily moved out of memory.

3. The memory manager takes the program that just finished and swaps it out to the backing store, freeing up memory space.

4. Meanwhile, the CPU scheduler gives a turn to another program that is ready to run.

Swapping requires a special place called a backing store, usually a fast disk, to store the programs temporarily. The computer keeps track of which programs are in memory and which are in the backing store. When the CPU scheduler wants to run a program, it checks if it's in memory. If not, and there's no free memory space, it swaps out a program from memory and swaps in the desired program.

It's important to note that swapping takes some time because programs need to be moved in and out of memory. This switching time is called the context switch time and can affect the overall performance of the system.

Contiguous Allocation

In a computer's main memory, we need to allocate space for both the operating system and user processes. One way to do this is by using contiguous allocation, which means that each process is placed in a single, uninterrupted section of memory.

To manage this allocation, the memory is divided into two partitions: one for the operating system and one for the user processes. In our discussion, let's assume that the operating system resides in the low memory area.
In contiguous allocation, each process is assigned
turn to another program that is already in
a specific section of memory that is large enough
memory.
to accommodate its needs. To keep track of
5. The swapped-out program can stay in the
where each process is located, we use two special
backing store until it's time for its turn
registers: the base register and the limit register.
again. Then it's swapped back into
memory for continued execution. The base register points to the smallest memory
6. This swapping process continues, with address of a process, while the limit register
programs taking turns in memory while indicates the size of that process's memory
others are swapped out. section. Together, these registers define the
boundaries of the process in memory.
Swapping is also used in priority-based
scheduling, where higher-priority programs get For example, let's say we have three processes:
priority for CPU time. If a high-priority program Process A, Process B, and Process C. Each process
needs to run, the memory manager can swap out is assigned a contiguous section of memory. The
a lower-priority program to make room for it. base and limit registers for each process specify
When the high-priority program is done, the the starting address and the size of their memory
lower-priority program can be swapped back in. sections.
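As a rough sketch (the base/limit values below are invented for illustration, not taken from the text), the check-and-translate step performed with base and limit registers can be pictured in Python:

```python
# Sketch of base/limit protection and address translation.
# The process table and its register values are invented examples.

def translate(logical_address, base, limit):
    """Translate a logical address to a physical one, enforcing the limit."""
    if logical_address < 0 or logical_address >= limit:
        # Out-of-bounds access: the hardware traps to the operating system
        raise MemoryError("addressing error: trap to the operating system")
    return base + logical_address  # the MMU adds the base register

# Hypothetical processes with (base, limit) register pairs
processes = {"A": (1000, 400), "B": (1400, 600), "C": (2000, 300)}

base, limit = processes["A"]
print(translate(50, base, limit))   # inside Process A's section: 1050
```

Any address at or beyond the limit (say, logical address 500 for Process A) never reaches physical memory; the check fails first.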
Using these registers, the computer can easily locate and access the instructions and data for each process when needed.

Contiguous allocation allows multiple processes to coexist in memory at the same time, making efficient use of the available memory space. However, it also means that processes need to be assigned continuous blocks of memory, which can limit the flexibility and efficiency of memory utilization.

Imagine you have a big room called the computer's memory. In this room, you need to keep the operating system and different programs. To do this, you divide the room into two parts: one for the operating system and one for the programs.

Now, let's focus on the part for the programs. Each program needs its own space to stay in the memory. In contiguous allocation, we give each program a special area that is like its own little house. This house is a single piece of space in the memory, without any gaps in between.

To remember where each program's house is, we use two special signs. One sign tells us the starting address of the house, and the other sign tells us how big the house is. With these signs, we can find each program's house easily.

For example, let's say we have three programs: Program A, Program B, and Program C. Each program has its own house in the memory. The signs tell us where each house starts and how much space it takes.

This way, when we need to run a program or do something with it, we know exactly where to find it in the memory.

Contiguous allocation allows many programs to live in the memory together, making good use of the available space. But it also means that programs need to have one continuous block of space, which can sometimes limit how efficiently we can use the memory.

Single Partition Allocation

Imagine that the computer's memory is like a big playground. In this playground, we have two areas: one area for the operating system and one area for the user programs.

The operating system is like the supervisor of the playground, and it wants to make sure that everything is running smoothly. It stays in the low part of the playground, like the ground floor of a building. The user programs, on the other hand, stay in the high part of the playground, like the upper floors of a building.

To protect the operating system from any changes or problems caused by the user programs, we use a special system. We have two special guards called the relocation register and the limit register. The relocation register tells us where the operating system starts in the memory, and the limit register tells us how much space it takes.

Whenever a user program wants to access the memory, we check if the address it wants to use is within the allowed range set by the relocation and limit registers. If it is, we allow the program to access that part of the memory. This way, we make sure that the program can't go beyond its allowed area and cause any trouble for the operating system or other programs.

When a new program is selected to run by the computer, we have a component called the dispatcher that prepares everything. The dispatcher sets the relocation and limit registers to the correct values for that program. This helps to protect the program and keep it within its allocated memory space.

One important thing to note is that this system allows the operating system to change its size dynamically. This means that the size of the operating system can grow or shrink as needed. For example, if a certain part of the operating system is not being used, it can be removed from the memory to make space for other things. This flexibility helps to optimize the memory usage and make the best out of it.

So, in simple terms, we use the relocation and limit registers to make sure that the operating system and user programs stay in their designated areas in the memory. This protects them from causing problems for each other. The system also allows the operating system to change its size dynamically to make the most efficient use of the memory.
Let’s simplify it

Think of the playground again: the grown-ups (the operating system) have their own area, and the kids (the user programs) have theirs. The cool thing is that the grown-ups' area can change its size if needed. If they have extra space that's not being used, they can make it smaller and let the kids use it. This way, everyone gets to play and have fun!

So, in simple words, the relocation and limit registers are like guards in the playground. They make sure the grown-ups and kids stay in their own areas. And the grown-ups can change the size of their area to share it with the kids.

Initially, all the memory is considered as one large block called a hole. As processes come and go, the memory is divided into different-sized holes. When a process arrives and needs memory, the operating system searches for a hole that is large enough to hold it. If the hole is larger than needed, it is divided into two parts: one part is allocated to the arriving process, and the other part becomes a new hole. When a process finishes, its memory block is released and returned to the set of holes. If the newly released memory is adjacent to other holes, they can be merged to form a larger hole.
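The hole-splitting and hole-merging just described can be sketched with a small first-fit allocator (a simplified model with an invented memory size; real allocators track much more state):

```python
# Simplified variable-partition model: a sorted list of (start, size) holes.
holes = [(0, 1000)]  # initially, all memory is one large hole

def allocate(size):
    """First fit: take the first hole large enough, splitting off any remainder."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size > size:
                holes[i] = (start + size, hole_size - size)  # shrink the hole
            else:
                holes.pop(i)  # the hole is used exactly
            return start
    return None  # no hole is big enough

def free(start, size):
    """Return a finished process's block and merge it with adjacent holes."""
    holes.append((start, size))
    holes.sort()
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:            # adjacent holes: merge them
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    holes[:] = merged

a = allocate(200)   # process arrives: hole is split
b = allocate(300)   # second process: split again
free(a, 200)        # first process finishes
free(b, 300)        # adjacent blocks merge back into one big hole
```

After both frees, the two released blocks and the leftover hole coalesce back into the single hole `(0, 1000)`.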
Solution to Fragmentation
• Compaction (already seen it)
• Paging
• Segmentation
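Compaction, listed above, slides the allocated blocks toward one end of memory so that the scattered free pieces coalesce into a single hole. A toy sketch, with invented block names and sizes:

```python
# Toy compaction: slide allocated blocks down to low memory so the free
# space forms one contiguous hole at the top. All values are invented.

def compact(blocks, memory_size):
    """blocks: list of (name, start, size). Returns relocated blocks and the hole."""
    next_free = 0
    relocated = []
    for name, _start, size in sorted(blocks, key=lambda b: b[1]):
        relocated.append((name, next_free, size))  # move the block down
        next_free += size
    hole = (next_free, memory_size - next_free)    # one merged free region
    return relocated, hole

scattered = [("P1", 0, 100), ("P2", 300, 200), ("P3", 700, 150)]
moved, hole = compact(scattered, 1000)
print(moved)  # blocks packed from address 0 upward
print(hole)   # (450, 550): a single contiguous hole
```

The cost, not shown here, is that every moved process's data must actually be copied and its base register updated.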
PAGING
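One way to picture paging (compared in detail with segmentation below) is as a table lookup: a logical address is split into a page number and an offset, and the page table maps the page to a frame. The page size and table contents here are invented for illustration:

```python
# Paging sketch: logical address -> (page number, offset) -> physical address.
# The page size and the page table below are invented examples.

PAGE_SIZE = 1024  # bytes per page (frames are the same size)

page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE   # which page the address falls in
    offset = logical_address % PAGE_SIZE  # position within that page
    frame = page_table[page]              # looked up by the MMU in hardware
    return frame * PAGE_SIZE + offset

print(translate(100))   # page 0 -> frame 5: 5*1024 + 100 = 5220
print(translate(1500))  # page 1 -> frame 2: 2*1024 + 476 = 2524
```

Because any page can land in any free frame, the scattered-free-memory problem of contiguous allocation disappears.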
SEGMENTATION
But it's important to be careful! If you try to access a byte that is outside the limit of a segment, like byte 1222 in segment 0, it would cause a problem. The segment table tells you that segment 0 is only 1000 bytes long, so trying to access byte 1222 would be an error, and the computer would notify the operating system.

So, segmentation is like having different boxes for storing information, and each box has a number. To find something inside a box, you need to know the box number and the position of what you're looking for.

In the example provided, the segment number, offset, and base address can be identified as follows :-

• Segment number: The segment number is the identifier for a specific segment or box of memory. In the example, segment numbers are mentioned as "segment 2," "segment 3," and "segment 0." Each segment represents a different area of memory.

• Offset: The offset refers to the position or location of a specific byte within a segment. It tells us how far into the segment we need to go to find the desired byte. For instance, in the example, byte numbers like "53," "852," and "1222" represent the offset.

• Base address: The base address is the starting memory location of a segment. It tells us where the segment begins in physical memory. In the example, the base addresses for segments are mentioned as "location 4300" and "3200."

To summarize:

• Segment number: Identifies the specific segment or box of memory.
• Offset: Gives the position of the desired byte within that segment.
• Base address: Gives the starting location of the segment in physical memory.

Difference between paging and segmentation

Segmentation and paging are two different memory management techniques used in operating systems. Here's a comparison between the two:

Segmentation:

• In segmentation, the logical address space of a process is divided into variable-sized segments, each representing a different part of the program (such as code segment, data segment, stack segment, etc.).
• Each segment has its own starting address and length.
• Segments can vary in size and can be allocated and deallocated dynamically as needed.
• Segmentation provides a flexible memory allocation scheme that allows processes to grow or shrink based on their memory requirements.
• It allows for logical separation of different parts of a program and provides protection between segments.
• Segmentation may lead to external fragmentation, where free memory becomes scattered in small chunks, making it harder to allocate contiguous blocks of memory.

Paging:

• In paging, the logical address space of a process is divided into fixed-sized blocks called pages.
• Physical memory is divided into fixed-sized blocks called frames.
• Pages and frames are of the same size, and each page can be mapped to any available frame in physical memory.
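A segment-table lookup can be sketched with the numbers from the example above; note that segment 0's base of 1400 and the limits of segments 2 and 3 are assumptions added here for illustration, since the text only gives segment 0's limit and the bases 4300 and 3200:

```python
# Segment table sketch, reusing the numbers from the example above.
# Segment 0's base (1400) and the limits of segments 2 and 3 are invented.

segment_table = {
    0: {"base": 1400, "limit": 1000},
    2: {"base": 4300, "limit": 400},
    3: {"base": 3200, "limit": 1100},
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # Offset past the end of the segment: trap to the operating system
        raise MemoryError("trap: offset beyond segment limit")
    return entry["base"] + offset

print(translate(2, 53))    # byte 53 of segment 2: 4300 + 53 = 4353
print(translate(3, 852))   # byte 852 of segment 3: 3200 + 852 = 4052
# translate(0, 1222) would trap: segment 0 is only 1000 bytes long
```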
• Paging provides a uniform and fixed-size memory allocation scheme.
• It eliminates external fragmentation because each page can be placed anywhere in physical memory as long as a free frame is available.
• Paging introduces the concept of a page table, which is used to map logical addresses to physical addresses.
• Paging requires hardware support, specifically a memory management unit (MMU) that handles the translation of logical addresses to physical addresses.

In summary, segmentation divides the logical address space into variable-sized segments, while paging divides it into fixed-sized pages. Segmentation allows for flexible memory allocation, while paging provides a more uniform memory allocation scheme. Both techniques have their advantages and trade-offs, and the choice between them depends on factors such as the system requirements and hardware capabilities.

VIRTUAL MEMORY

Imagine you have a small table in your room where you can do your homework. But sometimes your homework is too big to fit on the table. So what can you do? You can use a special trick called virtual memory.

Virtual memory is like having a magical bookshelf that can hold all your books. When you need to work on a big project, you take out a few pages from the bookshelf and put them on your table. These pages represent the parts of your project that you are currently using.

If you need more pages, but your table is full, you can put some of the pages back on the bookshelf and take out new ones. This way, you always have enough pages on your table to do your work, even if the project is really big.

Virtual memory helps your computer do the same thing. It has a special space on the hard disk called the bookshelf, which can store parts of programs and data that are not currently being used. When the computer needs to work on something, it takes out the necessary parts from the bookshelf and puts them in its memory (which is like your table). If it needs more space, it can swap out some parts to make room for new ones.

By using virtual memory, the computer can run big programs and work on large projects, even if it doesn't have a lot of memory. It can easily switch between different tasks and use its resources more efficiently.

So, virtual memory is like a magical bookshelf that helps the computer handle big tasks by storing parts of them when they are not needed, and bringing them back when they are. It's a clever way to make the most of limited space and keep everything running smoothly.

Here's a simplified explanation of virtual memory:

In a computer, programs and data are stored in memory for the CPU to access and execute. However, the physical memory of a computer is limited, and sometimes programs are too big to fit entirely in memory. Virtual memory solves this problem by using a portion of the hard disk as an extension of the physical memory.

Virtual memory works by dividing programs into smaller chunks called pages. These pages are loaded into the physical memory only when they are needed. When a program tries to access a page that is not currently in the physical memory, a process called demand paging is used. The operating system brings the required page from the hard disk into the physical memory and then allows the program to access it.

This approach has several advantages:

1. Larger Programs: With virtual memory, programs can be larger than the physical memory. They can be divided into pages, and only the necessary pages are loaded into memory as needed.

2. Efficient Memory Usage: Virtual memory allows multiple programs to share the
same physical memory. Each program is given a portion of the memory, and as pages are swapped in and out, the physical memory is used more efficiently.

3. Increased Performance: Virtual memory reduces the need for constant swapping of entire programs in and out of memory. This improves overall performance and allows more programs to run simultaneously, increasing the utilization of the CPU.

4. Flexibility: Certain parts of a program, like error handling routines or rarely used features, don't need to be loaded into memory until they are actually required. This saves memory space and improves efficiency.

DEMAND PAGING

Imagine you have a big box filled with your toys, but you don't have enough space to take them all out and play with them at once. So instead, you decide to take out only the toys you want to play with right now, and leave the rest in the box.

Demand paging is similar to this idea. When a program is running on a computer, instead of loading the entire program into memory at once, the computer only brings in the parts of the program that are needed at the moment. It's like taking out the specific pages of a book that you want to read, rather than carrying the entire book with you.

By using demand paging, the computer can save a lot of memory space because it doesn't need to load everything at once. It only brings in the pages of the program that the user wants to use. This helps the computer work more efficiently and use its resources wisely.
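A minimal sketch of demand paging: each page is loaded only when first referenced, and each such miss is a page fault. The frame count and reference string below are invented, and FIFO eviction is assumed purely to keep the sketch small:

```python
# Demand-paging sketch: a page is brought into memory only when first
# referenced; each miss is a page fault. FIFO eviction is an assumption
# made here just to keep the example short.

from collections import deque

def run(reference_string, num_frames):
    frames = deque()   # pages currently resident in physical memory
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                      # page fault: load on demand
            if len(frames) == num_frames:
                frames.popleft()             # evict the oldest page (FIFO)
            frames.append(page)
    return faults

# Only 3 frames, but the program touches 4 distinct pages
print(run([0, 1, 2, 0, 3, 0], 3))  # 5 page faults
```

Only the pages the program actually touches ever occupy a frame, which is exactly the memory saving described above.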
In computers, a file is similar to a named item in the real world. It's a way to store information, such as text documents, pictures, or music. Just like you have names for your items, a file also has a name that helps humans recognize and refer to it.

A file has certain characteristics or attributes that describe it. These attributes include:

• Name: The human-readable name that helps identify the file.
• Identifier: A unique number or tag assigned to the file by the computer system.
• Type: The type of file, such as a document, image, or video.
• Location: The specific place where the file is stored on the computer's storage devices, like a hard disk.
• Size: The current size of the file, measured in bytes, which determines how much storage space it occupies.
• Protection: Information about who can access the file, read it, modify it, or execute it.
• Time, date, and user identification: Details about when the file was created, last modified, and last accessed, which can help with security and tracking usage.

Creating a file: When a new file is created, the computer looks for a suitable place in its storage to store the file. Then, it adds an entry in a special list called the directory, which keeps track of all the files. This way, the computer knows the file exists and where it is stored.

Writing to a file: Writing to a file is like writing something down in a notebook. To do that on a computer, you use a special command that tells the computer the name of the file you want to write to and the information you want to write. It's like giving instructions to the computer to save your words or data in the file.

Reading a file: Just as you can read from a book or a paper, you can also read from a file on a computer. When you want to read a file, you use a command that tells the computer the name of the file you want to read from and where in the computer's memory you want the information from the file to be stored. The computer then brings that information into the memory for you.

Repositioning within a file: Let's say you have a big book with many pages. Sometimes you want to go directly to a specific page without reading everything from the beginning. Similarly, when working with files on a computer, you can use a command to tell the computer to find a particular part of a file. The computer searches the directory for the right file and moves a pointer
called the current-file-position pointer to the specified location within the file. It's like telling the computer to jump to a specific page in the book.

Deleting a file: To delete a file, we search the directory for the named file. Having found the associated directory entry, we release all the file's space, so that it can be reused by other files, and erase the directory entry.

Truncating a file: Truncating a file is like erasing all the content inside it, but keeping the file's other information the same. It's a bit like having a notebook with things written in it and deciding to clear all the pages while keeping the cover and other details the same.

When you truncate a file, you are essentially removing all the data or text inside the file, making it empty. However, the file's name, location, permissions, and other attributes remain unchanged. It's a way to start fresh with the file, as if it were brand new, but without the need to delete and recreate the file from scratch.

Think of it as taking a notebook and erasing everything you wrote on the pages, but the notebook itself still has the same cover, title, and other information. It's a way to keep the file's structure intact while removing its contents.

These are the basic operations performed on files: creating a file, writing to a file, reading from a file, and repositioning within a file. Each operation has its own purpose and involves different commands or instructions that the computer understands.

When we work with files, there are many operations we can perform, such as reading, writing, creating, and deleting files. To make these operations faster, the operating system keeps track of information about open files in a small table called the open-file table. This table helps the system find the necessary details about a file without constantly searching through the directory.

When we want to perform operations on a file, we first need to open it. The open() system call is used to open a file. It searches for the file in the directory and copies its information into the open-file table. This way, we can access the file directly without searching for it every time.

Once a file is open, several pieces of information are associated with it :-

• File pointer: This keeps track of the current position in the file, so we know where to read from or write to next.
• File-open count: This counts how many times the file has been opened by different processes. When this count reaches zero, meaning no process is using the file, it can be removed from the table.
• Disk location: This information tells us where the file is stored on the disk. It helps the system quickly access the file's data without repeatedly reading it from the disk.
• Access rights: Each process opens a file with specific access rights, determining what operations it can perform on the file. This information is stored in a table for each process, allowing the operating system to allow or deny further I/O requests.

When we are done working with a file, we close it. Closing a file removes its entry from the open-file table, indicating that we no longer need to access it actively.

Think of it like having a library card to borrow books. When you want to read a book, you first show your library card, which has all the necessary information about you and the book you want to read. Once you're done reading the book, you return it, and the library removes the book's details from your library card.

FILE TYPES

When we work with files, we often come across different types of files, such as documents, images, videos, and music. To help us identify the type of a file easily, many systems use a naming convention that includes a file extension.
The file extension is a part of the file name that comes after a period (.) character.

For example, let's say we have a file named "mydocument.docx". Here, "mydocument" is the name of the file, and "docx" is the file extension. The file extension tells us that this file is a document and it is associated with a specific program or file format, in this case, Microsoft Word.

Different file types have different extensions. Here are some common file types and their extensions:

• Documents: Files containing text, such as essays, reports, or letters. They often have extensions like .docx (Microsoft Word), .pdf (Portable Document Format), or .txt (Plain Text).
• Images: Files that represent pictures or graphics. They can have extensions like .jpg or .jpeg (JPEG image), .png (Portable Network Graphics), or .gif (Graphics Interchange Format).
• Videos: Files that contain moving images. They can have extensions like .mp4 (MPEG-4 video), .avi (Audio Video Interleave), or .mov (QuickTime Movie).
• Music: Files that store audio recordings or songs. They can have extensions like .mp3 (MPEG Audio Layer 3), .wav (Waveform Audio File Format), or .flac (Free Lossless Audio Codec).

By looking at the file extension, both the user and the operating system can quickly identify the type of the file and determine which program should be used to open or work with it. This makes it easier to organize and manage different types of files.

ACCESS METHODS

When we want to access information stored in a file, there are different methods we can use. Let's look at two common access methods: sequential access and direct access.

Sequential Access: Sequential access is the simplest method of accessing information in a file. With sequential access, data is processed in a specific order, one record after another. It is like reading a book from start to finish, where you read each page in order without skipping any pages.

In sequential access, a read operation reads the next portion of the file in the order it was written. Each time we read, a file pointer is automatically moved to the next position, keeping track of the current location. Similarly, a write operation appends data to the end of the file, extending the file's content.

Sequential access is commonly used by editors, compilers, and other programs that process data in a linear manner. However, it may not be efficient for large files when we need to access specific records randomly.

Direct Access: Direct access, also known as relative access, allows for random access to data within a file. In this method, the file is divided into fixed-length logical records or blocks, and each block is assigned a unique number.

With direct access, we can read or write records in any order, without having to go through the entire file sequentially. For example, we can read block 14, then jump to block 53, and then write to block 7. This random access is possible because of the underlying disk-based storage, which allows accessing any block directly.

To perform direct access, file operations are modified to include the block number as a parameter. Instead of reading or writing the next record, we specify the block number we want to read or write.

It's important to note that not all operating systems support both sequential and direct access methods for files. Some systems only allow sequential access, while others only support direct access. The choice of access method depends on the specific needs of the application and the capabilities of the operating system.

DIRECTORY STRUCTURE

Storage Structure :- Imagine you have a big box where you can keep all your toys. This box is like a computer's storage device, called a disk.
Normally, you put all your toys in the box and that's it.

But sometimes, you might want to organize your toys in different ways. You could divide the box into sections and put different types of toys in each section. You could also use some parts of the box for things other than toys, like storing extra clothes or snacks.

In a computer, the box or disk can be used entirely for storing files. However, sometimes it's useful to divide the disk into smaller parts called partitions or slices. Each partition can have its own set of files. This way, you can keep different types of files separate from each other.

Sometimes, you might have more than one disk, and you can combine them to make a bigger storage space called a volume. Each volume is like a virtual box where you can store even more files.

Now, every box or volume needs a special list called a directory. This list keeps track of all the toys or files inside the box or volume. It has information like the names of the files, where they are located, how big they are, and what type they are. This helps you find and organize your files easily.

So, in simpler terms, a disk is like a big box where you keep files. You can divide the box into sections for different types of files. If you have more than one box, you can combine them to make an even bigger storage space. And to keep track of all the files, there is a special list called a directory that tells you information about each file.

DIRECTORY OVERVIEW

In simpler terms, when we talk about a directory structure, we need to think about the things we can do with it. These are the different operations:

1. Search for a file: It's like looking for a specific toy in your toy box. We want to find the right place in the directory where a particular file is listed.

2. Create a file: Imagine you want to add a new toy to your toy box. Similarly, we can create a new file and include it in the directory.

3. Delete a file: Sometimes, you may not want a toy anymore and want to remove it from your toy box. Similarly, we want to be able to remove a file from the directory when we don't need it anymore.

4. List a directory: When we want to see all the toys in our toy box, we can make a list of them. Similarly, we want to be able to see a list of all the files in a directory and get information about each file.

5. Rename a file: Suppose you want to change the name of a toy because its name doesn't match anymore. Similarly, we can change the name of a file in the directory to reflect its new content or purpose. This may also change its position in the directory.

6. Traverse the file system: Imagine you want to explore and look at every toy and box within your toy box collection. Similarly, we might want to access and go through every directory and file in a directory structure, exploring the entire system.
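These operations can be sketched against a single directory modeled as a Python dict mapping names to attribute records (a deliberate simplification; real directories store many more attributes and live on disk):

```python
# Single-directory sketch: file name -> attribute record. Illustrative only.

directory = {}

def create(name, size=0):
    directory[name] = {"size": size}          # add a new directory entry

def delete(name):
    del directory[name]                       # remove the entry

def search(name):
    return directory.get(name)                # None if the file is not listed

def rename(old, new):
    directory[new] = directory.pop(old)       # re-key the entry under a new name

def list_directory():
    return sorted(directory)                  # names of all files in the directory

create("notes.txt", size=120)
create("song.mp3", size=4096)
rename("notes.txt", "homework.txt")
delete("song.mp3")
print(list_directory())  # ['homework.txt']
```

Traversal is trivial here because there is only one directory; the tree-structured case discussed later needs a recursive walk.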
In a computer, a directory is like a folder that helps organize files. It keeps track of all the files on a computer and provides a way to access them easily. Just like how you can have different folders to store your toys, a computer has directories to store files.

Now, let's talk about the single-level directory structure.

Imagine you have a big box where you keep all your toys, and you don't have any other boxes or folders to organize them. All your toys are mixed together in that one box. This is similar to a single-level directory structure.

In a single-level directory, all the files are stored in one directory. It's like putting all your toys in one big box. This makes files simple to find and manage because there is only one place to look for files.

However, there are some limitations to this structure:

1. Unique names: Every file must have a different name. It's like each toy in your box needs to have a different name so that you can identify them easily. If two toys have the same name, it would be confusing.

2. Remembering names: Even with only one user and all the files in the same directory, it can become difficult to remember the names of all the files as the number of files increases. It's like trying to remember the names of all the toys in one big box. As the number of items (toys or files) grows, it becomes harder to keep track of everything.

So, in summary, a single-level directory structure is like having all your toys in one big box. It's simple, but it has limitations like the need for unique file names and difficulty in remembering the names of all the files.

Imagine you have a big box where you keep all your toys, but now you want to share the box with your friend. To avoid confusion, you decide to create a separate section in the box for each of you. This way, your toys are kept separate from your friend's toys. This is similar to a two-level directory structure.

In a two-level directory structure, each user has their own directory, which is like their personal toy section in the box. Each user's directory lists only the files that belong to that user. When a user starts using the computer or logs in, the system checks the master directory to find their specific directory.

Here are some key points about the two-level directory structure:

1. User's own directory: Each user gets their own separate space to store their files. It's like having your own personal section in the shared toy box.

2. Unique file names per user: In a two-level directory, file names only need to be unique within that user's directory. It's like each user can have their own toy named "Teddy" because they have their own sections.

3. Master file directory: There is a master directory that keeps track of each user's directory. It helps the system find the right user's directory when they log in. It's like having a list of names that tells you where each person's toy section is in the shared box.

4. System executable files: Apart from user directories, there is usually a separate directory for system executable files. These are special files that make the computer work.

5. Access to directories: The system can decide whether users are allowed to access directories other than their own. If
allowed, there must be a way to specify 1. Root directory: The root directory is like
which directory they want to access. If the base of the tree. It's the top-level
access is denied, special arrangements directory from which all other directories
must be made for users to run programs and files stem. Think of it as the starting
located in system directories. point of the entire file organization.
6. Search path: A search path is like a list of 2. Paths and organization: Every file in the
directories where the system looks for system has a unique path name. It's like a
specific programs. Each user can have specific address that tells you where a file
their own unique search path to find is located within the tree structure. Users
programs. can create subdirectories (smaller
branches) and organize their files within
So, in simpler terms, a two-level directory
them to keep things organized.
structure is like having separate toy sections in a
shared toy box. Each user has their own section, 3. Current directory: Each user or process
and their files are listed only in their section. has a concept of a current directory. It's
There is a master list that helps the system find like a starting point for that user or
each user's section. Users may or may not be process to search for files. All searches for
able to access other sections, and they can have files happen relative to the current
their own search paths to find programs. directory.
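The two-level lookup described above can be sketched as a tiny simulation (a hypothetical example; the names `mfd` and `lookup` are made up, not a real OS API): a master file directory maps each user to their own user file directory, so two users can each own a file called "teddy.txt" without a clash.

```python
# Sketch of a two-level directory: a master file directory (MFD)
# maps each user to their own user file directory (UFD).
# The data and function names here are illustrative only.

mfd = {
    "alice": {"teddy.txt": "block 17", "notes.txt": "block 42"},
    "bob":   {"teddy.txt": "block 93"},   # same file name, no clash
}

def lookup(user, filename):
    """Find a file by first finding the user's directory in the MFD."""
    ufd = mfd.get(user)
    if ufd is None:
        raise KeyError(f"no such user: {user}")
    if filename not in ufd:
        raise FileNotFoundError(f"{user} has no file {filename}")
    return ufd[filename]

print(lookup("alice", "teddy.txt"))  # block 17
print(lookup("bob", "teddy.txt"))    # block 93
```

Because every lookup goes through the user's own directory first, file names only need to be unique per user, exactly as in the list above.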
and the location where we want to attach the file system. The location is typically an empty folder, like an empty toy box.

2. FTP (File Transfer Protocol): Originally, file sharing across systems was done using FTP. It allowed individual files to be transferred between systems as needed. FTP can require an account and password for access, or it can be anonymous, not requiring any user name or password.

3. Distributed file systems: There are various forms of distributed file systems that enable remote file systems to be connected to a local directory structure. This means you can access files on remote systems as if they were part of your own computer. The actual files are still transported across the network as needed, possibly using FTP as the underlying transport mechanism.

4. World Wide Web (WWW): The WWW has made it easy to access files on remote systems without connecting their entire file systems. This is often done using FTP as the underlying file transport mechanism, and it allows you to view and download files from remote systems.

So, in simpler terms, file sharing is when multiple users work together or access the same files. It's like sharing toys with your friends. The operating system determines whether users can access each other's files. There are owners who have more control over the files, and groups of users who can share access. With networking, files can be shared across computers using protocols like FTP. Remote file systems can be connected to a local structure, allowing access to files on remote systems. The World Wide Web also allows you to access files on remote systems using FTP.

Client-Server Model :-

In this model, computers work together or access the same files to achieve a common computing goal. It's like sharing toys with your friends so that everyone can play.

1. Client and server: The client computer wants to access files, and the server computer has those files. It's like you (the client) asking your friend to get a toy from their toy storage (the server) for you.

2. User IDs and group IDs: To work properly, the user IDs and group IDs must be consistent across both systems. This is important when multiple computers managed by the same organization share files with a common group of users. It's like making sure you and your friend agree on who the toys belong to.

3. Security concerns: There are security concerns in this model. Servers often restrict access to trusted systems only, to prevent impersonation (a computer pretending to be another). They may also restrict remote access to read-only mode. Additionally, servers control which file systems can be mounted remotely, and they usually keep limited, relatively public information within those file systems, protected by regular backups.

The NFS (Network File System) is an example of such a system.

Failure Modes :-

1. Local file system failures: Local file systems can fail for various reasons, like disk failure, corruption of the directory structure or other disk-management information, hardware or cable failures, or human errors. These failures may cause the computer to crash, and human intervention is needed to fix the problems.
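The idea of attaching a remote file system at an empty folder can be sketched as follows (a simplified model, not a real mount implementation; the paths and dictionary names are invented for illustration):

```python
# Sketch of a mount table: another file system (e.g. a remote one)
# is attached at an empty directory, and path lookups that fall
# under that mount point are redirected to it.

local_fs = {"/home/sara/book.txt": "chapter one"}
remote_fs = {"/report.txt": "quarterly numbers"}   # lives on the server

mount_table = {"/mnt/server": remote_fs}           # mount point -> file system

def read_file(path):
    """Resolve a path, redirecting through a mount point if one matches."""
    for mount_point, fs in mount_table.items():
        if path.startswith(mount_point + "/"):
            remote_path = path[len(mount_point):]
            return fs[remote_path]   # in reality this crosses the network
    return local_fs[path]

print(read_file("/home/sara/book.txt"))     # local access
print(read_file("/mnt/server/report.txt"))  # redirected to the remote system
```

The caller uses one ordinary-looking path either way, which is exactly why distributed file systems feel like part of your own computer.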
2. Remote file system failures: Remote file systems have additional failure modes due to the complexity of network systems. Interruptions in the network between two computers can occur due to hardware failure, poor configuration, or network implementation issues.

3. Handling loss of a remote file system: Imagine the client computer is using a remote file system, and suddenly it becomes unreachable. In this situation, the client computer should not behave as it would if a local file system were lost. Instead, it can either terminate all operations to the lost server or delay operations until the server is reachable again. These behaviors are defined in the remote-file-system protocol. Terminating operations can result in users losing data, so most protocols allow delaying file-system operations in the hope that the remote server will become available again.

So, in simpler terms, a client-server model is like asking a friend to fetch a toy from their toy storage and share it with you: the client asks, the server fetches. In remote file systems, computers access files on other computers. Security measures are in place to protect access, and certain failures can occur in both local and remote file systems. When the remote file system is lost, the client computer can either stop operations or wait for the server to be reachable again.

PROTECTION

When we want to protect files on a computer, we need to control who can do what with those files. This is like having different rules for different toys so that everyone plays with them properly. Access-control mechanisms help us control the types of actions that can be performed on files.

Types of Access :-

1. Read: It means looking at the contents of a file, like reading a book.

2. Write: It means making changes or adding new information to a file, like writing in a notebook.

3. Execute: It means running or using a file as a program, like playing a game on a computer.

4. Append: It means adding new information at the end of a file, like adding a new page to a book.

5. Delete: It means removing a file completely, like throwing away a toy.

6. List: It means seeing the name and details of a file, like looking at a list of toys and their descriptions.

Access Control: To control access to files, we usually associate each file with an access-control list (ACL). It's like having a list of people who can play with a specific toy and what they are allowed to do with it.

When someone wants to access a file, the computer checks the access-control list for that file. If the person is listed with the requested access (like read or write), they are allowed to perform that action. If they are not listed or don't have the required access, they are denied access.

One way to simplify access control is by categorizing users into three groups for each file :-

1. Owner: The person who created the file. They have the most control and can perform all actions on the file.

2. Group: A set of people who share the file and need similar access. They have specific permissions, like read and write, but can't delete the file.

3. Universe: All other users of the system who don't fall into the owner or group category. They have limited permissions, usually just read access.
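The owner/group/universe check above can be sketched in a few lines (a toy model in the spirit of Unix permission classes; the file record, group, and user names are hypothetical):

```python
# Sketch of owner/group/universe access control: pick the user's
# category first, then check that category's permission set.
# All names and the data layout here are made-up examples.

book = {
    "owner": "sara",
    "group": "text",
    "perms": {"owner":    {"read", "write", "delete"},
              "group":    {"read", "write"},
              "universe": {"read"}},
}
groups = {"text": {"jim", "dawn", "jill"}}   # group membership table

def allowed(user, action, f):
    """Return True if `user` may perform `action` on file record `f`."""
    if user == f["owner"]:
        category = "owner"
    elif user in groups.get(f["group"], set()):
        category = "group"
    else:
        category = "universe"
    return action in f["perms"][category]

print(allowed("sara", "delete", book))  # True  (owner)
print(allowed("jim", "write", book))    # True  (group member)
print(allowed("jim", "delete", book))   # False (group can't delete)
print(allowed("eve", "write", book))    # False (universe is read-only)
```

Note that only one category applies per user, so three small permission sets replace a per-user entry for everyone on the system.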
For example, let's say Sara is writing a book and hires three graduate students (Jim, Dawn, and Jill) to help. Sara should have full access to the book file, while the students should have read and write access but not the ability to delete the file. Other users in the system should only have read access. To achieve this, a group called "text" is created with the students as members, and the access rights are set accordingly.

To ensure proper control, permissions and access lists need to be managed carefully. In some systems, only authorized individuals, like managers, can create or modify groups.

So, in simpler terms, access control is about setting rules for who can do what with files. We have different types of access like read, write, and execute. Access-control lists help us keep track of who can access files and what they can do with them. By categorizing users into owner, group, and universe, we simplify access control. It's like having different rules for different people playing with different toys.

ALLOCATION METHODS

When we store files on a disk, we need to decide how to allocate space for those files. It's like finding the best places to keep your toys in your toy storage area so that they can be easily accessed. There are different methods for allocating disk space, and we will discuss three major methods: contiguous, linked, and indexed.

1) Contiguous Allocation :-

1. Contiguous allocation: This method requires that each file is stored in a continuous block of space on the disk. It's like putting all the parts of a puzzle together in one place.

2. Performance benefits: Contiguous allocation provides fast performance because reading consecutive blocks of the same file doesn't require moving the disk heads much. It's like flipping through pages of a book without needing to jump around.

3. Storage allocation issues: Similar to allocating blocks of memory, allocating contiguous disk space involves considerations like first fit, best fit, and fragmentation problems. However, with disks, moving the disk heads to different locations takes time, so it may be more beneficial to keep files contiguous whenever possible.

4. Compacting the disk: Even file systems that don't store files contiguously by default can benefit from utilities that compact the disk. These utilities rearrange the files and make them contiguous, improving performance.

5. Problems with file growth: There can be issues when files grow or when the exact size of a file is unknown at creation time:

• Overestimating the file's final size leads to internal fragmentation and wastes disk space.

• Underestimating the size may require moving the file or aborting a process if the file outgrows its allocated space.

• Slow growth over time with a predetermined initial allocation may result in a lot of unusable space before the file fills it.

6. Extents: Another variation of contiguous allocation is allocating file space in large chunks called extents. If a file outgrows its original extent, an additional extent is allocated. An extent can be as large as a complete track or even a cylinder, aligned on appropriate boundaries.

So, in simpler terms, allocation methods are ways to decide where to store files on a disk. Contiguous allocation means keeping the parts of a file together. It helps with fast access, but problems can arise when files grow or their sizes are unknown. Compacting the disk can improve performance. Extents are large chunks of space used for allocation. It's like finding the best spots to store your toys together or in big chunks.

…puzzle pieces together to form larger clusters.
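The first-fit strategy mentioned under storage allocation issues can be sketched like this (an illustrative model of the free list, not a real file-system allocator; the hole list is a made-up example):

```python
# Sketch of first-fit contiguous allocation: scan the free holes in
# disk order and take the first one large enough for the request.

def first_fit(holes, size):
    """Allocate `size` blocks from `holes`, a list of (start, length)
    free regions in disk order. Returns (start, updated_holes), or
    (None, holes) if no hole is big enough."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            updated = holes[:i] + holes[i + 1:]
            if length > size:                       # keep the leftover hole
                updated.insert(i, (start + size, length - size))
            return start, updated
    return None, holes

holes = [(0, 3), (10, 8), (25, 5)]
start, holes = first_fit(holes, 5)   # first hole with length >= 5 starts at 10
print(start, holes)                  # 10 [(0, 3), (15, 3), (25, 5)]
```

Best fit would instead scan all holes and pick the smallest one that is still large enough; first fit is usually faster because it can stop at the first match.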