
Operating System - II

Unit 1 (Memory Management)

This unit covers address binding, illustrated with an example, and its types: compile-time, load-time, and execution-time address binding.

Address Binding:
The association of program instructions and data with actual physical memory locations is called address binding. Consider the following example for better understanding.

Suppose a program P1 has the set of instructions I1, I2, I3, I4, and the program counter takes the values 10, 20, 30, 40 respectively.

Program P1
I1 --> 10
I2 --> 20
I3 --> 30
I4 --> 40

Program Counter = 10, 20, 30, 40

Types of Address Binding:

Address binding is divided into three types as follows.
1. Compile-time address binding
2. Load-time address binding
3. Execution-time address binding

Compile-time Address Binding:
 If the compiler is responsible for performing address binding, it is called compile-time address binding.
 It is done before the program is loaded into memory.
 The compiler interacts with the OS memory manager to perform compile-time address binding.

Load-time Address Binding:
 It is done after the program is loaded into memory.
 This type of address binding is done by the OS memory manager, i.e. the loader.

Execution-time (dynamic) Address Binding:
 Binding is postponed until after the program has been loaded into memory.
 The program can keep changing its location in memory until the time of its execution.
 This dynamic type of address binding is done by the processor at the time of program execution. A small sketch of this idea follows below.

Note:
Practically, the majority of operating systems implement dynamic loading, dynamic linking, and dynamic address binding; for example, all popular OSes such as Windows, Linux, and Unix.
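The effect of execution-time binding can be pictured with a relocation (base) register: every logical address the CPU issues is added to the base at run time, so the operating system is free to move the process and simply reload the base. The following minimal C++ sketch only illustrates that idea (the base values are invented), it is not code from any real OS:

#include <cstdint>
#include <iostream>

// Illustrative run-time (dynamic) binding:
// physical address = relocation (base) register + logical address.
uint32_t bindAtRuntime(uint32_t relocationRegister, uint32_t logicalAddress)
{
    return relocationRegister + logicalAddress;
}

int main()
{
    uint32_t logical = 20;                               // e.g. instruction I2 of program P1
    std::cout << bindAtRuntime(14000, logical) << "\n";  // process loaded at base 14000 -> 14020
    std::cout << bindAtRuntime(30000, logical) << "\n";  // process moved to base 30000 -> 30020
    return 0;
}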

Logical and Physical Address in Operating System

Introduction:
In operating systems, logical and physical addresses are used to manage and access memory. Here is an overview of each:

Logical address: A logical address, also known as a virtual address, is an address generated by the CPU during program execution. It is the address seen by the process and is relative to the program's address space. The process accesses memory using logical addresses, which are translated by the operating system into physical addresses.

Physical address: A physical address is the actual address in main memory where data is stored. It is a location in physical memory, as opposed to a virtual address. The memory management unit (MMU) produces physical addresses when it translates logical addresses.

The translation from logical to physical addresses is performed by the memory management unit. The MMU uses a page table to translate logical addresses into physical addresses; the page table maps each logical page number to a physical frame number.

The similarities between logical and physical addresses in the operating system are listed below:
 Both logical and physical addresses are used to identify a specific location in memory.
 Both logical and physical addresses can be represented in different formats, such as binary, hexadecimal, or decimal.
 Both logical and physical addresses have a finite range, which is determined by the number of bits used to represent them.

Here are some important points about logical and physical addresses in operating systems:
The use of logical addresses provides a layer of abstraction that allows processes to access memory without knowing the physical memory location.
Logical addresses are mapped to physical addresses using a page table. The page table contains information about the mapping between logical and physical addresses.
The MMU translates logical addresses into physical addresses using the page table. This translation is transparent to the process and is performed by hardware.
The use of logical and physical addresses allows the operating system to manage memory more efficiently by using techniques such as paging and segmentation.
Some reference books on operating system concepts that cover logical and physical addressing include:
"Operating System Concepts" by Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne.
"Modern Operating Systems" by Andrew S. Tanenbaum.
"Operating Systems: Three Easy Pieces" by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau.
These books provide detailed coverage of operating system concepts, including memory management and addressing techniques.

A logical address is generated by the CPU while a program is running. Because it does not exist physically, it is also known as a virtual address. The CPU uses this address as a reference to access the physical memory location. The term Logical Address Space refers to the set of all logical addresses generated from a program's perspective.
The hardware device called the Memory Management Unit (MMU) maps each logical address to its corresponding physical address.

A physical address identifies the physical location of the required data in memory.

The user never deals with the physical address directly but can reach it through the corresponding logical address. The user program generates logical addresses and behaves as if it were running in that logical address space, but the program needs physical memory for its execution, so logical addresses must be mapped to physical addresses by the MMU before they are used. The term Physical Address Space refers to the set of all physical addresses corresponding to the logical addresses in a logical address space.
Figure: mapping virtual addresses to physical addresses.

Differences Between Logical and Physical Address in Operating System
1. The basic difference is that a logical address is generated by the CPU from the program's perspective, whereas a physical address is a location that exists in the memory unit.
2. Logical Address Space is the set of all logical addresses generated by the CPU for a program, whereas the set of all physical addresses mapped to the corresponding logical addresses is called the Physical Address Space.
3. The logical address does not exist physically in the memory, whereas the physical address is a location in the memory that can be accessed physically.
4. Compile-time and load-time address binding methods generate identical logical and physical addresses, whereas the two differ from each other under execution-time (run-time) address binding.
5. The logical address is generated by the CPU while the program is running, whereas the physical address is computed by the Memory Management Unit (MMU).
Comparison Chart:

 Basic: a logical address is generated by the CPU; a physical address is a location in a memory unit.
 Address Space: Logical Address Space is the set of all logical addresses generated by the CPU with reference to a program; Physical Address Space is the set of all physical addresses mapped to the corresponding logical addresses.
 Visibility: the user can view the logical address of a program but can never view its physical address.
 Generation: the logical address is generated by the CPU; the physical address is computed by the MMU.
 Access: the user can use the logical address to access the physical address; the physical address can only be reached indirectly, never directly.
 Editable: the logical address can change; the physical address does not change.
 Also called: virtual address (logical) and real address (physical).

Memory allocation strategies:

1. Fixed and variable partitions

Fixed partitioning, also known as static partitioning, is a memory allocation technique used in operating systems to divide the physical memory into fixed-size partitions or regions, each assigned to a specific process or user. Each partition is typically allocated at system boot time and remains dedicated to a specific process until it terminates or releases the partition.
1. In fixed partitioning, the memory is divided into fixed-size chunks, with each chunk being reserved for a specific process. When a process requests memory, the operating system assigns it to the appropriate partition. Each partition is of the same size, and the memory allocation is done at system boot time.
2. Fixed partitioning has several advantages over other memory allocation techniques. First, it is simple and easy to implement. Second, it is predictable, meaning the operating system can ensure a minimum amount of memory for each process. Third, it can prevent processes from interfering with each other's memory space, improving the security and stability of the system.
3. However, fixed partitioning also has some disadvantages. It can lead to internal fragmentation, where memory within a partition remains unused. This happens when a process's memory requirement is smaller than the partition size, leaving some memory unused. Additionally, fixed partitioning limits the number of processes that can run concurrently, as each process requires a dedicated partition.
Overall, fixed partitioning is a useful memory allocation technique in situations where the number of processes is fixed and the memory requirements for each process are known in advance. It is commonly used in embedded systems, real-time systems, and systems with limited memory resources.

In operating systems, Memory Management is the function responsible for allocating and managing a computer's main memory. The Memory Management function keeps track of the status of each memory location, either allocated or free, to ensure effective and efficient use of primary memory.
There are two Memory Management Techniques:
1. Contiguous
2. Non-Contiguous
In the Contiguous technique, the executing process must be loaded entirely into the main memory.
The Contiguous technique can be divided into:
 Fixed (or static) partitioning
 Variable (or dynamic) partitioning

Fixed Partitioning:

This is the oldest and simplest technique used to put more than one process in the main memory. In this partitioning, the number of partitions (non-overlapping) in RAM is fixed, but the size of each partition may or may not be the same. As it is contiguous allocation, no spanning is allowed. The partitions are made before execution or during system configuration.

As illustrated in the figure above, the first process consumes only 1 MB out of the 4 MB partition in the main memory.
Hence, the internal fragmentation in the first block is (4-1) = 3 MB.
Sum of internal fragmentation in every block = (4-1) + (8-7) + (8-7) + (16-14) = 3 + 1 + 1 + 2 = 7 MB.
Suppose a process P5 of size 7 MB arrives. This process cannot be accommodated in spite of the available free space because of contiguous allocation (spanning is not allowed). Hence, the 7 MB becomes part of external fragmentation; the sketch below reproduces this calculation.
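The fragmentation figures above can be reproduced with a few lines of code. The partition and process sizes below are the ones assumed in the example (4, 8, 8 and 16 MB partitions holding 1, 7, 7 and 14 MB processes); this is only a sketch of the arithmetic, not an allocator:

#include <iostream>

int main()
{
    const int n = 4;
    int partition[n] = { 4, 8, 8, 16 };   // partition sizes in MB (from the example)
    int process[n]   = { 1, 7, 7, 14 };   // sizes of the processes placed in them

    int internal = 0;
    for (int i = 0; i < n; i++)
        internal += partition[i] - process[i];   // unused space inside each partition

    std::cout << "Total internal fragmentation = " << internal << " MB\n";   // prints 7 MB

    // A 7 MB process such as P5 cannot use this 7 MB because the free space is
    // not contiguous, so the whole amount counts as external fragmentation here.
    return 0;
}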
There are some advantages and disadvantages of fixed partitioning.
Advantages of Fixed Partitioning:
 Easy to implement: the algorithms needed to implement fixed partitioning are straightforward.
 Low overhead: fixed partitioning requires minimal overhead, which makes it ideal for systems with limited resources.
 Predictable: fixed partitioning ensures a predictable amount of memory for each process.
 No external fragmentation: fixed partitioning eliminates the problem of external fragmentation.
 Suitable for systems with a fixed number of processes: fixed partitioning is well-suited for systems with a fixed number of processes and known memory requirements.
 Prevents processes from interfering with each other: fixed partitioning ensures that processes do not interfere with each other's memory space.
 Efficient use of memory: fixed partitioning ensures that memory is used efficiently by allocating it in fixed-sized partitions.
 Good for batch processing: fixed partitioning is ideal for batch processing environments where the number of processes is fixed.
 Better control over memory allocation: fixed partitioning gives the operating system better control over the allocation of memory.
 Easy to debug: fixed partitioning is easy to debug since the size and location of each process are predetermined.

Disadvantages of Fixed Partitioning:
1. Internal Fragmentation:
   Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This can cause internal fragmentation.

2. External Fragmentation:
   The total unused space (as stated above) of the various partitions cannot be used to load further processes, even though space is available, because it is not contiguous (spanning is not allowed).

3. Limited process size:
   A process of size greater than the size of the largest partition in main memory cannot be accommodated. The partition size cannot be varied according to the size of the incoming process. Hence, a process of size 32 MB in the above-stated example is invalid.

4. Limitation on Degree of Multiprogramming:
   Partitions in main memory are made before execution or during system configuration. Main memory is divided into a fixed number of partitions. Suppose there are n1 partitions in RAM and n2 is the number of processes; then the condition n2 <= n1 must be fulfilled. A number of processes greater than the number of partitions in RAM is invalid in fixed partitioning.

2. Paging

Paging in OS (Operating System)
In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into the main memory in the form of pages.

The main idea behind paging is to divide each process into pages. The main memory is likewise divided into frames.
One page of the process is stored in one of the frames of memory. The pages can be stored at different locations in memory, but the priority is always given to finding contiguous frames or holes.

Pages of the process are brought into the main memory only when they are required; otherwise they reside in secondary storage.

Different operating systems define different frame sizes. The size of every frame must be equal. Considering that pages are mapped to frames in paging, the page size needs to be the same as the frame size.

Example
Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, namely P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each so that one page can be stored in one frame.

Initially, all the frames are empty, therefore the pages of the processes are stored contiguously.
Frames, pages and the mapping between the two is shown in the image below.

Let us consider that P2 and P4 are moved to the waiting state after some time. Now, 8 frames become empty, and therefore other pages can be loaded in that empty space. The process P5 of size 8 KB (8 pages) is waiting in the ready queue.


We have 8 non-contiguous frames available in memory, and paging provides the flexibility of storing a process at different places. Therefore, we can load the pages of process P5 in the place of P2 and P4.

Memory Management Unit


The purpose of Memory Management Unit (MMU) is to convert the logical address
into the physical address. The logical address is the address generated by the CPU for
every page while the physical address is the actual address of the frame where each
page will be stored.

When a page is to be accessed by the CPU by using the logical address, the operating
system needs to obtain the physical address to access that page physically.

The logical address has two parts.

1. Page Number
2. Offset
The memory management unit of the OS needs to convert the page number into the frame number.

Example

Considering the above image, let's say that the CPU demands the 10th word of the 4th page of process P3. Since page number 4 of process P3 is stored at frame number 9, the 10th word of the 9th frame will be returned as the physical address. The sketch below walks through the same page-number/offset arithmetic.
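The page-number/offset arithmetic used by the MMU can be sketched in a few lines. The page table, page size and addresses below are made-up values for illustration; a real MMU performs this lookup in hardware:

#include <iostream>
#include <vector>

int main()
{
    const int pageSize = 1024;                    // 1 KB pages, as in the example above
    // Hypothetical page table for one process: pageTable[pageNumber] = frameNumber
    std::vector<int> pageTable = { 3, 7, 1, 9 };

    int logicalAddress = 3 * pageSize + 10;       // the 10th word of page index 3 (the 4th page)
    int pageNumber = logicalAddress / pageSize;   // upper part of the logical address
    int offset     = logicalAddress % pageSize;   // lower part, unchanged by translation

    int frameNumber = pageTable[pageNumber];
    int physicalAddress = frameNumber * pageSize + offset;

    std::cout << "page " << pageNumber << ", offset " << offset
              << " -> frame " << frameNumber
              << ", physical address " << physicalAddress << "\n";
    return 0;
}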

3. Segmentation in Operating System

A process is divided into segments. The chunks into which a program is divided, which need not all be of the same size, are called segments. Segmentation gives the user's view of the process, which paging does not provide. Here the user's view is mapped onto physical memory.

Types of Segmentation in Operating System
 Virtual Memory Segmentation: each process is divided into a number of segments, but the segmentation is not done all at once. This segmentation may or may not take place at the run time of the program.
 Simple Segmentation: each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table that stores the information about all such segments is called the Segment Table.

What is a Segment Table?
It maps a two-dimensional logical address into a one-dimensional physical address. Each of its entries has:
 Base Address: it contains the starting physical address where the segment resides in memory.
 Segment Limit: also known as the segment offset, it specifies the length of the segment.

Figure: segmentation, showing the translation of a two-dimensional logical address to a one-dimensional physical address.

Translation
The address generated by the CPU is divided into:
 Segment number (s): the number of bits required to represent the segment.
 Segment offset (d): the number of bits required to represent the size of the segment.
A small sketch of this translation, including the limit check, follows below.
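A rough sketch of this translation with a made-up segment table is shown below; each entry holds a base and a limit, and an offset at or beyond the limit is trapped as an addressing error:

#include <iostream>
#include <stdexcept>
#include <vector>

struct SegmentEntry { int base; int limit; };    // starting physical address and segment length

int translate(const std::vector<SegmentEntry>& table, int s, int d)
{
    if (d >= table[s].limit)                     // offset falls outside the segment
        throw std::out_of_range("segmentation fault");
    return table[s].base + d;                    // one-dimensional physical address
}

int main()
{
    // Hypothetical segment table: (base, limit) pairs
    std::vector<SegmentEntry> table = { {1400, 1000}, {6300, 400}, {4300, 1100} };

    std::cout << translate(table, 2, 53) << "\n";   // segment 2, offset 53 -> 4353
    // translate(table, 1, 500) would throw, since offset 500 exceeds limit 400
    return 0;
}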
Advantages of Segmentation in Operating System
 No internal fragmentation.
 The segment table consumes less space than the page table in paging.
 As a complete module is loaded all at once, segmentation improves CPU utilization.
 The user's perception of memory maps quite naturally onto segmentation: users can divide user programs into modules via segmentation, and these modules are simply the separate parts of a process's code.
 The user specifies the segment size, whereas in paging the hardware determines the page size.
 Segmentation is a method that can be used to segregate data from security operations.
 Flexibility: segmentation provides a higher degree of flexibility than paging. Segments can be of variable size, and processes can be designed to have multiple segments, allowing for more fine-grained memory allocation.
 Sharing: segmentation allows memory segments to be shared between processes. This can be useful for inter-process communication or for sharing code libraries.
 Protection: segmentation provides a level of protection between segments, preventing one process from accessing or modifying another process's memory segment. This can help increase the security and stability of the system.

Disadvantages of Segmentation in Operating System
 As processes are loaded and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.
 Overhead is associated with keeping a segment table for each process.
 Due to the need for two memory accesses, one for the segment table and the other for main memory, the access time to retrieve an instruction increases.
 Fragmentation: as mentioned, segmentation can lead to external fragmentation as memory becomes divided into smaller segments. This can lead to wasted memory and decreased performance.
 Overhead: using a segment table can increase overhead and reduce performance. Each segment table entry requires additional memory, and accessing the table to retrieve memory locations can increase the time needed for memory operations.
 Complexity: segmentation can be more complex to implement and manage than paging. In particular, managing multiple segments per process can be challenging, and the potential for segmentation faults can increase as a result.
Virtual Memory in Operating System

Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of the main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically into the corresponding machine addresses.

A memory hierarchy, consisting of a computer system's memory and a disk, enables a process to operate with only some portions of its address space in memory. Virtual memory is what its name indicates: it is an illusion of a memory that is larger than the real memory. We refer to the software component of virtual memory as the virtual memory manager. The basis of virtual memory is the non-contiguous memory allocation model. The virtual memory manager removes some components from memory to make room for other components.

The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations.

It is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, onto physical addresses in computer memory.
1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of the main memory such that it occupies different places in the main memory at different times during the course of execution.
2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in the main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
If these characteristics are present, it is not necessary that all the pages or segments be present in the main memory during execution. This means that the required pages need to be loaded into memory whenever required. Virtual memory is implemented using demand paging or demand segmentation.
Demand Paging
The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging. The process includes the following steps:

Figure: demand paging.

1. If the CPU tries to refer to a page that is currently not available in the main memory, it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocked state. For the execution to proceed, the OS must bring the required page into memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from the logical address space into the physical address space. Page replacement algorithms are used to decide which page to replace in the physical address space.
5. The page table will be updated accordingly.
6. A signal will be sent to the CPU to continue the program execution, and the process will be placed back into the ready state.
Hence, whenever a page fault occurs these steps are followed by the operating system and the required page is brought into memory; the toy sketch below models this flow.
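As a toy model of these steps, the sketch below keeps a per-process page table in a map and "loads" a page on the first access. Everything in it (the frame pool, the pretend disk read, the Process structure) is a made-up stand-in used only to show the flow, not real kernel code:

#include <iostream>
#include <map>

struct Process { std::map<int, int> pageTable; };   // pageNumber -> frameNumber (resident pages only)

int nextFreeFrame = 0;
int findFreeFrame() { return nextFreeFrame++; }     // step 4: stand-in for frame allocation / page replacement

void readPageFromDisk(int page, int frame)          // steps 3-4: pretend disk I/O
{
    std::cout << "page fault: loading page " << page << " into frame " << frame << "\n";
}

int access(Process& p, int page)
{
    if (!p.pageTable.count(page)) {                 // step 1: page not resident -> page fault
        int frame = findFreeFrame();                // step 2: a real OS would block the process here
        readPageFromDisk(page, frame);
        p.pageTable[page] = frame;                  // step 5: update the page table
    }                                               // step 6: execution continues
    return p.pageTable[page];
}

int main()
{
    Process p;
    access(p, 3);   // first access faults and loads the page
    access(p, 3);   // second access hits; no fault
    return 0;
}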
Advantages of Virtual Memory
 More processes may be maintained in the main memory: because we load only some of the pages of any particular process, there is room for more processes. This leads to more efficient utilization of the processor, because it is more likely that at least one of the more numerous processes will be in the ready state at any particular time.
 A process may be larger than all of the main memory: one of the most fundamental restrictions in programming is lifted. A process larger than the main memory can be executed because of demand paging. The OS itself loads pages of a process into the main memory as required.
 It allows greater multiprogramming levels by using less of the available (primary) memory for each process.
 It offers a larger address range than main memory alone.
 It makes it possible to run more applications at once.
 Users are spared from having to add memory modules when RAM space runs out, and applications are freed from managing shared memory themselves.
 When only a portion of a program is required for execution, speed is increased.
 Memory isolation increases security.
 It makes it possible for several larger applications to run at once.
 Memory allocation is comparatively cheap.
 It does not suffer from external fragmentation.
 It is efficient to manage logical partition workloads using the CPU.
 Automatic data movement is possible.
Disadvantages of Virtual Memory
 It can slow down system performance, as data needs to be constantly transferred between the physical memory and the hard disk.
 It can increase the risk of data loss or corruption, as data can be lost if the hard disk fails or if there is a power outage while data is being transferred to or from the hard disk.
 It can increase the complexity of the memory management system, as the operating system needs to manage both physical and virtual memory.

Page Fault Service Time: the time taken to service a page fault is called the page fault service time. It includes the time taken to perform all six steps listed above.
Let the main memory access time be m,
the page fault service time be s,
and the page fault rate be p.
Then, Effective memory access time (EMAT) = p*s + (1-p)*m
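Plugging assumed numbers into this formula shows how strongly the fault rate dominates. The values below (200 ns memory access, 8 ms fault service time, one fault per 1,000 accesses) are purely illustrative:

#include <iostream>

int main()
{
    double m = 200e-9;   // main memory access time: 200 ns (assumed)
    double s = 8e-3;     // page fault service time: 8 ms (assumed)
    double p = 0.001;    // page fault rate: 1 fault per 1000 accesses (assumed)

    double emat = p * s + (1 - p) * m;                            // effective memory access time
    std::cout << "EMAT = " << emat * 1e6 << " microseconds\n";    // roughly 8.2 microseconds
    return 0;
}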
Unit 2 (Disk Management)

File
A file is a named collection of related information that is recorded on secondary storage such
as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits,
bytes, lines or records whose meaning is defined by the file's creator and user.

File Structure
A File Structure should be according to a required format that the operating system can
understand.

 A file has a certain defined structure according to its type.


 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are understandable by
the machine.
 When an operating system defines different file structures, it also contains the code to support those file structures. Unix and MS-DOS support a minimal number of file structures.

File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −

Ordinary files
 These are the files that contain user information.
 They may contain text, databases or executable programs.
 The user can apply various operations on such files, such as adding to, modifying or deleting their contents, or even removing the entire file.

Directory files
 These files contain a list of file names and other information related to those files.

Special files
 These files are also known as device files.
 They represent physical devices like disks, terminals, printers, networks, tape drives etc.

These files are of two types −

 Character special files − data is handled character by character, as in the case of terminals or printers.
 Block special files − data is handled in blocks, as in the case of disks and tapes.

File Access Mechanisms

File access mechanism refers to the manner in which the records of a file may be accessed. There are several ways to access files −

 Sequential access
 Direct/Random access
 Indexed sequential access

Sequential access
Sequential access is access in which the records are read in some sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.

Direct/Random access
 Random access file organization provides direct access to the records.
 Each record has its own address in the file, with the help of which it can be directly accessed for reading or writing.
 The records need not be in any sequence within the file, and they need not be in adjacent locations on the storage medium.

Indexed sequential access

 This mechanism is built on top of sequential access.
 An index is created for each file which contains pointers to the various blocks.
 The index is searched sequentially and its pointer is used to access the file directly.

Space Allocation
Files are allocated disk space by the operating system. Operating systems deploy the following three main ways to allocate disk space to files.

 Contiguous Allocation
 Linked Allocation
 Indexed Allocation

Contiguous Allocation
 Each file occupies a contiguous address space on disk.
 The assigned disk addresses are in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of allocation technique.

Linked Allocation
 Each file carries a list of links to disk blocks.
 The directory contains a link / pointer to the first block of a file.
 No external fragmentation.
 Effectively used for sequential access files.
 Inefficient in the case of direct access files.

Indexed Allocation
 Provides solutions to the problems of contiguous and linked allocation.
 An index block is created holding all the pointers for a file.
 Each file has its own index block which stores the addresses of the disk space occupied by the file.
 The directory contains the addresses of the index blocks of files.

OS File Operations

File operations within an operating system (OS) encompass a set of essential tasks and actions directed at files and directories residing within a computer's file system. These operations are fundamental for the effective management and manipulation of data stored on various storage devices. In this article, we will look at the different file operations and the system calls and APIs used to perform them on Linux- and Windows-based OSes.

File Creation and Manipulation

File creation and manipulation encompasses the essential operations within an operating system that create, modify, and organize files and directories. These actions are vital for managing data efficiently and are integral to the functioning of computer systems.

 Creating Files: create a new file for data storage. System calls: open() (Linux-like systems), CreateFile() (Windows).
 Creating Directories: create a new directory for organizing files. System calls: mkdir() (Linux systems), CreateDirectory() (Windows).
 Opening Files: open an existing file to read from or write to. System calls: open() (Linux systems), CreateFile() (Windows).
 Reading Files: retrieve data from an open file. System calls: read() (Linux systems), ReadFile() (Windows).
 Writing Files: store data in an open file. System calls: write() (Linux systems), WriteFile() (Windows).
 Renaming Files and Directories: rename a file or directory. System calls: rename() (Linux systems), MoveFile() (Windows).
 Deleting Files and Directories: remove files or directories. System calls: unlink(), remove() (Linux systems), DeleteFile(), RemoveDirectory() (Windows).
A short Linux-oriented sketch using some of these calls follows below.
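The sketch below exercises a few of the Linux calls named above (open, write, read, close, unlink); the file name is hypothetical and error handling is kept to the bare minimum:

#include <fcntl.h>      // open and the O_* flags
#include <unistd.h>     // read, write, close, unlink
#include <cstdio>
#include <cstring>

int main()
{
    const char* path = "demo.txt";              // hypothetical file name
    const char* msg  = "hello, file\n";

    // Creating / opening a file for writing
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, msg, strlen(msg));                // Writing Files
    close(fd);

    // Opening and reading the same file back
    char buf[64] = {0};
    fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof(buf) - 1); // Reading Files
    close(fd);
    printf("read %zd bytes: %s", n, buf);

    unlink(path);                               // Deleting Files
    return 0;
}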

File Organization and Search

File organization and search are key OS operations for arranging files systematically and swiftly locating specific data, optimizing file management and user efficiency.

 Copying Files: create duplicates of files in another location. Tools/APIs: cp (Linux systems), CopyFile() (Windows).
 Moving Files: relocate files from one location to another. Tools/APIs: mv (Linux systems), MoveFile() (Windows).
 Searching for Files: locate files based on specific criteria. Tools/APIs: find (Linux systems), FindFirstFile() and FindNextFile() (Windows).

File Security and Metadata

File security and metadata are vital components of file management, encompassing access control and the preservation of crucial file information within an operating system. They are essential for data security and efficient organization.

 File Permissions: control access rights to files and directories. Tools/APIs: chmod (Linux systems), SetFileSecurity (Windows).
 File Ownership: assign specific users or groups as file owners. Tools/APIs: chown (Linux systems), SetFileSecurity (Windows).
 File Metadata: retrieve and manipulate file information. Tools/APIs: stat (Linux systems), GetFileAttributesEx (Windows).

File Compression and Encryption

File compression and encryption are essential for optimizing storage and enhancing data security. Compression reduces file sizes, while encryption safeguards data privacy by making it unreadable without the correct decryption key.

 File Compression: reduce file sizes to save storage space. Tools: gzip, zip, tar (Linux systems); Compress-Archive (Windows).
 File Encryption: protect data by converting it into an unreadable format. Tools: openssl, gpg (Linux systems); Windows provides encryption libraries and APIs for encryption operations.

Conclusion

In summary, this article has explored the essential file operations in Linux- and Windows-based operating systems. It has provided insights into the system calls and APIs used to perform these operations, covering everything from file creation and manipulation to organization, search, security, metadata, compression, and encryption.
Structures of Directory in Operating System
A directory is a container that is used to hold folders and files. It organizes files and folders in a hierarchical manner.

The following are the logical structures of a directory, each providing a solution to a problem faced by the previous type of directory structure.

1) Single-level directory:

The single-level directory is the simplest directory structure. In it, all files are contained in the same directory, which makes it easy to support and understand.

A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names. If two users both call their dataset "test", the unique-name rule is violated.

Advantages:

 Since it is a single directory, its implementation is very easy.

 If the files are small in size, searching becomes faster.

 Operations like file creation, searching, deletion and updating are very easy in such a directory structure.

 Logical organization: directory structures help to logically organize files and directories in a hierarchical structure. This provides an easy way to navigate and manage files, making it easier for users to access the data they need.

 Increased efficiency: directory structures can increase the efficiency of the file system by reducing the time required to search for files. This is because directory structures are optimized for fast file access, allowing users to quickly locate the file they need.

 Improved security: directory structures can provide better security for files by allowing access to be restricted at the directory level. This helps to prevent unauthorized access to sensitive data and ensures that important files are protected.

 Facilitates backup and recovery: directory structures make it easier to back up and recover files in the event of a system failure or data loss. By storing related files in the same directory, it is easier to locate and back up all the files that need to be protected.

 Scalability: directory structures are scalable, making it easy to add new directories and files as needed. This helps to accommodate growth in the system and makes it easier to manage large amounts of data.

Disadvantages:

 There is a chance of name collisions, because two files cannot have the same name.

 Searching becomes time-consuming if the directory is large.

 Files of the same type cannot be grouped together.

2) Two-level directory:

As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.

In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a new user id is created.
Figure: two-level directory structure.

Advantages:

 The main advantage is that files with the same name can exist in different user directories, which is very helpful when there are multiple users.

 Security is provided, preventing one user from accessing another user's files.

 Searching for files becomes very easy in this directory structure.

Disadvantages:

 Alongside the advantage of security, there is the disadvantage that a user cannot share files with other users.

 Although users can create their own files, they do not have the ability to create subdirectories.

 Scalability is limited because a user cannot group files of the same type together.

3) Tree Structure / Hierarchical Structure:

The tree directory structure is the one most commonly used on our personal computers. Users can create files and subdirectories too, which was a disadvantage of the previous directory structures.

This directory structure resembles a real tree upside down, where the root directory is at the top. The root contains a directory for each user. The users can create subdirectories and even store files in their directories.

A user does not have access to the root directory's data and cannot modify it. And, even in this structure, a user does not have access to other users' directories. The structure of the tree directory is given below, which shows how there are files and subdirectories in each user's directory.
Figure: tree / hierarchical directory structure.

Advantages:

 This directory structure allows subdirectories inside a directory.

 Searching is easier.

 Sorting of important and unimportant files becomes easier.

 This directory structure is more scalable than the other two directory structures explained above.

Disadvantages:

 As a user is not allowed to access other users' directories, file sharing among users is prevented.

 As users have the ability to make subdirectories, searching may become complicated if the number of subdirectories grows.

 Users cannot modify the root directory's data.

 If files do not fit in one directory, they might have to be fit into other directories.

4) Acyclic Graph Structure:

In the three directory structures seen above, none has the capability of accessing one file from multiple directories. A file or subdirectory can be accessed only through the directory it resides in, not from any other directory.

This problem is solved by the acyclic graph directory structure, where a file in one directory can be accessed from multiple directories. In this way, files can be shared between users. It is designed in such a way that multiple directories point to a particular directory or file with the help of links.
In the figure below, this can be observed nicely, where a file is shared between multiple users. If any user makes a change, it is reflected for both users.

Figure: acyclic graph structure.

Advantages:

 Sharing of files and directories is allowed between multiple users.

 Searching becomes very easy.

 Flexibility is increased, as file sharing and editing access exist for multiple users.

Disadvantages:

 Because of its complex structure, this directory structure is difficult to implement.

 Users must be very cautious when editing or even deleting a file, as the file may be accessed by multiple users.

 If we need to delete a file, then we need to delete all references to the file in order to delete it permanently.

File Allocation Methods


File allocation methods refer to the strategies employed by computer
operating systems for the efficient distribution of storage space on disks or
other storage media. Their main objective is to optimize the utilization of
available space and minimize fragmentation, which can impede file access
and decrease the overall performance of the system. There are several
different file allocation methods that are commonly used, each with its own
strengths and weaknesses.

Contiguous File Allocation


In this method, files are stored in a continuous block of free space on the disk
meaning that all the data for a particular file is stored in one continuous section
of the disk. When a file is created, the operating system searches for a
contiguous block of free space large enough to accommodate the file. If such
a block is found, the file is stored in that block, and the operating system keeps
track of the starting address and the size of the block.

The advantage of contiguous file allocation is that it provides fast access to


files, as the operating system only needs to remember the starting address of
the file. When a user requests access to a file, the operating system can quickly
locate the file's starting address and read the entire file sequentially. This
method is particularly useful for large files, such as video or audio files, which
can be accessed more quickly when stored in contiguous blocks.

However, contiguous file allocation has some limitations. One significant


disadvantage is that it can lead to fragmentation when files are deleted or
when new files are created. If a file is deleted, the space it occupied becomes
free, but that space may not be contiguous with the remaining free space on
the disk. This can result in gaps or fragments of free space scattered
throughout the disk, making it difficult for the operating system to find
contiguous blocks of free space for new files.

Linked File Allocation


In this method, files are stored in non-contiguous blocks of free space on the
disk, and each block is linked to the next block using a pointer. When a file is
created, the operating system searches for a series of free blocks that are
large enough to store the file, and it links them together using pointers. Each
block contains the address of the next block in the file, allowing the operating
system to access the entire file by following the chain of pointers.

The advantage of linked file allocation is that it can accommodate files of any
size, as the file can be stored in multiple non-contiguous blocks. This method
also avoids fragmentation, as files can be stored in any available free space on
the disk, without the need to find a contiguous block of free space.

However, linked file allocation has some limitations. One significant


disadvantage is that it can result in slower access times to files, as the
operating system needs to follow the chain of pointers to access the entire file.
This method may also require more disk space, as each block contains a
pointer to the next block in the file. Additionally, if a pointer becomes damaged
or lost, it can result in the loss of the entire file, as the operating system cannot
access the entire chain of blocks.

Indexed File Allocation


To address some of the limitations, operating systems can use a variation of
linked file allocation called indexed file allocation. In indexed file allocation,
files are stored in noncontiguous blocks, but instead of linking each block
together, the operating system creates an index block that contains a list of
pointers to each block in the file. When a file is created, the operating system
searches for a series of free blocks that are large enough to store the file and
creates an index block that contains pointers to each of those blocks. Each
block of the file is then stored in a separate block on the disk.

The advantage of indexed file allocation is that it provides fast access to files,
as the operating system only needs to read the index block to locate the file's
blocks. This method also avoids fragmentation, as files can be stored in any
available free space on the disk, without the need to find a contiguous block
of free space. Indexed file allocation also reduces the risk of data loss, as the
index block can be duplicated to provide redundancy.

However, indexed file allocation has some limitations. One significant


disadvantage is that it can result in wasted disk space, as the index block can
take up a significant amount of space on the disk. This method also requires
more disk space than linked file allocation, as each block of the file is stored
separately on the disk.

There are several types of indexed file allocation methods used in computer
operating systems, each with its own strengths and weaknesses −

 Single-Level Index − This method is the simplest form of indexed file allocation. In
this method, a single index block is created for each file, and it contains pointers to the
blocks that make up the file. This method is useful for small files but can become
inefficient for larger files as the index block can take up a significant amount of space.
 Multi-Level Index − This method is an improvement over the single-level index
method. In this method, multiple index blocks are used to store the pointers to the
blocks that make up the file. The first level index block contains pointers to the second
level index blocks, and so on. This method is useful for large files as it reduces the size
of each index block and allows for faster access to the file.
 Combined Index − This method combines the benefits of both contiguous and
indexed file allocation methods. In this method, a portion of the file is stored
contiguously, and the rest is stored using indexed file allocation. The contiguous
portion of the file is accessed quickly, while the indexed portion can accommodate
files of any size.
 Linked Index − This method is similar to linked file allocation, but instead of linking
blocks of the file together, an index block is created that contains pointers to the next
index block. Each index block contains pointers to the data blocks that make up the
file. This method is useful for large files, but it can result in slower access times to the
file.
 Inverted Index − This method is used in databases to store indexes of records. In this
method, a separate index block is created for each record type, and each block
contains pointers to the data blocks that contain records of that type. This method is
useful for fast access to specific types of records.
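For the single-level scheme, the index block is conceptually just an array of block numbers, and reading byte N of a file means looking at entry N / blockSize. The sketch below uses invented numbers purely to illustrate that lookup:

#include <iostream>
#include <vector>

int main()
{
    const int blockSize = 512;                        // bytes per disk block (assumed)
    // Hypothetical index block of one file: i-th entry = disk block holding the i-th piece
    std::vector<int> indexBlock = { 19, 4, 87, 23 };

    long fileOffset  = 1500;                          // byte of the file we want to read
    int entry        = fileOffset / blockSize;        // which pointer in the index block
    int withinBlock  = fileOffset % blockSize;        // offset inside that disk block

    std::cout << "byte " << fileOffset << " lives in disk block "
              << indexBlock[entry] << " at offset " << withinBlock << "\n";
    return 0;
}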

File Allocation Table


File Allocation Table (FAT) is a file system that uses a table to store information
about the allocation of files on a disk or other storage media. In a FAT file
system, the file allocation table is a data structure that contains a list of entries,
each of which represents a block of storage space on the disk. The entries in
the file allocation table indicate whether a block of storage space is free or
allocated, and if it is allocated, they indicate which file or directory the block is
associated with. When a file is created, the operating system searches for a
series of free blocks of storage space on the disk and records the allocation of
these blocks in the file allocation table. As the file is modified or expanded, the
operating system updates the entries in the file allocation table to reflect the
new allocation of blocks.

The FAT file system has several advantages. It is a simple and efficient file
system that is well-suited to small disks and low-powered devices. It is also
widely supported by many operating systems and can be used on a variety of
storage media, including hard disks, floppy disks, and flash drives.

However, the FAT file system also has some limitations. It can be susceptible
to file fragmentation, where files become fragmented across multiple non-
contiguous blocks of storage space on the disk. This can slow down file access
times and reduce overall system performance. Additionally, the file allocation
table can become corrupted, leading to data loss or disk errors.
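The chain lookup a FAT performs can be mimicked with an array in which each entry holds the number of the next block of the same file, with a sentinel marking end-of-file. The FAT contents below are invented for illustration only:

#include <iostream>
#include <vector>

int main()
{
    const int EOF_MARK = -1;                        // end-of-chain sentinel (illustrative)
    // Hypothetical FAT: fat[block] = next block of the same file, or EOF_MARK
    std::vector<int> fat(16, 0);
    fat[2] = 5;  fat[5] = 9;  fat[9] = EOF_MARK;    // a file stored in blocks 2 -> 5 -> 9

    int startBlock = 2;                             // would come from the directory entry
    std::cout << "file occupies blocks:";
    for (int b = startBlock; b != EOF_MARK; b = fat[b])
        std::cout << " " << b;
    std::cout << "\n";
    return 0;
}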

Conclusion
File allocation methods are an important aspect of computer operating
systems, as they determine how files are stored and accessed on disk or other
storage media. These methods are designed to ensure the efficient storage of
files and data, which is essential for the seamless functioning of computer
systems. The selection of a file allocation method depends on various factors
such as the number and size of the files that need to be stored, the
specifications of the operating system and applications, and the speed and
capacity of the storage media. The chosen file allocation method should be
able to cater to the storage requirements while simultaneously preventing data
fragmentation.

Secondary storage is used as an extension of main memory. Secondary storage devices can hold data permanently.

Storage devices consist of registers, cache, main memory, electronic disk, magnetic disk, optical disk and magnetic tapes. Each storage system provides the basic facility of storing a datum and of holding the datum until it is retrieved at a later time. The storage devices differ in speed, cost, size and volatility. The most common secondary storage device is the magnetic disk, which provides storage for both programs and data.

Figure: the storage hierarchy.

In this hierarchy all the storage devices are arranged according to speed and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.
The storage systems above the electronic disk are volatile, whereas those below are non-volatile. An electronic disk can be designed to be either volatile or non-volatile. During normal operation, the electronic disk stores data in a large DRAM array, which is volatile. But many electronic disk devices contain a hidden magnetic hard disk and a battery for backup power. If external power is interrupted, the electronic disk controller copies the data from RAM to the magnetic disk. When external power is restored, the controller copies the data back into the RAM.

The design of a complete memory system must balance all these factors. It must use only as much expensive memory as necessary while providing as much inexpensive, non-volatile memory as possible. Caches can be installed to improve performance where a large access-time or transfer-rate disparity exists between two components.

Disk Scheduling

Disk scheduling is done by operating systems to schedule the I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling.

Importance of Disk Scheduling in Operating System

 Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. The other I/O requests therefore need to wait in a queue and need to be scheduled.

 Two or more requests may be far from each other, which can result in greater disk arm movement.

 Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an efficient manner.

Here, we will cover the following topics in disk scheduling.

 Important Terms Associated with Disk Scheduling

 Disk Scheduling Algorithms

 FCFS (First Come First Serve)

 SSTF (Shortest Seek Time First)

 SCAN (Elevator Algorithm)

 C-SCAN (Circular SCAN)

 LOOK

 C-LOOK

 RSS

 LIFO (Last-In First-Out)

 N-Step SCAN

 F-SCAN

Key Terms Associated with Disk Scheduling

 Seek Time: seek time is the time taken to move the disk arm to the specified track where the data is to be read or written. A disk scheduling algorithm that gives a smaller average seek time is better.

 Rotational Latency: rotational latency is the time taken by the desired sector of the disk to rotate into a position where it can be accessed by the read/write heads. A disk scheduling algorithm that gives a smaller rotational latency is better.

 Transfer Time: transfer time is the time taken to transfer the data. It depends on the rotating speed of the disk and the number of bytes to be transferred.

 Disk Access Time:

Disk Access Time = Seek Time + Rotational Latency + Transfer Time

Total Seek Time = Total Head Movement * Seek Time (per track)

Figure: disk access time and disk response time.

 Disk Response Time: response time is the average time a request spends waiting to perform its I/O operation. The average response time is the response time over all requests. The variance of response time measures how individual requests are serviced with respect to the average response time. A disk scheduling algorithm that gives a smaller variance of response time is better.
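For instance, assuming a seek time of 5 ms, a rotational latency of 4 ms and a transfer time of 1 ms for a single request, the formula above gives a disk access time of 5 + 4 + 1 = 10 ms; and if servicing a request sequence requires 510 tracks of head movement at an assumed 1 ms per track, the total seek time is 510 * 1 = 510 ms. (These numbers are assumed purely for illustration.)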

FCFS Disk Scheduling Algorithm

Prerequisite: disk scheduling algorithms.
Given an array of disk track numbers and an initial head position, our task is to find the total number of seek operations done to access all the requested tracks when the First Come First Serve (FCFS) disk scheduling algorithm is used.

First Come First Serve (FCFS)

FCFS is the simplest disk scheduling algorithm. As the name suggests, this algorithm serves requests in the order they arrive in the disk queue. The algorithm looks very fair and there is no starvation (all requests are serviced sequentially), but generally it does not provide the fastest service.

Algorithm:

1. Let the Request array represent an array storing the indexes of tracks that have been requested, in ascending order of their time of arrival. 'head' is the position of the disk head.
2. Take the tracks one by one in the given order and calculate the absolute distance of the track from the head.
3. Increment the total seek count by this distance.
4. The currently serviced track position now becomes the new head position.
5. Go to step 2 until all tracks in the request array have been serviced.

Example:

Input:

Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}

Initial head position = 50

Output:

Total number of seek operations = 510

Seek Sequence is

176
79
34
60
92
11
41
114

The following chart shows the sequence in which the requested tracks are serviced using FCFS.

Therefore, the total seek count is calculated as:

= (176-50)+(176-79)+(79-34)+(60-34)+(92-60)+(92-11)+(41-11)+(114-41)

= 510

Implementation:
The implementation of FCFS is given below. Note that 'distance' is used to store the absolute distance between the head and the current track position.


// C++ program to demonstrate
// FCFS Disk Scheduling algorithm

#include <bits/stdc++.h>
using namespace std;

const int size = 8;

void FCFS(int arr[], int head)
{
    int seek_count = 0;
    int distance, cur_track;

    for (int i = 0; i < size; i++) {
        cur_track = arr[i];

        // calculate absolute distance
        distance = abs(cur_track - head);

        // increase the total count
        seek_count += distance;

        // accessed track is now new head
        head = cur_track;
    }

    cout << "Total number of seek operations = "
         << seek_count << endl;

    // Seek sequence would be the same
    // as request array sequence
    cout << "Seek Sequence is" << endl;

    for (int i = 0; i < size; i++) {
        cout << arr[i] << endl;
    }
}

// Driver code
int main()
{
    // request array
    int arr[size] = { 176, 79, 34, 60, 92, 11, 41, 114 };
    int head = 50;

    FCFS(arr, head);

    return 0;
}


Output:

Total number of seek operations = 510

Seek Sequence is

176

79

34

60

92

11

41

114

Program for SSTF Disk Scheduling Algorithm


Given an array of disk track numbers and the initial head position, our task is to find the total number of
seek operations done to access all the requested tracks if the Shortest Seek Time First (SSTF) disk
scheduling algorithm is used.

The basic idea is that the tracks closer to the current disk head position should be serviced first in
order to minimize the seek operations; this is known as Shortest Seek Time First (SSTF).

Advantages of Shortest Seek Time First (SSTF)

 Better performance than the FCFS scheduling algorithm.

 It provides better throughput.

 This algorithm is used in Batch Processing systems where throughput is more important.

 It has a lower average response and waiting time.

Disadvantages of Shortest Seek Time First (SSTF)

 Starvation is possible for some requests, as it favours easy-to-reach requests and ignores
far-away requests.

 There is a lack of predictability because of the high variance of response time.

 Switching direction slows things down.

Algorithm
Step 1: Let the Request array represent an array storing the indexes of tracks that have been requested.
‘head’ is the position of the disk head.

Step 2: Find the positive distance of all tracks in the request array from the head.

Step 3: Find a track from the request array which has not been accessed/serviced yet and has the
minimum distance from the head.

Step 4: Increment the total seek count with this distance.

Step 5: The currently serviced track position now becomes the new head position.

Step 6: Go to step 2 until all tracks in the request array have been serviced.

Example:

Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50

The following chart shows the sequence in which requested tracks are serviced using SSTF.

Therefore, the total seek count is calculated as:

SSTF (Shortest Seek Time First)

= (50-41)+(41-34)+(34-11)+(60-11)+(79-60)+(92-79)+(114-92)+(176-114)
= 204
which can also be directly calculated as: (50-11) + (176-11)

Implementation

The implementation of SSTF is given below. For each request we keep two pieces of information:
‘distance’ is used to store the distance between the head and the track position, and ‘accessed’ is a
flag that tells whether the track has been accessed/serviced before by the disk head or not (in the
C++ code below these are the two columns of the diff array).


// C++ program for implementation of
// SSTF disk scheduling

#include <bits/stdc++.h>
using namespace std;

// Calculates difference of each
// track number with the head position
void calculatedifference(int request[], int head,
                         int diff[][2], int n)
{
    for (int i = 0; i < n; i++)
        diff[i][0] = abs(head - request[i]);
}

// Find unaccessed track which is
// at minimum distance from head
int findMIN(int diff[][2], int n)
{
    int index = -1;
    int minimum = 1e9;

    for (int i = 0; i < n; i++) {
        if (!diff[i][1] && minimum > diff[i][0]) {
            minimum = diff[i][0];
            index = i;
        }
    }
    return index;
}

void shortestSeekTimeFirst(int request[], int head, int n)
{
    if (n == 0)
        return;

    // diff[i][0] stores the distance of track i from the head,
    // diff[i][1] marks whether track i has already been serviced
    int diff[n][2];
    memset(diff, 0, sizeof(diff));

    // Count total number of seek operations
    int seekcount = 0;

    // Stores sequence in which disk access is done
    int seeksequence[n + 1];

    for (int i = 0; i < n; i++) {
        seeksequence[i] = head;

        calculatedifference(request, head, diff, n);
        int index = findMIN(diff, n);
        diff[index][1] = 1;

        // Increase the total count
        seekcount += diff[index][0];

        // Accessed track is now new head
        head = request[index];
    }
    seeksequence[n] = head;

    cout << "Total number of seek operations = "
         << seekcount << endl;

    cout << "Seek sequence is : " << "\n";

    // Print the sequence
    for (int i = 0; i <= n; i++)
        cout << seeksequence[i] << "\n";
}

// Driver code
int main()
{
    int proc[] = { 176, 79, 34, 60, 92, 11, 41, 114 };
    int n = 8;

    shortestSeekTimeFirst(proc, 50, n);
    return 0;
}
// This code is contributed by manish19je0495

Output

Total number of seek operations = 204


Seek Sequence: 50, 41, 34, 11, 60, 79, 92, 114, 176

Time Complexity: O(N^2)

Auxiliary Space: O(N)

SCAN (Elevator) Disk Scheduling Algorithms


Given an array of disk track numbers and the initial head position, our task is to find the total number of
seek operations to access all the requested tracks if the SCAN disk scheduling algorithm is used.

In the SCAN Disk Scheduling Algorithm, the head starts from one end of the disk and moves towards
the other end, servicing requests in between one by one until it reaches the other end. Then the
direction of the head is reversed and the process continues as the head continuously scans back and
forth to access the disk. This algorithm works like an elevator and is hence also known as
the elevator algorithm. As a result, the requests at the midrange are serviced more, and those
arriving behind the disk arm have to wait.

Advantages of SCAN (Elevator) Algorithm

 This algorithm is simple and easy to understand.

 SCAN algorithm has no starvation.

 This algorithm is better than the FCFS Disk Scheduling algorithm.

Disadvantages of the SCAN (Elevator) Algorithm

 It is a more complex algorithm to implement.

 This algorithm is not fair because it causes a long waiting time for the cylinders just visited by
the head.

 It causes the head to move till the end of the disk; in this way, the requests arriving ahead of
the arm position get immediate service, but requests that arrive behind the arm position
have to wait for the current sweep to complete.

Algorithm

Step 1: Let the Request array represent an array storing the indexes of tracks that have been requested
in ascending order of their time of arrival. ‘head’ is the position of the disk head.

Step 2: Let direction represent whether the head is moving towards the left or the right.

Step 3: In the direction in which the head is moving, service all tracks one by one.

Step 4: Calculate the absolute distance of the track from the head.

Step 5: Increment the total seek count with this distance.

Step 6: The currently serviced track position now becomes the new head position.

Step 7: Go to step 3 until we reach one of the ends of the disk.

Step 8: If we reach the end of the disk, reverse the direction and go to step 2 until all tracks in the
request array have been serviced.

Example:

Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = left (We are moving from right to left)
Output:
Total number of seek operations = 226
Seek Sequence is
41
34
11
0
60
79
92
114
176

The following chart shows the sequence in which requested tracks are serviced using SCAN.

SCAN Disk Scheduling Algorithm

Therefore, the total seek count is calculated as:

= (50-41) + (41-34) + (34-11) + (11-0) + (60-0) + (79-60) + (92-79) + (114-92) + (176-114)


= 226

Implementation

The implementation of SCAN is given below. Note that distance is used to store the absolute distance
between the head and the current track position. disk_size is the size of the disk. Vectors left and
right store all the request tracks on the left-hand side and the right-hand side of the initial head
position respectively.


// C++ program to demonstrate
// SCAN Disk Scheduling algorithm

#include <bits/stdc++.h>
using namespace std;

const int size = 8;
const int disk_size = 200;

void SCAN(int arr[], int head, string direction)
{
    int seek_count = 0;
    int distance, cur_track;
    vector<int> left, right;
    vector<int> seek_sequence;

    // appending end values
    // which have to be visited
    // before reversing the direction
    if (direction == "left")
        left.push_back(0);
    else if (direction == "right")
        right.push_back(disk_size - 1);

    for (int i = 0; i < size; i++) {
        if (arr[i] < head)
            left.push_back(arr[i]);
        if (arr[i] > head)
            right.push_back(arr[i]);
    }

    // sorting left and right vectors
    sort(left.begin(), left.end());
    sort(right.begin(), right.end());

    // run the while loop two times.
    // one by one scanning right
    // and left of the head
    int run = 2;
    while (run--) {
        if (direction == "left") {
            for (int i = left.size() - 1; i >= 0; i--) {
                cur_track = left[i];

                // appending current track to seek sequence
                seek_sequence.push_back(cur_track);

                // calculate absolute distance
                distance = abs(cur_track - head);

                // increase the total count
                seek_count += distance;

                // accessed track is now the new head
                head = cur_track;
            }
            direction = "right";
        }
        else if (direction == "right") {
            for (int i = 0; i < (int)right.size(); i++) {
                cur_track = right[i];

                // appending current track to seek sequence
                seek_sequence.push_back(cur_track);

                // calculate absolute distance
                distance = abs(cur_track - head);

                // increase the total count
                seek_count += distance;

                // accessed track is now new head
                head = cur_track;
            }
            direction = "left";
        }
    }

    cout << "Total number of seek operations = "
         << seek_count << endl;

    cout << "Seek Sequence is" << endl;

    for (int i = 0; i < (int)seek_sequence.size(); i++) {
        cout << seek_sequence[i] << endl;
    }
}

// Driver code
int main()
{
    // request array
    int arr[size] = { 176, 79, 34, 60,
                      92, 11, 41, 114 };
    int head = 50;
    string direction = "left";

    SCAN(arr, head, direction);
    return 0;
}

Disk Management in Operating System


The operating system is responsible for various operations of disk management.
Modern operating systems are constantly growing their range of services and add-
ons, and all operating systems implement four essential operating system
administration functions. These functions are as follows:

1. Process Management
2. Memory Management
3. File and Disk Management
4. I/O System Management
Most systems include secondary storage devices (magnetic disks). It is a low-cost, non-
volatile storage method for data and programs. The user data and programs are stored
on different storage devices known as files. The OS is responsible for allocating space
to files on secondary storage devices as required.

It doesn't ensure that files are saved on physical disk drives in contiguous locations; it
depends on the space available. New files are often stored in scattered locations if the
disk drive is nearly full. The OS, however, hides from the user the fact that a file may
be divided into several parts.

The OS needs to track the position on the disk drive of each section of every file on the
disk. In some circumstances this means tracking many files and file segments on a
physical disk drive. Furthermore, the OS must be able to identify each file and
perform read and write operations on it as required. As a result, the OS is mainly
responsible for setting up the file system, ensuring the security and reliability of read
and write activities on secondary storage, and keeping access times consistent.

Disk Management of the OS includes the various aspects, such as:

1. Disk Formatting
A new magnetic disk is mainly a blank slate: just platters of magnetic recording
material. Before a disk can hold data, it must be divided into sectors that can be
read and written by the disk controller. This is known as physical formatting or low-
level formatting.

Low-level formatting creates a unique data structure for every sector on the drive. A
data structure for a sector is made up of a header, a data region, and a trailer. The disk
controller uses the header and trailer to store information like an error-correcting code
(ECC) and a sector number.

The OS must record its own data structures on the disk drive to use it as a storage
medium for files. It accomplishes this in two phases. The first step is to divide the disk
drive into one or more groups of cylinders (partitions). The OS may treat every partition
as if it were a separate disk. For example, one partition could contain a copy of the OS
executable code, while another could contain user files. The second stage after
partitioning is logical formatting: in this stage, the operating system stores the initial
file-system data structures on the disk drive.

2. Boot Block
When a system is turned on or restarted, it must execute an initial program. The start
program of the system is called the bootstrap program. It starts the OS after initializing
all components of the system. The bootstrap program works by looking for the OS
kernel on disk, loading it into memory, and jumping to an initial address to start the
OS execution.

The bootstrap is usually kept in read-only memory on most computer systems. It is


useful since read-only memory does not require initialization and is at a fixed location
where the CPU may begin executing whether powered on or reset. Furthermore, it may
not be affected by a computer system virus because ROM is read-only. The issue is
that updating this bootstrap code needs replacing the ROM hardware chips.

As a result, most computer systems include small bootstrap loader software in the boot
ROM, whose primary function is to load a full bootstrap program from a disk drive.
The entire bootstrap program can be modified easily, and the disk is rewritten with a
fresh version. The bootstrap program is stored in a partition and is referred to as
the boot block. A boot disk or system disk is a type of disk that contains a boot
partition.

3. Bad Blocks
Disks are prone to failure due to their moving parts and tight tolerances. When a disk
drive fails, it must be replaced and the contents transferred to the replacement disk
using backup media. From time to time, one or more sectors become faulty. Most disks
also come from the factory with bad blocks. These blocks are handled in various
ways, depending on the disk and controller in use.

On the disk, the controller keeps a list of bad blocks. The list is initialized during the
factory's low-level format and updated during the disk's life. Each bad sector may be
replaced with one of the spare sectors by directing the controller. This process is
referred to as sector sparing.
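
As a rough illustration of the sector-sparing idea (not how any particular controller actually implements it), the sketch below keeps a small remapping table from bad sector numbers to spare sectors and consults it before every access; the sector numbers are invented.

// Illustrative sketch of sector sparing: a tiny remap table from
// bad sectors to spare sectors. Sector numbers here are invented.
#include <stdio.h>

#define NUM_BAD 2

struct remap { int bad; int spare; };

static struct remap table[NUM_BAD] = {
    { 105, 9001 },   /* sector 105 is bad, use spare 9001 */
    { 732, 9002 }    /* sector 732 is bad, use spare 9002 */
};

/* Return the sector the controller should really access. */
int resolve_sector(int requested)
{
    for (int i = 0; i < NUM_BAD; i++)
        if (table[i].bad == requested)
            return table[i].spare;
    return requested;
}

int main(void)
{
    printf("Request 105 -> access %d\n", resolve_sector(105));
    printf("Request 200 -> access %d\n", resolve_sector(200));
    return 0;
}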
Unit 3(Device Management)

Device Management in Opera ng System


The process of implementation, operation, and maintenance of a
device by an operating system is called device management. When
we use computers, we have various devices connected to our
system, like a mouse, keyboard, scanner, printer, and pen drives. All
these are devices, and the operating system acts as an interface
that allows the users to communicate with these devices.
An operating system is responsible for successfully establishing the
connection between these devices and the system. The operating
system uses the concept of drivers for establishing a connection
between these devices and the system.
Functions of Device Management
 Keeps track of all devices; the program responsible for this is
called the I/O controller.
 Monitors the status of each device, such as storage drives,
printers, and other peripheral devices.
 Enforces preset policies and decides which process
gets the device, when, and for how long.
 Allocates and deallocates devices in an efficient way.
Types of Device
There are three main types of devices:
1. Block Device: It stores information in fixed-size blocks, each one
with its own address. Example: disks.
2. Character Device: It delivers or accepts a stream of characters;
the individual characters are not addressable. For example,
printers, keyboards, etc.
3. Network Device: It is used for transmitting data packets.
Features of Device Management in Operating System
 The operating system is responsible for managing device
communication through the devices' respective drivers.
 The operating system keeps track of all devices by using a program
known as an input/output controller.
 It decides which process to assign to the CPU and for how long.
 The OS is responsible for fulfilling the requests of devices to access
a process.
 It connects the devices to various programs in an efficient way
without error.
 It deallocates devices when they are not in use.
Device Drivers
The operating system is responsible for managing device communication
through the devices' respective drivers. A system will have many devices,
like a mouse, printer, and scanner, and the operating system is responsible
for managing these devices and establishing communication between
these devices and the computer through their respective drivers. Each
and every device has its own driver; without its respective driver, a
device cannot communicate with the rest of the system.
Device Tracking
The operating system keeps track of all devices by using a program
known as the input/output controller. Apart from allowing the system to
communicate with these devices through their drivers, the operating system is
also responsible for keeping track of all the devices connected to
the system. If any device requests a process which is
under execution by the CPU, then the operating system has to send
a signal to the CPU to immediately release that process and move to
the next process from main memory, so that the process asked for
by the device can fulfill the request of that device. That is why the
operating system has to continuously keep checking the status of
all the devices, and for doing that the operating system uses a specialized
program known as the Input/Output controller.
Process Assignment
The operating system decides which process to assign to the CPU and for
how long. The operating system is responsible for assigning
processes to the CPU, and it is also responsible for selecting an
appropriate process from main memory and setting up the time
for that process, i.e., how long that process needs to execute
on the CPU. The operating system is also responsible for fulfilling the
requests of devices to access a process. If the printer requests
the process which is currently being executed by the CPU, then it is the
responsibility of the operating system to fulfill that request: the
operating system tells the CPU to immediately release the process
which the printer is asking for and assigns it to the printer.
Connection
The operating system connects the devices to various programs in an
efficient way without error. We use software to access these
drivers because we cannot directly access the keyboard, mouse,
printers, scanners, etc. We have to access these devices with the help
of software. The operating system helps us in establishing an efficient
connection with these devices with the help of various software
applications without any error.
Device Allocation
Device allocation refers to the process of assigning specific devices to
processes or users. It ensures that each process or user has exclusive
access to the required devices or shares them efficiently without
interference.
Device Deallocation
The operating system deallocates devices when they are no longer in use.
When these drivers or devices are in use, they occupy certain
space in memory, so it is the responsibility of the operating
system to continuously keep checking which devices are in use and
which are not, so that it can release a device when we are
no longer using it.
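
The allocation and deallocation bookkeeping described above can be pictured with a small, purely illustrative table in C. The device names and process IDs are invented, and a real operating system keeps far richer state, but the idea of marking a device busy on allocation and free again on deallocation is the same.

// Illustrative sketch of device allocation/deallocation bookkeeping.
// Device names and PIDs are invented for the example.
#include <stdio.h>
#include <string.h>

struct device {
    char name[16];
    int  busy;   /* 1 if allocated                    */
    int  owner;  /* PID of owning process, -1 if free */
};

static struct device devices[] = {
    { "printer", 0, -1 },
    { "scanner", 0, -1 },
};
static const int ndev = 2;

/* Mark a free device as owned by pid; return -1 if busy or unknown. */
int allocate(const char *name, int pid)
{
    for (int i = 0; i < ndev; i++)
        if (!strcmp(devices[i].name, name) && !devices[i].busy) {
            devices[i].busy = 1;
            devices[i].owner = pid;
            return 0;
        }
    return -1;
}

/* Release the device so another process can use it. */
void deallocate(const char *name)
{
    for (int i = 0; i < ndev; i++)
        if (!strcmp(devices[i].name, name)) {
            devices[i].busy = 0;
            devices[i].owner = -1;
        }
}

int main(void)
{
    allocate("printer", 42);
    printf("printer busy: %d (owner %d)\n", devices[0].busy, devices[0].owner);
    deallocate("printer");
    printf("printer busy: %d\n", devices[0].busy);
    return 0;
}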
Dedicated devices overview

Android includes APIs to manage devices that are dedicated to a specific


purpose. This developer’s guide introduces these APIs. If you're an enterprise
mobility management (EMM) developer or solution integrator, read this guide
to get started.

Where are dedicated devices used?


Dedicated devices (formerly called corporate-owned single-use, or COSU) are
fully managed devices that serve a specific purpose. Android provides APIs that
can help you create devices that cater to employee- and customer-specific
needs:

 Employee-facing: Inventory management, field service management, transport


and logistics

 Customer-facing: Kiosks, digital signage, hospitality check-in

Dedicated device features


Android includes APIs to help people using dedicated devices focus on their
tasks. You typically call these APIs from a custom home app that you develop.
Your custom home app can use some, or all, of the following APIs:
 Run the system in an immersive, kiosk-like fashion where devices are locked to
an allowlisted set of apps using lock task mode.

 Share a device between multiple users (such as shift workers or public-kiosk


users) by managing ephemeral and secondary users.

 Avoid devices downloading the same app again for each temporary user
by caching app packages.

 Suspend over-the-air (OTA) system updates over critical periods by freezing the
operating system version.

To call these APIs, apps need to be the admin of a fully managed device—
explained in the following section.

Managed devices
Because dedicated devices might be left unattended or used in critical tasks,
you need to secure the device. To prevent misuse, dedicated devices are fully
managed and owned by an admin component (the admin component typically
manages the users too). Fully managed deployments are for company-owned
devices that are used exclusively for work purposes. To learn more about
Android device management, read the Android Enterprise Overview guide.

Depending on your solution’s needs and your business goals, you can manage
the device in one of the following ways:

 Develop your own device policy controller (DPC), combining it with a custom
home app.

 Use the Android Management API to manage the device and any custom apps.

 Use a third-party EMM solution that supports lock task mode and other
dedicated device features.

Testing
If you're planning to support a third-party EMM, develop an end-to-end testing
plan using the EMM’s solution.

We also provide the following resources, which you can use to create your own
development or test environment:

 Test DPC app on Google Play

 Dedicated device source code (Test DPC) on GitHub


While you’re still developing, you can set your app as the admin of a fully
managed device using the Android Debug Bridge (ADB).

Provision dedicated devices


When you've finished developing your solution, you're ready
to provision Android devices, or set up the devices for management. To
provision a device, complete the following steps:

1. Factory reset the device.

2. Enroll the device. We recommend using a QR code that contains a


provisioning config for device. An IT admin can then scan the code to
provision the device.

If you cannot use a QR code, you can enroll devices through other
methods, such as NFC bumping or by entering an identifier.
Shared Devices:
Shared devices in operating systems (OS) are devices that can be used by multiple
users or processes at the same time. This is in contrast to dedicated devices, which
can only be used by one user or process at a time.
There are several types of shared devices, including:
 Physical devices: These are devices that are physically connected to a
computer, such as printers, scanners, and external hard drives.
 Network devices: These are devices that are connected to a network, such
as network printers and file servers.
 Virtual devices: These are software-based devices that emulate the behavior
of physical devices, such as CD-ROM drives and sound cards.
Shared devices are managed by the operating system, which must ensure that
multiple users or processes can access the device without interfering with each
other. This is typically done using a queuing system, where requests from different
users or processes are placed in a queue and then processed one at a time.
Here are some of the benefits of using shared devices:
 Reduced costs: Shared devices can help to reduce costs by allowing
multiple users to access the same device, rather than each user having to
purchase their own device.
 Increased efficiency: Shared devices can improve efficiency by allowing
multiple users to work on the same device at the same time.
 Improved flexibility: Shared devices can provide more flexibility for users, as
they can access the device from any computer on the network.
However, there are also some potential drawbacks to using shared devices, such as:
 Security risks: Shared devices can be more vulnerable to security risks, as
multiple users have access to the device.
 Performance issues: If too many users are trying to access a shared device
at the same time, it can lead to performance issues.
 Management overhead: Shared devices can require more management
overhead than dedicated devices.
Overall, shared devices can be a valuable tool for organizations that need to provide
access to devices to multiple users. However, it is important to weigh the benefits
and drawbacks of using shared devices before making a decision.

Virtual Devices
Virtual devices in operating systems (OS) offer a software-based alternative to
physical hardware components. They mimic the functions and features of real
devices, allowing users to interact with them as if they were physically present. This
provides several advantages, especially for developers and testers:
Types of virtual devices:
 System virtual machines (VMs): These simulate entire operating systems,
allowing you to run multiple environments on a single machine. Popular
examples include VirtualBox and VMware.
 Emulated devices: These replicate individual hardware components, such as
smartphones (Android Virtual Device/AVD) or printers (LPD printers).
Developers use them to test software behavior on different devices without
needing the actual hardware.
 Virtual peripherals: These are software representations of specific devices
like sound cards or network adapters, used for development or
troubleshooting within a single system.
Benefits of virtual devices:
 Testing on multiple platforms: Developers can test software across various
systems and devices without physically owning them. This helps ensure
broader compatibility and identify potential issues early on.
 Isolation and control: Virtual devices create isolated environments,
preventing software conflicts and protecting your main system. This allows for
safe testing and experimentation.
 Resource efficiency: VMs and emulators require fewer resources than
dedicated hardware, making them cost-effective and suitable for limited
environments.
 Rapid deployment and configuration: Virtual devices can be quickly
created and configured with specific setups, speeding up development
processes.
Limitations of virtual devices:
 Performance: They might have limitations compared to physical hardware,
impacting software performance testing.
 Limited functionality: Some hardware features might not be perfectly
emulated, requiring physical testing for confirmation.
 Security concerns: Sharing virtual devices raises security risks if not
managed properly.
Examples of virtual devices in action:
 Mobile app developers: Using emulators like AVDs to test apps on different
phone models and Android versions.
 Game developers: Employing virtual environments to test games across
various graphics cards and resolutions.
 System administrators: Utilizing VMs to create isolated test environments
for new software deployments.
Remember: Virtual devices are powerful tools, but they have limitations.
Understanding their strengths and weaknesses will help you leverage them
effectively for development, testing, and other purposes.
pipe() System call
Prerequisite : I/O System calls

Conceptually, a pipe is a connection between two processes, such that the standard output from one
process becomes the standard input of the other process. In the UNIX Operating System, pipes are useful
for communication between related processes (inter-process communication).

 A pipe is one-way communication only, i.e., we can use a pipe such that one process writes to the
pipe, and the other process reads from the pipe. It opens a pipe, which is an area of main
memory that is treated as a “virtual file”.

 The pipe can be used by the creating process, as well as all its child processes, for reading
and writing. One process can write to this “virtual file” or pipe and another related process
can read from it.

 If a process tries to read before something is written to the pipe, the process is suspended
until something is written.

 The pipe system call finds the first two available positions in the process’s open file table and
allocates them for the read and write ends of the pipe.

Syntax in C language:

int pipe(int fds[2]);

Parameters :

fd[0] will be the fd(file descriptor) for the

read end of pipe.

fd[1] will be the fd for the write end of pipe.

Returns : 0 on Success.

-1 on error.

Pipes behave FIFO (First In First Out), i.e., a pipe behaves like a queue data structure. The sizes of reads and
writes do not have to match: we can write 512 bytes at a time but read only 1 byte at a time from a
pipe.
// C program to illustrate
// pipe system call in C

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MSGSIZE 16

char* msg1 = "hello, world #1";
char* msg2 = "hello, world #2";
char* msg3 = "hello, world #3";

int main()
{
    char inbuf[MSGSIZE];
    int p[2], i;

    if (pipe(p) < 0)
        exit(1);

    /* write pipe */
    write(p[1], msg1, MSGSIZE);
    write(p[1], msg2, MSGSIZE);
    write(p[1], msg3, MSGSIZE);

    for (i = 0; i < 3; i++) {
        /* read pipe */
        read(p[0], inbuf, MSGSIZE);
        printf("%s\n", inbuf);
    }
    return 0;
}

Output:

hello, world #1

hello, world #2

hello, world #3

Parent and child sharing a pipe

When we use fork in any process, file descriptors remain open across both the child process and the parent
process. If we call fork after creating a pipe, then the parent and child can communicate via the pipe.

Output of the following program:

// C program to illustrate
// pipe system call in C
// shared by Parent and Child

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define MSGSIZE 16

char* msg1 = "hello, world #1";
char* msg2 = "hello, world #2";
char* msg3 = "hello, world #3";

int main()
{
    char inbuf[MSGSIZE];
    int p[2], pid, nbytes;

    if (pipe(p) < 0)
        exit(1);

    if ((pid = fork()) > 0) {
        /* parent writes three messages */
        write(p[1], msg1, MSGSIZE);
        write(p[1], msg2, MSGSIZE);
        write(p[1], msg3, MSGSIZE);

        // Adding this line will
        // not hang the program
        // close(p[1]);

        wait(NULL);
    }
    else {
        // Adding this line will
        // not hang the program
        // close(p[1]);

        /* child keeps reading until EOF */
        while ((nbytes = read(p[0], inbuf, MSGSIZE)) > 0)
            printf("%s\n", inbuf);

        if (nbytes != 0)
            exit(2);

        printf("Finished reading\n");
    }
    return 0;
}
Output:

hello world, #1

hello world, #2

hello world, #3

(hangs) //program does not terminate but hangs

Here, after finishing reading/writing, both parent and child block instead of terminating, and that is why
the program hangs. This happens because the read system call gets as much data as it requests or as
much data as the pipe has, whichever is less.

 If the pipe is empty and we call the read system call, then reads on the pipe will return EOF
(return value 0) only if no process has the write end open.

 If some other process has the pipe open for writing, read will block in anticipation of
new data. This program hangs because neither the parent nor the child closes the
write end of the pipe.
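
A minimal variant of the program above, shown as a hedged sketch: when the parent and child each close the pipe end they are not using, read() returns 0 (EOF) after the last message, so the child prints "Finished reading" and the program terminates instead of hanging.

// Variant with the unused pipe ends closed, so read() sees EOF and
// the program terminates cleanly instead of hanging.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define MSGSIZE 16

int main(void)
{
    char inbuf[MSGSIZE];
    const char *msg = "hello, world #1";
    int p[2], nbytes;

    if (pipe(p) < 0)
        exit(1);

    if (fork() > 0) {            /* parent: writer */
        close(p[0]);             /* not reading here */
        write(p[1], msg, MSGSIZE);
        close(p[1]);             /* signals EOF to the reader */
        wait(NULL);
    } else {                     /* child: reader */
        close(p[1]);             /* otherwise read would never see EOF */
        while ((nbytes = read(p[0], inbuf, MSGSIZE)) > 0)
            printf("%s\n", inbuf);
        printf("Finished reading\n");
    }
    return 0;
}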

Buffering in Operating System


The buffer is an area in the main memory used to store or hold the data temporarily.
In other words, buffer temporarily stores data transmitted from one place to another,
either between two devices or an application. The act of storing data temporarily in
the buffer is called buffering.

A buffer may be used when moving data between processes within a computer. Buffers
can be implemented in a fixed memory location in hardware or by using a virtual data
buffer in software, pointing at a location in the physical memory. In all cases, the data
in a data buffer are stored on a physical storage medium.

Most buffers are implemented in software, which typically uses the faster RAM to store
temporary data due to the much faster access time than hard disk drives. Buffers are
typically used when there is a difference between the rate of received data and the
rate of processed data, for example, in a printer spooler or online video streaming.

A buffer often adjusts timing by implementing a queue or FIFO algorithm in memory,


simultaneously writing data into the queue at one rate and reading it at another rate.

Purpose of Buffering
You face buffer during watching videos on YouTube or live streams. In a video stream,
a buffer represents the amount of data required to be downloaded before the video
can play to the viewer in real-time. A buffer in a computer environment means that a
set amount of data will be stored to preload the required data before it gets used by
the CPU.
Computers have many different devices that operate at varying speeds, and a buffer is
needed to act as a temporary placeholder for everything interacting. This is done to
keep everything running efficiently and without issues between all the devices,
programs, and processes running at that time. There are three reasons behind
buffering of data,

1. It helps in matching speed between two devices in which the data is


transmitted. For example, a hard disk has to store the file received from the
modem. As we know, the transmission speed of a modem is slow compared to
the hard disk. So bytes coming from the modem is accumulated in the buffer
space, and when all the bytes of a file has arrived at the buffer, the entire data
is written to the hard disk in a single operation.
2. It helps the devices with different sizes of data transfer to get adapted to each
other. It helps devices to manipulate data before sending or receiving it. In
computer networking, the large message is fragmented into small fragments
and sent over the network. The fragments are accumulated in the buffer at the
receiving end and reassembled to form a complete large message.
3. It also supports copy semantics. With copy semantics, the version of data in
the buffer is guaranteed to be the version of data at the time of system call,
irrespective of any subsequent change to data in the buffer. Buffering increases
the performance of the device. It overlaps the I/O of one job with the
computation of the same job.

Types of Buffering
There are three main types of buffering in the operating system, such as:

1. Single Buffer
In Single Buffering, only one buffer is used to transfer the data between two devices.
The producer produces one block of data into the buffer. After that, the consumer
consumes the buffer. Only when the buffer is empty, the processor again produces the
data.

Block oriented device: The following operations are performed in the block-oriented
device,


o System buffer takes the input.


o After taking the input, the block gets transferred to the user space and then
requests another block.
o Two blocks work simultaneously. When the user processes one block of data,
the next block is being read in.
o OS can swap the processes.
o OS can record the data of the system buffer to user processes.

Stream oriented device: It performed the following operations, such as:

o Line-at-a-time operation is used for scroll-mode terminals. The user inputs one
line at a time, with a carriage return signaling the end of a line.
o Byte-at-a-time operation is used on forms-mode terminals, where each keystroke
is significant.

2. Double Buffer

In Double Buffering, two schemes or two buffers are used in the place of one. In this
buffering, the producer produces one buffer while the consumer consumes another
buffer simultaneously. So, the producer not needs to wait for filling the buffer. Double
buffering is also known as buffer swapping.

Block oriented: This is how a double buffer works. There are two buffers in the system.

o The driver or controller uses one buffer to store data while waiting for it to be
taken by a higher hierarchy level.
o Another buffer is used to store data from the lower-level module.
o A major disadvantage of double buffering is that the complexity of the process
gets increased.
o If the process performs rapid bursts of I/O, then using double buffering may be
insufficient.

Stream oriented: It performs these operations, such as:

o For line-at-a-time I/O, the user process does not need to be suspended for input
or output unless the process runs ahead of the double buffer.
o For byte-at-a-time operations, the double buffer offers no advantage over a single
buffer of twice the length.

3. Circular Buffer

When more than two buffers are used, the buffers' collection is called a circular buffer.
Each buffer is being one unit in the circular buffer. The data transfer rate will increase
using the circular buffer rather than the double buffering.


o In this, the data do not directly pass from the producer to the consumer because
the data would change due to overwriting of buffers before consumed.
o The producer can only fill up to buffer x-1 while data in buffer x is waiting to be
consumed.
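
The circular-buffer idea can be sketched in a few lines of C. The buffer size and element type below are arbitrary example choices, and a real I/O buffer would add locking and blocking, but the wrap-around of the producer and consumer indices is the essential point.

// Minimal ring (circular) buffer sketch. Size and element type are
// arbitrary example choices; real I/O buffers add locking/blocking.
#include <stdio.h>

#define BUFSZ 4

static int buf[BUFSZ];
static int head = 0, tail = 0, count = 0;

int put(int v)                       /* producer side */
{
    if (count == BUFSZ) return -1;   /* full: producer must wait */
    buf[tail] = v;
    tail = (tail + 1) % BUFSZ;       /* wrap around */
    count++;
    return 0;
}

int get(int *v)                      /* consumer side */
{
    if (count == 0) return -1;       /* empty: consumer must wait */
    *v = buf[head];
    head = (head + 1) % BUFSZ;       /* wrap around */
    count--;
    return 0;
}

int main(void)
{
    int v;
    for (int i = 1; i <= 5; i++)
        if (put(i) < 0) printf("buffer full at %d\n", i);
    while (get(&v) == 0)
        printf("consumed %d\n", v);
    return 0;
}

Note that when the producer catches up with the consumer the buffer reports "full", which corresponds to the rule above that the producer can only fill up to buffer x-1 while buffer x is still waiting to be consumed.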

How Buffering Works


In an operating system, buffer works in the following way:

o Buffering is done to deal effectively with a speed mismatch between the


producer and consumer of the data stream.
o A buffer is produced in the main memory to heap up the bytes received from
the modem.
o After receiving the data in the buffer, the data get transferred to a disk from the
buffer in a single operation.
o This process of data transfer is not instantaneous. Therefore the modem needs
another buffer to store additional incoming data.
o When the first buffer got filled, then it is requested to transfer the data to disk.
o The modem then fills the additional incoming data in the second buffer while
the data in the first buffer gets transferred to the disk.
o When both the buffers completed their tasks, the modem switches back to the
first buffer while the data from the second buffer gets transferred to the disk.
o Two buffers decouple the producer and the consumer of the data, thus minimising
the time requirements between them.
o Buffering also provides variations for devices that have different data transfer
sizes.

Advantages of Buffer
Buffering plays a very important role in any operating system during the execution of
any process or task. It has the following advantages.

o The use of buffers allows uniform disk access. It simplifies system design.
o The system places no data alignment restrictions on user processes doing I/O.
By copying data from user buffers to system buffers and vice versa, the kernel
eliminates the need for special alignment of user buffers, making user programs
simpler and more portable.
o The use of the buffer can reduce the amount of disk traffic, thereby increasing
overall system throughput and decreasing response time.
o The buffer algorithms help ensure file system integrity.

Disadvantages of Buffer
Buffers are not better in all respects. Therefore, there are a few disadvantages as
follows, such as:

o It is costly and impractical to have the buffer be the exact size required to hold
the number of elements. Thus, the buffer is slightly larger most of the time, with
the rest of the space being wasted.
o Buffers have a fixed size at any point in time. When the buffer is full, it must be
reallocated with a larger size, and its elements must be moved. Similarly, when
the number of valid elements in the buffer is significantly smaller than its size,
the buffer must be reallocated with a smaller size and elements be moved to
avoid too much waste.
o Use of the buffer requires an extra data copy when reading and writing to and
from user processes. When transmitting large amounts of data, the extra copy
slows down performance.

I/O System Components:


1) Operating System - I/O Hardware

One of the important jobs of an Operating System is to manage


various I/O devices including mouse, keyboards, touch pad, disk
drives, display adapters, USB devices, Bit-mapped screen, LED,
Analog-to-digital converter, On/off switch, network connections,
audio I/O, printers etc.

An I/O system is required to take an application I/O request and


send it to the physical device, then take whatever response
comes back from the device and send it to the application. I/O
devices can be divided into two categories −

 Block devices − A block device is one with which the driver


communicates by sending entire blocks of data. For
example, Hard disks, USB cameras, Disk-On-Key etc.
 Character devices − A character device is one with which the
driver communicates by sending and receiving single
characters (bytes, octets). For example, serial ports, parallel
ports, sound cards, etc.

Device Controllers
Device drivers are software modules that can be plugged into an
OS to handle a particular device. Operating System takes help
from device drivers to handle all I/O devices.

The Device Controller works like an interface between a device


and a device driver. I/O units (Keyboard, mouse, printer, etc.)
typically consist of a mechanical component and an electronic
component where electronic component is called the device
controller.

There is always a device controller and a device driver for each


device to communicate with the Operating Systems. A device
controller may be able to handle multiple devices. As an interface
its main task is to convert serial bit stream to block of bytes,
perform error correction as necessary.

Any device connected to the computer is connected by a plug and


socket, and the socket is connected to a device controller.
Following is a model for connecting the CPU, memory, controllers,
and I/O devices where CPU and device controllers all use a
common bus for communication.

Synchronous vs asynchronous I/O


 Synchronous I/O − In this scheme CPU execution waits while
I/O proceeds
 Asynchronous I/O − I/O proceeds concurrently with CPU
execution

Communication to I/O Devices


The CPU must have a way to pass information to and from an I/O
device. There are three approaches available to communicate
with the CPU and Device.

 Special Instruction I/O


 Memory-mapped I/O
 Direct memory access (DMA)

Special Instruction I/O

This uses CPU instructions that are specifically made for


controlling I/O devices. These instructions typically allow data to
be sent to an I/O device or read from an I/O device.

Memory-mapped I/O

When using memory-mapped I/O, the same address space is


shared by memory and I/O devices. The device is connected
directly to certain main memory locations so that I/O device can
transfer block of data to/from memory without going through
CPU.

While using memory mapped IO, OS allocates buffer in memory


and informs I/O device to use that buffer to send data to the
CPU. I/O device operates asynchronously with CPU, interrupts
CPU when finished.

The advantage to this method is that every instruction which can


access memory can be used to manipulate an I/O device. Memory
mapped IO is used for most high-speed I/O devices like disks,
communication interfaces.
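
As a hedged sketch of what memory-mapped I/O looks like to software: the device registers appear at fixed memory addresses, and ordinary load/store instructions through a volatile pointer read and write them. The address, register layout, and status bit below are invented for illustration; on real hardware they come from the device's documentation, and running this inside a normal desktop process would simply fault.

// Illustrative memory-mapped I/O access. The register address and
// bit layout are invented; on real hardware they come from the
// device's datasheet and are usually mapped by the OS.
#include <stdint.h>

#define UART_BASE   0x10000000u                               /* hypothetical device address */
#define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x0)) /* hypothetical status register */
#define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x4)) /* hypothetical data register   */
#define TX_READY    0x1u                                      /* hypothetical "ready" bit     */

void mmio_putc(char c)
{
    while ((UART_STATUS & TX_READY) == 0)
        ;                                /* wait until the device can accept data */
    UART_DATA = (uint32_t)c;             /* an ordinary store writes the register */
}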

Direct Memory Access (DMA)


Slow devices like keyboards will generate an interrupt to the main
CPU after each byte is transferred. If a fast device such as a disk
generated an interrupt for each byte, the operating system would
spend most of its time handling these interrupts. So a typical
computer uses direct memory access (DMA) hardware to reduce
this overhead.

Direct Memory Access (DMA) means the CPU grants an I/O module the
authority to read from or write to memory without CPU involvement.
The DMA module itself controls the exchange of data between main
memory and the I/O device. The CPU is only involved at the beginning
and end of the transfer, and is interrupted only after the entire block
has been transferred.

Direct Memory Access needs a special hardware called DMA


controller (DMAC) that manages the data transfers and arbitrates
access to the system bus. The controllers are programmed with
source and destination pointers (where to read/write the data),
counters to track the number of transferred bytes, and settings,
which includes I/O and memory types, interrupts and states for
the CPU cycles.
The operating system uses the DMA hardware as follows −

1. Device driver is instructed to transfer disk data to a buffer at address X.

2. Device driver then instructs the disk controller to transfer data to the buffer.

3. Disk controller starts the DMA transfer.

4. Disk controller sends each byte to the DMA controller.

5. DMA controller transfers bytes to the buffer, increases the memory
address, and decreases the counter C until C becomes zero.

6. When C becomes zero, DMA interrupts the CPU to signal transfer
completion.
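
The sequence above can be sketched in pseudo-register form. Every register name, address, and bit below is hypothetical, purely to show the shape of "program the source, destination, and count, then start the transfer and wait for the completion interrupt".

// Hypothetical DMA controller programming sketch. Register names,
// addresses, and bits are invented to illustrate the sequence only.
#include <stdint.h>

#define DMA_SRC   (*(volatile uint32_t *)0x20000000u)  /* source address      */
#define DMA_DST   (*(volatile uint32_t *)0x20000004u)  /* destination address */
#define DMA_COUNT (*(volatile uint32_t *)0x20000008u)  /* bytes to move (C)   */
#define DMA_CTRL  (*(volatile uint32_t *)0x2000000Cu)  /* control register    */
#define DMA_START 0x1u                                 /* hypothetical start bit */

void start_dma(uint32_t disk_buf, uint32_t mem_buf, uint32_t nbytes)
{
    DMA_SRC   = disk_buf;   /* steps 1-2: driver supplies the buffer addresses    */
    DMA_DST   = mem_buf;
    DMA_COUNT = nbytes;     /* counter C, decremented once per transferred byte   */
    DMA_CTRL  = DMA_START;  /* step 3: controller starts the transfer             */
    /* steps 4-6 happen in hardware; the CPU is interrupted when C reaches zero */
}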

Polling vs Interrupts I/O


A computer must have a way of detecting the arrival of any type
of input. There are two ways that this can happen, known
as polling and interrupts. Both of these techniques allow the
processor to deal with events that can happen at any time and
that are not related to the process it is currently running.

Polling I/O

Polling is the simplest way for an I/O device to communicate with


the processor. The process of periodically checking status of the
device to see if it is time for the next I/O operation, is called
polling. The I/O device simply puts the information in a Status
register, and the processor must come and get the information.

Most of the time, devices will not require attention and when one
does it will have to wait until it is next interrogated by the polling
program. This is an inefficient method and much of the
processor's time is wasted on unnecessary polls.

Compare this method to a teacher continually asking every


student in a class, one after another, if they need help. Obviously
the more efficient method would be for a student to inform the
teacher whenever they require assistance.
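
A hedged sketch of a polling loop, reusing the same invented register convention as the memory-mapped I/O example earlier: the CPU repeatedly reads a status register until a "data ready" bit is set, then fetches the byte. The addresses and bits are not from any real device.

// Illustrative polling loop. Register addresses and bits are the same
// invented example values as before, not a real device interface.
#include <stdint.h>

#define DEV_STATUS (*(volatile uint32_t *)0x10000000u)  /* hypothetical status register */
#define DEV_DATA   (*(volatile uint32_t *)0x10000004u)  /* hypothetical data register   */
#define RX_READY   0x2u                                 /* hypothetical "data ready" bit */

char poll_getc(void)
{
    /* Busy-wait: processor time is burned repeatedly asking "is it ready yet?" */
    while ((DEV_STATUS & RX_READY) == 0)
        ;
    return (char)DEV_DATA;  /* fetch the byte once the device reports ready */
}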
Interrupts I/O

An alternative scheme for dealing with I/O is the interrupt-driven


method. An interrupt is a signal to the microprocessor from a
device that requires attention.

A device controller puts an interrupt signal on the bus when it
needs the CPU's attention. When the CPU receives an interrupt, it saves
its current state and invokes the appropriate interrupt handler
using the interrupt vector (addresses of OS routines to handle
various events). When the interrupting device has been dealt
with, the CPU continues with its original task as if it had never
been interrupted.

Input and Output Devices


An input/output (I/O) device is a piece of hardware that can take data from, as well as send
data to, a computer. Input devices, as the name implies, are used to provide data to a
computer (input), and output devices receive data from a computer (output). Input/output
devices, often known as IO devices, provide the interface between a human operator or
other systems and the computer; storage media also act as I/O devices, since the computer
both writes data to them and reads data from them.

Input Devices
Input devices are the devices that are used to send signals to the computer for
performing tasks. The receiver at the end is the CPU (Central Processing Unit),
which works to send signals to the output devices. Some of the classifications of
Input devices are:
 Keyboard Devices
 Pointing Devices
 Composite Devices
 Game Controller
 Visual Devices
 Audio Input Devices
Some of the input devices are described below.
Keyboard
The keyboard is the most frequent and widely used input device for entering data
into a computer. Although there are some additional keys for performing other
operations, the keyboard layout is similar to that of a typical typewriter.
Generally, keyboards come in two sizes: 84 keys or 101/102 keys but currently
keyboards with 104 keys or 108 keys are also available for Windows and the
Internet.
Keyboard

Types of Keys
 Numeric Keys: It is used to enter numeric data or move the cursor. It
usually consists of a set of 17 keys.
 Typing Keys: The letter keys (A-Z) and number keys (0-9) are among
these keys.
 Control Keys: These keys control the pointer and the screen. There are
four directional arrow keys on it. Home, End, Insert, Alternate(Alt),
Delete, Control(Ctrl), etc., and Escape are all control keys (Esc).
 Special Keys: Enter, Shift, Caps Lock, NumLk, Tab, etc., and Print Screen
are among the special function keys on the keyboard.
 Function Keys: The 12 keys from F1 to F12 are on the topmost row of
the keyboard.
Mouse
The most common pointing device is the mouse. The mouse is used to move a little
cursor across the screen while clicking and dragging. The cursor will stop if you let
go of the mouse. The computer is dependent on you to move the mouse; it won’t
move by itself. As a result, it’s an input device.
A mouse is an input device that lets you move the mouse on a flat surface to control
the coordinates and movement of the on-screen cursor/pointer.
The left mouse button can be used to select or move items, while the right mouse
button when clicked displays extra menus.

Mouse

Joystick
A joystick is a pointing device that is used to move the cursor on a computer
screen. A spherical ball is attached to both the bottom and top ends of the stick. In
a socket, the lower spherical ball slides. You can move the joystick in all four
directions.

Joystick

The joystick’s function is comparable to that of a mouse. It is primarily used in CAD


(Computer-Aided Design) and playing video games on the computer.
Track Ball
Track Ball is an accessory for notebooks and laptops, which works in place of a
mouse. It has a structure similar to a mouse: a half-inserted ball, and we use our
fingers for cursor movement. Different shapes are used for this, like a ball,
a button, or a square.
Light Pen
A light pen is a type of pointing device that looks like a pen. It can be used to select
a menu item or to draw on the monitor screen. A photocell and an optical system
are enclosed in a tiny tube. When the tip of a light pen is moved across a monitor
screen while the pen button is pushed, the photocell sensor element identifies the
screen location and provides a signal to the CPU.

Light Pen
Scanner
A scanner is an input device that functions similarly to a photocopier. It’s
employed when there’s information on paper that needs to be transferred to the
computer’s hard disc for subsequent manipulation. The scanner collects images
from the source and converts them to a digital format that may be saved on a disc.
Before they are printed, these images can be modified.

Scanner

Optical Mark Reader (OMR)


An Optical Mark Reader is a device that is generally used in educational
institutions to check the answers to objective exams. It recognizes the marks
present by pencil and pen.
Optical Character Reader (OCR)
OCR stands for optical character recognition, and it is a device that reads printed
text. OCR optically scans the text, character by character turns it into a machine-
readable code, and saves it to the system memory.
Magnetic Ink Card Reader (MICR)
It is a device that is generally used in banks to deal with the cheques given to the
bank by the customer. It helps in reading the magnetic ink present in the code
number and cheque number. This process is very fast compared to any other
process.
Bar Code Reader
A bar code reader is a device that reads data that is bar-coded (data that is
represented by light and dark lines).Bar-coded data is commonly used to mark
things, number books, and so on. It could be a handheld scanner or part of a
stationary scanner. A bar code reader scans a bar code image, converts it to an
alphanumeric value, and then sends it to the computer to which it is connected.

Bar Code Reader

Web Camera
Because a web camera records a video image of the scene in front of it, a webcam
is an input device. It is either built inside the computer (for example, a laptop) or
attached through a USB connection. A webcam is a computer-connected tiny
digital video camera. It’s also known as a web camera because it can take images
and record video. These cameras come with software that must be installed on the
computer in order to broadcast video in real-time over the Internet. It can shoot
images and HD videos, however, the video quality isn’t as good as other cameras
(In Mobiles or other devices or normal cameras).

Web Camera
Digitizer
Digitizer is a device that is used to convert analog signals to digital signals. it
converts signals into numeric values. An example of a Digitizer is Graphic Tablet,
which is used to convert graphics to binary data.
Microphone
The microphone works as an input device that receives input voice signals and also
has the responsibility of converting it also to digital form. It is a very common
device that is present in every device which is related to music.
Output Devices
Output Devices are the devices that show us the result after giving the input to a
computer system. Output can be of many different forms like image, graphic audio,
video, etc. Some of the output devices are described below.
Monitor
Monitors, also known as Visual Display Units (VDUs), are a computer’s primary
output device. It creates images by arranging small dots, known as pixels, in a
rectangular pattern. The amount of pixels determines the image’s sharpness.
The two kinds of viewing screens used for monitors are described below.
 Cathode-Ray Tube (CRT) Monitor: Pixels are minuscule visual
elements that make up a CRT display. The higher the image quality or
resolution, the smaller the pixels.
 Flat-Panel Display Monitor: In comparison to the CRT, a flat-panel
display is a type of video display with less volume, weight, and power
consumption. They can be hung on the wall or worn on the wrist.
Flat-panel displays are currently used in calculators, video games, monitors,
laptop computers, and graphical displays.

Monitor

Television
Television is one of the common output devices which is present in each and every
house. It portrays video and audio files on the screen as the user handles the
television. Nowadays, we are using plasma displays as compared to CRT screens
which we used earlier.
Printer
Printers are output devices that allow you to print information on paper. There are
certain types of printers which are described below.
 Impact Printers
 Character Printers
 Line Printers
 Non-Impact Printers
 Laser Printers
 Inkjet Printers

Printer

Impact Printer
Characters are printed on the ribbon, which is subsequently crushed against the
paper, in impact printers. The following are the characteristics of impact printers:
 Exceptionally low consumable cost.
 Quite noisy
 Because of its low cost, it is ideal for large-scale printing.
 To create an image, there is physical contact with the paper.
Character Printers
Character Printer has the capability to print only one character at a time. It is of
two types.
 Dot Matrix Printer
 Daisy Wheel
Line Printers
Line Printers are printers that have the capability to print one line at a time. It is
of two types.
 Drum Printer
 Chain Printer
Non-Impact Printers
Characters are printed without the need for a ribbon in non-impact printers.
Because these printers print a full page at a time, they’re also known as Page
Printers. The following are the characteristics of non-impact printers:
 Faster
 They don’t make a lot of noise.
 Excellent quality
 Supports a variety of typefaces and character sizes
Laser Printers
Laser Printers use laser lights for producing dots which will produce characters
on the page.
Inkjet Printers
Inkjet printers are printers that use spray technology for printing papers. High-
quality papers are produced in an Inkjet printer. They also do color printing.
Speakers
Speakers are devices that produce sound after getting a command from a
computer. Nowadays, speakers come with wireless technology also like Bluetooth
speakers.
Projector
Projectors are optical devices that have the work to show visuals on both types of
screens, stationary and moving both. It helps in displaying images on a big screen.
Projectors are generally used in theatres, auditoriums, etc.
Plotter
Plotter is a device that helps in making graphics or other images to give a real view.
A graphic card is mandatorily required to use these devices. These are the pen-like
devices that help in generating exact designs on the computer.
Braille Reader
A Braille Reader is a very important device used by blind users. It helps people with low
vision or no vision to recognize data by running their fingers over the device. It is a very
important device for blind persons, as it gives them a comfortable way to understand
letters, alphabets, etc., which helps them in their studies.
Video Card
A video card is a device that is fitted into the motherboard of the computer. It helps in
improving the quality of the digital content shown on output devices. It is an important
tool that helps in driving multiple display devices.
Global Positioning System (GPS)
The Global Positioning System helps the user with directions, as it uses satellite
technology to track the geographical location of the user. With continuous
latitudinal and longitudinal calculations, GPS gives accurate results. Nowadays, all
smart devices have inbuilt GPS.
Headphones
Headphones are just like speakers, but they are generally used by a single person and
are not suitable for large areas. They are also called headsets and produce sound at a
lower intensity than speakers.
The Input and Output Devices of a Computer
There are many devices that have the characteristics of both input and output.
They can perform both operations, as they receive data and also provide results.
Some of them are mentioned below.
USB Drive
USB Drive is one of the devices which perform both input and output operations
as a USB Drive helps in receiving data from a device and sending it to other devices.
Modem
A modem is an important device that helps in transmitting and receiving data over
telephone lines.
CD and DVD
CDs and DVDs are common devices that help in saving data from one computer in a
particular format and carrying it to other devices, where they work as input devices
to the computer.
Headset
The headset consists of a speaker and microphone where a speaker is an output
device and a microphone works as an input device.
Facsimile
A facsimile is a fax machine that consists of a scanner and printer, where the
scanner works as an input device and the printer works as an output device.
Interrupts
An interrupt is a signal emitted by hardware or software when a process or an event
needs immediate attention. It alerts the processor to a high-priority process requiring
interruption of the currently running process. In I/O devices, one of the bus control
lines is dedicated to this purpose and is called the interrupt request (IRQ) line; the
routine the processor executes in response is the Interrupt Service Routine (ISR).

When a device raises an interrupt while, say, instruction i is executing, the processor
first completes the execution of instruction i. Then it loads the Program Counter (PC)
with the address of the first instruction of the ISR. Before loading the Program Counter
with this address, the return address is saved in a temporary location, so that after
handling the interrupt the processor can continue with instruction i+1.

While the processor is handling the interrupts, it must inform the device th
at its request has been recognized so that it stops sending the interrupt req
uest signal. Also, saving the registers so that the interrupted process can be
restored in the future, increases the delay between the time an interrupt is
received and the start of the execution of the ISR. This is called Interrupt La
tency.
Software Interrupts:
A software interrupt is an interrupt produced by software or the system, as opposed
to hardware. Software interrupts are also known as traps and exceptions. They serve
as a signal for the operating system or a system service to carry out a certain function
or to respond to an error condition.
Software interrupts are generated by a particular instruction known as an "interrupt
instruction". When the interrupt instruction is executed, the processor stops what it
is doing and switches over to a particular interrupt handler routine. The interrupt
handler completes the required work or handles any errors before handing control
back to the interrupted application.
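As a rough user-level analogue of this control flow, the C sketch below registers a handler and then triggers it explicitly with raise(); the raise() call plays the role of the interrupt instruction, and execution resumes after the handler returns. This only illustrates the flow, not a real trap into the kernel, and printing from a handler is done here purely for demonstration.

    #include <signal.h>
    #include <stdio.h>

    /* Handler routine: stands in for the interrupt handler that runs when
     * the "interrupt instruction" is executed. */
    static void trap_handler(int signum) {
        printf("handler: servicing software interrupt (signal %d)\n", signum);
        /* when this function returns, the interrupted code continues */
    }

    int main(void) {
        signal(SIGINT, trap_handler);   /* install the handler (like installing an ISR) */

        printf("before raising the software interrupt\n");
        raise(SIGINT);                  /* plays the role of the interrupt instruction */
        printf("after the handler returns, execution continues here\n");
        return 0;
    }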
Hardware Interrupts:
In a hardware interrupt, all the devices are connected to the Interrupt Requ
est Line. A single request line is used for all the n devices. To request an inte
rrupt, a device closes its associated switch. When a device requests an inter
rupt, the value of INTR is the logical OR of the requests from individual devi
ces.
The sequence of events involved in handling an IRQ:
1. A device raises an IRQ.
2. The processor interrupts the program currently being executed.
3. The device is informed that its request has been recognized, and the device
deactivates the request signal.
4. The requested action is performed.
5. Interrupts are enabled again and the interrupted program is resumed.
Handling Multiple Devices:
When more than one device raises an interrupt request signal, additional information
is needed to decide which device should be serviced first. The following methods are
used to decide which device to select: Polling, Vectored Interrupts, and Interrupt
Nesting. These are explained below.
1. Polling: In polling, the first device encountered with its IRQ bit set is the
device that is serviced first, and the appropriate ISR is called to service it.
Polling is easy to implement, but a lot of time is wasted interrogating the
IRQ bit of every device.
2. Vectored Interrupts: In vectored interrupts, a device requesting an
interrupt identifies itself directly by sending a special code to the processor
over the bus. This enables the processor to identify the device that
generated the interrupt. The special code can be the starting address of the
ISR, or the location of the ISR in memory, and is called the interrupt vector.
3. Interrupt Nesting: In this method, the I/O devices are organized in a
priority structure. An interrupt request from a higher-priority device is
recognized, whereas a request from a lower-priority device is not. The
processor accepts interrupts only from devices/processes whose priority is
higher than its own current priority.
The processor's priority is encoded in a few bits of the processor status (PS) register.
It can be changed by program instructions that write into the PS. The processor is in
supervisor mode only while executing OS routines; it switches to user mode before
executing application programs.
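As a minimal sketch of the polling scheme described above, the C fragment below scans a table of devices and services the first one whose IRQ bit is set. The struct device type, the irq_pending flag and the devices array are hypothetical names invented for this illustration; a real kernel would read memory-mapped status registers, and the ISRs would be installed by device drivers.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_DEVICES 4

    /* Hypothetical device descriptor: one IRQ status bit and one ISR per device. */
    struct device {
        const char *name;
        bool irq_pending;                 /* stands in for the device's IRQ bit */
        void (*isr)(struct device *dev);  /* interrupt service routine */
    };

    static void generic_isr(struct device *dev) {
        printf("servicing interrupt from %s\n", dev->name);
        dev->irq_pending = false;         /* device deactivates its request signal */
    }

    static struct device devices[NUM_DEVICES] = {
        { "keyboard", false, generic_isr },
        { "disk",     true,  generic_isr },
        { "timer",    true,  generic_isr },
        { "network",  false, generic_isr },
    };

    /* Polling dispatch: interrogate the IRQ bit of each device in a fixed order
     * and service the first one found set. Time is wasted checking devices that
     * never raised a request, which is the main drawback of polling. */
    static void poll_and_dispatch(void) {
        for (int i = 0; i < NUM_DEVICES; i++) {
            if (devices[i].irq_pending) {
                devices[i].isr(&devices[i]);
                return;
            }
        }
        printf("no interrupt pending\n");
    }

    int main(void) {
        poll_and_dispatch();   /* services "disk" (first pending device found) */
        poll_and_dispatch();   /* then "timer" */
        poll_and_dispatch();   /* nothing left pending */
        return 0;
    }

With vectored interrupts the scan disappears: the device supplies an index (the interrupt vector), and the processor can jump directly to the corresponding ISR.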
Applications of Input/Output Interface
I/O Interface:
An interface is needed whenever the CPU wants to communicate with I/O devices.
The interface is used to interpret the address generated by the CPU. Thus, to
communicate with I/O devices, i.e. to share information between the CPU and I/O
devices, an interface is used, which is called the I/O Interface.
Various applications of I/O Interface:
One application of the I/O interface is that it allows a file to be opened without any
prior information about the file, i.e. even when basic information about the file is
unknown. It also makes it possible to add new devices to the computer system without
disrupting the operating system. In addition, it is used to abstract away the differences
among I/O devices by identifying a few general kinds; access to each general kind is
through a standardized set of functions, which is called an interface.
Each type of operating system has its own conventions for the device-driver interface.
A given device may ship with multiple device drivers, for instance drivers for Windows,
Linux, AIX and Mac OS. Devices vary along several dimensions, as illustrated in the
following table:
S.No. | Basis                    | Alteration                                                  | Example
1     | Mode of data transfer    | character or block                                          | terminal, disk
2     | Method of accessing data | sequential or random                                        | modem, CD-ROM
3     | Transfer schedule        | synchronous or asynchronous                                 | tape, keyboard
4     | Sharing methods          | dedicated or sharable                                       | tape, keyboard
5     | Speed of device          | latency, seek time, transfer rate, delay between operations |
6     | I/O direction            | read only, write only, read-write                           | CD-ROM, graphics controller, disk
1. Character-stream or Block:
Both character-stream and block devices transfer data in the form of bytes.
The difference between them is that a character-stream device transfers
bytes in a linear way, i.e. one after another, whereas a block device transfers
a whole block of bytes as a single unit.
2. Sequential or Random Access:
A sequential device transfers data in a fixed order determined by the
device, whereas a random-access device allows the user to instruct the
device to seek to any of the data storage locations.
3. Synchronous or Asynchronous:
A synchronous device performs data transfers with predictable response
times, in coordination with other aspects of the system. An asynchronous
device exhibits irregular or unpredictable response times that are not
coordinated with other computer events.
4. Sharable or Dedicated:
A sharable device can be used concurrently by several processes or threads,
whereas a dedicated device cannot.
5. Speed of Operation:
Device speeds range from a few bytes per second to a few gigabytes per
second.
6. Read-write, read only, write only:
Different devices perform different operations; some support both input
and output, while others support only one data transfer direction, either
input or output.
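The six dimensions above can be collected into a single descriptor per device. The following C sketch is illustrative only: the enum and struct names are invented for this example and do not correspond to any particular operating system's driver model.

    #include <stdio.h>

    /* Illustrative encoding of the device dimensions listed above. */
    enum transfer_mode { CHARACTER_DEVICE, BLOCK_DEVICE };
    enum access_method { SEQUENTIAL_ACCESS, RANDOM_ACCESS };
    enum schedule      { SYNCHRONOUS, ASYNCHRONOUS };
    enum sharing       { DEDICATED, SHARABLE };
    enum direction     { READ_ONLY, WRITE_ONLY, READ_WRITE };

    struct device_caps {
        const char        *name;
        enum transfer_mode mode;
        enum access_method access;
        enum schedule      sched;
        enum sharing       share;
        long               bytes_per_second;  /* rough speed of the device */
        enum direction     dir;
    };

    int main(void) {
        struct device_caps keyboard = { "keyboard", CHARACTER_DEVICE, SEQUENTIAL_ACCESS,
                                        ASYNCHRONOUS, DEDICATED, 10, READ_ONLY };
        struct device_caps disk     = { "disk", BLOCK_DEVICE, RANDOM_ACCESS,
                                        ASYNCHRONOUS, SHARABLE, 500000000L, READ_WRITE };

        printf("%s: block device = %d, random access = %d\n",
               disk.name, disk.mode == BLOCK_DEVICE, disk.access == RANDOM_ACCESS);
        printf("%s: character device = %d, read-only = %d\n",
               keyboard.name, keyboard.mode == CHARACTER_DEVICE, keyboard.dir == READ_ONLY);
        return 0;
    }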
Unit 2(Security Management Mechanism)

Protection:

Need of Protection in Operating System


Various needs of protection in the operating system are as follows:

1. There may be security risks like unauthorized reading, writing, modification, or
preventing the system from working effectively for authorized users.
2. It helps to ensure data security, process security, and program security against
unauthorized user access or program access.
3. It is important to ensure no access rights' breaches, no viruses, no unauthorized
access to the existing data.
4. Its purpose is to ensure that programs, resources, and data are accessed only in
accordance with the system's policies.

Goals of Protection in Operating System


Various goals of protection in the operating system are as follows:

1. The policies define how processes access the computer system's resources, such
as the CPU, memory, software, and even the operating system. It is the
responsibility of both the operating system designer and the app programmer.
However, these policies can be modified at any time.
2. Protection is a technique for protecting data and processes from harmful or
intentional infiltration. It contains protection policies either established by itself,
set by management or imposed individually by programmers to ensure that
their programs are protected to the greatest extent possible.
3. It also provides a multiprogramming OS with the security that its users expect
when sharing common space such as files or directories.
Domain of Protection
Various domains of protection in operating system are as follows:

1. The protection policies restrict each process's access to its resource handling. A process is
obligated to use only the resources necessary to fulfil its task, within the time constraints and
in the mode in which it is required. It is a process's protected domain.
2. Processes and objects are abstract data types in a computer system, and these objects have
operations that are unique to them. A domain component is defined as <object, {set of
operations on object}>.
3. Each domain comprises a collection of objects and the operations that may be implemented
on them. A domain could be made up of only one process, procedure, or user. If a domain is
linked with a procedure, changing the domain would mean changing the procedure ID.
Objects may share one or more common operations.
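A domain component <object, {set of operations on object}> can be sketched as a small data structure. The C fragment below is a hypothetical illustration, assuming objects are named by strings and operations are encoded as bit flags; real systems implement this idea with access matrices or access-control lists.

    #include <stdio.h>
    #include <string.h>

    /* Operations on an object, encoded as bit flags (illustrative only). */
    #define OP_READ    0x1
    #define OP_WRITE   0x2
    #define OP_EXECUTE 0x4

    /* One domain component: <object, {set of operations on object}> */
    struct access_right {
        const char *object;
        unsigned    ops;
    };

    /* A protection domain is a collection of such components. */
    struct domain {
        const char          *name;
        struct access_right  rights[8];
        int                  nrights;
    };

    /* May a process running in domain d perform operation op on object? */
    static int domain_allows(const struct domain *d, const char *object, unsigned op) {
        for (int i = 0; i < d->nrights; i++)
            if (strcmp(d->rights[i].object, object) == 0)
                return (d->rights[i].ops & op) != 0;
        return 0;   /* object not present in this domain at all */
    }

    int main(void) {
        struct domain user = {
            "user", { { "file1", OP_READ }, { "file2", OP_READ | OP_WRITE } }, 2
        };
        printf("read file1:  %d\n", domain_allows(&user, "file1", OP_READ));    /* 1 */
        printf("write file1: %d\n", domain_allows(&user, "file1", OP_WRITE));   /* 0 */
        printf("exec file3:  %d\n", domain_allows(&user, "file3", OP_EXECUTE)); /* 0 */
        return 0;
    }

Switching a process to another domain then simply means evaluating its requests against a different rights table.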

Association between Process and Domain


When processes have the necessary access rights, they can switch from one domain to
another. It could be of two types, as shown below.

1. Fixed or Static

In a fixed association, all access rights are given to processes at the start. However, this
results in a very large number of access rights to cover domain switching. As a result, a
technique for changing the domain's contents dynamically is needed.

2. Changing or dynamic

A process may switch domains dynamically, creating a new domain in the process.

Security measures of Operating System


There are various security measures of the operating system that the users may take. Some of
them are as follows:

1. The network used for file transfers must be secure at all times. During the transfer, no alien
software should be able to harvest information from the network. This is referred to as network
sniffing, and it can be avoided by implementing encrypted data transfer routes. Moreover,
the OS should be capable of resisting forceful or even accidental violations.
2. Passwords are a good authentication method, but they are the most common and the most
vulnerable: it is often easy to crack passwords.
3. Security measures at various levels are put in place to prevent malpractices, such as
restricting who is allowed on the premises or who can access the systems.
4. The best authentication techniques include a username-password combination, an eye retina
scan, a fingerprint, or even user cards to access the system.

System Authentication
One-time passwords, encrypted passwords, and cryptography are used to create a
strong password and a formidable authentication source.

1. One-time Password

With a one-time password, the password is unique at every login. It combines two values to
grant the user access: the system creates a random number (a challenge), and the user supplies
a matching response. An algorithm shared by the system and the user computes the response
from the random number, and the two outputs are matched using this common function.
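The challenge-response idea can be sketched as follows. This is a toy illustration under stated assumptions: the mixing function otp_response and the shared secret are invented for the example, whereas real one-time-password systems use cryptographic constructions such as HMAC-based OTP.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Toy "common function" known to both the system and the user's token.
     * Real OTP schemes use a cryptographic hash or HMAC instead. */
    static unsigned long otp_response(unsigned long secret, unsigned long challenge) {
        return (secret * 2654435761UL) ^ (challenge * 40503UL);
    }

    int main(void) {
        unsigned long shared_secret = 123456789UL;   /* known only to system and user */

        srand((unsigned)time(NULL));
        unsigned long challenge = (unsigned long)rand();  /* random number from the system */

        /* The user's side computes the function over the challenge... */
        unsigned long user_reply = otp_response(shared_secret, challenge);
        /* ...and the system compares it with its own computation. */
        unsigned long expected   = otp_response(shared_secret, challenge);

        if (user_reply == expected)
            printf("login accepted for challenge %lu\n", challenge);
        else
            printf("login rejected\n");
        return 0;
    }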

2. Encrypted Passwords

It is also a very effective technique for authenticating access. Passwords are encrypted before
being passed over the network that transfers and checks them, allowing the data to pass
without being intercepted or tampered with.

3. Cryptography

It is another way to ensure that unauthorized users cannot access data transferred over a
network, and it aids in the secure transmission of data. It introduces the concept of a key for
protecting the data, and the key is crucial here. When a user sends data, it is encoded on a
computer that has the key, and the receiver must decode the data with the same key. As a
result, even if the data is stolen in transit, there is a good chance the unauthorized user will
not be able to read it.
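A minimal sketch of the shared-key idea, assuming a toy XOR "cipher": applying the same key twice recovers the original text, just as the receiver decodes with the same key the sender used. This offers no real security; production systems use vetted ciphers such as AES.

    #include <stdio.h>
    #include <string.h>

    /* Toy symmetric "cipher": XOR every byte with a repeating key.
     * Encoding and decoding are the same operation. Illustrative only. */
    static void xor_crypt(char *buf, size_t len, const char *key, size_t keylen) {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= key[i % keylen];
    }

    int main(void) {
        char message[] = "transfer 100 to account 42";
        const char *key = "sharedkey";
        size_t len = strlen(message);

        xor_crypt(message, len, key, strlen(key));  /* sender encodes with the key */
        /* the bytes travelling over the network are now scrambled */
        xor_crypt(message, len, key, strlen(key));  /* receiver decodes with the same key */
        printf("recovered: %s\n", message);
        return 0;
    }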
Authentication:
1. Passwords: Password verification is the most popular and commonly used
authentication technique. A password is secret text that is supposed to be
known only to a user. In a password-based system, each user is assigned a
valid username and password by the system administrator. The system
stores all usernames and passwords. When a user logs in, their username
and password are verified by comparing them with the stored login name
and password. If the contents are the same, the user is allowed to access
the system; otherwise, the login is rejected.
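A minimal sketch of this comparison, assuming a hypothetical in-memory account table: real operating systems never store plaintext passwords as shown here, but keep salted hashes (for example in /etc/shadow) and compare hash values instead.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stored credentials; real systems store salted hashes instead. */
    struct account { const char *username; const char *password; };

    static const struct account accounts[] = {
        { "alice", "wonderland" },
        { "bob",   "builder"    },
    };

    /* Compare the supplied username/password with the stored ones. */
    static int login(const char *user, const char *pass) {
        for (size_t i = 0; i < sizeof(accounts) / sizeof(accounts[0]); i++)
            if (strcmp(accounts[i].username, user) == 0 &&
                strcmp(accounts[i].password, pass) == 0)
                return 1;   /* contents match: access granted */
        return 0;           /* unknown user or wrong password: rejected */
    }

    int main(void) {
        printf("alice/wonderland -> %s\n", login("alice", "wonderland") ? "allowed" : "rejected");
        printf("alice/guess      -> %s\n", login("alice", "guess")      ? "allowed" : "rejected");
        return 0;
    }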

Android Operating System


Android is a mobile operating system based on a modified version of the Linux kernel
and other open-source software, designed primarily for touchscreen mobile devices
such as smartphones and tablets. Android is developed by a partnership of developers
known as the Open Handset Alliance and commercially sponsored by Google. It was
unveiled in November 2007, with the first commercial Android device, the HTC Dream,
launched in September 2008.

It is free and open-source software. Its source code is Android Open Source Project
(AOSP), primarily licensed under the Apache License. However, most Android devices
dispatch with additional proprietary software pre-installed, mainly Google Mobile
Services (GMS), including core apps such as Google Chrome, the digital distribution
platform Google Play and the associated Google Play Services development platform.


o About 70% of Android smartphones run Google's ecosystem, some with a vendor-
customized user interface and software suite, such as TouchWiz and later One UI by
Samsung, and HTC Sense.
o Competing Android ecosystems and forks include Fire OS (developed by Amazon) and
LineageOS. However, the "Android" name and logo are trademarks of Google, which
imposes standards that restrict "uncertified" devices outside its ecosystem from using
Android branding.

Features of Android Operating System


Below are the following unique features and characteristics of the android
operating system, such as:

1. Near Field Communication (NFC)

Most Android devices support NFC, which allows electronic devices to interact across
short distances easily. The main goal here is to create a payment option that is simpler
than carrying cash or credit cards, and while the market hasn't exploded as many
experts had predicted, there may be an alternative in the works, in the form of
Bluetooth Low Energy (BLE).

2. Infrared Transmission

The Android operating system supports a built-in infrared transmitter that allows you
to use your phone or tablet as a remote control.

3. Automation

The Tasker app allows control of app permissions and also automates them.

4. Wireless App Downloads

You can download apps on your PC by using the Android Market or third-party options
like AppBrain. Then it automatically syncs them to your Droid, and no plugging is
required.

5. Storage and Battery Swap

Android phones also have unique hardware capabilities. Google's OS makes it possible
to upgrade, replace, and remove your battery that no longer holds a charge. In
addition, Android phones come with SD card slots for expandable storage.
6. Custom Home Screens

While it's possible to hack certain phones to customize the home screen, Android
comes with this capability from the get-go. Download a third-party launcher like Apex,
Nova, and you can add gestures, new shortcuts, or even performance enhancements
for older-model devices.

7. Widgets

Apps are versatile, but sometimes you want information at a glance instead of having
to open an app and wait for it to load. Android widgets let you display just about any
feature you choose on the home screen, including weather apps, music widgets, or
productivity tools that helpfully remind you of upcoming meetings or approaching
deadlines.

8. Custom ROMs

Because the Android operating system is open-source, developers can tweak the
current OS and build their own versions, which users can download and install in place of
the stock OS. Some are filled with features, while others change the look and feel of a
device. Chances are, if there's a feature you want, someone has already built a custom
ROM for it.

Architecture of Android OS
The android architecture contains a different number of components to support any
android device needs. Android software contains an open-source Linux Kernel with
many C/C++ libraries exposed through application framework services.


Among all the components, Linux Kernel provides the main operating system functions
to Smartphone and Dalvik Virtual Machine (DVM) to provide a platform for running an
android application. An android operating system is a stack of software components
roughly divided into five sections and four main layers, as shown in the below
architecture diagram.

o Applications
o Application Framework
o Android Runtime
o Platform Libraries
o Linux Kernel

1. Applications


An application is the top layer of the android architecture. The pre-installed
applications like camera, gallery, home, contacts, etc., and third-party applications
downloaded from the play store like games, chat applications, etc., will be installed on
this layer.

It runs within the Android run time with the help of the classes and services provided
by the application framework.

2. Application framework

Application Framework provides several important classes used to create an Android
application. It provides a generic abstraction for hardware access and helps in
managing the user interface with application resources. Generally, it provides the
services with the help of which we can create a particular class and make that class
helpful for the Applications creation.

It includes different types of services, such as activity manager, notification manager,
view system, package manager, etc., which are helpful for the development of our
application according to our requirements.
The Application Framework layer provides many higher-level services to applications
in the form of Java classes. Application developers are allowed to make use of these
services in their applications. The Android framework includes the following key
services:

o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other
applications.
o Resource Manager: Provides access to non-code embedded resources such as strings,
colour settings and user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the
user.
o View System: An extensible set of views used to create application user interfaces.

3. Application runtime

Android Runtime environment contains components like core libraries and the Dalvik
virtual machine (DVM). It provides the base for the application framework and powers
our application with the help of the core libraries.

Unlike the stack-based Java Virtual Machine (JVM), the Dalvik Virtual Machine (DVM) is a
register-based virtual machine designed and optimized for Android to ensure that a device
can run multiple instances efficiently.

It depends on the layer Linux kernel for threading and low-level memory management.
The core libraries enable us to implement android applications using the
standard JAVA or Kotlin programming languages.

4. Platform libraries

The Platform Libraries include various C/C++ core libraries and Java-based libraries
such as Media, Graphics, Surface Manager, OpenGL, etc., to support Android
development.

o app: Provides access to the application model and is the cornerstone of all Android
applications.
o content: Facilitates content access, publishing and messaging between applications
and application components.
o database: Used to access data published by content providers and includes SQLite
database, management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services, including
messages, system services and inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons,
labels, list views, layout managers, radio buttons etc.
o WebKit: A set of classes intended to allow web-browsing capabilities to be built into
applications.
o media: Media library provides support to play and record an audio and video format.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link
between a web server and a web browser.

5. Linux Kernel

Linux Kernel is the heart of the android architecture. It manages all the available drivers
such as display, camera, Bluetooth, audio, memory, etc., required during the runtime.

The Linux Kernel will provide an abstraction layer between the device hardware and
the other android architecture components. It is responsible for the management of
memory, power, devices etc. The features of the Linux kernel are:

o Security: The Linux kernel handles the security between the application and the
system.
o Memory Management: It efficiently handles memory management, thereby
providing the freedom to develop our apps.
o Process Management: It manages the process well, allocates resources to processes
whenever they need them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that applications work properly on the device; hardware
manufacturers are responsible for building their drivers into the Linux build.

Android Applications
Android applications are usually developed in the Java language using the Android
Software Development Kit. Once developed, Android applications can be packaged
easily and sold either through a store such as Google Play, SlideME, Opera
Mobile Store, Mobango, F-droid or the Amazon Appstore.

Android powers hundreds of millions of mobile devices in more than 190 countries
around the world. It's the largest installed base of any mobile platform and growing
fast. Every day more than 1 million new Android devices are activated worldwide.
