
Course outlines

Operating System II
1. Memory Management
a. Memory
b. Partitioning, paging and segmentation.
c. Virtual memory
d. Page Faults
e. Address translation and page fault handling
f. Memory management hardware: page table and Translation Lookaside Buffer (TLB)
g. Memory management algorithms: fetch policy, replacement
policy

2. Input / Output Management and Disk Scheduling


h. I/O devices
i. Organization of I/O function
j. I/O buffering
k. Disk scheduling, RAID

3. File Management
l. File systems
i. File systems interface
ii. File system structures
m. Organization: files and directories
n. Secondary storage management, file systems: FAT and NTFS
o. File protection & Security

4. Deadlocks
p. Conditions for deadlocks
q. Deadlock avoidance
r. Deadlock prevention
s. Research on deadlocks

5. Multi-processor systems
t. Multicomputer
u. Virtualization
v. Distributed systems

6. Operating system security


w. Cryptography
x. Authentication
y. Malware etc.

7. Operating system designs


z. Case studies
i. Linux
ii. Windows Vista
iii. Symbian OS

1. Memory Management
A. Memory:
What is Memory?
Computer memory is any physical device capable of storing
information temporarily, like RAM (random access memory), or
permanently, like ROM (read-only memory). Memory devices
utilize integrated circuits and are used by operating systems,
software, and hardware. When information is placed in memory, the operation is a write; when information is retrieved from memory, the operation is a read.

Volatile vs. non-volatile memory


Memory can be either volatile or non-volatile. Volatile memory loses its contents when the computer or hardware device loses power. Computer RAM is an example of volatile memory; this is why, if your computer freezes or reboots while you are working on a program, you lose anything that wasn't saved. Non-volatile memory keeps its contents even if the power is lost (non-volatile RAM is abbreviated NVRAM). EPROM is an example of non-volatile memory. Computers use both non-volatile and volatile memory.

Are some types of memory faster than others?


Yes. Some memory devices can store and access information
faster than others. When buying RAM, for example, you can
easily compare different options by looking at the DDR (double
data rate) version. DDR4 RAM is roughly twice as fast as DDR3 RAM. For a more specific indicator, RAM has a megahertz (MHz) number next to it indicating its clock rate; the higher the MHz, the faster the RAM. While the
capacity of RAM determines the amount of information your
device can handle at one time, the speed at which the
information is stored and accessed also varies between
memory devices.

What happens to memory when the computer is turned off?
As mentioned above, because RAM is volatile memory,
anything stored in RAM is lost when the computer loses power.
For example, a document you are working on is held in RAM. If it has not been saved to non-volatile memory (e.g., the hard drive), it is lost when the computer loses power.

Memory is not disk storage.

It is common for new computer users to be confused about which parts of the computer are memory. Although both the hard drive and RAM are forms of memory, it is more appropriate to refer to RAM as "memory" or "primary memory" and a hard drive as "storage" or "secondary storage."

When someone asks how much memory is in your computer, it is often between 1 GB and 16 GB of RAM and several hundred gigabytes, or even a terabyte, of hard disk drive storage. In other words, you almost always have far more hard drive space than RAM.

How is memory used?


When a program, such as your Internet browser, is open, it is loaded from
your hard drive and placed into RAM. This process allows that program to
communicate with the processor at higher speeds. Anything you save to
your computer, such as pictures or videos, is sent to your hard drive for
storage.

Why is memory important or needed for a computer?
Each device in a computer operates at different speeds, and computer
memory gives your computer a place to access data quickly. If the CPU
had to wait for a secondary storage device, like a hard disk drive, a
computer would be much slower.

B. Partitioning, paging and segmentation


Partitioning
Partitioning is a technique that divides memory into several partitions, where each partition can hold a single process. When a process is loaded into memory, it is placed in one of these partitions.
This technique can be implemented in two ways: fixed partitioning and dynamic partitioning.
In a multiprogramming system, in order to share the processor, multiple processes must be kept in memory simultaneously.
In operating systems, memory management is the function responsible for allocating and managing a computer's main memory. It keeps track of the status of each memory location, either allocated or free, to ensure effective and efficient use of primary memory.
There are two memory management techniques: contiguous and non-contiguous. In the contiguous technique, an executing process must be loaded entirely into main memory.
Fixed Partitioning

The earliest and one of the simplest techniques which can be used to load more than
one process into the main memory is Fixed partitioning or Contiguous memory
allocation.

In this technique, the main memory is divided into partitions of equal or different sizes.
The operating system always resides in the first partition while the other partitions can
be used to store user processes. The memory is assigned to the processes in a
contiguous way.

In fixed partitioning,

1. The partitions cannot overlap.


2. A process must be contiguously present in a partition for execution.

There are various cons of using this technique.

1. Internal Fragmentation

If the size of the process is less than the size of the partition, part of the partition is wasted and remains unused. This wasted memory is called internal fragmentation.

For example, if a 4 MB partition is used to load a 3 MB process, the remaining 1 MB is wasted.

2. External Fragmentation

The total unused space across the various partitions cannot be used to load a process, because the space is available only in non-contiguous pieces.

For example, the leftover 1 MB in each partition cannot be combined to store a 4 MB process. Even though sufficient total space is available, the process cannot be loaded.

3. Limitation on the size of the process

If a process is larger than the largest partition, it cannot be loaded into memory at all. This effectively imposes a limit on process size: a process cannot be larger than the largest partition.
4. Degree of multiprogramming is less

By degree of multiprogramming, we simply mean the maximum number of processes that can be loaded into memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and relatively low, because partition sizes cannot be adjusted to match process sizes. (These cons are illustrated in the sketch below.)
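
As a rough illustration of these cons (not an OS implementation), the following Python sketch models fixed partitions of hypothetical sizes and shows both internal fragmentation and the hard limit on process size:

partitions = [4, 4, 4, 4]            # fixed partition sizes in MB (hypothetical)
free = [True] * len(partitions)      # which partitions are currently free

def load(process_mb):
    """Place a process in the first free partition that is large enough."""
    for i, size in enumerate(partitions):
        if free[i] and process_mb <= size:
            free[i] = False
            wasted = size - process_mb   # internal fragmentation
            print(f"{process_mb} MB process -> partition {i}, {wasted} MB wasted internally")
            return True
    print(f"{process_mb} MB process cannot be loaded")
    return False

load(3)   # uses a 4 MB partition; 1 MB is wasted (internal fragmentation)
load(5)   # larger than every partition, so it can never be loaded
# The degree of multiprogramming is capped at len(partitions) = 4,
# no matter how small the processes are.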

Dynamic Partitioning

Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this technique, the partition size is not declared initially; it is declared at the time of process loading.

The first partition is reserved for the operating system. The remaining space is divided
into parts. The size of each partition will be equal to the size of the process. The
partition size varies according to the need of the process so that the internal
fragmentation can be avoided.
Advantages of Dynamic Partitioning over Fixed Partitioning

1. No Internal Fragmentation

Because partitions in dynamic partitioning are created according to the needs of each process, there is no internal fragmentation: no unused space is left inside a partition.

2. No Limitation on the size of the process

In fixed partitioning, a process larger than the largest partition could not be executed because sufficient contiguous memory was unavailable. In dynamic partitioning, the process size is not restricted in this way, since the partition size is decided according to the process size.

3. Degree of multiprogramming is dynamic

Because there is no internal fragmentation, no space is wasted inside partitions, and more processes can be loaded into memory at the same time.

Disadvantages of dynamic partitioning


External Fragmentation

Absence of internal fragmentation doesn't mean that there will not be external
fragmentation.

Let's consider three processes, P1 (1 MB), P2 (3 MB) and P3 (1 MB), loaded into consecutive partitions of main memory.

After some time, P1 and P3 complete and their assigned space is freed. There are now two unused partitions (1 MB and 1 MB) in main memory, but they cannot be used to load a 2 MB process because they are not contiguous. (This scenario is sketched in code below.)

The rule says that the process must be contiguously present in the main memory to get
executed. We need to change this rule to avoid external fragmentation.

Complex Memory Allocation

In fixed partitioning, the list of partitions is created once and never changes. In dynamic partitioning, allocation and deallocation are more complex, because the partition size varies each time a partition is assigned to a new process, and the OS has to keep track of all the partitions.

Since allocation and deallocation are done very frequently in dynamic partitioning and the partition size changes each time, it is difficult for the OS to manage everything.
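
The external-fragmentation scenario above (P1, P2, P3, then freeing P1 and P3) can be sketched as follows; the first-fit allocator and the hole list are simplifications chosen for illustration, not how any particular OS manages its partitions:

total_mb = 5
holes = [(0, total_mb)]      # free memory tracked as (start, size) holes
allocated = {}               # process name -> (start, size)

def allocate(name, size):
    """First-fit: carve the process out of the first hole that is big enough."""
    for i, (start, hole_size) in enumerate(holes):
        if size <= hole_size:
            allocated[name] = (start, size)
            if hole_size - size:
                holes[i] = (start + size, hole_size - size)
            else:
                holes.pop(i)
            return True
    print(f"{name} ({size} MB) cannot be allocated: no single hole is large enough")
    return False

def free_process(name):
    start, size = allocated.pop(name)
    holes.append((start, size))          # note: adjacent holes are not merged here

allocate("P1", 1); allocate("P2", 3); allocate("P3", 1)
free_process("P1"); free_process("P3")
print("free holes:", sorted(holes))      # two separate 1 MB holes
allocate("P4", 2)                        # fails: external fragmentation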

Paging

In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.

The main idea behind paging is to divide each process into pages. Main memory is likewise divided into frames.

One page of a process is stored in one of the frames of memory. The pages can be stored at different locations in memory, although the system prefers to find contiguous frames (holes) when possible.

Pages of a process are brought into main memory only when they are required; otherwise they reside in secondary storage.

In other words, paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory: processes are retrieved from secondary storage into main memory in the form of pages, each process is divided into pages, and main memory is split into frames. This scheme permits the physical address space of a process to be non-contiguous.

In this scheme, the operating system retrieves data from secondary storage
(usually the swap space on the disk) in same-size blocks called pages. Paging is
an important part of virtual memory implementations in modern operating
systems, using secondary storage to let programs exceed the size of available
physical memory.

Different operating systems define different frame sizes, but within a system all frames must be the same size. Because pages are mapped onto frames, the page size must be the same as the frame size.
Page Table

Part of the concept of paging is the page table, which is a data structure used by the
virtual memory system to store the mapping between virtual addresses and physical
addresses. Virtual addresses are used by the executed program, while physical
addresses are used by the hardware, or more specifically, by the RAM subsystem. The
page table is a key component of virtual address translation which is necessary to
access data in memory.

Role of the page table

In operating systems that use virtual memory, every process is given the impression
that it is working with large, contiguous sections of memory. Physically, the memory of
each process may be dispersed across different areas of physical memory, or may have been moved (paged out) to secondary storage, typically a hard disk drive or solid-state drive.

When a process requests access to data in its memory, it is the responsibility of the
operating system to map the virtual address provided by the process to the physical
address of the actual memory where that data is stored. The page table is where the
operating system stores its mappings of virtual addresses to physical addresses, with
each mapping also known as a page table entry (PTE).
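
As a sketch of this idea, a page table can be modelled as a per-process mapping from virtual page number to a page table entry (PTE). The fields below (a present flag and a frame number) are a deliberate simplification; real PTEs also carry protection, dirty and reference bits, and their exact layout is hardware-specific:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PageTableEntry:
    present: bool            # is the page currently in a physical frame?
    frame: Optional[int]     # frame number if present, otherwise None

# Page table for one process: virtual page number -> PTE (illustrative values).
page_table = {
    0: PageTableEntry(present=True,  frame=5),
    1: PageTableEntry(present=True,  frame=2),
    2: PageTableEntry(present=False, frame=None),   # paged out to the backing store
}

def lookup(page_number):
    pte = page_table.get(page_number)
    if pte is None or not pte.present:
        return "page fault"              # no mapping, or page not resident
    return f"frame {pte.frame}"

print(lookup(1))   # frame 2
print(lookup(2))   # page fault (paged out)
print(lookup(9))   # page fault (no mapping at all)
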
[Figure: Mapping virtual memory to physical memory]

This figure shows the relationship between pages addressed by virtual addresses and the pages in physical memory, within a simple address-space scheme. Physical memory can contain pages belonging to many processes. If a page is not used for a period of time, the operating system can, if deemed necessary, move that page to secondary storage. The highlighted (purple) regions indicate where in physical memory the pieces of the executing processes reside; in each process's virtual address space, however, the memory appears contiguous.

The translation process

The CPU's memory management unit (MMU) stores a cache of recently used mappings from the operating system's page table. This is called the translation lookaside buffer (TLB), which is an associative cache.
[Figure: Actions taken upon a virtual-to-physical address translation request]

When a virtual address needs to be translated into a physical address, the TLB is
searched first. If a match is found (a TLB hit), the physical address is returned
and memory access can continue. However, if there is no match (called a TLB
miss), the memory management unit, or the operating system TLB miss handler,
will typically look up the address mapping in the page table to see whether a
mapping exists (a page walk). If one exists, it is written back to the TLB (this
must be done, as the hardware accesses memory through the TLB in a virtual
memory system), and the faulting instruction is restarted (this may happen in
parallel as well). The subsequent translation will find a TLB hit, and the memory
access will continue.
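
The hit/miss flow just described can be summarised in a short, purely illustrative sketch; the page size, the dictionary-based TLB and page table, and the PageFault exception are all assumptions made for clarity, not a description of real MMU hardware:

PAGE_SIZE = 4096              # assumption: 4 KB pages

tlb = {}                      # page number -> frame number (the cache)
page_table = {0: 7, 1: 3}     # page number -> frame number (valid mappings)

class PageFault(Exception):
    pass

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page in tlb:                    # TLB hit
        frame = tlb[page]
    elif page in page_table:           # TLB miss: walk the page table
        frame = page_table[page]
        tlb[page] = frame              # write the mapping back into the TLB
    else:                              # no valid mapping: page fault
        raise PageFault(f"no translation for page {page}")
    return frame * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 3 -> 12292 (fills the TLB)
print(translate(4100))   # the second access is a TLB hit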

Translation failures

The page table lookup may fail, triggering a page fault, for two reasons:

• The lookup may fail if there is no translation available for the virtual address, meaning that the virtual address is invalid. This will typically occur because of a programming error, and the operating system must take some action to deal with the problem. On modern operating systems, it will cause a segmentation fault signal to be sent to the offending program.
• The lookup may also fail if the page is currently not resident in physical memory. This will occur if the requested page has been moved out of physical memory to make room for another page. In this case the page has been paged out to a secondary store located on a medium such as a hard disk drive (this secondary store, or "backing store", is often called a "swap partition" if it is a disk partition, or a swap file, "swapfile" or "page file" if it is a file). When this happens, the page needs to be taken from disk and put back into physical memory. A similar mechanism is used for memory-mapped files, which are mapped to virtual memory and loaded into physical memory on demand.

When physical memory is not full this is a simple operation; the page is written back
into physical memory, the page table and TLB are updated, and the instruction is
restarted. However, when physical memory is full, one or more pages in physical
memory will need to be paged out to make room for the requested page. The page
table needs to be updated to mark that the pages that were previously in physical
memory are no longer there, and to mark that the page that was on disk is now in
physical memory. The TLB also needs to be updated, including removal of the paged-
out page from it, and the instruction restarted. Which page to page out is the subject
of page replacement algorithms.
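
As one concrete (and deliberately simple) answer to "which page to page out", the sketch below applies FIFO replacement to an arbitrary reference string; real systems generally use more refined page replacement algorithms:

from collections import deque

def fifo(reference_string, num_frames):
    resident = deque()                  # pages currently in memory, oldest first
    faults = 0
    for page in reference_string:
        if page in resident:
            continue                    # hit: nothing to do
        faults += 1                     # page fault
        if len(resident) == num_frames:
            victim = resident.popleft() # page out the longest-resident page
            print(f"page {page} in, page {victim} out")
        else:
            print(f"page {page} in (free frame)")
        resident.append(page)
    return faults

# 9 faults for this reference string with 3 frames
print("faults:", fifo([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))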

Some MMUs trigger a page fault for other reasons, whether or not the page is currently
resident in physical memory and mapped into the virtual address space of a process:

• Attempting to write when the page table has the read-only bit set causes a page fault. This is a normal part of many operating systems' implementation of copy-on-write; it may also occur when a write is done to a location from which the process is allowed to read but to which it is not allowed to write, in which case a signal is delivered to the process.
• Attempting to execute code when the page table has the NX bit (no-execute bit) set causes a page fault. This can be used by an operating system, in combination with the read-only bit, to provide a Write XOR Execute feature that stops some kinds of exploits. (Both checks are illustrated in the sketch below.)
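
The permission-bit checks in these two bullets can be illustrated with a small conceptual model; the Mapping class and flag names below are invented for the example and do not correspond to any real MMU's bit layout:

from dataclasses import dataclass

@dataclass
class Mapping:
    writable: bool
    executable: bool

def check_access(mapping, access):       # access is "read", "write" or "execute"
    if mapping is None:
        return "page fault: no mapping (or page not resident)"
    if access == "write" and not mapping.writable:
        return "page fault: write to a read-only page (e.g. copy-on-write or a protection error)"
    if access == "execute" and not mapping.executable:
        return "page fault: NX bit set, execution not allowed"
    return "ok"

code_page = Mapping(writable=False, executable=True)
data_page = Mapping(writable=True,  executable=False)   # W^X: writable but never executable

print(check_access(code_page, "write"))    # read-only violation
print(check_access(data_page, "execute"))  # NX violation
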
Example

Let us consider a main memory of size 16 KB and a frame size of 1 KB; the main memory will therefore be divided into a collection of 16 frames of 1 KB each.

There are 4 processes in the system, namely P1, P2, P3 and P4, of 4 KB each. Each process is divided into pages of 1 KB each, so that one page can be stored in one frame.

Initially, all the frames are empty, so the pages of the processes are stored in a contiguous way: with 4 pages per process, P1, P2, P3 and P4 together fill all 16 frames.
Let us consider that P2 and P4 are moved to the waiting state after some time and their frames are freed. Now 8 frames become empty, so other pages can be loaded into those empty frames. The process P5, of size 8 KB (8 pages), is waiting in the ready queue.

Given that we have 8 non-contiguous frames available in memory, and that paging provides the flexibility of storing a process's pages in different places, we can load the pages of process P5 into the frames previously used by P2 and P4.
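
The example can be played out in a few lines of Python; the frame numbers and load order below follow the description above, while the data structures themselves are illustrative only:

NUM_FRAMES = 16
frames = [None] * NUM_FRAMES       # frame index -> (process, page number) or None
page_tables = {}                   # process -> {page number: frame index}

def load(process, num_pages):
    """Place each page of the process in any free frame (no contiguity needed)."""
    free = [i for i, owner in enumerate(frames) if owner is None]
    if len(free) < num_pages:
        raise MemoryError(f"not enough free frames for {process}")
    page_tables[process] = {}
    for page, frame in enumerate(free[:num_pages]):
        frames[frame] = (process, page)
        page_tables[process][page] = frame

def unload(process):
    for frame in page_tables.pop(process).values():
        frames[frame] = None

for p in ("P1", "P2", "P3", "P4"):
    load(p, 4)                     # the 16 pages fill all 16 frames contiguously
unload("P2"); unload("P4")         # frames 4-7 and 12-15 become free
load("P5", 8)                      # P5's 8 pages go into the non-contiguous free frames
print(page_tables["P5"])           # {0: 4, 1: 5, 2: 6, 3: 7, 4: 12, 5: 13, 6: 14, 7: 15}
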
Memory Management Unit

The purpose of the Memory Management Unit (MMU) is to convert logical addresses into physical addresses. The logical address is the address generated by the CPU for every page, while the physical address is the actual address of the frame where each page is stored.

When a page is to be accessed by the CPU by using the logical address, the operating
system needs to obtain the physical address to access that page physically.

The logical address has two parts.

1. Page Number
2. Offset

The memory management unit converts the page number into the frame number.

Example

Continuing the example above, suppose the CPU requests the 10th word of the 4th page of process P3. If page 4 of P3 is stored in frame number 9, then the 10th word of frame 9 is returned as the physical address.
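
Assuming the 1 KB page size from the earlier example and word-level addressing, the physical address for this request works out as follows (the variable names are illustrative):

PAGE_SIZE = 1024                # words per page/frame (assumption: 1 KB pages, word-addressed)

page_table_p3 = {4: 9}          # page 4 of P3 is stored in frame 9 (from the text)

page, offset = 4, 10            # the CPU asks for the 10th word of the 4th page
frame = page_table_p3[page]
physical_address = frame * PAGE_SIZE + offset
print(physical_address)         # 9 * 1024 + 10 = 9226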

Let us look at some important terminology:

• Logical Address or Virtual Address (represented in bits): an address generated by the CPU.
• Logical Address Space or Virtual Address Space (represented in words or bytes): the set of all logical addresses generated by a program.
• Physical Address (represented in bits): an address actually available on the memory unit.
• Physical Address Space (represented in words or bytes): the set of all physical addresses corresponding to the logical addresses.

Features of paging:
1) It maps logical addresses to physical addresses.
2) Page size is equal to frame size.
3) The number of entries in a page table is equal to the number of pages in the logical address space.
4) Each page table entry contains the frame number.
5) The page tables of all processes are kept in main memory.
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits
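
These size calculations can be double-checked with a few lines of Python (the helper names are arbitrary):

import math

def space_in_words(address_bits):
    return 2 ** address_bits                 # an n-bit address can name 2^n words

def bits_needed(space_in_words_):
    return int(math.log2(space_in_words_))   # inverse: log2 of the space size

print(space_in_words(31) == 2 * 2 ** 30)     # True: 31-bit logical address -> 2 G words
print(bits_needed(128 * 2 ** 20))            # 27: 128 M words -> 27-bit logical address
print(space_in_words(22) == 4 * 2 ** 20)     # True: 22-bit physical address -> 4 M words
print(bits_needed(16 * 2 ** 20))             # 24: 16 M words -> 24-bit physical address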

The mapping from virtual address to physical address is done by the memory management unit (MMU), which is a hardware device, and this mapping is known as the paging technique.

• The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size

Let us consider an example:

• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into:

• Page number (p): the number of bits required to represent the pages in the Logical Address Space, i.e., the page number.

• Page offset (d): the number of bits required to represent a particular word in a page, i.e., the page size of the Logical Address Space, or the word number within a page (the page offset).

The physical address is divided into:

• Frame number (f): the number of bits required to represent a frame of the Physical Address Space, i.e., the frame number.
• Frame offset (d): the number of bits required to represent a particular word in a frame, i.e., the word number within a frame (the frame offset).

(The sketch below shows how these parts are extracted and recombined.)
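
Putting the two parts together, a logical address can be split into page number and offset with simple shifts and masks. The sketch below assumes the 13-bit logical address and 1 K-word pages from the example above, and the page-to-frame mapping used at the end is hypothetical:

OFFSET_BITS = 10                          # 1 K words per page = 2^10
OFFSET_MASK = (1 << OFFSET_BITS) - 1

def split(logical_address):
    page_number = logical_address >> OFFSET_BITS
    offset = logical_address & OFFSET_MASK
    return page_number, offset

def combine(frame_number, offset):
    return (frame_number << OFFSET_BITS) | offset   # physical address

p, d = split(0b101_0000001010)            # 13-bit address: page 5, offset 10
print(p, d)                               # 5 10
print(combine(3, d))                      # 3082, if page 5 happened to map to frame 3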

The hardware implementation of the page table can be done using dedicated registers, but using registers is satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (translation look-aside buffer), a special, small, fast hardware lookup cache.

• The TLB is an associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags simultaneously; if the item is found, the corresponding value is returned. (A toy software model follows below.)
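
A toy software model of such a tag/value cache is sketched below; the capacity, the LRU-style eviction and the class name are illustrative choices, since a real TLB compares all tags in parallel in hardware:

from collections import OrderedDict

class TinyTLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()         # tag (page number) -> value (frame number)

    def lookup(self, tag):
        if tag in self.entries:
            self.entries.move_to_end(tag)    # keep recently used entries fresh
            return self.entries[tag]         # hit: return the value
        return None                          # miss: the page table must be consulted

    def insert(self, tag, value):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False) # evict the least recently used entry
        self.entries[tag] = value

tlb = TinyTLB()
tlb.insert(4, 9)             # page 4 -> frame 9 (hypothetical mapping)
print(tlb.lookup(4))         # 9 (hit)
print(tlb.lookup(7))         # None (miss)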

Segmentation
In segmentation, a process is divided into segments: the chunks that a program is divided into, which are not necessarily all of the same size. Segmentation reflects the user's view of the process, which paging does not; here the user's view is mapped onto physical memory. There are two types of
segmentation:

1. Virtual memory segmentation – Each process is divided into a number of segments, not all of
which are resident at any one point in time.

2. Simple segmentation – Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table called the segment table stores the information about all the segments of a process.

Segment Table – It maps the two-dimensional logical address (segment number, offset) into a one-dimensional physical address. Each table entry has:

Base Address: the starting physical address where the segment resides in memory.

Limit: the length of the segment.


[Figure: Segment table mapping to physical address]

[Figure: Translation of two-dimensional logical address to one-dimensional physical address]

Walking through the translation shown in the figures above:

1. The CPU generates a two-part logical address (segment number, offset).
2. The segment number is used to get the Limit and the Base Address values from the segment table.
3. If the segment offset (d) is less than the Limit value from the segment table, then:
   o The Base Address returned from the segment table points to the beginning of the segment.
   o The Limit value marks the end of the segment in physical memory.

The address generated by the CPU is divided into:

• Segment number (s): the number of bits required to represent the segment.
• Segment offset (d): the number of bits required to represent the size of the segment (the offset within it).

Why is Segmentation required?

Until now, we have been using paging as our main memory management technique. Paging is closer to the operating system's view than to the user's: it divides all processes into pages, even though a process may have related parts, such as a function, that need to be loaded in the same page.

The operating system does not care about the user's view of the process. It may split the same function across different pages, and those pages may or may not be loaded into memory at the same time, which decreases the efficiency of the system.

It is better to have segmentation, which divides the process into segments. Each segment contains the same type of functions: for example, the main function can be placed in one segment and the library functions in another.
For Example:

Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 (2^12) and the maximum number of segments that can be referenced is 16 (2^4).

When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; space information is obtained from the free list maintained by the memory manager. It then tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.

The operating system also generates a segment map table for each program.

With the help of segment map tables and hardware assistance, the operating system can easily
translate a logical address into physical address on execution of a program.

The segment number is used as an index into the segment table. The limit of the respective segment is compared with the offset: if the offset is less than the limit, the address is valid; otherwise an error is raised because the address is invalid.

For valid addresses, the base address of the segment is added to the offset to get the physical address of the actual word in main memory.
This is how address translation is performed in the case of segmentation.
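
A hedged sketch of this translation, reusing the 16-bit split from the earlier example (4-bit segment number, 12-bit offset), is shown below; the base and limit values in the segment table are hypothetical:

OFFSET_BITS = 12                            # 16-bit address: 4-bit segment number, 12-bit offset

segment_table = {                           # segment number -> (base, limit), hypothetical values
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 400),
}

def translate(logical_address):
    s = logical_address >> OFFSET_BITS                # segment number
    d = logical_address & ((1 << OFFSET_BITS) - 1)    # offset within the segment
    base, limit = segment_table[s]
    if d >= limit:                                    # the offset must be less than the limit
        raise MemoryError(f"invalid address: offset {d} exceeds limit {limit} of segment {s}")
    return base + d                                   # physical address

print(translate((1 << 12) + 53))            # segment 1, offset 53 -> 6300 + 53 = 6353
try:
    translate((2 << 12) + 500)              # offset 500 >= limit 400 of segment 2
except MemoryError as e:
    print(e)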

Advantages of Segmentation

1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
6. As a complete module is loaded all at once, segmentation improves CPU utilization.
7. Segmentation is close to the user's perception of memory. Users can divide programs into modules via segmentation; these modules correspond to logically separate parts of a program's code.
8. The user specifies the segment size, whereas in paging, the hardware determines the page size.
9. Segmentation is a method that can be used to segregate data from security operations.
10. Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments can be of
variable size, and processes can be designed to have multiple segments, allowing for more fine-
grained memory allocation.
11. Sharing: Segmentation allows for sharing of memory segments between processes. This can be
useful for inter-process communication or for sharing code libraries.
12. Protection: Segmentation provides a level of protection between segments, preventing one
process from accessing or modifying another process’s memory segment. This can help increase
the security and stability of the system.

Disadvantages
1. As processes are loaded into and removed from memory, the free memory space is broken into little pieces, causing external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized segments.
3. Costly memory management algorithms.
4. Overhead is associated with keeping a segment table for each activity.
5. Due to the need for two memory accesses, one for the segment table and the other for main
memory, access time to retrieve the instruction increases.
6. Fragmentation: As mentioned, segmentation can lead to external fragmentation as memory
becomes divided into smaller segments. This can lead to wasted memory and decreased
performance.
7. Overhead: The use of a segment table can increase overhead and reduce performance. Each
segment table entry requires additional memory, and accessing the table to retrieve memory
locations can increase the time needed for memory operations.
8. Complexity: Segmentation can be more complex to implement and manage than paging. In
particular, managing multiple segments per process can be challenging, and the potential for
segmentation faults can increase as a result.
