
Sunday, 15 December 2024

1. Concepts of Virtual Memory

• Definition: Virtual memory is a memory management technique that creates an illusion of a large main memory by using a combination of RAM and a portion of the hard disk (called swap space).

• Purpose : To run programs larger than the physical memory and to improve
multitasking.

• Key Concepts:
1. Physical Memory: This is the actual RAM installed in your computer. It's a finite resource.
2. Physical Address: Address in main memory (RAM).
3. Virtual Memory: This is a simulated memory space created by the operating system. It can be much larger than physical memory.
4. Logical Address: Address generated by the CPU.
5. Mapping: Done using page tables to translate logical to physical addresses.
6. Paging: A memory management scheme that divides virtual memory into fixed-size blocks called pages and physical memory into frames of the same size.
7. Page Table: A data structure that maps virtual pages to physical frames (physical
memory blocks).
8. Page Fault: Occurs when a process tries to access a page that isn't currently in physical
memory. The OS must then bring the page from secondary storage (like a hard disk) into
physical memory.
9. Demand Paging: A strategy where pages are brought into physical memory only when
they are needed, reducing memory overhead.
10. Thrashing: A condition where the system spends more time swapping pages in and out
of memory than executing processes, leading to poor performance.

• Benefits of Virtual Memory:
1. Increased Multiprogramming: Allows more processes to run concurrently.
2. Efficient Memory Utilization: Reduces memory fragmentation.
3. Larger Address Space: Enables programs to use more memory than physically
available.

• Drawbacks of Virtual Memory:


1. Performance Overhead: Page faults can slow down execution.
2. Thrashing: Can lead to severe performance degradation.

• How Virtual Memory Works:


1. Address Translation: When a program tries to access a memory location, it generates a
virtual address.
2. Page Table Lookup: The operating system uses the page table to translate the virtual
address into a physical address.
3. Memory Access: If the page is in physical memory, the CPU accesses the data directly.
4. Page Fault Handling: If the page is not in physical memory, a page fault occurs. The
OS:
• Selects a victim page to be replaced.
• Swaps the required page into the freed frame.
• Updates the page table.
• Restarts the instruction that caused the page fault.
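The fault-handling steps above can be sketched in Python. This is a minimal illustration, not a real MMU: the page size, frame numbers, and the simple FIFO victim choice are all assumptions made for the example.

```python
# Minimal sketch of address translation with page-fault handling.
# All names and values here are hypothetical.
PAGE_SIZE = 4096

def translate(vaddr, page_table, free_frames, fifo_queue):
    """Return the physical address for vaddr, loading the page on a fault."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:                  # page fault
        if free_frames:
            frame = free_frames.pop()
        else:                                  # select and evict the oldest page
            victim = fifo_queue.pop(0)
            frame = page_table.pop(victim)
        # (a real OS would now swap the page in from disk into `frame`)
        page_table[vpn] = frame                # update the page table
        fifo_queue.append(vpn)
    return page_table[vpn] * PAGE_SIZE + offset

page_table, fifo_queue, free_frames = {}, [], [0, 1]   # only two physical frames
print(translate(8192, page_table, free_frames, fifo_queue))   # 8192 = vpn 2 -> frame 1 -> 4096
```

Note that the faulting access is simply retried through the same lookup path once the page table has been updated, mirroring the "restart the instruction" step.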

2. Cache Memory Organization

• Definition: Cache memory is a small, high-speed memory located between the CPU and RAM. It stores frequently accessed data to improve processing speed.

The cache has a significantly shorter access time than main memory because it is implemented with faster, but more expensive, memory technology.

Temporal Locality: recently accessed data and instructions are likely to be accessed again soon.

Spatial Locality: data located near recently accessed addresses is likely to be accessed soon.

Computers use these concepts to optimize performance. They store frequently accessed data
and instructions in a special, fast memory called cache. This way, the CPU doesn't have to go
all the way to the main memory (slower) every time, speeding up the process.

• Levels:
- L1 Cache: Fastest, but small and inside the CPU.
- L2 Cache: Larger and slower than L1 but faster than RAM.
- L3 Cache: Shared among CPU cores, slower than L1 and L2.

Cache Memory on a Hit:

A cache hit occurs when the data requested by the CPU is already present in the cache.

Steps of Read Implementation on a Cache Hit:

1. CPU Requests Data: The CPU sends an address to the memory system.
2. Tag Match Check:
◦ The cache checks if the requested address's tag matches an existing entry in
the cache.
◦ If there is a match, it’s a cache hit.
3. Data Delivery:
◦ The cache retrieves the data from the appropriate block and delivers it to the
CPU.
◦ No access to the main memory is required, reducing latency.
4. Timing: Since cache is faster than main memory, access time is minimal.

Cache Memory on a Miss:

A cache miss occurs when the data requested by the CPU is not found in the cache.

Steps of Read Implementation on a Cache Miss:

1. CPU Requests Data: The CPU sends an address to the memory system.
2. Tag Match Check:
◦ The cache checks for the requested data.
◦ If the tag does not match any entry, it’s a cache miss.
3. Access Main Memory:
◦ The cache sends a request to main memory for the required data block.
◦ The main memory retrieves the block and sends it back to the cache.
4. Update the Cache:


◦ The fetched block is written into the cache; if the cache is full, a replacement policy (e.g., FIFO, LRU) selects the block to evict.
◦ The tag for the block is updated to match the new data.
5. Data Delivery:
◦ Once the block is stored in the cache, the data is delivered to the CPU.
6. Timing: A cache miss incurs more delay since main memory access is much slower
than cache access.
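The hit and miss paths above can be sketched with a tiny direct-mapped cache. This is an illustrative model only (the sizes, the index/tag split, and the fake main memory are assumptions), not real hardware behavior.

```python
# Tiny direct-mapped cache: the address is split into (tag, index), the tag is
# compared on each read, and a miss fetches the block from "main memory".
NUM_LINES = 4                                      # cache lines; index = addr mod 4
memory = {addr: addr * 10 for addr in range(16)}   # fake main memory
cache = [None] * NUM_LINES                         # each line holds (tag, data)

def read(addr):
    index, tag = addr % NUM_LINES, addr // NUM_LINES
    line = cache[index]
    if line is not None and line[0] == tag:   # tag match -> cache hit
        return line[1], "hit"                 # deliver data, no memory access
    data = memory[addr]                       # miss: access main memory
    cache[index] = (tag, data)                # update the line and its tag
    return data, "miss"

print(read(5))   # (50, 'miss') - first access, block fetched from memory
print(read(5))   # (50, 'hit')  - same block, tag still matches
```

A second access to a different address that maps to the same line (e.g., address 9 here) evicts the old block, which is why direct-mapped caches can miss even when the cache is mostly empty.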

3. Demand Paging:
Demand paging is a memory management technique that brings pages of a process into
physical memory only when they are needed. This helps optimize memory usage and allows
more processes to run concurrently.

Process:
1. The program is divided into pages.
2. Pages are loaded from disk to memory when accessed.
3. If the required page is not in memory, a page fault occurs.

How Does Demand Paging Work?

1. Page Fault:
◦ When a process tries to access a memory location that isn't currently in
physical memory, a page fault occurs.
2. Page Table Check:
◦ The operating system checks the page table to see if the page is present in
memory.
3. Page Replacement (if necessary):
◦ If the page isn't in memory, the OS selects a victim page to be replaced.
4. Page Fetch:
◦ The required page is fetched from secondary storage (like a hard disk) and
loaded into the freed frame.
5. Page Table Update:
◦ The page table is updated to reflect the new location of the page.
6. Process Resumption:
◦ The process is resumed, and the instruction that caused the page fault is re-
executed.

Key Advantages of Demand Paging:

• Efficient Memory Utilization: Only the necessary pages are loaded into memory,
reducing memory overhead.
• Increased Multiprogramming: More processes can be run concurrently as they don't
need to be entirely loaded into memory.
• Simplified Memory Management: The OS doesn't need to worry about allocating
contiguous blocks of memory for each process.

Key Disadvantages of Demand Paging:

• Performance Overhead: Page faults can significantly slow down program execution.
• Thrashing: If too many page faults occur, the system can spend more time swapping
pages than executing processes, leading to poor performance.

4. Page Replacement Algorithms:


Page replacement algorithms are crucial in operating systems that utilize virtual memory,
particularly in demand paging systems. These algorithms determine which page in physical
memory should be replaced when a new page needs to be loaded. The goal is to minimize the
number of page faults, which occur when a requested page is not present in memory.

Here are some common page replacement algorithms:

1. First-In-First-Out (FIFO):

• This algorithm replaces the oldest page in memory.


• It is simple to implement but often performs poorly, as it may replace frequently used
pages.

2. Optimal Page Replacement:

• This algorithm replaces the page that will not be used for the longest time in the
future.
• It is the best possible algorithm, but it is not feasible to implement in practice as it
requires knowledge of future page references.

3. Least Recently Used (LRU):

• This algorithm replaces the page that has not been used for the longest time.
• It is generally a good choice as it tends to replace pages that are less likely to be
needed in the near future.
• However, it requires additional overhead to track the usage history of each page.

4. Most Recently Used (MRU):

• This algorithm replaces the page that was most recently used.
• It is often not a good choice as it may replace pages that are likely to be needed again
soon.

5. Least Frequently Used (LFU):

• This algorithm replaces the page that has been used the least number of times.
• It can be effective in some cases, but it may not always be accurate, as a page that has
been used infrequently in the past may be needed frequently in the future.
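As an illustration (the fault counts below come from running this sketch, not from the notes), the algorithms above can be compared by counting page faults on one reference string with 3 frames:

```python
# Count page faults for FIFO, LRU, and Optimal on the same reference string.
REFS = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]

def count_faults(refs, frames, victim):
    mem, faults = [], 0                      # mem keeps pages in load order
    for i, page in enumerate(refs):
        if page in mem:
            continue                         # hit: nothing to do
        faults += 1
        if len(mem) == frames:
            mem.remove(victim(mem, refs, i)) # full: evict the chosen victim
        mem.append(page)
    return faults

# victim selectors: oldest loaded, least recently used, used farthest in future
fifo = lambda mem, refs, i: mem[0]
lru = lambda mem, refs, i: min(mem, key=lambda p: max(j for j in range(i) if refs[j] == p))
opt = lambda mem, refs, i: max(mem, key=lambda p: next(
    (j for j in range(i + 1, len(refs)) if refs[j] == p), len(refs)))

print(count_faults(REFS, 3, fifo))  # 15
print(count_faults(REFS, 3, lru))   # 12
print(count_faults(REFS, 3, opt))   # 9 (lower bound; needs future knowledge)
```

The ordering (Optimal fewest, then LRU, then FIFO) matches the descriptions above: Optimal is the unreachable lower bound, and LRU usually approximates it better than FIFO.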

5. Allocation of Frames:

Frame allocation in operating systems is the process of assigning physical memory frames to
processes. It's a crucial aspect of memory management, as it directly impacts the system's
performance and stability.

Here are some common frame allocation strategies:


1. Equal Allocation:

• Each process is allocated an equal number of frames, regardless of its size or priority.
• This approach is simple to implement but may not be efficient, as it can lead to
underutilization or overutilization of memory.

2. Proportional Allocation:

• Frames are allocated to processes based on their size or importance.


• Larger processes receive more frames, while smaller processes receive fewer.
• This approach is more efficient than equal allocation, as it better matches the memory
requirements of each process.
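A worked example of proportional allocation (the process sizes and frame count below are hypothetical): each process of size s_i gets roughly (s_i / S) * m of the m available frames, where S is the total size of all processes.

```python
def proportional_allocation(sizes, total_frames):
    """Split total_frames among processes in proportion to their sizes."""
    total = sum(sizes)
    alloc = [max(1, s * total_frames // total) for s in sizes]   # at least 1 frame each
    alloc[sizes.index(max(sizes))] += total_frames - sum(alloc)  # leftover from truncation
    return alloc

# a 10-page process and a 127-page process sharing 62 frames
print(proportional_allocation([10, 127], 62))   # [4, 58]
```

Handing the truncation leftover to the largest process is just one possible tie-breaking choice; a real allocator might distribute it differently.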

3. Priority-Based Allocation:

• Frames are allocated based on the priority of the processes.


• High-priority processes receive more frames than low-priority processes.
• This approach can be useful for real-time systems, where it's important to ensure that
critical processes have sufficient memory.

4. Demand Paging:

• Frames are allocated to processes on demand, as they need them.


• This approach is efficient, as it avoids allocating memory to processes that are not
currently active.
• However, it can lead to page faults, which can degrade performance.

5. Global and Local Replacement:

• Global Replacement: When a page fault occurs, the operating system can choose any
page in memory to replace, regardless of which process it belongs to.

• Local Replacement: The operating system can only choose a page to replace from
the process that caused the page fault.

6. Thrashing:

Thrashing is a condition in operating systems where excessive paging occurs, leading to a significant decrease in system performance. This happens when the system spends more time
swapping pages between main memory and secondary storage (like a hard disk) than it does
executing processes.

Causes of Thrashing:

• Insufficient Physical Memory: When there's not enough physical memory to
accommodate all the processes and their data, the system relies heavily on virtual
memory.
• High Degree of Multiprogramming: Running too many processes simultaneously
can lead to excessive page faults.
• Poor Page Replacement Algorithms: Inefficient page replacement algorithms can
exacerbate the problem.

Symptoms of Thrashing:

• Slow System Performance: Significant delays in response times.


• High CPU Utilization: The CPU spends most of its time swapping pages.
• High Disk I/O: Constant disk activity as pages are swapped in and out.
• Low Throughput: The system can't handle a large number of tasks efficiently.

How to Avoid Thrashing:

• Add More Physical Memory: Increasing RAM can reduce the need for virtual
memory.
• Reduce the Degree of Multiprogramming: Limit the number of processes running
simultaneously.
• Improve Page Replacement Algorithms: Use more efficient algorithms like LRU or
Clock.
• Process Swapping: Temporarily swap out inactive processes to free up memory.

7. Demand Segmentation:

Definition: A memory management technique similar to demand paging, but instead of fixed-size pages, segments of variable size are loaded into memory only when required.

Key Concepts:

• Segment: A logical division of a process's memory space.
• Segment Table: A table that maps logical segment numbers to physical memory
addresses.
• Segment Size: Segments can be of varying sizes, allowing for more flexible memory allocation.
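Address translation through a segment table can be sketched as follows (the base/limit values here are hypothetical): a logical address is a (segment number, offset) pair, and the offset is checked against the segment's limit before the base is added.

```python
# Sketch of segment-table translation with a limit (protection) check.
segment_table = {        # segment number -> (base address, limit)
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:                 # protection check: offset must fit
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset                # physical address = base + offset

print(translate(2, 53))   # 4353
```

The limit check is what gives segmentation its security advantage mentioned below: an out-of-range offset traps instead of silently reading another segment's memory.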

Advantages of Demand Segmentation:

• Efficient Memory Utilization: Segments can be allocated as needed, reducing memory fragmentation.
• Improved Security: Segments can be protected from unauthorized access.
• Modular Programming: Processes can be divided into smaller, more manageable
modules.
• Sharing of Code and Data: Different processes can share segments, reducing
memory overhead.

Disadvantages of Demand Segmentation:

• Increased Complexity: The implementation of segmentation is more complex than paging.
• Overhead: Maintaining segment tables and performing address translation can incur
performance overhead.

8. Role of the Operating System in Security:



Operating systems play a crucial role in maintaining system security. They provide a layer of
protection between the hardware and software, ensuring that resources are used securely and
data is protected from unauthorized access. Here are some key security functions provided by
operating systems:

1. User Authentication and Authorization:

• User Authentication: OS verifies the identity of users through mechanisms like
passwords, biometrics, or tokens.
• Access Control: OS enforces access controls to restrict user access to specific
resources based on their privileges.

2. Resource Protection:

• Memory Protection: OS prevents processes from accessing memory that is not allocated to them, preventing unauthorized access and malicious code execution.
• File System Protection: OS enforces file permissions to control who can read, write, or execute files.

3. Security Updates and Patches:

• Regular Updates: OS vendors release regular security patches to address vulnerabilities and protect systems from attacks.
• Patch Management: OS ensures timely installation of security patches to mitigate
risks.

4. Malware Protection:

• Antivirus and Anti-malware Software: OS can integrate with antivirus software to detect and remove malicious software.
• Firewall: OS can implement firewalls to filter network traffic and prevent
unauthorized access.

5. Network Security:

• Secure Network Protocols: OS can enforce secure network protocols like SSL/TLS
to protect data transmission.
• Network Access Control: OS can control network access to prevent unauthorized
users from connecting to the system.

6. System Integrity:

• File System Integrity Checks: OS can periodically check the integrity of file systems
to detect and prevent corruption.
• System Log Monitoring: OS can monitor system logs to identify unusual activity or
potential security breaches.

7. User Education and Awareness:

• Security Policies: OS can enforce security policies and guidelines to educate users
about best practices.

• Security Training: OS can provide training and awareness programs to help users
identify and avoid security threats.
