
Tutorial 2 – Memory & Process Management (Solutions)

Answer the following questions:

1) Describe how internal fragmentation and external fragmentation can happen in


Dynamic contiguous memory allocation systems.

Internal fragmentation does not occur in dynamic partitioning systems, because each process is allocated a partition of exactly the size it requests.

External fragmentation occurs when the free areas left between allocated partitions are too small to be useful. For example, if a process needs 49 KB and is allocated from a 50 KB free block, the remaining 1 KB hole is too small to satisfy most requests; such unusable holes constitute external fragmentation.

2) A program requests pages in the following order:


dcbadcedcbae
Construct a page trace analysis indicating page faults with an asterisk (*) using
Least Recently Used (LRU) where:
(i) Memory is divided into 3 page frames;
Page request   d  c  b  a  d  c  e  d  c  b  a  e
Page frame 1   d  d  d  a  a  a  e  e  e  b  b  b
Page frame 2   -  c  c  c  d  d  d  d  d  d  a  a
Page frame 3   -  -  b  b  b  c  c  c  c  c  c  e
Page fault     *  *  *  *  *  *  *  -  -  *  *  *
Time →
("-" marks an empty frame or a hit; 10 page faults in total)

(ii) Memory is divided into 4 page frames.


Page request   d  c  b  a  d  c  e  d  c  b  a  e
Page frame 1   d  d  d  d  d  d  d  d  d  d  d  e
Page frame 2   -  c  c  c  c  c  c  c  c  c  c  c
Page frame 3   -  -  b  b  b  b  e  e  e  e  a  a
Page frame 4   -  -  -  a  a  a  a  a  a  b  b  b
Page fault     *  *  *  *  -  -  *  -  -  *  *  *
Time →
("-" marks an empty frame or a hit; 8 page faults in total)
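The traces above can be reproduced programmatically. The following is a minimal Python sketch of LRU page replacement (the function name `lru_trace` is ours, not part of the tutorial); it reports the same fault counts as the two hand traces:

```python
from collections import OrderedDict

def lru_trace(requests, num_frames):
    """Simulate LRU replacement; return a list of booleans (True = page fault)."""
    frames = OrderedDict()  # keys = resident pages; order = recency, oldest first
    faults = []
    for page in requests:
        if page in frames:
            frames.move_to_end(page)        # hit: mark page as most recently used
            faults.append(False)
        else:
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
            faults.append(True)
    return faults

requests = list("dcbadcedcbae")
print(sum(lru_trace(requests, 3)))  # 10 faults with 3 frames
print(sum(lru_trace(requests, 4)))  # 8 faults with 4 frames
```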

3) Computers use paged memory:


(i) Briefly explain what is meant by paged memory;
– Divides each incoming job into pages of equal size
– Memory is divided into page frames of the same size
– Memory manager tasks prior to program execution:
  – determines the number of pages in the program;
  – locates enough empty page frames in main memory;
  – loads all program pages into the page frames.
– The job can be stored non-contiguously, so the system must keep track of the job’s pages
– The entire job must be loaded into memory
– Three tables are used to track the pages:
  – Job Table
  – Page Map Table (PMT)
  – Memory Map Table (MMT)
– Page size selection is crucial:
  – too small: generates very long PMTs;
  – too large: excessive internal fragmentation.
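The role of the Page Map Table can be illustrated with a short Python sketch of address translation; the page size, `translate` function, and example PMT below are illustrative assumptions, not part of the tutorial:

```python
PAGE_SIZE = 1024  # assumed page size in bytes (illustrative)

def translate(logical_address, page_map_table):
    """Split a logical address into (page, offset) and map it via the PMT."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_map_table[page_number]  # PMT maps page -> frame
    return frame_number * PAGE_SIZE + offset    # physical address

pmt = {0: 5, 1: 2, 2: 7}  # hypothetical PMT for a 3-page job
print(translate(2100, pmt))  # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220
```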

(ii) Give two advantages of paged memory.


– Advantages:
  – the job can be loaded into non-contiguous memory;
  – more efficient memory usage.
– Disadvantages:
  – increased overhead from address resolution;
  – internal fragmentation in the last page;
  – the entire job must be loaded into memory.

4) Explain the following terms:


(i) Thrashing;
Frequent swapping of pages in and out of memory in a virtual memory system, where pages are evicted just before they are needed again.
The processor spends most of its time swapping pages rather than executing user instructions.

(ii) Most recently used page;


The page in memory that was most recently used or accessed.
Most-recently-used (MRU) is a page-removal algorithm that removes from memory
the most-recently-used page.

(iii) The working set of pages in demand paging;


Working set: the set of pages resident in memory that a process can access directly
without incurring a page fault
– Improves the performance of demand paging schemes
– Relies on the concept of “locality of reference”
The system must decide:
– How many pages compose the working set
– The maximum number of pages the operating system will allow for a working set
The system continually monitors the number of page faults. A low fault rate indicates
the working set is larger than necessary, so it can be reduced; a high fault rate
indicates the working set is too small, so it should be enlarged.
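The monitoring policy described above can be sketched as a simple feedback rule; the thresholds, cap, and function name below are illustrative assumptions, not values from the tutorial:

```python
def adjust_working_set(ws_size, fault_rate, low=0.01, high=0.10, max_ws=64):
    """Resize the working-set allowance from the observed page-fault rate.

    `low`/`high` thresholds and the `max_ws` cap are illustrative values.
    """
    if fault_rate > high:
        return min(ws_size + 1, max_ws)  # many faults: set too small, grow it
    if fault_rate < low:
        return max(ws_size - 1, 1)       # very few faults: set too large, shrink it
    return ws_size                       # fault rate acceptable: leave unchanged
```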

(iv) Memory deallocation in dynamic partitioning memory;


Deallocation: freeing an allocated memory space.
In a dynamic-partition system, the deallocation algorithm tries to combine (coalesce)
adjacent free areas of memory whenever possible.
Three cases:
Case 1: When the block to be deallocated is adjacent to another free block
Case 2: When the block to be deallocated is between two free blocks
Case 3: When the block to be deallocated is isolated from other free blocks
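All three cases reduce to merging the freed block with any free neighbour it touches. A minimal Python sketch, assuming a sorted free list of `(start, size)` tuples (this representation is our illustration, not the tutorial's):

```python
def deallocate(free_list, start, size):
    """Insert the freed block (start, size) and coalesce adjacent free blocks.

    free_list: sorted list of (start, size) tuples describing free memory areas.
    """
    merged = []
    for s, sz in sorted(free_list + [(start, size)]):
        if merged and merged[-1][0] + merged[-1][1] == s:
            # block begins exactly where the previous free block ends: join them
            merged[-1] = (merged[-1][0], merged[-1][1] + sz)
        else:
            merged.append((s, sz))
    return merged
```

Case 1 merges with one neighbour, case 2 collapses both neighbours into a single block, and case 3 simply inserts an isolated entry.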

(v) Page modified bit.


A flag associated with a page, usually stored as a field in the page map table. It is
used in demand paging systems, where pages are swapped between main memory and
secondary storage. The modified (dirty) bit indicates whether the page’s contents
have been altered since it was loaded, and so determines whether the page must be
rewritten to secondary storage when it is swapped out.

5) With reference to the 5 state “process state diagram”:


(i) Draw and label the 5 state “process state diagram”.

(ii) Briefly describe all transitions within the diagram.


Transition from one state to another is initiated by either the Job Scheduler (JS)
or the Process Scheduler (PS):

- HOLD to READY: JS, using a predefined policy
- READY to RUNNING: PS, using some predefined algorithm
- RUNNING back to READY: PS, according to some predefined time limit or other criterion
- RUNNING to WAITING: PS, initiated by an instruction in the job
- WAITING to READY: PS, initiated by a signal from the I/O device manager that the
  I/O request has been satisfied and the job can continue
- RUNNING to FINISHED: PS or JS, if the job is finished or an error has occurred

6) Discuss how the following pairs of scheduling criteria conflict in certain settings:
(i) CPU utilization and response time
CPU utilization is increased if the overhead associated with context switching is
minimized. The context switching overheads could be lowered by performing
context switches infrequently. This could, however, result in increasing the
response time for processes.

(ii) Average turnaround time and maximum waiting time


Average turnaround time is minimized by executing the shortest tasks first. Such a
scheduling policy could, however, starve long-running tasks and thereby increase
their waiting time.

(iii) I/O device utilization and CPU utilization


CPU utilization is maximized by running long-running CPU-bound tasks without
performing context switches. I/O device utilization is maximized by scheduling I/O-
bound jobs as soon as they become ready to run, thereby incurring the overheads
of context switches.
