
Operating System Concepts
CS284
Chapter 4: Memory Management and IO Management
• Memory Management
• Memory management is the functionality of an operating system that handles or manages primary memory. It keeps track of each memory location, whether it is allocated to some process or free. It decides how much memory is to be allocated to a process and which process gets memory at what time, and it tracks whenever memory is freed or unallocated, updating the status accordingly.
• Memory management provides protection by using two registers, a base
register and a limit register. The base register holds the smallest legal
physical memory address and the limit register specifies the size of the
range.
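• A minimal sketch of this check, with hypothetical register values: every address the process generates is compared against the base and limit registers, and an out-of-range access would trap to the operating system.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical register contents: the legal range is [base, base + limit). */
    static const uint32_t base_reg  = 300040;
    static const uint32_t limit_reg = 120900;

    /* Returns 1 if the address is legal for this process, 0 otherwise. */
    static int address_is_legal(uint32_t addr)
    {
        return addr >= base_reg && addr < base_reg + limit_reg;
    }

    int main(void)
    {
        uint32_t probes[] = { 300040, 420939, 420940 };
        for (int i = 0; i < 3; i++)
            printf("address %u -> %s\n", (unsigned)probes[i],
                   address_is_legal(probes[i]) ? "ok" : "trap to OS");
        return 0;
    }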
• Binding of instructions and data to memory addresses can be done in the following ways:
•  Compile time -- When it is known at compile time where the process will reside, compile-time binding is used to generate absolute code.
•  Load time -- When it is not known at compile time where the process will reside in memory, the compiler generates relocatable code.
•  Execution time -- If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
• Dynamic Loading
• In dynamic loading, a routine of a program is not loaded until it is called by the program. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines, methods, or modules are loaded on request. Dynamic loading makes better use of memory space because unused routines are never loaded.
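• On POSIX systems, this kind of explicit dynamic loading can be done with the dlopen/dlsym interface. The sketch below assumes a hypothetical shared library libmathx.so that exports a cube() routine; the routine is not brought into memory until the program asks for it.

    #include <stdio.h>
    #include <dlfcn.h>      /* dlopen, dlsym, dlclose; link with -ldl */

    int main(void)
    {
        /* The library (and its routines) are loaded only at this point. */
        void *handle = dlopen("./libmathx.so", RTLD_LAZY);   /* hypothetical library */
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up the routine by name and call it. */
        double (*cube)(double) = (double (*)(double))dlsym(handle, "cube");
        if (cube)
            printf("cube(3.0) = %f\n", cube(3.0));

        dlclose(handle);
        return 0;
    }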
• Dynamic Linking
• Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. The operating system can link system-level libraries to a program. When the libraries are combined at load time, the linking is called static linking; when it is done at execution time, it is called dynamic linking.
• In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller.
• Logical versus Physical Address Space
• An address generated by the CPU is a logical address, whereas an address actually seen by the memory unit is a physical address. A logical address is also known as a virtual address.
• Virtual and physical addresses are the same under compile-time and load-time binding; they differ under execution-time binding.
• The set of all logical addresses generated by a program is referred to as a
logical address space. The set of all physical addresses corresponding to
these logical addresses is referred to as a physical address space.
• The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device.
• Swapping
• Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store and then brought back into memory for continued execution. The backing store is usually a hard disk drive or other secondary storage that is fast to access and large enough to accommodate copies of all memory images for all users.
• Memory Allocation
• Main memory usually has two partitions:
•  Low memory -- the operating system resides in this memory.
•  High memory -- user processes are held in high memory.

• The operating system uses the following memory allocation mechanisms.
1. Single-partition allocation: In this type of allocation, a relocation-register scheme is used to protect user processes from each other and from changes to operating-system code and data. The relocation register contains the value of the smallest physical address, and the limit register contains the range of logical addresses; each logical address must be less than the limit register.
2. Multiple-partition allocation: In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition contains only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process (a small sketch of this scheme follows the list).
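• A sketch of the multiple-partition scheme in item 2, with hypothetical partition sizes and process names: a process is loaded into the first free partition large enough to hold it, and the partition is released when the process terminates.

    #include <stdio.h>
    #include <string.h>

    #define NPART 4

    /* Hypothetical fixed partitions (sizes in KB); owner[i][0] == '\0' means free. */
    static int  part_size[NPART] = { 100, 200, 300, 600 };
    static char owner[NPART][16];

    /* Load a process into the first free partition large enough for it. */
    static int load(const char *name, int size_kb)
    {
        for (int i = 0; i < NPART; i++)
            if (owner[i][0] == '\0' && part_size[i] >= size_kb) {
                strcpy(owner[i], name);
                printf("%s (%d KB) -> partition %d (%d KB)\n",
                       name, size_kb, i, part_size[i]);
                return i;
            }
        printf("%s (%d KB) must wait in the input queue\n", name, size_kb);
        return -1;
    }

    static void terminate(int i) { owner[i][0] = '\0'; }   /* free the partition */

    int main(void)
    {
        int p = load("P1", 212);   /* goes to the 300 KB partition            */
        load("P2", 417);           /* goes to the 600 KB partition            */
        load("P3", 112);           /* goes to the 200 KB partition            */
        load("P4", 426);           /* waits: only the 100 KB partition is free */
        terminate(p);              /* P1 ends; its partition becomes available */
        load("P4", 426);           /* still waits: 300 KB is too small         */
        return 0;
    }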
• Fragmentation
• As processes are loaded and removed from memory, the free memory space
is broken into little pieces.
1. External fragmentation: Total free memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than requested; some portion of it is left unused and cannot be used by another process.

External fragmentation can be reduced by compaction, i.e. shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.
• Paging
• External fragmentation is avoided by using the paging technique. Paging is a technique in which physical memory is broken into fixed-size blocks called frames and logical memory into blocks of the same size called pages (the size is a power of 2, typically between 512 bytes and 8,192 bytes). An address generated by the CPU is divided into:
•  Page number (p) -- the page number is used as an index into a page table, which contains the base address of each page's frame in physical memory.
•  Page offset (d) -- the page offset is combined with that base address to define the physical memory address (a sketch of this split follows).
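• A sketch of the p/d split, assuming a hypothetical 4 KB page size and a tiny four-entry page table: the page number indexes the page table to find the frame, and the offset is appended to the frame base.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE   4096u            /* 2^12 bytes, so the offset is 12 bits */
    #define OFFSET_BITS 12

    /* Hypothetical page table: page_table[p] is the frame holding page p. */
    static const uint32_t page_table[4] = { 5, 6, 1, 2 };

    static uint32_t translate(uint32_t logical)
    {
        uint32_t p = logical >> OFFSET_BITS;          /* page number  */
        uint32_t d = logical & (PAGE_SIZE - 1);       /* page offset  */
        return (page_table[p] << OFFSET_BITS) | d;    /* frame base + offset */
    }

    int main(void)
    {
        uint32_t logical = 2 * PAGE_SIZE + 20;        /* page 2, offset 20       */
        printf("logical %u -> physical %u\n",         /* page 2 lives in frame 1 */
               (unsigned)logical, (unsigned)translate(logical));
        return 0;
    }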

• Segmentation
• Segmentation is a technique that breaks memory into logical pieces, where each piece represents a group of related information: for example, a code segment and data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging.
• Unlike pages, segments have varying sizes, which eliminates internal fragmentation. External fragmentation still exists, but to a lesser extent.
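• A sketch of segment-based translation with a hypothetical three-entry segment table: a logical address is a (segment, offset) pair, the offset is checked against that segment's limit, and the segment base is added to form the physical address.

    #include <stdio.h>
    #include <stdint.h>

    struct segment { uint32_t base, limit; };

    /* Hypothetical segment table: code, data, and stack segments of varying size. */
    static const struct segment seg_table[3] = {
        { 1400, 1000 },   /* segment 0: code  */
        { 6300,  400 },   /* segment 1: data  */
        { 4300, 1100 },   /* segment 2: stack */
    };

    /* Translate (s, d); an offset beyond the segment limit traps to the OS. */
    static int translate(uint32_t s, uint32_t d, uint32_t *phys)
    {
        if (d >= seg_table[s].limit)
            return -1;                       /* addressing error: trap */
        *phys = seg_table[s].base + d;
        return 0;
    }

    int main(void)
    {
        uint32_t phys;
        if (translate(2, 53, &phys) == 0)
            printf("(2, 53) -> %u\n", (unsigned)phys);   /* 4300 + 53 = 4353 */
        if (translate(0, 1222, &phys) != 0)
            printf("(0, 1222) -> trap: offset beyond limit\n");
        return 0;
    }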
• Virtual Memory
• Virtual memory is a technique that allows the execution of processes that are not completely present in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.
• This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available, because in many situations the entire program is not required to be loaded in main memory at once.
• Virtual memory is commonly implemented by demand paging; it can also be implemented in a segmentation system, using demand segmentation.
Demand Paging
• A demand-paging system is quite similar to a paging system with swapping. When we want to execute a process, we swap it into memory; rather than swapping the entire process into memory, however, we use a lazy swapper called a pager.
• In virtual memory systems, demand paging is a form of swapping in which pages of data are not copied from disk to memory until they are needed.
Page Replacement Algorithm
Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and no free page can be used for the allocation, either because free pages are not available or because the number of free pages is lower than required.
• Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference.
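As a small illustration, assuming a hypothetical address trace and a 100-byte page size, each address is reduced to its page number and consecutive references to the same page are collapsed to produce the reference string.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical address trace and page size. */
        const int addrs[] = { 123, 215, 600, 612, 102, 103, 104, 101, 611 };
        const int n = sizeof addrs / sizeof addrs[0];
        const int page_size = 100;

        int last = -1;
        printf("reference string:");
        for (int i = 0; i < n; i++) {
            int page = addrs[i] / page_size;
            if (page != last) {              /* collapse consecutive repeats */
                printf(" %d", page);
                last = page;
            }
        }
        printf("\n");                        /* prints: 1 2 6 1 6 */
        return 0;
    }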
• First In First Out (FIFO) algorithm
The oldest page in main memory is the one selected for replacement. FIFO is easy to implement: keep a list, add new pages at the head, and replace pages from the tail.
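A sketch of FIFO replacement for a hypothetical reference string and three frames: the frames are treated as a circular queue, so the slot filled longest ago is always the next to be replaced, and the total number of page faults is reported.

    #include <stdio.h>

    int main(void)
    {
        const int refs[]  = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 };
        const int n       = sizeof refs / sizeof refs[0];
        const int nframes = 3;

        int frames[3] = { -1, -1, -1 };    /* -1 means the frame is empty */
        int next = 0;                      /* index of the oldest frame   */
        int faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < nframes; f++)
                if (frames[f] == refs[i]) { hit = 1; break; }
            if (!hit) {                    /* page fault: evict the oldest */
                frames[next] = refs[i];
                next = (next + 1) % nframes;
                faults++;
            }
        }
        printf("FIFO: %d page faults for %d references\n", faults, n);
        return 0;
    }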
• Optimal Page algorithm
An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an algorithm exists and has been called OPT or MIN: replace the page that will not be used for the longest period of time. This requires knowing, at replacement time, when each page will next be used.
• Least Recently Used (LRU) algorithm
The page that has not been used for the longest time in main memory is the one selected for replacement. It is easy to implement: keep a list and replace the page whose last use lies furthest back in time.
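A sketch of LRU for the same kind of hypothetical reference string: each frame carries the time of its page's most recent use, and on a fault the frame with the smallest timestamp (the page unused for the longest time) is the victim.

    #include <stdio.h>

    int main(void)
    {
        const int refs[]  = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 };
        const int n       = sizeof refs / sizeof refs[0];
        const int nframes = 3;

        int frames[3]    = { -1, -1, -1 };
        int last_used[3] = {  0,  0,  0 };   /* time of most recent reference */
        int faults = 0;

        for (int t = 0; t < n; t++) {
            int slot = -1;
            for (int f = 0; f < nframes; f++)
                if (frames[f] == refs[t]) { slot = f; break; }   /* hit */
            if (slot < 0) {                                      /* page fault */
                faults++;
                slot = 0;                      /* pick the least recently used; */
                for (int f = 1; f < nframes; f++)   /* an empty frame is always ok */
                    if (frames[f] == -1 || last_used[f] < last_used[slot])
                        slot = f;
                frames[slot] = refs[t];
            }
            last_used[slot] = t + 1;                             /* record the use */
        }
        printf("LRU: %d page faults for %d references\n", faults, n);
        return 0;
    }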
• Page Buffering algorithm
To get a process started quickly, keep a pool of free frames. On a page fault, select a page to be replaced, write the new page into a frame from the free pool, mark the page table, and restart the process. Later, write the dirty replaced page out to disk and place its frame in the free pool.
• Least frequently Used (LFU) algorithm
The page with the smallest reference count is the one selected for replacement. This algorithm suffers in the situation where a page is used heavily during the initial phase of a process but is never used again.
• Most frequently Used (MFU) algorithm
The page with the largest reference count is the one selected for replacement.
• I/O Management
• Computers operate many kinds of devices. General types
include storage devices (disks, tapes), transmission devices
(network cards, modems), and human-interface devices (screen,
keyboard, mouse); other devices are more specialized. A device
communicates with a computer system by sending signals over
a cable or even through the air. The device communicates with
the machine via a connection point termed a port (for example,
a serial port). If one or more devices use a common set of wires,
the connection is called a bus. In other words, a bus is a set of
wires and a rigidly defined protocol that specifies a set of
messages that can be sent on the wires.
• Controller
• A controller is a collection of electronics that can operate a
port, a bus, or a device. A serial-port controller is an example of
a simple device controller. This is a single chip in the computer
that controls the signals on the wires of a serial port.
• A bus controller, such as a SCSI bus controller, is often
implemented as a separate circuit board (a host adapter) that
plugs into the computer. It contains a processor, microcode,
and some private memory to enable it to process the SCSI
protocol messages. Some devices have their own built-in controllers.
I/O port
An I/O port typically consists of four registers, called the status, control,
data-in, and data-out registers (a driver-side sketch of this layout follows the list):

1. Status register -- contains bits that can be read by the host. These bits indicate states such as whether the current command has completed, whether a byte is available to be read from the data-in register, and whether there has been a device error.
2. Control register -- can be written by the host to start a command or to change the mode of a device. For instance, a certain bit in the control register of a serial port chooses between full-duplex and half-duplex communication, another enables parity checking, a third bit sets the word length to 7 or 8 bits, and other bits select one of the speeds supported by the serial port.
3. Data-in register -- read by the host to get input.
4. Data-out register -- written by the host to send output.
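One way these four registers might appear to a device driver when the port is memory-mapped is sketched below; the struct layout, bit positions, and simulated values are all hypothetical and not taken from any real device.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical register layout as seen by a driver. 'volatile' forces
     * every access to really touch the register, since the device can
     * change it at any time behind the program's back. */
    struct io_port {
        volatile uint8_t status;     /* read by host: busy, data-ready, error bits */
        volatile uint8_t control;    /* written by host: start command, mode bits  */
        volatile uint8_t data_in;    /* read by host to get input                  */
        volatile uint8_t data_out;   /* written by host to send output             */
    };

    /* Hypothetical bit assignments for the status register. */
    enum { ST_BUSY = 1 << 0, ST_DATA_READY = 1 << 1, ST_ERROR = 1 << 2 };

    int main(void)
    {
        struct io_port fake = { .status = ST_DATA_READY };   /* simulated device */
        if (fake.status & ST_DATA_READY)                     /* a byte is waiting */
            printf("data-in register holds 0x%02x\n", fake.data_in);
        return 0;
    }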
• Polling
• Polling is a process by which a host waits for a controller response. It is a
looping process: the host reads the status register over and over until the busy bit
of the status register becomes clear. The controller sets the busy bit when
it is busy working on a command, and clears the busy bit when it is ready
to accept the next command. The host signals its wish via the command-ready
bit in the command register, setting the command-ready bit
when a command is available for the controller to execute.
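• A sketch of this busy-wait protocol; the registers and bit positions are hypothetical, and a small software stand-in plays the controller so the loop terminates without real hardware.

    #include <stdio.h>
    #include <stdint.h>

    /* Simulated controller registers; on real hardware these would be
     * memory-mapped or accessed via port I/O. */
    static volatile uint8_t status_reg  = 0x01;   /* bit 0 = busy bit          */
    static volatile uint8_t control_reg = 0x00;   /* bit 0 = command-ready bit */
    static volatile uint8_t data_out_reg;

    #define BUSY_BIT          0x01
    #define COMMAND_READY_BIT 0x01

    static int polls_remaining = 3;   /* the fake controller reports idle on the 3rd poll */

    static uint8_t read_status(void)
    {
        if (polls_remaining > 0 && --polls_remaining == 0)
            status_reg &= (uint8_t)~BUSY_BIT;     /* controller finished its command */
        return status_reg;
    }

    /* Programmed I/O with polling: busy-wait on the busy bit, then hand the
     * controller one byte of output and set the command-ready bit. */
    static void write_byte_polled(uint8_t byte)
    {
        int spins = 0;
        do {
            spins++;                              /* one poll of the status register */
        } while (read_status() & BUSY_BIT);
        data_out_reg = byte;                      /* place the byte in data-out      */
        control_reg |= COMMAND_READY_BIT;         /* signal: a command is ready      */
        printf("wrote 0x%02x after polling %d times\n", byte, spins);
    }

    int main(void)
    {
        write_byte_polled('A');
        return 0;
    }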
• I/O devices
I/O devices can be categorized as follows:

1. Human readable -- suitable for communicating with the computer user. Examples are printers, video display terminals, and keyboards.
2. Machine readable -- suitable for communicating with electronic equipment. Examples are disk and tape drives, sensors, controllers, and actuators.
3. Communication -- suitable for communicating with remote devices. Examples are digital line drivers and modems.
• Direct Memory Access (DMA)
• Many computers avoid burdening the main CPU with programmed I/O by
offloading some of this work to a special-purpose processor called a
Direct Memory Access (DMA) controller. A special control unit is used to
transfer a block of data directly between an external device and main
memory, without intervention by the processor. This approach is called
Direct Memory Access (DMA).
• Device Controllers
• A computer system contains many types of I/O devices and their respective controllers:
• network card
• graphics adapter
• disk controller
• DVD-ROM controller
• serial port
• USB
• sound card
I/O Software
Interrupts
• An interrupt is a signal from a device attached to a computer, or from a
program within the computer, that causes the main program operating the
computer to stop and figure out what to do next.
• The basic mechanism of interrupt enables the CPU to respond to an
asynchronous event, such as when a device controller becomes ready for
service. Most CPUs have two interrupt request lines.
• non-maskable interrupt - A non-maskable interrupt is a
hardware interrupt that cannot be ignored by standard interrupt-masking
techniques in the system. It is typically used to signal attention for
non-recoverable hardware errors.
• maskable interrupt - A maskable interrupt is an interrupt that can be
ignored or postponed by the user, the software, or the OS itself.
• Application I/O Interface
• The application I/O interface represents the structuring techniques and interfaces that
enable the operating system to treat I/O devices in a standard, uniform way.
The actual differences lie in kernel-level modules called device drivers, which are
custom-tailored to their devices but export one of the standard interfaces to
applications. The purpose of the device-driver layer is to hide the differences among
device controllers from the I/O subsystem of the kernel, such as the I/O system
calls. Following are the characteristics of I/O interfaces with respect to devices.
• Character-stream / block - A character-stream device transfers bytes one by one,
whereas a block device transfers a complete block of bytes as a unit.
• Sequential / random-access - A sequential device transfers data in a fixed order
determined by the device, whereas a random-access device can be instructed to seek to
any of the available data storage locations.
• Synchronous / asynchronous - A synchronous device performs data transfers with
known response times, whereas an asynchronous device shows irregular or
unpredictable response times.
• Sharable / dedicated - A sharable device can be used concurrently by several
processes or threads; a dedicated device cannot.
• Speed of operation - Device speeds may range from a few bytes per second to a few
gigabytes per second.
• Read-write, read only, or write only - Some devices perform both input and
output, but others support only one data direction (read only or write only).
• Clocks
• Clocks are also called timers. The clock software takes the form of a device driver,
even though a clock is neither a block device nor a character device. The exact
functions of the clock driver vary between operating systems, but they generally
include the following (a sketch of a tick handler follows the list):
• Maintaining the time of day
• Preventing processes from running too long
• Accounting for CPU usage
• Providing watchdog timers for parts of the system itself
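• A hypothetical sketch of a clock-tick handler covering the duties above; all names (ticks, current, quantum, the stub schedule and watchdog routines) are illustrative and not taken from any real kernel.

    #include <stdio.h>

    #define HZ 100                       /* assumed tick rate: 100 interrupts per second */

    struct process { int quantum; long cpu_ticks; };

    static long ticks;                   /* ticks since boot: maintains the time of day */
    static struct process proc = { .quantum = 5 };
    static struct process *current = &proc;

    static void schedule(void)           /* stub: pick another process to run */
    {
        printf("quantum expired -> reschedule\n");
        current->quantum = 5;
    }

    static void check_watchdogs(long now) { (void)now; /* fire expired watchdog timers here */ }

    /* Called on every timer interrupt. */
    static void clock_tick(void)
    {
        ticks++;                         /* 1. maintain the time of day                */
        current->cpu_ticks++;            /* 2. account for CPU usage                   */
        if (--current->quantum <= 0)     /* 3. prevent a process from running too long */
            schedule();
        check_watchdogs(ticks);          /* 4. watchdog timers for the system itself   */
    }

    int main(void)
    {
        for (int i = 0; i < 7; i++)      /* simulate seven timer interrupts */
            clock_tick();
        printf("%ld ticks = %.2f seconds of CPU time\n",
               current->cpu_ticks, (double)current->cpu_ticks / HZ);
        return 0;
    }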
• Kernel I/O Subsystem
• The kernel I/O subsystem is responsible for providing many services related to I/O.
Following are some of the services provided:
• Scheduling
• Buffering
• Caching
• Spooling and Device Reservation
• Error Handling
Device driver
• A device driver is a program or routine developed for an I/O device; it implements
the I/O operations or behaviors for a specific class of devices.
