Memory Management: Segments

The document discusses memory management techniques used by operating systems. It describes how a process's memory is divided into segments like text, data, bss, heap, and stack. Some segments are read-only while others are read-write. Memory is assigned to segments by the compiler, linker, and loader. Dynamic memory allocation uses operations like malloc and new to allocate memory during runtime. Memory allocation methods include stack allocation and heap allocation. Heap allocation uses free lists and algorithms like best-fit and first-fit to manage fragmented memory holes. Memory must be reclaimed to avoid issues like dangling pointers and memory leaks. Garbage collection automatically reclaims unused memory.

Uploaded by ulatjelek
© Attribution Non-Commercial (BY-NC)

Memory management

- Kernel memory management (next lecture)
- User-process memory management
  - internal to process
    - segments
    - static allocation
    - dynamic allocation
    - allocation methods
    - reclaiming: garbage collection
  - external
    - stages in address binding
    - static relocation
    - dynamic relocation
      - swapping
      - compaction
    - paging (next lecture but one)

Segments

- A process's memory is divided into logical segments (text, data, bss, heap, stack)
  - Some are read-only, others read-write
  - Some are known at compile time, others grow dynamically as the program runs
- Who assigns memory to segments?
  - Compiler and assembler generate an object file (containing code and data segments) from each source file
  - Linker combines all the object files for a program into a single executable object file, which is complete and self-sufficient
  - Loader (part of the OS) loads an executable object file into memory at location(s) determined by the operating system
  - Program (as it runs) uses new and malloc to dynamically allocate memory, and gets space on the stack during function calls

Internal dynamic memory allocation

- Static allocation is not sufficient for heaps and stacks
  - Need dynamic storage: the programmer may not know how much memory will be needed when the program runs
    - Use malloc or new to get what's necessary when it's necessary
    - For complex data structures (e.g., trees), allocate space for nodes on demand
  - The OS doesn't know in advance which procedures will be called (it would be wasteful to allocate space for every variable in every procedure in advance)
  - The OS must be able to handle recursive procedures
- Dynamic memory requires two fundamental operations:
  - Allocate dynamic storage
  - Free memory when it's no longer needed
- Methods vary for stack and heap

Memory allocation methods

- Stack (hierarchical)
  - Good when allocation and freeing are somewhat predictable
  - Typically used:
    - to pass parameters to procedures
    - to allocate space for local variables inside a procedure
    - for tree traversal, expression evaluation, parsing, etc.
  - Uses stack operations: push and pop
  - Keeps all free space together in a structured organization
  - Simple and efficient, but restricted
- Heap
  - Used when allocation and freeing are not predictable
  - Used for arbitrary list structures, complex data organizations, etc.
  - Use new (malloc) to allocate space, and delete (free) to release space
  - System memory consists of allocated areas and free areas (holes)

Internal fragmentation

- Heap allocation eventually ends up with many small holes (fragments), each too small to be useful
  - This is called fragmentation, and it leads to wasted memory
  - Fragmentation wasn't a problem with stack allocation, since we always add/delete from the top of the stack
  - Solution goal: reuse the space in the holes in such a way as to keep the number of holes small and their size large
- Compared to the stack, the heap is more general, but less efficient and more difficult to implement

Heap memory allocation

- Heap-based dynamic memory allocation techniques typically maintain a free list, which keeps track of all the holes
- Algorithms to manage the free list:
  - Best fit
    - Keep a linked list of free blocks
    - Search the whole list at each allocation
    - Choose the hole that comes closest to matching the request size; any unused space becomes a new (smaller) hole
    - When freeing memory, combine adjacent holes
    - Any way to do this efficiently?
  - First fit
    - Scan the list for the first hole that is large enough, and choose that hole
    - Otherwise, same as best fit
  - Worst fit
    - Scan for the largest hole (hoping that the remaining hole will be large enough to be useful)
  - Which is better? Why?
Reclaiming memory

- When can memory be freed?
  - Whenever the programmer says to
  - Any way to do so automatically?
    - Difficult if an item is shared (i.e., if there are multiple pointers to it)
- Potential problems in reclamation
  - Dangling pointers: have to make sure that everyone is finished using an item before freeing it
  - Memory leaks: must not "lose" memory by forgetting to free it when appropriate
- Implementing automatic reclamation:
  - Reference counts
    - Used by file systems
    - The OS keeps track of the number of outstanding pointers to each memory item
    - When the count goes to zero, free the memory

Garbage collection

- Used in LISP, Java
- Storage isn't explicitly freed by a free operation; the programmer just deletes the pointers and doesn't worry about what they point at
- When the OS needs more storage space, it recursively searches through all the active pointers and reclaims memory that no one is using
- Makes life easier for the application programmer, but it is difficult to program the garbage collector
- Often expensive: may use 20% of CPU time in systems that use it
  - May spend as much as 50% of time allocating and automatically freeing memory

Stages of address binding

- binding: assigning memory addresses to program objects (variables, pointers, etc.)
- physical address: the "hardware" address of a memory word (e.g., 0xb0000)
- relocatable address: a relative address (e.g., 14 bytes from the beginning of this module)
- absolute code: all addresses are physical
- relocatable code: all addresses are relative
- linking: preparing a compiled program to run
  - static: all library functions are included in the code
  - dynamic: library functions can be hooked up at load time
- loading: placing a program into memory
  - static: all at once
  - dynamic: on demand

Linking

- Functions of a linker:
  - Combine all files and libraries of a program
  - Regroup all the segments from each file together (one big data segment, etc.)
  - Adjust addresses to match the regrouping
  - The result is an executable program
- Contents of object files:
  - File header: size and starting address (in memory) of each segment
  - Segments for code and initialized data
  - Symbol table (symbols, addresses)
  - External symbols (symbols, locations)
  - Relocation information (symbols, locations)
  - Debugging information
  - For UNIX details, type man a.out

Why is linking difficult

- When the assembler assembles a file, it may find external references: symbols it doesn't know about (e.g., printf, scanf)
  - The compiler just puts in an address of 0 when producing the object code
  - The compiler records external symbols and their locations in a cross-reference list, and stores that list in the object file
  - The linker must resolve those external references as it links the files together
- The compiler doesn't know where the program will go in memory (if multiprogramming; it is always 0 for uniprogramming)
  - The compiler just assumes the program starts at 0
  - The compiler records relocation information (the locations of addresses to be adjusted later), and stores it in the object file
- For details, type man ld

Loading

- The loader loads the completed program into memory where it can be executed
  - Loads the code and initialized data segments into memory at the specified location
  - Leaves space for uninitialized data (bss)
  - Returns the value of the start address to the operating system
- Alternatives in loading
  - Absolute loader: loads the executable file at a fixed location
  - Relocatable loader: loads the program at an arbitrary memory location specified by the OS
    - The assembler and linker provide relocatable addresses; the loader translates them to physical addresses
    - When the program is loaded, the loader modifies all addresses by adding the real start location to those addresses
Static external relocation

- Put the OS in the highest memory address
- The compiler and linker assume each process starts at address 0
- At load time, the OS:
  - Allocates the process a segment of memory in which it completely fits
  - Adjusts the addresses in the process to reflect its assigned location in memory
- [Figure: main memory before and after loading, with the OS at the top of memory (addresses 2200-2400) and processes A and B relocated into segments below it]

Static external relocation (cont.)

- Problems with static relocation:
  - Safety: not satisfied; one process can access or corrupt another's memory, and can even corrupt the OS's memory
  - Processes can not change size (why?)
  - Processes can not move after beginning to run (why would they want to?)
  - Used by MS-DOS, Windows, Mac OS
- An alternative: dynamic relocation
  - The basic idea is to change each memory address dynamically as the process runs
  - This translation is done by hardware: between the CPU and the memory is a memory management unit (MMU), also called a translation unit, that converts virtual addresses to physical addresses
    - This translation happens for every memory reference the process makes

Dynamic relocation

- There are now two different views of the address space:
  - The physical address space, seen only by the OS, is as large as there is physical memory on the machine
  - The virtual (logical) address space, seen by the process, can be as large as the instruction set architecture allows
    - For now, we'll assume it's much smaller than the physical address space
  - Multiple processes share the physical memory, but each can see only its own virtual address space
- The OS and hardware must now manage two different addresses:
  - Virtual address: seen by the process
  - Physical address: the address in physical memory (seen by the OS)

Implementing dynamic relocation

- [Figure: the MMU adds the base (relocation) register to each virtual address, and compares the virtual address against the limit register; an out-of-range reference raises an address error exception and traps to the OS]
- The MMU protects the address space, and translates virtual addresses
  - The base register holds the lowest virtual address of the process; the limit register holds the highest
  - Translation: physical address = virtual address + base
  - Protection: if virtual address > limit, then trap to the OS with an address exception

Swapping

- If there isn't enough room in memory for all processes, some processes can be swapped out to make room
  - The OS swaps a process out by storing its complete state to disk
  - The OS can reclaim space used by ready or blocked processes
- When a process becomes active again, the OS must swap it back into memory
  - With static relocation, the process must be placed back in the same location
  - With dynamic relocation, the OS can place the process in any free partition (and must update the relocation and limit registers)
- Swapping and dynamic relocation make it easy to increase the size of a process and to compact memory (although slow!)
Compaction

- Dynamic relocation leads to external fragmentation: unused space is left between processes
- Compaction overcomes this problem by moving the processes so that memory allocation is contiguous
- In the previous example, we can compact the processes to free up 256K of contiguous memory space (enough to load an additional process) by moving a total of 416K of memory
- How is it done?
- Can compaction be used with static relocation?
- Is compaction efficient?

Evaluation of dynamic relocation

- Advantages:
  - The OS can easily move a process
  - The OS can allow processes to grow
  - Hardware changes are minimal, but fairly fast and efficient
- Disadvantages:
  - Compared to static relocation, memory addressing is slower due to translation
  - Memory allocation is complex (partitions, holes, fragmentation, etc.)
  - If a process grows, the OS may have to move it
  - A process is limited to the physical memory size
  - It is not possible to share code or data between processes

Memory management techniques comparison

- [Figure from Operating Systems, 3rd edition, Stallings, Prentice Hall, 1998]
