Memory Allocation

The document discusses different methods of allocating memory for variables during program execution: static allocation, which assigns memory at compile time, and dynamic allocation, which assigns memory at runtime using either a stack or a heap. It describes the key differences between static and dynamic allocation and covers approaches for allocating memory dynamically from the heap, such as first fit and best fit.


Dynamic Memory Allocation

Memory Management

 Memory areas: in languages like C or Java, the memory
used by a program can be allocated from three different
areas:
 Static
 Stack
 Heap
Storage / Memory Allocation
 Memory allocation is fundamental in programming: it provides
storage for the values you assign to variables. Allocation is done
either before or during program execution, reserving memory for
the variables a programmer declares, via the compiler.
 There are two memory management schemes for the
storage allocation of data:
 I) Static Storage Management
 II) Dynamic Storage Management
Static & Dynamic Storage Management
 Static storage: storage allocated at the time of
compilation. It can neither be extended nor returned to the
memory pool of free storage.
 Dynamic storage: allows the user to allocate or de-allocate
memory as required during the execution of a program. It is
also suitable for multiprogramming environments.
Static Storage
 With static storage, the location of every variable is fixed,
allocated, and known at compile time. In principle, every
variable has a fixed, constant machine address.
 The word binding means an association of a property with
an entity in a programming language. When all bindings of
storage locations to program names occur at compile time,
this is known as early binding. The bindings are fixed and
unchanged throughout run time.
Static Vs. Dynamic Storage Allocation
 The major differences between static and dynamic
memory allocation are:

Static Memory Allocation                    | Dynamic Memory Allocation
Variables are allocated permanently         | Variables are allocated only while their program unit is active
Allocation is done before program execution | Allocation is done during program execution
Implemented with static data structures     | Implemented with a data structure called the heap
Less efficient                              | More efficient
No memory reusability                       | Memory is reusable and can be freed when not required
Uses early binding                          | Uses late binding
Dynamic Storage Allocation
 Two general approaches to dynamic storage allocation:
 Stack Allocation
 Heap Allocation
 Stack allocation (hierarchical): restricted, but simple and
efficient. A stack-based organization keeps all the free
space together in one place.
 Heap allocation: more general, but less efficient, more
difficult to implement. Heap allocation must be used when
allocation and release are unpredictable.

Stack Allocation
 Stack-allocated memory
 A run-time stack is a simple and efficient way to provide
the storage needed for function calls.
 When a function is called, memory is allocated for all of its
parameters and local variables.
 Each active function call has memory on the stack, with the
current function call on top.
 When a function call terminates, the memory is
deallocated (“freed up”).
 Example: main() calls f(), f() calls g(), and g() recursively
calls g(); the stack then holds, from top to bottom:
g(), g(), f(), main().
 When a function is called, all storage needed for the
function is allocated on the stack in a section called the
activation record.
 The storage includes room for the return address, the
return value (possibly a pointer), actual parameters
passed to the function, and any variables declared within
the function. The activation record also holds locations for
temporary values, and pointers to other parts of the
stack (to facilitate access and deallocation).
 On return from the function, the storage on the stack is
deallocated.
Heap Allocation
 Heap-allocated memory
 Memory consists of allocated areas and free areas (or holes).
 The most flexible allocation scheme is heap-based allocation.
Here, storage can be allocated and de-allocated dynamically at
arbitrary times during program execution. This is more
expensive than either static or stack-based allocation.
 Heap memory is allocated in a more complex way than stack
memory.
Note: void * denotes a generic pointer type.

Allocating new heap memory

 void *malloc(size_t size);
 Allocate a block of size bytes and return a pointer to the
block (NULL if unable to allocate the block).

 void *calloc(size_t num_elements, size_t element_size);
 Allocate a block of num_elements * element_size bytes,
initialize every byte to zero, and return a pointer to the
block (NULL if unable to allocate the block).
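As a sketch of the difference between the two calls, the helpers below allocate an int array either way (the function names are illustrative, not part of any standard API):

```c
#include <stdlib.h>

/* Allocate an int array of n elements with malloc.
 * The contents are indeterminate; the caller must check for NULL. */
int *make_array_malloc(size_t n) {
    return malloc(n * sizeof(int));
}

/* Allocate an int array of n elements with calloc.
 * Same total size, but every byte is initialized to zero. */
int *make_array_calloc(size_t n) {
    return calloc(n, sizeof(int));
}
```

calloc is convenient when the code relies on zero-initialized memory; malloc avoids the cost of zeroing when the caller will overwrite the block anyway.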

Reallocating heap memory
 void *realloc(void *ptr, size_t new_size);
 Given a previously allocated block starting at ptr,
change the block size to new_size and
return a pointer to the resized block.
 If the block size is increased, the contents of the old block may be
copied to a completely different region.
 In this case, the pointer returned will be different from the ptr
argument, and ptr will no longer point to a valid memory region.
 If ptr is NULL, realloc is identical to malloc.
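A common pitfall with realloc is losing the only pointer to the old block when the call fails. A hedged sketch of the safe idiom (grow_array is an illustrative name, not a standard function):

```c
#include <stdlib.h>

/* Grow an int array to new_n elements. On failure, realloc leaves the
 * old block intact, so we free it ourselves to avoid a leak and return
 * NULL; on success the block may have moved, so the caller must use
 * the returned pointer, not the old one. */
int *grow_array(int *p, size_t new_n) {
    int *q = realloc(p, new_n * sizeof(int));
    if (q == NULL) {
        free(p);
        return NULL;
    }
    return q;
}
```

Writing `p = realloc(p, ...)` directly would leak the old block if realloc returned NULL, which is why the result is captured in a second pointer first.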

Deallocating heap memory
 void free(void *pointer);
 Given a pointer to previously allocated memory,
put the region back in the heap of unallocated memory.

 Note: it is easy to forget to free memory when it is no longer
needed...
 especially if you’re used to a language with “garbage collection”,
like Java
 This is the source of the notorious “memory leak” problem
 Difficult to trace – the program will run fine for some time,
until suddenly there is no more memory!
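A related defensive idiom is to clear the pointer as the block is freed, so a later accidental use fails loudly as a NULL access instead of silently reading freed memory (free_and_null is an illustrative helper, not a standard function):

```c
#include <stdlib.h>

/* Free the block *pp refers to and set the caller's pointer to NULL,
 * so any later dereference is an obvious NULL access rather than a
 * use-after-free. Calling free(NULL) later is also harmless. */
void free_and_null(void **pp) {
    if (pp != NULL) {
        free(*pp);
        *pp = NULL;
    }
}
```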

Checking for successful allocation
 An implementation of alloc that terminates the program
when allocation fails:

#include <stdlib.h>
#undef malloc

void *alloc(size_t size) {
    void *new_mem = malloc(size);
    if (new_mem == NULL)
        exit(EXIT_FAILURE);  /* out of memory: give up */
    return new_mem;
}

 A nice solution – as long as “terminate the program” is
always the right response.
Storage Reclamation in Heap
 How do we know when dynamically-allocated memory
can be freed?
 It is easy when a chunk is only used in one place.
 Reclamation is hard when information is shared: it can't be
recycled until all of the users are finished with it.
 Usage is indicated by the presence of pointers to the data.
Without a pointer, the data can no longer be accessed (or
even found).
 Two problems in reclamation:
 Dangling pointers: storage is freed but referred to later;
a pointer still points to de-allocated memory.
 Memory leaks: storage is not freed even though it is no
longer needed.
Fixed Partitioning
 Partition main memory into a set of non-
overlapping memory regions called partitions.
 Fixed partitions are of equal size.
 It is the simplest storage maintenance method.
 Leftover space in a partition, after a program is
assigned to it, is called internal fragmentation.
Variable Partitioning
 The degree of multiprogramming is limited by the number of partitions.
 Variable partition sizes improve efficiency (each partition is sized to a given
process’ needs).
 Hole: a block of available memory; holes of various sizes are scattered
throughout memory.
 When a process arrives, it is allocated memory from a hole large enough to
accommodate it.
 When a process exits, it frees its partition; adjacent free partitions are combined.
 The operating system maintains information about:
a) allocated partitions b) free partitions (holes)
Fixed Vs. Variable Sized Block Partition
 As far as memory utilization is concerned, the variable-
sized block storage policy is preferred to fixed-sized
block storage.
 Initially, when there is no program in memory, the
entire memory is one free block.
Memory Management
 For the purpose of dynamic storage allocation, we view
memory as a single array broken into a series of variable-
size blocks, where some blocks are free and some are
reserved (already allocated). The free blocks are linked
together to form a free list used for servicing future
memory requests.
 When a memory request is received by the memory
manager, some block on the free list must be found that is
large enough to service the request. If no such block is
found, the memory manager must resort to a failure
policy such as garbage collection.
Memory Allocation Policies for Variable
Sized Blocks

 First fit chooses the first block in the free list big enough to
satisfy the request, and splits it.
 Next fit is like first fit, except that the search for a fitting
block starts where the last search stopped, instead of at the
beginning of the free list.
 Best fit chooses the smallest block at least as large as the
requested size.
 Worst fit chooses the biggest block, with the aim of avoiding the
creation of too many small fragments – but it doesn’t work well
in practice.
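The first fit and best fit searches above can be sketched over a simple singly linked free list (the Block type and its field names are illustrative, not a real allocator's data structure):

```c
#include <stddef.h>

/* A minimal free-list node: the size of the free block and a link
 * to the next free block. */
typedef struct Block {
    size_t size;
    struct Block *next;
} Block;

/* First fit: return the first block large enough, or NULL. */
Block *first_fit(Block *free_list, size_t request) {
    for (Block *b = free_list; b != NULL; b = b->next)
        if (b->size >= request)
            return b;
    return NULL;
}

/* Best fit: scan the whole list for the smallest adequate block. */
Block *best_fit(Block *free_list, size_t request) {
    Block *best = NULL;
    for (Block *b = free_list; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}
```

Note the cost difference visible in the code: first fit can stop at the first adequate block, while best fit must always traverse the entire list.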
First Fit

 The first fit approach allocates the first free partition (hole)
large enough to accommodate the process. The search
finishes after finding the first suitable free partition.
 Advantage
 The fastest algorithm, because it searches as little as possible.
 Disadvantage
 The remaining unused memory area left after allocation is
wasted if it is too small, so later requests for larger amounts
of memory may not be satisfiable.
Best Fit
 Best fit allocates the smallest free partition that meets the
requirement of the requesting process. The algorithm
searches the entire list of free partitions and selects the
smallest hole that is adequate, i.e. the hole closest to the
actual size needed by the process.
 Advantage
 Memory utilization is much better than first fit, because it
allocates the smallest adequate free partition.
 Disadvantage
 It is slower, and it tends to produce leftover free blocks
that may be too small for subsequent allocations.
Worst fit

 The worst fit approach locates the largest available free
portion, so that the portion left over will be big enough to be
useful. It is the reverse of best fit.
 Advantage
 Reduces the rate at which small gaps are produced.
 Disadvantage
 If a process requiring a large amount of memory arrives at a
later stage, it cannot be accommodated, because the largest
hole has already been split and occupied.
Next fit
 Next fit is a modified version of first fit. It begins like first
fit, finding a free partition. When called the next time, it
starts searching from where it left off, not from the beginning.
Disadvantages of Storage Allocation
Strategies

 They encounter the problem of memory fragmentation,
which never occurs in the case of fixed-size requests.
 Another problem with dynamic storage allocation
systems is that the areas of free memory (memory holes) are
interspersed with the actively used partitions throughout
memory.
Memory Fragmentation

 Fragmentation is encountered when the free
memory space is broken into little pieces as
processes are loaded into and removed from memory.
 Fragmentation is of two types:
 External fragmentation
 Internal fragmentation

 Fragmentation is a major problem in dynamic memory
allocation.
 Memory fragmentation:
 External fragmentation: memory wasted outside
allocated blocks.
 Internal fragmentation: memory wasted inside an
allocated block; results when the memory allocated is
larger than the memory requested.
 The first fit algorithm is often considered the best overall
because:
 It takes less time compared to the other algorithms.
 It produces bigger holes that can be used to load other
processes later on.
 It is the easiest to implement.

 In terms of speed: Next fit > First fit > Best fit, Worst fit.
 Next fit is the preferred one.
Storage Compaction/Defragmentation

 Storage compaction is another technique for reclaiming
free storage.
 It works by relocating some or all allocated portions to one
end of the memory, thus combining the holes into one
large free block.
 There are two types of compaction:
 Incremental
 Selective

 Incremental compaction: all free blocks are moved to
one end of the memory to make one large hole.
 Selective compaction: searches for a minimum number
of free blocks whose movement yields a larger free
hole. This hole may not be at the end; it may be
anywhere.
Garbage Collection
 Garbage collection: storage isn't freed explicitly (using a free
operation) but rather implicitly: pointers are simply deleted.
When the system needs storage, it searches through all of
the pointers (it must be able to find them all!) and collects
things that aren't used.
 If structures are circular, then this is the only way to
reclaim space.
 Garbage collectors typically compact memory, moving
objects to coalesce all free space.
 Garbage collection is often expensive: 10–20% of all CPU
time in systems that use it.
Garbage Collection
 Basic idea
 keep track of what memory is referenced and, when it is
no longer accessible, reclaim the memory
 Example: a linked list

(Figure: a linked list with head and tail pointers; nodes obj1,
obj2, and obj3 are chained through their next fields.)

 Assume the programmer does the following:
 obj1.next = obj2.next;

(Figure: after the assignment, obj1's next field points directly
to obj3, bypassing obj2.)
Example
 Now there is no way for the programmer to reference
obj2
 it’s garbage
 In a system without garbage collection this is called a
memory leak
 the location can’t be used but can’t be reallocated
 it wastes memory and can eventually crash a program
 In a system with garbage collection this chunk will be
found and reclaimed
Mark-and-Sweep
 Basic idea
 go through all memory and mark every chunk that is
referenced
 make a second pass through memory and remove all
chunks not marked

(Figure: a heap holding chunks 0–3; live pointers reference
chunks 0, 1, and 3, while chunk 2 is unreferenced.)

 Mark chunks 0, 1, and 3 as marked.
 Place chunk 2 on the free list (turn it into a hole).
Mark and Sweep Algorithm
 Garbage detection
 Depth-first search to mark live data cells
(cells in the heap reachable from variables on the
run-time stack).
 Garbage reclamation
 Sweep through the entire memory, putting
unmarked nodes on the free list. The sweep also
unmarks the marked nodes.
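The two phases can be sketched on a toy heap of fixed cells, each with at most two outgoing references (all types and names here are illustrative simplifications, not a real collector):

```c
#include <stddef.h>

/* A toy heap cell: a mark bit, up to two outgoing references, and a
 * "live" flag standing in for membership in the allocated set. */
typedef struct Obj {
    int marked;
    struct Obj *ref[2];
    int live;
} Obj;

/* Mark phase: depth-first traversal from a root, marking every
 * reachable cell exactly once. */
void mark(Obj *o) {
    if (o == NULL || o->marked)
        return;
    o->marked = 1;
    mark(o->ref[0]);
    mark(o->ref[1]);
}

/* Sweep phase: reclaim unmarked cells (here, clear their live flag,
 * standing in for "put on the free list") and unmark survivors so
 * the next collection cycle starts clean. */
void sweep(Obj *heap, int n) {
    for (int i = 0; i < n; i++) {
        if (heap[i].marked)
            heap[i].marked = 0;
        else
            heap[i].live = 0;
    }
}
```

The sweep visiting every cell, reachable or not, is what makes the running time linear in the heap size, as noted below.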

Issues in Mark & Sweep
 Every object must have a mark bit, and a little extra
memory must be kept available for performing the
collection. The running time of mark-sweep is linear in
the heap size, and memory becomes fragmented.
Reference counting
 Reference counts: keep a count of the number of outstanding
pointers to each chunk of memory. When this count becomes
zero, free the memory. Examples: Smalltalk, file descriptors
in Unix. Works fine for hierarchical structures.
 Issues: reference counting adds overhead to every
pointer operation. It is, however, inherently incremental.
It cannot collect cyclic structures.
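A minimal sketch of the scheme (rc_new, rc_retain, and rc_release are illustrative names, not a real library API):

```c
#include <stdlib.h>

/* A reference-counted object: the count plus some payload. */
typedef struct {
    int refcount;
    int data;
} RcObj;

/* Create an object with one outstanding reference. */
RcObj *rc_new(int data) {
    RcObj *o = malloc(sizeof *o);
    if (o) {
        o->refcount = 1;
        o->data = data;
    }
    return o;
}

/* Every new pointer to the object increments the count... */
RcObj *rc_retain(RcObj *o) {
    if (o)
        o->refcount++;
    return o;
}

/* ...and every dropped pointer decrements it; at zero, free. */
void rc_release(RcObj *o) {
    if (o && --o->refcount == 0)
        free(o);
}
```

The per-pointer overhead mentioned above is visible here: every copy or drop of a pointer requires a counter update, but reclamation happens immediately and incrementally, with no separate collection pause.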

Buddy System
 One way of dealing with internal fragmentation is to
allow a variety of block sizes.
 Blocks of each size can be allocated and deallocated by
the use of a fixed-size block allocate and deallocate
mechanism, and
 if a block of one size is not available, a larger block can
be allocated and a block of the desired size split off of it.
 When this is done, all blocks resulting from splitting a
particular block are called buddies, and the block from
which they were split is called their parent.
 The resulting storage allocation mechanism is said to use
a buddy system. All buddy systems maintain an array of
lists of free blocks, where all blocks in each list are the
same size, and the array is indexed by a value computed
from the size.
 The oldest buddy system, the binary buddy system has
block sizes that are powers of two. Therefore, when a
block is split, it is split exactly in half, and when blocks are
combined, two equal size blocks are combined to make a
block twice as big.
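One convenient consequence of power-of-two block sizes is that a block's buddy can be computed directly, with no search: for a block of size 2^k at offset off from the start of the pool, the buddy lies at off with bit k flipped. A sketch (buddy_of is an illustrative helper, and offsets are assumed relative to the pool base):

```c
#include <stddef.h>

/* In a binary buddy system, a block of size 2^k bytes at offset `off`
 * (measured from the start of the pool) has its buddy at the offset
 * obtained by flipping bit k. The relation is symmetric: applying it
 * twice returns the original offset. */
size_t buddy_of(size_t off, unsigned k) {
    return off ^ ((size_t)1 << k);
}
```

This is why coalescing is cheap in a binary buddy system: on free, the allocator can locate the buddy in constant time and merge the pair if the buddy is also free.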

Limitations of the Buddy System
 Buddy systems do not completely eliminate internal
fragmentation, and they have external fragmentation
problems that were not present when fixed-size blocks
were used.
BOUNDARY TAG METHOD

 Boundary tags are data structures on the boundary
between blocks in the heap from which memory is
allocated.
Boundary Tag System
 To remove an arbitrary block from the free list (to
combine it with a newly freed block) without
traversing the whole list, the free list should be a doubly
linked circular list.
 Thus each and every free block must contain two pointers,
next and prev, to the next and previous free blocks on the
free list. (This is required when combining a newly
freed block with a free block that immediately precedes
it in memory.)
Boundary Tags

(Figure: format of allocated and free blocks. Each block begins
with a one-word header holding the total block size and an
allocated bit (a = 1: allocated block, a = 0: free block), followed
by the application memory and any padding, and ends with a
matching boundary tag (footer) holding the same size and
allocated bit (allocated blocks only). The example heap shows
adjacent blocks of sizes 4, 4, 4, 4, 6, 6, 4, and 4 words.)
 Thus the front of a free block should be accessible
from its rear. One way to do this is to introduce a size
field at a fixed offset from the last location of each free
block. This field holds the same value as the size field at
the front of the block. The figure above shows the structure
of free and allocated blocks under this method, which is
called the boundary tag method.
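A hedged sketch of that header/footer layout (the Tag type and helper names are illustrative; real allocators pack the allocated bit into the low-order bits of the size in much this way, relying on block sizes being multiples of the alignment):

```c
#include <stddef.h>

/* One boundary tag: the block size with the allocated flag packed
 * into the low bit. Sizes are assumed to be multiples of 8, so the
 * low three bits of the size are always zero and free for flags. */
typedef struct {
    size_t size_and_alloc;
} Tag;

size_t tag_size(const Tag *t)  { return t->size_and_alloc & ~(size_t)7; }
int    tag_alloc(const Tag *t) { return (int)(t->size_and_alloc & 1); }

/* Write matching header and footer tags for a block of `size` bytes
 * starting at `block`. The footer sits in the last word, so the
 * front of the block is recoverable from its rear. */
void set_tags(unsigned char *block, size_t size, int alloc) {
    Tag t;
    t.size_and_alloc = size | (size_t)(alloc ? 1 : 0);
    *(Tag *)block = t;                         /* header at the front */
    *(Tag *)(block + size - sizeof(Tag)) = t;  /* footer at the rear  */
}
```

With the footer in place, an allocator freeing a block can read the tag just below it to find the size and status of the block that precedes it in memory, enabling constant-time coalescing.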

