Prepared by: Mohamed AbdAllah
Embedded C-Programming
Lecture 3
1
Agenda
 Memory types (RAM, ROM, EEPROM, etc).
 Program memory segments.
 Static vs. Dynamic memory allocation.
 Static vs. Dynamic linking.
 Function call with respect to stack, i/p, o/p and i/o parameters and return
value.
 Functions types (Synch. vs. ASynch, Reentrant vs. non-Reentrant, Recursive,
Inline function vs. function-like macro).
 Task 3.
2
Agenda
3
Memory types
4
 ROM
Read-Only Memory (ROM) is a very cheap memory that we can program once; after
that, we can never change the data on it. This is very limiting because we
can't upgrade the information on the ROM chip (be it program code or data),
we can't fix it if there is an error, etc.
Because of this, such chips are usually called "Programmable Read-Only
Memory" (PROM): we can program them once, but then we can't change them at all.
5
Memory types
 EPROM
In contrast to PROM is EPROM ("Erasable Programmable Read-Only
Memory"). EPROM chips will have a little window, made of either glass or
quartz that can be used to erase the memory on the chip.
To erase an EPROM, the window needs to be uncovered (they usually have
some sort of guard or cover), and the EPROM needs to be exposed to UV
radiation to erase the memory, and allow it to be reprogrammed.
6
Memory types
 EEPROM
A step up from EPROM is EEPROM ("Electrically Erasable Programmable
Read-Only Memory"). EEPROM can be erased by exposing it to an
electrical charge. This means that EEPROM can be erased in circuit (as
opposed to EPROM, which needs to be removed from the circuit, and
exposed to UV). Depending on the device, we can erase one byte at a time, or
an appropriate electrical charge erases the entire chip at once, in which case
we can't erase just certain data items.
Many modern microcontrollers have an EEPROM section on-board, which
can be used to permanently store system parameters or calibration values.
These are often referred to as non-volatile memory (NVM). They can be
accessed - read and write - as single bytes or blocks of bytes. EEPROM
allows only a limited number of write cycles, usually several tens of thousands.
Write access to on-board NVM tends to be considerably slower than RAM.
Embedded software must take this into account and "queue" write
requests to be executed in the background.
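As a rough illustration of such queuing, the sketch below buffers write requests in a small ring buffer and drains at most one slow write per call from a background task. All names here (eeprom_write_byte(), EEPROM_QUEUE_SIZE, etc.) are hypothetical, not a specific vendor API; a real implementation would also protect head/tail/count against concurrent access from interrupts.

#include <stdint.h>
#include <stdbool.h>

#define EEPROM_QUEUE_SIZE 16U   /* hypothetical queue depth */

typedef struct
{
    uint16_t address;   /* EEPROM cell to write */
    uint8_t  value;     /* value to be written  */
} eeprom_request_t;

static eeprom_request_t queue[EEPROM_QUEUE_SIZE];
static uint8_t head = 0U, tail = 0U, count = 0U;

/* Called from application code: just enqueue, do not block. */
bool eeprom_request_write(uint16_t address, uint8_t value)
{
    if (count == EEPROM_QUEUE_SIZE)
    {
        return false;                              /* queue full, caller may retry */
    }
    queue[tail].address = address;
    queue[tail].value   = value;
    tail = (uint8_t)((tail + 1U) % EEPROM_QUEUE_SIZE);
    count++;
    return true;
}

/* Hypothetical slow driver call, assumed to exist elsewhere. */
extern void eeprom_write_byte(uint16_t address, uint8_t value);

/* Called periodically from the background (idle) task:
   performs at most one slow EEPROM write per call.      */
void eeprom_background_task(void)
{
    if (count > 0U)
    {
        eeprom_write_byte(queue[head].address, queue[head].value);
        head = (uint8_t)((head + 1U) % EEPROM_QUEUE_SIZE);
        count--;
    }
}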
7
Memory types
 RAM
Random Access Memory (RAM) is a temporary, volatile memory that
requires a persistent electric current to maintain information. As such, a
RAM chip will not store data when we turn the power OFF.
RAM is more expensive than ROM; embedded systems often have many Kbytes of
ROM (sometimes Megabytes or more), but frequently have less than 100 bytes of
RAM available for use in program flow.
8
Memory types
 Flash memory
Flash memory is a combination of the best parts of RAM and ROM. Like
ROM, Flash memory can hold data when the power is turned off. Like
RAM, Flash can be reprogrammed electrically, in whole or in part, at any
time during program execution.
Flash memory modules are only good for a limited number of Read/Write
cycles, which means that they can burn out if we use them too much, too
often. As such, Flash memory is better used to store persistent data, and
RAM should be used to store volatile data items.
Flash memory works much faster than traditional EEPROMs because it
writes data in chunks, usually 512 bytes in size, instead of 1 byte at a time
like EEPROM does, but this can also be a disadvantage: we have to erase a
whole sector in the Flash even if we need to erase only 1 byte.
9
Memory types
Program Memory segments
10
In object files there are areas called sections. Sections can hold executable
code, data, dynamic linking information, debugging data, symbol tables,
relocation information, comments, string tables, and notes.
There are several sections that are common to all executable formats (may be
named differently, depending on the compiler/linker) as listed below:
 .text
 .bss
 .data
 .rdata
11
Program Memory segments
 .text
This section contains the executable instruction codes and is shared
among every process running the same binary. This section usually has
READ and EXECUTE permissions only. This section is the one most affected
by optimization.
 .bss
BSS stands for ‘Block Started by Symbol’. It holds un-initialized global and
static variables. Since the BSS only holds variables that don't have any
values yet, it doesn't actually need to store the image of these variables.
The size that BSS will require at runtime is recorded in the object file, but
the BSS (unlike the data section) doesn't take up any actual space in the
object file.
12
Program Memory segments
 .data
Contains the initialized global and static variables and their values. It is
usually the largest part of the executable. It usually has READ/WRITE
permissions.
 .rdata
Also known as .rodata (read-only data) section. This contains constants
and string literals.
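To connect these sections to C code, the sketch below shows where typical definitions usually end up; the exact placement and section names depend on the compiler and linker, so treat the comments as the common case rather than a guarantee.

#include <stdio.h>

int         counter = 5;    /* initialized global          -> .data            */
int         buffer[256];    /* uninitialized global        -> .bss             */
static int  calls;          /* uninitialized static        -> .bss             */
const int   limit = 100;    /* constant                    -> .rdata / .rodata */
const char *msg = "hello";  /* pointer in .data, the literal "hello" in .rodata */

int main(void)              /* machine code                -> .text            */
{
    int local = 0;          /* automatic variable          -> stack            */
    printf("%d %d %d %d %s %d\n",
           counter, buffer[0], calls, limit, msg, local);
    return 0;
}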
13
Program Memory segments
 Stack vs. Heap
There are also 2 more memory segments when program is loaded into
memory (Stack and Heap).
 Stack:
The stack area contains the program stack (functions working area), a
LIFO (Last In First Out) structure, typically located in the higher parts
of memory. On the standard PC x86 computer architecture it grows
toward address zero, on some other architectures it grows the
opposite direction.
A “stack pointer” register tracks the top of the stack; it is adjusted
each time a value is “pushed” onto the stack.
It is where automatic variables are stored, along with information that
is saved each time a function is called.
14
Program Memory segments
 Heap:
It is the segment where dynamic memory
allocation takes place.
The heap area begins at the end of the BSS
segment and grows to larger addresses from there.
The Heap area is managed by malloc, realloc, and
free.
The stack area traditionally adjoined the heap area
and grew the opposite direction.
Heaps are more difficult to implement. Typically,
we need some code that manages the heap. We
request memory from the heap, and the heap
code must have data structures to keep track of
which memory has been allocated.
15
Program Memory segments
Memory allocation
16
 Static Memory Allocation
Memory is allocated for the declared variable by the compiler. The
address can be obtained by using the ‘address of’ (&) operator and can be
assigned to a pointer. The memory is allocated at compile time.
Since most of the declared variables have static memory, this kind of
assigning the address of a variable to a pointer is known as static memory
allocation.
We have no control over the lifetime of this memory. E.g. a variable in a
function is only there until the function finishes.
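A short sketch of this kind of static allocation, where the address of a compiler-allocated variable is simply assigned to a pointer (the variable names are just for illustration):

#include <stdio.h>

int global_counter;                 /* allocated by the compiler/linker          */

int main(void)
{
    int local = 42;                 /* allocated on the stack at function entry  */
    int *p1 = &global_counter;      /* 'address of' operator gives the address   */
    int *p2 = &local;               /* valid only while main() is running        */

    *p1 = 10;
    printf("%d %d\n", *p1, *p2);    /* prints 10 42 */
    return 0;
}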
17
Memory allocation
 Dynamic Memory Allocation
Allocation of memory at the time of execution (run time) is known as
dynamic memory allocation. The functions calloc() and malloc() support
allocation of dynamic memory.
Dynamic allocation is done by calling these functions; the value they return
is assigned to a pointer variable.
We control the exact size and the lifetime of these memory locations. If we
don't free them, we'll run into memory leaks, which may cause our application
to crash, since at some point it cannot allocate more memory.
Example:
/* Allocate block of 100 integers */
int *ptr1 = (int*) malloc(100*sizeof(int));
/* Allocate block of 100 integers zero initialized */
int *ptr2 = (int*) calloc(100, sizeof(int));
/* Free allocated memory */
free(ptr1); free(ptr2);
18
Memory allocation
 Dynamic Memory Allocation Problems
There are a number of problems with dynamic memory allocation in a
real time system:
• The standard library functions (malloc() and free()) are not normally
reentrant, which would be problematic in a multithreaded application.
• A more intractable problem is associated with the performance of
malloc(). Its behavior is unpredictable, as the time it takes to allocate
memory is extremely variable. Such nondeterministic behavior is
intolerable in real time systems.
• Without great care, it is easy to introduce memory leaks into
application code implemented using malloc() and free(). This is caused
by memory being allocated and never being deallocated. Such errors
tend to cause a gradual performance degradation and eventual failure.
This type of bug can be very hard to locate.
• Memory fragmentation problems arise because of dynamic memory
allocation.
19
Memory allocation
 Dynamic Memory Allocation Problems
Memory fragmentation problem example:
For this example, it is assumed that there is a 10K heap. First, an area of 3K
is requested, thus:
#define K (1024)
char *p1, *p2;
p1 = malloc(3*K);
To check if the allocation succeeded:
if (p1 != NULL)
{
    printf("Succeeded");
}
Then, a further 4K is requested:
p2 = malloc(4*K);
3K of memory is now free. 20
Memory allocation
[Heap diagram: p1 → 3K allocated | p2 → 4K allocated | 3K empty]
 Dynamic Memory Allocation Problems
Some time later, the first memory allocation, pointed to by p1, is de-allocated:
free(p1);
This leaves 6K of memory free in two 3K chunks. A further request for a 4K
allocation is issued:
p1 = malloc(4*K);
This results in a failure – NULL is returned into p1 – because, even though
6K of memory is available, there is not a 4K contiguous block available.
This is memory fragmentation.
21
Memory allocation
[Heap diagram: 3K empty | p2 → 4K allocated | 3K empty]
 Dynamic Memory Allocation Problems
malloc(0) is implementation defined: it can return NULL, or it can return a
valid pointer that may be passed to free() but shouldn’t be dereferenced.
malloc(-10), or any negative number, returns a NULL pointer because malloc()
takes an unsigned (size_t) argument, so the negative number is converted to a
very big positive number that cannot be allocated.
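A small test of these two cases; since malloc(0) is implementation defined, the first printed pointer may be NULL or a valid unique pointer depending on the library.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p0 = malloc(0);     /* implementation defined: NULL or a unique pointer   */
    void *pn = malloc(-10);   /* -10 is converted to a huge size_t value; the
                                 allocation is expected to fail and return NULL     */

    printf("malloc(0)   returned %p\n", p0);
    printf("malloc(-10) returned %p\n", pn);

    free(p0);                 /* free(NULL) is a safe no-op, so both calls are fine */
    free(pn);
    return 0;
}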
22
Memory allocation
Program linking
23
 Linking
The linker actually enables separate compilation. An executable can be
made up of a number of source files, each of which can be compiled and
assembled into its own object file independently.
In a typical system, a number of programs will be running. Each program
relies on a number of functions, some of which will be standard C library
functions, like printf(), malloc(), strcpy(), etc. and some are non-standard
or user defined functions.
If every program uses the standard C library, each program would normally
have its own copy of this particular library present within it.
Unfortunately, this results in wasted resources and degrades efficiency
and performance.
24
Program linking
 Linking
Since the C library is common, it is better to have each program reference
one common instance of that library (a shared library), instead of
having each program contain its own copy of the library.
This is implemented during the linking process, where some of the objects
are linked at link time (static linking) whereas others are linked at
run time (deferred/dynamic linking).
25
Program linking
 Static Linking
The term ‘statically linked’ means that the program and the particular
library that it’s linked against are combined together by the linker at link
time.
This means that the binding between the program and the particular
library is fixed and known at link time, before the program runs. It also
means that we can't change this binding unless we re-link the program
with a new version of the library.
Programs that are linked statically are linked against archives of objects
(libraries) that typically have the extension of .a. An example of such a
collection of objects is the standard C library, libc.a.
You might consider linking a program statically for example, in cases where
you weren't sure whether the correct version of a library will be available
at runtime, or if you were testing a new version of a library that you don't
yet want to install as shared.
26
Program linking
 Static Linking
To generate a static library using gcc and be used later for linking:
1-Generate object code for the required source file
D:\newProject>gcc -c file.c -o file.o
2-Archive the object code inside a library
D:\newProject>ar rcs file.lib file.o
3-Statically link the library to our main.c file
D:\newProject>gcc main.c file.lib -o app.exe
The drawback of this technique is that the executable is quite big in size,
since all the needed information has to be brought together inside it.
27
Program linking
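For reference, the file.c and main.c used in the commands above could be as simple as the following; these contents are hypothetical, just to make the example self-contained.

/* file.c : implementation that goes into the library */
int add(int a, int b)
{
    return a + b;
}

/* main.c : uses the library function */
#include <stdio.h>

int add(int a, int b);           /* normally placed in a header such as file.h */

int main(void)
{
    printf("%d\n", add(2, 3));   /* prints 5 */
    return 0;
}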
 Dynamic Linking
The term ‘dynamically linked’ means that the program and the particular
library it references are not combined together by the linker at link time.
Instead, the linker places information into the executable that tells the
loader which shared object module the code is in and which runtime linker
should be used to find and bind the references.
This means that the binding between the program and the shared object is
done at run time: before the program starts, the appropriate shared
objects are found and bound.
This type of program is called a partially bound executable, because it isn't
fully resolved. The linker, at link time, didn't cause all the referenced
symbols in the program to be associated with specific code from the
library. 28
Program linking
 Dynamic Linking
Instead, the linker simply said something like: “This program calls some
functions within a particular shared object, so I'll just make a note of
which shared object these functions are in, and continue on”.
Symbols for the shared objects are only verified for their validity to ensure
that they do exist somewhere and are not yet combined into the program.
The linker stores in the executable program, the locations of the external
libraries where it found the missing symbols. Effectively, this defers the
binding until runtime.
Programs that are linked dynamically are linked against shared objects that
have the extension .so. An example of such an object is the shared object
version of the standard C library, libc.so.
29
Program linking
 Dynamic Linking
The advantages of deferring some of the objects/modules during the static
linking step until they are finally needed (at run time) include:
• Program files (on disk) become much smaller because they need not
hold all necessary text and data segments information. It is very useful
for portability.
• Standard libraries may be upgraded or patched without every program
needing to be re-linked. This clearly requires some agreed
module-naming convention that enables the dynamic linker to find the
newest installed module, such as some version specification.
Furthermore, the libraries can be distributed in binary form (no
source), including dynamically linked libraries (DLLs), and when you
change your program you only have to recompile the file that was
changed.
30
Program linking
 Dynamic Linking
• Software vendors need only provide the related library modules
required. Additional runtime linking functions allow such programs to
programmatically link the required modules only.
• In combination with virtual memory, dynamic linking permits two or
more processes to share read-only executable modules such as
standard C libraries. Using this technique, only one copy of a module
needs to be resident in memory at any time, and multiple processes can
each execute this shared code (read only). This results in a
considerable memory saving, although it demands an efficient swapping
policy.
31
Program linking
 Dynamic Linking
To generate a shared library using gcc and be used later for linking:
1-Generate the shared library directly from the required source file
D:\newProject>gcc -shared file.c -o file.dll
2-Dynamically link the library to our main.c file
D:\newProject>gcc main.c -L. -lfile -o app.exe
OR we can simply type
D:\newProject>gcc main.c file.dll -o app.exe
Now the code inside file.dll will not exist inside the output executable,
but file.dll must exist when running app.exe.
32
Program linking
Function call
33
 Function call with respect to stack
Assembly languages provide the ability to support functions. Functions are
perhaps the most fundamental language feature for abstraction and code
reuse. They allow us to refer to some piece of code by a name. To use a
function, all we need to know is how many arguments it needs, of what
types, what it returns, and what it computes.
In particular, it's not necessary to know how the function does what it
does to be able to manage function calls with respect to the stack.
34
Function call
 Function call with respect to stack
To think about what's required, let's think about what happens in a
function call.
• When a function call is executed, the arguments need to be evaluated
to values (at least, for C-like programming languages).
• Then, control flow jumps to the body of the function, and code begins
executing there.
• Once a return statement has been encountered, we're done with the
function, and return back to the function call.
Programming languages make functions easy to maintain and write by
giving each function its own section of memory to operate in.
35
Function call
 Function call with respect to stack
For example, suppose we have the following function:
int pickMin(int x, int y, int z)
{
    int min = x;
    if (y < min)
        min = y;
    if (z < min)
        min = z;
    return min;
}
We declare parameters x, y, and z. We also declare a local variable, min.
We know that these variables won't interfere with variables in other
functions, even if those functions use the same variable names.
36
Function call
 Function call with respect to stack
In fact, we also know that these variables won't interfere with separate
invocations of itself. For example, consider this recursive function:
int factorial(unsigned int i)
{
    if (i <= 1)
    {
        return 1;
    }
    return i * factorial(i - 1);
}
Each call to factorial produces a new memory location for i. Thus, each
separate call (or invocation) to factorial has its own copy of i.
How does this get implemented? In order to understand function calls, we
need to understand the stack, and we need to understand how assembly
languages (MIPS as our example) deal with the stack. 37
Function call
 Function call with respect to stack
When a program starts executing, a certain contiguous section of memory,
called the stack, is set aside for the program.
38
Function call
 Function call with respect to stack
The stack pointer is usually a register that contains the top of the stack.
The stack pointer contains the smallest address x such that any address
smaller than x is considered garbage, and any address greater than or
equal to x is considered valid.
In the example above, the stack pointer contains the value 0x0000 1000,
which was somewhat arbitrarily chosen.
The shaded region of the diagram represents valid parts of the stack.
It's useful to think of the following aspects of a stack:
• Stack bottom: The largest valid address of a stack. When a stack is
initialized, the stack pointer points to the stack bottom.
• Stack limit: The smallest valid address of a stack. If the stack pointer
gets smaller than this, then there's a stack overflow (this should not be
confused with overflow from math operations). 39
Function call
 Function call with respect to stack
Like the data structure by the same name, there are two operations on the
stack: push and pop.
Usually, push and pop are defined as follows:
• Push: We can push one or more registers, by setting the stack pointer
to a smaller value (usually by subtracting 4 times the number of
registers to be pushed on the stack) and copying the registers to the
stack.
• Pop: We can pop one or more registers, by copying the data from the
stack to the registers, then adding a value to the stack pointer (usually
4 times the number of registers to be popped off the stack).
Thus, pushing is a way of saving the contents of the register to memory,
and popping is a way of restoring the contents of the register from
memory.
40
Function call
 Function call with respect to stack
Some ISAs have an explicit push and pop instruction. MIPS (as
our example) does not. However, we can get the same behavior as push
and pop by manipulating the stack pointer directly.
The stack pointer, by convention (not mandatory but recommended), is
$r29. That is, it's register 29.
Here's how to implement the equivalent of push $r3 in MIPS, which is to
push register $r3 onto the stack.
push: addi $sp, $sp, -4 # Decrement stack pointer by 4
sw $r3, 0($sp) # Save $r3 to stack
The label "push" is unnecessary. It's there just to make it easier to tell it's
a push. The code would work just fine without this label.
41
Function call
 Function call with respect to stack
Here's a diagram of a push operation:
42
Function call
The diagram on the left shows the stack before the push operation.
The diagram in the center shows the stack pointer being decremented by 4.
The diagram on the right shows the stack after the 4 bytes from register 3
have been copied to address 0x000f fffc.
 Function call with respect to stack
Popping off the stack is the opposite of pushing on the stack. First, we
copy the data from the stack to the register, then we adjust the stack
pointer:
pop: lw $r3, 0($sp) # Copy from stack to $r3
addi $sp, $sp, 4 # Increment stack pointer by 4
43
Function call
As we can see, the data still is on the stack, but once the pop operation is
completed, we consider that part of the data invalid. Thus, the next push
operation overwrites this data.
 Function call with respect to stack
That’s why, when we return a pointer to a local variable or to a parameter
that was passed by value, the value may stay valid initially but later on
gets corrupted.
The data stays on the garbage part of the stack until the next push
operation overwrites it (that's when the data gets corrupted).
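A classic C example of this bug: returning the address of a local variable. The pointer may appear to work right after the call, but reading through it is undefined behavior, and the value is typically corrupted once another call reuses that part of the stack. The function names here are just for illustration.

#include <stdio.h>

int *getValue(void)
{
    int local = 123;
    return &local;              /* BUG: local lives in getValue()'s stack frame      */
}

void clobber(void)
{
    int junk[4] = {0, 1, 2, 3}; /* reuses (overwrites) the popped frame              */
    (void)junk;
}

int main(void)
{
    int *p = getValue();        /* p now points into the "garbage" part of the stack */
    clobber();                  /* next call pushes a new frame over that area       */
    printf("%d\n", *p);         /* undefined behavior: 123 is most likely gone       */
    return 0;
}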
44
Function call
 Function call with respect to stack
Stack and Functions: Let's now see how the stack is used to implement
functions. For each function call, there's a section of the stack reserved for
the function. This is usually called a stack frame.
Let's imagine we're starting in main() in a C program. The stack looks
something like this:
45
Function call
 Function call with respect to stack
We'll call this the stack frame for main(). A stack frame exists whenever a
function has started but has yet to complete.
Suppose, inside the body of main(), there's a call to foo(), and suppose
foo() takes two arguments. One way to pass the arguments to foo() is through
the stack. Thus, there needs to be assembly language code in main() to
"push" arguments for foo() onto the stack. The result looks like:
46
Function call
 Function call with respect to stack
As you can see, by placing the arguments on the stack, the stack frame for
main() has increased in size. We also reserved some space for the return
value. The return value is computed by foo(), so it will be filled out once
foo() is done.
Once we get into code for foo(), the function foo() may need local
variables, so foo() needs to push some space on the stack, which looks
like:
47
Function call
 Function call with respect to stack
foo() can access the arguments passed to it from main() because the code
in main() places the arguments just as foo() expects them.
We've added a new pointer called FP which stands for frame pointer. The
frame pointer points to the location where the stack pointer was, just
before foo() moved the stack pointer for foo()'s own local variables.
Having a frame pointer is convenient when a function is likely to move the
stack pointer several times throughout the course of running the function.
The idea is to keep the frame pointer fixed for the duration of foo()'s stack
frame. The stack pointer, in the meanwhile, can change values.
Thus, we can use the frame pointer to compute the locations in memory
for both arguments as well as local variables. Since it doesn't move, the
computations for those locations should be some fixed offset from the
frame pointer. 48
Function call
 Function call with respect to stack
And, once it's time to exit foo(), you just have to set the stack pointer to
where the frame pointer is, which effectively pops off foo()'s stack frame.
It's quite handy to have a frame pointer.
We can imagine the stack growing if foo() calls another function, say, bar().
foo() would push arguments on the stack just as main() pushed arguments
on the stack for foo().
So when we exit foo() the stack looks just as it did before we pushed on
foo()'s stack frame, except this time the return value has been filled in.
49
Function call
 Function call with respect to stack
50
Function call
Once main() has the return value, it can pop that and the arguments to
foo() off the stack:
 Function call with respect to parameters
Input parameter: a parameter passed to a function to provide information
to the function; it can be passed by value or by address:
 Pass by value: the variable's value is passed and gets a new name local to
the function, so if this new variable is changed, the original variable
that was passed will not be affected.
void myFunc(int x)
{
    x++;
}
Inside any other function we type:
int x = 10;
myFunc(x);
printf("%d", x); /* Still prints 10 */
51
Function call
 Function call with respect to parameters
 Pass by address: the variable's address is passed and saved inside a pointer,
so if we dereference the pointer and change the variable it points at,
then the original variable that was passed will be changed.
void myFunc(int* ptr)
{
    *ptr += 1;
}
Inside any other function we type:
int x = 10;
myFunc(&x);
printf("%d", x); /* Prints 11 */
Passing by address (by reference) is very useful when passing a structure,
for example: instead of passing the whole structure we can just pass a
pointer to it, so memory is saved.
52
Function call
 Function call with respect to parameters
Output parameter: a parameter passed to a function to be filled by the
function with the required information, so it should be passed by address.
void readTemperature(int *temp_ptr)
{
    *temp_ptr = readADCValue();
}
Inside any other function we type:
int temp;
readTemperature(&temp);
After returning from readTemperature(), temp will contain the updated
temperature value.
An output parameter is very useful when it is required to return multiple
values, not only one.
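For example, a function that needs to hand back two results at once can take two output parameters; this is a small hypothetical sketch, not part of the original slide.

#include <stdio.h>

/* Returns both quotient and remainder through output parameters. */
void divide(int dividend, int divisor, int *quotient_ptr, int *remainder_ptr)
{
    *quotient_ptr  = dividend / divisor;
    *remainder_ptr = dividend % divisor;
}

int main(void)
{
    int q, r;
    divide(17, 5, &q, &r);
    printf("17 / 5 = %d remainder %d\n", q, r);   /* 3 remainder 2 */
    return 0;
}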
53
Function call
 Function call with respect to parameters
I/O parameter: a parameter passed to a function to provide information, and
then filled by the function with new information, so it should also be
passed by address.
void myFunc(int *ptr)
{
    *ptr = doSomeProcessing(*ptr);
}
54
Function call
 Function call with respect to parameters
Return value: a function can return only one value, or nothing (a void
return type) in case of no return value.
int square(int x)
{
    return x*x;
}
Inside any other function we type:
int sq = square(10); /* sq value will be 100 */
55
Function call
Functions types
56
 Synchronous vs. Asynchronous function
Synchronous function: the function returns only after all the required
processing is finished; it may do some sort of busy wait for a specific
operation like I/O.
void getDeviceDataSynch(char *data)
{
    /* Will not return until reading is finished */
    *data = readInputDevice();
}
Asynchronous function: the function returns immediately after submitting the
operation request, and the operation makes some sort of callback to your
code or an ISR when it completes, to give its completion status.
void getDeviceDataAsynch(void)
{
    /* Will return immediately without any data */
    submitReadRequest();
}
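A minimal sketch of the callback idea, assuming the hypothetical driver calls submitReadRequest() and readInputDevice() from the slide exist: the caller registers a function, the request is submitted, and the device's completion interrupt later invokes the callback with the data.

#include <stdint.h>
#include <stddef.h>

typedef void (*read_callback_t)(uint8_t data);   /* completion callback type */

static read_callback_t pending_callback = NULL;

/* Hypothetical low-level driver calls, assumed to exist elsewhere. */
extern void submitReadRequest(void);
extern uint8_t readInputDevice(void);

/* Asynchronous API: returns immediately after submitting the request. */
void getDeviceDataAsynchCb(read_callback_t callback)
{
    pending_callback = callback;
    submitReadRequest();
}

/* Called from the device's "read complete" ISR. */
void deviceReadCompleteISR(void)
{
    uint8_t data = readInputDevice();     /* data is ready now, no busy wait    */
    if (pending_callback != NULL)
    {
        pending_callback(data);           /* hand the result back to the caller */
        pending_callback = NULL;
    }
}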
57
Functions types
 Reentrant vs. Non-Reentrant function
Reentrant function: A function is reentrant if it can be invoked while
already in the process of executing. That is, a function is reentrant if it can
be interrupted in the middle of execution (for example, by a signal or
interrupt) and invoked again before the interrupted execution completes.
A reentrant function shall satisfy the following conditions:
• Should not call another non-reentrant function.
• Should not contain static variables.
• Should not use any global variables.
• Should not use any shared resources.
Non-Reentrant function: a function is non-reentrant if it breaks one or
more of the reentrancy conditions.
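As an illustration, the first function below is non-reentrant because it keeps its result in a static buffer shared by all callers (an interrupting second call overwrites the first result); the second is reentrant because it works only with its parameters and local variables. This is a simple sketch with made-up names.

#include <stdio.h>

/* Non-reentrant: uses a static buffer shared by every caller. */
char *format_temp_bad(int temp)
{
    static char buffer[16];
    sprintf(buffer, "%d C", temp);
    return buffer;
}

/* Reentrant: the caller supplies the buffer, no static or global state. */
void format_temp_good(int temp, char *buffer, size_t size)
{
    snprintf(buffer, size, "%d C", temp);
}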
58
Functions types
 Recursive function
Recursive function: Recursion is the process of repeating items in a self-
similar way. In programming languages, if a program allows you to call a
function inside the same function, then it is called a recursive call of the
function.
Example: the factorial function:
int factorial(unsigned int i)
{
    if (i <= 1)
    {
        return 1;
    }
    return i * factorial(i - 1);
}
A recursive function is not safe, as it may lead to stack overflow, for
example in case of a missing termination (base) condition. 59
Functions types
 Inline function vs. Function-like macro
Inline function: by declaring a function inline, you can direct the compiler
to make calls to that function faster. One way it can achieve this is to
integrate that function's code into the code of its callers (the function's
code is copied at the call location without an actual call). This makes
execution faster by eliminating the function-call overhead.
inline int myFunc(unsigned int i)
{
    /* Any code */
}
Note that inline is just a request to the compiler; it may or may not be
honored, whereas a function-like macro is always replaced with the
corresponding code lines.
An inline function is the same as any normal function with respect to type
checking and so on, so it doesn't suffer from the problems of a function-like
macro. 60
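The classic example of those macro problems is argument re-evaluation: a function-like macro pastes its argument text everywhere the parameter appears, while an inline function evaluates the argument once and checks its type. The names below are illustrative only.

#include <stdio.h>

#define SQUARE_MACRO(x) ((x) * (x))       /* argument text is expanded twice        */

static inline int square_inline(int x)    /* argument evaluated once, type checked  */
{
    return x * x;
}

int main(void)
{
    int a = 3, b = 3;

    int m = SQUARE_MACRO(a++);   /* expands to ((a++) * (a++)): a is incremented twice
                                    and the result is undefined behavior in C        */
    int f = square_inline(b++);  /* b incremented exactly once, f is 9               */

    printf("%d %d a=%d b=%d\n", m, f, a, b);
    return 0;
}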
Functions types
Task 3
61
Mohamed AbdAllah
Embedded Systems Engineer
mohabdallah8@gmail.com
62

More Related Content

PDF
Embedded C - Lecture 4
Mohamed Abdallah
 
PDF
Embedded C - Lecture 2
Mohamed Abdallah
 
PDF
Embedded C - Lecture 1
Mohamed Abdallah
 
PDF
Hardware interfacing basics using AVR
Mohamed Abdallah
 
PPTX
Experience with jemalloc
Kit Chan
 
PPTX
Dynamic memory allocation in c language
Tanmay Modi
 
PPT
Embedded c program and programming structure for beginners
Kamesh Mtec
 
Embedded C - Lecture 4
Mohamed Abdallah
 
Embedded C - Lecture 2
Mohamed Abdallah
 
Embedded C - Lecture 1
Mohamed Abdallah
 
Hardware interfacing basics using AVR
Mohamed Abdallah
 
Experience with jemalloc
Kit Chan
 
Dynamic memory allocation in c language
Tanmay Modi
 
Embedded c program and programming structure for beginners
Kamesh Mtec
 

What's hot (20)

PPTX
Embedded C programming session10
Keroles karam khalil
 
PDF
C Programming - Refresher - Part III
Emertxe Information Technologies Pvt Ltd
 
PPTX
用Raspberry Pi 學Linux I2C Driver
艾鍗科技
 
PPTX
Turing machine and Halting Introduction
AmartyaYandrapu1
 
PDF
Design and Implementation of GCC Register Allocation
Kito Cheng
 
PPTX
Removing ambiguity-from-cfg
Ashik Khan
 
PDF
Embedded Operating System - Linux
Emertxe Information Technologies Pvt Ltd
 
PPT
Basic Linux Internals
mukul bhardwaj
 
PPT
Embedded _c_
Moorthy Peesapati
 
PDF
Intel x86 Architecture
ChangWoo Min
 
PPTX
ADDRESSING MODE
Anika Shahabuddin
 
PDF
Instruction formats-in-8086
MNM Jain Engineering College
 
PPSX
C language (Collected By Dushmanta)
Dushmanta Nath
 
PPTX
General register organization (computer organization)
rishi ram khanal
 
PPTX
Unix signals
Isaac George
 
PPTX
Programmable peripheral interface 8255
Marajulislam3
 
Embedded C programming session10
Keroles karam khalil
 
C Programming - Refresher - Part III
Emertxe Information Technologies Pvt Ltd
 
用Raspberry Pi 學Linux I2C Driver
艾鍗科技
 
Turing machine and Halting Introduction
AmartyaYandrapu1
 
Design and Implementation of GCC Register Allocation
Kito Cheng
 
Removing ambiguity-from-cfg
Ashik Khan
 
Embedded Operating System - Linux
Emertxe Information Technologies Pvt Ltd
 
Basic Linux Internals
mukul bhardwaj
 
Embedded _c_
Moorthy Peesapati
 
Intel x86 Architecture
ChangWoo Min
 
ADDRESSING MODE
Anika Shahabuddin
 
Instruction formats-in-8086
MNM Jain Engineering College
 
C language (Collected By Dushmanta)
Dushmanta Nath
 
General register organization (computer organization)
rishi ram khanal
 
Unix signals
Isaac George
 
Programmable peripheral interface 8255
Marajulislam3
 
Ad

Viewers also liked (19)

PDF
Raspberry Pi - Lecture 5 Python for Raspberry Pi
Mohamed Abdallah
 
PDF
Raspberry Pi - Lecture 2 Linux OS
Mohamed Abdallah
 
PDF
Raspberry Pi - Lecture 1 Introduction
Mohamed Abdallah
 
PDF
Raspberry Pi - Lecture 4 Hardware Interfacing Special Cases
Mohamed Abdallah
 
PDF
Raspberry Pi - Lecture 3 Embedded Communication Protocols
Mohamed Abdallah
 
PDF
Raspberry Pi - Lecture 6 Working on Raspberry Pi
Mohamed Abdallah
 
PPTX
Embedded c c++ programming fundamentals master
Hossam Hassan
 
PDF
Embedded C programming based on 8051 microcontroller
Gaurav Verma
 
PPTX
Embedded c
Ami Prakash
 
PPT
Embedded system - embedded system programming
Vibrant Technologies & Computers
 
PPTX
States & Capitals 111
Bermanburgh
 
PPT
C++ for Embedded Programming
Colin Walls
 
PPTX
5 signs you need a leased line
Entanet
 
PPTX
Embedded c programming
PriyaDYP
 
PPT
Embedded c & working with avr studio
Nitesh Singh
 
ODP
C prog ppt
xinoe
 
PPTX
Embedded C
Krunal Siddhapathak
 
Raspberry Pi - Lecture 5 Python for Raspberry Pi
Mohamed Abdallah
 
Raspberry Pi - Lecture 2 Linux OS
Mohamed Abdallah
 
Raspberry Pi - Lecture 1 Introduction
Mohamed Abdallah
 
Raspberry Pi - Lecture 4 Hardware Interfacing Special Cases
Mohamed Abdallah
 
Raspberry Pi - Lecture 3 Embedded Communication Protocols
Mohamed Abdallah
 
Raspberry Pi - Lecture 6 Working on Raspberry Pi
Mohamed Abdallah
 
Embedded c c++ programming fundamentals master
Hossam Hassan
 
Embedded C programming based on 8051 microcontroller
Gaurav Verma
 
Embedded c
Ami Prakash
 
Embedded system - embedded system programming
Vibrant Technologies & Computers
 
States & Capitals 111
Bermanburgh
 
C++ for Embedded Programming
Colin Walls
 
5 signs you need a leased line
Entanet
 
Embedded c programming
PriyaDYP
 
Embedded c & working with avr studio
Nitesh Singh
 
C prog ppt
xinoe
 
Ad

Similar to Embedded C - Lecture 3 (20)

PPTX
Dynamic Memory Allocation in C
Vijayananda Ratnam Ch
 
PPTX
Lecture 3.3.1 Dynamic Memory Allocation and Functions.pptx
roykaustav382
 
PDF
Memory Management for C and C++ _ language
23ecuos117
 
PPTX
1. C Basics for Data Structures Bridge Course
P. Subathra Kishore, KAMARAJ College of Engineering and Technology, Madurai
 
PDF
CH08.pdf
ImranKhan880955
 
PPTX
C dynamic ppt
RJ Mehul Gadhiya
 
DOCX
Dma
Acad
 
PPTX
Introduction to Data Structures, Data Structures using C.pptx
poongothai11
 
PDF
muja osjkkhkhkkhkkkfdxfdfddkhvjlbjljlhgg
mujahidHajishifa
 
PPTX
C MEMORY MODEL​.pptx
SKUP1
 
PPTX
C MEMORY MODEL​.pptx
LECO9
 
PPT
7. Memory management in operating system.ppt
imrank39199
 
PPT
C language introduction
musrath mohammad
 
PPTX
Software for embedded systems complete
Anand Kumar
 
PPTX
Memory Management.pptx
BilalImran17
 
PPT
operationg systemsdocumentmemorymanagement
SNIGDHAAPPANABHOTLA
 
PPT
OS-unit-3 part -1mxmxmxmmxmxmmxmxmxmxmxmmxmxmmx.ppt
SNIGDHAAPPANABHOTLA
 
PPT
Bullet pts Dyn Mem Alloc.pptghftfhtfytftyft
2023209914aditya
 
PPTX
BBACA-SEM-III-Datastructure-PPT(0) for third semestetr
ssuser951fc8
 
PPT
Memory management principles in operating systems
nazimsattar
 
Dynamic Memory Allocation in C
Vijayananda Ratnam Ch
 
Lecture 3.3.1 Dynamic Memory Allocation and Functions.pptx
roykaustav382
 
Memory Management for C and C++ _ language
23ecuos117
 
CH08.pdf
ImranKhan880955
 
C dynamic ppt
RJ Mehul Gadhiya
 
Dma
Acad
 
Introduction to Data Structures, Data Structures using C.pptx
poongothai11
 
muja osjkkhkhkkhkkkfdxfdfddkhvjlbjljlhgg
mujahidHajishifa
 
C MEMORY MODEL​.pptx
SKUP1
 
C MEMORY MODEL​.pptx
LECO9
 
7. Memory management in operating system.ppt
imrank39199
 
C language introduction
musrath mohammad
 
Software for embedded systems complete
Anand Kumar
 
Memory Management.pptx
BilalImran17
 
operationg systemsdocumentmemorymanagement
SNIGDHAAPPANABHOTLA
 
OS-unit-3 part -1mxmxmxmmxmxmmxmxmxmxmxmmxmxmmx.ppt
SNIGDHAAPPANABHOTLA
 
Bullet pts Dyn Mem Alloc.pptghftfhtfytftyft
2023209914aditya
 
BBACA-SEM-III-Datastructure-PPT(0) for third semestetr
ssuser951fc8
 
Memory management principles in operating systems
nazimsattar
 

Recently uploaded (20)

PDF
ETO & MEO Certificate of Competency Questions and Answers
Mahmoud Moghtaderi
 
PPTX
Module2 Data Base Design- ER and NF.pptx
gomathisankariv2
 
PPT
High Data Link Control Protocol in Data Link Layer
shailajacse
 
PDF
오픈소스 LLM, vLLM으로 Production까지 (Instruct.KR Summer Meetup, 2025)
Hyogeun Oh
 
PDF
FLEX-LNG-Company-Presentation-Nov-2017.pdf
jbloggzs
 
PPTX
Fluid Mechanics, Module 3: Basics of Fluid Mechanics
Dr. Rahul Kumar
 
PDF
dse_final_merit_2025_26 gtgfffffcjjjuuyy
rushabhjain127
 
PPTX
anatomy of limbus and anterior chamber .pptx
ZePowe
 
PDF
algorithms-16-00088-v2hghjjnjnhhhnnjhj.pdf
Ajaykumar966781
 
PPTX
Module_II_Data_Science_Project_Management.pptx
anshitanarain
 
PDF
Monitoring Global Terrestrial Surface Water Height using Remote Sensing - ARS...
VICTOR MAESTRE RAMIREZ
 
PDF
Unit I Part II.pdf : Security Fundamentals
Dr. Madhuri Jawale
 
PPTX
Lesson 3_Tessellation.pptx finite Mathematics
quakeplayz54
 
PDF
2010_Book_EnvironmentalBioengineering (1).pdf
EmilianoRodriguezTll
 
PPTX
TE-AI-Unit VI notes using planning model
swatigaikwad6389
 
PDF
Chad Ayach - A Versatile Aerospace Professional
Chad Ayach
 
PDF
Top 10 read articles In Managing Information Technology.pdf
IJMIT JOURNAL
 
PDF
Cryptography and Information :Security Fundamentals
Dr. Madhuri Jawale
 
PPTX
Unit 5 BSP.pptxytrrftyyydfyujfttyczcgvcd
ghousebhasha2007
 
ETO & MEO Certificate of Competency Questions and Answers
Mahmoud Moghtaderi
 
Module2 Data Base Design- ER and NF.pptx
gomathisankariv2
 
High Data Link Control Protocol in Data Link Layer
shailajacse
 
오픈소스 LLM, vLLM으로 Production까지 (Instruct.KR Summer Meetup, 2025)
Hyogeun Oh
 
FLEX-LNG-Company-Presentation-Nov-2017.pdf
jbloggzs
 
Fluid Mechanics, Module 3: Basics of Fluid Mechanics
Dr. Rahul Kumar
 
dse_final_merit_2025_26 gtgfffffcjjjuuyy
rushabhjain127
 
anatomy of limbus and anterior chamber .pptx
ZePowe
 
algorithms-16-00088-v2hghjjnjnhhhnnjhj.pdf
Ajaykumar966781
 
Module_II_Data_Science_Project_Management.pptx
anshitanarain
 
Monitoring Global Terrestrial Surface Water Height using Remote Sensing - ARS...
VICTOR MAESTRE RAMIREZ
 
Unit I Part II.pdf : Security Fundamentals
Dr. Madhuri Jawale
 
Lesson 3_Tessellation.pptx finite Mathematics
quakeplayz54
 
2010_Book_EnvironmentalBioengineering (1).pdf
EmilianoRodriguezTll
 
TE-AI-Unit VI notes using planning model
swatigaikwad6389
 
Chad Ayach - A Versatile Aerospace Professional
Chad Ayach
 
Top 10 read articles In Managing Information Technology.pdf
IJMIT JOURNAL
 
Cryptography and Information :Security Fundamentals
Dr. Madhuri Jawale
 
Unit 5 BSP.pptxytrrftyyydfyujfttyczcgvcd
ghousebhasha2007
 

Embedded C - Lecture 3

  • 1. Prepared by: Mohamed AbdAllah Embedded C-Programming Lecture 3 1
  • 2. Agenda  Memory types (RAM, ROM, EEPROM, etc).  Program memory segments.  Static vs. Dynamic memory allocation.  Static vs. Dynamic linking.  Function call with respect to stack, i/p, o/p and i/o parameters and return value.  Functions types (Synch. vs. ASynch, Reentrant vs. non-Reentrant, Recursive, Inline function vs. function-like macro).  Task 3. 2
  • 5.  ROM Read-Only Memory (ROM), a very cheap memory that we can program it once, and then we can never change the data that is on it. This makes it useless because we can't upgrade the information on the ROM chip (be it program code or data), we can't fix it if there is an error, etc.... Because of this, they are usually called "Programmable Read-Only Memory" (PROM), because we can program it once, but then we can't change it at all. 5 Memory types
  • 6.  EPROM In contrast to PROM is EPROM ("Erasable Programmable Read-Only Memory"). EPROM chips will have a little window, made of either glass or quartz that can be used to erase the memory on the chip. To erase an EPROM, the window needs to be uncovered (they usually have some sort of guard or cover), and the EPROM needs to be exposed to UV radiation to erase the memory, and allow it to be reprogrammed. 6 Memory types
  • 7.  EEPROM A step up from EPROM is EEPROM ("Electrically Erasable Programmable Read-Only Memory"). EEPROM can be erased by exposing it to an electrical charge. This means that EEPROM can be erased in circuit (as opposed to EPROM, which needs to be removed from the circuit, and exposed to UV). We can erase one byte at a time or an appropriate electrical charge will erase the entire chip, so we can't erase just certain data items at a time. Many modern microcontroller have an EEPROM section on-board, which can be used to permanently store system parameters or calibration values. These are often referred to as non-volatile memory (NVM). They can be accessed - read and write - as single bytes or blocks of bytes. EEPROM allows only a limited number of write cycles, usually several ten-thousand. Write access to on-board NVM tends to be considerably slower than RAM. Embedded software must take this into account and "queue" write requests to be executed in background. 7 Memory types
  • 8.  RAM Random Access Memory (RAM) is a temporary, volatile memory that requires a persistent electric current to maintain information. As such, a RAM chip will not store data when we turn the power OFF. RAM is more expensive than ROM, and it is often that Embedded systems can have many Kbytes of ROM (sometimes Megabytes or more), but often they have less than 100 bytes of RAM available for use in program flow. 8 Memory types
  • 9.  Flash memory Flash memory is a combination of the best parts of RAM and ROM. Like ROM, Flash memory can hold data when the power is turned off. Like RAM, Flash can be reprogrammed electrically, in whole or in part, at any time during program execution. Flash memory modules are only good for a limited number of Read/Write cycles, which means that they can burn out if we use them too much, too often. As such, Flash memory is better used to store persistent data, and RAM should be used to store volatile data items. Flash memory works much faster than traditional EEPROMs because it writes data in chunks, usually 512 bytes in size, instead of 1 byte at a time like EEPROM can do, but it may be a disadvantage as we have to erase a whole sector in the Flash even we need to erase only 1 byte. 9 Memory types
  • 11. In object files there are areas called sections. Sections can hold executable code, data, dynamic linking information, debugging data, symbol tables, relocation information, comments, string tables, and notes. There are several sections that are common to all executable formats (may be named differently, depending on the compiler/linker) as listed below:  .text  .bss  .data  .rdata 11 Program Memory segments
  • 12.  .text This section contains the executable instruction codes and is shared among every process running the same binary. This section usually has READ and EXECUTE permissions only. This section is the one most affected by optimization.  .bss BSS stands for ‘Block Started by Symbol’. It holds un-initialized global and static variables. Since the BSS only holds variables that don't have any values yet, it doesn't actually need to store the image of these variables. The size that BSS will require at runtime is recorded in the object file, but the BSS (unlike the data section) doesn't take up any actual space in the object file. 12 Program Memory segments
  • 13.  .data Contains the initialized global and static variables and their values. It is usually the largest part of the executable. It usually has READ/WRITE permissions.  .rdata Also known as .rodata (read-only data) section. This contains constants and string literals. 13 Program Memory segments
  • 14.  Stack vs. Heap There are also 2 more memory segments when program is loaded into memory (Stack and Heap).  Stack: The stack area contains the program stack (functions working area), a LIFO (Last In First Out) structure, typically located in the higher parts of memory. On the standard PC x86 computer architecture it grows toward address zero, on some other architectures it grows the opposite direction. A “stack pointer” register tracks the top of the stack, it is adjusted each time a value is “pushed” onto the stack. It is where automatic variables are stored, along with information that is saved each time a function is called. 14 Program Memory segments
  • 15.  Heap: It is the segment where dynamic memory allocation takes place. The heap area begins at the end of the BSS segment and grows to larger addresses from there. The Heap area is managed by malloc, realloc, and free. The stack area traditionally adjoined the heap area and grew the opposite direction. Heaps are more difficult to implement. Typically, we need some code that manages the heap. We request memory from the heap, and the heap code must have data structures to keep track of which memory has been allocated. 15 Program Memory segments
  • 17.  Static Memory Allocation Memory is allocated for the declared variable by the compiler. The address can be obtained by using ‘address of’ & operator and can be assigned to a pointer. The memory is allocated during compile time. Since most of the declared variables have static memory, this kind of assigning the address of a variable to a pointer is known as static memory allocation. We have no control over the lifetime of this memory. E.g: a variable in a function, is only there until the function finishes. 17 Memory allocation
  • 18.  Dynamic Memory Allocation Allocation of memory at the time of execution (run time) is known as dynamic memory allocation. The functions calloc() and malloc() support allocating of dynamic memory. Dynamic allocation of memory space is done by using these functions when value is returned by functions and assigned to pointer variables. We control the exact size and the lifetime of these memory locations. If we don't free it, we'll run into memory leaks, which may cause our application to crash, since it, at some point cannot allocation more memory. Example: /* Allocate block of 100 integers */ int *ptr1 = (int*) malloc(100*sizeof(int)); /* Allocate block of 100 integers zero initialized */ int *ptr2 = (int*) calloc(100, sizeof(int)); /* Free allocated memory */ free(ptr1); free(ptr2); 18 Memory allocation
  • 19.  Dynamic Memory Allocation Problems There are a number of problems with dynamic memory allocation in a real time system: • The standard library functions (malloc() and free()) are not normally reentrant, which would be problematic in a multithreaded application. • A more intractable problem is associated with the performance of malloc(). Its behavior is unpredictable, as the time it takes to allocate memory is extremely variable. Such nondeterministic behavior is intolerable in real time systems. • Without great care, it is easy to introduce memory leaks into application code implemented using malloc() and free(). This is caused by memory being allocated and never being deallocated. Such errors tend to cause a gradual performance degradation and eventual failure. This type of bug can be very hard to locate. • Memory fragmentation problems arise because of dynamic memory allocation. 19 Memory allocation
  • 20.  Dynamic Memory Allocation Problems Memory segmentation problem example: For this example, it is assumed hat there is a 10K heap. First, an area of 3K is requested, thus: #define K (1024) char *p1; p1 = malloc(3*K); To check if allocation succeeded if (p1 != NULL) { printf(“Succeeded”); } Then, a further 4K is requested: p2 = malloc(4*K); 3K of memory is now free. 20 Memory allocation 3K 4K 3K - Empty p1 p2
  • 21.  Dynamic Memory Allocation Problems Some time later, the first memory allocation, pointed to by p1, is de- allocated: free(p1); This leaves 6K of memory free in two 3K chunks. A further request for a 4K allocation is issued: p1 = malloc(4*K); This results in a failure – NULL is returned into p1 – because, even though 6K of memory is available, there is not a 4K contiguous block available. This is memory fragmentation. 21 Memory allocation 3K - Empty 4K 3K - Empty p2
  • 22.  Dynamic Memory Allocation Problems malloc(0) is implementation defined, it can return NULL or it can return a valid pointer to be used in free() but shouldn’t be dereferenced. malloc(-10) or any negative number returns NULL pointer as malloc() accepts unsigned values, so the negative number is converted to a very big positive number. 22 Memory allocation
  • 24.  Linking The linker actually enables separate compilation. An executable can be made up of a number of source files which can be compiled and assembled into their object files respectively, independently. In a typical system, a number of programs will be running. Each program relies on a number of functions, some of which will be standard C library functions, like printf(), malloc(), strcpy(), etc. and some are non-standard or user defined functions. If every program uses the standard C library, it means that each program would normally have a unique copy of this particular library present within it. Unfortunately, this result in wasted resources, degrade the efficiency and performance. 24 Program linking
  • 25.  Linking Since the C library is common, it is better to have each program reference the common, one instance of that library (shared library), instead of having each program contain a copy of the library. This is implemented during the linking process where some of the objects are linked during the link time (static linking) whereas some done during the run time (deferred/dynamic linking). 25 Program linking
  • 26.  Static Linking The term ‘statically linked’ means that the program and the particular library that it’s linked against are combined together by the linker at link time. This means that the binding between the program and the particular library is fixed and known at link time before the program run. It also means that we can't change this binding, unless we re-link the program with a new version of the library. Programs that are linked statically are linked against archives of objects (libraries) that typically have the extension of .a. An example of such a collection of objects is the standard C library, libc.a. You might consider linking a program statically for example, in cases where you weren't sure whether the correct version of a library will be available at runtime, or if you were testing a new version of a library that you don't yet want to install as shared. 26 Program linking
  • 27.  Static Linking To generate a static library using gcc and be used later for linking: 1-Generate object code for required source file 27 Program linking D:newProject>gcc -c file.c -o file.o D:newProject>ar rcs file.lib file.o D:newProject>gcc main.c file.lib -o app.exe 2-Archive the object code inside a library 3-Statically link the library to our main.c file The drawback of this technique is that the executable is quite big in size, all the needed information need to be brought together.
  • 28.  Dynamic Linking The term ‘dynamically linked’ means that the program and the particular library it references are not combined together by the linker at link time. Instead, the linker places information into the executable that tells the loader which shared object module the code is in and which runtime linker should be used to find and bind the references. This means that the binding between the program and the shared object is done at runtime that is before the program starts, the appropriate shared objects are found and bound. This type of program is called a partially bound executable, because it isn't fully resolved. The linker, at link time, didn't cause all the referenced symbols in the program to be associated with specific code from the library. 28 Program linking
  • 29.  Dynamic Linking Instead, the linker simply said something like: “This program calls some functions within a particular shared object, so I'll just make a note of which shared object these functions are in, and continue on”. Symbols for the shared objects are only verified for their validity to ensure that they do exist somewhere and are not yet combined into the program. The linker stores in the executable program, the locations of the external libraries where it found the missing symbols. Effectively, this defers the binding until runtime. Programs that are linked dynamically are linked against shared objects that have the extension .so. An example of such an object is the shared object version of the standard C library, libc.so. 29 Program linking
  • 30.  Dynamic Linking The advantageous to defer some of the objects/modules during the static linking step until they are finally needed (during the run time) includes: • Program files (on disk) become much smaller because they need not hold all necessary text and data segments information. It is very useful for portability. • Standard libraries may be upgraded or patched without every one program need to be re-linked. This clearly requires some agreed module-naming convention that enables the dynamic linker to find the newest, installed module such as some version specification. Furthermore the distribution of the libraries is in binary form (no source), including dynamically linked libraries (DLLs) and when you change your program you only have to recompile the file that was changed. 30 Program linking
  • 31.  Dynamic Linking • Software vendors need only provide the related libraries module required. Additional runtime linking functions allow such programs to programmatically-link the required modules only. • In combination with virtual memory, dynamic linking permits two or more processes to share read-only executable modules such as standard C libraries. Using this technique, only one copy of a module needs be resident in memory at any time, and multiple processes, each can executes this shared code (read only). This results in a considerable memory saving, although demands an efficient swapping policy. 31 Program linking
  • 32.  Dynamic Linking To generate a shared library using gcc and be used later for linking: 1-Generate the shared library directly for the required source file 32 Program linking D:newProject>gcc -shared file.c -o file.dll D:newProject>gcc main.c -L. -lfile -o app.exe D:newProject>gcc main.c file.dll -o app.exe 2-Dynamically link the library to our main.c file OR we can simply type Now the code inside file.dll will not exist inside the output executable but file.dll should exist when running app.exe.
  • 34.  Function call with respect to stack Assembly languages provide the ability to support functions. Functions are perhaps the most fundamental language feature for abstraction and code reuse. It allows us to refer to some piece of code by a name. All we need to know is how many arguments are needed, what type of arguments, and what the function returns, and what the function computes to use the function. In particular, it's not necessary to know how the function does what it does to be able to manage function calls with respect to stack. 34 Function call
  • 35.  Function call with respect to stack To think about what's required, let's think about what happens in a function call. • When a function call is executed, the arguments need to be evaluated to values (at least, for C-like programming languages). • Then, control flow jumps to the body of the function, and code begins executing there. • Once a return statement has been encountered, we're done with the function, and return back to the function call. Programming languages make functions easy to maintain and write by giving each function its own section of memory to operate in. 35 Function call
  • 36.  Function call with respect to stack For example, suppose we have the following function: int pickMin( int x, int y, int z ) { int min = x; if (y < min) min = y; if (z < min) min = z; return min; } We declare parameters x, y, and z. We also declare local variables, min. We know that these variables won't interfere with other variables in other functions, even if those functions use the same variable names. 36 Function call
  • 37.  Function call with respect to stack In fact, we also know that these variables won't interfere with separate invocations of itself. For example, consider this recursive function: int factorial(unsigned int i) { if(i <= 1) { return 1; } return i * factorial(i - 1); } Each call to fact produces a new memory location for i. Thus, each separate call (or invocation) to fact has its own copy of i. How does this get implemented? In order to understand function calls, we need to understand the stack, and we need to understand how assembly languages (MIPS as our example) deal with the stack. 37 Function call
  • 38.  Function call with respect to stack When a program starts executing, a certain contiguous section of memory is set aside for the program called the stack. 38 Function call
  • 39.  Function call with respect to stack The stack pointer is usually a register that contains the top of the stack. The stack pointer contains the smallest address x such that any address smaller than x is considered garbage, and any address greater than or equal to x is considered valid. In the example above, the stack pointer contains the value 0x0000 1000, which was somewhat arbitrarily chosen. The shaded region of the diagram represents valid parts of the stack. It's useful to think of the following aspects of a stack: • Stack bottom: The largest valid address of a stack. When a stack is initialized, the stack pointer points to the stack bottom. • Stack limit: The smallest valid address of a stack. If the stack pointer gets smaller than this, then there's a stack overflow (this should not be confused with overflow from math operations). 39 Function call
  • 40.  Function call with respect to stack Like the data structure of the same name, there are two operations on the stack: push and pop. Usually, push and pop are defined as follows: • Push: We can push one or more registers by setting the stack pointer to a smaller value (usually by subtracting 4 times the number of registers to be pushed onto the stack) and copying the registers to the stack. • Pop: We can pop one or more registers by copying the data from the stack to the registers, then adding a value to the stack pointer (usually adding 4 times the number of registers to be popped off the stack). Thus, pushing is a way of saving the contents of a register to memory, and popping is a way of restoring the contents of a register from memory. 40 Function call
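As a rough, purely illustrative C model of a downward-growing stack (this is not how a real CPU implements it; the names stack_mem, sp, push_word and pop_word are invented for this sketch):

#include <stdio.h>

#define STACK_WORDS 64

static unsigned int stack_mem[STACK_WORDS];          /* reserved stack region      */
static unsigned int *sp = stack_mem + STACK_WORDS;   /* starts at the stack bottom */

static void push_word(unsigned int value)
{
    sp--;            /* set the stack pointer to a smaller address... */
    *sp = value;     /* ...then copy the value onto the stack         */
}

static unsigned int pop_word(void)
{
    unsigned int value = *sp;   /* copy the data from the stack...        */
    sp++;                       /* ...then move the stack pointer back up */
    return value;
}

int main(void)
{
    push_word(10);
    push_word(20);
    printf("%u\n", pop_word());   /* prints 20: last pushed, first popped */
    printf("%u\n", pop_word());   /* prints 10 */
    return 0;
}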
  • 41.  Function call with respect to stack Some ISAs have explicit push and pop instructions. MIPS (as our example) does not, but we can get the same behavior as push and pop by manipulating the stack pointer directly. The stack pointer, by convention (not mandatory but recommended), is $r29. That is, it's register 29. Here's how to implement the equivalent of push $r3 in MIPS, which is to push register $r3 onto the stack:
push: addi $sp, $sp, -4   # Decrement stack pointer by 4
      sw   $r3, 0($sp)    # Save $r3 to stack
The label "push" is unnecessary. It's there just to make it easier to tell it's a push. The code would work just fine without this label. 41 Function call
  • 42.  Function call with respect to stack Here's a diagram of a push operation: The diagram on the left shows the stack before the push operation. The diagram in the center shows the stack pointer being decremented by 4. The diagram on the right shows the stack after the 4 bytes from register 3 have been copied to address 0x000f fffc. 42 Function call
  • 43.  Function call with respect to stack Popping off the stack is the opposite of pushing onto the stack. First, we copy the data from the stack to the register, then we adjust the stack pointer:
pop: lw   $r3, 0($sp)     # Copy from stack to $r3
     addi $sp, $sp, 4     # Increment stack pointer by 4
As we can see, the data is still on the stack, but once the pop operation is completed, we consider that part of the data invalid. Thus, the next push operation overwrites this data. 43 Function call
  • 44.  Function call with respect to stack That's why, when we return a pointer to a local variable or to a parameter that was passed by value, the value may appear valid at first but gets corrupted later on: the data stays in the garbage part of the stack until the next push operation overwrites it, and that is when the data gets corrupted. 44 Function call
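A minimal C sketch of this pitfall (getLocalAddress is an invented name for illustration):

#include <stdio.h>

/* BAD: returns the address of a local variable that lives in this
   function's stack frame; the frame is popped when the function returns. */
int *getLocalAddress(void)
{
    int local = 10;
    return &local;
}

int main(void)
{
    int *p = getLocalAddress();
    /* The value 10 may still appear to be there at first, but any later
       function call can push a new frame over that memory, so
       dereferencing p here is undefined behavior. */
    printf("%d\n", *p);
    return 0;
}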
  • 45.  Function call with respect to stack Stack and Functions: Let's now see how the stack is used to implement functions. For each function call, there's a section of the stack reserved for the function. This is usually called a stack frame. Let's imagine we're starting in main() in a C program. The stack looks something like this: 45 Function call
  • 46.  Function call with respect to stack We'll call this the stack frame for main(). A stack frame exists whenever a function has started but has yet to complete. Suppose, inside the body of main(), there's a call to foo(). Suppose foo() takes two arguments. One way to pass the arguments to foo() is through the stack. Thus, there needs to be assembly language code in main() to "push" arguments for foo() onto the stack. The result looks like: 46 Function call
  • 47.  Function call with respect to stack As you can see, by placing the arguments on the stack, the stack frame for main() has increased in size. We also reserved some space for the return value. The return value is computed by foo(), so it will be filled out once foo() is done. Once we get into code for foo(), the function foo() may need local variables, so foo() needs to push some space on the stack, which looks like: 47 Function call
  • 48.  Function call with respect to stack foo() can access the arguments passed to it from main() because the code in main() places the arguments just as foo() expects them. We've added a new pointer called FP, which stands for frame pointer. The frame pointer points to the location where the stack pointer was just before foo() moved the stack pointer for foo()'s own local variables. Having a frame pointer is convenient when a function is likely to move the stack pointer several times throughout the course of running the function. The idea is to keep the frame pointer fixed for the duration of foo()'s stack frame. The stack pointer, in the meanwhile, can change values. Thus, we can use the frame pointer to compute the locations in memory of both arguments and local variables. Since it doesn't move, the computations for those locations should be some fixed offset from the frame pointer. 48 Function call
  • 49.  Function call with respect to stack And, once it's time to exit foo(), you just have to set the stack pointer to where the frame pointer is, which effectively pops off foo()'s stack frame. It's quite handy to have a frame pointer. We can imagine the stack growing if foo() calls another function, say, bar(). foo() would push arguments on the stack just as main() pushed arguments on the stack for foo(). So when we exit foo() the stack looks just as it did before we pushed on foo()'s stack frame, except this time the return value has been filled in. 49 Function call
  • 50.  Function call with respect to stack 50 Function call Once main() has the return value, it can pop that and the arguments to foo() off the stack:
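To connect this picture to C code, a call chain like the one described above could look like the sketch below (foo and bar are the hypothetical names used in the slides; note that on many real targets some arguments travel in registers rather than on the stack, depending on the calling convention):

int bar(int n)               /* bar()'s frame sits on top while it runs          */
{
    return n + 1;
}

int foo(int a, int b)        /* foo()'s frame holds a, b and the local sum       */
{
    int sum = a + b;
    return bar(sum);         /* pushes a frame for bar(), popped when it returns */
}

int main(void)               /* main()'s frame is at the bottom of the chain     */
{
    int result = foo(2, 3);  /* arguments go in, the return value comes back out */
    return result;
}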
  • 51.  Function call with respect to parameters Input parameter: a parameter passed to a function to provide information to the function; it can be passed by value or by address:  Pass by value: the variable's value is passed and gets a new name local to the function, so if this new variable is changed, the original variable that was passed will not be affected. void myFunc(int x) { x++; } Inside any other function we typed: int x = 10; myFunc(x); printf("%d", x); /* Still prints 10 */ 51 Function call
  • 52.  Function call with respect to parameters  Pass by address: the variable's address is passed and saved in a pointer, so if we dereference the pointer and change the variable it points to, then the original variable that was passed will be changed. void myFunc(int* ptr) { *ptr += 1; } Inside any other function we typed: int x = 10; myFunc(&x); printf("%d", x); /* Prints 11 */ Passing by address (by reference) is very useful when passing a structure, for example: instead of passing the whole structure we can just pass a pointer to it, so memory is saved. 52 Function call
  • 53.  Function call with respect to parameters Output parameter: a parameter passed to a function to be filled by the function with the required information, so it should be passed by address. void readTemperature(int *temp_ptr) { *temp_ptr = readADCValue(); } Inside any other function we typed: int temp; readTemperature(&temp); After returning from readTemperature(), temp will contain the updated temperature value. An output parameter is very useful when it is required to return multiple values, not only one. 53 Function call
  • 54.  Function call with respect to parameters I/O parameter: a parameter passed to a function to provide information and then filled by the function with new information, so it should also be passed by address. void myFunc(int *ptr) { *ptr = doSomeProcessing(*ptr); } 54 Function call
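Following the usage pattern of the previous slides (doSomeProcessing is assumed to be defined elsewhere), inside any other function we could type:

int x = 10;      /* x provides the input value                */
myFunc(&x);      /* the function reads x, then overwrites it  */
/* x now holds the result of doSomeProcessing(10) */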
  • 55.  Function call with respect to parameters Return value: a function can return only one value; its return type is void when there is no return value. int square(int x) { return x*x; } Inside any other function we typed: int sq = square(10); /* sq value will be 100 */ 55 Function call
  • 57.  Synchronous vs. Asynchronous function Synchronous function: the function returns only after all the required processing is finished; it may do some sort of busy wait for a specific operation like I/O. void getDeviceDataSynch(char *data) { /* Will not return until reading is finished */ *data = readInputDevice(); } Asynchronous function: the function returns immediately after submitting the operation request, and the operation makes some sort of callback to your code or an ISR when it completes to give its completion status. void getDeviceDataAsynch(void) { /* Will return immediately without any data */ submitReadRequest(); } 57 Functions types
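A minimal sketch of the asynchronous pattern with a completion callback (the callback type, the global g_callback and deviceReadCompleteISR are invented names for illustration; submitReadRequest is assumed to be provided by the driver as in the slide above):

typedef void (*read_callback_t)(char data);

extern void submitReadRequest(void);    /* assumed driver function that starts the read */

static read_callback_t g_callback;      /* remembered until the operation completes */

/* Asynchronous request: store the callback and start the hardware read. */
void getDeviceDataAsynchCb(read_callback_t cb)
{
    g_callback = cb;
    submitReadRequest();                /* returns immediately, no data yet */
}

/* Called later, e.g. from the device ISR, when the data is ready. */
void deviceReadCompleteISR(char data)
{
    if (g_callback != 0)
    {
        g_callback(data);               /* deliver the completion data/status */
    }
}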
  • 58.  Reentrant vs. Non-Reentrant function Reentrant function: A function is reentrant if it can be invoked while already in the process of executing. That is, a function is reentrant if it can be interrupted in the middle of execution (for example, by a signal or interrupt) and invoked again before the interrupted execution completes. A reentrant function shall satisfy the following conditions: • Should not call another non-reentrant function. • Should not use static variables. • Should not use any global variables. • Should not use any shared resources. Non-Reentrant function: a function is non-reentrant if it violates one or more of the reentrancy conditions. 58 Functions types
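A small illustrative contrast, using the classic swap example (the names swap_bad and swap_good are invented for this sketch):

int g_temp;                          /* shared global variable */

/* Non-reentrant: if an interrupt fires between the statements and calls
   swap_bad() again, g_temp is overwritten and the first swap is corrupted. */
void swap_bad(int *a, int *b)
{
    g_temp = *a;
    *a = *b;
    *b = g_temp;
}

/* Reentrant: uses only its parameters and a local (stack) variable, so
   nested or concurrent invocations do not interfere with each other. */
void swap_good(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}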
  • 59.  Recursive function Recursive function: Recursion is the process of repeating items in a self-similar way. In programming languages, if a program allows you to call a function inside the same function, then it is called a recursive call of the function. Example: the factorial function: int factorial(unsigned int i) { if(i <= 1) { return 1; } return i * factorial(i - 1); } A recursive function is not always safe, as it may lead to stack overflow, for example in case of a missing termination (base) condition. 59 Functions types
  • 60.  Inline function vs. Function-like macro Inline function: By declaring a function inline, you can direct the compiler to make calls to that function faster. One way it can achieve this is to integrate the function's code into the code of its callers (copy the function's code at the call location without an actual call). This makes execution faster by eliminating the function-call overhead. inline int myFunc(unsigned int i) { /* Any code */ } Note that inline is just a request to the compiler; it may be honored or not, whereas a function-like macro will always be replaced with the corresponding code lines. An inline function behaves like any normal function with respect to type checking and argument evaluation, so it doesn't suffer from the problems of a function-like macro. 60 Functions types
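A short illustration of the difference (SQUARE_MACRO, square_inline and readSensor are invented names; readSensor stands in for any call with side effects):

#include <stdio.h>

#define SQUARE_MACRO(x)  ((x) * (x))     /* textual replacement only, no type checking */

static inline int square_inline(int x)   /* type-checked, argument evaluated once */
{
    return x * x;
}

static int call_count = 0;

static int readSensor(void)              /* stand-in for a read with side effects */
{
    call_count++;
    return 3;
}

int main(void)
{
    int m = SQUARE_MACRO(readSensor());  /* expands to ((readSensor()) * (readSensor())):
                                            the "sensor" is read twice                  */
    int f = square_inline(readSensor()); /* readSensor() is called exactly once         */

    printf("m=%d f=%d calls=%d\n", m, f, call_count);   /* prints m=9 f=9 calls=3 */
    return 0;
}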