
Basic Interrupt Stack Design and Implementation
Exception Handlers and Stack Design in ARM-Based Systems

Exception handlers in ARM-based systems use stacks extensively. Each ARM processor mode has a dedicated (banked) register for the stack pointer.
The design of these stacks is influenced by two main factors:
1. Operating System Requirements: Each operating system has
specific requirements for stack design.
2. Target Hardware: The physical constraints of the hardware,
such as the size and positioning of memory, influence stack
design.
Key Design Decisions for Exception Stacks

1. Location:
The stack's location in the memory map must be
determined.
ARM-based systems typically use a descending stack, starting from a high memory address and growing toward lower addresses.

2. Stack Size:
The size of the stack depends on the type of interrupt
handler.
Nested interrupt handlers require more memory because
the stack grows with each nested interrupt.
Preventing Stack Overflow
A well-designed stack should prevent stack overflow, which can
lead to system instability. Two main software techniques are
used to detect and handle overflow:

1. Memory Protection: This technique uses hardware features to limit stack growth and catch overflows.
2. Stack Check Functions: These functions are called at the
start of each routine to ensure there is enough space on the
stack.
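
A minimal stack-check sketch in ARM assembly is shown below. It assumes a descending stack, a symbol STACK_LIMIT marking the lowest legal stack address, and a hypothetical stack_overflow handler label; none of these names come from the original material.

```assembly
        ; Hedged sketch of a stack check at routine entry: STACK_LIMIT and
        ; stack_overflow are assumed symbols, not part of any particular API.
        LDR     r12, =STACK_LIMIT      ; lowest address the stack may reach
        CMP     r13, r12               ; compare the current stack pointer
        BLO     stack_overflow         ; branch out if the stack has overflowed
```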

IRQ Mode Stack

• The IRQ mode stack must be set up before enabling interrupts.
• This setup is usually done in the system's initialization code.
• In simple embedded systems, knowing the maximum stack size
is crucial since it is reserved during boot-up by the firmware.
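
As an illustration, a minimal IRQ stack set-up sequence might look like the sketch below. It assumes the code runs in a privileged mode with interrupts still disabled and that IRQ_STACK_TOP is an assumed symbol for the top of the reserved IRQ stack area; CPSR values 0xd2 and 0xd3 select IRQ and SVC mode with the I and F bits set.

```assembly
        ; Hedged IRQ stack initialization sketch; IRQ_STACK_TOP is assumed.
        MSR     cpsr_c, #0xd2          ; switch to IRQ mode, IRQ/FIQ disabled
        LDR     r13, =IRQ_STACK_TOP    ; load the banked IRQ-mode stack pointer
        MSR     cpsr_c, #0xd3          ; return to SVC mode before enabling IRQs
```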
Memory Layouts for Stacks

• Two typical memory layouts for stacks in a linear address space are
shown in Figure 9.6:
1. Layout A:
- The interrupt stack is located underneath the code segment.
- This traditional layout has the potential risk of corrupting the vector
table in the event of a stack overflow.

2. Layout B:
- The interrupt stack is placed at the top of memory, above the user
stack.
- The main advantage of this layout is that it prevents the corruption
of the vector table during a stack overflow, giving the system a
chance to recover.
Firmware
• Definition: Firmware is the deeply embedded, low-
level software that provides an interface between the
hardware and the application or operating system
level software.
• It resides in the ROM and executes when power is
applied to the embedded hardware system.
• Firmware can remain active after system
initialization to support basic system operations.
Functionality
• Provides a stable mechanism to load and boot an
operating system.
• Ensures the platform is correctly initialized,
including configuring system registers and setting
up the memory map.
• Identifies and discovers the exact core and platform
it is operating on, typically by reading specific
registers or peripherals.
Bootloader

• Definition: The boot loader is a small application that installs the operating system or application onto a hardware target.
• It only exists up to the point that the operating
system or application is executing and is
commonly incorporated into the firmware.
Firmware Execution Flow
Stage 1: Set Up Target Platform
Stage 2: Abstract the Hardware
Stage 3: Load a Bootable Image
Stage 4: Relinquish Control
Stage 1: Set Up Target Platform

• Program the Hardware System Registers: Initialize control registers and configure the memory map to the expected layout for the operating system.
• Platform Identification: Identify the processor core
and platform, typically by reading specific registers
or checking for particular peripherals.
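
For example, one common way to identify the core is to read the CP15 main ID register; the sketch below is illustrative, and decoding the result is core-specific.

```assembly
        ; Read the CP15 main ID register to identify the processor core.
        MRC     p15, 0, r0, c0, c0, 0  ; r0 = implementer, part number, revision
        ; Firmware can then compare fields of r0 against known values to
        ; select platform-specific initialization code.
```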
Stage 2: Abstract the Hardware

• Hardware Abstraction Layer (HAL): Provides a consistent programming interface regardless of the underlying hardware. This allows for easier porting of the firmware to different platforms.
• Device Drivers: Implement specific functionalities for
hardware peripherals, providing standard APIs to
interact with these devices.
Stage 3: Load a Bootable Image

• Basic Filing System: Many ARM-based systems use a flash ROM filing system (FFS) to store multiple executable images. The firmware must understand the filing system format to read and load the correct image.
• Image Formats: Firmware must handle different image formats, such as plain binary or Executable and Linking Format (ELF).
• ELF is common in ARM-based systems and can be relocatable, executable, or shared object. The firmware may need to decrypt or decompress the image before loading.
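
As an illustration, the format check can be as simple as testing the first word of the image for the ELF magic number; IMAGE_BASE and not_elf_image below are assumed symbols used only for this sketch.

```assembly
        ; Hedged sketch: detect an ELF image by its magic number 0x7F 'E' 'L' 'F'.
        LDR     r0, =IMAGE_BASE        ; assumed load address of the image
        LDR     r1, [r0]               ; first word of the image
        LDR     r2, =0x464C457F        ; "\x7FELF" as a little-endian word
        CMP     r1, r2
        BNE     not_elf_image          ; fall back to plain-binary handling
```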


Stage 4: Relinquish Control
• Update Vector Table: Modify exception and interrupt
vectors to point to the operating system handlers.
• Modify the Program Counter (PC): Set the PC to the
entry point address of the operating system.
• Pass Data Structures: For sophisticated operating
systems like Linux, pass a data structure to the
kernel that describes the environment, such as
available RAM and MMU type.
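
A minimal hand-off sketch for a Linux-style kernel is shown below, following the convention of r0 = 0, r1 = machine type number, r2 = address of the boot parameter structure; KERNEL_ENTRY, MACH_TYPE and PARAM_BASE are assumed symbols.

```assembly
        ; Hedged sketch of relinquishing control to a Linux-style kernel.
        MOV     r0, #0                 ; r0 must be zero by convention
        LDR     r1, =MACH_TYPE         ; board/machine type number (assumed)
        LDR     r2, =PARAM_BASE        ; address of the boot parameter structure
        LDR     pc, =KERNEL_ENTRY      ; jump to the operating system entry point
```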
Additional Features
• Diagnostics: Diagnostic software can quickly identify
basic hardware malfunctions.
• Debug Capability: Includes setting breakpoints,
modifying memory, viewing processor register
contents, and disassembling memory into ARM and
Thumb instructions.
• Command Line Interpreter (CLI): Advanced firmware implementations often include a CLI for changing which operating system is booted by altering default configurations. The CLI is typically controlled through a host terminal application over a serial or network connection.
ARM Firmware Suite (AFS)

• The ARM Firmware Suite (AFS) is a firmware package developed specifically for ARM-based embedded systems.
• It supports various boards and processors,
including the Intel XScale and StrongARM
processors. AFS includes two major components:
μHAL (micro-HAL) and Angel.
μHAL (Micro-HAL)

• μHAL is a Hardware Abstraction Layer that provides a low-level device driver framework.
• It allows the firmware to operate over different communication devices (e.g., USB, Ethernet, serial) and provides a standard API.
• This standardization makes the porting process straightforward, as the hardware-specific parts must be implemented according to the μHAL API functions.


Key Features of μHAL

1. System Initialization:
- Sets up the target platform and processor core.
- Can be simple or complex depending on the platform.

2. Polled Serial Driver:
- Provides basic communication with a host.

3. LED Support:
- Controls LEDs for simple user feedback and operational status
display.
4. Timer Support:
- Sets up periodic interrupts, essential for preemptive context-
switching operating systems.

5. Interrupt Controllers:
- Supports various interrupt controllers.
μHAL CLI and Boot Monitor
- μHAL includes a command-line interface (CLI) for interacting with the boot monitor.
- The CLI allows basic operations like configuring the system and handling communication.

Angel Debug Monitor

Angel is a debug monitor that facilitates communication between a host debugger and the target platform. It enables:
- Inspecting and modifying memory.
- Downloading and executing images.
- Setting breakpoints.
- Displaying processor register contents.


Key Features of Angel

1. SWI Instructions:
- Provides APIs for programs to open,
read, and write to a host filing system.

2. IRQ/FIQ Interrupts:
- Used for communication with the host
debugger.
3. Host Debugger Control:
- All controls are managed through the
host debugger, requiring access to the SWI,
IRQ, or FIQ vectors.
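
The sketch below illustrates the flavour of an SWI-based monitor request, using the conventional ARM semihosting operation SYS_WRITE0 (r0 = 0x04, r1 = string pointer, SWI 0x123456 in ARM state); the exact numbers used by a particular Angel build are an assumption here, not confirmed by this material.

```assembly
        ; Hedged sketch of an SWI-based request to a debug monitor, in the
        ; style of ARM semihosting; the operation numbers are assumptions.
        ADR     r1, hello_str          ; r1 = address of NUL-terminated string
        MOV     r0, #0x04              ; SYS_WRITE0: write string to host console
        SWI     0x123456               ; trap into the monitor (ARM state)
stop    B       stop                   ; loop forever after the request

hello_str
        DCB     "hello from target", 0
```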
Key Features of RedBoot

RedBoot is a boot firmware and debug monitor developed by Red Hat, built around the eCos hardware abstraction layer.
1. Communication:
-Serial Communication: Uses the X-Modem
protocol to communicate with GDB over a
serial connection.
-Ethernet Communication: Utilizes TCP to
communicate with GDB over Ethernet.
- Network Standards: Supports bootp, telnet,
and tftp, enabling versatile network
communication options.
2. Flash ROM Memory Management:
- Provides a set of file system routines for downloading, updating, and erasing images in flash ROM.
- Supports both compressed and uncompressed images, offering flexibility in how data is stored and managed in flash memory.
3. Full Operating System Support:
- Capable of loading and booting a variety of operating systems, including Embedded Linux and Red Hat eCos.
- For Embedded Linux, RedBoot allows the definition of parameters that are passed directly to the kernel upon booting, facilitating customized boot configurations.


Sandstone

• Sandstone is designed as a minimalistic firmware system aimed at the ARM Evaluator-7T platform, which includes an ARM7TDMI processor.
• It primarily focuses on
– setting up the target platform environment,

– loading a bootable image into memory, and

– transferring control to an operating system or application.

• Sandstone is a static design, meaning it cannot be reconfigured after the build process is complete.
Key Characteristics of Sandstone

Code: Written entirely in ARM assembly instructions.
Toolchain: ARM Developer Suite 1.2.
Image Size: 700 bytes.
Source Size: 17 KB.
Memory Remapping: Supported.
Sandstone Directory Layout
The directory structure of Sandstone follows a standard
style:
• makefile: Used for building the project.
• readme.txt: Contains instructions on building the
binary image.
• build: Directory where build-related files are placed.
• payload: Contains the payload image that is loaded
and booted by Sandstone.
• src: Contains the Sandstone source file (sand.s).
• obj: Directory for the object file produced by the
assembler.
• image: Directory for the final linked Sandstone image,
including both Sandstone code and the payload.
Sandstone Code Structure
The Sandstone code is organized into a series of steps
corresponding to different stages in the execution flow:
1. Take the Reset Exception: Execution starts with a reset vector,
setting up dummy handlers and passing control to hardware
initialization code.
2. Start Initializing the Hardware: System registers are
configured, and the segment display is set up for feedback.
3. Remap Memory: SRAM is initialized, and memory is remapped to
make flash ROM available at a new location.
4. Initialize Communication Hardware: A serial port is configured,
and a standard banner is sent out to indicate firmware
functionality.
5. Bootloader: The payload is copied into SRAM, and control is
relinquished to the payload.
Detailed Execution Steps
Step 1: Take the Reset Exception
- Code:
```assembly
        AREA   start,CODE,READONLY
        ENTRY
sandstone_start
        B      sandstone_init1   ; reset vector
        B      ex_und            ; undefined vector
        B      ex_swi            ; swi vector
        B      ex_pabt           ; prefetch abort vector
        B      ex_dabt           ; data abort vector
        NOP                      ; not used ...
        B      int_irq           ; irq vector
        B      int_fiq           ; fiq vector

ex_und  B      ex_und            ; loop forever
ex_swi  B      ex_swi            ; loop forever
ex_dabt B      ex_dabt           ; loop forever
ex_pabt B      ex_pabt           ; loop forever
int_irq B      int_irq           ; loop forever
int_fiq B      int_fiq           ; loop forever
```
Step 2: Start Initializing the Hardware

- Code:
```assembly
sandstone_init1
LDR r3, =SYSCFG ; where SYSCFG=0x03ff0000
LDR r4, =0x03ffffa0
STR r4, [r3]
```
Step 3: Remap Memory
• Initial Memory State:
- Flash ROM: 0x00000000 - 0x00080000 (512K)
- SRAM bank 0: Unavailable (256K)
- SRAM bank 1: Unavailable (256K)
• New Memory State:
- Flash ROM: 0x01800000 - 0x01880000 (512K)
- SRAM bank 0: 0x00000000 - 0x00040000 (256K)
- SRAM bank 1: 0x00040000 - 0x00080000 (256K)
• Code:
```assembly
        LDR   r14, =sandstone_init2
        LDR   r4, =0x01800000            ; new flash ROM location
        ADD   r14, r14, r4
        ADRL  r0, memorymaptable_str
        LDMIA r0, {r1-r12}
        LDR   r0, =EXTDBWTH              ; =(SYSCFG + 0x3010)
        STMIA r0, {r1-r12}
        MOV   pc, r14                    ; jump to remapped memory
sandstone_init2
```
Step 4: Initialize Communication Hardware
• Settings: 9600 baud, no parity, one stop bit, no flow control.
• Banner:
```
Sandstone Firmware (0.01)
- platform ......... e7t
- status ........... alive
- memory ........... remapped
+ booting payload ...
```
Step 5: Bootloader—Copy Payload and Relinquish Control
• Code:
```assembly
sandstone_load_and_boot
        MOV   r13, #0                    ; destination addr
        LDR   r12, payload_start_address ; start addr
        LDR   r14, payload_end_address   ; end addr
_copy
        LDMIA r12!, {r0-r11}
        STMIA r13!, {r0-r11}
        CMP   r12, r14
        BLT   _copy
        MOV   pc, #0

payload_start_address
        DCD   startAddress
payload_end_address
        DCD   endAddress
```
• This sequence ensures the payload is loaded into SRAM and executed, completing the boot process.
MODULE 5: CACHES
• Cache policies are essential to the efficient
operation of a cache memory system.
• They determine how data is managed within
the cache, particularly when the cache is
full and new data needs to be brought in.
• There are primarily three types of cache
policies:
– Cache Replacement Policy
– Write Policy
– Allocation Policy
1. Cache Replacement Policy

• This policy defines how a cache line is selected for eviction when a cache miss occurs and the cache is full. Several common replacement policies include:
• Least Recently Used (LRU): The cache line that hasn't been accessed for the longest time is evicted.
• First-In-First-Out (FIFO): The oldest cache line (the first one brought into the cache) is evicted.
• Optimal Replacement: This is an ideal but impractical policy where the cache line that won't be used for the longest time in the future is evicted. It serves as a benchmark for other policies.
• Random Replacement: A random cache line is selected for eviction.
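
ARM cores generally fix the replacement policy in hardware rather than in software. As a hedged illustration, on several ARM9-family cores the choice between random and round-robin replacement is made through the RR bit (bit 14) of the CP15 control register, as sketched below; check the specific core's reference manual before relying on this.

```assembly
        ; Hedged sketch: select round-robin cache replacement via CP15 c1.
        ; Bit 14 (RR) is assumed here; its presence and meaning are core-specific.
        MRC     p15, 0, r0, c1, c0, 0  ; read the control register
        ORR     r0, r0, #0x4000        ; set the RR bit (round-robin replacement)
        MCR     p15, 0, r0, c1, c0, 0  ; write the control register back
```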
2. Write Policy

• This policy determines how write operations are handled in the cache and main memory. The two primary write policies are:
• Write-Through: Data is written to both the
cache and main memory simultaneously.
• Write-Back: Data is written only to the cache.
The modified cache line is written back to
main memory only when it is evicted from the
cache (dirty bit is set).
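
To connect this to ARM practice: with a write-back cache, software sometimes needs to force dirty lines out to main memory (for example before a DMA transfer). The sketch below uses a CP15 register 7 clean operation in the style of ARMv4/ARMv5 cores; the exact encoding varies by core, so treat it as illustrative.

```assembly
        ; Hedged sketch: clean (write back) one dirty data-cache line to main
        ; memory. The operation encoding is ARMv5-style and core-specific.
        MCR     p15, 0, r0, c7, c10, 1 ; clean D-cache line for the address in r0
```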
3. Allocation Policy

• This policy specifies when a cache line is brought into the cache. The two main allocation policies are:
• Write Allocate: A cache line is brought into
the cache on a write miss.
• No Write Allocate: A cache line is brought into
the cache only on a read miss.
Memory Hierarchy
Levels of the Memory Hierarchy

• Processor Core: The heart of the system, containing the register file
for immediate data access.
• Chip Level:
– Tightly Coupled Memory (TCM): High-speed memory directly accessible by
the processor core.
– Level 1 Cache (L1 Cache): Small, fast cache for storing frequently accessed
data and instructions.
– Write Buffer: Temporary storage for data to be written to main memory,
improving write performance.
• Board Level:
– SRAM: Fast static random access memory.
– DRAM: Main system memory, slower than SRAM but larger in capacity.
– Flash Memory: Non-volatile memory for storing data persistently.
• Device Level:
– Secondary Storage: Slower, larger capacity storage like hard drives, SSDs,
and optical drives.
Key Concepts

• Memory Access Time: The time it takes to retrieve data from memory. Higher levels in the hierarchy have faster access times.
• Memory Capacity: The amount of data a
memory component can store. Generally,
higher levels have smaller capacities.
• Memory Cost: The cost per unit of storage.
Typically, higher levels are more expensive.
• Figure 12.2 shows the relationship that a cache has with main
memory system and the processor core.
• The upper half of the figure shows a block diagram of a system
without a cache. Main memory is accessed directly by the
processor core using the datatypes supported by the processor
core.
• The lower half of the diagram shows a system with a cache. The
cache memory is much faster than main memory and thus
responds quickly to data requests by the core.
• The cache’s relationship with main memory involves the transfer of small blocks of data from the slower main memory to the faster cache memory.
• These blocks of data are known as cache lines.
• The write buffer acts as a temporary buffer that frees available
space in the cache memory.
• The cache transfers a cache line to the write buffer at high speed
and then the write buffer drains it to main memory at slow speed.
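
When software must be certain that all buffered writes have actually reached main memory, many ARM cores provide a CP15 drain operation; the sketch below uses the ARMv5-style encoding and is illustrative only.

```assembly
        ; Hedged sketch: wait until the write buffer has emptied to main memory.
        MOV     r0, #0
        MCR     p15, 0, r0, c7, c10, 4 ; drain write buffer (ARMv5-style encoding)
```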
Caches and Memory Management Units

• A cache can be placed between the core and the MMU, or between the MMU and physical memory.
• Placement affects the addressing realm and how programmers view the cache system.

Logical (Virtual) Cache:
- Stores data in a virtual address space.
- Located between the processor and the MMU.
- The processor can access data directly without going through the MMU.
- Also known as a virtual cache.

Physical Cache
- Stores data using physical addresses.
- Located between the MMU and main memory.
- Processor must translate virtual addresses to
physical addresses via the MMU before accessing
data.
ARM Processor Cache Usage:
- ARM7 to ARM10 (including Intel StrongARM and
Intel XScale) use logical caches.
- ARM11 processor family uses physical caches.
Cache Performance Improvement:
• Caches improve performance due to
predictable program execution.
• The principle of locality of reference explains
the performance gains.
• Programs often execute small loops of code
repeatedly, operating on local sections of data.
• Repeated use of the same code or data
benefits from being in faster cache memory.
• Loading data into the cache on first access
ensures subsequent accesses are much faster.
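
To round this off with a concrete step, the sketch below shows the typical way the instruction and data caches are turned on through the CP15 control register (I bit = bit 12, C bit = bit 2). On most cores the data cache also requires the MMU or protection unit to be enabled first; the details are core-specific, so this is an illustration rather than a definitive sequence.

```assembly
        ; Hedged sketch: enable the instruction and data caches via CP15 c1.
        MRC     p15, 0, r0, c1, c0, 0  ; read the control register
        ORR     r0, r0, #0x1000        ; set the I bit (instruction cache enable)
        ORR     r0, r0, #0x0004        ; set the C bit (data cache enable)
        MCR     p15, 0, r0, c1, c0, 0  ; write the control register back
```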
