Embedded System Notes
ARM (Advanced RISC Machine) is a family of instruction set architectures for computer processors, originally developed by ARM Holdings (now Arm Ltd., majority-owned by SoftBank Group). ARM
architecture is widely used in a variety of computing devices, ranging from smartphones and tablets
to embedded systems and servers. Here are some key features and aspects of ARM architecture:
1. RISC Architecture: ARM follows the RISC design philosophy, which emphasizes a simple and
efficient instruction set. RISC architectures typically have a smaller number of instructions,
most of which are designed to execute in a single clock cycle.
2. Processor Modes: ARM processors support several operating modes, including User mode,
Supervisor mode (used by the operating system kernel), System mode, and dedicated exception
modes such as IRQ and FIQ. This flexibility allows the processor to execute instructions at
different privilege levels.
3. Instruction Sets: The ARM architecture is defined in versions, such as ARMv6, ARMv7, and
ARMv8. The instruction set evolves with each version, introducing new features, enhancements,
and support for additional technologies.
4. Thumb and Thumb-2 Instruction Sets: ARM processors support a compressed instruction set
called Thumb. Thumb instructions are 16 bits long, as opposed to the regular 32-bit ARM
instructions. Thumb instructions are used to reduce code size, which is crucial in memory-
constrained environments. Thumb-2 extends this idea by providing a mix of 16-bit and 32-bit
instructions for better performance (a code-size comparison example follows this list).
5. ARMv8-A Architecture: ARMv8-A is the version of the ARM architecture that introduced
64-bit processing. It adds the A64 instruction set, enabling support for 64-bit memory
addressing and processing. ARMv8-A is used in a variety of devices, including smartphones,
tablets, servers, and other computing systems.
6. ARM Cortex Processors: ARM Cortex processors are a family of processor cores designed
using the ARM architecture. They come in various configurations and are used in a wide
range of applications. Cortex-M cores are often used in microcontrollers, Cortex-R cores are
used in real-time systems, and Cortex-A cores are used in more performance-oriented
applications, including smartphones and servers.
7. Energy Efficiency: ARM architecture is known for its energy efficiency, making it particularly
suitable for battery-powered devices like smartphones and tablets. The emphasis on
efficiency also makes ARM processors popular for use in embedded systems and Internet of
Things (IoT) devices.
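Code-size comparison for the Thumb item above: a minimal sketch, assuming the GNU Arm Embedded toolchain (arm-none-eabi-gcc) is available; the function and file name are made up for illustration, and actual savings vary with the code and optimization level.

    #include <limits.h>

    /* thumb_demo.c: a trivial function used only to compare the size of the
     * generated code between the A32 and Thumb-2 encodings. */
    int saturating_add(int a, int b)
    {
        long long sum = (long long)a + (long long)b;
        if (sum > INT_MAX) return INT_MAX;
        if (sum < INT_MIN) return INT_MIN;
        return (int)sum;
    }

Compiling the same file twice and comparing section sizes shows the effect of the encoding:

    arm-none-eabi-gcc -c -Os -marm   thumb_demo.c -o arm.o     # 32-bit ARM (A32) encoding
    arm-none-eabi-gcc -c -Os -mthumb thumb_demo.c -o thumb.o   # 16/32-bit Thumb-2 encoding
    arm-none-eabi-size arm.o thumb.o                           # compare .text sizes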
Fetch Unit:
Instruction Fetch (IF): In this stage, the processor fetches the next
instruction from memory. The program counter (PC) is used to determine
the address of the instruction to be fetched.
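A rough model of the fetch stage, not tied to any particular ARM core; the memory array and function name below are invented for the sketch, and a fixed 4-byte step models the A32 encoding.

    #include <stdint.h>

    #define MEM_WORDS 1024u

    static uint32_t instr_mem[MEM_WORDS];   /* modelled instruction memory   */
    static uint32_t pc = 0;                 /* program counter, byte address */

    /* Fetch the 32-bit instruction the PC points at, then advance the PC.
     * Thumb code would advance by 2 or 4 bytes per instruction instead. */
    uint32_t fetch_next(void)
    {
        uint32_t instr = instr_mem[(pc >> 2) % MEM_WORDS];  /* word-aligned read;
                                                               wrap only for the sketch */
        pc += 4;                                            /* point at the next instruction */
        return instr;
    }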
Endianness:
Endianness refers to the order in which bytes are stored in memory. ARM processors can
operate in both big-endian and little-endian modes, and the endianness can be configured
based on the specific requirements of the system.
1. Big-Endian (BE): In big-endian mode, the most significant byte (MSB) is
stored at the lowest memory address, and the least significant byte (LSB)
is stored at the highest memory address. This is often represented as "BE"
or "BE32" (big-endian 32-bit).
2. Little-Endian (LE): In little-endian mode, the least significant byte (LSB)
is stored at the lowest memory address, and the most significant byte
(MSB) is stored at the highest memory address. This is often represented
as "LE" or "LE32" (little-endian 32-bit).
The endianness of an ARM processor is configurable, and the choice of
endianness is made during the design of the system or the implementation of the
processor. Some ARM-based systems are designed to operate exclusively in
big-endian mode, while others use little-endian mode. Some ARM processors
even support both modes, allowing the endianness to be selected by software or
system configuration.
How endianness is configured depends on the processor. On ARMv6 and later A- and R-profile
cores, the E bit in the CPSR selects the data endianness and can be changed with the SETEND
instruction (deprecated in later architecture versions). On many Cortex-M devices the data
endianness is instead fixed when the chip is implemented and is only reported to software,
typically through a read-only bit in a system control register.
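Regardless of how a given part is configured, software can check the byte order it is running under at run time; the snippet below is a common C idiom rather than anything ARM-specific.

    #include <stdint.h>
    #include <stdio.h>

    /* Inspect how a known 32-bit value is laid out in memory.
     * On a little-endian core the lowest address holds 0x78;
     * on a big-endian core it holds 0x12. */
    int main(void)
    {
        uint32_t value = 0x12345678u;
        const uint8_t *bytes = (const uint8_t *)&value;

        printf("byte at lowest address: 0x%02X -> %s-endian\n",
               (unsigned)bytes[0], (bytes[0] == 0x78) ? "little" : "big");

        for (int i = 0; i < 4; i++)
            printf("addr+%d: 0x%02X\n", i, (unsigned)bytes[i]);

        return 0;
    }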
Cache Mechanism:
ARM processors often incorporate cache memory as part of their memory hierarchy to improve
performance by reducing the time it takes to access frequently used data. Caches store copies of
frequently accessed data and instructions, allowing the processor to retrieve them more quickly than
fetching from the main memory. The cache mechanism in ARM architectures can vary depending on
the specific ARM core and the implementation by the chip manufacturer, but here are some common
features:
Cache Lines:
The cache is organized into cache lines, which are blocks of contiguous memory. When a piece of
data is loaded into the cache, the entire cache line containing it is brought in.
Cache Associativity:
Determines how memory addresses map to cache sets. Common organizations include
direct-mapped, set-associative, and fully associative caches.
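As a sketch of how this mapping works, the snippet below splits a byte address into offset, set index, and tag for a hypothetical direct-mapped cache with 32-byte lines and 256 sets (8 KB total); real ARM caches have per-core geometries.

    #include <stdint.h>

    #define LINE_BYTES 32u    /* 5 offset bits */
    #define NUM_SETS   256u   /* 8 index bits  */

    typedef struct {
        uint32_t offset;  /* byte position within the cache line */
        uint32_t index;   /* which set the address maps to       */
        uint32_t tag;     /* identifies the line within that set */
    } cache_addr_t;

    cache_addr_t split_address(uint32_t addr)
    {
        cache_addr_t a;
        a.offset = addr % LINE_BYTES;
        a.index  = (addr / LINE_BYTES) % NUM_SETS;
        a.tag    = addr / (LINE_BYTES * NUM_SETS);
        return a;
    }

With a set-associative cache, the same index selects a set of several lines and the tag comparison picks the matching way; a fully associative cache has no index bits at all.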
Write Policies:
Write-Through: Data is written to both the cache and the main memory simultaneously.
Write-Back: Data is written to the cache first and then later to the main memory when the cache line
is replaced.
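A minimal sketch of the difference between the two policies; cache_line_t and the memory_store hook are invented purely for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical backing-store hook, declared only for the sketch. */
    extern void memory_store(uint32_t addr, uint8_t value);

    typedef struct {
        bool    valid;
        bool    dirty;          /* meaningful only for write-back */
        uint8_t data[32];       /* one 32-byte cache line         */
    } cache_line_t;

    /* Write-through: each store updates the line and main memory together. */
    void store_write_through(cache_line_t *line, uint32_t addr, uint8_t value)
    {
        line->data[addr % 32] = value;
        memory_store(addr, value);      /* memory stays up to date */
    }

    /* Write-back: the store only dirties the line; main memory is updated
     * later, when the line is evicted or explicitly cleaned. */
    void store_write_back(cache_line_t *line, uint32_t addr, uint8_t value)
    {
        line->data[addr % 32] = value;
        line->dirty = true;
    }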
Cache Coherency:
In multi-core or multi-processor systems, cache coherency mechanisms ensure that all processors
see a consistent view of memory. This involves maintaining consistency between the caches and the
main memory.
Replacement Policy:
Determines which cache line to evict when a new line needs to be loaded. Common policies include
Least Recently Used (LRU), First-In-First-Out (FIFO), and random replacement.
Prefetching:
Some ARM processors implement prefetching mechanisms to predict and fetch data into the cache
before it is actually needed, reducing memory access latency.
Cache Locking:
Allows certain cache lines to be "locked" in the cache, preventing them from being replaced. This can
be useful in real-time systems or for critical code sections.
Cache Maintenance:
ARM architectures provide specific instructions for cache maintenance, such as flushing the cache,
invalidating cache lines, or cleaning and invalidating a specific cache entry.
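For example, on a Cortex-M7 part with a data cache, the CMSIS-Core header provides maintenance helpers such as SCB_CleanDCache_by_Addr and SCB_InvalidateDCache_by_Addr; the buffer, its size, and the device header name below are placeholders for the sketch.

    #include <stdint.h>
    #include "device.h"   /* placeholder: the vendor header that pulls in CMSIS-Core */

    /* Buffer shared with a DMA peripheral, aligned to the 32-byte line size. */
    static uint32_t dma_buffer[64] __attribute__((aligned(32)));

    /* Before handing the buffer to the DMA engine, clean (write back) the
     * cached copy so the peripheral sees the data the CPU wrote. */
    void prepare_buffer_for_dma(void)
    {
        SCB_CleanDCache_by_Addr(dma_buffer, sizeof(dma_buffer));
    }

    /* After DMA has filled the buffer, invalidate the stale cached copy so
     * the CPU re-reads the new data from main memory. */
    void read_buffer_after_dma(void)
    {
        SCB_InvalidateDCache_by_Addr(dma_buffer, sizeof(dma_buffer));
    }

Cleaning before a DMA transfer out of memory and invalidating before reading DMA results back is a typical pattern on cached cores.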
Memory Management Unit (MMU):
The Memory Management Unit (MMU) is a crucial component in modern computer architectures,
including those based on ARM processors. The MMU plays a significant role in managing the
virtual memory system, providing features such as address translation, memory protection, and
memory access control. Here are key aspects of the Memory Management Unit in ARM
architectures:
1. Virtual Memory:
The MMU enables the use of virtual memory, allowing programs to execute as if
they have access to a large, contiguous block of memory, even if the physical
memory is fragmented or limited.
2. Address Translation:
The MMU translates virtual addresses used by programs into physical addresses in
the actual memory. This translation is typically done using a combination of page
tables and page table entries.
3. Page Tables:
ARM processors typically use a two-level or three-level page table structure for
address translation. The page tables store information about the mapping between
virtual and physical addresses (a worked address-split sketch appears at the end of these notes).
4. Translation Lookaside Buffer (TLB):
The TLB is a cache that stores recently used virtual-to-physical address translations,
speeding up the address translation process. The TLB helps avoid walking the page
tables on every memory access.
5. Memory Protection:
The MMU provides mechanisms for memory protection, allowing the operating
system to control access rights to different regions of memory. This includes read-
only, read-write, and execute permissions.
6. Caching Control:
The MMU allows the operating system to control the caching behaviour for
different memory regions. This includes specifying whether a region should be
cached or not, and if cached, the type of caching (write-through or write-back).
7. Domains:
Some ARM architectures include a feature known as domains, which lets the MMU
group pages and control access to those groups. This is useful for enforcing
access control policies.
8. Security Features:
The MMU can support security features such as address space layout randomization
(ASLR), which randomizes the location of executable code and data in the virtual
address space, making it harder for attackers to predict memory locations.
9. Fault Handling:
The MMU generates faults for memory-related errors, such as page faults, access
violations, and other exceptional conditions. The operating system handles these
faults to preserve the integrity and security of the system.
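As referenced under the page-table item above, the sketch below shows how a 32-bit virtual address is split under the ARMv7-A short-descriptor translation scheme with 4 KB small pages; it is illustrative only and ignores descriptor contents and permission bits.

    #include <stdint.h>
    #include <stdio.h>

    /* ARMv7-A short-descriptor format, 4 KB small pages:
     *   bits [31:20] -> level-1 table index (4096 entries)
     *   bits [19:12] -> level-2 table index (256 entries)
     *   bits [11:0]  -> offset within the 4 KB page            */
    void split_virtual_address(uint32_t va)
    {
        uint32_t l1_index    = (va >> 20) & 0xFFFu;
        uint32_t l2_index    = (va >> 12) & 0xFFu;
        uint32_t page_offset =  va        & 0xFFFu;

        printf("VA 0x%08X -> L1 index %u, L2 index %u, page offset 0x%03X\n",
               (unsigned)va, (unsigned)l1_index, (unsigned)l2_index,
               (unsigned)page_offset);
    }

    int main(void)
    {
        split_virtual_address(0x80123456u);   /* arbitrary example address */
        return 0;
    }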