AN5156: Introduction to security for STM32 MCUs - STMicroelectronics
Application note
Introduction
This application note presents the basics of security in STM32 microcontrollers.
Security in microcontrollers encompasses several aspects including protection of firmware intellectual property, protection of
private data in the device, and guarantee of a service execution.
The context of IoT has made security even more important. The huge number of connected devices makes them an attractive
target for attackers and several remote attacks have shown the vulnerabilities of device communication channels. With IoT, the
security extends the requirements for confidentiality and authentication to communication channels, which often require
encryption.
This document is intended to help the building of a secure system by applying countermeasures to different types of attack.
In the first part, after a quick overview of different types of threats, examples of typical attacks are presented to show how
attackers exploit the different vulnerabilities in an embedded system.
The subsequent sections focus on the set of hardware and software protections that defend the system from these attacks.
The last sections list all security features available in the STM32 Series, and guidelines are given to build a secure system.
Microcontrollers: STM32C0 Series, STM32F0 Series, STM32F1 Series, STM32F2 Series, STM32F3 Series, STM32F4 Series, STM32F7 Series, STM32G0 Series, STM32G4 Series, STM32H5 Series, STM32H7 Series, STM32L0 Series, STM32L1 Series, STM32L4 Series, STM32L4+ Series, STM32L5 Series, STM32U5 Series, STM32WB Series, STM32WBA Series, STM32WL Series, STM32U0 Series, STM32WB0 Series.
1 General information
The table below presents a nonexhaustive list of the acronyms used in this document and their definitions.
Table 2. Glossary
Term | Definition
Documentation references
The reference manual of each device gives details on the availability of security features. It also describes the memory protection implementation.
A programming manual is also available for each Arm® Cortex® version and can be used for an MPU (memory
protection unit) description:
• STM32 Cortex®-M33 MCUs programming manual (PM0264)
• STM32F7 series and STM32H7 series Cortex®-M7 processor programming manual (PM0253)
• STM32 Cortex®-M4 MCUs and MPUs programming manual (PM0214)
• STM32F10xxx/20xxx/21xxx/L1xxxx Cortex®-M3 programming manual (PM0056)
• Cortex®-M0+ programming manual for STM32L0, STM32G0, STM32WL, STM32WB, and STM32WB0
series (PM0223)
Refer to the following set of user manuals and application notes (available on www.st.com) for detailed
description of security features:
Ref. | Doc number | Title | Comment
2 Overview
Figure. Attack propagation: a corrupted, unsecure device can propagate an attack to other devices and to the services provider.
The table below lists typical assets of an embedded system and the risks associated with their compromise.
Asset | Risk
Sensor data (such as healthcare data or log of positions) | Unauthorized sale of personal data
User data (such as ID, PIN, password, or accounts) | Usurpation
Transaction logs | Spying
Cryptographic keys | Blackmail, denial of service
Control of device (bootloader, malicious application) | Loss of device correct functionality, device/user identity theft, fraudulent access to service (cloud), attacks on service providers
Device hardware architecture/design | Device counterfeit
User code, software patent/architecture | Software counterfeit, software modification
Technology patents | Access to secure areas
3 Attack types
This section presents the different types of attack that a microcontroller may have to face, from the most basic
ones to very sophisticated and expensive ones. The last part presents typical examples of attacks targeting
an IoT system.
Attacks on microcontrollers are classified in one of the following types:
• software attacks: exploit software vulnerabilities (such as bugs or protocol weaknesses)
• hardware noninvasive attacks: focus on the MCU interfaces and environment information
• hardware invasive attacks: destructive attacks with direct access to the silicon
Attack types
While there are more detailed groups and categories of attack, the basic categories are the following:
• Software attacks are carried by exploiting bugs, protocol weaknesses, or untrusted pieces of code among
others. Attacks on communication channels (interception or usurpation) are part of this category. Software
attacks represent the vast majority of cases. Their cost may be very low. They can be widely spread and
repeated with huge damage. It is not necessary to have a physical access to the device. The attack can be
executed remotely.
• Hardware attacks need physical access to the device. The most obvious one exploits the debug port, if it
is not protected. However, in general, hardware attacks are sophisticated and can be very expensive. They
are carried out with specific materials and require electronics engineering skills. A distinction is made
between noninvasive attacks (carried out at board or chip level without device destruction), and invasive
attacks (carried out at device-silicon level with package destruction). In most cases, such an attack is only
profitable if it reveals information that leads to a new and widely applicable remote attack.
The table below gives an overview of the scope, techniques, and cost of each type of attack.
Attack type | Non-invasive | Semi-invasive | Invasive
Scope | Remote or local | Local, board and device level | Local, device level
Techniques | Software bugs, protocol weaknesses, Trojan horses, eavesdropping | Debug port, power glitches, fault injection, side-channel analysis | Probing, laser, FIB, reverse engineering
Cost/expertise | From very low to high, depending on the security failure targeted | Quite low cost: needs only moderately sophisticated equipment and knowledge to implement | Very expensive: needs dedicated equipment and very specific skills
Malware injection
There are various methods to inject a piece of code inside the system. The size of the malware depends on the
target but may be very small (few tens of bytes). To be executed, the malware must be injected in the device
memory (RAM or flash memory). Once injected, the challenge is to have it executed by the CPU, which means
that the PC (program counter) must branch to it.
Methods of injecting malware can be categorized as follows:
• basic device access / "open doors":
– Debug port: JTAG or SWD interface
– Bootloader: if accessible, can be used to read/write memory content through any available interface.
– Execution from external memory
These malware injections are easy to counter with simple hardware mechanisms that are described in Section 4: Device protections.
• Application download:
– Firmware update procedure: a malware can be transferred instead of a new firmware.
– OS with capability to download new applications
The countermeasure for this category is based on authentication, either between the device and the server or directly with code authentication. Authentication relies on cryptographic algorithms.
• Weaknesses of communication ports and bug exploitation:
– Execution of data: sometimes it is possible to sneak the malware in as data, and to exploit an incorrect boundary check to execute it.
– Stack-based buffer overflows, heap-based buffer overflows, jump-to-libc attacks, and data-only
attacks
This third category is by definition difficult to circumvent. Most embedded system applications are coded
using low-level languages such as C/C++. These languages are considered unsafe because they can lead
to memory management errors leveraged by attackers (such as stack, heap, or buffers overflow). The
general idea is to reduce as much as possible what is called the attack surface, by minimizing the
untrusted or unverified part of firmware. One solution consists in isolating the execution and the resources
of the different processes. For example, the TF-M includes such a mechanism.
• Use of untrusted libraries with device back door:
This last category is an intentional malware introduction that facilitates device corruption. Today, many firmware developments rely on software shared on the web, and complex pieces of code can hide Trojan horses. As in the previous category, the way to counter this threat is to reduce the attack surface by isolating the process execution as much as possible and by protecting the critical code and data.
Brute forcing
This type of attack targets the authentication based on a shared secret. A secure device may require a session
authentication before accessing services (in the cloud for example) and a human machine interface (HMI) can be
exploited with an automatic process in order to try successive passwords exhaustively.
Interesting countermeasures are listed below:
• Limit the number of login trials with a monotonic counter (implemented with a timer, or if possible, with a
backup domain).
• Increase the delay between consecutive login attempts.
• Add a challenge-response mechanism to break automatic trials.
General-purpose microcontrollers are not the best candidates to counter the most advanced physical attacks. If the highest protection level is required, consider pairing the general-purpose microcontroller with a secure element: a dedicated microcontroller with specific hardware, certified as per the latest security standards.
Refer to the ST secure microcontrollers web page.
Countermeasures:
• Software:
– Check function return values.
– Use strict comparisons when branching.
– Make sure that no code was skipped in critical parts, for example by incrementing a dedicated variable by a prime number in each branch and checking for the expected final value.
– Use non-trivial values for true and false (avoid comparing to 0 or -1; prefer complex values with a high mutual Hamming distance).
• Hardware:
– Use the clock security system (CSS) if available.
– Use internal clock sources.
– Use internal voltage regulators.
– Use memory error detection (ECC and parity).
Reverse engineering
The goal is to understand the inner structure of the device and analyze its functionality. This is quite a challenging
task with modern devices featuring millions of gates.
The first step is to create a map of the microcontroller. It can be done by using an optical microscope to produce a
high-resolution photograph of the device surface. Deeper layers can then be analyzed in a second step, after the
metal layers have been stripped off by etching the device.
Device modification
More sophisticated tools can be used to perform attacks. FIB (focused ion beam) workstations, for example, simplify the manual probing of deep metal and polysilicon lines. They can also be used to modify the device structure by cutting existing interconnection lines or creating new ones, and even new transistors.
Figure. Typical IoT device: an STM32 microcontroller with connectivity, sensors, and actuators.
Initial provisioning
The cryptographic data forming the root of trust for the chain of security must be injected into the SoC in a controlled, trusted way. Whether it is a key, a certificate, or an initial hash value, it must remain immutable and/or secret. Once programmed inside the device, the data protection mechanism must be enabled and only authorized processes must have access to it.
• Risks: firmware corruption or usurpation
• Countermeasures:
– trusted manufacturer environment
– use of secure data provisioning services (SFI)
– data protection mechanisms
– secure application isolation
– use of OTP memory
Boot modification
The purpose of this attack is to use the bootloader to access the device content. The attack aims at modifying the boot mode and/or the boot address to preempt the user application and to take control of the CPU through the bootloader (via USB DFU, I2C, or SPI), the debug port, or a firmware injected in RAM. The boot mode and the boot address are controlled by the device configuration and/or input pins and must be protected.
• Risks: full access of the microcontroller content
• Countermeasures:
– unique boot entry
– bootloader and debug disabled (see Section 6.2: Readout protection (RDP))
Firmware update
The firmware update procedure allows a product owner to propose corrected versions of the firmware to ensure the best user experience during the device lifetime. However, a firmware update gives an attacker an opportunity to enter the device with their own firmware or with a corrupted version of the existing firmware.
The process must be secured with firmware authentication and integrity verification. A successful attack requires
an access to the cryptographic procedure and keys (refer to the Initial provisioning section at the beginning of this
chapter).
• Risk: device firmware corruption
• Countermeasure: SFU application with authentication and integrity checks. Confidentiality can also be
added by encrypting the firmware in addition to signature.
Communication interfaces
Serial interfaces (such as SPI, I2C, or USART) are used either by the bootloader or by applications to exchange data and/or commands with the device. The interception of a communication allows an attacker to use the interface as a device entry point. The firmware protocol can also be prone to bugs (like overflows).
• Risk: Access to device content
• Countermeasures:
– Make physical bus hard to reach on board.
– Isolate software communication stacks to prevent them from accessing critical data and operations.
– Use cryptography for data exchange.
– Disable interface ports when not needed.
– Check inputs carefully.
Debug port
The debug port provides access to the full content of the device: core and peripherals registers, flash memory and
SRAM content. Used for application development, it may be tempting to keep it alive for investigating future bugs.
This is the first breach tried by an attacker with physical access to the device.
• Risk: full access to the device
• Countermeasure: Disable device debug capabilities (see Section 6.2: Readout protection (RDP)).
SRAM
The SRAM is the device working memory. It holds runtime buffers and variables (such as the stack or the heap) and can also hold firmware and keys. While secrets may be stored encrypted in the non-volatile memory, once loaded into the SRAM they must be present in plain form to be used. At the same time, the SRAM usually holds communication buffers. For these two reasons, an attacker may be tempted to focus their efforts on the SRAM. At least three types of attack can be mounted against this memory: code (malware) injection, memory corruption through buffer overflow, and retrieval of secrets through temporarily stored variables.
• Risks: buffer overflow, data theft or device control
• Countermeasures:
– firewall
– memory protection unit
– Secure area
Communication stack
Connectivity protocols (such as Bluetooth, Ethernet, Wi-Fi, or LoRa) have complex communication firmware stacks. These stacks, often available in open source, must not always be considered trusted. A potential weakness can be massively exploited.
• Risk: device access (content, control) through network
• Countermeasures:
– communication process isolation
– server authentication
– secure firmware update to patch bugs
Communication eavesdropping
Data exchanges between a device and an IoT service can be eavesdropped, either directly by a compatible RF device or through the network. A hacker may seek to retrieve data, obtain device IDs, or access services. Cryptography can be adopted by all communication protocols. Several encryption steps are often combined to protect the communication between all the different layers (device, gateway, applications).
• Risk: observation and spoofing of network traffic
• Countermeasure: use of a cryptographic version of the communication stack (such as TLS)
4 Device protections
Security protections described in this section are controlled by hardware mechanisms. They are set either
by configuring the device through option bytes, or dynamically by hardware component settings:
• Memory protection: main security feature, used to protect code and data from internal (software) and
external attacks
• Software isolation: inter-processes protection to avoid internal attacks
• Interface protection: used to protect device entry points like serial or debug ports
• System monitoring: detects device external tampering attempts or abnormal behaviors
Secure modes are orthogonal to the existing Thread and Handler modes. Thus, there can be a Thread or Handler mode in each secure mode (see the figure below).
Figure. Secure and nonsecure Thread and Handler modes.
In a typical firmware architecture running on Armv8-M TrustZone®, the nonsecure domain executes the application
and the OS tasks, while the secure domain executes the secure application and the system root-of-trust
mechanisms.
Figure. System security architecture: CPU1 (nonsecure) and CPU2 (secure), each behind its MPU, and the DMA access the flash interface, the SRAM, and the AHB/APB bridge through the AHB bus, under control of the security controller.
Figure. Memory architecture: bus masters (such as CPU or DMA) access the internal flash user memory (bank 1 and bank 2, with OTP area) and the SRAM; external NOR/NAND flash and SDRAM are reached through the FMC, and Octo-SPI or Quad-SPI flash through the Octo-SPI/Quad-SPI interface with OTFDEC.
The table below summarizes the particularities of each type of memories and typical protection features.
System memory | Internal NVM (ROM) | ROM part of the flash memory that embeds the device bootloader and other ST services. | Cannot be updated (erased/written). A part may also be unreadable.
External attacks
The embedded flash memory is easy to protect against external attacks, unlike external flash memories. Disabling the debug port access with the RDP and controlling the access of the connectivity interfaces provide sufficient isolation from the outside.
Associated protection: RDP to disable debug access
Internal attacks
An internal read or write access to the memory can come from a malware injected either in the device SRAM or
inside an untrusted library, so that the critical code and data must only be accessible by authorized processes.
Associated protections: PCROP, MPU, firewall, secure hide protection, or TrustZone
Code execution
The part of the firmware that requires higher performance can be downloaded from the user or the external flash memory, and executed from the SRAM. Another reason to execute code from the SRAM is when using encrypted external flash memory on devices without on-the-fly decryption: the code is decrypted into the SRAM before its execution. Appropriate memory protections must then be enabled on the SRAM address range containing the code. When no code must be executed in the SRAM, it is advised to prevent any malware execution by setting the appropriate attribute (execute never) with the MPU.
Associated protections: MPU or firewall
SRAM cleaning
The SRAM can contain sensitive data or temporary values that allow some secrets to be retrieved. A typical example is the transfer of a secret cryptographic key, in clear text, from a protected flash memory area into the SRAM. It is highly recommended to explicitly clean the working buffers and variables immediately after the processing of functions manipulating sensitive data.
Note: In case of reset, the STM32 MCUs allow the automatic erase of the SRAM (refer to the reference manual). For
some devices, part of the SRAM is protected against external access or untrusted boot (SRAM boot) when the
RDP is set.
Write protection
The write protection can be used to prevent part of the memory from being corrupted by another process or by an overflow attack. An overflow attack consists in writing more data than the targeted buffer size (during a data transfer through interface ports, for example). If no boundary checks are performed, the memory addresses above the buffer are corrupted, and a malware can be injected this way. This protection is mainly useful for SRAM regions used for code execution (it is not practical for data).
The SRAM write protection is available for SRAM2 region on some STM32 MCUs only
(refer to Section 6.1: Overview of security features and to the reference manual).
Associated protections: MPU, TrustZone, or SRAM write protection (available on some STM32 devices only)
1. Support of this feature in Armv6-M products is limited as constant data cannot be stored in NVM.
2. Write protection can be unset when RDP level ≠ 2.
3. The SRAM is protected by a secure area only at secure code execution. It must be cleaned before leaving the secure area.
1. The attribute protection applies only to CPU accesses and is not taken into account for other bus masters (such as the DMA).
2. Reading the CPUID indicates which CPU is currently executing code. An example can be found in the
HAL_GetCurrentCPUID function.
Other serial interfaces can also be used. If the bootloader is available, the device content can be accessed through I2C, SPI, USART, or USB-DFU. If the interface is open at runtime, the application transfer protocol must limit its access capabilities (such as operation mode or address access range).
Associated STM32 features:
• read protection (RDP)
• disable of unused ports
• bootloader access forbidden (configured by RDP in STM32 devices)
5 Secure applications
In order to create a secure system, the hardware features must be used in a secure firmware architecture
implementation. An industry standard solution is the PSA, proposed by Arm for the IoT ecosystem. The
STMicroelectronics proprietary solution is Secure boot (SB) and Secure firmware update (SFU). It is possible to
use Secure firmware installation (SFI) to securely provision blank devices in manufacturing.
This section defines the root and chain of trust concept before presenting the following typical secure
applications implementing the features listed below:
• Secure boot
• Secure firmware update
• Secure storage
• Cryptographic services
These applications have a close link with cryptography. All cryptographic schemes are based on the three
concepts of secret key, public key, and hashing. Basics of cryptography are explained in Appendix A.
Cryptography - Main concepts.
Note: • The document [9] provides an implementation example of SB and SFU (www.st.com/en/product/
x‑cube‑sbsfu).
• The user manual 'Getting started with STM32CubeL5 TF-M application' (UM2671) describes an example
of TF-M implementation with the STM32L5 Series MCU.
• The user manual 'Getting started with STM32CubeU5 TF-M application' (UM2851) describes an example
of TF-M implementation with the STM32U5 Series MCU.
SB main functionalities:
• Check the STM32 security configuration and set up runtime protections.
• Assert the integrity and authenticity of the user application images that are executed (see the figure below).
Figure. Secure boot flow: after reset, the secure boot sets the security and peripheral configuration (MPU, firewall, or IWDG), verifies the user application, and only then jumps to it.
Protection attributes
The SB firmware must have the following attributes to fulfill its role:
• It must be the device-unique entry point (no bypass).
• Its code must be immutable.
• It must have access to sensitive data (such as certificates or application signatures).
The most sensitive SB parts benefit from process and data isolation features, like the firewall, the MPU, or the secure hide protection. The implementation depends on the features available on the STM32 device.
Architecture
An SFU transfer involves two entities: the firmware owner (OEM) and the device to be updated (see the figure
below). As the communication channel is generally considered nonsecure, since it is subject to eavesdropping, the overall security responsibility is shared between the sender (the firmware owner server) and the receiver (the device).
Figure. Secure firmware update: the OEM server sends the new firmware to the application running on the device.
On the OEM side, a secure server is maintained that is responsible for sending the encrypted (if confidentiality is required) and signed firmware to an authenticated device.
The SFU application running on the device is in charge of the following:
• authentication and integrity checking of the loaded image before installing it
• decrypting the new firmware if confidentiality is required
• checking the new firmware version (anti-rollback mechanism)
5.3.3 Configurations
The ST proprietary SBSFU is highly configurable. The most important configuration option is the choice between single and dual image handling of the application code; each comes with a separate example. A single image leaves more space for the application code, while two or more images add some advanced features to the image handling.
The second most important option is the cryptographic scheme selection. There are usually the following choices:
• ECDSA asymmetric cryptography for firmware verification with AES-CBC or AES-CTR symmetric
cryptography for firmware encryption
• ECDSA asymmetric cryptography for firmware verification without firmware encryption
• X509 certificate-based ECDSA asymmetric cryptography for firmware verification without firmware
encryption
• AES-GCM symmetric cryptography for both firmware verification and encryption
For more details, see the document [9] or the document Integration guide for the X‑CUBE‑SBSFU STM32Cube
Expansion Package (AN5056).
Both alternatives are based on TF-M and MCUboot. While SBSFU intends to replicate familiar features of X-CUBE-SBSFU while retaining most of the flash memory space for user code, TF-M offers more functionality, some of which can be dropped to gain memory space. For the STM32H57x line, the Secure manager, a closed-source implementation of TF-M, offers a convenient and fast way to adopt certified secure solutions.
The certifications and evaluations related to STM32 microcontrollers include, but are not limited to:
• PSA certified (platform security architecture), governed by Arm, focused on IoT security, MCU certification,
three levels of assessment
– STM32L4 devices are certifiable up to Level 1.
– STM32L5 devices with TF-M are certifiable up to Level 2.
– STM32U5 and STM32H5 devices with TF-M are certifiable up to Level 3.
– To achieve Arm PSA certifiable security level, refer to the user manual STM32U585 security
guidance for PSA Certified™ Level 3 with SESIP Profile (UM2852).
• SESIP (security evaluation standard for IoT platforms), international methodology adopted by several major
security evaluation labs, five levels
– Systems using SBSFU or TF-M are compliant to Level 3 with STM32L4, STM32L4+, STM32L5,
STM32H5, and STM32U5 devices.
• PCI (payment card industry), important security standard focusing on point of sale (POS) applications
– Good record of successful evaluations of systems using, for example, STM32L4 devices
Note: FIPS (Federal Information Processing Standards) is a set of standards published by NIST, some of which (FIPS 140, SP 800) are related to security or cryptography.
This section presents all the STM32 features that can be used to meet the different security concepts presented
in previous sections, and to achieve a high level of security.
Table 10. Security features for STM32C0, STM32F0, STM32F1, STM32F2, STM32F3, STM32F4, STM32G0, STM32G4 devices
Cortex core | Cortex-M0+ | Cortex-M0 | Cortex-M3 | Cortex-M3 | Cortex-M4 | Cortex-M4 | Cortex-M0+ | Cortex-M4
RDP additional protection | depending on the device: bad OBL recovery; backup registers only; 2-level RDP; backup registers; backup SRAM; backup registers and CCM SRAM
SRAM WRP | No, except CCM SRAM with 1-Kbyte granularity on STM32G4
HDP | No, except Yes (securable memory area) on STM32G0 and STM32G4
Firewall | No
Internal tamper detection | No, except Yes on STM32G0 and STM32G4
Hardware crypto(4) | No on most devices; AES and HASH, or AES only, on some devices
TF-M | No
KMS | No
Table 11. Security features for STM32L0/1/4/4+, STM32WB, STM32WBA, STM32WB0x, STM32WL devices
Cortex core | Cortex-M0+ | Cortex-M3 | Cortex-M4 | Cortex-M4/Cortex-M0+ | Cortex-M33 | Cortex-M0+ | Cortex-M4/Cortex-M0+
RDP additional protection | depending on the device: EEPROM; backup registers, SRAM2; RDP four levels, backup registers, SRAM2; backup registers, SRAM; No
Flash WRP | depending on the device: by sectors (4 Kbytes); by area with 2-Kbyte granularity, one area per bank; by area with 4-Kbyte granularity, two areas available; two areas defined by page range; by area with 2-Kbyte granularity, two areas available
SRAM WRP | No, except SRAM2 with 1-Kbyte granularity on some devices
HDP | No on most devices; Yes on some, including one where it is dedicated to the Cortex-M0+ firmware only
MPU | Yes (on the Cortex-M4 for dual-core devices)
OTP | No on some devices; 1 Kbyte on others
UBE(1) | No on most devices; Yes (boot lock feature) or Yes (lockable secure and nonsecure address) on some
Internal tamper detection | No on most devices; Yes on some
TF-M | No, except Yes on the STM32WBA series
Table 12. Security features for STM32L5, STM32U0, STM32U5, STM32H503/5, STM32H7R/S, STM32H72x/73x/74x/75x, STM32H7Ax/7Bx, STM32F7 devices
Cortex core | Cortex-M33 (STM32L5, STM32U5, STM32H5) | Cortex-M0+ (STM32U0) | Cortex-M7 (STM32H7, STM32F7)
RDP additional protection | depending on the device: RDP four levels, backup registers, SRAM2; product state instead of RDP; product state instead of RDP, backup registers; backup SRAM, backup registers, SRAM2; backup SRAM, backup registers, SRAM3; backup SRAM, backup registers, OTFDEC
Flash WRP | depending on the device: two areas per bank defined by page range; up to four protected areas with 2-Kbyte or 4-Kbyte granularity; by sectors (8 Kbytes); by sectors (128 Kbytes); by group (16 K, 64 K, 128 K, or 256 Kbytes)
SRAM WRP | SRAM2, with 1-Kbyte granularity, on some devices; No on the others
PCROP | No (replaced by TrustZone) on TrustZone devices; by area with 256-byte granularity, one area per bank, or by sectors, on the others
HDP | depending on the device: up to two secure hide areas (HDP) inside the TrustZone secure domain (with a second-stage extension on some devices); 3-stage temporal isolation, one per bank; Yes (secure user memory, with 256-byte granularity); No
Firewall | No (replaced by TrustZone on TrustZone devices)
MPU | Yes
UBE(1) | Yes on most devices (boot lock feature, or unique entry point in secure access); No on some
Internal tamper detection | Yes
Hardware crypto(2) | depending on the device: AES, HASH, PKA; AES; AES, HASH, OTFDEC, PKA; AES, HASH; AES, DES, HASH, OTFDEC
RNG | SP 800-90A(3) or SP 800-90B, depending on the device
KMS | No, except Yes on one device family
Figure. Debug port access to the flash memory, the SRAM, and the backup registers/SRAM for RDP levels 0, 1, and 2.
The RDP level 2 is mandatory to implement an application with higher security level (such as immutable code).
The drawback is that the RDP level 2 can prevent a device examination, for instance after a customer return.
The RDP level 0.5 is used to debug a nonsecure application, while protecting contents within secure area
boundaries from debug access. Refer to section 'Development recommendations using TrustZone®' of the
application note Arm® TrustZone® features on STM32L5 and STM32U5 series (AN5347) for more information
about this protection.
Note: The RDP is available on all STM32 devices, unless superseded by the lifecycle management product state (see Section 6.3).
6.6 TrustZone®
This section describes the main features of the TrustZone® architecture. For further information, refer to the
application note Arm® TrustZone® features on STM32L5 and STM32U5 series (AN5347), and to the device
reference manual.
The Armv8-M TrustZone® architecture defines two domains at system level: secure and nonsecure. The full
memory-map space is split into secure and nonsecure areas. This includes all memory types (flash memory,
SRAM, and external memories), as well as all peripherals that can be shared (with specific context for each
domain) or dedicated to one domain or the other.
At system level, the isolation between secure and nonsecure domains relies on the following hardware mechanisms (see Figure 9):
• a specific core architecture (Armv8-M Cortex-M33) with a dual execution domain for secure and nonsecure code, and an implementation defined attribution unit (IDAU) to assert the security status of each address range
• a secure attribution unit (SAU) used to refine the settings of the IDAU
• a bus infrastructure that propagates the secure and privilege attributes of any transaction (AHB5)
• dedicated hardware blocks managing the split between the two domains (GTZC to define the security attributes of internal SRAMs, external FSMC/OCTOSPI memories, and peripherals)
Figure 9. TrustZone® implementation at system level: the Armv8-M Cortex-M33 core (AHB master with SAU/MPU) connects through the AHB5 bus infrastructure to the SRAM memory and the TrustZone-specific implementation blocks.
Note: A security attribute cannot be modified to be less secure (security order: secure > NSC > nonsecure) than the
default attribute set by hardware through the IDAU (implementation defined attribution unit).
Refer to implementation details of each device in the reference manual.
Address aliasing
The security attribute is set depending on the fixed resource address. However, a memory‑mapped resource can
be set either as secure or nonsecure, depending on the application. To overcome this apparent contradiction,
two addresses are assigned to each memory-mapped resource: one used when the resource must be accessed
in secure mode, one used in nonsecure mode. This mechanism is called address aliasing.
Address aliasing also allows all peripheral accesses to be grouped into only two regions instead of multiple
scattered regions. Finally, the IDAU splits the memory-mapped resources into the following regions:
• peripherals secure/nonsecure regions
• flash memory secure/nonsecure regions
• SRAM secure/nonsecure regions
Refer to device reference manual for the detailed configuration.
The PCROP is a static protection set by option bytes. The number of protected areas and their granularity
depend on the STM32 device (see Section 6.1.2: Security features by STM32 devices). When the PCROP is in
use, care must be taken to compile the firmware with the execute-only attribute (refer to the compiler user options).
Refer to the document [3] for more details.
The Armv6-M instruction set, in particular, is difficult to use under this constraint. The compiler must be set not
to place constants and literal pools within the program code, otherwise even a legitimate code execution can
trigger the protection.
When to use the PCROP
The PCROP is used to protect third-party firmware (intellectual property), as well as the most sensitive parts of
the user firmware.
Note: The PCROP is available on all STM32 devices listed in Table 1, except on TrustZone-enabled devices, where it
is superseded by another protection mechanism.
Figure 10. HDP protected firmware access (reset, flash memory).
The HDP is a static protection configured by option bytes. Once set, the CPU boots on the firmware embedded
in this area, independently of the boot configuration set by boot pin or boot address. The HDP can be single
stage, or use a monotonic counter to gradually cover more memory as the secure boot progresses (STM32H5).
Some devices implement a dynamic HDP expansion: the HDP area can be extended by a number of sectors
programmed in a register, without modifying the option bytes (STM32H5, STM32U0).
When to use the HDP
The HDP is suited for code that must only be executed after reset, such as a secure boot implementing the root
of trust. It is best used together with the boot lock feature.
Note: The HDP is available in STM32H7, STM32G0, STM32G4, STM32L5, STM32U0, STM32U5, and STM32H5
devices, with slight differences in its implementation and name (refer to the reference manuals for details).
6.10 Firewall
The firewall is a hardware protection peripheral that controls bus transactions and filters accesses to three
particular areas: a code area (flash memory), a volatile data area (SRAM), and a nonvolatile data area (flash
memory). The protected code is accessible through a single entry point (the call-gate mechanism explained
below). Any attempt to execute a function of the code section without passing through this entry point
generates a system reset.
The firewall is part of the dynamic protections. It must be set at startup (for example by an SB application).
Figure 11. Firewall FSM (states IDLE, CLOSED, and OPEN; transitions on firewall enable, call gate entry, instruction fetch out of the protected area, and reset).
Since the only way to respect the call gate sequence is to pass through its single entry point, a mechanism must
be provided to support an application calling multiple firewall-protected functions from the unprotected code area
(such as encrypt and decrypt functions). A parameter can be used to specify which function to execute (such as
CallGate(F1_ID) or CallGate(F2_ID)). According to this parameter, the right function is called internally.
This mechanism is represented in the figure below.
Figure 12. Firewall application example: unprotected code (unprotected_code.c) passes a function ID to the call gate single entry point, which dispatches to f1(), f2(), or f3() inside the firewall code section.
The table below shows the different cases supported by mixing modes and access attributes.
1. XN attribute is set by region, and is valid for both modes. It can be used to avoid SRAM code injection for example.
The code executed in privileged mode can access additional specific instructions (such as MRS), and can also
access Arm® core peripheral registers (such as the NVIC, DWT, or SCB). This is useful for OS kernels or pieces
of secure code requiring access to sensitive resources that are otherwise inaccessible to unprivileged firmware.
An OS kernel can manipulate MPU attributes dynamically to grant access to specific resources depending on
the currently running task. Access right can be updated each time the OS switches from one task to another.
When to use the MPU
The MPU is used at runtime to isolate sensitive code, and/or to manage access to resources depending on
the process currently executed by the device. This feature is especially useful for advanced embedded operating
systems that incorporate security in their design.
Note: The MPU is available on all STM32 devices except the STM32F0 (see the various programming manuals for
more details).
Figure 13. Dual-core architecture with CKS service: the wireless stack on CPU2 transfers keys (key 0 to key n) from the CKS into the AES hardware secure key register; the user application on CPU1 communicates through the IPCC and mailbox and only accesses the AES data register.
6.18 Device ID
Each STM32 device has a unique 96-bit identifier providing an individual reference for any device in any context.
These bits can never be altered by the user.
The unique device identifier can be used for direct device authentication, or for instance to derive a unique key
from a master OEM key.
6.19 Cryptography
As described in Section 5, cryptography is essential to secure an embedded system. Cryptography enables
confidentiality, integrity, and authentication of data or code. To support these functions efficiently, most STM32
series include products with hardware cryptography peripherals. These peripherals accelerate cryptographic
computations (such as hashing or symmetric algorithms). For devices with no such hardware acceleration, the
STM32 cryptographic firmware library (CryptoLib) provides a software implementation of a large set of
cryptographic algorithms.
The cryptographic features of each product can be identified from its root part number, specifically from the
second digit or letter after the series identifier. For example, the STM32L486 and STM32H7B3 devices embed
crypto hardware, while the STM32L476 and STM32H7A3 must use the software library to implement
cryptography.
Figure 14. Typical OTFDEC configuration: code is fetched from an external SPI NOR flash memory through the OCTOSPI, decrypted on the fly by the OTFDEC, and fed to the instruction and data/system caches within the device boundary.
The OTFDEC uses the AES-128 CTR mode, with a 128-bit key to achieve a latency below 12 system bus cycles.
Up to four independent and nonoverlapping encrypted regions can be defined (4-Kbyte granularity),
each with its own key.
When to use the OTFDEC
The OTFDEC is used when an external memory is used by the system. For TrustZone® capable MCUs, the
decryption keys can only be made accessible through the secure mode. See the application note How to use
OTFDEC for encryption/decryption in trusted environment on STM32H73/H7B MCUs (AN5281) for more details.
Note: The OTFDEC is available on STM32H5, STM32H7, STM32L5, and STM32U5 devices only.
7 Guidelines
Secure systems can take advantage of many security supporting hardware feature. Some are useful for any
system, and need little change in the application code to be activated and fully functional. It is the case of the
RDP feature, that prevents basic access to the flash memory by disabling the debug port. Other features must be
selected depending on user application and the required security level.
This section helps defining the adapted set of security features, depending on the system use-cases. The use-
cases are gathered in four main groups: protection against external (1) and internal (2) threats, security
maintenance (3), and other use-cases related to cryptography (4) (see the table below).
1. Device protection against external threats: RDP protection, tamper detection, device monitoring
1.1 Device configuration (option bytes, not supposed to be modified ever)
• Use RDP level 2. This closes the device from any external access.
1.2 Remove debug capability for the device.
• Use RDP level 2 for permanently disabling the debug.
1.3 Protect a device against a loss of external clock source (crystal).
• Enable clock security system (CSS).
1.4 Detect a system-level intrusion.
• Use tamper detection capability of the RTC.
1.5 Protect a device from code injection.
• Use the RDP.
• Isolate communication port protocol with the MPU, firewall, or HDP.
• Limit communication port protocol access range.
• Use write protection on empty memory areas (flash memory and SRAM).
2. Code protection against internal threats: TrustZone, PCROP, MPU, firewall, and HDP
2.1 Protect the code against cloning.
• Use RDP level 1 or 2 against external access.
• Use PCROP on most sensitive parts of the code against internal read access.
• Use OTFDEC to secure code stored in the external memory.
2.2 Protect secret data from other processes.
• Use firewall to protect both code and data.
• Use MPU to protect secret data area from being read.
• Use HDP in case data must only be used at reset.
• Use secure domain of TrustZone, if available.
2.3 Protect code and data when not fully verified or trusted libraries are used.
• Use PCROP to protect user most sensitive code.
• Use firewall to protect user sensitive application (code, data and execution).
• Use MPU and de-privilege the untrusted library.
• Use IWDG to avoid any deadlock.
• Use secure domain of TrustZone, if available.
3. Device security check and maintenance: integrity checks, SB, SFU
3.1 Check code integrity.
• Hash firmware code at reset and compare to expected value.
• Enable ECC on the flash memory and parity check on the SRAM.
3.2 Security checks or embedded firmware authentication
• Implement SB application with cryptography.
• Protect SB application secret data (refer to previous sections).
8 Conclusion
No system can be made secure by simply enabling security features in the hardware. Security must be rooted
in the architecture of the complete solution.
The threats must be identified, the countermeasures correctly designed and implemented in synergy with other
security features.
As security demands considerable resources, it is important to correctly evaluate the risks, and spend
the resources efficiently, keeping in mind the cost of attack and the value of the protected asset.
The concept of root of trust is pivotal because it applies security through a hierarchical and centralized approach,
as opposed to ad hoc measures.
With STM32 microcontrollers, embedded and IoT security can be made both cost-effective and robust.
Figure 15. Symmetric cryptography: John Doe1 and John Doe2 share the same secret key.
The inherent weakness of these algorithms is the key sharing between both parties. It may not be an issue in
secure environments (such as manufacturing plants), but when both parties are distant, the key transfer becomes
a challenge.
Among secret key algorithms, block-based algorithms are very common since they can be efficiently
accelerated by parallel hardware or software implementations. The AES (advanced encryption standard)
algorithm operates on clear blocks of 128 bits. It produces ciphered blocks of the same length using keys
of 128, 192, or 256 bits. The different ways to chain consecutive blocks are called modes of operation. They
include cipher block chaining (CBC), counter mode (CTR), and Galois counter mode (GCM).
Since these algorithms are deterministic, the input data is always mixed with a random value, known as a nonce,
used only for one session as an initialization vector.
Figure 16. Signature (private key A used between John Doe2 and John Doe3).
• A message encrypted by the public key can only be read by the private key owner.
Figure 17. PKA encryption (private key A used between John Doe2 and John Doe3).
The main use of public key algorithms is authentication. They are also used to resolve the key-sharing issue of
symmetric cryptography. However, this comes at the cost of more complex operations, longer computation
times, and a bigger memory footprint.
RSA and elliptic curve cryptography (ECC) are the most common asymmetric algorithms.
Hybrid cryptography
Common secure transfer protocols (such as Bluetooth and TLS) rely on both algorithm types. This scheme is
known as hybrid cryptography:
• Asymmetric cryptography is used first, in order to solve the symmetric key-sharing problem: a session key
is sent by the public key holder to the private key owner.
• Transfer confidentiality is then provided by a symmetric algorithm using the session key.
Figure 18. Message hashing: John Doe1 hashes the message and sends both; John Doe2 recomputes the hash and compares the digests.
Hash algorithms differ from classic CRCs in their robustness, due to more complex operations and a much
longer digest: up to 512 bits instead of 16 or 32 bits. CRCs are reserved for fast integrity checks during data
transfers. The digest length makes hash values virtually unique and collisions practically impossible to find.
Typical algorithms are MD5 (128-bit digest), SHA-1 (160-bit digest), SHA-2 (224-, 256-, 384-, or 512-bit
digest), and SHA-3 (224-, 256-, 384-, or 512-bit digest).
Figure 19. MAC generation with secret key algorithm: both parties hash the message together with the shared secret key and compare the resulting MACs.
Figure 20. Signature generation with public key algorithm: the message digest is signed and the signature verified on the receiving side.
Certificate
A certificate is related to public key algorithms: it authenticates the public key used in an asymmetric transfer. It
counteracts usurpation by an attacker that substitutes the legitimate public key with his own. A certificate
consists of the public key signed with the private key of a certificate authority (CA), which is considered fully
trusted. In addition to the public key, the certificate also contains version numbers, a validity period, and some IDs.
Revision history
Table 17. Document revision history
06-Nov-2020, revision 5:
Updated:
• Section 3.3.2 Silicon invasive attacks
• Section 4.1 TrustZone® for Armv8-M architecture
• Table 5. Memory types and associated protection
• Section 5.3 Arm TF-M solution
• Table 8. Basic feature differences
• Section 6.1 Security features overview including updates in all the tables
• Section 6.2 Readout protection (RDP)
• Section 6.4 TrustZone
Added:
• Section 4.2 Dual-core security
• Section 6.3 One-time programmable (OTP)
07-Jul-2021, revision 6:
Updated:
• Document's scope to add STM32U5 Series
• Table 1. Applicable products
• Section 3.3.1 Non-invasive attacks
• Section 4.3.3 Embedded SRAM
• Section 4.3.4 External Flash memories
• Section 5 Secure applications
• Table 9. Security features for STM32Fx Series
• Table 10. Security features for STM32Lx and STM32U5 Series
• Table 11. Security features for STM32H7, STM32G0, STM32G4, STM32WB and STM32WL Series
• Section 6.3 One-time programmable (OTP)
• Section 6.6 Execute-only firmware (PCROP)
• Section 6.8 Firewall
• Section 6.9 Memory protection unit (MPU)
• Section 6.17 Cryptography
• Section 6.17.1 Hardware accelerators
• Section 6.17.2 CryptoLib software library
Added:
• Section 5.4 Product certifications
13-Jan-2023, revision 7:
Updated:
• Document scope to add STM32C0 and STM32H5 Series
• Section 1 General information
• Debug port access and SCA in Section 3.3.1 Non-invasive attacks
• Random number generation and Communication eavesdrop in Section 3.5 List of attack targets
• New Section 4.1 Configuration protection
• Introduction of Section 5.2 ST proprietary SBSFU solution
• New Section 5.2.3 Configurations
• Section 5.3 Arm TF-M solution
• Section 6.1 Overview of security features
• Last note in Section 6.2 Readout protection (RDP)
• New Section 6.3 Lifecycle management – product state
• Section 6.7 Execute-only firmware (PCROP)
22-Mar-2023, revision 8:
Updated:
• Section 1 General information
• Section 4.1 Configuration protection
• Section 4.2 TrustZone® for Armv8-M architecture
• Table 6. Scope of STM32 embedded memory protection features
• Table 7. Software isolation mechanism
• Section 5.4 Arm TF-M solution
• Section 5.5 Product certifications
• Table 9. Security features for STM32C0, STM32F0/1/2/3/4, STM32G0/4 devices
• Section 6.2 Readout protection (RDP)
• Section 6.5 TrustZone®
• Section 6.7 Execute-only firmware (PCROP)
• Section 6.12 Antitamper (TAMP)/backup registers (BKP)
• Section 6.18 Cryptography
Contents
1 General information
2 Overview
2.1 Security purpose
3 Attack types
3.1 Introduction to attack types
3.2 Software attacks
3.3 Hardware attacks
3.3.1 Non-invasive attacks
3.3.2 Silicon invasive attacks
3.4 IoT system attack examples
3.5 List of attack targets
4 Device protections
4.1 Configuration protection
4.2 TrustZone® for Armv8-M architecture
4.3 Dual-core architecture
4.4 Memory protections
4.4.1 System flash memory
4.4.2 User flash memory
4.4.3 Embedded SRAM
4.4.4 External flash memories
4.4.5 STM32 memory protections
4.5 Software isolation
4.6 Debug port and other interface protection
4.7 Boot protection
4.8 System monitoring
5 Secure applications
5.1 Secure firmware install (SFI)
5.2 Root and chain of trust
5.3 STMicroelectronics proprietary SBSFU solution
5.3.1 Secure boot (SB)
5.3.2 Secure firmware update (SFU)
5.3.3 Configurations
5.4 Arm TF-M solution
5.5 Secure manager
List of tables
Table 1. Applicable products
Table 2. Glossary
Table 3. Assets to be protected
Table 4. Attack types and costs
Table 5. Memory types and associated protection
Table 6. Scope of STM32 embedded memory protection features
Table 7. Software isolation mechanism
Table 8. Basic feature differences of TrustZone-based secure software
Table 9. Certifications coverage
Table 10. Security features for STM32C0, STM32F0/1/2/3/4, STM32G0/4 devices
Table 11. Security features for STM32L0/1/4/4+, STM32WB, STM32WBA, STM32WB0x, STM32WL devices
Table 12. Security features for STM32L5, STM32U0, STM32U5, STM32H503/5, STM32H7R/S, STM32H72x/73/74x/75, STM32H7Ax/7Bx, STM32F7 devices
Table 13. RDP protections
Table 14. Attributes and access permission managed by MPU
Table 15. Process isolation
Table 16. Security use cases
Table 17. Document revision history
List of figures
Figure 1. Corrupted connected device threat
Figure 2. IoT system
Figure 3. Armv8-M TrustZone® execution modes
Figure 4. Simplified diagram of dual-core system architecture
Figure 5. Memory types
Figure 6. Secure boot FSM
Figure 7. Secure server/device SFU architecture
Figure 8. Example of RDP protections (STM32L4 series)
Figure 9. TrustZone® implementation at system level
Figure 10. HDP protected firmware access
Figure 11. Firewall FSM
Figure 12. Firewall application example
Figure 13. Dual-core architecture with CKS service
Figure 14. Typical OTFDEC configuration
Figure 15. Symmetric cryptography
Figure 16. Signature
Figure 17. PKA encryption
Figure 18. Message hashing
Figure 19. MAC generation with secret key algorithm
Figure 20. Signature generation with public key algorithm